2024-04-18
13:17 <arnaudb@cumin1002> START - Cookbook sre.dns.netbox [production]
13:14 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-host for host es2024.codfw.wmnet [production]
13:13 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2129 (T361627)', diff saved to https://phabricator.wikimedia.org/P60927 and previous config saved to /var/cache/conftool/dbconfig/20240418-131311-marostegui.json [production]
13:13 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2129.codfw.wmnet with reason: Maintenance [production]
13:12 <arnaudb@cumin1002> START - Cookbook sre.hosts.decommission for hosts db2103.codfw.wmnet [production]
13:12 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2129.codfw.wmnet with reason: Maintenance [production]
13:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2124 (T361627)', diff saved to https://phabricator.wikimedia.org/P60926 and previous config saved to /var/cache/conftool/dbconfig/20240418-131248-marostegui.json [production]
13:10 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2103 depool', diff saved to https://phabricator.wikimedia.org/P60925 and previous config saved to /var/cache/conftool/dbconfig/20240418-131027-arnaudb.json [production]
13:07 <elukey@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching aqs20[02-12]*: Deploy new TLS Keystore - PKI - elukey@cumin1002 [production]
13:06 <elukey> aqs2001's Cassandra instances moved to PKI TLS certs [production]
13:01 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db2105.codfw.wmnet [production]
13:01 <arnaudb@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
13:01 <arnaudb@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2105.codfw.wmnet decommissioned, removing all IPs except the asset tag one - arnaudb@cumin1002" [production]
13:00 <arnaudb@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2105.codfw.wmnet decommissioned, removing all IPs except the asset tag one - arnaudb@cumin1002" [production]
12:58 <arnaudb@cumin1002> START - Cookbook sre.dns.netbox [production]
12:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2124', diff saved to https://phabricator.wikimedia.org/P60923 and previous config saved to /var/cache/conftool/dbconfig/20240418-125739-marostegui.json [production]
12:55 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host es2023.codfw.wmnet [production]
12:54 <arnaudb@cumin1002> START - Cookbook sre.hosts.decommission for hosts db2105.codfw.wmnet [production]
12:54 <sukhe> sudo cumin -b1 -s600 "A:dnsbox" "systemctl restart ntp.service" to pick up magru /24: T346722 [production]
12:53 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2105 depool', diff saved to https://phabricator.wikimedia.org/P60922 and previous config saved to /var/cache/conftool/dbconfig/20240418-125338-arnaudb.json [production]
12:49 <elukey> move aqs codfw cassandra instances to PKI TLS certs - T352647 [production]
12:45 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db2106.codfw.wmnet [production]
12:45 <arnaudb@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
12:45 <arnaudb@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2106.codfw.wmnet decommissioned, removing all IPs except the asset tag one - arnaudb@cumin1002" [production]
12:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2124 (T361627)', diff saved to https://phabricator.wikimedia.org/P60919 and previous config saved to /var/cache/conftool/dbconfig/20240418-122721-marostegui.json [production]
12:26 <arnaudb@cumin1002> START - Cookbook sre.dns.netbox [production]
12:22 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2124 (T361627)', diff saved to https://phabricator.wikimedia.org/P60918 and previous config saved to /var/cache/conftool/dbconfig/20240418-122227-marostegui.json [production]
12:22 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2124.codfw.wmnet with reason: Maintenance [production]
12:22 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2124.codfw.wmnet with reason: Maintenance [production]
12:21 <arnaudb@cumin1002> START - Cookbook sre.hosts.decommission for hosts db2107.codfw.wmnet [production]
12:21 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2107 depool', diff saved to https://phabricator.wikimedia.org/P60917 and previous config saved to /var/cache/conftool/dbconfig/20240418-122122-arnaudb.json [production]
12:18 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2097.codfw.wmnet with reason: Maintenance [production]
12:18 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2097.codfw.wmnet with reason: Maintenance [production]
12:16 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on dbstore1009.eqiad.wmnet with reason: Maintenance [production]
12:16 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on dbstore1009.eqiad.wmnet with reason: Maintenance [production]
12:15 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1231 (T361627)', diff saved to https://phabricator.wikimedia.org/P60916 and previous config saved to /var/cache/conftool/dbconfig/20240418-121559-marostegui.json [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host crm2001.codfw.wmnet [production]
12:14 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host crm2001.codfw.wmnet [production]
12:14 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host matomo1003.eqiad.wmnet [production]
12:13 <isaranto@deploy1002> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'revscoring-editquality-damaging' for release 'main' . [production]
12:10 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host matomo1003.eqiad.wmnet [production]
12:08 <vgutierrez> depool ncredir2001 [production]
12:06 <eoghan> Switching phab1004 to use cfssl issued ssl cert https://gerrit.wikimedia.org/r/c/operations/puppet/+/1020190 [production]
12:02 <moritzm> installing PHP 8.2 security updates [production]
12:00 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1231', diff saved to https://phabricator.wikimedia.org/P60915 and previous config saved to /var/cache/conftool/dbconfig/20240418-120051-marostegui.json [production]
12:00 <isaranto@deploy1002> helmfile [ml-serve-codfw] Ran 'sync' command on namespace 'revscoring-editquality-damaging' for release 'main' . [production]
11:56 <isaranto@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-editquality-damaging' for release 'main' . [production]
11:54 <moritzm> upgrading PHP security updates on eqiad baremetal servers T362511 [production]
11:52 <cgoubert@cumin1002> conftool action : set/weight=10:pooled=yes; selector: name=(mw2302.codfw.wmnet|mw2303.codfw.wmnet|mw2304.codfw.wmnet|mw2332.codfw.wmnet|mw2333.codfw.wmnet|mw2334.codfw.wmnet),cluster=kubernetes,service=kubesvc [production]
11:52 <claime> Pooling and uncordoning mw2302.codfw.wmnet,mw2303.codfw.wmnet,mw2304.codfw.wmnet,mw2332.codfw.wmnet,mw2333.codfw.wmnet,mw2334.codfw.wmnet - T351074 [production]