2024-02-06
18:36 <oblivian@deploy2002> oblivian: Continuing with sync [production]
18:32 <oblivian@deploy2002> oblivian: Backport for [[gerrit:997873|Do not add env variables when they're empty (T356780)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
18:30 <oblivian@deploy2002> Started scap: Backport for [[gerrit:997873|Do not add env variables when they're empty (T356780)]] [production]
18:27 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on 7 hosts with reason: T355860 [production]
18:27 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on 7 hosts with reason: T355860 [production]
18:22 <eevans@cumin1002> START - Cookbook sre.hosts.decommission for hosts restbase[2013-2020].codfw.wmnet [production]
18:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1189', diff saved to https://phabricator.wikimedia.org/P56362 and previous config saved to /var/cache/conftool/dbconfig/20240206-182148-marostegui.json [production]
18:20 <kamila_> wikikube codfw: uncordon new nodes [production]
18:13 <kamila_> wikikube codfw: belated homer commit of new nodes [production]
18:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1189 (T355609)', diff saved to https://phabricator.wikimedia.org/P56360 and previous config saved to /var/cache/conftool/dbconfig/20240206-180641-marostegui.json [production]
17:59 <kamila_> wikikube codfw: drain newly added nodes [production]
17:58 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1189 (T355609)', diff saved to https://phabricator.wikimedia.org/P56359 and previous config saved to /var/cache/conftool/dbconfig/20240206-175822-marostegui.json [production]
17:58 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1189.eqiad.wmnet with reason: Maintenance [production]
17:58 <claime> uncordoning kubernetes2010 [production]
17:58 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1189.eqiad.wmnet with reason: Maintenance [production]
17:58 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1175 (T355609)', diff saved to https://phabricator.wikimedia.org/P56358 and previous config saved to /var/cache/conftool/dbconfig/20240206-175800-marostegui.json [production]
17:56 <kamila_> wikikube: cordon nodes added earlier today in codfw [production]
17:51 <cgoubert@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kubernetes2010.codfw.wmnet [production]
17:47 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host sretest2003.codfw.wmnet with OS bullseye [production]
17:43 <cgoubert@cumin2002> START - Cookbook sre.hosts.reboot-single for host kubernetes2010.codfw.wmnet [production]
17:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1175', diff saved to https://phabricator.wikimedia.org/P56357 and previous config saved to /var/cache/conftool/dbconfig/20240206-174253-marostegui.json [production]
17:37 <claime> rebooting kubernetes2010.codfw.wmnet [production]
17:36 <eevans@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts sessionstore[1001-1003].eqiad.wmnet [production]
17:36 <eevans@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
17:36 <eevans@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: sessionstore[1001-1003].eqiad.wmnet decommissioned, removing all IPs except the asset tag one - eevans@cumin1002" [production]
17:35 <eevans@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: sessionstore[1001-1003].eqiad.wmnet decommissioned, removing all IPs except the asset tag one - eevans@cumin1002" [production]
17:33 <eevans@cumin1002> START - Cookbook sre.dns.netbox [production]
17:27 <cgoubert@cumin2002> conftool action : set/pooled=yes; selector: name=mw.*,dc=eqiad,cluster=kubernetes,service=kubesvc [production]
17:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1175', diff saved to https://phabricator.wikimedia.org/P56356 and previous config saved to /var/cache/conftool/dbconfig/20240206-172747-marostegui.json [production]
17:27 <cgoubert@cumin2002> conftool action : set/weight=10; selector: name=mw.*,dc=eqiad,cluster=kubernetes,service=kubesvc [production]
17:26 <oblivian@puppetmaster1001> conftool action : set/pooled=yes; selector: dc=codfw,service=kubesvc,name=mw.* [production]
17:25 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: dc=codfw,service=kubesvc,name=mw.* [production]
17:22 <eevans@cumin1002> START - Cookbook sre.hosts.decommission for hosts sessionstore[1001-1003].eqiad.wmnet [production]
17:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1175 (T355609)', diff saved to https://phabricator.wikimedia.org/P56355 and previous config saved to /var/cache/conftool/dbconfig/20240206-171240-marostegui.json [production]
17:11 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
17:04 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1175 (T355609)', diff saved to https://phabricator.wikimedia.org/P56354 and previous config saved to /var/cache/conftool/dbconfig/20240206-170431-marostegui.json [production]
17:04 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1175.eqiad.wmnet with reason: Maintenance [production]
17:04 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1175.eqiad.wmnet with reason: Maintenance [production]
17:04 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T355609)', diff saved to https://phabricator.wikimedia.org/P56353 and previous config saved to /var/cache/conftool/dbconfig/20240206-170408-marostegui.json [production]
16:54 <herron@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-logging-eqiad [production]
16:54 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudelastic1009.eqiad.wmnet with reason: host reimage [production]
16:51 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudelastic1009.eqiad.wmnet with reason: host reimage [production]
16:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P56352 and previous config saved to /var/cache/conftool/dbconfig/20240206-164902-marostegui.json [production]
16:38 <arnaudb@cumin1002> END (FAIL) - Cookbook sre.mysql.clone (exit_code=99) Will create a clone of db2169.codfw.wmnet onto db2194.codfw.wmnet [production]
16:35 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host cloudelastic1009.eqiad.wmnet with OS bullseye [production]
16:35 <claime> Roll-restarting mw-api-ext deployment in codfw [production]
16:34 <bking@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) cloudelastic1009.mgmt.eqiad.wmnet on all recursors [production]
16:34 <bking@cumin2002> START - Cookbook sre.dns.wipe-cache cloudelastic1009.mgmt.eqiad.wmnet on all recursors [production]
16:33 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P56349 and previous config saved to /var/cache/conftool/dbconfig/20240206-163355-marostegui.json [production]
16:30 <sukhe@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp2034.codfw.wmnet,service=(cdn|ats-be) [production]