2024-02-06
17:27 <cgoubert@cumin2002> conftool action : set/weight=10; selector: name=mw.*,dc=eqiad,cluster=kubernetes,service=kubesvc [production]
17:26 <oblivian@puppetmaster1001> conftool action : set/pooled=yes; selector: dc=codfw,service=kubesvc,name=mw.* [production]
17:25 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: dc=codfw,service=kubesvc,name=mw.* [production]
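Note: "conftool action" lines like the three above are the SAL messages that confctl writes when objects are repooled or reweighted. A minimal sketch of the 17:25/17:26 actions, assuming the usual confctl invocation on a conftool-enabled host (sudo/wrapper details may differ):

    # give the mw.* kubesvc backends in codfw weight 10, then pool them (sketch)
    sudo confctl select 'dc=codfw,service=kubesvc,name=mw.*' set/weight=10
    sudo confctl select 'dc=codfw,service=kubesvc,name=mw.*' set/pooled=yes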
17:22 <eevans@cumin1002> START - Cookbook sre.hosts.decommission for hosts sessionstore[1001-1003].eqiad.wmnet [production]
17:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1175 (T355609)', diff saved to https://phabricator.wikimedia.org/P56355 and previous config saved to /var/cache/conftool/dbconfig/20240206-171240-marostegui.json [production]
17:11 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
17:04 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1175 (T355609)', diff saved to https://phabricator.wikimedia.org/P56354 and previous config saved to /var/cache/conftool/dbconfig/20240206-170431-marostegui.json [production]
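Note: the "dbctl commit" entries are emitted by dbctl when a config change is committed. A rough sketch of the depool/repool cycle behind the 17:04 and 17:12 entries for db1175 (real repools are often done in gradual weight steps, omitted here):

    # take db1175 out of rotation and commit (sketch)
    sudo dbctl instance db1175 depool
    sudo dbctl config commit -m "Depooling db1175 (T355609)"
    # ... maintenance ...
    # put it back and commit again
    sudo dbctl instance db1175 pool
    sudo dbctl config commit -m "Repooling after maintenance db1175 (T355609)"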
17:04 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1175.eqiad.wmnet with reason: Maintenance [production]
17:04 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1175.eqiad.wmnet with reason: Maintenance [production]
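Note: START/END pairs like these are logged automatically by Spicerack cookbook runs from the cumin hosts. The 6-hour downtime above would be set with roughly the following invocation (option names are an assumption from memory and may not match the current cookbook exactly):

    # set a 6h downtime on db1175 before maintenance (sketch; flags assumed)
    sudo cookbook sre.hosts.downtime --hours 6 -r "Maintenance" 'db1175.eqiad.wmnet'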
17:04 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T355609)', diff saved to https://phabricator.wikimedia.org/P56353 and previous config saved to /var/cache/conftool/dbconfig/20240206-170408-marostegui.json [production]
16:54 <herron@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-logging-eqiad [production]
16:54 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudelastic1009.eqiad.wmnet with reason: host reimage [production]
16:51 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudelastic1009.eqiad.wmnet with reason: host reimage [production]
16:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P56352 and previous config saved to /var/cache/conftool/dbconfig/20240206-164902-marostegui.json [production]
16:38 <arnaudb@cumin1002> END (FAIL) - Cookbook sre.mysql.clone (exit_code=99) Will create a clone of db2169.codfw.wmnet onto db2194.codfw.wmnet [production]
16:35 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host cloudelastic1009.eqiad.wmnet with OS bullseye [production]
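Note: the 16:35 reimage is also a cookbook run; a rough sketch of the invocation (the option names and short-hostname argument are assumptions and may differ from the current sre.hosts.reimage cookbook):

    # reinstall cloudelastic1009 with Debian bullseye (sketch; options assumed)
    sudo cookbook sre.hosts.reimage --os bullseye cloudelastic1009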
16:35 <claime> Roll-restarting mw-api-ext deployment in codfw [production]
16:34 <bking@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) cloudelastic1009.mgmt.eqiad.wmnet on all recursors [production]
16:34 <bking@cumin2002> START - Cookbook sre.dns.wipe-cache cloudelastic1009.mgmt.eqiad.wmnet on all recursors [production]
16:33 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P56349 and previous config saved to /var/cache/conftool/dbconfig/20240206-163355-marostegui.json [production]
16:30 <sukhe@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp2034.codfw.wmnet,service=(cdn|ats-be) [production]
16:30 <sukhe@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp2033.codfw.wmnet,service=(cdn|ats-be) [production]
16:29 <bking@cumin2002> conftool action : set/pooled=yes; selector: name=wdqs2016.codfw.wmnet [production]
16:29 <herron@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-logging-eqiad [production]
16:29 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for cp[2033-2034].codfw.wmnet [production]
16:29 <sukhe@cumin2002> START - Cookbook sre.hosts.remove-downtime for cp[2033-2034].codfw.wmnet [production]
16:26 <Daimona> T353459 Running mwscript CampaignEvents:GenerateInvitationList --wiki=metawiki --listfile=/home/daimona/list.txt [production]
16:26 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2087*,elastic2037*,elastic2038*,elastic2055*,elastic2088*,elastic2073*,elastic2074* for switch maintenance - bking@cumin2002 - T355860 [production]
16:26 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2087*,elastic2037*,elastic2038*,elastic2055*,elastic2088*,elastic2073*,elastic2074* for switch maintenance - bking@cumin2002 - T355860 [production]
16:26 <isaranto@deploy2002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
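Note: the 16:26 helmfile line is the message logged after a helmfile-based service deploy from deploy2002. A minimal sketch, assuming the service's helmfile directory under the deployment-charts checkout (the path below is illustrative, not confirmed by this log):

    # sync the 'main' release in ml-staging-codfw (sketch; directory assumed)
    cd /srv/deployment-charts/helmfile.d/ml-services/experimental
    helmfile -e ml-staging-codfw sync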
16:18 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T355609)', diff saved to https://phabricator.wikimedia.org/P56348 and previous config saved to /var/cache/conftool/dbconfig/20240206-161849-marostegui.json [production]
16:18 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in search_codfw [production]
16:18 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in search_codfw [production]
16:15 <bking@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:13 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]
16:10 <topranks> Hosts migrated and basic connectivity ok codfw rack B4 T355860 [production]
16:10 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1166 (T355609)', diff saved to https://phabricator.wikimedia.org/P56347 and previous config saved to /var/cache/conftool/dbconfig/20240206-161043-marostegui.json [production]
16:10 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1166.eqiad.wmnet with reason: Maintenance [production]
16:10 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1166.eqiad.wmnet with reason: Maintenance [production]
16:08 <bking@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cloudelastic1009 [production]
16:07 <bking@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cloudelastic1009 [production]
16:05 <bking@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:05 <bking@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: migrate cloudelastic1009 to private IPs - bking@cumin2002" [production]
16:05 <topranks> Commencing server uplink moves from old switch to new in codfw rack B4 T355860 [production]
16:04 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: migrate cloudelastic1009 to private IPs - bking@cumin2002" [production]
16:03 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on 23 hosts with reason: Migrate servers in codfw rack B4 from asw-b4-codfw to lsw1-b4-codfw [production]
16:02 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 0:30:00 on 23 hosts with reason: Migrate servers in codfw rack B4 from asw-b4-codfw to lsw1-b4-codfw [production]
16:01 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]
16:01 <jgiannelos@deploy2002> Finished deploy [restbase/deploy@05fa5c9]: Disabling storage for ptwiki (duration: 17m 39s) [production]
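Note: the 16:01 "Finished deploy [restbase/deploy@05fa5c9]" message is the format produced by a scap3 deployment. A minimal sketch of that deploy from the service's checkout on deploy2002 (the checkout path is an assumption):

    # deploy the current restbase/deploy revision with scap3 (sketch; path assumed)
    cd /srv/deployment/restbase/deploy
    scap deploy "Disabling storage for ptwiki"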
16:00 <topranks> configuring lsw1-b4-codfw with port config for new hosts T355860 [production]