2024-02-15
20:46 <brennen@deploy2002> Synchronized php: group1 wikis to 1.42.0-wmf.18 refs T354436 (duration: 08m 05s) [production]
20:41 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "rack3"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
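The roll-restart above selects its targets with Cumin's query grammar: a PuppetDB match on the Cassandra rack parameter intersected with the aqs and eqiad host aliases. A minimal sketch of checking which hosts that expression resolves to, from a cluster management host, with a harmless uptime standing in for a real action:

    # resolve the same node set the cookbook used and run a trivial command on it
    sudo cumin 'P{P:cassandra%rack = "rack3"} and A:aqs and A:eqiad' 'uptime'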
20:38 <brennen@deploy2002> rebuilt and synchronized wikiversions files: group1 wikis to 1.42.0-wmf.18 refs T354436 [production]
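The 20:46 and 20:38 brennen@deploy2002 entries are the deployment-train steps for group1: scap rebuilds and syncs the wikiversions files, and the "Synchronized php" line is scap's own SAL message for the code sync. A rough sketch of the wikiversions step from the deploy host, assuming scap's sync-wikiversions subcommand takes the log message as its argument:

    # move group1 to the new branch and record the reason in SAL
    scap sync-wikiversions 'group1 wikis to 1.42.0-wmf.18 refs T354436'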
20:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2109 (T352010)', diff saved to https://phabricator.wikimedia.org/P56870 and previous config saved to /var/cache/conftool/dbconfig/20240215-202036-ladsgroup.json [production]
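The 'Depooling db2109' commit is the usual two-step dbctl flow on a cumin host: stage the instance change, then commit the staged config to etcd for all datacenters, which produces the diff paste and the cached previous config noted in the entry. A sketch, assuming the dbctl instance/config subcommands are used directly:

    # stage the depool, then publish it
    sudo dbctl instance db2109 depool
    sudo dbctl config commit -m 'Depooling db2109 (T352010)'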
20:20 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
20:20 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
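The START/END pair above is the sre.hosts.downtime cookbook setting a one-day monitoring downtime on db2109 ahead of maintenance. A sketch of an equivalent invocation; the duration and reason flag spellings are assumptions and may differ between cookbook versions:

    # downtime the host for one day before touching it
    sudo cookbook sre.hosts.downtime --days 1 --reason 'Maintenance' 'db2109.codfw.wmnet'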
20:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105 (T352010)', diff saved to https://phabricator.wikimedia.org/P56869 and previous config saved to /var/cache/conftool/dbconfig/20240215-202014-ladsgroup.json [production]
20:08 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "rack3"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:06 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "rack2"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:05 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105', diff saved to https://phabricator.wikimedia.org/P56868 and previous config saved to /var/cache/conftool/dbconfig/20240215-200507-ladsgroup.json [production]
20:00 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 100%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56867 and previous config saved to /var/cache/conftool/dbconfig/20240215-200015-arnaudb.json [production]
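The es2024 "(re)pooling @ 25/50/75/100%" series records a stepwise return to full weight after maintenance (T355866); the production entries come from an automated repool helper rather than hand-typed commands. Each step amounts to staging a percentage pool and committing it; a hand-run equivalent might look like the following, assuming dbctl's pool subcommand accepts a -p percentage:

    # bring the instance back to full pooled weight and publish the change
    sudo dbctl instance es2024 pool -p 100
    sudo dbctl config commit -m 'es2024 (re)pooling @ 100%: T355866 - Post migration repool of es2024'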
19:58 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
19:50 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105', diff saved to https://phabricator.wikimedia.org/P56866 and previous config saved to /var/cache/conftool/dbconfig/20240215-195001-ladsgroup.json [production]
19:48 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade - ryankemper@cumin2002 - T356651 [production]
19:45 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 75%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56865 and previous config saved to /var/cache/conftool/dbconfig/20240215-194510-arnaudb.json [production]
19:43 <apergos> manually generating checksums in parallel for wikidata full history dumps run, in screen session, owned by ariel, on snapshot1009 [production]
19:42 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudelastic1006.eqiad.wmnet with reason: host reimage [production]
19:39 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudelastic1006.eqiad.wmnet with reason: host reimage [production]
19:34 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105 (T352010)', diff saved to https://phabricator.wikimedia.org/P56864 and previous config saved to /var/cache/conftool/dbconfig/20240215-193455-ladsgroup.json [production]
19:31 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "rack2"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
19:30 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 50%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56863 and previous config saved to /var/cache/conftool/dbconfig/20240215-193005-arnaudb.json [production]
19:24 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host cloudelastic1006.eqiad.wmnet with OS bullseye [production]
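The 19:24 entry starts a fresh sre.hosts.reimage run for cloudelastic1006 on Debian bullseye; the host downtime at 19:39/19:42 and the sync-netbox-hiera run at 19:58 are steps that cookbook drives itself. A sketch of the invocation, assuming the --os flag and a short hostname argument (a Phabricator task id is typically also passed):

    # reinstall the host with the requested Debian release
    sudo cookbook sre.hosts.reimage --os bullseye cloudelastic1006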
19:22 <brennen@deploy2002> rebuilt and synchronized wikiversions files: group2 wikis to 1.42.0-wmf.18 refs T354436 [production]
19:15 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 25%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56862 and previous config saved to /var/cache/conftool/dbconfig/20240215-191500-arnaudb.json [production]
19:14 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2122 (re)pooling @ 100%: T355866 - Post migration repool of db2122', diff saved to https://phabricator.wikimedia.org/P56861 and previous config saved to /var/cache/conftool/dbconfig/20240215-191454-arnaudb.json [production]
19:12 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1247 (T352010)', diff saved to https://phabricator.wikimedia.org/P56860 and previous config saved to /var/cache/conftool/dbconfig/20240215-191226-ladsgroup.json [production]
19:12 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1247.eqiad.wmnet with reason: Maintenance [production]
19:12 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1247.eqiad.wmnet with reason: Maintenance [production]
19:12 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1246:3314 (T352010)', diff saved to https://phabricator.wikimedia.org/P56859 and previous config saved to /var/cache/conftool/dbconfig/20240215-191203-ladsgroup.json [production]
19:11 <bking@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudelastic1006.eqiad.wmnet with OS bullseye [production]
19:04 <brennen> train 1.42.0-wmf.18 (T354436): no current blockers, rolling to all wikis. [production]
18:59 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2122 (re)pooling @ 75%: T355866 - Post migration repool of db2122', diff saved to https://phabricator.wikimedia.org/P56858 and previous config saved to /var/cache/conftool/dbconfig/20240215-185949-arnaudb.json [production]
18:56 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1246:3314', diff saved to https://phabricator.wikimedia.org/P56857 and previous config saved to /var/cache/conftool/dbconfig/20240215-185657-ladsgroup.json [production]
18:50 <fnegri@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudservices1006.eqiad.wmnet [production]
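cloudservices1006 was rebooted via sre.hosts.reboot-single, which reboots the host and waits for it to come back up before reporting PASS. A sketch of the invocation; the -r reason flag is an assumption about the cookbook's arguments:

    # single-host controlled reboot
    sudo cookbook sre.hosts.reboot-single -r 'maintenance reboot' cloudservices1006.eqiad.wmnet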
18:44 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2122 (re)pooling @ 50%: T355866 - Post migration repool of db2122', diff saved to https://phabricator.wikimedia.org/P56856 and previous config saved to /var/cache/conftool/dbconfig/20240215-184444-arnaudb.json [production]
18:42 <fnegri@cumin1002> START - Cookbook sre.hosts.reboot-single for host cloudservices1006.eqiad.wmnet [production]
18:41 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1246:3314', diff saved to https://phabricator.wikimedia.org/P56855 and previous config saved to /var/cache/conftool/dbconfig/20240215-184150-ladsgroup.json [production]
18:29 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2122 (re)pooling @ 25%: T355866 - Post migration repool of db2122', diff saved to https://phabricator.wikimedia.org/P56853 and previous config saved to /var/cache/conftool/dbconfig/20240215-182939-arnaudb.json [production]
18:29 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2105 (re)pooling @ 100%: T355866 - Post migration repool of db2105', diff saved to https://phabricator.wikimedia.org/P56852 and previous config saved to /var/cache/conftool/dbconfig/20240215-182934-arnaudb.json [production]
18:26 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1246:3314 (T352010)', diff saved to https://phabricator.wikimedia.org/P56850 and previous config saved to /var/cache/conftool/dbconfig/20240215-182644-ladsgroup.json [production]
18:23 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host cloudelastic1006.eqiad.wmnet with OS bullseye [production]
18:23 <bking@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cloudelastic1006 [production]
18:21 <bd808@deploy2002> helmfile [eqiad] DONE helmfile.d/services/toolhub: apply [production]
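The toolhub START/DONE lines between 18:17 and 18:21 are the standard Kubernetes service deploy from the deployment host: helmfile is run once per datacenter (codfw, then eqiad here) against the service's directory under helmfile.d, and the wrapper emits these SAL lines. A sketch, assuming the usual /srv/deployment-charts checkout path:

    # deploy the toolhub chart to one environment; repeat for the other datacenter
    cd /srv/deployment-charts/helmfile.d/services/toolhub
    helmfile -e eqiad apply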
18:21 <bking@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cloudelastic1006 [production]
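sre.network.configure-switch-interfaces pushes the Netbox-defined switch port configuration for a host, here as part of moving cloudelastic1006 to private IPs ahead of the reimage. A sketch, assuming the cookbook takes only the short hostname:

    # reconfigure the switch ports for this host from Netbox data
    sudo cookbook sre.network.configure-switch-interfaces cloudelastic1006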
18:21 <bking@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:20 <bking@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: migrate cloudelastic1006 to private IPs - bking@cumin2002" [production]
18:20 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: migrate cloudelastic1006 to private IPs - bking@cumin2002" [production]
18:18 <bd808@deploy2002> helmfile [eqiad] START helmfile.d/services/toolhub: apply [production]
18:18 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]
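sre.dns.netbox regenerates the authoritative DNS records from Netbox data and, as the 18:20 entries show, triggers sre.puppet.sync-netbox-hiera once the records are committed. A sketch of the 18:18 invocation, assuming the cookbook takes a free-form change message (the one quoted in the triggered sync):

    # rebuild DNS from Netbox and record why
    sudo cookbook sre.dns.netbox 'migrate cloudelastic1006 to private IPs'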
18:17 <bd808@deploy2002> helmfile [codfw] DONE helmfile.d/services/toolhub: apply [production]