2024-02-15
21:59 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad plugin upgrade - ryankemper@cumin2002 - T356651 [production]
21:58 <vriley@cumin1002> START - Cookbook sre.dns.netbox [production]
21:57 <brennen@deploy2002> brennen: Continuing with sync [production]
21:56 <ryankemper@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade - ryankemper@cumin2002 - T356651 [production]
21:54 <brennen@deploy2002> brennen: Backport for [[gerrit:1003832|Filter out null external link attributes (T357668)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
21:53 <brennen@deploy2002> Started scap: Backport for [[gerrit:1003832|Filter out null external link attributes (T357668)]] [production]
21:52 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: cloudelastic1005* for IP migration - bking@cumin2002 - T355617 [production]
21:52 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: cloudelastic1005* for IP migration - bking@cumin2002 - T355617 [production]
21:51 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in cloudelastic [production]
21:51 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in cloudelastic [production]
21:28 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudelastic1006.eqiad.wmnet with OS bullseye [production]
21:28 <bking@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
21:26 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "b_e"} and A:aqs and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
21:20 <brennen@deploy2002> rebuilt and synchronized wikiversions files: group2 wikis to 1.42.0-wmf.18 refs T354436 [production]
21:20 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "a_c"} and A:aqs and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:47 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "a_c"} and A:aqs and A:codfw: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:46 <brennen@deploy2002> Synchronized php: group1 wikis to 1.42.0-wmf.18 refs T354436 (duration: 08m 05s) [production]
20:41 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "rack3"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:38 <brennen@deploy2002> rebuilt and synchronized wikiversions files: group1 wikis to 1.42.0-wmf.18 refs T354436 [production]
20:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2109 (T352010)', diff saved to https://phabricator.wikimedia.org/P56870 and previous config saved to /var/cache/conftool/dbconfig/20240215-202036-ladsgroup.json [production]
20:20 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
20:20 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
20:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105 (T352010)', diff saved to https://phabricator.wikimedia.org/P56869 and previous config saved to /var/cache/conftool/dbconfig/20240215-202014-ladsgroup.json [production]
20:08 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "rack3"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:06 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching P{P:cassandra%rack = "rack2"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
20:05 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105', diff saved to https://phabricator.wikimedia.org/P56868 and previous config saved to /var/cache/conftool/dbconfig/20240215-200507-ladsgroup.json [production]
20:00 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 100%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56867 and previous config saved to /var/cache/conftool/dbconfig/20240215-200015-arnaudb.json [production]
19:58 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
19:50 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105', diff saved to https://phabricator.wikimedia.org/P56866 and previous config saved to /var/cache/conftool/dbconfig/20240215-195001-ladsgroup.json [production]
19:48 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade - ryankemper@cumin2002 - T356651 [production]
19:45 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 75%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56865 and previous config saved to /var/cache/conftool/dbconfig/20240215-194510-arnaudb.json [production]
19:43 <apergos> manually generating checksums in parallel for wikidata full history dumps run, in screen session, owned by ariel, on snapshot1009 [production]
19:42 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudelastic1006.eqiad.wmnet with reason: host reimage [production]
19:39 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudelastic1006.eqiad.wmnet with reason: host reimage [production]
19:34 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2105 (T352010)', diff saved to https://phabricator.wikimedia.org/P56864 and previous config saved to /var/cache/conftool/dbconfig/20240215-193455-ladsgroup.json [production]
19:31 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching P{P:cassandra%rack = "rack2"} and A:aqs and A:eqiad: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
19:30 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 50%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56863 and previous config saved to /var/cache/conftool/dbconfig/20240215-193005-arnaudb.json [production]
19:24 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host cloudelastic1006.eqiad.wmnet with OS bullseye [production]
19:22 <brennen@deploy2002> rebuilt and synchronized wikiversions files: group2 wikis to 1.42.0-wmf.18 refs T354436 [production]
19:15 <arnaudb@cumin1002> dbctl commit (dc=all): 'es2024 (re)pooling @ 25%: T355866 - Post migration repool of es2024', diff saved to https://phabricator.wikimedia.org/P56862 and previous config saved to /var/cache/conftool/dbconfig/20240215-191500-arnaudb.json [production]
19:14 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2122 (re)pooling @ 100%: T355866 - Post migration repool of db2122', diff saved to https://phabricator.wikimedia.org/P56861 and previous config saved to /var/cache/conftool/dbconfig/20240215-191454-arnaudb.json [production]
19:12 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1247 (T352010)', diff saved to https://phabricator.wikimedia.org/P56860 and previous config saved to /var/cache/conftool/dbconfig/20240215-191226-ladsgroup.json [production]
19:12 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1247.eqiad.wmnet with reason: Maintenance [production]
19:12 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1247.eqiad.wmnet with reason: Maintenance [production]
19:12 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1246:3314 (T352010)', diff saved to https://phabricator.wikimedia.org/P56859 and previous config saved to /var/cache/conftool/dbconfig/20240215-191203-ladsgroup.json [production]
19:11 <bking@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudelastic1006.eqiad.wmnet with OS bullseye [production]
19:04 <brennen> train 1.42.0-wmf.18 (T354436): no current blockers, rolling to all wikis. [production]
18:59 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2122 (re)pooling @ 75%: T355866 - Post migration repool of db2122', diff saved to https://phabricator.wikimedia.org/P56858 and previous config saved to /var/cache/conftool/dbconfig/20240215-185949-arnaudb.json [production]
18:56 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1246:3314', diff saved to https://phabricator.wikimedia.org/P56857 and previous config saved to /var/cache/conftool/dbconfig/20240215-185657-ladsgroup.json [production]
18:50 <fnegri@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudservices1006.eqiad.wmnet [production]