2024-02-16
12:46 <hnowlan@cumin1002> START - Cookbook sre.hosts.reimage for host mw1349.eqiad.wmnet with OS bullseye [production]
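    (For context, a reimage like the one above is started from a cumin host with something like the line below; the exact flags, and whether the short or fully qualified hostname is passed, are assumptions rather than part of this log.
        sudo cookbook sre.hosts.reimage --os bullseye mw1349)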
12:14 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2149 (T352010)', diff saved to https://phabricator.wikimedia.org/P56892 and previous config saved to /var/cache/conftool/dbconfig/20240216-121416-ladsgroup.json [production]
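    (A depool entry like the one above typically corresponds to two dbctl calls, sketched here assuming current dbctl syntax; dbctl itself then saves the diff to Phabricator and the previous config under /var/cache/conftool/dbconfig, as logged.
        dbctl instance db2149 depool
        dbctl config commit -m 'Depooling db2149 (T352010)')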
12:14 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2149.codfw.wmnet with reason: Maintenance [production]
12:13 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2149.codfw.wmnet with reason: Maintenance [production]
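    (The START/END pair above is the downtime cookbook scheduling a one-day monitoring silence ahead of the depool; a minimal sketch of the invocation, with the option names being assumptions:
        sudo cookbook sre.hosts.downtime --days 1 --reason 'Maintenance' db2149.codfw.wmnet)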
10:58 <moritzm> update bullseye/bookworm netboot images on the Puppet 7 volatile environment to the latest point releases (to bring in sync with volatile for Puppet 5) T341056 [production]
10:50 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on dbstore1007.eqiad.wmnet with reason: Maintenance [production]
10:50 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on dbstore1007.eqiad.wmnet with reason: Maintenance [production]
10:50 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1249 (T352010)', diff saved to https://phabricator.wikimedia.org/P56891 and previous config saved to /var/cache/conftool/dbconfig/20240216-105041-ladsgroup.json [production]
10:44 <volans@cumin1002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts sretest1001.eqiad.wmnet [production]
10:44 <volans@cumin1002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts sretest1001.eqiad.wmnet [production]
10:43 <volans@cumin1002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts sretest1001.eqiad.wmnet [production]
10:42 <volans@cumin1002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts sretest1001.eqiad.wmnet [production]
10:41 <hnowlan@cumin2002> conftool action : set/pooled=yes; selector: name=mw2379.codfw.wmnet [production]
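    (The conftool action above maps to a confctl invocation of this shape, pooling mw2379 back into service:
        sudo confctl select name=mw2379.codfw.wmnet set/pooled=yes)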
10:35 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1249', diff saved to https://phabricator.wikimedia.org/P56890 and previous config saved to /var/cache/conftool/dbconfig/20240216-103535-ladsgroup.json [production]
10:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1249', diff saved to https://phabricator.wikimedia.org/P56889 and previous config saved to /var/cache/conftool/dbconfig/20240216-102028-ladsgroup.json [production]
10:05 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1249 (T352010)', diff saved to https://phabricator.wikimedia.org/P56888 and previous config saved to /var/cache/conftool/dbconfig/20240216-100521-ladsgroup.json [production]
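    (The four 'Repooling after maintenance db1249' commits between 10:05 and 10:50 are one gradual repool: traffic is restored in steps rather than all at once. A sketch of a single step, assuming dbctl's percentage flag:
        dbctl instance db1249 pool -p 25
        dbctl config commit -m 'Repooling after maintenance db1249 (T352010)')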
10:03 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on db2194.codfw.wmnet with reason: Silence for WE [production]
10:03 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on db2194.codfw.wmnet with reason: Silence for WE [production]
09:07 <jclark@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host restbase1036.eqiad.wmnet with OS bullseye [production]
09:07 <jclark@cumin1002> END (FAIL) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=99) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jclark@cumin1002" [production]
09:06 <jclark@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jclark@cumin1002" [production]
08:38 <jclark@cumin1002> START - Cookbook sre.hosts.reimage for host an-redacteddb1001.eqiad.wmnet with OS bullseye [production]
08:07 <jclark@cumin1002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['an-redacteddb1001'] [production]
08:07 <jclark@cumin1002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['an-redacteddb1001'] [production]
06:04 <apergos> manually generating 7z files in parallel for wikidata full history dumps run, in screen session, owned by ariel, on snapshot1009 [production]
05:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1249 (T352010)', diff saved to https://phabricator.wikimedia.org/P56887 and previous config saved to /var/cache/conftool/dbconfig/20240216-052044-ladsgroup.json [production]
05:20 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1249.eqiad.wmnet with reason: Maintenance [production]
05:20 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1249.eqiad.wmnet with reason: Maintenance [production]
05:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1248 (T352010)', diff saved to https://phabricator.wikimedia.org/P56886 and previous config saved to /var/cache/conftool/dbconfig/20240216-052021-ladsgroup.json [production]
05:05 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2139.codfw.wmnet with reason: Maintenance [production]
05:05 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1248', diff saved to https://phabricator.wikimedia.org/P56885 and previous config saved to /var/cache/conftool/dbconfig/20240216-050514-ladsgroup.json [production]
05:05 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2139.codfw.wmnet with reason: Maintenance [production]
05:04 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2109 (T352010)', diff saved to https://phabricator.wikimedia.org/P56884 and previous config saved to /var/cache/conftool/dbconfig/20240216-050458-ladsgroup.json [production]
04:50 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1248', diff saved to https://phabricator.wikimedia.org/P56883 and previous config saved to /var/cache/conftool/dbconfig/20240216-045008-ladsgroup.json [production]
04:49 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2109', diff saved to https://phabricator.wikimedia.org/P56882 and previous config saved to /var/cache/conftool/dbconfig/20240216-044952-ladsgroup.json [production]
04:35 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1248 (T352010)', diff saved to https://phabricator.wikimedia.org/P56881 and previous config saved to /var/cache/conftool/dbconfig/20240216-043501-ladsgroup.json [production]
04:34 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2109', diff saved to https://phabricator.wikimedia.org/P56880 and previous config saved to /var/cache/conftool/dbconfig/20240216-043445-ladsgroup.json [production]
04:19 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2109 (T352010)', diff saved to https://phabricator.wikimedia.org/P56879 and previous config saved to /var/cache/conftool/dbconfig/20240216-041938-ladsgroup.json [production]
01:26 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching A:ml-cache: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
01:08 <htriedman@deploy2002> Finished deploy [airflow-dags/platform_eng@d93828e]: (no justification provided) (duration: 00m 28s) [production]
01:07 <htriedman@deploy2002> Started deploy [airflow-dags/platform_eng@d93828e]: (no justification provided) [production]
00:49 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching A:ml-cache: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
00:27 <eevans@cumin1002> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching A:cassandra-dev: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
00:27 <ryankemper@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad plugin upgrade - ryankemper@cumin2002 - T356651 [production]
00:16 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1248 (T352010)', diff saved to https://phabricator.wikimedia.org/P56877 and previous config saved to /var/cache/conftool/dbconfig/20240216-001636-ladsgroup.json [production]
00:16 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1248.eqiad.wmnet with reason: Maintenance [production]
00:16 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1248.eqiad.wmnet with reason: Maintenance [production]
00:16 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1247 (T352010)', diff saved to https://phabricator.wikimedia.org/P56876 and previous config saved to /var/cache/conftool/dbconfig/20240216-001612-ladsgroup.json [production]
00:06 <eevans@cumin1002> START - Cookbook sre.cassandra.roll-restart for nodes matching A:cassandra-dev: Restart to pickup logging jars — T353550 - eevans@cumin1002 [production]
00:02 <thcipriani@deploy2002> Finished scap: Backport for [[gerrit:1003827|Connection: Correct read-only detection (T354793 T356526)]] (duration: 10m 28s) [production]
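    (The final entry is scap's automatic log of a backport window deploy; the underlying operator command is roughly the following, assuming the Gerrit change number form:
        scap backport 1003827)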