2024-10-07
17:26 <swfrench@deploy2002> Started scap sync-world: Testing scap after mw-debug next bring-up - T372604 [production]
17:12 <swfrench@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
17:12 <swfrench@deploy2002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
17:06 <swfrench@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
17:06 <swfrench@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
16:26 <elukey@cumin2002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host sretest2001.codfw.wmnet with OS bookworm [production]
16:24 <elukey@cumin2002> START - Cookbook sre.hosts.reimage for host sretest2001.codfw.wmnet with OS bookworm [production]
16:16 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host kubestage2002.codfw.wmnet with OS bookworm [production]
16:03 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2220.codfw.wmnet with reason: Maintenance [production]
16:03 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2220.codfw.wmnet with reason: Maintenance [production]
15:59 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2220.codfw.wmnet with reason: Maintenance [production]
15:59 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2220.codfw.wmnet with reason: Maintenance [production]
15:57 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2220.codfw.wmnet with reason: Maintenance [production]
15:57 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2220.codfw.wmnet with reason: Maintenance [production]
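Note: the sre.hosts.downtime entries above are emitted by the Spicerack cookbook itself when it is run from a cumin host; it logs the START line when it begins and the END (PASS/FAIL) line with the exit code when it finishes. As a rough sketch only, with flag names assumed from memory rather than confirmed, an invocation matching the db2220 runs would look something like:
    sudo cookbook sre.hosts.downtime --hours 12 -r "Maintenance" 'db2220.codfw.wmnet'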
15:49 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on puppetserver1003.eqiad.wmnet with reason: RAM expansion [production]
15:49 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on puppetserver1003.eqiad.wmnet with reason: RAM expansion [production]
15:25 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on puppetserver1002.eqiad.wmnet with reason: RAM expansion [production]
15:25 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on puppetserver1002.eqiad.wmnet with reason: RAM expansion [production]
15:13 <jclark@cumin1002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts puppetmaster1001.eqiad.wmnet [production]
15:13 <jclark@cumin1002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts puppetmaster1001.eqiad.wmnet [production]
15:00 <papaul> ongoing maintenance on mr1-esams [production]
14:43 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestage2002.codfw.wmnet with reason: host reimage [production]
14:40 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestage2002.codfw.wmnet with reason: host reimage [production]
14:18 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host kubestage2002.codfw.wmnet with OS bookworm [production]
14:16 <kamila@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on wikikube-worker2092.codfw.wmnet with reason: Degraded RAID [production]
14:16 <kamila@cumin1002> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on wikikube-worker2092.codfw.wmnet with reason: Degraded RAID [production]
13:49 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1203 (T367856)', diff saved to https://phabricator.wikimedia.org/P69489 and previous config saved to /var/cache/conftool/dbconfig/20241007-134950-ladsgroup.json [production]
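Note: a "dbctl commit (dc=all): '...'" entry like the one above is the message dbctl records when a configuration change is committed; the diff paste and previous-config path in the entry are reported by dbctl itself. A minimal sketch of the underlying two-step operation, assuming the usual depool-then-commit workflow:
    dbctl instance db1203 depool
    dbctl config commit -m "Depooling db1203 (T367856)"
The staggered "Repooling after maintenance db1192" commits later in this log follow the same pattern, with the instance presumably pooled back in at increasing weights before each commit.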
13:49 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 12:00:00 on db1203.eqiad.wmnet with reason: Maintenance [production]
13:49 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2036.codfw.wmnet [production]
13:49 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 12:00:00 on db1203.eqiad.wmnet with reason: Maintenance [production]
13:49 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1192 (T367856)', diff saved to https://phabricator.wikimedia.org/P69488 and previous config saved to /var/cache/conftool/dbconfig/20241007-134929-ladsgroup.json [production]
13:37 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2036.codfw.wmnet [production]
13:37 <vgutierrez> switching to digicert-2024 certificates on esams, eqsin, drmrs and magru [production]
13:36 <Lucas_WMDE> UTC afternoon backport+config window done [production]
13:35 <dreamyjazz@deploy2002> Finished scap sync-world: Backport for [[gerrit:1078406|Update globalblocks 'gb_address' index to allow autoblocks (T376052)]] (duration: 06m 49s) [production]
13:34 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1192', diff saved to https://phabricator.wikimedia.org/P69487 and previous config saved to /var/cache/conftool/dbconfig/20241007-133422-ladsgroup.json [production]
13:31 <dreamyjazz@deploy2002> dreamyjazz: Continuing with sync [production]
13:30 <dreamyjazz@deploy2002> dreamyjazz: Backport for [[gerrit:1078406|Update globalblocks 'gb_address' index to allow autoblocks (T376052)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
13:28 <dreamyjazz@deploy2002> Started scap sync-world: Backport for [[gerrit:1078406|Update globalblocks 'gb_address' index to allow autoblocks (T376052)]] [production]
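Note: the four dreamyjazz entries above trace a single scap backport run on the deployment host (chronologically: Started, synced to the testservers, Continuing with sync, Finished). A minimal sketch, assuming the Gerrit change number is passed directly:
    scap backport 1078406
scap first syncs the change to the mwdebug testservers, waits for the deployer to confirm it looks good there, and then completes the sync-world.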
13:19 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1192', diff saved to https://phabricator.wikimedia.org/P69486 and previous config saved to /var/cache/conftool/dbconfig/20241007-131915-ladsgroup.json [production]
13:12 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.addnode (exit_code=0) for new host ganeti2035.codfw.wmnet to cluster codfw and group C [production]
13:11 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti2035.codfw.wmnet to cluster codfw and group C [production]
13:10 <lucaswerkmeister-wmde@deploy2002> Finished scap sync-world: Backport for [[gerrit:1077800|scandium is being replaced by parsoidtest1001 (T363402)]] (duration: 07m 14s) [production]
13:05 <lucaswerkmeister-wmde@deploy2002> arlolra, lucaswerkmeister-wmde: Continuing with sync [production]
13:05 <lucaswerkmeister-wmde@deploy2002> arlolra, lucaswerkmeister-wmde: Backport for [[gerrit:1077800|scandium is being replaced by parsoidtest1001 (T363402)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
13:04 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1192 (T367856)', diff saved to https://phabricator.wikimedia.org/P69485 and previous config saved to /var/cache/conftool/dbconfig/20241007-130409-ladsgroup.json [production]
13:03 <lucaswerkmeister-wmde@deploy2002> Started scap sync-world: Backport for [[gerrit:1077800|scandium is being replaced by parsoidtest1001 (T363402)]] [production]
13:02 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti2035.codfw.wmnet to cluster codfw and group C [production]
13:02 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti2035.codfw.wmnet to cluster codfw and group C [production]
13:00 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2035.codfw.wmnet [production]