2024-06-13
ยง
|
12:56 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2169.codfw.wmnet with reason: Maintenance [production]
12:56 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2158 (T367261)', diff saved to https://phabricator.wikimedia.org/P64839 and previous config saved to /var/cache/conftool/dbconfig/20240613-125648-marostegui.json [production]
12:52 <jmm@cumin1002> START - Cookbook sre.hosts.reboot-single for host cumin2002.codfw.wmnet [production]
12:51 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudvirt1032.eqiad.wmnet with reason: host reimage [production]
12:48 <taavi@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudvirt1032.eqiad.wmnet with reason: host reimage [production]
12:41 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2158', diff saved to https://phabricator.wikimedia.org/P64838 and previous config saved to /var/cache/conftool/dbconfig/20240613-124141-marostegui.json [production]
12:39 <elukey> reset BIOS/BMC to factory default on sretest1001 - T365372 [production]
12:30 <taavi@cumin1002> START - Cookbook sre.hosts.reimage for host cloudvirt1032.eqiad.wmnet with OS bookworm [production]
12:26 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2158', diff saved to https://phabricator.wikimedia.org/P64837 and previous config saved to /var/cache/conftool/dbconfig/20240613-122634-marostegui.json [production]
12:26 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on cloudvirt1032.eqiad.wmnet with reason: reimage and move to OVS [production]
12:26 <taavi@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on cloudvirt1032.eqiad.wmnet with reason: reimage and move to OVS [production]
12:21 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:1043006|Temporarily bump circuit breaking threshold to 350]] (duration: 12m 13s) [production]
12:20 <pfischer@deploy1002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
12:19 <pfischer@deploy1002> helmfile [staging] START helmfile.d/services/cirrus-streaming-updater: apply [production]
12:17 <pfischer@deploy1002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
12:16 <pfischer@deploy1002> helmfile [staging] START helmfile.d/services/cirrus-streaming-updater: apply [production]
12:15 <pfischer@deploy1002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
12:12 <ladsgroup@deploy1002> ladsgroup: Continuing with sync [production]
12:12 <ladsgroup@deploy1002> ladsgroup: Backport for [[gerrit:1043006|Temporarily bump circuit breaking threshold to 350]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
12:11 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2158 (T367261)', diff saved to https://phabricator.wikimedia.org/P64836 and previous config saved to /var/cache/conftool/dbconfig/20240613-121127-marostegui.json [production]
12:09 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:1043006|Temporarily bump circuit breaking threshold to 350]] [production]
12:07 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2158 (T367261)', diff saved to https://phabricator.wikimedia.org/P64835 and previous config saved to /var/cache/conftool/dbconfig/20240613-120711-marostegui.json [production]
12:07 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2187.codfw.wmnet with reason: Maintenance [production]
12:07 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2187.codfw.wmnet with reason: Maintenance [production]
12:07 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2158.codfw.wmnet with reason: Maintenance [production]
12:06 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2158.codfw.wmnet with reason: Maintenance [production]
12:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2151 (T367261)', diff saved to https://phabricator.wikimedia.org/P64834 and previous config saved to /var/cache/conftool/dbconfig/20240613-120644-marostegui.json [production]
12:04 <jiji@cumin1002> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:wikikube-worker-eqiad [production]
11:58 <fabfur@cumin1002> conftool action : set/pooled=yes; selector: name=cp4037.ulsfo.wmnet [production]
11:57 <fabfur> enabling puppet && repool cp4037 (T360454) [production]
11:51 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2151', diff saved to https://phabricator.wikimedia.org/P64832 and previous config saved to /var/cache/conftool/dbconfig/20240613-115137-marostegui.json [production]
11:36 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2151', diff saved to https://phabricator.wikimedia.org/P64831 and previous config saved to /var/cache/conftool/dbconfig/20240613-113630-marostegui.json [production]
11:35 <jelto@cumin1002> END (PASS) - Cookbook sre.gitlab.upgrade (exit_code=0) on GitLab host gitlab1004.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
11:29 <jelto@cumin1002> START - Cookbook sre.gitlab.upgrade on GitLab host gitlab1004.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
11:28 <jelto@cumin1002> END (PASS) - Cookbook sre.gitlab.upgrade (exit_code=0) on GitLab host gitlab1003.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
11:27 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kubemaster2001.codfw.wmnet [production]
11:22 <jelto@cumin1002> START - Cookbook sre.gitlab.upgrade on GitLab host gitlab1003.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
11:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2151 (T367261)', diff saved to https://phabricator.wikimedia.org/P64830 and previous config saved to /var/cache/conftool/dbconfig/20240613-112122-marostegui.json [production]
11:20 <cgoubert@cumin1002> START - Cookbook sre.hosts.reboot-single for host kubemaster2001.codfw.wmnet [production]
11:19 <cgoubert@cumin1002> conftool action : set/pooled=inactive; selector: name=wikikube-ctrl2003.codfw.wmnet [production]
11:17 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2151 (T367261)', diff saved to https://phabricator.wikimedia.org/P64829 and previous config saved to /var/cache/conftool/dbconfig/20240613-111706-marostegui.json [production]
11:17 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2151.codfw.wmnet with reason: Maintenance [production]
11:16 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1222 (T352010)', diff saved to https://phabricator.wikimedia.org/P64828 and previous config saved to /var/cache/conftool/dbconfig/20240613-111655-ladsgroup.json [production]
11:16 <moritzm> installing pillow security updates [production]
11:16 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1222.eqiad.wmnet with reason: Maintenance [production]
11:16 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2151.codfw.wmnet with reason: Maintenance [production]
11:16 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2124 (T367261)', diff saved to https://phabricator.wikimedia.org/P64827 and previous config saved to /var/cache/conftool/dbconfig/20240613-111642-marostegui.json [production]
11:16 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1222.eqiad.wmnet with reason: Maintenance [production]
11:16 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1197 (T352010)', diff saved to https://phabricator.wikimedia.org/P64826 and previous config saved to /var/cache/conftool/dbconfig/20240613-111633-ladsgroup.json [production]
11:14 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kubemaster2002.codfw.wmnet [production]