2025-06-18
ยง
|
12:01 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1206', diff saved to https://phabricator.wikimedia.org/P78350 and previous config saved to /var/cache/conftool/dbconfig/20250618-120117-marostegui.json [production]
11:56 <jiji@deploy1003> helmfile [codfw] DONE helmfile.d/services/mw-experimental: apply [production]
11:55 <jiji@deploy1003> helmfile [codfw] START helmfile.d/services/mw-experimental: apply [production]
11:55 <jiji@deploy1003> helmfile [codfw] DONE helmfile.d/services/mw-experimental: apply [production]
11:55 <hnowlan@deploy1003> helmfile [eqiad] DONE helmfile.d/services/mobileapps: apply [production]
11:55 <jiji@deploy1003> helmfile [codfw] START helmfile.d/services/mw-experimental: apply [production]
11:54 <hnowlan@deploy1003> helmfile [eqiad] START helmfile.d/services/mobileapps: apply [production]
11:46 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1206', diff saved to https://phabricator.wikimedia.org/P78349 and previous config saved to /var/cache/conftool/dbconfig/20250618-114610-marostegui.json [production]
11:31 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P78348 and previous config saved to /var/cache/conftool/dbconfig/20250618-113125-root.json [production]
11:31 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1206 (T396130)', diff saved to https://phabricator.wikimedia.org/P78347 and previous config saved to /var/cache/conftool/dbconfig/20250618-113103-marostegui.json [production]
11:27 <jiji@deploy1003> helmfile [eqiad] DONE helmfile.d/services/mw-experimental: apply [production]
11:26 <jiji@deploy1003> helmfile [eqiad] START helmfile.d/services/mw-experimental: apply [production]
11:25 <mvolz@deploy1003> helmfile [eqiad] DONE helmfile.d/services/citoid: apply [production]
11:24 <mvolz@deploy1003> helmfile [eqiad] START helmfile.d/services/citoid: apply [production]
11:21 <mvolz@deploy1003> helmfile [codfw] DONE helmfile.d/services/citoid: apply [production]
11:21 <mvolz@deploy1003> helmfile [codfw] START helmfile.d/services/citoid: apply [production]
11:20 <mvolz@deploy1003> helmfile [staging] DONE helmfile.d/services/citoid: apply [production]
11:19 <mvolz@deploy1003> helmfile [staging] START helmfile.d/services/citoid: apply [production]
11:18 <jiji@deploy1003> helmfile [eqiad] DONE helmfile.d/services/mw-experimental: apply [production]
11:16 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P78345 and previous config saved to /var/cache/conftool/dbconfig/20250618-111620-root.json [production]
11:13 <jmm@cumin1003> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1048.eqiad.wmnet [production]
11:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1206 (T396130)', diff saved to https://phabricator.wikimedia.org/P78344 and previous config saved to /var/cache/conftool/dbconfig/20250618-111239-marostegui.json [production]
11:12 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1206.eqiad.wmnet with reason: Maintenance [production]
11:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196 (T396130)', diff saved to https://phabricator.wikimedia.org/P78343 and previous config saved to /var/cache/conftool/dbconfig/20250618-111217-marostegui.json [production]
11:09 <root@cumin1002> END (FAIL) - Cookbook sre.puppet.migrate-host (exit_code=99) for host backup1009.eqiad.wmnet [production]
11:07 <jiji@deploy1003> helmfile [eqiad] START helmfile.d/services/mw-experimental: apply [production]
11:07 <root@cumin1002> START - Cookbook sre.puppet.migrate-host for host backup1009.eqiad.wmnet [production]
11:06 <hnowlan@deploy1003> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]
11:06 <hnowlan@deploy1003> helmfile [eqiad] START helmfile.d/services/changeprop: apply [production]
11:04 <btullis@dns1004> END - running authdns-update [production]
11:03 <btullis@dns1004> START - running authdns-update [production]
11:02 <root@cumin1002> END (FAIL) - Cookbook sre.puppet.migrate-host (exit_code=99) for host backup1009.eqiad.wmnet [production]
11:02 <root@cumin1002> START - Cookbook sre.puppet.migrate-host for host backup1009.eqiad.wmnet [production]
11:01 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P78342 and previous config saved to /var/cache/conftool/dbconfig/20250618-110114-root.json [production]
10:59 <reedy@deploy1003> Finished scap sync-world: Backport for [[gerrit:1160144|composer: Various updates]], [[gerrit:1160151|Setup json linting (T397191)]], [[gerrit:1130201|Improve function and property documentation for php code (T171115)]] (duration: 10m 20s) [production]
10:58 <jmm@cumin1003> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2030.codfw.wmnet [production]
10:58 <jmm@cumin1003> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2030.codfw.wmnet [production]
10:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196', diff saved to https://phabricator.wikimedia.org/P78341 and previous config saved to /var/cache/conftool/dbconfig/20250618-105710-marostegui.json [production]
10:54 <btullis@cumin1003> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts an-coord1003.eqiad.wmnet [production]
10:54 <btullis@cumin1003> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-coord1003.eqiad.wmnet [production]
10:52 <root@cumin1002> DONE (FAIL) - Cookbook sre.puppet.renew-cert (exit_code=99) for backup1009.eqiad.wmnet: Renew puppet certificate - root@cumin1002 [production]
10:52 <reedy@deploy1003> umherirrender, reedy: Continuing with sync [production]
10:51 <reedy@deploy1003> umherirrender, reedy: Backport for [[gerrit:1160144|composer: Various updates]], [[gerrit:1160151|Setup json linting (T397191)]], [[gerrit:1130201|Improve function and property documentation for php code (T171115)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
10:49 <jmm@cumin1003> START - Cookbook sre.hosts.reboot-single for host ganeti2030.codfw.wmnet [production]
10:48 <reedy@deploy1003> Started scap sync-world: Backport for [[gerrit:1160144|composer: Various updates]], [[gerrit:1160151|Setup json linting (T397191)]], [[gerrit:1130201|Improve function and property documentation for php code (T171115)]] [production]
10:48 <root@cumin1002> DONE (FAIL) - Cookbook sre.puppet.renew-cert (exit_code=99) for backup1009.eqiad.wmnet: Renew puppet certificate - root@cumin1002 [production]
10:47 <jmm@cumin1003> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2030.codfw.wmnet [production]
10:46 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P78340 and previous config saved to /var/cache/conftool/dbconfig/20250618-104609-root.json [production]
10:43 <btullis@cumin1003> START - Cookbook sre.hosts.reboot-single for host an-coord1003.eqiad.wmnet [production]
10:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196', diff saved to https://phabricator.wikimedia.org/P78339 and previous config saved to /var/cache/conftool/dbconfig/20250618-104203-marostegui.json [production]
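
The dbctl entries above (the db1206 depool/repool pair and the staged db2191 repool at 25% → 50% → 75% → 100%) are the log output of dbctl changes followed by a config commit. A minimal sketch of that flow as run from a cumin host is below; it is a reconstruction, not the operators' literal invocations, and the exact flag spellings (-p, -m) are assumptions:

  # depool a replica before maintenance, then commit the change (illustrative)
  sudo dbctl instance db1206 depool
  sudo dbctl config commit -m 'Depooling db1206 (T396130)'

  # after maintenance, repool gradually, committing at each step (illustrative)
  sudo dbctl instance db2191 pool -p 25
  sudo dbctl config commit -m 'db2191 (re)pooling @ 25%: Repooling'

Each commit writes the diff to a Phabricator paste and saves the previous config under /var/cache/conftool/dbconfig/, which is what the "diff saved to ... and previous config saved to ..." lines record.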