2025-06-18
11:19 <mvolz@deploy1003> helmfile [staging] START helmfile.d/services/citoid: apply [production]
11:18 <jiji@deploy1003> helmfile [eqiad] DONE helmfile.d/services/mw-experimental: apply [production]
11:16 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P78345 and previous config saved to /var/cache/conftool/dbconfig/20250618-111620-root.json [production]
11:13 <jmm@cumin1003> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1048.eqiad.wmnet [production]
11:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1206 (T396130)', diff saved to https://phabricator.wikimedia.org/P78344 and previous config saved to /var/cache/conftool/dbconfig/20250618-111239-marostegui.json [production]
11:12 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1206.eqiad.wmnet with reason: Maintenance [production]
11:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196 (T396130)', diff saved to https://phabricator.wikimedia.org/P78343 and previous config saved to /var/cache/conftool/dbconfig/20250618-111217-marostegui.json [production]
11:09 <root@cumin1002> END (FAIL) - Cookbook sre.puppet.migrate-host (exit_code=99) for host backup1009.eqiad.wmnet [production]
11:07 <jiji@deploy1003> helmfile [eqiad] START helmfile.d/services/mw-experimental: apply [production]
11:07 <root@cumin1002> START - Cookbook sre.puppet.migrate-host for host backup1009.eqiad.wmnet [production]
11:06 <hnowlan@deploy1003> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]
11:06 <hnowlan@deploy1003> helmfile [eqiad] START helmfile.d/services/changeprop: apply [production]
11:04 <btullis@dns1004> END - running authdns-update [production]
11:03 <btullis@dns1004> START - running authdns-update [production]
11:02 <root@cumin1002> END (FAIL) - Cookbook sre.puppet.migrate-host (exit_code=99) for host backup1009.eqiad.wmnet [production]
11:02 <root@cumin1002> START - Cookbook sre.puppet.migrate-host for host backup1009.eqiad.wmnet [production]
11:01 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P78342 and previous config saved to /var/cache/conftool/dbconfig/20250618-110114-root.json [production]
10:59 <reedy@deploy1003> Finished scap sync-world: Backport for [[gerrit:1160144|composer: Various updates]], [[gerrit:1160151|Setup json linting (T397191)]], [[gerrit:1130201|Improve function and property documentation for php code (T171115)]] (duration: 10m 20s) [production]
10:58 <jmm@cumin1003> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2030.codfw.wmnet [production]
10:58 <jmm@cumin1003> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2030.codfw.wmnet [production]
10:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196', diff saved to https://phabricator.wikimedia.org/P78341 and previous config saved to /var/cache/conftool/dbconfig/20250618-105710-marostegui.json [production]
10:54 <btullis@cumin1003> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts an-coord1003.eqiad.wmnet [production]
10:54 <btullis@cumin1003> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-coord1003.eqiad.wmnet [production]
10:52 <root@cumin1002> DONE (FAIL) - Cookbook sre.puppet.renew-cert (exit_code=99) for backup1009.eqiad.wmnet: Renew puppet certificate - root@cumin1002 [production]
10:52 <reedy@deploy1003> umherirrender, reedy: Continuing with sync [production]
10:51 <reedy@deploy1003> umherirrender, reedy: Backport for [[gerrit:1160144|composer: Various updates]], [[gerrit:1160151|Setup json linting (T397191)]], [[gerrit:1130201|Improve function and property documentation for php code (T171115)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
10:49 <jmm@cumin1003> START - Cookbook sre.hosts.reboot-single for host ganeti2030.codfw.wmnet [production]
10:48 <reedy@deploy1003> Started scap sync-world: Backport for [[gerrit:1160144|composer: Various updates]], [[gerrit:1160151|Setup json linting (T397191)]], [[gerrit:1130201|Improve function and property documentation for php code (T171115)]] [production]
10:48 <root@cumin1002> DONE (FAIL) - Cookbook sre.puppet.renew-cert (exit_code=99) for backup1009.eqiad.wmnet: Renew puppet certificate - root@cumin1002 [production]
10:47 <jmm@cumin1003> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2030.codfw.wmnet [production]
10:46 <marostegui@cumin1002> dbctl commit (dc=all): 'db2191 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P78340 and previous config saved to /var/cache/conftool/dbconfig/20250618-104609-root.json [production]
10:43 <btullis@cumin1003> START - Cookbook sre.hosts.reboot-single for host an-coord1003.eqiad.wmnet [production]
10:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196', diff saved to https://phabricator.wikimedia.org/P78339 and previous config saved to /var/cache/conftool/dbconfig/20250618-104203-marostegui.json [production]
10:40 <marostegui@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db2191.codfw.wmnet with reason: Maintenance [production]
10:40 <btullis@cumin1003> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts an-coord1003.eqiad.wmnet [production]
10:40 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2191', diff saved to https://phabricator.wikimedia.org/P78338 and previous config saved to /var/cache/conftool/dbconfig/20250618-104033-root.json [production]
10:40 <btullis@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on an-coord1003.eqiad.wmnet with reason: Upgrading SSD firmware [production]
10:31 <jmm@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on ganeti2024.codfw.wmnet with reason: remove for decom [production]
10:26 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1196 (T396130)', diff saved to https://phabricator.wikimedia.org/P78337 and previous config saved to /var/cache/conftool/dbconfig/20250618-102655-marostegui.json [production]
10:20 <jynus@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on backup[1001,1014].eqiad.wmnet with reason: Backup director migration [production]
10:20 <jmm@cumin1003> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host netboxdb1003.eqiad.wmnet [production]
10:18 <fceratto@cumin1002> END (PASS) - Cookbook sre.mysql.pool (exit_code=0) db2212* slowly with 10 steps - Pooling in [production]
10:16 <jmm@cumin1003> START - Cookbook sre.hosts.reboot-single for host netboxdb1003.eqiad.wmnet [production]
10:14 <jynus> starting backup director migration backup1001 -> backup1014 T387892 [production]
10:10 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.pool-depool-cluster (exit_code=99) depool 44 services in codfw/codfw: pre-upgrade-test [production]
10:10 <jayme@cumin1002> START - Cookbook sre.k8s.pool-depool-cluster depool 44 services in codfw/codfw: pre-upgrade-test [production]
10:06 <hnowlan@deploy1003> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]
10:06 <hnowlan@deploy1003> helmfile [eqiad] START helmfile.d/services/changeprop: apply [production]
10:05 <hnowlan@deploy1003> helmfile [codfw] DONE helmfile.d/services/changeprop: apply [production]
10:05 <hnowlan@deploy1003> helmfile [codfw] START helmfile.d/services/changeprop: apply [production]