2025-10-02
12:10 <moritzm> failover Ganeti master in codfw to ganeti2048 [production]
12:09 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti6004.drmrs.wmnet [production]
12:09 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti6004.drmrs.wmnet [production]
12:06 <fceratto@cumin1002> END (PASS) - Cookbook sre.mysql.depool (exit_code=0) es2028 - Depool es2028.codfw.wmnet to then clone it to es2051.codfw.wmnet - fceratto@cumin1002 [production]
12:06 <fceratto@cumin1002> START - Cookbook sre.mysql.depool es2028 - Depool es2028.codfw.wmnet to then clone it to es2051.codfw.wmnet - fceratto@cumin1002 [production]
12:06 <fceratto@cumin1002> START - Cookbook sre.mysql.clone_es of es2028.codfw.wmnet onto es2051.codfw.wmnet [production]
12:03 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti6004.drmrs.wmnet [production]
11:46 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti6004.drmrs.wmnet [production]
11:45 <stevemunene@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on druid[1007-1008].eqiad.wmnet with reason: Decommissioning druid_public hosts [production]
11:40 <btullis@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:39 <btullis@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
11:35 <moritzm> failover Ganeti master in drmrs02 to ganeti6002 [production]
11:32 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti6002.drmrs.wmnet [production]
11:32 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti6002.drmrs.wmnet [production]
11:28 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti6002.drmrs.wmnet [production]
11:26 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti6002.drmrs.wmnet [production]
11:21 <mvolz@deploy1003> helmfile [eqiad] DONE helmfile.d/services/citoid: apply [production]
11:20 <mvolz@deploy1003> helmfile [eqiad] START helmfile.d/services/citoid: apply [production]
11:19 <mvolz@deploy1003> helmfile [codfw] DONE helmfile.d/services/citoid: apply [production]
11:19 <mvolz@deploy1003> helmfile [codfw] START helmfile.d/services/citoid: apply [production]
11:18 <moritzm> installing postgresql security updates on netboxdb nodes [production]
11:17 <btullis@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:14 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti6003.drmrs.wmnet [production]
11:14 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host ganeti6003.drmrs.wmnet [production]
11:12 <mvolz@deploy1003> helmfile [staging] DONE helmfile.d/services/citoid: apply [production]
11:12 <mvolz@deploy1003> helmfile [staging] START helmfile.d/services/citoid: apply [production]
11:08 <jmm@cumin2002> END (PASS) - Cookbook sre.cdn.roll-restart-reboot-ncredir (exit_code=0) rolling restart_daemons on A:ncredir [production]
11:07 <btullis@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
11:05 <btullis@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'sync'. [production]
11:04 <btullis@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'sync'. [production]
11:02 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti6003.drmrs.wmnet [production]
11:02 <mvolz@deploy1003> helmfile [staging] DONE helmfile.d/services/zotero: apply [production]
11:02 <mvolz@deploy1003> helmfile [staging] START helmfile.d/services/zotero: apply [production]
10:59 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti6003.drmrs.wmnet [production]
10:59 <btullis@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:59 <btullis@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'sync'. [production]
10:57 <jmm@cumin2002> START - Cookbook sre.cdn.roll-restart-reboot-ncredir rolling restart_daemons on A:ncredir [production]
10:52 <zabe@deploy2002> Finished scap sync-world: Backport for [[gerrit:1193073|Revert "RevisionStore: Find identical revisions without using rev_sha1"]] (duration: 11m 06s) [production]
10:48 <moritzm> failover Ganeti master in drmrs01 to ganeti6001 [production]
10:48 <zabe@deploy2002> zabe: Continuing with sync [production]
10:47 <zabe@deploy2002> zabe: Backport for [[gerrit:1193073|Revert "RevisionStore: Find identical revisions without using rev_sha1"]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
10:43 <btullis@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
10:41 <zabe@deploy2002> Started scap sync-world: Backport for [[gerrit:1193073|Revert "RevisionStore: Find identical revisions without using rev_sha1"]] [production]
10:40 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti6001.drmrs.wmnet [production]
10:40 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti6001.drmrs.wmnet [production]
10:34 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti6001.drmrs.wmnet [production]
10:33 <btullis@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
10:32 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti6001.drmrs.wmnet [production]
10:20 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti5004.eqsin.wmnet [production]
10:20 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5004.eqsin.wmnet [production]