2025-01-22
13:37 <marostegui@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2133.codfw.wmnet decommissioned, removing all IPs except the asset tag one - marostegui@cumin1002" [production]
13:37 <marostegui@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2133.codfw.wmnet decommissioned, removing all IPs except the asset tag one - marostegui@cumin1002" [production]
13:37 <stran@deploy2002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
13:34 <marostegui@cumin1002> START - Cookbook sre.dns.netbox [production]
13:29 <urbanecm> Deploying security patch for T384244 [production]
13:29 <marostegui@cumin1002> START - Cookbook sre.hosts.decommission for hosts db2133.codfw.wmnet [production]
13:15 <cmooney@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on netflow7001.magru.wmnet with reason: disabling alerts as I'm running gnmic manually rather than with systemd [production]
13:08 <fceratto@cumin1002> START - Cookbook sre.mysql.pool db2189 slowly with 10 steps - Repool host after fixing indexes and performing OS updates [production]
13:08 <federico3> repooling db2189 as per T384202 [production]
12:48 <root@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1205.eqiad.wmnet with OS bookworm [production]
12:47 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host ganeti2021.codfw.wmnet with OS bookworm [production]
12:33 <Amir1> creating new schema of file tables everywhere (T368113) [production]
12:25 <root@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1205.eqiad.wmnet with reason: host reimage [production]
12:24 <hnowlan> disabling puppet on A:cp to test r/1113178 [production]
12:22 <root@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1205.eqiad.wmnet with reason: host reimage [production]
12:20 <mvolz@deploy2002> helmfile [eqiad] DONE helmfile.d/services/zotero: apply [production]
12:19 <mvolz@deploy2002> helmfile [eqiad] START helmfile.d/services/zotero: apply [production]
12:18 <mvolz@deploy2002> helmfile [codfw] DONE helmfile.d/services/zotero: apply [production]
12:17 <mvolz@deploy2002> helmfile [codfw] START helmfile.d/services/zotero: apply [production]
12:17 <mvolz@deploy2002> helmfile [staging] DONE helmfile.d/services/zotero: apply [production]
12:16 <mvolz@deploy2002> helmfile [staging] START helmfile.d/services/zotero: apply [production]
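The zotero deploy above follows the usual staged helmfile rollout recorded in the SAL: apply to staging first, then codfw, then eqiad. A minimal sketch of that ordering, assuming it is driven by plain `helmfile -e <env> apply` runs against the service directory named in the log (the actual WMF wrapper tooling around helmfile.d/services is not shown here):

```python
import subprocess

# Staged rollout order reflected in the zotero entries above:
# verify in staging first, then roll to each production datacenter.
SERVICE_DIR = "helmfile.d/services/zotero"    # path as referenced in the log
ENVIRONMENTS = ["staging", "codfw", "eqiad"]  # order matches the SAL timestamps

for env in ENVIRONMENTS:
    # Assumed invocation: `helmfile -e <env> apply` run from the service directory.
    print(f"START helmfile [{env}] {SERVICE_DIR}: apply")
    subprocess.run(["helmfile", "-e", env, "apply"], cwd=SERVICE_DIR, check=True)
    print(f"DONE helmfile [{env}] {SERVICE_DIR}: apply")
```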
12:11 <jmm@cumin2002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on ganeti2021.codfw.wmnet with reason: remove from cluster for reimage [production]
12:10 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2021.codfw.wmnet [production]
12:05 <root@cumin1002> START - Cookbook sre.hosts.reimage for host db1205.eqiad.wmnet with OS bookworm [production]
12:02 <jynus@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1205.eqiad.wmnet with reason: os upgrade [production]
11:43 <vgutierrez> testing acme-chief 0.38 in acmechief-test1001 [production]
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P72220 and previous config saved to /var/cache/conftool/dbconfig/20250122-113404-root.json [production]
11:32 <urbanecm@deploy2002> Finished scap sync-world: Backport for [[gerrit:1113423|ValidatorFactory: Allow extensions to register validators (T384246)]], [[gerrit:1113424|ValidatorFactory: Allow extensions to register validators (T384246)]] (duration: 11m 55s) [production]
11:20 <urbanecm@deploy2002> Started scap sync-world: Backport for [[gerrit:1113423|ValidatorFactory: Allow extensions to register validators (T384246)]], [[gerrit:1113424|ValidatorFactory: Allow extensions to register validators (T384246)]] [production]
11:18 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P72218 and previous config saved to /var/cache/conftool/dbconfig/20250122-111859-root.json [production]
11:14 <marostegui@cumin1002> dbctl commit (dc=all): 'Remove es1021 from dbctl T384418', diff saved to https://phabricator.wikimedia.org/P72217 and previous config saved to /var/cache/conftool/dbconfig/20250122-111428-root.json [production]
11:10 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2166.codfw.wmnet with reason: Onsite work [production]
11:10 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2166 T383709', diff saved to https://phabricator.wikimedia.org/P72216 and previous config saved to /var/cache/conftool/dbconfig/20250122-111019-marostegui.json [production]
11:03 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P72215 and previous config saved to /var/cache/conftool/dbconfig/20250122-110354-root.json [production]
10:59 <marostegui> Deploy schema change in codfw x1 with replication on the master dbmaint T381759 [production]
10:48 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P72214 and previous config saved to /var/cache/conftool/dbconfig/20250122-104848-root.json [production]
10:38 <topranks> disable-puppet on netflow7001 to run gnmic in foreground for debug/development T369384 [production]
10:38 <cmooney@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on netflow7001.magru.wmnet with reason: disabling alerts as I'm running gnmic manually rather than with systemd [production]
10:33 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P72213 and previous config saved to /var/cache/conftool/dbconfig/20250122-103342-root.json [production]
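The db2175 entries above record a staged repool, stepping the pooled percentage from 10% up to 100% with a dbctl config commit at each step (the same pattern the sre.mysql.pool cookbook automates for db2189 further up). A minimal Python sketch of that ramp, with the assumption that `dbctl instance <host> pool -p <pct>` and `dbctl config commit -m <msg>` are the relevant invocations (they are not spelled out in these log lines):

```python
import subprocess
import time

# Staged repool ramp as reflected in the db2175 log entries above:
# 10% -> 25% -> 50% -> 75% -> 100%, committing the config after each step.
STEPS = [10, 25, 50, 75, 100]
HOST = "db2175"          # instance being repooled
WAIT_SECONDS = 15 * 60   # pause between steps; the log shows gaps of roughly 15-30 min

def run(cmd):
    """Run a shell command and fail loudly; thin wrapper for illustration only."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for pct in STEPS:
    # Assumed dbctl usage: set the pooling percentage for the instance, then
    # commit the change so it is applied and logged (as in the SAL lines above).
    run(["dbctl", "instance", HOST, "pool", "-p", str(pct)])
    run(["dbctl", "config", "commit", "-m", f"{HOST} (re)pooling @ {pct}%: Repooling"])
    if pct != STEPS[-1]:
        time.sleep(WAIT_SECONDS)
```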
10:03 <root@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1240.eqiad.wmnet with OS bookworm [production]
09:40 <root@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1240.eqiad.wmnet with reason: host reimage [production]
09:38 <root@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1240.eqiad.wmnet with reason: host reimage [production]
09:21 <root@cumin1002> START - Cookbook sre.hosts.reimage for host db1240.eqiad.wmnet with OS bookworm [production]
09:20 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2021.codfw.wmnet [production]
09:20 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2021.codfw.wmnet [production]
09:16 <jynus@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1240.eqiad.wmnet with reason: os upgrade [production]
09:15 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2021.codfw.wmnet [production]
08:24 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti2019.codfw.wmnet to cluster codfw and group B [production]
08:22 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti2019.codfw.wmnet to cluster codfw and group B [production]
08:20 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2019.codfw.wmnet [production]