2024-07-11 §
13:47 <arnaudb@cumin1002> dbctl commit (dc=all): 'Depooling db1183 (T367781)', diff saved to https://phabricator.wikimedia.org/P66289 and previous config saved to /var/cache/conftool/dbconfig/20240711-134759-arnaudb.json [production]
13:47 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1183.eqiad.wmnet with reason: Maintenance [production]
13:47 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db1183.eqiad.wmnet with reason: Maintenance [production]
13:47 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161 (T367781)', diff saved to https://phabricator.wikimedia.org/P66288 and previous config saved to /var/cache/conftool/dbconfig/20240711-134737-arnaudb.json [production]
13:44 <btullis@cumin1002> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto an-presto cluster: Roll restart of all Presto's jvm daemons. [production]
13:42 <wmbot~dcaro@urcuchillay> END (FAIL) - Cookbook wmcs.ceph.osd.bootstrap_and_add (exit_code=99) [admin]
13:41 <wmbot~dcaro@urcuchillay> START - Cookbook wmcs.ceph.osd.bootstrap_and_add [admin]
13:32 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:32 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161', diff saved to https://phabricator.wikimedia.org/P66287 and previous config saved to /var/cache/conftool/dbconfig/20240711-133229-arnaudb.json [production]
13:29 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:28 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1090.eqiad.wmnet [production]
13:26 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:22 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:20 <btullis@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1090.eqiad.wmnet [production]
13:18 <btullis> setting cephosd cluster to noout mode for T365996 [analytics]
13:17 <btullis> draining dse-k8s-worker1007 ready for T365996 [analytics]
13:17 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161', diff saved to https://phabricator.wikimedia.org/P66286 and previous config saved to /var/cache/conftool/dbconfig/20240711-131721-arnaudb.json [production]
13:14 <btullis> failed back hive and presto services to an-coord1003 [analytics]
13:14 <cgoubert@cumin1002> conftool action : set/pooled=yes; selector: name=(kubernetes1062.eqiad.wmnet|mw1494.eqiad.wmnet|mw1495.eqiad.wmnet),cluster=kubernetes,service=kubesvc [production]
13:14 <claime> Uncordoning and repooling kubernetes1062.eqiad.wmnet mw1494.eqiad.wmnet mw1495.eqiad.wmnet that were actually not affected by T365996 [production]
13:13 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:12 <btullis@cumin1002> START - Cookbook sre.presto.roll-restart-workers for Presto an-presto cluster: Roll restart of all Presto's jvm daemons. [production]
13:10 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:09 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:08 <cgoubert@cumin1002> conftool action : set/pooled=inactive; selector: name=(kubernetes1062.eqiad.wmnet|mw1494.eqiad.wmnet|mw1495.eqiad.wmnet),cluster=kubernetes,service=kubesvc [production]
13:05 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:04 <claime> Cordoning and depooling kubernetes1062.eqiad.wmnet mw1494.eqiad.wmnet mw1495.eqiad.wmnet for T365996 [production]
13:04 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6 days, 0:00:00 on relforge[1003-1004].eqiad.wmnet with reason: T368950 [production]
13:04 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:03 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 6 days, 0:00:00 on relforge[1003-1004].eqiad.wmnet with reason: T368950 [production]
13:02 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161 (T367781)', diff saved to https://phabricator.wikimedia.org/P66285 and previous config saved to /var/cache/conftool/dbconfig/20240711-130214-arnaudb.json [production]
13:00 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
12:59 <arnaudb@cumin1002> dbctl commit (dc=all): 'Depooling db1161 (T367781)', diff saved to https://phabricator.wikimedia.org/P66284 and previous config saved to /var/cache/conftool/dbconfig/20240711-125949-arnaudb.json [production]
12:59 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on an-redacteddb1001.eqiad.wmnet,clouddb[1016,1020-1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
12:59 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 8:00:00 on an-redacteddb1001.eqiad.wmnet,clouddb[1016,1020-1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
12:59 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1161.eqiad.wmnet with reason: Maintenance [production]
12:59 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db1161.eqiad.wmnet with reason: Maintenance [production]
12:55 <godog> reenable benthos@webrequest_live on centrallog2002 - T369737 [production]
12:54 <aborrero@cloudcumin1001> END (PASS) - Cookbook wmcs.toolforge.k8s.worker.upgrade (exit_code=0) for node toolsbeta-test-k8s-ingress-6 from 1.24.17 to 1.25.16 [toolsbeta]
12:54 <aborrero@cloudcumin1001> START - Cookbook wmcs.toolforge.k8s.worker.upgrade for node toolsbeta-test-k8s-ingress-6 from 1.24.17 to 1.25.16 [toolsbeta]
12:53 <aborrero@cloudcumin1001> END (PASS) - Cookbook wmcs.toolforge.k8s.worker.upgrade (exit_code=0) for node toolsbeta-test-k8s-worker-10 from 1.24.17 to 1.25.16 [toolsbeta]
12:52 <aborrero@cloudcumin1001> START - Cookbook wmcs.toolforge.k8s.worker.upgrade for node toolsbeta-test-k8s-worker-10 from 1.24.17 to 1.25.16 [toolsbeta]
12:51 <ayounsi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on netboxdb2003.codfw.wmnet with reason: netbox upgrade prep work [production]
12:51 <ayounsi@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on netboxdb2003.codfw.wmnet with reason: netbox upgrade prep work [production]
12:51 <aborrero@cloudcumin1001> END (PASS) - Cookbook wmcs.toolforge.k8s.worker.upgrade (exit_code=0) for node toolsbeta-test-k8s-worker-nfs-1 from 1.24.17 to 1.25.16 [toolsbeta]
12:51 <ayounsi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on netboxdb1003.eqiad.wmnet with reason: netbox upgrade prep work [production]
12:51 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
12:51 <ayounsi@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on netboxdb2003.codfw.wmnet with reason: host reimage [production]
12:51 <ayounsi@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on netboxdb1003.eqiad.wmnet with reason: netbox upgrade prep work [production]
12:50 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]