2024-07-11
13:47 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161 (T367781)', diff saved to https://phabricator.wikimedia.org/P66288 and previous config saved to /var/cache/conftool/dbconfig/20240711-134737-arnaudb.json [production]
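The repeated dbctl commits above and below are the standard depool/repool cycle for a replica under maintenance; the staged repool commits at 13:02, 13:17, 13:32 and 13:47 are consistent with the pool weight being raised in steps. A minimal sketch of that cycle, assuming the usual dbctl subcommands and an illustrative partial weight not taken from this log:

    dbctl instance db1161 depool                                           # take the replica out of rotation
    dbctl config commit -m 'Depooling db1161 (T367781)'                    # apply and log the change
    # ... maintenance on db1161 ...
    dbctl instance db1161 pool -p 25                                       # illustrative weight; raised in later repool steps
    dbctl config commit -m 'Repooling after maintenance db1161 (T367781)'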
13:44 <btullis@cumin1002> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto an-presto cluster: Roll restart of all Presto's jvm daemons. [production]
13:32 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:32 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161', diff saved to https://phabricator.wikimedia.org/P66287 and previous config saved to /var/cache/conftool/dbconfig/20240711-133229-arnaudb.json [production]
13:29 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:28 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host an-worker1090.eqiad.wmnet [production]
13:26 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:22 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
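Each START/DONE pair logged from deploy1002 is a single helmfile run against the ml-serve-codfw environment. A sketch of what one such run typically looks like; only the environment name and the helmfile.d/admin directory come from the log itself, the checkout path is an assumption:

    cd /srv/deployment-charts/helmfile.d/admin    # path assumed, not shown in the log
    helmfile -e ml-serve-codfw diff               # review pending changes
    helmfile -e ml-serve-codfw apply              # produces the START/DONE pair recorded above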
13:20 <btullis@cumin1002> START - Cookbook sre.hosts.reboot-single for host an-worker1090.eqiad.wmnet [production]
13:17 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161', diff saved to https://phabricator.wikimedia.org/P66286 and previous config saved to /var/cache/conftool/dbconfig/20240711-131721-arnaudb.json [production]
13:14 <cgoubert@cumin1002> conftool action : set/pooled=yes; selector: name=(kubernetes1062.eqiad.wmnet|mw1494.eqiad.wmnet|mw1495.eqiad.wmnet),cluster=kubernetes,service=kubesvc [production]
13:14 <claime> Uncordoning and repooling kubernetes1062.eqiad.wmnet mw1494.eqiad.wmnet mw1495.eqiad.wmnet, which were actually not affected by T365996 [production]
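The 13:04 and 13:14 entries are a cordon/depool and its reversal, driven through conftool. A rough equivalent of the logged actions for one of the hosts, with the selector fields copied from the conftool lines; the kubectl cordon/uncordon step is an assumption about how the cordoning itself was done, since only the pooled-state changes appear in the log:

    # take the node out of service
    confctl select 'name=kubernetes1062.eqiad.wmnet,cluster=kubernetes,service=kubesvc' set/pooled=inactive
    kubectl cordon kubernetes1062.eqiad.wmnet      # assumed; not shown explicitly in the log
    # reverse it once the host turned out not to be affected by T365996
    confctl select 'name=kubernetes1062.eqiad.wmnet,cluster=kubernetes,service=kubesvc' set/pooled=yes
    kubectl uncordon kubernetes1062.eqiad.wmnet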
13:13 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:12 <btullis@cumin1002> START - Cookbook sre.presto.roll-restart-workers for Presto an-presto cluster: Roll restart of all Presto's jvm daemons. [production]
13:10 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:09 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:08 <cgoubert@cumin1002> conftool action : set/pooled=inactive; selector: name=(kubernetes1062.eqiad.wmnet|mw1494.eqiad.wmnet|mw1495.eqiad.wmnet),cluster=kubernetes,service=kubesvc [production]
13:05 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
13:04 <claime> Cordoning and depooling kubernetes1062.eqiad.wmnet mw1494.eqiad.wmnet mw1495.eqiad.wmnet for T365996 [production]
13:04 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6 days, 0:00:00 on relforge[1003-1004].eqiad.wmnet with reason: T368950 [production]
13:04 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
13:03 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 6 days, 0:00:00 on relforge[1003-1004].eqiad.wmnet with reason: T368950 [production]
13:02 <arnaudb@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1161 (T367781)', diff saved to https://phabricator.wikimedia.org/P66285 and previous config saved to /var/cache/conftool/dbconfig/20240711-130214-arnaudb.json [production]
13:00 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
12:59 <arnaudb@cumin1002> dbctl commit (dc=all): 'Depooling db1161 (T367781)', diff saved to https://phabricator.wikimedia.org/P66284 and previous config saved to /var/cache/conftool/dbconfig/20240711-125949-arnaudb.json [production]
12:59 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on an-redacteddb1001.eqiad.wmnet,clouddb[1016,1020-1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
12:59 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 8:00:00 on an-redacteddb1001.eqiad.wmnet,clouddb[1016,1020-1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
12:59 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1161.eqiad.wmnet with reason: Maintenance [production]
12:59 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db1161.eqiad.wmnet with reason: Maintenance [production]
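The downtime START/END pairs are runs of the sre.hosts.downtime cookbook, which schedules alerting downtime for the listed hosts before the maintenance begins. A sketch of the invocation behind the 4:00:00 db1161 entry, assuming it is launched via the cookbook wrapper on a cumin host; the flag spellings are assumptions and may differ from the cookbook's actual arguments:

    sudo cookbook sre.hosts.downtime --hours 4 --reason "Maintenance" 'db1161.eqiad.wmnet'   # flags assumed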
12:55 <godog> reenable benthos@webrequest_live on centrallog2002 - T369737 [production]
12:51 <ayounsi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on netboxdb2003.codfw.wmnet with reason: netbox upgrade prep work [production]
12:51 <ayounsi@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on netboxdb2003.codfw.wmnet with reason: netbox upgrade prep work [production]
12:51 <ayounsi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on netboxdb1003.eqiad.wmnet with reason: netbox upgrade prep work [production]
12:51 <klausman@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
12:51 <ayounsi@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on netboxdb2003.codfw.wmnet with reason: host reimage [production]
12:51 <ayounsi@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on netboxdb1003.eqiad.wmnet with reason: netbox upgrade prep work [production]
12:50 <klausman@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
12:50 <dcausse@deploy1002> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
12:50 <dcausse@deploy1002> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
12:50 <claime> running puppet on O:analytics_cluster::turnilo,O:analytics_cluster::turnilo::staging [production]
12:48 <godog> temp stop benthos@webrequest_live on centrallog2002 - T369737 [production]
12:47 <ayounsi@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on netboxdb2003.codfw.wmnet with reason: host reimage [production]
12:43 <dcausse@deploy1002> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
12:42 <dcausse@deploy1002> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
12:39 <ayounsi@cumin1002> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) for 4 days, 0:00:00 on netboxdb1003.eqiad.wmnet with reason: netbox upgrade prep work [production]
12:39 <ayounsi@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on netboxdb1003.eqiad.wmnet with reason: netbox upgrade prep work [production]
12:30 <ayounsi@cumin2002> START - Cookbook sre.hosts.reimage for host netboxdb2003.codfw.wmnet with OS bookworm [production]
12:30 <ayounsi@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM netboxdb2003.codfw.wmnet - ayounsi@cumin2002" [production]
12:29 <ayounsi@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM netboxdb2003.codfw.wmnet - ayounsi@cumin2002" [production]
12:28 <ayounsi@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) netboxdb2003.codfw.wmnet on all recursors [production]