2024-06-11 §
12:00 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti4005.ulsfo.wmnet [production]
12:00 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti4005.ulsfo.wmnet [production]
11:57 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1240.eqiad.wmnet with reason: repl issues [production]
11:57 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1240.eqiad.wmnet with reason: repl issues [production]
11:57 <sfaci@deploy1002> helmfile [eqiad] DONE helmfile.d/services/page-analytics: apply [production]
11:55 <sfaci@deploy1002> helmfile [eqiad] START helmfile.d/services/page-analytics: apply [production]
11:55 <sfaci@deploy1002> helmfile [codfw] DONE helmfile.d/services/page-analytics: apply [production]
11:55 <btullis@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: stat1005.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - btullis@cumin1002" [production]
11:54 <sfaci@deploy1002> helmfile [codfw] START helmfile.d/services/page-analytics: apply [production]
11:52 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1206', diff saved to https://phabricator.wikimedia.org/P64624 and previous config saved to /var/cache/conftool/dbconfig/20240611-115203-ladsgroup.json [production]
11:51 <jayme> removed similar-users deployments from all k8s clusters - T345274 [production]
11:36 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1206', diff saved to https://phabricator.wikimedia.org/P64621 and previous config saved to /var/cache/conftool/dbconfig/20240611-113656-ladsgroup.json [production]
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2150 (T364069)', diff saved to https://phabricator.wikimedia.org/P64620 and previous config saved to /var/cache/conftool/dbconfig/20240611-113452-marostegui.json [production]
11:34 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2150.codfw.wmnet with reason: Maintenance [production]
11:34 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2150.codfw.wmnet with reason: Maintenance [production]
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2122 (T364069)', diff saved to https://phabricator.wikimedia.org/P64619 and previous config saved to /var/cache/conftool/dbconfig/20240611-113430-marostegui.json [production]
11:32 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1030.eqiad.wmnet [production]
11:31 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P64618 and previous config saved to /var/cache/conftool/dbconfig/20240611-113121-root.json [production]
11:29 <moritzm> failover ganeti master in ulsfo to ganeti4008 [production]
11:27 <btullis@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: stat1004.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - btullis@cumin1002" [production]
11:26 <klausman@deploy1002> helmfile [ml-serve-eqiad] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
11:24 <klausman@deploy1002> helmfile [ml-serve-eqiad] 'sync' command on namespace 'recommendation-api-ng' for release 'main' . [production]
11:23 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti4008.ulsfo.wmnet [production]
11:23 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti4008.ulsfo.wmnet [production]
11:23 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
11:22 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
11:21 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1206 (T352010)', diff saved to https://phabricator.wikimedia.org/P64617 and previous config saved to /var/cache/conftool/dbconfig/20240611-112149-ladsgroup.json [production]
11:21 <klausman@deploy1002> helmfile [ml-serve-codfw] 'sync' command on namespace 'recommendation-api-ng' for release 'main' . [production]
11:19 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2122', diff saved to https://phabricator.wikimedia.org/P64616 and previous config saved to /var/cache/conftool/dbconfig/20240611-111922-marostegui.json [production]
11:16 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti4008.ulsfo.wmnet [production]
11:16 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P64615 and previous config saved to /var/cache/conftool/dbconfig/20240611-111616-root.json [production]
11:15 <klausman@deploy1002> helmfile [ml-serve-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
11:13 <jayme> removing similar-users service - T345274 [production]
11:12 <btullis@cumin1002> START - Cookbook sre.dns.netbox [production]
11:09 <fnegri@cumin1002> conftool action : set/pooled=yes; selector: name=clouddb1015.eqiad.wmnet,service=s4 [production]
11:09 <fnegri@cumin1002> conftool action : set/pooled=yes; selector: name=clouddb1015.eqiad.wmnet,service=s6 [production]
11:09 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti4008.ulsfo.wmnet [production]
11:07 <fnegri@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host clouddb1015.eqiad.wmnet [production]
11:07 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti4007.ulsfo.wmnet [production]
11:07 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti4007.ulsfo.wmnet [production]
11:06 <cgoubert@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling reboot on A:kafka-main-codfw [production]
11:05 <klausman@deploy1002> helmfile [ml-staging-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
11:05 <claime> Starting kafka-main reboots in codfw [production]
11:04 <btullis@cumin1002> START - Cookbook sre.hosts.decommission for hosts stat1004.eqiad.wmnet [production]
11:04 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2122', diff saved to https://phabricator.wikimedia.org/P64614 and previous config saved to /var/cache/conftool/dbconfig/20240611-110414-marostegui.json [production]
11:00 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti4007.ulsfo.wmnet [production]
10:57 <jayme@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
10:57 <klausman@deploy1002> helmfile [ml-staging-codfw] 'sync' command on namespace 'recommendation-api-ng' for release 'main' . [production]
10:50 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti4007.ulsfo.wmnet [production]
10:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2122 (T364069)', diff saved to https://phabricator.wikimedia.org/P64613 and previous config saved to /var/cache/conftool/dbconfig/20240611-104908-marostegui.json [production]