2023-01-10
17:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 75%: Maint over', diff saved to https://phabricator.wikimedia.org/P42953 and previous config saved to /var/cache/conftool/dbconfig/20230110-171457-ladsgroup.json [production]
17:14 <otto@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
17:10 <otto@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
17:03 <ayounsi@deploy1002> deploy aborted: netbox-next to 3.2.9 (duration: 00m 07s) [production]
17:03 <ayounsi@deploy1002> Started deploy [netbox/deploy@ef7451d]: netbox-next to 3.2.9 [production]
16:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 25%: Maint over', diff saved to https://phabricator.wikimedia.org/P42952 and previous config saved to /var/cache/conftool/dbconfig/20230110-165952-ladsgroup.json [production]
16:54 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 100%: After the incident', diff saved to https://phabricator.wikimedia.org/P42951 and previous config saved to /var/cache/conftool/dbconfig/20230110-165406-root.json [production]
16:48 <bblack> depooling eqsin from DNS [production]
16:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 10%: Maint over', diff saved to https://phabricator.wikimedia.org/P42950 and previous config saved to /var/cache/conftool/dbconfig/20230110-164447-ladsgroup.json [production]
16:39 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 75%: After the incident', diff saved to https://phabricator.wikimedia.org/P42949 and previous config saved to /var/cache/conftool/dbconfig/20230110-163901-root.json [production]
16:36 <jayme@cumin1001> END (FAIL) - Cookbook sre.ganeti.reimage (exit_code=99) for host kubestagetcd2003.codfw.wmnet with OS bullseye [production]
16:24 <jayme@cumin1001> END (FAIL) - Cookbook sre.ganeti.reimage (exit_code=99) for host kubestagetcd2001.codfw.wmnet with OS bullseye [production]
16:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 50%: After the incident', diff saved to https://phabricator.wikimedia.org/P42948 and previous config saved to /var/cache/conftool/dbconfig/20230110-162356-root.json [production]
16:23 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestagetcd2003.codfw.wmnet with reason: host reimage [production]
16:21 <jayme@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestagetcd2003.codfw.wmnet with reason: host reimage [production]
16:14 <jayme@cumin1001> END (FAIL) - Cookbook sre.ganeti.reimage (exit_code=99) for host kubestagetcd2002.codfw.wmnet with OS bullseye [production]
16:10 <otto@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
16:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 25%: After the incident', diff saved to https://phabricator.wikimedia.org/P42947 and previous config saved to /var/cache/conftool/dbconfig/20230110-160851-root.json [production]
16:08 <otto@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
16:08 <jayme@cumin1001> START - Cookbook sre.ganeti.reimage for host kubestagetcd2003.codfw.wmnet with OS bullseye [production]
16:04 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestagetcd2002.codfw.wmnet with reason: host reimage [production]
16:04 <jelto@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:15:00 on gitlab1004.wikimedia.org with reason: upgrade gitlab1004 to new version [production]
16:03 <jelto@cumin1001> START - Cookbook sre.hosts.downtime for 0:15:00 on gitlab1004.wikimedia.org with reason: upgrade gitlab1004 to new version [production]
16:01 <jayme@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestagetcd2002.codfw.wmnet with reason: host reimage [production]
15:59 <SandraEbele> reran failed pageview-druid-hourly-coord oozie job for 2023-1-10-10. [production]
15:59 <otto@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
15:58 <otto@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
15:55 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for mw[1373,1384-1385,1387].eqiad.wmnet [production]
15:55 <cgoubert@cumin1001> START - Cookbook sre.hosts.remove-downtime for mw[1373,1384-1385,1387].eqiad.wmnet [production]
15:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 10%: After the incident', diff saved to https://phabricator.wikimedia.org/P42946 and previous config saved to /var/cache/conftool/dbconfig/20230110-155346-root.json [production]
15:52 <jayme@cumin1001> START - Cookbook sre.ganeti.reimage for host kubestagetcd2002.codfw.wmnet with OS bullseye [production]
15:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 5%: After the incident', diff saved to https://phabricator.wikimedia.org/P42945 and previous config saved to /var/cache/conftool/dbconfig/20230110-153841-root.json [production]
15:35 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc2051.codfw.wmnet [production]
15:30 <cgoubert@cumin1001> START - Cookbook sre.hosts.reboot-cluster [production]
15:29 <claime> Restarting rolling reboots of eqiad appservers [production]
15:28 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc2051.codfw.wmnet [production]
15:25 <otto@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
15:25 <otto@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
15:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1143 (re)pooling @ 1%: After the incident', diff saved to https://phabricator.wikimedia.org/P42944 and previous config saved to /var/cache/conftool/dbconfig/20230110-152336-root.json [production]
15:21 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host search-loader2001.codfw.wmnet [production]
15:17 <bking@cumin1001> START - Cookbook sre.hosts.reboot-single for host search-loader2001.codfw.wmnet [production]
15:14 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestagetcd2001.codfw.wmnet with reason: host reimage [production]
15:11 <jayme@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestagetcd2001.codfw.wmnet with reason: host reimage [production]
15:09 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc2050.codfw.wmnet [production]
15:02 <jayme@cumin1001> START - Cookbook sre.ganeti.reimage for host kubestagetcd2001.codfw.wmnet with OS bullseye [production]
15:01 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mc2037.codfw.wmnet [production]
15:01 <jiji@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:01 <jiji@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: mc2037.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jiji@cumin1001" [production]
14:56 <XioNoX> start VC link maintenance in eqiad - T325803 [production]
14:55 <jayme@cumin1001> END (FAIL) - Cookbook sre.ganeti.reimage (exit_code=99) for host kubestagetcd2001.codfw.wmnet with OS bullseye [production]