2024-01-09
14:01 <ayounsi@cumin1002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=93) for host ganeti2034.codfw.wmnet with OS bookworm [production]
13:58 <ayounsi@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ganeti2033.codfw.wmnet with OS bookworm [production]
13:58 <ayounsi@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - ayounsi@cumin1002" [production]
13:56 <ayounsi@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - ayounsi@cumin1002" [production]
13:53 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host snapshot1014.eqiad.wmnet [production]
13:43 <jmm@cumin2002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host snapshot1014.eqiad.wmnet with OS bullseye [production]
13:41 <ayounsi@cumin1002> START - Cookbook sre.hosts.reimage for host ganeti2034.codfw.wmnet with OS bookworm [production]
13:37 <ayounsi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ganeti2033.codfw.wmnet with reason: host reimage [production]
13:34 <ayounsi@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on ganeti2033.codfw.wmnet with reason: host reimage [production]
13:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 100%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54575 and previous config saved to /var/cache/conftool/dbconfig/20240109-133327-root.json [production]
13:20 <jgiannelos@deploy2002> helmfile [staging] DONE helmfile.d/services/wikifeeds: apply [production]
13:18 <jgiannelos@deploy2002> helmfile [staging] START helmfile.d/services/wikifeeds: apply [production]
13:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 75%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54574 and previous config saved to /var/cache/conftool/dbconfig/20240109-131822-root.json [production]
13:16 <jgiannelos@deploy2002> helmfile [staging] START helmfile.d/services/wikifeeds: apply [production]
13:14 <ayounsi@cumin1002> START - Cookbook sre.hosts.reimage for host ganeti2033.codfw.wmnet with OS bookworm [production]
13:13 <stevemunene@cumin1002> END (FAIL) - Cookbook sre.hadoop.roll-restart-masters (exit_code=99) restart masters for Hadoop analytics cluster: Restart of jvm daemons. [production]
13:10 <btullis@cumin1002> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
13:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 50%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54573 and previous config saved to /var/cache/conftool/dbconfig/20240109-130317-root.json [production]
13:00 <hnowlan@deploy2002> helmfile [eqiad] [main] DONE helmfile.d/services/mw-jobrunner : sync [production]
13:00 <hnowlan@deploy2002> helmfile [eqiad] [main] START helmfile.d/services/mw-jobrunner : sync [production]
12:58 <stevemunene@cumin1002> START - Cookbook sre.hadoop.roll-restart-masters restart masters for Hadoop analytics cluster: Restart of jvm daemons. [production]
12:57 <hnowlan@deploy2002> helmfile [codfw] [main] DONE helmfile.d/services/mw-jobrunner : sync [production]
12:57 <hnowlan@deploy2002> helmfile [codfw] [main] START helmfile.d/services/mw-jobrunner : sync [production]
12:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 25%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54572 and previous config saved to /var/cache/conftool/dbconfig/20240109-124812-root.json [production]
12:43 <moritzm> imported mwbzutils 0.1.4~wmf-1+deb11u1 for bullseye-wikimedia T325228 [production]
12:43 <cgoubert@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on mw[1380-1382].eqiad.wmnet with reason: failed reimage waiting on fix [production]
12:42 <cgoubert@cumin2002> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on mw[1380-1382].eqiad.wmnet with reason: failed reimage waiting on fix [production]
12:39 <btullis@cumin1002> START - Cookbook sre.presto.roll-restart-workers for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
12:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 10%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54571 and previous config saved to /var/cache/conftool/dbconfig/20240109-123307-root.json [production]
12:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 5%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54570 and previous config saved to /var/cache/conftool/dbconfig/20240109-121802-root.json [production]
12:17 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hadoop.roll-restart-masters (exit_code=0) restart masters for Hadoop test cluster: Restart of jvm daemons. [production]
12:10 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_esams and A:cp [production]
12:07 <taavi@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
12:07 <taavi@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: remove wiki replica LVS VIPs - taavi@cumin1002" [production]
12:06 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1180.eqiad.wmnet with OS bookworm [production]
12:06 <taavi@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: remove wiki replica LVS VIPs - taavi@cumin1002" [production]
12:04 <taavi@cumin1002> START - Cookbook sre.dns.netbox [production]
12:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 1%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54569 and previous config saved to /var/cache/conftool/dbconfig/20240109-120257-root.json [production]
12:01 <btullis@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-mirror-maker (exit_code=0) restart MirrorMaker for Kafka A:kafka-mirror-maker-jumbo-eqiad cluster: Roll restart of jvm daemons. [production]
11:50 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:50 <cmooney@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Update dns entry for kubestage2002.codfw.wmnet - cmooney@cumin1002" [production]
11:50 <stevemunene@cumin1002> START - Cookbook sre.hadoop.roll-restart-masters restart masters for Hadoop test cluster: Restart of jvm daemons. [production]
11:50 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-upload_esams and A:cp [production]
11:49 <cmooney@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Update dns entry for kubestage2002.codfw.wmnet - cmooney@cumin1002" [production]
11:46 <cmooney@cumin1002> START - Cookbook sre.dns.netbox [production]
11:46 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1180.eqiad.wmnet with reason: host reimage [production]
11:43 <btullis@cumin1002> START - Cookbook sre.kafka.roll-restart-mirror-maker restart MirrorMaker for Kafka A:kafka-mirror-maker-jumbo-eqiad cluster: Roll restart of jvm daemons. [production]
11:42 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1180.eqiad.wmnet with reason: host reimage [production]
11:38 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-text_drmrs and A:cp [production]
11:37 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on lsw1-b8-codfw,lsw1-b8-codfw IPv6 with reason: Adding vlan to switch, precaution in case it triggers EVPN L3 bug. [production]