2024-01-09
13:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 50%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54573 and previous config saved to /var/cache/conftool/dbconfig/20240109-130317-root.json [production]
13:00 <hnowlan@deploy2002> helmfile [eqiad] [main] DONE helmfile.d/services/mw-jobrunner : sync [production]
13:00 <hnowlan@deploy2002> helmfile [eqiad] [main] START helmfile.d/services/mw-jobrunner : sync [production]
12:58 <stevemunene@cumin1002> START - Cookbook sre.hadoop.roll-restart-masters restart masters for Hadoop analytics cluster: Restart of jvm daemons. [production]
12:57 <hnowlan@deploy2002> helmfile [codfw] [main] DONE helmfile.d/services/mw-jobrunner : sync [production]
12:57 <hnowlan@deploy2002> helmfile [codfw] [main] START helmfile.d/services/mw-jobrunner : sync [production]
12:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 25%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54572 and previous config saved to /var/cache/conftool/dbconfig/20240109-124812-root.json [production]
12:43 <moritzm> imported mwbzutils 0.1.4~wmf-1+deb11u1 for bullseye-wikimedia T325228 [production]
12:43 <cgoubert@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on mw[1380-1382].eqiad.wmnet with reason: failed reimage waiting on fix [production]
12:42 <cgoubert@cumin2002> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on mw[1380-1382].eqiad.wmnet with reason: failed reimage waiting on fix [production]
12:39 <btullis@cumin1002> START - Cookbook sre.presto.roll-restart-workers for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
12:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 10%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54571 and previous config saved to /var/cache/conftool/dbconfig/20240109-123307-root.json [production]
12:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 5%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54570 and previous config saved to /var/cache/conftool/dbconfig/20240109-121802-root.json [production]
12:17 <stevemunene@cumin1002> END (PASS) - Cookbook sre.hadoop.roll-restart-masters (exit_code=0) restart masters for Hadoop test cluster: Restart of jvm daemons. [production]
12:10 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_esams and A:cp [production]
12:07 <taavi@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
12:07 <taavi@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: remove wiki replica LVS VIPs - taavi@cumin1002" [production]
12:06 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1180.eqiad.wmnet with OS bookworm [production]
12:06 <taavi@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: remove wiki replica LVS VIPs - taavi@cumin1002" [production]
12:04 <taavi@cumin1002> START - Cookbook sre.dns.netbox [production]
12:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 1%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54569 and previous config saved to /var/cache/conftool/dbconfig/20240109-120257-root.json [production]
12:01 <btullis@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-mirror-maker (exit_code=0) restart MirrorMaker for Kafka A:kafka-mirror-maker-jumbo-eqiad cluster: Roll restart of jvm daemons. [production]
11:50 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:50 <cmooney@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Update dns entry for kubestage2002.codfw.wmnet - cmooney@cumin1002" [production]
11:50 <stevemunene@cumin1002> START - Cookbook sre.hadoop.roll-restart-masters restart masters for Hadoop test cluster: Restart of jvm daemons. [production]
11:50 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-upload_esams and A:cp [production]
11:49 <cmooney@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Update dns entry for kubestage2002.codfw.wmnet - cmooney@cumin1002" [production]
11:46 <cmooney@cumin1002> START - Cookbook sre.dns.netbox [production]
11:46 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1180.eqiad.wmnet with reason: host reimage [production]
11:43 <btullis@cumin1002> START - Cookbook sre.kafka.roll-restart-mirror-maker restart MirrorMaker for Kafka A:kafka-mirror-maker-jumbo-eqiad cluster: Roll restart of jvm daemons. [production]
11:42 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1180.eqiad.wmnet with reason: host reimage [production]
11:38 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-text_drmrs and A:cp [production]
11:37 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on lsw1-b8-codfw,lsw1-b8-codfw IPv6 with reason: Adding vlan to switch, precaution in case it triggers EVPN L3 bug. [production]
11:37 <btullis@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-jumbo-eqiad [production]
11:37 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on lsw1-b8-codfw,lsw1-b8-codfw IPv6 with reason: Adding vlan to switch, precaution in case it triggers EVPN L3 bug. [production]
11:35 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on snapshot1014.eqiad.wmnet with reason: host reimage [production]
11:32 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on snapshot1014.eqiad.wmnet with reason: host reimage [production]
11:30 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db1180.eqiad.wmnet with OS bookworm [production]
11:30 <cgoubert@cumin2002> conftool action : set/pooled=yes; selector: name=mw2394.codfw.wmnet,cluster=jobrunner [production]
11:29 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1180 T354506', diff saved to https://phabricator.wikimedia.org/P54568 and previous config saved to /var/cache/conftool/dbconfig/20240109-112922-root.json [production]
11:22 <cgoubert@cumin2002> conftool action : set/pooled=no; selector: name=mw2394.codfw.wmnet [production]
11:19 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host snapshot1014.eqiad.wmnet with OS bullseye [production]
11:19 <hnowlan@deploy2002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
11:19 <hnowlan@deploy2002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
11:18 <hnowlan@deploy2002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
11:18 <hnowlan@deploy2002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
11:17 <hnowlan@deploy2002> helmfile [staging] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
11:17 <hnowlan@deploy2002> helmfile [staging] START helmfile.d/services/changeprop-jobqueue: apply [production]
11:15 <taavi@cumin1002> conftool action : set/pooled=yes; selector: name=clouddb1013.eqiad.wmnet,service=s3 [production]
11:14 <taavi@cumin1002> conftool action : set/pooled=no; selector: name=clouddb1013.eqiad.wmnet,service=s3 [production]