2023-06-07
16:15 <sukhe@cumin2002> START - Cookbook sre.hosts.reboot-single for host cp3050.esams.wmnet [production]
16:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314 (T336886)', diff saved to https://phabricator.wikimedia.org/P49129 and previous config saved to /var/cache/conftool/dbconfig/20230607-161416-ladsgroup.json [production]
16:13 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['lvs2014'] [production]
16:12 <pt1979@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['lvs2014'] [production]
16:12 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['lvs2014'] [production]
16:12 <pt1979@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['lvs2014'] [production]
16:11 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['lvs2014'] [production]
16:09 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1144:3314 (T336886)', diff saved to https://phabricator.wikimedia.org/P49128 and previous config saved to /var/cache/conftool/dbconfig/20230607-160912-ladsgroup.json [production]
16:09 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
16:08 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
16:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T336886)', diff saved to https://phabricator.wikimedia.org/P49127 and previous config saved to /var/cache/conftool/dbconfig/20230607-160851-ladsgroup.json [production]
16:07 <jiji@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
16:04 <jbond@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jbond@cumin2002" [production]
16:02 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host lvs2014.mgmt.codfw.wmnet with reboot policy FORCED [production]
16:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P49126 and previous config saved to /var/cache/conftool/dbconfig/20230607-160234-ladsgroup.json [production]
16:00 <jmm@cumin2002> END (ERROR) - Cookbook sre.hosts.reboot-single (exit_code=97) for host lists1003.wikimedia.org [production]
15:57 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:56 <urandom> Beginning (3 hour) generated traffic testing of sessionstore.svc.codfw.wmnet - T337426 [production]
15:56 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:53 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P49125 and previous config saved to /var/cache/conftool/dbconfig/20230607-155345-ladsgroup.json [production]
15:52 <urandom> Upgrading Cassandra to 4.1.1, sessionstore2003 - T337426 [production]
15:51 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host lists1003.wikimedia.org [production]
15:50 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host testvm2005.codfw.wmnet [production]
15:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P49124 and previous config saved to /var/cache/conftool/dbconfig/20230607-154727-ladsgroup.json [production]
15:47 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host testvm2005.codfw.wmnet [production]
15:44 <urandom> Upgrading Cassandra to 4.1.1, sessionstore2002 - T337426 [production]
15:43 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host lvs2014.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:42 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:42 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add DNS entry for lvs2014 - pt1979@cumin2002" [production]
15:41 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add DNS entry for lvs2014 - pt1979@cumin2002" [production]
15:40 <jbond@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on puppetserver2001.codfw.wmnet with reason: host reimage [production]
15:39 <moritzm> installing isc-dhcp bugfixes updates from Bullseye 11.7 point release [production]
15:38 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P49123 and previous config saved to /var/cache/conftool/dbconfig/20230607-153839-ladsgroup.json [production]
15:37 <jbond@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on puppetserver2001.codfw.wmnet with reason: host reimage [production]
15:34 <pt1979@cumin2002> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
15:33 <jiji@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
15:33 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T336886)', diff saved to https://phabricator.wikimedia.org/P49122 and previous config saved to /var/cache/conftool/dbconfig/20230607-153221-ladsgroup.json [production]
15:26 <moritzm> rolling restart of FPM on mw canaries to pick up libwebp security updates [production]
15:26 <pt1979@cumin2002> END (ERROR) - Cookbook sre.dns.netbox (exit_code=97) [production]
15:26 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T336886)', diff saved to https://phabricator.wikimedia.org/P49121 and previous config saved to /var/cache/conftool/dbconfig/20230607-152456-ladsgroup.json [production]
15:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
15:24 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312 (T336886)', diff saved to https://phabricator.wikimedia.org/P49120 and previous config saved to /var/cache/conftool/dbconfig/20230607-152425-ladsgroup.json [production]
15:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T336886)', diff saved to https://phabricator.wikimedia.org/P49119 and previous config saved to /var/cache/conftool/dbconfig/20230607-152333-ladsgroup.json [production]
15:23 <elukey> all varnishkafka instances on caching nodes are getting restarted due to https://gerrit.wikimedia.org/r/c/operations/puppet/+/928087 - T337825 [production]
15:22 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:22 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]