2021-02-26
10:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 85%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14504 and previous config saved to /var/cache/conftool/dbconfig/20210226-100750-root.json [production]
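Note: the db1169 entries running through this page trace a staged repool after cloning db1134, ramping traffic from 1% up to 85% in steps, with each step committed via dbctl and its diff archived to Phabricator. A minimal sketch of a single step, assuming dbctl's documented instance/config subcommands (the exact flag spellings are an assumption; the percentage and message come from the log entry above):

    # Sketch of one repool step; flag spellings assumed.
    dbctl instance db1169 pool -p 85
    dbctl config commit -m 'db1169 (re)pooling @ 85%: Repool db1169 after cloning db1134'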
10:06 <aborrero@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudweb2001-dev.wikimedia.org [production]
10:05 <dcaro@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudcephosd2002-dev.codfw.wmnet [production]
10:05 <dcaro> [codfw1dev] rebooting cloudcephosd2002-dev for kernel upgrade (T275753) [admin]
10:01 <arturo> [codfw1dev] rebooting cloudvirt200X-dev for kernel upgrade (T275753) [admin]
09:59 <arturo> [codfw1dev] rebooting cloudweb2001-dev for kernel upgrade (T275753) [admin]
09:59 <aborrero@cumin2001> START - Cookbook sre.hosts.reboot-single for host cloudweb2001-dev.wikimedia.org [production]
09:59 <aborrero@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudservices2003-dev.wikimedia.org [production]
09:55 <aborrero@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudservices2002-dev.wikimedia.org [production]
09:54 <dcaro@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudcephosd2001-dev.codfw.wmnet [production]
09:53 <arturo> [codfw1dev] rebooting cloudservices2003-dev for kernel upgrade (T275753) [admin]
09:52 <aborrero@cumin2001> START - Cookbook sre.hosts.reboot-single for host cloudservices2003-dev.wikimedia.org [production]
09:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 75%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14503 and previous config saved to /var/cache/conftool/dbconfig/20210226-095247-root.json [production]
09:51 <elukey> reimaged analytics1058 to debian buster (preserving datanode partitions) [analytics]
09:51 <arturo> [codfw1dev] rebooting cloudservices2002-dev for kernel upgrade (T275753) [admin]
09:50 <aborrero@cumin2001> START - Cookbook sre.hosts.reboot-single for host cloudservices2002-dev.wikimedia.org [production]
09:50 <dcaro@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudcephosd2001-dev.codfw.wmnet [production]
09:48 <dcaro@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudcephosd2001-dev.codfw.wmnet [production]
09:45 <arturo> [codfw1dev] rebooting cloudcontrol2004-dev for kernel upgrade (T275753) [admin]
09:44 <arturo> [codfw1dev] rebooting cloudbackup[2001-2002].codfw.wmnet for kernel upgrade (T275753) [admin]
09:43 <dcaro> [codfw1dev] rebooting cloudcephosd2001-dev for kernel upgrade (T275753) [admin]
09:43 <dcaro@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudcephosd2001-dev.codfw.wmnet [production]
09:41 <arturo> [codfw1dev] rebooting cloudcontrol2003-dev for kernel upgrade (T275753) [admin]
09:41 <aborrero@cumin2001> END (ERROR) - Cookbook sre.hosts.reboot-single (exit_code=97) for host cloudcontrol2001-dev.wikimedia.org [production]
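Note: the reboot-single entries here are Spicerack cookbooks driven from the cumin hosts; a non-zero exit_code such as 97 marks a failed run (this reboot of cloudcontrol2001-dev, started at 09:33 below, did not pass). A hedged sketch of the invocation, assuming the target host is the only required argument:

    # Hypothetical invocation; only the cookbook name and host come from the log.
    sudo cookbook sre.hosts.reboot-single cloudcontrol2001-dev.wikimedia.org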
09:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 65%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14502 and previous config saved to /var/cache/conftool/dbconfig/20210226-093743-root.json [production]
09:33 <arturo> [codfw1dev] rebooting cloudcontrol2001-dev for kernel upgrade (T275753) [admin]
09:33 <aborrero@cumin2001> START - Cookbook sre.hosts.reboot-single for host cloudcontrol2001-dev.wikimedia.org [production]
09:28 <root@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:24 <root@cumin1001> START - Cookbook sre.dns.netbox [production]
09:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 50%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14501 and previous config saved to /var/cache/conftool/dbconfig/20210226-092240-root.json [production]
09:13 <jbond42> puppet enabled post sudoers fix, running puppet fleet wide with cumin -b 15 '*' 'run-puppet-agent ' [production]
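Note: together with the 08:55 entry below, this is the usual disable/fix/re-enable cycle around a bad change: puppet is paused fleet-wide while the offending commit is rolled back, then re-enabled and a catch-up run is pushed out in batches of 15 hosts (-b 15). A sketch of the surrounding commands, assuming the standard disable-puppet/enable-puppet wrappers that take a matching reason string:

    # Hypothetical sketch; the wrapper names and reason string are assumed,
    # only the batched run-puppet-agent invocation appears in the log.
    sudo cumin '*' 'disable-puppet "rollback of 666899 pending"'
    # ...revert merged and deployed...
    sudo cumin '*' 'enable-puppet "rollback of 666899 pending"'
    sudo cumin -b 15 '*' 'run-puppet-agent'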
09:09 <dcaro> Playing around with cookbooks by adding/removing etcd nodes, etcd might misbehave from time to time (T274497) [toolsbeta]
09:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 40%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14500 and previous config saved to /var/cache/conftool/dbconfig/20210226-090736-root.json [production]
08:55 <jbond42> disabled puppet pending rollback of https://gerrit.wikimedia.org/r/666899 [production]
08:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 25%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14498 and previous config saved to /var/cache/conftool/dbconfig/20210226-085233-root.json [production]
08:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 15%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14497 and previous config saved to /var/cache/conftool/dbconfig/20210226-083729-root.json [production]
08:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 10%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14496 and previous config saved to /var/cache/conftool/dbconfig/20210226-082226-root.json [production]
08:19 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on analytics1058.eqiad.wmnet with reason: REIMAGE [production]
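Note: the downtime cookbook wraps an Icinga downtime around the reimage so alerts stay quiet for the stated window. A hedged sketch of the invocation behind this START/END pair, with the flag spellings assumed (the 2:00:00 duration, reason, and host come from the log):

    # Hypothetical flags; duration, reason and host are from the log entry.
    sudo cookbook sre.hosts.downtime --hours 2 --reason REIMAGE analytics1058.eqiad.wmnet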
08:17 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on analytics1058.eqiad.wmnet with reason: REIMAGE [production]
08:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 5%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14495 and previous config saved to /var/cache/conftool/dbconfig/20210226-080722-root.json [production]
08:04 <elukey> run ipmi mc reset cold for analytics1058 - mgmt responding to pings and ipmi, but not to ssh [production]
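Note: "mc reset cold" is a cold reset of the BMC itself, the usual remedy when the management controller still answers pings and IPMI but its ssh console has wedged. With ipmitool the equivalent would be (management hostname and credential handling assumed for illustration):

    # 'mc reset cold' is a standard ipmitool subcommand; the host, user, and -E
    # (password from the IPMI_PASSWORD env var) are assumptions.
    ipmitool -I lanplus -H analytics1058.mgmt.eqiad.wmnet -U root -E mc reset cold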
07:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 1%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14494 and previous config saved to /var/cache/conftool/dbconfig/20210226-075219-root.json [production]
07:50 <elukey> attempt to reimage analytics1058 (part of the cluster, not a new worker node) to Buster [analytics]
07:29 <elukey> added journalnode partition to all hadoop workers not having it in the Analytics cluster [analytics]
07:02 <marostegui> Stop MySQL on db2106 to clone db2147 T275633 [production]
07:01 <elukey> reboot an-worker1099 to clear out kernel soft lockup errors [analytics]
07:01 <elukey> reboot an-worker1099 to clear out kernel soft lockup errors [production]
06:59 <elukey> restart datanode on an-worker1099 - soft lockup kernel errors [production]
06:59 <elukey> restart datanode on an-worker1099 - soft lockup kernel errors [analytics]
06:53 <kartik@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/ContentTranslation: Bump ContentTranslation to e6b1a7c to include lost {{gerrit|666327}} backport (duration: 00m 58s) [production]