2021-03-05
16:22 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudvirt1036.eqiad.wmnet [production]
16:22 <arturo> briefly rebooting traffic-cache-atsupload-buster because of the reboot of the hypervisor cloudvirt1036 [traffic]
16:17 <razzi@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts labsdb1012.eqiad.wmnet [production]
16:11 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1086.eqiad.wmnet with reason: REIMAGE [production]
16:09 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1086.eqiad.wmnet with reason: REIMAGE [production]
16:08 <razzi> sudo cookbook sre.hosts.decommission labsdb1012.eqiad.wmnet -t T269211 [analytics]
16:07 <razzi@cumin1001> START - Cookbook sre.hosts.decommission for hosts labsdb1012.eqiad.wmnet [production]
15:56 <razzi> stop mariadb on labsdb1012 to reimage and rename to clouddb1021: T269211 [production]
15:52 <razzi> stop mariadb on labsdb1012 [analytics]
15:39 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on analytics1073.eqiad.wmnet with reason: REIMAGE [production]
15:39 <razzi> rebalance kafka partitions for webrequest_upload partition 10 [analytics]
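(For context, a partition rebalance like this is normally done with Kafka's stock reassignment tool; a rough sketch, where the broker IDs, file name, and zookeeper connection string are illustrative assumptions, not the values actually used here:
    # reassign.json (hypothetical broker IDs):
    # {"version":1,"partitions":[{"topic":"webrequest_upload","partition":10,"replicas":[1001,1002,1003]}]}
    kafka-reassign-partitions.sh --zookeeper zk1:2181/kafka --reassignment-json-file reassign.json --execute
    kafka-reassign-partitions.sh --zookeeper zk1:2181/kafka --reassignment-json-file reassign.json --verify
)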
15:38 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:37 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on analytics1073.eqiad.wmnet with reason: REIMAGE [production]
15:29 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
15:07 <elukey> drain + reimage analytics1073 and an-worker1086 to Debian Buster [production]
15:07 <elukey> drain + reimage analytics1073 and an-worker1086 to Debian Buster [analytics]
14:24 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:20 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
13:59 <elukey@cumin1001> END (FAIL) - Cookbook sre.hadoop.roll-restart-masters (exit_code=99) [production]
13:52 <marostegui> Rebuild some indexes on db2102 [production]
13:40 <Majavah> create deployment-etcd02 and sign its puppet certificate T276462 [releng]
13:38 <elukey@cumin1001> START - Cookbook sre.hadoop.roll-restart-masters [production]
13:38 <marostegui@cumin1001> dbctl commit (dc=all): 'DEpool db1134', diff saved to https://phabricator.wikimedia.org/P14644 and previous config saved to /var/cache/conftool/dbconfig/20210305-133833-marostegui.json [production]
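(A dbctl depool is roughly a two-step operation, sketched below; the host name is taken from the entry, the commit message wording is illustrative:
    dbctl instance db1134 depool              # mark the instance as depooled
    dbctl config commit -m "Depool db1134"    # generate and commit the new database config
)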
13:36 <elukey> roll restart HDFS Namenodes for the Hadoop cluster to pick up new Xmx settings (https://gerrit.wikimedia.org/r/c/operations/puppet/+/668659) [analytics]
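(The Xmx settings referenced here live in the NameNode's JVM options; a minimal sketch of picking them up, assuming Bigtop/CDH-style service names and a hypothetical 64g heap:
    # hadoop-env.sh (managed by puppet; heap value illustrative)
    export HADOOP_NAMENODE_OPTS="-Xmx64g ${HADOOP_NAMENODE_OPTS}"
    sudo systemctl restart hadoop-hdfs-namenode   # restart the standby first, then fail over and restart the other
)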
13:24 <marostegui> Check tables on db1134 [production]
13:13 <Majavah> move profile::etcd::cluster_name hiera key from deployment-etcd prefix to deployment-etcd-01 vm specific [releng]
12:31 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudvirt1035.eqiad.wmnet [production]
12:30 <arturo> draining cloudvirt1036 for T275753 [admin]
12:30 <arturo> started tools-redis-1004 again [tools]
12:25 <arturo> rebooting cloudvirt1035 for T275753 [admin]
12:24 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudvirt1035.eqiad.wmnet [production]
12:22 <arturo> stop tools-redis-1004 to ease draining of cloudvirt1035 [tools]
11:48 <Majavah> live hack beta puppetmaster to hopefully fix the trust store location; T276521 and possibly others [releng]
11:28 <marostegui> Temporarily set innodb_change_buffering = none on db1134 (s1) - T263443 [production]
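(innodb_change_buffering is a dynamic variable in MariaDB, so the change can be applied without a restart; a sketch:
    sudo mysql -e "SET GLOBAL innodb_change_buffering = 'none';"   # takes effect immediately, not persisted across restarts
    sudo mysql -e "SELECT @@GLOBAL.innodb_change_buffering;"       # verify
)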
11:09 <marostegui> Run check table on db2092, db2116, db2145, db2146 (there will be lag) [production]
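(The lag warning follows from CHECK TABLE being run directly on the replicas: while it scans a table, replication effectively stalls behind it. A sketch against one of the hosts, with the database and table names as hypothetical examples:
    sudo mysql enwiki -e "CHECK TABLE revision EXTENDED;"   # run locally on db2092, db2116, db2145, db2146 in turn
)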
11:01 <Majavah> restart both bots as they had disconnected from freenode [tools.stewardbots]
10:54 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudvirt1034.eqiad.wmnet [production]
10:49 <arturo> rebooting cloudvirt1035 for T275753 [admin]
10:47 <arturo> rebooting cloudvirt1034 for T275753 [admin]
10:47 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudvirt1034.eqiad.wmnet [production]
10:43 <jakob@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'termbox' for release 'production'. [production]
10:38 <jakob@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'termbox' for release 'production'. [production]
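(These deploy1002 entries correspond to helmfile runs against each datacenter; a rough equivalent of what the tooling executes, with the chart directory path assumed:
    cd /srv/deployment-charts/helmfile.d/services/termbox   # path assumed
    helmfile -e eqiad -l name=production sync                # likewise with -e codfw
)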
10:32 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudvirt1033.eqiad.wmnet [production]
10:26 <arturo> draining cloudvirt1034 for T275753 [admin]
10:25 <arturo> rebooting cloudvirt1033 for T275753 [admin]
10:25 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudvirt1033.eqiad.wmnet [production]
10:20 <elukey> force run of refinery-druid-drop-public-snapshots to check the Druid public cluster's performance [analytics]
10:06 <elukey> failover HDFS Namenode from 1002 back to 1001 (high GC pauses on 1001 had triggered the HDFS zkfc daemon and a failover to 1002) [analytics]
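(Failing the active NameNode back is done with the HDFS HA admin tool; the NameNode service IDs below are assumptions, and the argument order is "from" then "to":
    sudo -u hdfs hdfs haadmin -failover an-master1002-eqiad-wmnet an-master1001-eqiad-wmnet   # service IDs assumed
    sudo -u hdfs hdfs haadmin -getServiceState an-master1001-eqiad-wmnet                      # expect: active
)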
10:03 <wm-bot> <rhinosf1> cleanup used stuff [tools.zppixbot-test]
10:01 <wm-bot> <rhinosf1> clean a bunch of junk up [tools.zppixbot]