2021-03-05
ยง
|
17:07 <razzi> sudo -i wmf-auto-reimage-host -p T269211 clouddb1021.eqiad.wmnet --new [analytics]
16:58 <razzi@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:54 <razzi> sudo cookbook sre.dns.netbox -t T269211 "Reimage and rename labsdb1012 to clouddb1021" [analytics]
16:54 <effie> depool mw1276 and pool back [production]
16:53 <razzi@cumin1001> START - Cookbook sre.dns.netbox [production]
16:52 <razzi> run script at https://netbox.wikimedia.org/extras/scripts/interface_automation.ProvisionServerNetwork/ [analytics]
16:48 <razzi> edit https://netbox.wikimedia.org/dcim/devices/2078/ device name from labsdb1012 to clouddb1021 [production]
16:47 <razzi> edit https://netbox.wikimedia.org/dcim/devices/2078/ device name from labsdb1012 to clouddb1021 [analytics]
16:36 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudvirt1036.eqiad.wmnet [production]
16:30 <razzi> delete non-mgmt interfaces for labsdb1012 at https://netbox.wikimedia.org/dcim/devices/2078/interfaces/ [production]
16:30 <razzi> delete non-mgmt interfaces for labsdb1012 at https://netbox.wikimedia.org/dcim/devices/2078/interfaces/ [analytics]
16:28 <razzi> rename https://netbox.wikimedia.org/ipam/ip-addresses/734/ DNS name from labsdb1012.mgmt.eqiad.wmnet to clouddb1021.mgmt.eqiad.wmnet [production]
16:28 <razzi> rename https://netbox.wikimedia.org/ipam/ip-addresses/734/ DNS name from labsdb1012.mgmt.eqiad.wmnet to clouddb1021.mgmt.eqiad.wmnet [analytics]
16:23 <arturo> rebooting cloudvirt1036 for T275753 [admin]
16:22 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudvirt1036.eqiad.wmnet [production]
16:22 <arturo> briefly rebooting traffic-cache-atsupload-buster because of the reboot of its hypervisor cloudvirt1036 [traffic]
16:17 <razzi@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts labsdb1012.eqiad.wmnet [production]
16:11 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1086.eqiad.wmnet with reason: REIMAGE [production]
16:09 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1086.eqiad.wmnet with reason: REIMAGE [production]
16:08 <razzi> sudo cookbook sre.hosts.decommission labsdb1012.eqiad.wmnet -t T269211 [analytics]
16:07 <razzi@cumin1001> START - Cookbook sre.hosts.decommission for hosts labsdb1012.eqiad.wmnet [production]
15:56 <razzi> stop mariadb on labsdb1012 to reimage and rename to clouddb1021: T269211 [production]
15:52 <razzi> stop mariadb on labsdb1012 [analytics]
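The razzi entries between 15:52 and 17:07 above are the labsdb1012 to clouddb1021 rename and reimage for T269211, logged newest first. Reassembled in chronological order, the command-line part of the procedure looks roughly like this sketch; the systemctl line is an assumption (the log only says "stop mariadb"), and the Netbox renames, interface deletions and ProvisionServerNetwork run happened in the web UI rather than on the CLI.

  # Rough chronological reconstruction of the rename, taken from the entries above (T269211).
  sudo systemctl stop mariadb                               # on labsdb1012; unit name assumed
  sudo cookbook sre.hosts.decommission labsdb1012.eqiad.wmnet -t T269211
  # Netbox web UI: rename the mgmt DNS record and the device to clouddb1021,
  # delete the non-mgmt interfaces, run the ProvisionServerNetwork script.
  sudo cookbook sre.dns.netbox -t T269211 "Reimage and rename labsdb1012 to clouddb1021"
  sudo -i wmf-auto-reimage-host -p T269211 clouddb1021.eqiad.wmnet --new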
15:39 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on analytics1073.eqiad.wmnet with reason: REIMAGE [production]
15:39 <razzi> rebalance kafka partitions for webrequest_upload partition 10 [analytics]
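The 15:39 rebalance above is driven by a partition reassignment plan. A minimal sketch with the stock Kafka tooling follows; the broker ids 1001-1003 and the bootstrap address are placeholders rather than values from the log, and older Kafka releases take --zookeeper instead of --bootstrap-server.

  # Sketch only: reassign.json moves webrequest_upload partition 10 onto brokers
  # 1001,1002,1003 (hypothetical ids). Its content would be:
  #   {"version": 1,
  #    "partitions": [{"topic": "webrequest_upload", "partition": 10, "replicas": [1001, 1002, 1003]}]}
  kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassign.json --execute
  kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassign.json --verify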
15:38 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:37 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on analytics1073.eqiad.wmnet with reason: REIMAGE [production]
15:29 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
15:07 <elukey> drain + reimage analytics1073 and an-worker1086 to Debian Buster [production]
15:07 <elukey> drain + reimage analytics1073 and an-worker1086 to Debian Buster [analytics]
14:24 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:20 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
13:59 <elukey@cumin1001> END (FAIL) - Cookbook sre.hadoop.roll-restart-masters (exit_code=99) [production]
13:52 <marostegui> Rebuild some indexes on db2102 [production]
13:40 <Majavah> create deployment-etcd02 and sign its puppet certificate T276462 [releng]
13:38 <elukey@cumin1001> START - Cookbook sre.hadoop.roll-restart-masters [production]
13:38 <marostegui@cumin1001> dbctl commit (dc=all): 'DEpool db1134', diff saved to https://phabricator.wikimedia.org/P14644 and previous config saved to /var/cache/conftool/dbconfig/20210305-133833-marostegui.json [production]
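The 13:38 dbctl line above is the message dbctl itself writes to the log when a configuration commit is made. The operator-side invocation behind such a depool is roughly the sketch below; the exact subcommands are an assumption, not quoted from the log.

  # Sketch only; assumed dbctl calls behind the logged "DEpool db1134" commit.
  sudo dbctl instance db1134 depool
  sudo dbctl config commit -m "DEpool db1134"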
13:36 <elukey> roll restart HDFS Namenodes for the Hadoop cluster to pick up new Xmx settings (https://gerrit.wikimedia.org/r/c/operations/puppet/+/668659) [analytics]
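The 13:36 entry explains the 13:38 and 13:59 roll-restart-masters cookbook runs above: a NameNode only picks up a new Xmx after a restart, so the standby and active masters are restarted one at a time with a failover in between. A hand-rolled equivalent would look roughly like the sketch below; the systemd unit name and the nn1/nn2 NameNode ids are assumptions, and the cookbook automates these steps rather than literally running them.

  # Sketch only; unit name and NameNode ids (nn1, nn2) are assumptions.
  sudo systemctl restart hadoop-hdfs-namenode      # on the standby NameNode first
  hdfs haadmin -getServiceState nn2                # wait until it reports "standby"
  hdfs haadmin -failover nn1 nn2                   # make the freshly restarted node active
  sudo systemctl restart hadoop-hdfs-namenode      # then restart the former active NameNode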
13:24 <marostegui> Check tables on db1134 [production]
13:13 <Majavah> move the profile::etcd::cluster_name hiera key from the deployment-etcd prefix to VM-specific hiera for deployment-etcd-01 [releng]
12:31 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudvirt1035.eqiad.wmnet [production]
12:30 <arturo> draining cloudvirt1036 for T275753 [admin]
12:30 <arturo> started tools-redis-1004 again [tools]
12:25 <arturo> rebooting cloudvirt1035 for T275753 [admin]
12:24 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudvirt1035.eqiad.wmnet [production]
12:22 <arturo> stop tools-redis-1004 to ease draining of cloudvirt1035 [tools]
11:48 <Majavah> live hack beta puppetmaster to hopefully fix the trust store location; T276521 and possibly others [releng]
11:28 <marostegui> Temporarily set innodb_change_buffering = none on db1134 (s1) - T263443 [production]
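innodb_change_buffering is a dynamic global variable, so the 11:28 change above takes effect without a restart and can be reverted the same way. A minimal sketch, assuming it is run locally on db1134 with whatever client invocation is usual there:

  # Sketch only: note the current value, then disable the change buffer for T263443.
  sudo mysql -e "SELECT @@GLOBAL.innodb_change_buffering;"
  sudo mysql -e "SET GLOBAL innodb_change_buffering = 'none';"
  # revert later by setting it back to the previously noted value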
11:09 <marostegui> Run check table on db2092, db2116, db2145, db2146 (there will be lag) [production]
11:01 <Majavah> restart both bots as they had disconnected from freenode [tools.stewardbots]