2021-02-25
11:40 <marostegui> Stop MySQL on db1134 to reimage it to buster T275343 [production]
11:29 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:15:00 on dborch1001.wikimedia.org with reason: Restart for new kernel [production]
11:29 <kormat@cumin1001> START - Cookbook sre.hosts.downtime for 0:15:00 on dborch1001.wikimedia.org with reason: Restart for new kernel [production]
11:28 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host otrs1001.eqiad.wmnet [production]
11:22 <moritzm> reset-failed ifup@ens5.service on otrs1001 T273026 [production]
11:15 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single for host otrs1001.eqiad.wmnet [production]
11:15 <moritzm> rebooting otrs1001 (ticket.wikimedia.org) for a kernel update [production]
10:59 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.init-hadoop-workers (exit_code=0) for hosts an-worker[1117-1118].eqiad.wmnet [production]
10:57 <elukey@cumin1001> START - Cookbook sre.hadoop.init-hadoop-workers for hosts an-worker[1117-1118].eqiad.wmnet [production]
10:42 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1118.eqiad.wmnet with reason: REIMAGE [production]
10:40 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1117.eqiad.wmnet with reason: REIMAGE [production]
10:40 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1118.eqiad.wmnet with reason: REIMAGE [production]
10:38 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1117.eqiad.wmnet with reason: REIMAGE [production]
10:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1088 (re)pooling @ 100%: After cloning db1168', diff saved to https://phabricator.wikimedia.org/P14481 and previous config saved to /var/cache/conftool/dbconfig/20210225-103719-root.json [production]
10:34 <klausman@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-serve2002.codfw.wmnet with reason: REIMAGE [production]
10:32 <klausman@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on ml-serve2002.codfw.wmnet with reason: REIMAGE [production]
10:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1088 (re)pooling @ 75%: After cloning db1168', diff saved to https://phabricator.wikimedia.org/P14480 and previous config saved to /var/cache/conftool/dbconfig/20210225-102215-root.json [production]
10:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1088 (re)pooling @ 50%: After cloning db1168', diff saved to https://phabricator.wikimedia.org/P14479 and previous config saved to /var/cache/conftool/dbconfig/20210225-100712-root.json [production]
10:05 <klausman@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-serve2003.codfw.wmnet with reason: REIMAGE [production]
10:03 <klausman@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-serve2004.codfw.wmnet with reason: REIMAGE [production]
10:01 <klausman@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on ml-serve2003.codfw.wmnet with reason: REIMAGE [production]
10:01 <klausman@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on ml-serve2004.codfw.wmnet with reason: REIMAGE [production]
09:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1088 (re)pooling @ 25%: After cloning db1168', diff saved to https://phabricator.wikimedia.org/P14477 and previous config saved to /var/cache/conftool/dbconfig/20210225-095208-root.json [production]
09:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1088 (re)pooling @ 10%: After cloning db1168', diff saved to https://phabricator.wikimedia.org/P14476 and previous config saved to /var/cache/conftool/dbconfig/20210225-093705-root.json [production]
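(The db1088 entries above follow a staged repool after cloning: the traffic weight ramps 10% → 25% → 50% → 75% → 100%, with roughly 15 minutes between steps to watch replication and load before each bump. A minimal sketch of that pattern, with a hypothetical `set_weight` callback standing in for the real `dbctl` commit shown in the log:)

```python
import time

# Ramp schedule mirroring the log entries: fraction of normal traffic weight.
REPOOL_STEPS = [0.10, 0.25, 0.50, 0.75, 1.00]

def staged_repool(host, set_weight, wait_seconds=900, sleep=time.sleep):
    """Gradually repool `host`, pausing between steps.

    `set_weight` is a hypothetical callback that applies the new weight;
    in production each step is a `dbctl commit`, as in the log above.
    The log shows ~15 minutes (900 s) between steps.
    """
    applied = []
    for step in REPOOL_STEPS:
        set_weight(host, step)
        applied.append(step)
        if step < 1.0:
            sleep(wait_seconds)  # observe metrics before the next bump
    return applied

# Example: record the weights instead of touching real config.
history = []
staged_repool("db1088", lambda h, w: history.append((h, w)), sleep=lambda s: None)
```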
09:32 <klausman@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-serve2001.codfw.wmnet with reason: REIMAGE [production]
09:32 <klausman@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on ml-serve2001.codfw.wmnet with reason: REIMAGE [production]
09:21 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc1032.eqiad.wmnet [production]
09:14 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc1032.eqiad.wmnet [production]
09:10 <effie> upgrade memcached on mc1032, mc2032, mc2036 [production]
08:32 <volans@cumin2001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:29 <volans@cumin2001> START - Cookbook sre.dns.netbox [production]
08:15 <vgutierrez> restart ats-tls on cp5006 to enable parent proxies support - T274888 [production]
08:15 <XioNoX> un-drain lumen eqiad-codfw link for BW testing [production]
08:07 <XioNoX> drain lumen eqiad-codfw link for BW testing [production]
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1088 to clone db1168 T258361', diff saved to https://phabricator.wikimedia.org/P14474 and previous config saved to /var/cache/conftool/dbconfig/20210225-065018-marostegui.json [production]
06:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1092 T275019', diff saved to https://phabricator.wikimedia.org/P14473 and previous config saved to /var/cache/conftool/dbconfig/20210225-063243-marostegui.json [production]
00:29 <ryankemper> T274204 Restored service health on `elastic106[0,4,5]` via `sudo apt-get remove --purge wmf-elasticsearch-search-plugins --yes && sudo dpkg -i /var/cache/apt/archives/wmf-elasticsearch-search-plugins_6.5.4-4~stretch_all.deb && sudo puppet agent -tv`. There's some sort of issue with `6.5.4-5~stretch` that we will need to circle back and investigate; for now the fleet is staying on `6.5.4-4~stretch` [production]
00:05 <ryankemper> T274204 `Ctrl+C`'d out of the current rolling-upgrade; the 3 hosts that have their elasticsearch systemd units in a failing state are running the latest plugin version, meaning the new version is likely the cause of the failures [production]
00:01 <mutante> mwlog1001 - temp disabling puppet to deploy gerrit::661200 - because this is a jessie [production]
00:01 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=97) [production]
2021-02-24
23:42 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
23:30 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=99) [production]
23:18 <ryankemper> T274204 `sudo -i cookbook sre.elasticsearch.rolling-upgrade search_eqiad "eqiad cluster restarts" --task-id T274204 --nodes-per-run 3` [production]
23:18 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
23:17 <ryankemper> T274204 Beginning rolling-upgrade of `eqiad` CirrusSearch cluster to upgrade to `wmf-elasticsearch-search-plugins/stretch-wikimedia 6.5.4-5~stretch`, see tmux session `elastic_rolling_upgrade` on `ryankemper@cumin1001` [production]
23:13 <eileen> civicrm revision is 5e042e6e57, config revision is 8572611a32 [production]
22:09 <ryankemper> T265113 Unbanned `elastic1063` from both Elasticsearch clusters (`production-search-eqiad` and `production-search-omega-eqiad`) [production]
22:03 <Urbanecm> Deploy security patches for T275669 [production]
20:59 <andrew@cumin1001> END (PASS) - Cookbook wmcs.wikireplicas.add_wiki (exit_code=0) [production]
20:59 <andrew@cumin1001> Added views for new wiki: mniwiki T273465 [production]