2020-11-04
11:01 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
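(The sre.hosts.downtime START/END pairs in this log are Spicerack cookbook runs from a cumin host. A run roughly equivalent to these entries might look like the sketch below; the host query, duration, and reason are illustrative, not taken from this log.)

    # Hypothetical invocation; exact flags depend on the cookbook version in use.
    sudo cookbook sre.hosts.downtime --hours 2 -r "Reboot for maintenance" -t T261389 'db1100*'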
10:27 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
10:27 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
10:23 <kormat@cumin1001> dbctl commit (dc=all): 'Repooling after reboot. T261389', diff saved to https://phabricator.wikimedia.org/P13185 and previous config saved to /var/cache/conftool/dbconfig/20201104-102341-kormat.json [production]
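(The 'Repooling after reboot' commit above is the tail end of a standard depool → reboot → repool cycle driven by dbctl. A minimal sketch of the dbctl side, assuming a hypothetical replica db1234, would be:)

    # Hypothetical host name; dbctl stages the config change, then a commit applies it.
    sudo dbctl instance db1234 depool
    sudo dbctl config commit -m 'Depooling db1234 for reboot T261389'
    # ... reboot the host and wait for replication to catch up ...
    sudo dbctl instance db1234 pool
    sudo dbctl config commit -m 'Repooling after reboot. T261389'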
10:23 <Urbanecm> Start of `mwscript extensions/AbuseFilter/maintenance/updateVarDumps.php --wiki=$wiki --print-orphaned-records-to=/tmp/urbanecm/$wiki-orphaned.log --progress-markers > $wiki.log` in a tmux session updateVarDumps at mwmaint1002 (wiki=fiwiki; T246539) [production]
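(Expanded for readability, the command quoted above, run for wiki=fiwiki inside the updateVarDumps tmux session on mwmaint1002, amounts to:)

    # Expansion of the logged command for wiki=fiwiki (T246539), typed inside a named tmux session.
    tmux new-session -s updateVarDumps   # attach the session first
    wiki=fiwiki
    mwscript extensions/AbuseFilter/maintenance/updateVarDumps.php --wiki=$wiki \
      --print-orphaned-records-to=/tmp/urbanecm/$wiki-orphaned.log \
      --progress-markers > $wiki.log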
10:17 <kormat@cumin1001> dbctl commit (dc=all): 'Rebooting for T261389', diff saved to https://phabricator.wikimedia.org/P13184 and previous config saved to /var/cache/conftool/dbconfig/20201104-101729-kormat.json [production]
10:17 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
10:17 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
10:17 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
10:17 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
10:08 <_joe_> restarting envoyproxy on all of restbase codfw, sending the command in parallel via cumin, to test poolcounter usage by the safe restart scripts [production]
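(A cumin invocation matching that description might look roughly like the following; the host alias and unit name are assumptions, not taken from this log.)

    # Hypothetical cumin run from a cumin host; 'A:restbase-codfw' is an assumed host alias.
    # With no batching options, cumin sends the restart to all matched hosts in parallel.
    sudo cumin 'A:restbase-codfw' 'systemctl restart envoyproxy.service'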
10:05 <_joe_> restarting envoyproxy on restbase20{09,10} to test poolcounter usage by the safe restart scripts [production]
09:25 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:25 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:24 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:24 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:19 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.cf (exit_code=0) [production]
09:19 <ayounsi@cumin1001> START - Cookbook sre.network.cf [production]
09:01 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:01 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:00 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:00 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:44 <moritzm> uploaded freetype 2.5.2+deb8u4+wmf1 to apt.wikimedia.org/jessie-wikimedia [production]
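(Package uploads like the freetype one above land in the reprepro-managed apt.wikimedia.org repository. A hedged sketch of such an import, with the repository host, file path, and exact wrapper all assumed, is:)

    # Hypothetical import on the apt repository host; reprepro takes the target distribution
    # (jessie-wikimedia) on the command line and reads the signed .changes file.
    sudo -i reprepro include jessie-wikimedia ~/freetype_2.5.2+deb8u4+wmf1_amd64.changes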
08:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1028 (re)pooling @ 100%: Slowly pool es1028 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13182 and previous config saved to /var/cache/conftool/dbconfig/20201104-080033-root.json [production]
08:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1027 (re)pooling @ 100%: Slowly pool es1027 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13181 and previous config saved to /var/cache/conftool/dbconfig/20201104-080024-root.json [production]
07:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1026 (re)pooling @ 100%: Slowly pool es1026 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13180 and previous config saved to /var/cache/conftool/dbconfig/20201104-075953-root.json [production]
07:45 <marostegui@cumin1001> dbctl commit (dc=all): 'es1028 (re)pooling @ 75%: Slowly pool es1028 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13179 and previous config saved to /var/cache/conftool/dbconfig/20201104-074530-root.json [production]
07:45 <marostegui@cumin1001> dbctl commit (dc=all): 'es1027 (re)pooling @ 75%: Slowly pool es1027 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13178 and previous config saved to /var/cache/conftool/dbconfig/20201104-074520-root.json [production]
07:44 <marostegui@cumin1001> dbctl commit (dc=all): 'es1026 (re)pooling @ 75%: Slowly pool es1026 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13177 and previous config saved to /var/cache/conftool/dbconfig/20201104-074449-root.json [production]
07:30 <marostegui@cumin1001> dbctl commit (dc=all): 'es1028 (re)pooling @ 50%: Slowly pool es1028 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13176 and previous config saved to /var/cache/conftool/dbconfig/20201104-073026-root.json [production]
07:30 <marostegui@cumin1001> dbctl commit (dc=all): 'es1027 (re)pooling @ 50%: Slowly pool es1027 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13175 and previous config saved to /var/cache/conftool/dbconfig/20201104-073017-root.json [production]
07:29 <marostegui@cumin1001> dbctl commit (dc=all): 'es1026 (re)pooling @ 50%: Slowly pool es1026 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13174 and previous config saved to /var/cache/conftool/dbconfig/20201104-072946-root.json [production]
07:15 <marostegui@cumin1001> dbctl commit (dc=all): 'es1028 (re)pooling @ 25%: Slowly pool es1028 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13173 and previous config saved to /var/cache/conftool/dbconfig/20201104-071523-root.json [production]
07:15 <marostegui@cumin1001> dbctl commit (dc=all): 'es1027 (re)pooling @ 25%: Slowly pool es1027 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13172 and previous config saved to /var/cache/conftool/dbconfig/20201104-071513-root.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'es1026 (re)pooling @ 25%: Slowly pool es1026 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13171 and previous config saved to /var/cache/conftool/dbconfig/20201104-071443-root.json [production]
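(The '(re)pooling @ 10/25/50/75/100%' commits above and below record a staged repool of the freshly recloned es102x hosts, stepping up roughly every 15 minutes. A minimal sketch of one such step with dbctl, assuming its percentage-based pool option, is:)

    # Hypothetical single step of the staged repool for es1026 (T261717).
    sudo dbctl instance es1026 pool -p 25
    sudo dbctl config commit -m 'es1026 (re)pooling @ 25%: Slowly pool es1026 after being recloned T261717'
    sleep 900   # wait ~15 minutes before the next step, matching the timestamps in this log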
07:09 <elukey> manual cleanup of mcelog and its wmf-auto-restart (failing) on mw1381 (kernel 4.19, doesn't support mcelog) [production]
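(A hedged sketch of that cleanup on mw1381; the wmf_auto_restart unit name follows WMF's usual naming scheme and is an assumption, not taken from this log.)

    # Hypothetical cleanup: mcelog has no kernel interface on 4.19, so stop the service
    # and the auto-restart timer that kept failing to bring it back.
    sudo systemctl stop mcelog.service
    sudo systemctl mask mcelog.service
    sudo systemctl disable --now wmf_auto_restart_mcelog.timer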
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1016 es1013 es1017 T261717', diff saved to https://phabricator.wikimedia.org/P13170 and previous config saved to /var/cache/conftool/dbconfig/20201104-070121-marostegui.json [production]
07:00 <marostegui> Stop mysql on es1016, es1013, es1017 to clone es1029, es1030, es1031 T261717 [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1028 (re)pooling @ 10%: Slowly pool es1028 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13169 and previous config saved to /var/cache/conftool/dbconfig/20201104-070020-root.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1027 (re)pooling @ 10%: Slowly pool es1027 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13168 and previous config saved to /var/cache/conftool/dbconfig/20201104-070010-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1026 (re)pooling @ 10%: Slowly pool es1026 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13167 and previous config saved to /var/cache/conftool/dbconfig/20201104-065939-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1014 (re)pooling @ 100%: After cloning es1028 T261717', diff saved to https://phabricator.wikimedia.org/P13166 and previous config saved to /var/cache/conftool/dbconfig/20201104-065926-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1012 (re)pooling @ 100%: After cloning es1027 T261717', diff saved to https://phabricator.wikimedia.org/P13165 and previous config saved to /var/cache/conftool/dbconfig/20201104-065905-root.json [production]
06:58 <marostegui@cumin1001> dbctl commit (dc=all): 'es1011 (re)pooling @ 100%: After cloning es1026 T261717', diff saved to https://phabricator.wikimedia.org/P13164 and previous config saved to /var/cache/conftool/dbconfig/20201104-065849-root.json [production]
06:52 <elukey> force start of rasdaemon.service on dumpsdata1002 (its auto-restart unit was failing for it) [production]
06:47 <elukey> set an-presto1004's netbox status as "active" (was: failed) after hw maintenance - T253438 [production]
06:44 <elukey> force restart of uwsgi-ores on ores1005 - daemon down after reload, max client reached error messages in the logs [production]
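(A hedged sketch of the checks and restart behind that uwsgi-ores entry; the unit name is inferred from the log wording and the journalctl filter is illustrative.)

    # Hypothetical recovery steps on ores1005: confirm the listen-queue errors, then restart.
    sudo journalctl -u uwsgi-ores.service --since '1 hour ago' | grep -i 'max.*client'
    sudo systemctl restart uwsgi-ores.service
    systemctl status uwsgi-ores.service --no-pager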
06:44 <marostegui@cumin1001> dbctl commit (dc=all): 'es1014 (re)pooling @ 75%: After cloning es1028 T261717', diff saved to https://phabricator.wikimedia.org/P13163 and previous config saved to /var/cache/conftool/dbconfig/20201104-064422-root.json [production]
06:44 <marostegui@cumin1001> dbctl commit (dc=all): 'es1012 (re)pooling @ 75%: After cloning es1027 T261717', diff saved to https://phabricator.wikimedia.org/P13162 and previous config saved to /var/cache/conftool/dbconfig/20201104-064402-root.json [production]
06:43 <marostegui@cumin1001> dbctl commit (dc=all): 'es1011 (re)pooling @ 75%: After cloning es1026 T261717', diff saved to https://phabricator.wikimedia.org/P13161 and previous config saved to /var/cache/conftool/dbconfig/20201104-064345-root.json [production]