2020-11-04
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1016 es1013 es1017 T261717', diff saved to https://phabricator.wikimedia.org/P13170 and previous config saved to /var/cache/conftool/dbconfig/20201104-070121-marostegui.json [production]
07:00 <marostegui> Stop mysql on es1016, es1013, es1017 to clone es1029, es1030, es1031 T261717 [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1028 (re)pooling @ 10%: Slowly pool es1028 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13169 and previous config saved to /var/cache/conftool/dbconfig/20201104-070020-root.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'es1027 (re)pooling @ 10%: Slowly pool es1027 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13168 and previous config saved to /var/cache/conftool/dbconfig/20201104-070010-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1026 (re)pooling @ 10%: Slowly pool es1026 after being recloned T261717', diff saved to https://phabricator.wikimedia.org/P13167 and previous config saved to /var/cache/conftool/dbconfig/20201104-065939-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1014 (re)pooling @ 100%: After cloning es1028 T261717', diff saved to https://phabricator.wikimedia.org/P13166 and previous config saved to /var/cache/conftool/dbconfig/20201104-065926-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'es1012 (re)pooling @ 100%: After cloning es1027 T261717', diff saved to https://phabricator.wikimedia.org/P13165 and previous config saved to /var/cache/conftool/dbconfig/20201104-065905-root.json [production]
06:58 <marostegui@cumin1001> dbctl commit (dc=all): 'es1011 (re)pooling @ 100%: After cloning es1026 T261717', diff saved to https://phabricator.wikimedia.org/P13164 and previous config saved to /var/cache/conftool/dbconfig/20201104-065849-root.json [production]
06:52 <elukey> force start of rasdaemon.service on dumpsdata1002 (its auto-restart unit was failing) [production]
06:47 <elukey> set an-presto1004's netbox status as "active" (was: failed) after hw maintenance - T253438 [production]
06:44 <elukey> force restart of uwsgi-ores on ores1005 - daemon down after reload, "max clients reached" error messages in the logs [production]
06:44 <marostegui@cumin1001> dbctl commit (dc=all): 'es1014 (re)pooling @ 75%: After cloning es1028 T261717', diff saved to https://phabricator.wikimedia.org/P13163 and previous config saved to /var/cache/conftool/dbconfig/20201104-064422-root.json [production]
06:44 <marostegui@cumin1001> dbctl commit (dc=all): 'es1012 (re)pooling @ 75%: After cloning es1027 T261717', diff saved to https://phabricator.wikimedia.org/P13162 and previous config saved to /var/cache/conftool/dbconfig/20201104-064402-root.json [production]
06:43 <marostegui@cumin1001> dbctl commit (dc=all): 'es1011 (re)pooling @ 75%: After cloning es1026 T261717', diff saved to https://phabricator.wikimedia.org/P13161 and previous config saved to /var/cache/conftool/dbconfig/20201104-064345-root.json [production]
06:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es1028 with minimum weight after recloning T261717', diff saved to https://phabricator.wikimedia.org/P13160 and previous config saved to /var/cache/conftool/dbconfig/20201104-063028-marostegui.json [production]
06:29 <marostegui@cumin1001> dbctl commit (dc=all): 'es1014 (re)pooling @ 50%: After cloning es1028 T261717', diff saved to https://phabricator.wikimedia.org/P13159 and previous config saved to /var/cache/conftool/dbconfig/20201104-062919-root.json [production]
06:28 <marostegui@cumin1001> dbctl commit (dc=all): 'es1012 (re)pooling @ 50%: After cloning es1027 T261717', diff saved to https://phabricator.wikimedia.org/P13158 and previous config saved to /var/cache/conftool/dbconfig/20201104-062858-root.json [production]
06:28 <marostegui@cumin1001> dbctl commit (dc=all): 'es1011 (re)pooling @ 50%: After cloning es1026 T261717', diff saved to https://phabricator.wikimedia.org/P13157 and previous config saved to /var/cache/conftool/dbconfig/20201104-062842-root.json [production]
06:18 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es1027 with minimum weight after recloning T261717', diff saved to https://phabricator.wikimedia.org/P13156 and previous config saved to /var/cache/conftool/dbconfig/20201104-061829-marostegui.json [production]
06:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es1026 with minimum weight after recloning T261717', diff saved to https://phabricator.wikimedia.org/P13155 and previous config saved to /var/cache/conftool/dbconfig/20201104-061549-marostegui.json [production]
06:14 <marostegui@cumin1001> dbctl commit (dc=all): 'es1014 (re)pooling @ 25%: After cloning es1028 T261717', diff saved to https://phabricator.wikimedia.org/P13154 and previous config saved to /var/cache/conftool/dbconfig/20201104-061416-root.json [production]
06:13 <marostegui@cumin1001> dbctl commit (dc=all): 'es1012 (re)pooling @ 25%: After cloning es1027 T261717', diff saved to https://phabricator.wikimedia.org/P13153 and previous config saved to /var/cache/conftool/dbconfig/20201104-061355-root.json [production]
06:13 <marostegui@cumin1001> dbctl commit (dc=all): 'es1011 (re)pooling @ 25%: After cloning es1026 T261717', diff saved to https://phabricator.wikimedia.org/P13152 and previous config saved to /var/cache/conftool/dbconfig/20201104-061339-root.json [production]
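
The staged (re)pooling entries above follow the usual pattern for returning a recloned es* database host to service: raise its weight in steps (10%, 25%, 50%, 75%, 100%) and commit each step with dbctl. The log records only the commit messages, so the one-step sketch below is a reconstruction, assuming dbctl's documented "instance ... pool -p" and "config commit -m" subcommands:

  # Hypothetical reconstruction of one repool step, run from a cumin host.
  dbctl instance es1011 pool -p 25     # set es1011 to 25% of its nominal weight
  dbctl config commit -m 'es1011 (re)pooling @ 25%: After cloning es1026 T261717'
  # Repeat at 50%, 75% and 100%, pausing between steps to watch replication and load.
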
2020-11-03
22:56 <_joe_> repooling mw1346 [production]
22:55 <_joe_> depooling mw1346 [production]
22:49 <cdanis> mw1342 restart-php7.2-fpm [production]
22:37 <cdanis> repool mw1278 and mw1279 [production]
22:35 <cdanis> cdanis@mw1290.eqiad.wmnet ~ sudo restart-php7.2-fpm [production]
22:34 <cdanis> restart-php7.2-fpm and pool on mw1276 [production]
22:31 <cdanis> depool mw1276 and mw1279 also [production]
22:25 <cdanis> cdanis@mw1278.eqiad.wmnet ~ sudo depool [production]
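
The mw12xx entries above follow the usual safe-restart sequence for an appserver: depool it, restart php-fpm, then pool it again. A minimal sketch of that sequence, assuming the conftool-provided pool/depool helpers and the restart-php7.2-fpm wrapper seen in the log:

  sudo depool                # take this appserver out of its conftool pools
  sudo restart-php7.2-fpm    # restart php-fpm while no traffic is routed to the host
  sudo pool                  # return the host to service
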
21:16 <hashar> Gerrit: triggering java garbage collection # T263008 [production]
19:32 <gehel> restarting blazegraph on wdqs1007 to reset ban list [production]
18:21 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:15 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
17:45 <cmjohnson1> shutting elastic1063 down to reseat DIMM T265113 [production]
17:32 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:31 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:31 <hnowlan@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
17:31 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:30 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:30 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime [production]
17:30 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:29 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime [production]
16:57 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:51 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
16:39 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
16:39 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
16:36 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
16:35 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
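
The START/END pairs above are Spicerack cookbook runs launched from cumin1001; sre.hosts.downtime schedules monitoring (Icinga) downtime for a host ahead of maintenance, and the FAIL at 17:31 (exit_code=99) was simply retried a moment later and passed. As an illustration only, such a run might be invoked roughly as follows; the flags, duration and target are assumptions, not taken from the log:

  # Hypothetical invocation of the downtime cookbook from a cumin host.
  sudo cookbook sre.hosts.downtime --hours 2 -r 'hardware maintenance T265113' 'elastic1063*'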