2015-09-21
10:06 <moritzm> depooled mw1100-mw1109 (for T104968) [production]
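The log doesn't record how the (de)pooling itself was performed; as a minimal sketch, assuming the batch is toggled in a legacy pybal host list (one Python-dict line per backend with an 'enabled' flag) and a hypothetical config path:

    # Depool mw1100-mw1109: flip 'enabled' to False on each host's line.
    # File path and layout are assumptions, not taken from this log.
    for h in mw11{00..09}; do
        sed -i "/'host': '${h}\.eqiad\.wmnet'/s/'enabled': True/'enabled': False/" \
            /srv/pybal-config/pybal/eqiad/apaches
    done

Repooling would flip the flag back to True once the hosts are ready to serve again.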
09:56 <moritzm> repooled mw1140 and mw1142-mw1148 (for T104968) [production]
09:41 <moritzm> depooled mw1140 and mw1142-mw1148 (for T104968) [production]
09:36 <moritzm> repooled mw1130-mw1139 (for T104968) [production]
09:22 <moritzm> depooled mw1130-mw1139 (for T104968) [production]
09:14 <moritzm> repooled mw1120-mw1129 (for T104968) [production]
09:02 <moritzm> depooled mw1120-mw1129 (for T104968) [production]
08:48 <moritzm> repooled mw1189 and mw1200-mw1208 (for T104968) [production]
08:33 <moritzm> depooled mw1189 and mw1200-mw1208 (for T104968) [production]
08:29 <godog> switch to 'restbase' cassandra user on restbase test cluster [production]
08:29 <moritzm> repooled mw1190-mw1195 and mw1197-mw1199 (for T104968) [production]
08:21 <_joe_> restarted the logstash agent on logstash1003, OOM'd [production]
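One plausible way the logstash OOM above would have been confirmed and the agent brought back, as a sketch only (service name, init wrapper and log path are assumptions):

    dmesg | grep -i 'out of memory'              # confirm the kernel OOM killer hit the agent
    service logstash restart                     # restart the agent
    tail -n 50 /var/log/logstash/logstash.log    # check it comes back cleanly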
08:18 <moritzm> depooled mw1190-mw1195 and mw1197-mw1199 (for T104968) [production]
08:07 <_joe_> installing the new HHVM package on the api canaries [production]
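The exact rollout mechanism for the package isn't in the log; a hedged sketch of pushing it to a small canary set with salt, where the 'api_canary' nodegroup name is hypothetical:

    # Nodegroup name is an assumption; the salt and apt invocations are standard.
    salt -N 'api_canary' cmd.run 'apt-get update && apt-get install -y hhvm'
    salt -N 'api_canary' cmd.run 'dpkg -l hhvm | tail -n1'   # confirm the new version landed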
08:04 <moritzm> repooled mw1221-mw1229 (for T104968) [production]
07:53 <moritzm> depooled mw1221-mw1229 (for T104968) [production]
07:49 <moritzm> repooled mw1230-mw1235 (for T104968) [production]
07:43 <_joe_> installing the new hhvm package on the canary appservers [production]
07:08 <moritzm> depooled mw1230-mw1235 (for T104968) [production]
04:31 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon Sep 21 04:31:03 UTC 2015 (duration 31m 2s) [production]
02:23 <l10nupdate@tin> LocalisationUpdate completed (1.26wmf23) at 2015-09-21 02:23:12+00:00 [production]
02:19 <l10nupdate@tin> Synchronized php-1.26wmf23/cache/l10n: l10nupdate for 1.26wmf23 (duration: 06m 25s) [production]
02:06 <MaxSem> Maps: created indexes on admin. <3 Postgres :( [production]
01:56 <bblack> downtimed eqiad ipv6 text/upload alerts as well, as with mobile above ( 1 301 TLS Redirect - 505 bytes in 1.008 second response time [production]
01:46 <bblack> downtimed the "LVS HTTP IPv6 on mobile-lb.eqiad.wikimedia.org_ipv6" alert for now ( https://phabricator.wikimedia.org/T113154 ) [production]
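Downtimes like the two entries above are typically scheduled in Icinga, either via the web UI or by writing to its external command file; a minimal sketch, with the command-file path, host/service names and 24h window all assumptions rather than values from this log:

    now=$(date +%s); end=$((now + 86400))
    printf '[%s] SCHEDULE_SVC_DOWNTIME;mobile-lb.eqiad.wikimedia.org_ipv6;LVS HTTP IPv6;%s;%s;1;0;0;bblack;T113154\n' \
        "$now" "$now" "$end" > /var/lib/icinga/rw/icinga.cmd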
2015-09-19
23:12 <urandom> beginning Cassandra repair on restbase1005 (nodetool repair -pr) [production]
23:08 <urandom> beginning Cassandra repair on restbase1004 (nodetool repair -pr) [production]
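For context on the two repair entries above: nodetool's -pr (--partitioner-range) flag limits the repair to the token ranges the node owns as primary replica, so running the same command once on every node in turn covers the whole ring without repairing any range twice:

    nodetool repair -pr        # as run on restbase1004 and restbase1005
    nodetool compactionstats   # optional: watch the resulting validation/compaction work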
19:56 <jynus> restarting gitblit once more, last chance [production]
19:04 <paravoid> salt rm /etc/systemd/system/txstatsd.service from all cp*; a leftover that ::txstatsd::decommission (removed with 4a1d4e) missed [production]
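Roughly what the salt entry above describes, as a sketch (the daemon-reload is an extra step not mentioned in the log):

    salt 'cp*' cmd.run 'rm -f /etc/systemd/system/txstatsd.service'
    salt 'cp*' cmd.run 'systemctl daemon-reload'   # make systemd forget the stale unit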
18:45 <_joe_> restarted gitblit. I will now substitute myself with a clever perl one-liner. [production]
18:38 <paravoid> pooling back cp1046 to pybal eqiad/mobile, has stayed stable [production]
18:34 <paravoid> reactivating BGP with GTT @ eqiad [production]
08:42 <_joe_> cp1046 dead on console again, powercycling to inspect it [production]
05:49 <aaron@tin> Synchronized php-1.26wmf23/extensions/TitleBlacklist: 80d3a21a51f9c54ed2d94 (duration: 00m 12s) [production]
05:22 <paravoid> pybal-depooling cp1046 from eqiad/mobile until further investigation [production]
05:21 <paravoid> powercycling cp1046, dead on console [production]
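A host that is dead on its console is normally power-cycled through the out-of-band management interface; a sketch with ipmitool, where the management hostname and password file are assumptions:

    ipmitool -I lanplus -H cp1046.mgmt.eqiad.wmnet -U root -f /root/.ipmipass chassis power cycle
    ipmitool -I lanplus -H cp1046.mgmt.eqiad.wmnet -U root -f /root/.ipmipass sol activate   # reattach to the serial console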
04:28 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Sep 19 04:28:59 UTC 2015 (duration 28m 57s) [production]
02:23 <l10nupdate@tin> LocalisationUpdate completed (1.26wmf23) at 2015-09-19 02:23:29+00:00 [production]
02:20 <l10nupdate@tin> Synchronized php-1.26wmf23/cache/l10n: l10nupdate for 1.26wmf23 (duration: 06m 05s) [production]