2016-03-19
22:28 <jynus> powercycling oxygen, looks kernel-dead [production]
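A powercycle like this is typically done out-of-band when the kernel is dead; a minimal sketch, assuming an IPMI management interface at oxygen.mgmt.eqiad.wmnet (management hostname, user, and password handling are assumptions):

    # power-cycle the kernel-dead host via its management interface
    # -E reads the IPMI password from the IPMI_PASSWORD environment variable
    ipmitool -I lanplus -H oxygen.mgmt.eqiad.wmnet -U root -E chassis power cycle
    # then watch the boot on the serial-over-LAN console
    ipmitool -I lanplus -H oxygen.mgmt.eqiad.wmnet -U root -E sol activate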
22:16 <urandom> removing 22G of heap dumps from restbase2004.codfw.wmnet [production]
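A sketch of the likely cleanup, assuming the JVM dropped .hprof heap dumps under Cassandra's working directory (the path and file pattern are assumptions):

    # inspect the heap dumps and the space they occupy before deleting
    find /srv/cassandra -name '*.hprof' -exec du -h {} +
    # remove them to reclaim the ~22G
    find /srv/cassandra -name '*.hprof' -delete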
22:07 <urandom> clearing snapshots on restbase2004.codfw.wmnet [production]
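Snapshot cleanup on a Cassandra node is normally a nodetool operation; a sketch, assuming default JMX settings (listing first shows how much space the snapshots pin):

    # list snapshots and the space they hold
    nodetool listsnapshots
    # drop all snapshots on this node; a keyspace or snapshot name can narrow the scope
    nodetool clearsnapshot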
15:43 <reedy@tin> Synchronized wmf-config/throttle.php: Throttle rules for event T130447 (duration: 00m 26s) [production]
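"Synchronized" entries like this one come from the deploy tooling on tin; a sketch of the likely invocation, assuming the 2016-era sync-file wrapper run from the staging directory (the exact wrapper name and path are assumptions):

    # from /srv/mediawiki-staging on the deployment host
    sync-file wmf-config/throttle.php 'Throttle rules for event T130447'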
11:38 <godog> restart slapd on seaborgium, oom-killed [production]
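A sketch of the usual diagnosis and recovery for an OOM-killed daemon, assuming systemd and a kernel-logged kill:

    # confirm the OOM kill in the kernel log
    journalctl -k | grep -i -e 'out of memory' -e 'killed process'
    # bring the LDAP server back and verify it stays up
    systemctl restart slapd
    systemctl status slapd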
10:51 <hashar> Labs LDAP is probably down (T130446). Can't log in to tools-login.wmflabs.org or the Jenkins interface, and Nodepool yields error 500 communicating with the OpenStack API [production]
02:31 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Mar 19 02:31:46 UTC 2016 (duration: 8m 31s) [production]
02:23 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.17) (duration: 10m 07s) [production]
01:54 <urandom> bootstrapping restbase1013-b.eqiad.wmnet: T125842 [production]
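Bootstrapping a new Cassandra instance mostly means starting it and letting it stream its token ranges; a sketch of how that is typically watched (the service name is an assumption, and multi-instance hosts like restbase1013-b often use per-instance unit names):

    # start the new instance; it shows as UJ (Up/Joining) in the ring
    # until bootstrap streaming finishes, then flips to UN (Up/Normal)
    systemctl start cassandra
    watch nodetool status
    # per-session streaming progress
    nodetool netstats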
2016-03-18
23:35 <krinkle@tin> Synchronized php-1.27.0-wmf.17/extensions/WikimediaEvents/modules/ext.wikimediaEvents.deprecate.js: (no message) (duration: 00m 35s) [production]
21:11 <ostriches> cleaned up stale /srv/mediawiki/php-1.27.0-wmf.{10,11} from the apaches [production]
21:09 <krinkle@tin> Synchronized wmf-config/missing.php: (no message) (duration: 00m 25s) [production]
20:53 <ottomata> reenabling puppet on krypton [production]
19:52 <ottomata> temporarily disabling puppet on krypton [production]
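Fencing a host off from Puppet for manual work is standard agent usage; a minimal sketch:

    # disable with a reason so other admins can see why
    puppet agent --disable 'manual change in progress on krypton'
    # ... do the manual work ...
    puppet agent --enable
    puppet agent --test    # one attended run to confirm a clean catalog apply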
19:21 <ori> rebooting bohrium [production]
19:20 <ori> upgraded bohrium VM: vCPUs 2 => 8, RAM 4G => 8G [production]
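A sketch of what such a resize could look like, assuming bohrium is a Ganeti-managed VM (the cluster tooling and parameter names are assumptions; the reboot logged above picks up the new sizing):

    # grow the instance to 8 vCPUs and 8G of RAM
    # maxmem is in MiB here; older Ganeti releases call this parameter memory=
    gnt-instance modify -B vcpus=8,maxmem=8192 bohrium
    gnt-instance reboot bohrium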
19:06 <ori@tin> Synchronized wmf-config/logging.php: Iabca8858e: Allow finer-grained control over debug logging via XWD (duration: 00m 32s) [production]
18:56 <demon@tin> Synchronized .arclint: no-op really, co master sync (duration: 00m 39s) [production]
18:08 <gehel> restarting elasticsearch server elastic1031.eqiad.wmnet [production]
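The one-node-at-a-time restarts in this log follow the standard Elasticsearch rolling-restart pattern; a sketch, assuming the HTTP API on port 9200:

    # stop shard reallocation so the cluster does not rebalance around the restart
    curl -XPUT localhost:9200/_cluster/settings \
      -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
    systemctl restart elasticsearch
    # re-enable allocation, then wait for green before moving to the next node
    curl -XPUT localhost:9200/_cluster/settings \
      -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'
    curl 'localhost:9200/_cluster/health?wait_for_status=green&timeout=30m'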
17:59 <mutante> netmon1001: torrus service failed; recovery steps as outlined on wikitech [[Torrus]] [production]
17:55 <ori> on bohrium: /etc/apache2/sites-enabled/.links2 was causing puppet to refresh apache2 on each run [production]
17:30 <gehel> restarting elasticsearch server elastic1030.eqiad.wmnet [production]
17:05 <gehel> restarting elasticsearch server elastic1029.eqiad.wmnet [production]
16:53 <jynus> starting enwiki import to labs from dbstore1002 (expect lag and consistency problems during the hot import) [production]
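A hot import along these lines typically streams a dump straight into the target, which is why the lag and consistency caveat applies; a rough sketch, assuming mysqldump and a hypothetical target host:

    # stream enwiki from the analytics store into the labs replica without an
    # intermediate file; --single-transaction keeps the dump consistent
    # without locking the source (labsdb-target.example is hypothetical)
    mysqldump -h dbstore1002.eqiad.wmnet --single-transaction enwiki \
      | mysql -h labsdb-target.example enwiki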
16:37 <moritzm> restarted hhvm on mw1205 [production]
16:30 <moritzm> bumped connection tracking table size on mw1161-mw1169 to 524288 to cope with currently elevated connections on those hosts (T130364) [production]
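The connection-tracking bump maps to one kernel sysctl; a sketch (524288 = 2^19 entries):

    # raise the conntrack ceiling at runtime
    sysctl -w net.netfilter.nf_conntrack_max=524288
    # compare current usage against the new ceiling
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max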
16:19 <godog> reboot ms-be2010 to pick up new disk ordering [production]
15:23 <elukey@tin> Synchronized wmf-config/jobqueue-eqiad.php: REVERT - Re-enabled persistence between Job Queues and Job Runners. (duration: 00m 19s) [production]
15:03 <elukey@tin> Synchronized wmf-config/jobqueue-eqiad.php: Re-enabled persistence between Job Queues and Job Runners. (duration: 00m 30s) [production]
15:02 <godog> bootstrap restbase1013-a [production]
14:36 <gehel> restarting elasticsearch server elastic1028.eqiad.wmnet [production]
14:02 <elukey> restarted eventlog1001.eqiad.wmnet and eventlog2001.codfw.wmnet for kernel upgrade [production]
13:43 <gehel> restarting elasticsearch server elastic1027.eqiad.wmnet [production]
13:24 <gehel> restarting pybal on lvs2003.codfw.wmnet [production]
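A sketch of a pybal restart plus the usual check that the LVS state repopulated, assuming systemd and ipvsadm on the load balancer:

    systemctl restart pybal
    journalctl -u pybal -n 50    # watch for pool initialization errors
    ipvsadm -L -n | head         # confirm virtual services and real servers are back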
13:22 <gehel> enabling all nodes for service search.svc.codfw.wmnet:9243 (elastic-https) on codfw [production]
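Pooling nodes for an LVS service is driven by conftool state; a sketch using the newer confctl select syntax (the 2016 invocation differed, and the tag values here are assumptions):

    # pool every node registered for the elastic-https service in codfw
    confctl select 'dc=codfw,cluster=elasticsearch,service=elastic-https' set/pooled=yes
    # verify the resulting state
    confctl select 'dc=codfw,service=elastic-https' get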
13:22 <gehel> restarting pybal on lvs2006.codfw.wmnet [production]
13:06 <gehel> restarting elasticsearch server elastic1026.eqiad.wmnet [production]
12:43 <gehel> restarting elasticsearch server elastic1025.eqiad.wmnet [production]
12:35 <godog> finished ms-fe1* rolling reboot [production]
12:15 <godog> finished ms-be1* rolling reboot [production]
12:00 <elukey> Forcing a puppet agent run on all the jobrunners and videoscalers since rdb1005 is now back in service. Will also restart jobchron. [production]
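One way to force an agent run across a host group in the 2016-era Salt setup; the grain used here to match the jobrunner/videoscaler hosts is hypothetical:

    # run the puppet agent once on every jobrunner and videoscaler
    salt -G 'cluster:jobrunner' cmd.run 'puppet agent --test'
    # then restart the job queue scheduler as noted above
    salt -G 'cluster:jobrunner' cmd.run 'service jobchron restart'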
11:58 <elukey> Added rdb1005 back to the jobrunners puppet config after maintenance. [production]
11:57 <gehel> restarting elasticsearch server elastic1024.eqiad.wmnet [production]
11:46 <gehel> restarting pybal on lvs1003 [production]
11:43 <elukey@tin> Synchronized wmf-config/jobqueue-eqiad.php: Add rdb1005 back to the Redis Job Queues after maintenance (duration: 01m 22s) [production]
11:23 <moritzm> powercycled mw1163; it hung on reboot and the serial console was stuck [production]
11:05 <moritzm> rolling reboot of mw1161 to mw1169 for kernel upgrade [production]
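A sketch of the rolling pattern implied here: drain, reboot, wait, repool, one host at a time (the confctl selector syntax and the fixed sleep are simplifying assumptions):

    for h in mw116{1..9}; do
      confctl select "name=$h.eqiad.wmnet" set/pooled=no    # drain the appserver
      ssh "$h.eqiad.wmnet" reboot
      sleep 300                                             # crude wait for it to return
      confctl select "name=$h.eqiad.wmnet" set/pooled=yes
    done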
11:04 <gehel> restarting pybal on lvs1012 [production]
11:04 <gehel> restarting pybal on lvs1009 [production]