2014-04-04
19:16 <hashar> restarting Jenkins [production]
19:07 <hashar> Jenkins: unpooling the gallium slave [production]
19:05 <hashar> Zuul / Jenkins stalled again. [production]
18:43 <csteipp> redeployed updated patch for bug 63251 to fix a reported bug [production]
16:10 <_joe_> restarting gitblit, for the last time today [production]
15:07 <_joe_> restarting gitblit as it has eaten up all of its RAM again and is thrashing the CPU [production]
12:32 <mutante> hume - shutting down [production]
12:06 <mutante> hume - disabling puppet/salt/monitoring [production]
11:13 <mutante> restarting gitblit with a new option to use incremental GC, in an attempt to fix timeouts caused by GC eating CPU [production]
08:07 <paravoid> deactivating cr1-eqiad<->HE peerings, the transatlantic par2<->ash1 link is congested [production]
07:25 <mutante> restarting gitblit [production]
05:45 <LocalisationUpdate> ResourceLoader cache refresh completed at Fri Apr 4 05:45:07 UTC 2014 (duration 18m 25s) [production]
04:56 <LocalisationUpdate> completed (1.23wmf21) at 2014-04-04 04:56:06+00:00 [production]
04:45 <LocalisationUpdate> completed (1.23wmf20) at 2014-04-04 04:45:01+00:00 [production]
04:20 <demon> rebuilt wikiversions.cdb and synchronized wikiversions files: unbreak test2.wp and test.wikidata as well [production]
04:17 <demon> rebuilt wikiversions.cdb and synchronized wikiversions files: mw.org back to 1.23wmf20 [production]
03:43 <LocalisationUpdate> ResourceLoader cache refresh completed at Fri Apr 4 03:43:03 UTC 2014 (duration 43m 2s) [production]
03:28 <ori> Interface messages are missing on group0 / 1.23wmf21 wikis (mediawikiwiki, testwiki, test2wiki, and testwikidata) [production]
02:50 <LocalisationUpdate> completed (1.23wmf21) at 2014-04-04 02:50:26+00:00 [production]
02:24 <LocalisationUpdate> completed (1.23wmf20) at 2014-04-04 02:24:51+00:00 [production]
01:08 <krinkle> synchronized php-1.23wmf21/resources 'I6e93d9ab0e4a926c09c' [production]
2014-04-03
22:00 <demon> synchronized wmf-config/CirrusSearch-production.php 'lowering cache time, for testing' [production]
21:55 <demon> updated /a/common/php-1.23wmf20 to {{Gerrit|Ic853ebff4}}: Cherry-pick I550eb4b0a8fa18344e8b0de3ec85d61c2122ffb8 [production]
21:54 <demon> synchronized php-1.23wmf20/extensions/CirrusSearch 'Cirrus back to master again' [production]
21:50 <ori> synchronized multiversion/updateBitsBranchPointers 'updateBitsBranchPointers: get rid of 'static-stable' branch link' [production]
21:50 <ori> updated /a/common to {{Gerrit|Ic1602c045}}: updateBitsBranchPointers: get rid of 'static-stable' branch link [production]
21:46 <demon> synchronized php-1.23wmf20/extensions/CirrusSearch 'Rolling back to 1.23wmf20 branch point from master' [production]
21:38 <demon> synchronized php-1.23wmf20/extensions/CirrusSearch 'Updating Cirrus to master' [production]
21:33 <demon> synchronized wmf-config/CirrusSearch-production.php 'Italian wikis getting interwiki search; they're my favorite beta testers' [production]
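The CirrusSearch-production.php syncs above (22:00 and 21:33) enable interwiki search for the Italian wikis and lower the result cache time for testing. A minimal sketch of what such a wmf-config change could look like, assuming hypothetical setting names ($wgCirrusSearchInterwikiSources, $wgCirrusSearchInterwikiCacheTime) and a simple dbname check; the actual variables and file layout in wmf-config may differ:

    <?php
    // Hypothetical sketch of a wmf-config/CirrusSearch-production.php change;
    // setting names and values are illustrative assumptions, not the deployed config.
    if ( $wgDBname === 'itwiki' ) {
        // Pull cross-wiki results from a couple of sibling Italian projects.
        $wgCirrusSearchInterwikiSources = array(
            'wikt' => 'itwiktionary',
            'q'    => 'itwikiquote',
        );
        // Short cache time so changes show up quickly while testing.
        $wgCirrusSearchInterwikiCacheTime = 60; // seconds
    }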
19:23 <reedy> synchronized docroot and w [production]
19:21 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: group0 wikis to 1.23wmf21 [production]
19:17 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikipedias actually to 1.23wmf20 [production]
19:15 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikipedias to 1.23wmf20 [production]
19:09 <reedy> Finished scap: testwiki to 1.23wmf21 and build l10n cache (duration: 38m 23s) [production]
18:30 <reedy> Started scap: testwiki to 1.23wmf21 and build l10n cache [production]
18:23 <reedy> updated /a/common to {{Gerrit|I835c2b1d5}}: Depool. See RT 7191. [production]
11:10 <paravoid> latency on the IPv4 eqiad<->esams private link also elevated by ~15ms, but no packet loss observed [production]
11:09 <paravoid> this affects both IPv6 transit at esams (slowdowns) and IPv6 eqiad<->esams [production]
11:08 <paravoid> deactivating cr1-esams<->HE peering; latency > 160ms, at times over 200ms (congestion?); back to 84ms now [production]
10:51 <akosiaris> temporarily stopped squid on brewster [production]
10:26 <hashar> Jenkins job mediawiki-core-phpunit-hhvm is back, thanks to {{Gerrit|123573}} [production]
06:28 <paravoid> powercycling ms-be1003, unresponsive, no console output [production]
04:43 <springle> synchronized wmf-config/db-eqiad.php 'return upgraded DB slaves to normal load' [production]
04:11 <springle> synchronized wmf-config/db-eqiad.php 's6 repool db1015, warm up' [production]
04:04 <springle> synchronized wmf-config/db-eqiad.php 's6 depool db1015 for upgrade' [production]
04:03 <springle> synchronized wmf-config/db-eqiad.php 's5 repool db1037, warm up' [production]
03:53 <springle> synchronized wmf-config/db-eqiad.php 's5 depool db1037 for upgrade' [production]
03:53 <LocalisationUpdate> ResourceLoader cache refresh completed at Thu Apr 3 03:53:18 UTC 2014 (duration 53m 16s) [production]
03:34 <springle> db1020 RAID controller DIMM ECC errors [production]
03:14 <springle> synchronized wmf-config/db-eqiad.php 's4 depool db1020 for upgrade' [production]
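The springle entries above walk through the usual depool / upgrade / warm-up / repool cycle for a database replica, done by editing wmf-config/db-eqiad.php and syncing it to the cluster. A minimal sketch of the kind of edit involved, assuming an LBFactoryMulti-style 'sectionLoads' array keyed by hostname; all hostnames other than db1020 and all weights are placeholders, and the real file layout may differ:

    <?php
    // Hypothetical sketch of a wmf-config/db-eqiad.php edit for section s4.
    $wgLBFactoryConf['sectionLoads']['s4'] = array(
        'db10xx' => 0,     // section master: no general read load (placeholder name)
        // 'db1020' => 100, // depooled for upgrade: comment out, then sync the file
        'db10yy' => 200,   // remaining replicas keep taking reads (placeholder name)
    );
    // After the upgrade, db1020 is repooled at a reduced weight so its caches can
    // warm up, then returned to full weight in a later sync (the 03:14 -> 04:43
    // sequence above).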