2014-10-30
16:17 <cmjohnson> powering off elastic1015-16 to replace ssds [production]
16:04 <hashar> restarted Zuul with upgraded version ( wmf-deploy-20140924-1..wmf-deploy-20141030-1 ) [production]
16:03 <hashar> Stopping zuul [production]
16:00 <hoo> Synchronized wmf-config/CommonSettings.php: Fix oauthadmin (duration: 00m 09s) [production]
15:43 <hashar> Going to upgrade Zuul and monitor the result over the next hour. [production]
15:39 <ottomata> starting to reimage mw1032 [production]
15:29 <oblivian> Synchronized wmf-config/CommonSettings.php: Serving 10% of anons with HHVM (duration: 00m 06s) [production]
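(The 15:29 entry records a config sync that put 10% of anonymous traffic on HHVM. The actual wmf-config/CommonSettings.php change is not reproduced in this log; as a minimal, purely illustrative Python sketch of how a deterministic percentage rollout can be bucketed, with the function name and threshold being assumptions rather than the real mechanism:)

    import hashlib

    # Hypothetical illustration only; the real change lived in
    # wmf-config/CommonSettings.php and is not shown in this log.
    ROLLOUT_PERCENT = 10  # "Serving 10% of anons with HHVM"

    def in_rollout(request_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
        """Deterministically place a request into the rollout bucket.

        Hashing the identifier keeps the decision stable for the same input
        while spreading requests roughly evenly across 100 buckets.
        """
        bucket = int(hashlib.sha1(request_id.encode()).hexdigest(), 16) % 100
        return bucket < percent

    # Roughly 10% of sample identifiers land in the bucket.
    hits = sum(in_rollout(f"req-{i}") for i in range(10_000))
    print(f"{hits / 100:.1f}% of sample requests would be served by HHVM")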
15:22 <reedy> Synchronized docroot and w: Fix dbtree caching (duration: 00m 15s) [production]
15:13 <akosiaris> upgrading PHP on mw1113 to php5_5.3.10-1ubuntu3.15+wmf1 [production]
15:07 <manybubbles> moving shards off of elastic1015 and elastic1016 so we can replace their hard drives/turn on hyper threading [production]
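(Draining nodes before hardware work, as in the 15:07 entry above, is typically done by telling Elasticsearch to relocate shards away from the named hosts before they are powered off. A minimal sketch using the standard cluster-settings API follows; the endpoint URL is an assumption, and only the node names come from the log entry:)

    import json
    import urllib.request

    # Assumed local endpoint; in practice this would be any node in the cluster.
    ES_URL = "http://localhost:9200/_cluster/settings"

    # Ask Elasticsearch to move all shards off elastic1015 and elastic1016 so
    # the hosts can be taken down for SSD replacement / hyper-threading changes.
    payload = json.dumps({
        "transient": {
            "cluster.routing.allocation.exclude._name": "elastic1015,elastic1016"
        }
    }).encode()

    request = urllib.request.Request(
        ES_URL,
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode())  # acknowledgement from the cluster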
15:07 <marktraceur> Synchronized php-1.25wmf6/extensions/Wikidata/: [SWAT] [wmf6] Fix edit link for aliases (duration: 00m 12s) [production]
14:37 <cmjohnson> powering down elastic1003-1006 to replace ssds [production]
14:33 <_joe_> pooling mw1031/2 in the hhvm appservers pool [production]
12:51 <_joe_> rebooting mw1030 and mw1031 to use the updated kernel [production]
12:48 <akosiaris> enabled puppet on uranium [production]
11:38 <_joe_> depooling mw1030 and mw1031 for reimaging as hhvm appservers [production]
10:15 <_joe_> load test ended [production]
09:48 <_joe_> load testing the hhvm appserver pool as well [production]
08:17 <_joe_> powercycling mw1189, enabling hyperthreading [production]
08:04 <_joe_> doing the same with mw1189, to see how different appserver generations respond [production]
07:25 <_joe_> raising the weight of mw1114 in the api pool to test the throughput it can withstand [production]
04:47 <ori> enabled heap profiling on mw1189 [production]
2014-10-29
23:42 <ejegg> updated tool from 19928683a8112e9aadd71ba47f199885ba517a02 to 419fb7aa32c6d0776056968378e358ee01985565 [production]
23:38 <maxsem> Synchronized php-1.25wmf6/extensions/MobileFrontend/: (no message) (duration: 00m 07s) [production]
23:35 <maxsem> Synchronized php-1.25wmf5/extensions/MobileFrontend/: (no message) (duration: 00m 04s) [production]
23:13 <catrope> Synchronized php-1.25wmf6/extensions/VisualEditor: SWAT (duration: 00m 04s) [production]
23:00 <mutante> restarting nginx on cp1044 [production]
22:11 <AaronSchulz> Re-running setZoneAccess.php for swift [production]
22:04 <Krinkle> git-deploy: Deploying integration/slave-scripts a6a23ac1ec [production]
20:28 <subbu> reverted parsoid to version 617e9e61b625f25d79dfaab08830c396537be632 (due to stuck processes) [production]
20:16 <reedy> Synchronized wmf-config/mc-labs.php: noop for prod (duration: 00m 17s) [production]
20:07 <arlolra> updated Parsoid to version 4e21bdb6fccc377468fd3d1cbc656fb64464ea78 [production]
19:45 <reedy> Synchronized wmf-config/InitialiseSettings.php: (no message) (duration: 00m 16s) [production]
19:39 <reedy> Synchronized wmf-config/InitialiseSettings.php: (no message) (duration: 00m 16s) [production]
19:26 <reedy> Synchronized wmf-config/InitialiseSettings.php: (no message) (duration: 00m 15s) [production]
19:17 <ori> upgraded HHVM to 3.3.0+dfsg1-1+wm1 [production]
18:58 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: group0 to 1.25wmf6 [production]
18:57 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikipedias to 1.25wmf5 [production]
18:47 <reedy> Finished scap: testwiki to 1.25wmf6 and build l10n cache (duration: 28m 30s) [production]
18:18 <reedy> Started scap: testwiki to 1.25wmf6 and build l10n cache [production]
17:24 <cmjohnson> shutting down to replace ssds in elastic1002,1007,1014 [production]
17:07 <ori> Synchronized wmf-config/CommonSettings.php: I8dd62e2cc: Re-enable hhvm beta feature on Wikidata (duration: 00m 06s) [production]
16:20 <manybubbles> elastic101[7-9] look good to me - adding them to the cluster [production]
16:17 <manybubbles> shutting down elasticsearch on elastic1002 - it's empty and ready to have its disk upgraded/hyper threading enabled [production]
16:05 <manybubbles> ignore my last log message about 1017 - typo'd [production]
16:05 <manybubbles> shutting down elasticsearch on elastic1007 - it's empty and ready to have its disk upgraded/hyper threading enabled [production]
16:04 <manybubbles> shutting down elasticsearch on elastic1014 - it's empty and ready to have its disk upgraded/hyper threading enabled [production]
16:04 <manybubbles> shutting down elasticsearch on elastic1017 - it's empty and ready to have its disk upgraded/hyper threading enabled [production]
15:39 <manybubbles> start moving shards back to elastic1001 and elastic1008 now that they are up with hyperthreading on [production]
15:37 <Reedy> deleted php-1.24wmf21 from mediawiki-installation [production]