2014-10-30
17:38 <hashar> Zuul seems to be happy. Reverted my lame patch to send Cache-Control headers; since we have a cache breaker it is not needed. [production]
17:21 <bd808> 10.64.16.29 is db1040 in the s4 pool [production]
17:18 <bd808> "Connection error: Unknown error (10.64.16.29)" 1052 in last 5m; 2877 in last 15m [production]
17:16 <hashar> Upgrading Zuul to have the status page emit a Cache-Control header {{bug|72766}} wmf-deploy-20141030-1..wmf-deploy-20141030-2 [production]
17:11 <bd808> Upgraded kibana to v3.1.1 again. Better testing now that logstash is working. [production]
17:01 <bd808> Logs on logstash1003 showed "Failed to flush outgoing items <Errno::EBADF: Bad file descriptor - Bad file descriptor>" on shutdown. Maybe something is not quite right with the elasticsearch_http plugin? [production]
17:00 <awight> Synchronized php-1.25wmf6/includes/specials/SpecialUpload.php: Parse 'upload_source_url' message on SpecialUpload (duration: 00m 10s) [production]
16:59 <bd808> restarted logstash on logstash1003. No events logged since 00:00Z [production]
16:58 <awight> Synchronized php-1.25wmf5/includes/specials/SpecialUpload.php: Parse 'upload_source_url' message on SpecialUpload (duration: 00m 11s) [production]
16:58 <bd808> restarted logstash on logstash1002. No events logged since 00:00Z [production]
16:58 <bd808> restarted logstash on logstash1001. No events logged since 00:00Z [production]
16:55 <akosiaris> uploaded php5_5.3.10-1ubuntu3.15+wmf1 on apt.wikimedia.org [production]
16:46 <bd808> Reverted kibana to e317bc6 [production]
16:44 <oblivian> Synchronized wmf-config/CommonSettings.php: Serving 15% of anons with HHVM (ludicrous speed!) (duration: 00m 16s) [production]
16:38 <bd808> Upgraded kibana to v3.1.1 via Trebuchet [production]
16:38 <hashar> Zuul status page is freezing because the status.json is being cached :-/ [production]
16:31 <awight> Synchronized php-1.25wmf6/extensions/CentralNotice: push CentralNotice updates (duration: 00m 09s) [production]
16:28 <awight> Synchronized php-1.25wmf5/extensions/CentralNotice: push CentralNotice updates (duration: 00m 11s) [production]
16:22 <manybubbles> moving shards off of elastic1003 and elastic1006 so they can be restarted. elastic1003 needs hyperthreading and elastic1006 needs noatime. [production]
16:17 <cmjohnson> powering off elastic1015-16 to replace SSDs [production]
16:04 <hashar> restarted Zuul with upgraded version ( wmf-deploy-20140924-1..wmf-deploy-20141030-1 ) [production]
16:03 <hashar> Stopping zuul [production]
16:00 <hoo> Synchronized wmf-config/CommonSettings.php: Fix oauthadmin (duration: 00m 09s) [production]
15:43 <hashar> Going to upgrade Zuul and monitor the result over the next hour. [production]
15:39 <ottomata> starting to reimage mw1032 [production]
15:29 <oblivian> Synchronized wmf-config/CommonSettings.php: Serving 10% of anons with HHVM (duration: 00m 06s) [production]
15:22 <reedy> Synchronized docroot and w: Fix dbtree caching (duration: 00m 15s) [production]
15:13 <akosiaris> upgrading PHP on mw1113 to php5_5.3.10-1ubuntu3.15+wmf1 [production]
15:07 <manybubbles> moving shards off of elastic1015 and elastic1016 so we can replace their hard drives/turn on hyperthreading [production]
15:07 <marktraceur> Synchronized php-1.25wmf6/extensions/Wikidata/: [SWAT] [wmf6] Fix edit link for aliases (duration: 00m 12s) [production]
14:37 <cmjohnson> powering down elastic1003-1006 to replace SSDs [production]
14:33 <_joe_> pooling mw1031/2 in the hhvm appservers pool [production]
12:51 <_joe_> rebooting mw1030 and mw1031 to use the updated kernel [production]
12:48 <akosiaris> enabled puppet on uranium [production]
11:38 <_joe_> depooling mw1030 and mw1031 for reimaging as hhvm appservers [production]
10:15 <_joe_> load test ended [production]
09:48 <_joe_> load testing the hhvm appserver pool as well [production]
08:17 <_joe_> powercycling mw1189, enabling hyperthreading [production]
08:04 <_joe_> doing the same with mw1189, to see how different appserver generations respond [production]
07:25 <_joe_> raising the weight of mw1114 in the api pool to test the throughput it can withstand [production]
04:47 <ori> enabled heap profiling on mw1189 [production]
2014-10-29
23:42 <ejegg> updated tool from 19928683a8112e9aadd71ba47f199885ba517a02 to 419fb7aa32c6d0776056968378e358ee01985565 [production]
23:38 <maxsem> Synchronized php-1.25wmf6/extensions/MobileFrontend/: (no message) (duration: 00m 07s) [production]
23:35 <maxsem> Synchronized php-1.25wmf5/extensions/MobileFrontend/: (no message) (duration: 00m 04s) [production]
23:13 <catrope> Synchronized php-1.25wmf6/extensions/VisualEditor: SWAT (duration: 00m 04s) [production]
23:00 <mutante> restarting nginx on cp1044 [production]
22:11 <AaronSchulz> Re-running setZoneAccess.php for swift [production]
22:04 <Krinkle> git-deploy: Deploying integration/slave-scripts a6a23ac1ec [production]
20:28 <subbu> reverted parsoid to version 617e9e61b625f25d79dfaab08830c396537be632 (due to stuck processes) [production]
20:16 <reedy> Synchronized wmf-config/mc-labs.php: noop for prod (duration: 00m 17s) [production]