2018-04-20
09:58 <godog> upload scap 3.8.0-2 - T192124 [production]
09:51 <moritzm> upgrading deployment servers to MEMC_VAL_COMPRESSION_ZLIB enabled HHVM build [production]
09:41 <jynus> starting reimage of db2070 [production]
09:41 <moritzm> upgrading mwdebug servers to MEMC_VAL_COMPRESSION_ZLIB enabled HHVM build [production]
09:33 <jynus@tin> Synchronized wmf-config/db-codfw.php: Repool db2071, depool db2070 (duration: 01m 16s) [production]
09:12 <elukey> restart of mw apis showing ~50% cpu utilization as precaution before the weekend - mw[1224,1225,1228,1230,1231,1233-1235,1276-1283,1286,1312,1313,1315,1316,1341,1343,1344,1347,1348]* [production]
09:06 <moritzm> upgrading video scalers in codfw to MEMC_VAL_COMPRESSION_ZLIB enabled HHVM build [production]
08:41 <moritzm> upgrading job runners in codfw to MEMC_VAL_COMPRESSION_ZLIB enabled HHVM build [production]
08:39 <marostegui> Going to sanitize gorwiki euwikisource romdwikimedia inhwiki on db1095 - T189112 T189466 T187774 T184375 [production]
08:39 <elukey> restart hhvm on mw[1226,1232].eqiad.wmnet - high load [production]
08:05 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Fully repool db1114 in API - T191996 (duration: 01m 16s) [production]
07:57 <jynus> starting reimage of db2071 [production]
07:52 <jynus@tin> Synchronized wmf-config/db-codfw.php: Depool db2071 (duration: 01m 16s) [production]
07:48 <moritzm> upgrading app servers in codfw to MEMC_VAL_COMPRESSION_ZLIB enabled HHVM build [production]
07:40 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Give more API traffic to db1114 (duration: 01m 17s) [production]
07:38 <ema> cp3041: restart varnish-be due to mbox lag [production]
07:37 <akosiaris> upgrade qemu on ganeti2006 to 1:2.8+dfsg-3~bpo8+1 and migrate mwdebug2001 to it T150532 [production]
07:32 <ema> cp3030: restart varnish-be due to mbox lag [production]
07:30 <_joe_> upgrading hhvm on all jobrunners in eqiad [production]
07:13 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Slowly repool db1114 in API (duration: 01m 15s) [production]
07:09 <ema> cp3032/cp3043: restart varnish-be due to mbox lag [production]
07:08 <moritzm> upgrading API servers in codfw to MEMC_VAL_COMPRESSION_ZLIB enabled HHVM build [production]
07:06 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1110 after alter table (duration: 01m 16s) [production]
06:54 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Restore main traffic original weight for db1114 (duration: 01m 15s) [production]
06:26 <ema> kafka::analytics remove strongswan leftovers T185136 [production]
06:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Slowly repool db1114 (duration: 01m 15s) [production]
06:07 <marostegui> Stop mysql db1114 for a reboot [production]
06:07 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1114 (duration: 01m 16s) [production]
05:55 <_joe_> depooling mw1227 from live traffic for investigation [production]
05:31 <marostegui> Start atop on db1114 with "-R" option enabled - T192551 [production]
05:31 <marostegui> Deploy schema change on db1110 - T191519 T188299 T190148 [production]
05:30 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1110 for alter table (duration: 01m 17s) [production]
05:21 <ariel@tin> Finished deploy [dumps/dumps@c2d3bb4]: keep completed stubs/abstracts/logs files around for retries (duration: 00m 04s) [production]
05:20 <ariel@tin> Started deploy [dumps/dumps@c2d3bb4]: keep completed stubs/abstracts/logs files around for retries [production]
01:50 <krinkle@tin> Synchronized wmf-config/CommonSettings.php: If8fdce707d (duration: 01m 17s) [production]
2018-04-19
23:16 <ebernhardson@tin> Synchronized php-1.31.0-wmf.29/extensions/WikimediaEvents/modules/all/ext.wikimediaEvents.searchSatisfaction.js: SWAT: T187148: Turn off cirrus ab test (duration: 01m 18s) [production]
23:13 <ebernhardson@tin> Synchronized php-1.31.0-wmf.30/extensions/WikimediaEvents/modules/all/ext.wikimediaEvents.searchSatisfaction.js: SWAT: T187148: Turn off cirrus ab test (duration: 01m 17s) [production]
23:04 <thcipriani@tin> Synchronized php: complete group1 and group2 wikis back to 1.31.0-wmf.29 (duration: 01m 16s) [production]
22:30 <thcipriani@tin> rebuilt and synchronized wikiversions files: group1 and group2 wikis back to 1.31.0-wmf.29 [production]
21:41 <urandom> Start cleanup, restbase10{07,11,16}-c -- T189822 [production]
21:22 <urandom> Start cleanup, restbase10{07,11,16}-b -- T189822 [production]
21:15 <urandom> Start cleanup, restbase10{07,11,16}-a -- T189822 [production]
21:11 <urandom> restarting cassandra to (temporarily) rollback prometheus jmx exporter, restbase1010-c -- T189822, T192456 [production]
21:00 <ebernhardson> issue move of enwiki_content shard 2 from overloaded elastic1027 to elastic1017 [production]
20:48 <urandom> restarting cassandra to (temporarily) rollback prometheus jmx exporter, restbase1010-a -- T189822, T192456 [production]
20:48 <urandom> restarting cassandra to (temporarily) rollback prometheus jmx exporter -- T189822, T192456 [production]
20:32 <thcipriani@tin> rebuilt and synchronized wikiversions files: All wikis to 1.31.0-wmf.30 [production]
20:27 <milimetric@tin> Finished deploy [analytics/refinery@c1c9885]: Correcting hql from last deployment (duration: 05m 09s) [production]
20:22 <milimetric@tin> Started deploy [analytics/refinery@c1c9885]: Correcting hql from last deployment [production]
19:53 <thcipriani@tin> Synchronized php: group1 to 1.31.0-wmf.30 (duration: 01m 15s) [production]