2016-02-10 §
23:49 <tgr> switched mw1017 to wmf.13 (all groups) [production]
23:38 <ori@mira> rebuilt wikiversions.php and synchronized wikiversions files: group1 wikis back to php-1.27.0-wmf.12 [production]
23:34 <ori> Restarted HHVM on mw1017 [production]
23:27 <urandom> performing rolling restart of Cassandra in restbase staging (experimental gc settings) [production]
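The usual rolling-restart pattern drains each node, restarts it, and waits for it to rejoin before moving on; a minimal shell sketch, assuming standard nodetool/service tooling and illustrative staging hostnames:
    for host in restbase-test1001 restbase-test1002 restbase-test1003; do  # hypothetical staging hosts
      ssh "$host" 'nodetool drain && service cassandra restart'
      sleep 30                                   # give the JVM time to come back up
      ssh "$host" nodetool status | grep '^UN'   # every node should be Up/Normal before the next restart
    done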
23:15 <ori@mira> Synchronized wmf-config/mobile.php: I6946eccf9c: Better hack for T49647 (duration: 02m 19s) [production]
23:10 <ori@tin> Synchronized php-1.27.0-wmf.12/includes/WebResponse.php: I13fcc3ce4: Allow changing cookie options in WebResponseSetCookie hook (duration: 01m 30s) [production]
23:08 <ori@tin> Synchronized php-1.27.0-wmf.13/includes/WebResponse.php: I13fcc3ce4: Allow changing cookie options in WebResponseSetCookie hook (duration: 01m 37s) [production]
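Both branches carry the same backport, so the file is synced once per branch from the deployment host; a sketch assuming the 2016-era sync-file wrapper and the usual /srv/mediawiki-staging checkout (command name and path are assumptions):
    cd /srv/mediawiki-staging
    sync-file php-1.27.0-wmf.13/includes/WebResponse.php 'I13fcc3ce4: Allow changing cookie options in WebResponseSetCookie hook'
    sync-file php-1.27.0-wmf.12/includes/WebResponse.php 'I13fcc3ce4: Allow changing cookie options in WebResponseSetCookie hook'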
22:55 <hashar_> gallium: find /var/lib/jenkins/config-history/config -type f -wholename '*/2015*' -delete ( https://phabricator.wikimedia.org/T126552 ) [releng]
22:49 <krinkle@mira> Synchronized w/static.php: (no message) (duration: 02m 18s) [production]
22:40 <krenair@mira> Synchronized php-1.27.0-wmf.13/extensions/MobileFrontend/resources/mobile.loggingSchemas/SchemaEdit.js: https://gerrit.wikimedia.org/r/#/c/269755/ (duration: 02m 23s) [production]
22:36 <urandom> rolling restart of Cassandra staging complete (experimental gc settings) [production]
22:35 <krenair@mira> Synchronized php-1.27.0-wmf.12/extensions/MobileFrontend/resources/mobile.loggingSchemas/SchemaEdit.js: https://gerrit.wikimedia.org/r/#/c/269848/ (duration: 02m 18s) [production]
22:34 <Krinkle> Zuul is back up and processing Gerrit events, but jobs are still queued indefinitely. Jenkins is not accepting new jobs [releng]
22:32 <yurik> deployed and restarted kartotherian services [production]
22:31 <Krinkle> Full restart of Zuul. Seems Gearman/Zuul got stuck. All executors were idling. No new Gerrit events processed either. [releng]
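One way to confirm the Gearman side is wedged is its plain-text admin protocol on port 4730, which lists queued/running/available counts per registered function; a hedged sketch (the Zuul host and init service names are assumptions):
    echo status | nc -q1 gallium.wikimedia.org 4730   # many queued, zero running => executors idling
    service zuul stop && service zuul-merger stop
    service zuul-merger start && service zuul start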
22:28 <urandom> performing rolling restart of Cassandra in restbase staging (experimental gc settings) [production]
22:23 <yurik> deployed and restarted tilerator & tileratorui services [production]
22:15 <ori> Restarted apache on palladium and strontium [production]
21:42 <urandom> ephemerally lowering compactor thread count on restbase1002 from 10 to 8 (to limit combined working space) [production]
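The persistent knob is concurrent_compactors in cassandra.yaml; an ephemeral change is typically pushed over JMX so it disappears at the next restart. A sketch assuming a jmxterm jar is at hand and that the CompactionManager MBean exposes a CoreCompactorThreads attribute (both assumptions):
    # persistent equivalent: concurrent_compactors: 8  in /etc/cassandra/cassandra.yaml
    echo 'set -b org.apache.cassandra.db:type=CompactionManager CoreCompactorThreads 8' \
      | java -jar jmxterm.jar -l restbase1002:7199 -n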
21:22 <legoktm> cherry-picking https://gerrit.wikimedia.org/r/#/c/269370/ on integration-puppetmaster again [releng]
21:21 <subbu> finished deploying parsoid version 8976ab93 [production]
21:16 <hashar> CI dust has settled. Krinkle and I have pooled a lot more Trusty slaves to accommodate the overload caused by switching to php55 (jobs run on Trusty) [releng]
21:16 <hashar> CI dust has settled. Krinkle and I have pooled a lot more Trusty slaves to accommodate the overload caused by switching to php55 (jobs run on Trusty) [production]
21:14 <subbu> synced code + restarted parsoid on wtp1001 as a canary [production]
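The canary step restarts the service on one backend and checks it is serving again before the rest of the cluster is touched; a minimal sketch, assuming Parsoid's usual port 8000 (the health-check URL is illustrative):
    ssh wtp1001.eqiad.wmnet 'service parsoid restart'
    sleep 5
    curl -fsS -o /dev/null -w '%{http_code}\n' http://wtp1001.eqiad.wmnet:8000/   # expect 200 before rolling on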
21:10 <subbu> starting parsoid deploy [production]
21:08 <hashar> pooling trusty slaves 1009, 1010, 1021, 1022 with 2 executors (they are ci.medium) [releng]
20:38 <hashar> cancelling mediawiki-core-jsduck-publish and mediawiki-core-doxygen-publish jobs manually. They will catch up on next merge [releng]
20:34 <Krinkle> Pooled integration-slave-trusty-1019 (new) [releng]
20:28 <Krinkle> Pooled integration-slave-trusty-1020 (new) [releng]
20:24 <Krinkle> created integration-slave-trusty-1019 and integration-slave-trusty-1020 (ci1.medium) [releng]
20:24 <chasemp> tc per client shaping for labstore1001 test [production]
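Per-client shaping with tc generally means an HTB root qdisc, one class per client (or client range), and a u32 filter steering that client's traffic into its class; a hedged sketch with illustrative interface, rates, and addresses:
    DEV=eth0
    tc qdisc add dev $DEV root handle 1: htb default 10
    tc class add dev $DEV parent 1: classid 1:10 htb rate 1000mbit                # default class
    tc class add dev $DEV parent 1: classid 1:20 htb rate 50mbit ceil 100mbit     # shaped client class
    tc filter add dev $DEV parent 1: protocol ip prio 1 u32 match ip dst 10.68.16.0/21 flowid 1:20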
20:21 <demon@mira> Synchronized php-1.27.0-wmf.13/includes/interwiki/Interwiki.php: fix cache stuff (duration: 02m 18s) [production]
20:18 <hashar> created integration-slave-trusty-1009 and 1010 (trusty ci.medium) [releng]
20:17 <apergos> cleaned up integration slaves trusty 1001, 10012, 10013, 1016, missed in the first round. [production]
20:10 <demon@mira> rebuilt wikiversions.php and synchronized wikiversions files: rebuild [production]
20:10 <demon@mira> Synchronized multiversion/MWWikiversions.php: rm newline addition (duration: 02m 19s) [production]
20:06 <hashar> creating integration-slave-trusty-1021 and integration-slave-trusty-1022 (ci.medium) [releng]
20:03 <demon@mira> rebuilt wikiversions.php and synchronized wikiversions files: group1 to wmf.13 [production]
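The "rebuilt wikiversions.php and synchronized wikiversions files" entries are the standard train step: point the group1 wikis at the new branch in wikiversions.json, then rebuild and sync; a sketch assuming the 2016-era sync-wikiversions wrapper on the deployment host (command name and path are assumptions):
    cd /srv/mediawiki-staging
    # edit wikiversions.json so group1 wikis map to php-1.27.0-wmf.13, then:
    sync-wikiversions 'group1 to wmf.13'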
19:48 <greg-g> that cleanup was done by apergos [releng]
19:48 <greg-g> did cleanup across all integration slaves, some were very close to out of room. results: https://phabricator.wikimedia.org/P2587 [releng]
19:45 <apergos> did cleanup across all integration slaves, some were very close to out of room. results: https://phabricator.wikimedia.org/P2587 [production]
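Checking how close the slaves are to running out of room is a quick loop over the fleet; a sketch with an illustrative host range and mount points:
    for h in integration-slave-trusty-{1001..1018}; do
      echo "== $h =="; ssh "$h" df -h / /mnt 2>/dev/null
    done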
19:43 <hashar> Dropping Precise m1.large slaves integration-slave-precise-1014 and integration-slave-precise-1013, most load shifted to Trusty (php53 -> php55 transition) [releng]
19:26 <mutante> ms-be1008 - powercycling - the known XFS issue [production]
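Powercycling a box that has hit the known XFS hang is done out-of-band through its management interface; a sketch assuming IPMI over LAN and the usual .mgmt naming (authentication flags omitted):
    ipmitool -I lanplus -H ms-be1008.mgmt.eqiad.wmnet -U root chassis power status   # check current state first
    ipmitool -I lanplus -H ms-be1008.mgmt.eqiad.wmnet -U root chassis power cycle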
19:04 <mobrovac> restbase disabled puppet in staging, testing brotli compression which requires JAVA_OPTS tuning [production]
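Disabling puppet keeps the hand-edited JVM options from being reverted while the test runs; a sketch of the pattern (file paths are illustrative):
    puppet agent --disable 'testing brotli compression'
    # hand-edit the Cassandra JVM options, e.g. in /etc/cassandra/cassandra-env.sh, then:
    service cassandra restart
    # afterwards, re-enable and let puppet restore the managed config:
    puppet agent --enable && puppet agent -t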
18:30 <mutante> puppetstoredconfigclean.rb iodine.wikimedia.org, revoke puppet cert, delete salt key on new master [production]
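The cleanup steps named here map onto standard puppet and salt commands on the masters; a sketch assuming 2016-era puppet 3 and salt tooling:
    puppetstoredconfigclean.rb iodine.wikimedia.org   # drop the node's stored/exported resources
    puppet cert clean iodine.wikimedia.org            # revoke and remove the signed cert
    salt-key -d iodine.wikimedia.org                  # delete the minion key on the salt master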
18:20 <Krinkle> Creating a Trusty slave to support increased demand following the MediaWiki php53 (precise) -> php55 (trusty) bump [releng]
18:15 <mutante> iodine - shutdown, decom [production]
18:10 <mutante> iodine - schedule downtime, stop puppet, stop salt, .. [production]
17:21 <demon@mira> Synchronized multiversion/MWWikiversions.php: newlines in wikiversions.json (duration: 02m 21s) [production]
17:15 <demon@mira> Synchronized errorpages/404.html: minor html fix (duration: 02m 17s) [production]