2016-02-17
21:35 <mobrovac> restbase restarted restbase1002 on nodejs v4.3.0 [production]
21:11 <jzerebecki> reloading zuul for e11a9ff..d0914a7 [releng]
20:46 <jzerebecki> reloading zuul for 52b90b2..e11a9ff [releng]
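The zuul reloads above apply newly merged configuration commits. A minimal sketch of such a reload, assuming a local checkout of the config repo on the scheduler host (the path is illustrative, not taken from the log):
    # update the zuul config checkout to the new commits
    cd /etc/zuul/wikimedia && git pull
    # ask the zuul scheduler to re-read its layout without dropping queued jobs
    service zuul reload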
20:40 <papaul> es201[1-9] - signing puppet certs, salt-key, initial run [production]
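Provisioning the new es201x hosts follows the usual pattern of signing the Puppet certificate, accepting the Salt key, and running the agent once; a sketch for a single host (hostname and domain are illustrative, not from the log):
    # on the puppetmaster: sign the pending certificate request (Puppet 3.x syntax)
    puppet cert sign es2011.codfw.wmnet
    # on the salt master: accept the new minion key
    salt-key -a es2011.codfw.wmnet
    # on the new host itself: initial catalog run
    puppet agent --test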
20:25 <krinkle@tin> Synchronized wmf-config/CommonSettings.php: Re-enable T99096 for mediawiki.org (duration: 01m 29s) [production]
20:23 <catrope@tin> Synchronized docroot/: (no message) (duration: 01m 33s) [production]
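The "Synchronized ..." entries are logged automatically when a file is pushed from the deployment host; the operator invocation behind such an entry looks roughly like the following (a sketch, assuming the standalone sync-file wrapper in use at the time):
    # from the deployment host: push one config file to the app servers and log the reason
    sync-file wmf-config/CommonSettings.php 'Re-enable T99096 for mediawiki.org'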
19:18 <yuvipanda> truncate 1.2T php error log file on labstore1003 from cluebot [production]
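A multi-terabyte log is normally emptied in place rather than deleted, so whatever process holds it open keeps a valid file descriptor; a minimal sketch (the path is hypothetical, the log entry does not give it):
    # shrink the file to zero bytes without unlinking it
    truncate -s 0 /var/log/php/error.log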
18:59 <jzerebecki> updating cherry-pick https://gerrit.wikimedia.org/r/#/c/204528/15 on integration-puppetmaster T126699 [releng]
18:35 <jynus> testing now that alerts still work by stopping db1024 replication (depooled) [production]
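Verifying that lag alerts still fire only needs replication paused on a depooled replica; a sketch, assuming direct mysql client access on the host:
    # stop the replication threads so lag starts accumulating
    mysql -e 'STOP SLAVE;'
    # ...confirm the alert fires, then resume
    mysql -e 'START SLAVE;'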
18:30 <krinkle@tin> Synchronized wmf-config/CommonSettings.php: T127194 (duration: 01m 31s) [production]
18:27 <jynus> no issues found with new mysql, lag monitoring, re-enabling puppet again on the pending eqiad servers [production]
18:11 <jzerebecki> updated cherry-pick https://gerrit.wikimedia.org/r/#/c/204528/14 on integration-puppetmaster T126699 [releng]
17:49 <bblack> restarting pybal on eqiad primary LVS ( lvs100[123] ) [production]
17:47 <bblack> restarting pybal on codfw primary LVS ( lvs200[123]) [production]
17:42 <bblack> restarting pybal on ulsfo/esams primary LVS ( lvs[34]00[12]) [production]
17:40 <bblack> restarting pybal on eqiad backup LVS ( lvs100[456] ) [production]
17:38 <bblack> restarting pybal on eqiad inactive LVS clusters ( lvs1007-12 ) [production]
17:38 <bblack> restarting pybal on codfw backup LVS ( lvs200[456] ) [production]
17:34 <bblack> restarting pybal on ulsfo/esams backup LVS ( lvs[34]00[34]) [production]
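The pybal restarts above were rolled through one load-balancer group at a time; on each LVS host pybal runs as a normal system service, so the per-host step is roughly (a sketch, not the exact invocation used):
    # restart the PyBal load-balancer daemon on a single LVS host
    service pybal restart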
17:13 <hoo> Updated the sites and site_identifiers tables for all non-Wikipedias (including Wikidata) [production]
17:02 <ema> depooled ulsfo https://phabricator.wikimedia.org/T127094 [production]
16:48 <ostriches> purged ancient boardvote gpg key from mediawiki fleet. unused since forever. [production]
16:25 <anomie@tin> Synchronized wmf-config/: SWAT: Undeploy Extension:ApiSandbox (duration: 01m 30s) [production]
16:20 <anomie@tin> Synchronized wmf-config/CommonSettings.php: SWAT: Remove $wgMWOAuthGrantPermissions (duration: 01m 34s) [production]
16:16 <urandom> restbase deploy (15a6c50) complete, sans restbase1008.eqiad.wmnet (down for maintenance during deploy) [production]
16:16 <anomie> Ran namespaceDupes.php on tawiki [production]
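namespaceDupes.php is a stock MediaWiki maintenance script for resolving titles that collide with newly added namespaces or aliases; on this cluster maintenance scripts are normally run through the mwscript wrapper. A sketch, assuming the --fix flag was used to actually move the conflicting pages:
    # report and repair pages whose titles now conflict with a namespace or alias
    mwscript namespaceDupes.php --wiki=tawiki --fix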
16:14 <urandom> restbase deploy (15a6c50) complete [production]
16:14 <hoo> Re-populating the sites and site_identifiers table for all Wikipedias and testwikidata [production]
16:10 <urandom> restbase deploy restarting at restbase1009 [production]
16:09 <anomie@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: New userrights and configuration for cswiki (task [[phab:T126931|]]) (duration: 01m 31s) [production]
16:09 <urandom> restbase deploy stalled at restbase1008 (under maintenance) [production]
16:05 <anomie@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: Namespace aliases for tawiki (task [[phab:T126604|]]) (duration: 01m 31s) [production]
16:00 <urandom> continuing production-wide restbase deploy (15a6c50) [production]
15:58 <godog> copy restbase.{v1_*,sys_*,ALL,GET,HEAD,POST,OPTIONS,_robots} to restbase.external on graphite1001 and graphite2001 [production]
15:55 <godog> copy restbase.private to restbase.internal on graphite1001 and graphite2001 [production]
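Graphite stores each metric as a Whisper file on disk, so copying one metric subtree to a new name is commonly just a filesystem copy under the whisper data directory; a rough sketch (the path assumes a default Graphite layout, which the log does not confirm):
    # duplicate the restbase.private tree under the new restbase.internal name
    cp -a /var/lib/carbon/whisper/restbase/private /var/lib/carbon/whisper/restbase/internal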
15:53 <aude@tin> Synchronized php-1.27.0-wmf.13/extensions/Wikidata: Fix caching data types bug: T127095 (duration: 01m 44s) [production]
15:53 <urandom> canary deploy of restbase to restbase1001.eqiad.wmnet (15a6c50) complete [production]
15:53 <urandom> canary deploy of restbase to restbase1001.eqiad.wmnet (15a6c50) [production]
15:51 <bblack> package upgrades commencing on lvs* [production]
15:43 <urandom> restbase staging deploy (15a6c50) complete [production]
15:38 <urandom> deploying restbase (15a6c50) in staging [production]
15:33 <jynus> stopping puppet on all database hosts (db, dbstore, es, etc.) for lag alert testing [production]
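Disabling the Puppet agent across a whole class of hosts was typically driven from the Salt master; a sketch, assuming a regex target over database-class hostnames and the standard agent disable flag:
    # stop puppet runs on db/dbstore/es hosts, leaving the reason in the lock message
    salt -E '^(db|dbstore|es)[0-9]+' cmd.run 'puppet agent --disable "lag alert testing"'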
15:32 <jzerebecki> reloading zuul for e945e92..52b90b2 [releng]
15:14 <ottomata> restarting eventlogging back with pykafka 2.1.1 [analytics]
15:12 <ottomata> restarting eventlogging with pykafka 2.2. [analytics]
15:10 <jzerebecki> reloading zuul for bba1873..e945e92 [releng]
14:44 <hashar> On Trusty slaves, reducing number of executors from 4 to 3 to leave room for Mysql/Java/Lua etc [releng]
14:05 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Repool db1022 after maintenance (duration: 01m 34s) [production]
14:02 <bblack> package upgrades on cp* commence [production]
13:23 <elukey> rebooted kafka1013 for maintenance [production]