2016-01-25
21:45 <mutante> alsafi - was reported down in icinga, is a ganeti VM - fixed by just logging in, as if it went to hibernate [production]
21:37 <mdholloway> mobileapps deployed 9252a22 [production]
21:30 <mutante> nitrogen - stop puppet, stop salt, remove from stored configs / icinga [production]
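Taking a host out of service like this usually means disabling the agents on the host and then clearing its certificate and stored configs on the puppetmaster, which is what removes its exported icinga checks; a rough sketch, with the FQDN assumed:

    # on the host being decommissioned
    puppet agent --disable "decommissioning"
    service salt-minion stop
    # on the puppetmaster: clean cert and deactivate stored configs (drops exported icinga checks)
    puppet node clean nitrogen.wikimedia.org
    puppet node deactivate nitrogen.wikimedia.org
    # run puppet on the icinga server so the removed checks get purged
    puppet agent -t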
20:19 <hoo@mira> Synchronized wmf-config/Wikibase-labs.php: (no message) (duration: 01m 28s) [production]
20:14 <chasemp> bump labstore nfs threads to 288 from 244 [production]
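The kernel NFS server thread count can be changed at runtime; a minimal sketch of a bump like this, using the standard Linux knobs (not taken from the log):

    cat /proc/fs/nfsd/threads    # current thread count
    rpc.nfsd 288                 # raise the running server to 288 threads
    # to persist across restarts on Debian/Ubuntu, set RPCNFSDCOUNT in /etc/default/nfs-kernel-server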
19:32 <paravoid> eqiad: removing static routes for 6to4/Teredo to nitrogen (decommissioning our own relays) [production]
19:10 <bd808> Live hacking on mw1017 to debug 1.27.0-wmf.11 issues. All wikis there currently set to use 1.27.0-wmf.11. [production]
19:05 <chasemp> labstore1001 temp change to CFQ scheduler on 01/22/2016 [production]
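Switching the I/O scheduler is a per-device sysfs write; a sketch, with the block device name assumed:

    cat /sys/block/sda/queue/scheduler        # lists available schedulers, active one in brackets
    echo cfq > /sys/block/sda/queue/scheduler # switch the running device to CFQ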
19:04 <chasemp> the nfsd thread change is on labstore1001 [production]
19:04 <chasemp> nfsd has 224 threads atm and was bumped up over the weekend [production]
18:58 <ori> removed unused wikiversions.cdb on mira and tin [production]
18:28 <jynus> retroactively logging the depool of mw1217, mw1178 and mw1257 3 hours ago (Jan 25 15:45:26) [production]
16:49 <ema> Finished migration of mobile traffic to text cluster in ulsfo https://phabricator.wikimedia.org/T109286 [production]
16:38 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Preparing ips for new parsercache deployments (third try) (duration: 01m 35s) [production]
16:26 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Preparing ips for new parsercache deployments (second try after running puppet) (duration: 03m 23s) [production]
16:25 <_joe_> restarting salt-minion on all deployment targets [production]
16:24 <_joe_> running salt deploy.fixurl on all deployment targets [production]
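Both of these are one-liners from the salt master; deploy.fixurl is a site-specific execution module named in the log, and the '*' target and batch size below are assumptions (a real run would normally narrow the match to deployment targets):

    salt '*' deploy.fixurl                         # site-specific module from the log entry
    salt -b 50 '*' service.restart salt-minion     # restart minions in batches of 50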
16:09 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Preparing ips for new parsercache deployments (duration: 03m 32s) [production]
15:35 <ema> Starting migration of mobile traffic to text cluster in ulsfo https://phabricator.wikimedia.org/T109286 [production]
15:14 <chasemp> restart of pdns and pdns-recursor on labservices1001 [production]
14:56 <jynus@mira> Synchronized wmf-config/db-eqiad.php: deploy new parsercache hardware (pc1004) substituting pc1001 (duration: 03m 25s) [production]
13:16 <elukey> ran kafka preferred-replica-election on kafka1022 to balance the leaders [production]
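A preferred-replica election is triggered with the stock Kafka tool; a sketch, with the ZooKeeper connect string assumed:

    kafka-preferred-replica-election.sh --zookeeper conf1001.eqiad.wmnet:2181/kafka   # ZK string assumed
    # without --path-to-json-file this runs the election for all partitions,
    # moving leadership back to each partition's preferred (first-listed) replica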
13:07 <elukey> restarting kafka on kafka1022 [production]
12:57 <elukey> restarting kafka on kafka1013 [production]
12:38 <elukey> restarting kafka on kafka1014 [production]
12:20 <jynus> compressed and truncated iridium's phab daemons.log - it was taking 20% of disk space [production]
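Compressing and truncating a runaway log without stopping the daemons can be done in place; a sketch, with the log path assumed:

    gzip -c /var/log/phd/daemons.log > /var/log/phd/daemons.log.$(date +%F).gz   # path assumed
    truncate -s 0 /var/log/phd/daemons.log   # empty the file while the writer keeps it open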
12:04 <ema> restarting kafka on kafka1018 [production]
11:26 <jynus> stopping mysql at pc1001 and cloning to pc1004 [production]
10:55 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Depool pc1001 for maintenance (clone to pc1004) (duration: 01m 41s) [production]
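A cold clone like this amounts to stopping the source mysqld and copying its datadir to the new host; a rough sketch, where the datadir path and the use of rsync over ssh are assumptions:

    # on pc1001, once it is depooled
    service mysql stop
    rsync -a /srv/sqldata/ pc1004.eqiad.wmnet:/srv/sqldata/
    # on pc1004, once the copy finishes
    chown -R mysql:mysql /srv/sqldata
    service mysql start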
10:11 <_joe_> switching the active deployment host to mira [production]
09:56 <ema> limiting GCLogFileSize and restarting kafka on kafka1012 [production]
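GC log growth on a broker is capped with the standard HotSpot rotation flags; a sketch of the relevant JVM options, with the log path, file count and size limit assumed:

    # appended to the broker JVM options, e.g. in /etc/default/kafka, before restarting the service
    -Xloggc:/var/log/kafka/kafkaServer-gc.log
    -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M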
09:31 <_joe_> rolling reboot of the eqiad appserver cluster [production]
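A rolling reboot of an appserver cluster is typically scripted as depool / reboot / wait / repool per host; a sketch using a hypothetical host list and hypothetical depool and pool helpers:

    for host in $(cat eqiad-appservers.txt); do    # hypothetical host list
        ssh "$host" depool                         # hypothetical helper: remove host from the load balancer
        ssh "$host" reboot
        until ssh -o ConnectTimeout=5 "$host" true 2>/dev/null; do sleep 10; done
        ssh "$host" pool                           # hypothetical helper: re-add host to the load balancer
    done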
09:27 <moritzm> installed fuse security update on labnodepool1001 (the other fuse installations are on Ubuntu, which doesn't ship the udev rule, but uses mountall instead) [production]
07:47 <paravoid> stat1002: umount -f /mnt/hdfs [production]
07:34 <_joe_> rebooting alsafi, unresponsive to ssh [production]
07:24 <_joe_> restarting hhvm on mw1148, stuck in HPHP::Treadmill::startRequest (__lll_lock_wait) [production]
07:23 <_joe_> restarting hhvm on mw1143, stuck in HPHP::SynchronizableMulti::waitImpl (__pthread_cond_wait) [production]
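Before restarting a wedged HHVM it helps to capture thread backtraces so the stuck frame (as noted in these two entries) can be identified; a sketch, with the output path assumed:

    gdb -p "$(pgrep -o -x hhvm)" -batch -ex 'thread apply all bt' > /tmp/hhvm-backtraces.txt
    service hhvm restart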
03:10 <tstarling@tin> Synchronized php-1.27.0-wmf.10/includes/parser/ParserCache.php: (no message) (duration: 00m 25s) [production]
03:03 <tstarling@tin> Synchronized php-1.27.0-wmf.10/includes/parser/ParserCache.php: (no message) (duration: 00m 25s) [production]
03:02 <tstarling@tin> Synchronized php-1.27.0-wmf.10/includes/parser/ParserOutput.php: (no message) (duration: 00m 27s) [production]
02:30 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon Jan 25 02:30:13 UTC 2016 (duration 6m 52s) [production]
02:23 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.10) (duration: 09m 09s) [production]
2016-01-24
02:31 <l10nupdate@tin> ResourceLoader cache refresh completed at Sun Jan 24 02:31:21 UTC 2016 (duration 6m 58s) [production]
02:24 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.10) (duration: 09m 11s) [production]
2016-01-23
19:03 <ebernhardson@tin> Synchronized wmf-config/CirrusSearch-production.php: config change to repoint morelike search from eqiad to codfw (duration: 00m 26s) [production]
19:02 <ebernhardson@tin> Synchronized php-1.27.0-wmf.10/extensions/CirrusSearch/: Support code for repointing morelike queries from eqiad to codfw (duration: 00m 30s) [production]
19:00 <ebernhardson> repoint most expensive search queries (morelike) at codfw cluster to reduce load. 1/2 of eqiad cluster maxed on cpu [production]
16:47 <Krinkle> mwscript deleteEqualMessages.php --wiki wowiki [production]
13:25 <jynus> upgrading and restarting db1046 [production]
13:13 <jynus> db1046 maintenance finished- restarting mysql to apply latest configuration [production]