2014-09-18
02:52 <yurikR> Fixing Graph ext namespace name - otherwise Graph: ns visits get the WMF screen of death [production]
02:36 <LocalisationUpdate> completed (1.24wmf20) at 2014-09-18 02:36:46+00:00 [production]
00:32 <marktraceur> Finished scap: [SWAT] Move things out of assets/ and into resources/assets/ (duration: 35m 28s) [production]
2014-09-17
23:57 <marktraceur> Started scap: [SWAT] Move things out of assets/ and into resources/assets/ [production]
23:47 <marktraceur> Synchronized wmf-config/InitialiseSettings.php: [SWAT] Enable Graph on metawiki and labswiki (duration: 00m 10s) [production]
23:42 <marktraceur> Synchronized php-1.24wmf21/extensions/Graph/: [SWAT] Update Graph to master (duration: 00m 08s) [production]
23:41 <marktraceur> Synchronized php-1.24wmf20/extensions/Graph/: [SWAT] Update Graph to master (duration: 00m 07s) [production]
23:35 <marktraceur> Synchronized php-1.24wmf21/extensions/MultimediaViewer/: [SWAT] Fix reuse dropdown message weirdness (duration: 00m 07s) [production]
23:29 <marktraceur> Synchronized php-1.24wmf20/extensions/MultimediaViewer/: [SWAT] Fix reuse dropdown message weirdness (duration: 00m 08s) [production]
23:10 <marktraceur> Synchronized php-1.24wmf21/extensions/UploadWizard/: [SWAT] Fix EventLogging schema declarations for UploadWizard (duration: 00m 11s) [production]
21:41 <mutante> fixing updates on planet feeds - file permissions [production]
21:11 <manybubbles> restarting the rebuild of cirrus's enwiki index now that I've found the reason it wasn't working before - the new index was putting too many shards on an already full node and overwhelming it. Silly allocation algorithm! That's a bad idea! [production]
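A minimal sketch of the kind of guard that would prevent this, assuming the CirrusSearch indices live on an Elasticsearch 1.x cluster: cap how many shards of the new index may land on any single node. The endpoint, index name, and cap value are illustrative assumptions, not taken from the log:

    # Cap shards-per-node for the index being rebuilt (all names/values assumed).
    import json
    import requests

    ES = "http://elastic1001.eqiad.wmnet:9200"   # assumed cluster endpoint
    INDEX = "enwiki_general_first"               # assumed name of the new index

    settings = {"index.routing.allocation.total_shards_per_node": 2}
    resp = requests.put(f"{ES}/{INDEX}/_settings", data=json.dumps(settings))
    resp.raise_for_status()
    print(resp.json())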
21:08 <yurik> Synchronized php-1.24wmf21/extensions/ZeroPortal/: (no message) (duration: 01m 05s) [production]
20:19 <godog> rebooting ms-be1006 [production]
19:00 <Krinkle> jenkins-slave tmpfs on lanthanum was filling up (> 500MB). I purged tmp dbs for old jobs. We should get these purged automatically and also increase the size as 500MB is too little. [production]
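A rough sketch of the automatic purge suggested above: delete per-job tmp entries on the jenkins-slave tmpfs that have been idle for too long. The mount point and the age threshold are assumptions; growing the tmpfs itself would be a separate change:

    # Purge stale per-job tmp entries on the jenkins-slave tmpfs (paths assumed).
    import os
    import shutil
    import time

    TMPFS = "/var/lib/jenkins-slave/tmpfs"   # assumed mount point
    MAX_AGE = 24 * 3600                      # assumed idle threshold: 24 hours

    now = time.time()
    for name in os.listdir(TMPFS):
        path = os.path.join(TMPFS, name)
        if now - os.path.getmtime(path) > MAX_AGE:
            if os.path.isdir(path):
                shutil.rmtree(path, ignore_errors=True)
            else:
                os.remove(path)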
18:59 <robh> disabled icinga alerts for ms-be1001, rebooting it to look at its raid bios settings for codfw deployment mirroring [production]
18:47 <yurik> Synchronized php-1.24wmf20/extensions/: update to JsonConfig, ZeroBanner, ZeroPortal (duration: 01m 39s) [production]
18:43 <yurik> Synchronized php-1.24wmf21/extensions/: update to JsonConfig, ZeroBanner, ZeroPortal (duration: 01m 35s) [production]
18:40 <yurik> Synchronized wmf-config/: private wikis login/logout page names, zeroportal impersonator acct (duration: 01m 06s) [production]
18:23 <mutante> phabricator - made aklapper an admin [production]
17:26 <andrew> rebuilt wikiversions.cdb and synchronized wikiversions files: (no message) [production]
17:24 <andrew> Synchronized wikiversions.json: (no message) (duration: 00m 05s) [production]
17:04 <manybubbles> cirrus brownout looks just about fixed. So! My plan for periodically explicitly merging deletes has some problems..... [production]
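For context, "explicitly merging deletes" on Elasticsearch 1.x can be expressed as an optimize call that expunges only deleted documents rather than force-merging every segment. A minimal sketch, with the endpoint and index name assumed:

    # Expunge deleted docs only, instead of a full force merge (names assumed).
    import requests

    ES = "http://elastic1001.eqiad.wmnet:9200"   # assumed cluster endpoint
    INDEX = "enwiki_general"                     # assumed index name

    resp = requests.post(f"{ES}/{INDEX}/_optimize",
                         params={"only_expunge_deletes": "true"})
    resp.raise_for_status()
    print(resp.json())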
16:42 <gwicke> restarted parsoid on wtp102{2,3,4} [production]
16:23 <akosiaris> restarted node on wtp boxes except wtp1022,wtp1023,wtp1024 [production]
16:23 <manybubbles> caused a cirrus brownout by executing a force merge for enwiki's general index. oops [production]
16:06 <manybubbles> Synchronized wmf-config/: set cirrus as primary search backend for ruwiki and make permanent some settings set on the fly (duration: 00m 06s) [production]
15:57 <manybubbles> manually pushed apart ruwiki and nlwiki's shards as well - might help - updated commit to reflect that [production]
15:42 <manybubbles> gerrit change to lock that into place is https://gerrit.wikimedia.org/r/#/c/160974/ and I'll deploy it in my window in 15 minutes. [production]
15:41 <manybubbles> manually forcing the shards of Cirrus's commonswiki file index apart from one another in an attempt to lower the consistently high load on elastic1013 [production]
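Pushing a shard off a hot node by hand is typically done with the cluster reroute API; a sketch under assumed names (elastic1013 is the overloaded node from the log; the index, shard number, and destination node are illustrative):

    # Move one shard off the hot node via the reroute API (names assumed).
    import json
    import requests

    ES = "http://elastic1001.eqiad.wmnet:9200"   # assumed cluster endpoint
    body = {"commands": [{"move": {
        "index": "commonswiki_file",             # assumed index name
        "shard": 0,                              # assumed shard number
        "from_node": "elastic1013",              # overloaded node per the log
        "to_node": "elastic1007",                # assumed destination node
    }}]}
    resp = requests.post(f"{ES}/_cluster/reroute", data=json.dumps(body))
    resp.raise_for_status()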
15:34 <reedy> Synchronized wmf-config/InitialiseSettings.php: Set wgMetaNamespace for labswiki (duration: 00m 14s) [production]
14:54 <springle> db1062 out of action for bug hunt https://mariadb.atlassian.net/browse/MDEV-6751 [production]
14:48 <reedy> Synchronized wmf-config/interwiki.cdb: (no message) (duration: 00m 16s) [production]
14:45 <godog> restarted apache2 on magnesium to validate removal of ssl certs [production]
13:38 <hashar> Zuul upgraded successfully apparently. [production]
13:33 <hashar> stopping zuul for upgrade [production]
13:29 <hashar> upgrading Zuul to 2.0.0.286.gb1811ab [production]
12:20 <hashar> upgrading jenkins 1.565.1 -> 1.565.2 [production]
09:53 <akosiaris> stopped apache2 on fenari (it was leaking memory), but puppet restarted it; need to kill this machine ASAP [production]
09:52 <springle> Synchronized wmf-config/db-eqiad.php: repool s1 db1061 (duration: 00m 08s) [production]
06:55 <springle> xtrabackup clone db1061 to db2016 [production]
06:52 <springle> Synchronized wmf-config/db-eqiad.php: depool s1 db1061 for codfw cloning (duration: 00m 07s) [production]
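The db1061 entries above record a depool -> xtrabackup clone -> repool cycle (the log is reverse-chronological). A rough sketch of what the clone step could look like, assuming innobackupex streaming over SSH; credentials, flags, and the target datadir are assumptions, and the real production tooling may differ:

    # Stream a hot backup from the depooled source to the codfw target (assumed).
    import subprocess

    SOURCE = "db1061.eqiad.wmnet"   # depooled source (from the log)
    TARGET = "db2016.codfw.wmnet"   # codfw clone target (from the log)
    DEST_DIR = "/srv/sqldata"       # assumed target datadir

    cmd = (
        f"ssh {SOURCE} 'innobackupex --stream=xbstream /tmp' "
        f"| ssh {TARGET} 'xbstream -x -C {DEST_DIR}'"
    )
    subprocess.run(cmd, shell=True, check=True)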
06:27 <springle> Synchronized wmf-config/db-eqiad.php: repool s7 db1039 (duration: 00m 08s) [production]
04:34 <tstarling> Synchronized docroot/bits: (no message) (duration: 00m 10s) [production]
04:32 <LocalisationUpdate> ResourceLoader cache refresh completed at Wed Sep 17 04:32:17 UTC 2014 (duration 32m 16s) [production]
03:17 <LocalisationUpdate> completed (1.24wmf21) at 2014-09-17 03:17:38+00:00 [production]
03:07 <springle> Synchronized wmf-config/db-eqiad.php: repool s6 db1015 (duration: 01m 41s) [production]
02:43 <LocalisationUpdate> completed (1.24wmf20) at 2014-09-17 02:43:02+00:00 [production]
02:21 <springle> xtrabackup clone db1048 to db2012 [production]
02:15 <springle> xtrabackup clone db1046 to db2011 [production]