2014-09-16
14:12 <akosiaris_> stopped apache on fenari. It was in swap; investigating [production]
12:35 <springle> Synchronized wmf-config/db-eqiad.php: repool s2 db1054, s3 db1027, s4 db1056, s5 db1037 (duration: 00m 10s) [production]
12:26 <godog> reboot ms-be1014, xfs issues [production]
12:22 <godog> temporarily chgrp wikidev /var/log/hhvm/error.log on mw1018 [production]
12:21 <reedy> Synchronized php-1.24wmf20/LocalSettings.php: Fix path to be /srv based (duration: 00m 32s) [production]
11:25 <reedy> Synchronized docroot and w: (no message) (duration: 00m 35s) [production]
11:12 <reedy> Purged l10n cache for 1.24wmf19 [production]
11:12 <reedy> Purged l10n cache for 1.24wmf18 [production]
11:10 <reedy> Purged l10n cache for 1.24wmf15 [production]
09:21 <_joe_> reimaging mw1018 and mw1021 with HAT: removing from pybal, etc. [production]
06:29 <springle> xtrabackup clone db1037 to db2023 [production]
05:31 <springle> xtrabackup clone db1056 to db2019 [production]
04:01 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Sep 16 04:01:05 UTC 2014 (duration 1m 4s) [production]
03:11 <springle> xtrabackup clone db1027 to db2018 [production]
03:04 <LocalisationUpdate> completed (1.24wmf21) at 2014-09-16 03:04:46+00:00 [production]
02:53 <springle> xtrabackup clone db1054 to db2017 [production]
02:50 <springle> Synchronized wmf-config/db-eqiad.php: depool s2 db1054, s3 db1027, s4 db1056, s5 db1037 for codfw cloning (duration: 01m 12s) [production]
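The 02:50 depool (and the matching 12:35 repool later that day) is an edit to the replica load lists in wmf-config/db-eqiad.php followed by a sync. A minimal sketch of that pattern, assuming the LBFactory_Multi layout the file uses; hostnames and weights below are illustrative, not the actual values:

```php
<?php
// Sketch only: depooling a replica before cloning is just removing (or
// commenting out) its entry in the section's load map, then syncing the file.
$wgLBFactoryConf['sectionLoads']['s2'] = array(
    'db1018' => 0,      // section master (weight 0 for reads); illustrative
    'db1021' => 100,    // pooled replica; illustrative weight
    // 'db1054' => 100, // depooled while db2017 is cloned; uncomment to repool
);
```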
02:39 <springle> Synchronized wmf-config/db-eqiad.php: repool db1036, depool db1002 (duration: 00m 07s) [production]
02:31 <LocalisationUpdate> completed (1.24wmf20) at 2014-09-16 02:31:16+00:00 [production]
2014-09-15
23:32 <maxsem> Synchronized php-1.24wmf21/resources/: SWAT: https://gerrit.wikimedia.org/r/#/c/160488/1 https://gerrit.wikimedia.org/r/#/c/160543/ (duration: 00m 06s) [production]
23:26 <bblack> restarting lvs1001 for HT disable + kernel upgrade [production]
23:19 <maxsem> Synchronized php-1.24wmf21/extensions/VisualEditor/: SWAT: https://gerrit.wikimedia.org/r/#/c/160554/ (duration: 00m 07s) [production]
23:12 <bblack> restarting lvs1002 for HT disable + kernel upgrade [production]
23:07 <greg-g> Running sample job on integration-slave1006 and warming up npmjs.org cache [production]
22:56 <Krinkle> Running sample job on integration-slave1008 and warming up npmjs.org cache [production]
22:49 <Krinkle> Running sample job on integration-slave1007 and warming up npmjs.org cache [production]
22:48 <Krinkle> Pooling the newly set up Trusty-based Jenkins slaves (integration-slave1006, integration-slave1007 and integration-slave1008) [production]
22:42 <bblack> dropping static routes for 2620:0:861:ed1a::[d,f,10,11] -> lvs1005 from cr[12]-eqiad (only ::11, misc-web-lb, is of any consequence; the addresses are advertised via BGP anyway, and the static routes were preventing failover to lvs1002) [production]
21:28 <cscott> updated OCG to version 188a3c221d927bd0601ef5e1b0c0f4a9d1cdbd31 [production]
20:46 <subbu> deployed Parsoid version b845bff9 [production]
18:49 <ejegg> Synchronized php-1.24wmf20/extensions/CentralNotice/: Update CentralNotice to remove jquery.json dependency (duration: 00m 23s) [production]
18:46 <hoo> Sync to tmh100[12] failed, according to awight [production]
18:44 <ejegg> Synchronized php-1.24wmf21/extensions/CentralNotice/: Update CentralNotice to remove jquery.json dependency (duration: 00m 09s) [production]
18:43 <manybubbles> performance tests show Cirrus should handle jawiki with no problem, but if load spirals out of control and I'm not around, revert https://gerrit.wikimedia.org/r/#/c/160465/ [production]
18:40 <hoo> Local part of the global rename of Gnumarcoo => .avgas fatally timed out on itwiki. This needs to be fixed by hand. [production]
18:40 <manybubbles> Setting Cirrus as jawiki's primary search backend went well, but Japan is mostly asleep. If Elasticsearch load takes a turn for the worse in four or five hours, we'll know how it went. [production]
17:14 <bd808> Restarted elasticsearch on logstash1003; 2014-09-14T09:33:57Z java.lang.OutOfMemoryError [production]
17:09 <_joe_> killing salt-call on all mediawiki hosts [production]
17:06 <bd808> Restarted elasticsearch on logstash1001; 2014-09-15T06:12:09Z java.lang.OutOfMemoryError [production]
17:04 <bblack> using salt to kill salt-minion everywhere... [production]
17:02 <bd808> Restarted logstash on logstash1001. I hoped this would fix the dashboards, but it looks like the backing elasticsearch cluster is too sad for them to work at the moment. [production]
16:55 <bd808> Restarted hung elasticsearch service on logstash1002 [production]
16:15 <manybubbles> jawiki now has Cirrus as primary. We're back to where we were before the great cascading failure of two months ago [production]
16:13 <manybubbles> Synchronized wmf-config/InitialiseSettings.php: (no message) (duration: 00m 06s) [production]
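The 16:13 sync is presumably the per-wiki override that flipped jawiki's default search backend in wmf-config/InitialiseSettings.php. A sketch of the usual default-plus-per-wiki override shape; the setting name below is hypothetical, since the log entry carries no message:

```php
<?php
// Hypothetical setting name; only the default-plus-per-wiki override pattern
// used throughout InitialiseSettings.php is the point here.
$wgConf->settings['wmgUseCirrusAsDefaultSearch'] = array(
    'default' => false,
    'jawiki'  => true,  // Cirrus becomes jawiki's primary search backend
);
```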
15:29 <marktraceur> Synchronized php-1.24wmf21/extensions/MultimediaViewer/: [SWAT] Several backports for metrics and bugfixes in Media Viewer (duration: 00m 07s) [production]
15:27 <marktraceur> Synchronized php-1.24wmf20/extensions/MultimediaViewer/: [SWAT] Several backports for metrics and bugfixes in Media Viewer (duration: 00m 07s) [production]
15:18 <marktraceur> Synchronized php-1.24wmf21/extensions/GeoCrumbs/GeoCrumbs.class.php: [SWAT] Handle return value NULL of GeoCrumbs::getParserCache (duration: 00m 07s) [production]
15:17 <marktraceur> Synchronized php-1.24wmf20/extensions/GeoCrumbs/GeoCrumbs.class.php: [SWAT] Handle return value NULL of GeoCrumbs::getParserCache (duration: 00m 07s) [production]
15:06 <marktraceur> Synchronized wmf-config/: [SWAT] Remove 'renameuser' right from bureaucrats on CentralAuth wikis (duration: 00m 09s) [production]
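Removing a right from a group is, in stock MediaWiki terms, a one-line permissions change; the actual wmf-config edit likely went through its per-wiki settings arrays, so treat this as a sketch:

```php
<?php
// Sketch: drop the local 'renameuser' right from bureaucrats; on CentralAuth
// wikis, renames go through the central (global) rename process instead.
$wgGroupPermissions['bureaucrat']['renameuser'] = false;
```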
14:54 <aude> Synchronized wmf-config/Wikibase.php: Bump wikibase memcached key for test.wikidata, test, test2 (duration: 00m 16s) [production]
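The 14:54 "bump" invalidates Wikibase's shared memcached entries by changing the shared cache key so old entries are never read again. A sketch using Wikibase's sharedCacheKeyPrefix repo setting; the exact variable and suffix format used in wmf-config/Wikibase.php are assumptions:

```php
<?php
// Sketch: changing the shared cache key prefix makes previously cached
// Wikibase entries unreachable, forcing a refill for test.wikidata/test/test2.
$wgWBRepoSettings['sharedCacheKeyPrefix'] = 'wikibase_shared/testwikidata-20140915'; // suffix illustrative
```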