2014-04-03
21:46 <demon> synchronized php-1.23wmf20/extensions/CirrusSearch 'Rolling back to 1.23wmf20 branch point from master' [production]
21:38 <demon> synchronized php-1.23wmf20/extensions/CirrusSearch 'Updating Cirrus to master' [production]
21:33 <demon> synchronized wmf-config/CirrusSearch-production.php 'italian wikis getting interwiki search. they're my favorite beta testers' [production]
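The 21:33 sync above turns on cross-wiki ("interwiki") search results for the Italian wikis via CirrusSearch-production.php. A hedged sketch of what such a stanza might look like, assuming CirrusSearch's interwiki-sources setting maps interwiki prefixes to source wikis; the prefixes and wiki names below are illustrative, not the deployed configuration:
<syntaxhighlight lang="php">
// Illustrative only: map interwiki prefixes to the wikis whose search
// indexes should contribute extra results. The actual Italian-wiki
// configuration and exact option format are not recorded in this log.
$wgCirrusSearchInterwikiSources = array(
    'wikt' => 'itwiktionary',
    'q'    => 'itwikiquote',
    'b'    => 'itwikibooks',
);
</syntaxhighlight>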
19:23 <reedy> synchronized docroot and w [production]
19:21 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: group0 wikis to 1.23wmf21 [production]
19:17 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikipedias actually to 1.23wmf20 [production]
19:15 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikipedias to 1.23wmf20 [production]
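The three reedy entries above repoint groups of wikis at new MediaWiki branches by regenerating wikiversions.cdb, the compiled dbname-to-version map consulted on each request. A minimal sketch of such a lookup, assuming a plain CDB of dbname => "php-1.23wmfNN" pairs (the path and key layout are assumptions; WMF's actual multiversion code is not shown in this log):
<syntaxhighlight lang="php">
<?php
// Hypothetical lookup against a dbname => version CDB. Requires PHP's dba
// extension built with the cdb handler; path and key format are assumed.
function getWikiVersion( $dbname, $cdbPath = '/srv/wikiversions.cdb' ) {
    $handle = dba_open( $cdbPath, 'r', 'cdb' );
    if ( $handle === false ) {
        throw new RuntimeException( "Cannot open $cdbPath" );
    }
    $version = dba_fetch( $dbname, $handle ); // e.g. "php-1.23wmf20"
    dba_close( $handle );
    return $version === false ? null : $version;
}

// After the 19:15 rebuild above, a Wikipedia dbname would resolve to
// "php-1.23wmf20"; after 19:21, group0 wikis resolve to "php-1.23wmf21".
</syntaxhighlight>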
19:09 <reedy> Finished scap: testwiki to 1.23wmf21 and build l10n cache (duration: 38m 23s) [production]
18:30 <reedy> Started scap: testwiki to 1.23wmf21 and build l10n cache [production]
18:23 <reedy> updated /a/common to {{Gerrit|I835c2b1d5}}: Depool. See RT 7191. [production]
11:10 <paravoid> IPv4 eqiad<->esams private link also elevated by ~15ms but no packet loss observed [production]
11:09 <paravoid> affects both IPv6 transit at esams (slowdowns) as well as IPv6 eqiad<->esams [production]
11:08 <paravoid> deactivating cr1-esams<->HE peering, latency > 160ms, over at 200ms (congestion?); back to 84ms now; [production]
10:51 <akosiaris> temporarily stopped squid on brewster [production]
10:26 <hashar> Jenkins job mediawiki-core-phpunit-hhvm is back around thanks to {{gerrit|123573}} [production]
06:28 <paravoid> powercycling ms-be1003, unresponsive, no console output [production]
04:43 <springle> synchronized wmf-config/db-eqiad.php 'return upgraded DB slaves to normal load' [production]
04:11 <springle> synchronized wmf-config/db-eqiad.php 's6 repool db1015, warm up' [production]
04:04 <springle> synchronized wmf-config/db-eqiad.php 's6 depool db1015 for upgrade' [production]
04:03 <springle> synchronized wmf-config/db-eqiad.php 's5 repool db1037, warm up' [production]
03:53 <springle> synchronized wmf-config/db-eqiad.php 's5 depool db1037 for upgrade' [production]
03:53 <LocalisationUpdate> ResourceLoader cache refresh completed at Thu Apr 3 03:53:18 UTC 2014 (duration 53m 16s) [production]
03:34 <springle> db1020 RAID controller DIMM ECC errors [production]
03:14 <springle> synchronized wmf-config/db-eqiad.php 's4 depool db1020 for upgrade' [production]
03:12 <springle> synchronized wmf-config/db-eqiad.php 's3 repool db1019, warm up' [production]
02:57 <springle> synchronized wmf-config/db-eqiad.php 's3 depool db1019 for upgrade' [production]
02:56 <springle> synchronized wmf-config/db-eqiad.php 's2 repool db1060, warm up' [production]
02:48 <LocalisationUpdate> completed (1.23wmf20) at 2014-04-03 02:48:01+00:00 [production]
02:47 <springle> synchronized wmf-config/db-eqiad.php 's2 depool db1060 for upgrade' [production]
02:45 <springle> synchronized wmf-config/db-eqiad.php 's1 repool db1061, warm up' [production]
02:35 <springle> synchronized wmf-config/db-eqiad.php 's1 depool db1061 for upgrade' [production]
02:24 <LocalisationUpdate> completed (1.23wmf19) at 2014-04-03 02:24:07+00:00 [production]
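The springle entries above follow one repeating pattern per database shard (s1 through s6): edit wmf-config/db-eqiad.php to take one replica out of rotation ("depool"), upgrade it, then put it back at reduced weight so its caches warm up before full load returns. A hedged sketch of the kind of edit involved, assuming a per-section map of replica hostnames to read-load weights; hostnames and numbers are illustrative, not the production values:
<syntaxhighlight lang="php">
<?php
// Illustrative per-section load map in the style of db-eqiad.php.
// Hostnames and weights are examples only.
$sectionLoads = array(
    's1' => array(
        'db1061' => 0,   // "s1 depool db1061 for upgrade": weight 0 = no reads
        'db1062' => 200,
        'db1065' => 200,
    ),
);

// "s1 repool db1061, warm up": bring it back at a low weight first...
$sectionLoads['s1']['db1061'] = 50;
// ...then "return upgraded DB slaves to normal load" once caches are warm.
$sectionLoads['s1']['db1061'] = 200;
</syntaxhighlight>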
2014-04-02
23:48 <aaron> synchronized wmf-config/CommonSettings.php 'Bumped wgJobBackoffThrottling for htmlCacheUpdate to 15' [production]
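$wgJobBackoffThrottling is MediaWiki's per-job-type rate limit: it caps how many work items of a given job type each job runner may process per second before backing off. The aaron entry above raises that cap for htmlCacheUpdate; in wmf-config/CommonSettings.php the change would amount to roughly this one line (surrounding config omitted):
<syntaxhighlight lang="php">
// Allow up to 15 htmlCacheUpdate work items per second per job runner
// before backoff kicks in (the value 15 comes from the log entry above).
$wgJobBackoffThrottling['htmlCacheUpdate'] = 15;
</syntaxhighlight>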
23:47 <mwalker> ... deploy was for mobile frontend {{gerrit|123454}} [production]
23:46 <mwalker> synchronized php-1.23wmf20/extensions/MobileFrontend 'SWAT deploy for MaxSem' [production]
20:23 <subbu> deployed Parsoid 33471172 with deploy repo sha 5c620e54 [production]
19:03 <ori> synchronized php-1.23wmf20/extensions/WikimediaEvents 'Update WikimediaEvents for I7fdaa5524: Use simple random sampling to log deprecated usage at 1:100' [production]
19:03 <ori> synchronized php-1.23wmf19/extensions/WikimediaEvents 'Update WikimediaEvents for I7fdaa5524: Use simple random sampling to log deprecated usage at 1:100' [production]
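The two ori syncs above ship the same WikimediaEvents change to both active branches. "Simple random sampling ... at 1:100" means each deprecated-usage event is logged independently with probability 1/100, rather than, say, every hundredth event. A minimal sketch of that pattern (the function name and log channel are placeholders, not the extension's actual code):
<syntaxhighlight lang="php">
// Placeholder sketch of 1:100 simple random sampling: each call has an
// independent 1-in-100 chance of emitting a log entry.
function maybeLogDeprecatedUsage( $message ) {
    if ( mt_rand( 1, 100 ) === 1 ) {
        wfDebugLog( 'deprecated-usage', $message ); // hypothetical channel name
    }
}
</syntaxhighlight>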
17:00 <andrewbogott> fixed updating crons on wikitech-status, I think. Time will tell... [production]
16:20 <manybubbles> synchronized wmf-config/InitialiseSettings.php 'Lower timeout on prefix searches and make the cirrus.dblist sync I just did take effect.' [production]
16:19 <manybubbles> synchronized cirrus.dblist 'Cirrus as primary for most of group1' [production]
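The two manybubbles syncs above work as a pair: cirrus.dblist enumerates the wikis where CirrusSearch is the primary search backend, and InitialiseSettings.php can key per-wiki settings on a dblist name so the whole list flips at once. A rough sketch of that mechanism with invented setting names and values (the real variables and the new prefix-search timeout are not recorded here):
<syntaxhighlight lang="php">
<?php
// Illustrative fragment in the style of wmf-config/InitialiseSettings.php.
// An array key naming a dblist (here 'cirrus', i.e. cirrus.dblist) applies to
// every wiki in that list; 'default' covers the rest. Names and values invented.
$exampleSettings = array(
    'wmgExampleSearchBackend' => array(
        'default' => 'lsearchd',
        'cirrus'  => 'CirrusSearch', // wikis in cirrus.dblist use Cirrus as primary
    ),
    'wmgExamplePrefixSearchTimeoutSec' => array(
        'default' => 2,              // lowered per the 16:20 entry above
    ),
);
</syntaxhighlight>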
16:14 <akosiaris> banned tools-exec-03.eqiad.wmflabs using manual iptables on ytterbium [production]
15:20 <ottomata> stopping puppet on stat1 [production]
14:27 <hashar> Jenkins applying label <tt>contintLabsSlave</tt> on slaves in labs used for ci (integration-slave1001 and 1002) [production]
14:15 <hashar> Jenkins deleting pmtpa slaves (they all have been shutdown and jobs got deleted) [production]
14:00 <manybubbles> tried restarting some lsearchd services (carefully) to clear out some crashing when searching for a particular query term. It caused pool queue full errors.... serves me right for trying? [production]
11:20 <mutante> running CheckUser/maintenance/purgeOldData.php on all wikis [production]
09:43 <akosiaris> rsynced brewster /srv to carbon [production]
09:34 <mutante> restarting gitblit on antimony [production]
09:14 <mutante> DNS update - removing capella [production]