2014-10-06 §
16:19 <ori> Synchronized wmf-config/CommonSettings.php: Set $wgPercentHHVM to 1 (duration: 00m 27s) [production]
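(For context: $wgPercentHHVM was the wmf-config switch controlling what share of traffic got routed to the HHVM app servers during the 2014 rollout. A minimal sketch of the kind of change synced above; the variable name comes from the log entry, the per-request gate is illustrative only, not the actual wmf-config code:)

    // wmf-config/CommonSettings.php (sketch)
    // Route roughly 1% of eligible requests to the HHVM-backed app servers.
    $wgPercentHHVM = 1;

    // Illustrative per-request gate, not the real implementation:
    if ( mt_rand( 1, 100 ) <= $wgPercentHHVM ) {
        // flag this request to be served by HHVM
    }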
15:54 <cscott> updated OCG to version aee3712b352f51f96569de0bcccf3facf654e688 [production]
15:45 <GroggyPanda> deleted graphite data for deployment-rsync02 by hand on labmon1001, since the instance has been dead. Need to move to shinken + dynamic host.cfg [production]
15:37 <manybubbles> Synchronized php-1.25wmf1/extensions/Wikidata/: SWAT update wikidata (duration: 00m 10s) [production]
15:23 <manybubbles> Synchronized php-1.25wmf2/extensions/Wikidata/: SWAT update wikidata (duration: 00m 10s) [production]
15:22 <hashar> Zuul jobs proceeding again [production]
15:22 <godog> swiftrepl replicating non-sharded originals containers eqiad -> codfw [production]
15:22 <manybubbles> Synchronized wmf-config/: SWAT Add tracking categories for files with attribution problems (duration: 00m 06s) [production]
15:19 <cscott> ran 'sudo -u ocg -g ocg nodejs-ocg scripts/run-garbage-collect.js -c /home/cscott/config.js' from /home/cscott/ocg/mw-ocg-service in order to clear caches (working around https://gerrit.wikimedia.org/r/164644 ) on ocg100x.eqiad.wmnet [production]
14:51 <cmjohnson1> disconnecting Tampa servers [production]
13:46 <godog> starting test swiftrepl run on wikibooks eqiad -> codfw [production]
11:49 <_joe_> done restarting ocg servers [production]
11:34 <_joe_> rolling restart and cleaning of ocg nodes, trying to unlock pdf generation [production]
11:11 <mark> Shutdown tarin [production]
11:11 <mark> Shutdown sanger [production]
09:27 <_joe_> cleaned ocg another time [production]
09:07 <mark> Stopped dovecot on sanger [production]
08:06 <_joe_> cleaned ocg1001, again [production]
05:57 <Nemo_bis> "book creator seems stuck": PDF servers at 97 % CPU, little traffic, enough disk free for about 1 day more [production]
03:26 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Oct 6 03:26:31 UTC 2014 (duration 26m 30s) [production]
02:59 <springle> Synchronized wmf-config/db-eqiad.php: depool db1060 (duration: 00m 06s) [production]
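(Depooling a replica in wmf-config/db-eqiad.php means dropping its read-load weight so MediaWiki's load balancer stops sending it queries. A rough sketch of the pattern; the section key, the other host names, and all weights are made up, only db1060 and the file come from the log entry:)

    // wmf-config/db-eqiad.php (sketch; section, other hosts and weights illustrative)
    'sectionLoads' => [
        's1' => [
            'db1052' => 0,    // master, no general read load
            'db1055' => 100,
            // 'db1060' => 100,  // depooled: commented out pending maintenance
            'db1061' => 100,
        ],
    ],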
02:28 <LocalisationUpdate> completed (1.25wmf2) at 2014-10-06 02:28:42+00:00 [production]
02:17 <LocalisationUpdate> completed (1.25wmf1) at 2014-10-06 02:17:40+00:00 [production]
2014-10-05 §
22:28 <Coren> Q183 superprotected as a safeguard [production]
22:27 <hoo> Q183 is on revision 116786096 again, please don't alter this further! [production]
22:21 <qchris> Updated gerrit's hooks-bugzilla to 6e1e659 (with hooks-its at a421db4) [production]
22:11 <hoo> WD:Q183 was frozen on version 120566337, see bug 71519 (and others) [production]
21:23 <hoo> Bypassed Wikibase restrictions and set https://www.wikidata.org/wiki/Q183 back to old serialization format [production]
20:08 <Nemo_bis> 22.03 < Ainali> It was just noticed on svwp village pump that http://stats.wikimedia.org is down [production]
16:39 <paravoid> restore ns1 routing to codfw [production]
11:23 <paravoid> adding a static route for ns1 to rubidium (ns0) on cr1-eqiad to temporarily redirect its traffic while codfw is offline [production]
03:20 <LocalisationUpdate> ResourceLoader cache refresh completed at Sun Oct 5 03:19:55 UTC 2014 (duration 19m 53s) [production]
03:02 <ori> Synchronized wmf-config/CommonSettings.php: I707b5754: Enable LuaSandbox profiling when is true (duration: 00m 07s) [production]
02:22 <LocalisationUpdate> completed (1.25wmf2) at 2014-10-05 02:22:47+00:00 [production]
02:13 <LocalisationUpdate> completed (1.25wmf1) at 2014-10-05 02:13:26+00:00 [production]
2014-10-04 §
21:08 <_joe_> cleaning ocg1001 tmpfs from a 32 gb pdf file [production]
19:59 <jgage> restarted pdns on virt1000 for ldap config update [production]
07:08 <springle> powercycle es1004 [production]
03:27 <LocalisationUpdate> ResourceLoader cache refresh completed at Sat Oct 4 03:27:20 UTC 2014 (duration 27m 19s) [production]
02:25 <LocalisationUpdate> completed (1.25wmf2) at 2014-10-04 02:25:04+00:00 [production]
02:15 <LocalisationUpdate> completed (1.25wmf1) at 2014-10-04 02:15:02+00:00 [production]
01:01 <bblack> depooling cp1045 for persistent cache wipe [production]
00:01 <andrewbogott> updated the default labs precise image: updated ldap setup, new /var/log partition [production]
2014-10-03 §
22:46 <bd808> Restarting zuul on gallium [production]
22:41 <bd808> Trying a soft restart of zuul on gallium [production]
22:37 <bd808> NoConnectedServersError("No connected Gearman servers") in zuul.log on gallium [production]
22:33 <bd808|deploy> Updated integration/phpunit to 6c1d11d (Regenerate autoloader) [production]
22:31 <subbu> restarted Parsoid servers after another gradual cpu load creep [production]
22:19 <aaron> Synchronized wmf-config/InitialiseSettings.php: Fixed the parser cache type for labswiki (duration: 00m 03s) [production]
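(Per-wiki overrides such as the parser cache backend live in wmf-config/InitialiseSettings.php as arrays keyed by wiki database name. A sketch of what a labswiki-specific override can look like; only the file and the wiki come from the log entry, the setting name and values are assumed:)

    // wmf-config/InitialiseSettings.php (sketch; setting name and values assumed)
    'wgParserCacheType' => [
        'default'  => CACHE_MEMCACHED,
        'labswiki' => CACHE_DB,   // give labswiki its own parser cache backend
    ],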
21:55 <andrewbogott> updated the default labs trusty image: updated packages, updated ldap setup, new /var/log partition [production]