2014-04-21
17:37 <cmjohnson> shutting down mexia to relocate to 12th floor [production]
17:25 <aaron> synchronized php-1.24wmf1/includes/filerepo/file/LocalFile.php '2026e4aee7ffe00e0192914c0bd4a9ce04681c36' [production]
17:22 <aaron> synchronized wmf-config/PoolCounterSettings-eqiad.php 'Adjust large file download pool counter config to tie up less workers' [production]
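For context on the change above: MediaWiki's PoolCounter limits how many workers may run the same expensive operation (here, large file downloads) at the same time. A minimal sketch of such a stanza, with the pool key and numbers as illustrative assumptions rather than the values actually deployed:

    // Illustrative sketch only: the pool key and limits below are assumptions,
    // not the deployed contents of PoolCounterSettings-eqiad.php.
    $wgPoolCounterConf['FileRenderExpensive'] = array(
        'class'    => 'PoolCounter_Client', // client for the poolcounterd service
        'timeout'  => 8,    // seconds a request waits for a free slot
        'workers'  => 2,    // concurrent workers allowed per key
        'maxqueue' => 100,  // waiters beyond this are rejected outright
    );

Lowering 'workers' (or 'maxqueue') for the relevant pool is the lever for tying up fewer Apache workers on slow downloads.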
17:18 <aaron> synchronized php-1.23wmf22/includes/filerepo/file/LocalFile.php '01ce2888d0d9011514a3fc16c1606b5b42f1ef37' [production]
17:13 <demon> synchronized wmf-config/InitialiseSettings.php 'renderfile-nonstandard throttle config' [production]
17:03 <aaron> synchronized php-1.24wmf1/thumb.php '44c46589ef02a30e60177ea268bc8de27a740434' [production]
17:02 <aaron> synchronized php-1.23wmf22/thumb.php '95913654cbf0a2c35ddbef74d135eeea71600d54' [production]
16:34 <ottomata> reinstalling elastic1013 (elastic1014 is still coming back online, but I don't want there to be an extra eligible master for long) [production]
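A likely reading of the quorum arithmetic behind that note: Elasticsearch 1.x guidance is discovery.zen.minimum_master_nodes = floor(N/2) + 1 for N master-eligible nodes (e.g. 3 eligible masters need 2 for quorum, 4 need 3), so the window in which an extra eligible master exists is kept short.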
16:04 <ottomata> reinstalling elastic1014 [production]
15:22 <cmjohnson1> dataset2 going down to be relocated to the 12th floor [production]
13:41 <manybubbles> rolling restart on some of the Elasticsearch servers to pick up new plugins. Should not cause any trouble. [production]
13:05 <Reedy> De-activated status.wm.o monitor for icinga due to false positive from HTTP auth [production]
12:54 <paravoid> demoting myself, removing Commons crat/admin rights [production]
12:41 <paravoid> escalating myself to Commons bureaucrat/admin, then adding GWToolset privileges [production]
12:40 <paravoid> deleting 29 GWToolset XML files under Swift's wikipedia-commons-gwtoolset-metadata container for user Fæ/ [production]
11:51 <reedy> synchronized php-1.23wmf22/extensions/TimedMediaHandler 'I7483c8b7ec75f5149998da2b530ca04' [production]
11:51 <paravoid> deactivating esams<->HE peering, >90% packet loss between lon<->nyc [production]
11:49 <paravoid> deactivating eqiad<->HE peering, >90% packet loss between lon<->nyc [production]
11:45 <reedy> synchronized php-1.23wmf22/extensions/TimedMediaHandler 'I7483c8b7ec75f5149998da2b530ca0467ac70de7' [production]
03:55 <springle> reset pc100* slaves previously replicating from pmtpa [production]
03:33 <ori> 5.5k fatals over last 20 hrs, of which 3.5k are calls to doTransform() on a non-object at TimedMediaThumbnail.php:201, and 0.9k are Lua API OOMs at LuaSandbox/Engine.php:264 [production]
03:30 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Apr 21 03:30:15 UTC 2014 (duration 30m 14s) [production]
03:26 <ori> ap_busy_workers spike on image scalers eqiad, started ~2:55, subsided around ~3:20 [production]
02:42 <LocalisationUpdate> completed (1.24wmf1) at 2014-04-21 02:42:30+00:00 [production]
02:29 <LocalisationUpdate> completed (1.23wmf22) at 2014-04-21 02:29:49+00:00 [production]
2014-04-20
18:52 <ori> restarted grrrit-wm by following instructions on https://wikitech.wikimedia.org/wiki/Grrrit-wm#Restarting_the_bot [production]
11:12 <Nemo_bis> grrrit dead: 10.28 -!- grrrit-wm [tools.lolr@208.80.155.145] has quit [production]
03:26 <LocalisationUpdate> ResourceLoader cache refresh completed at Sun Apr 20 03:26:48 UTC 2014 (duration 26m 47s) [production]
02:39 <LocalisationUpdate> completed (1.24wmf1) at 2014-04-20 02:39:23+00:00 [production]
02:28 <LocalisationUpdate> completed (1.23wmf22) at 2014-04-20 02:28:09+00:00 [production]
2014-04-19
03:28 <LocalisationUpdate> ResourceLoader cache refresh completed at Sat Apr 19 03:27:52 UTC 2014 (duration 27m 51s) [production]
02:41 <LocalisationUpdate> completed (1.24wmf1) at 2014-04-19 02:41:02+00:00 [production]
02:29 <LocalisationUpdate> completed (1.23wmf22) at 2014-04-19 02:29:33+00:00 [production]
2014-04-18
21:00 <hashar> Jenkins: renamed the mw-jenkinsbot IRC bot to wmf-insecte (French for "bug"). Updated the IRC conf to point to chat.freenode.net:7000 with SSL. [production]
19:02 <bblack> enabled cp30[14] varnish mobile frontends in esams pybal [production]
17:50 <bblack> cp301[34] reinstalls complete, should stay ok in monitoring [production]
17:48 <ottomata> reinstalling elastic1008 [production]
16:20 <springle> db48 mysqld shutdown for decom [production]
16:20 <bblack> ignore cp301[34] msgs, reinstalling them [production]
16:10 <springle> db63 mysqld shutdown for decom [production]
15:53 <ottomata> reinstalling elastic1007 [production]
15:52 <springle> db48 mysqld set read_only, disabled m2 repl to db1048 [production]
15:51 <ottomata> disabling puppet on elastic1007 and elastic1008 for reformatting [production]
15:45 <mutante> DNS update - removing Tampa msbe/msfe [production]
15:38 <Jeff_Green> switched mchenry to use db1048/db1049 for OTRS address lookups [production]
15:24 <mutante> DNS update - removing all the Tampa mw/srv mgmt [production]
15:15 <mutante> DNS update - removing lvs1-6 [production]
14:54 <mutante> es5,es6 - revoke puppet certs, salt keys, icinga [production]
14:51 <ottomata> powering down stat1 for decom [production]
14:43 <mutante> ms-fe[14] - shutting down [production]