2014-03-07
21:45 <mutante> killing all Tampa appservers from puppet stored configs [production]
21:41 <mutante> disabling puppet agent on all Tampa appservers [production]
21:15 <hoo> synchronized wmf-config/InitialiseSettings-labs.php 'Syncing beta-only change for consistency' [production]
20:28 <mutante> killing mw1-16 from puppet stored configs, icinga,.. [production]
20:26 <mutante> revoking puppet certs for Tampa appservers [production]
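The Tampa decommissioning entries above (revoking certs, purging stored configs and Icinga, disabling the agent) map roughly onto commands like the sketch below, run on the puppetmaster; the hostname, the dsh group name, and the use of 'puppet node clean' for the stored-config cleanup are assumptions, not the exact invocations used.

  dsh -g tampa-appservers -M 'puppet agent --disable'   # dsh group name is a placeholder; stops further puppet runs on the hosts
  puppet cert clean mw31.pmtpa.wmnet                    # revoke and remove the host's certificate (hostname illustrative)
  puppet node clean mw31.pmtpa.wmnet                    # drop cached facts and stored-config data; monitoring checks generated from stored configs disappear once the Icinga config is regenerated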
20:14 <catrope> synchronized php-1.23wmf16/extensions/VisualEditor/modules/ve-mw/dm/nodes/ve.dm.MWBlockImageNode.js 'Fix image corruption bug' [production]
20:14 <catrope> updated /a/common/php-1.23wmf16 to {{Gerrit|I4a10768ec}}: Update VisualEditor to wmf16 branch for cherry-pick [production]
18:17 <bd808> Restored pre-Ic56177a versions of wmf-config/*pmtpa* config files to mw31 again. Something wiped them out sometime after 20:23Z yesterday, even though "mw31" is not found in any dsh group file on tin. [production]
17:53 <mutante> restarted squid on brewster [production]
03:31 <LocalisationUpdate> ResourceLoader cache refresh completed at 2014-03-07 03:31:43+00:00 [production]
02:43 <LocalisationUpdate> completed (1.23wmf17) at 2014-03-07 02:43:40+00:00 [production]
02:21 <LocalisationUpdate> completed (1.23wmf16) at 2014-03-07 02:21:46+00:00 [production]
02:14 <manybubbles> [Elasticsearch upgrade] done. we'll take a while to catch up on jobs that piled up during the upgrade, but we'll get them in time. [production]
02:10 <demon> synchronized wmf-config/InitialiseSettings.php 'Turn Cirrus back on for all wikis as it was before' [production]
02:09 <demon> synchronized wmf-config/jobqueue-eqiad.php 'Turn Cirrus jobs back on' [production]
02:04 <manybubbles> [Elasticsearch upgrade] restoring more sane recovery speed [production]
01:39 <mholmquist> synchronized php-1.23wmf17/extensions/UniversalLanguageSelector/UniversalLanguageSelector.hooks.php 'Actually gate the beta feature for ULS' [production]
01:30 <manybubbles> [Elasticsearch upgrade] temporarily raising recovery speed [production]
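The raise-then-restore of the recovery throttle logged above is a dynamic cluster setting; a minimal sketch, assuming the Elasticsearch settings API of that era, with the host and the 200mb figure purely illustrative:

  # let shards recover faster while the cluster rebuilds after the restart
  curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient": { "indices.recovery.max_bytes_per_sec": "200mb" }
  }'
  # once caught up, set it back to a modest value so recovery traffic stops competing with search traffic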
01:29 <jgonera> synchronized php-1.23wmf16/extensions/MobileFrontend/ 'Touch MobileFrontend.i18n.php to update RL cache' [production]
01:19 <mholmquist> synchronized wmf-config/InitialiseSettings.php 'Fix James_F's commit, follow-up, should gate ULS beta feature' [production]
01:18 <mholmquist> updated /a/common to {{Gerrit|I48e98d28f}}: Follow-up: Icf0bef96306661 – missing file(!) from commit [production]
01:18 <mholmquist> Finished scap: (no message) (duration: 10m 55s) [production]
01:07 <rdwrer> That scap was for ULS, VE, and MobileFrontend fixes and updates. [production]
01:07 <mholmquist> Started scap: (no message) [production]
01:03 <manybubbles> [Elasticsearch upgrade] Reenabling puppet [production]
01:03 <hashar> Jenkins back up [production]
01:03 <manybubbles> [Elasticsearch upgrade] All primary shards have started. Waiting on secondary. [production]
01:01 <manybubbles> [Elasticsearch upgrade] Wait for the cluster to recover. [production]
01:00 <hashar> killed wrong jenkins process (+1 for 2am fix up). Restarting jenkins [production]
00:59 <manybubbles> [Elasticsearch upgrade] Verifying versions [production]
00:59 <manybubbles> [Elasticsearch upgrade] Starting Elasticsearch [production]
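The start / verify-versions / wait-for-recovery steps above correspond roughly to the sketch below; localhost:9200 is a placeholder, and in practice the service was started on every node via the cluster management tooling.

  service elasticsearch start                                                  # per node
  curl -s localhost:9200/ | grep number                                        # each node reports its version in the root response; confirm the upgrade took
  curl -s 'localhost:9200/_cluster/health?wait_for_status=yellow&timeout=15m'  # all primary shards assigned
  curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=60m'   # replicas recovered too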
00:58 <hashar> Killing a duplicate Jenkins java process on gallium (init.d script sucks, I really need to get it fixed one day) [production]
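For the duplicate Jenkins JVM, the cleanup amounts to something like the following (PIDs are placeholders; as the 01:00 entry above shows, picking the wrong one just means restarting Jenkins anyway):

  pgrep -fl jenkins          # list the Jenkins java processes and identify the stray one
  kill <duplicate-pid>       # placeholder for the extra process's PID
  service jenkins restart    # needed if the live instance got killed by mistake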
00:52 <mholmquist> updated /a/common to {{Gerrit|Iad8c84a7d}}: Don't use += with $wgJobTypesExcludedFromDefaultQueue [production]
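The commit message above points at a PHP array pitfall worth spelling out: for list-style (numerically keyed) arrays, '+=' is array union, so it keeps existing keys and silently drops the would-be additions instead of appending them. A hypothetical demonstration, with illustrative job type names:

  php -r '$a = array("refreshLinks"); $a += array("cirrusSearchLinksUpdate"); var_export($a);'
  # prints only "refreshLinks": both arrays use key 0, and union keeps the left-hand value
  php -r '$a = array("refreshLinks"); $a[] = "cirrusSearchLinksUpdate"; var_export($a);'
  # prints both entries, which is what adding an extra job type to the exclusion list needs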
00:52 <manybubbles> [Elasticsearch upgrade] Upgrading Elasticsearch [production]
00:49 <manybubbles> [Elasticsearch upgrade] Shutting down Elasticsearch [production]
00:48 <manybubbles> [Elasticsearch upgrade] Turning off shard reallocation so we don't thrash while Elasticsearch shuts down [production]
00:47 <manybubbles> [Elasticsearch upgrade] Disabling puppet so it doesn't restart Elasticsearch while we're upgrading it [production]
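The shutdown prep above (disable puppet, stop shard reallocation, stop the service) is roughly the following; the host is a placeholder and the exact allocation setting name depends on the Elasticsearch version being run:

  puppet agent --disable                                # keep puppet from restarting elasticsearch mid-upgrade
  curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient": { "cluster.routing.allocation.enable": "none" }
  }'                                                    # no reallocation, so the shutdown does not kick off a rebalance storm
  service elasticsearch stop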
00:44 <manybubbles> [Elasticsearch upgrade] Elasticsearch is now quiescent [production]
00:43 <mutante> graceful'ing rogue apaches: mw1131, mw1189, mw1190, mw1215 [production]
00:41 <mutante> graceful'ing rogue apaches: mw1070, mw1089, mw1104, mw1111 [production]
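A graceful restart has Apache re-read its config and replace its worker processes without dropping in-flight requests, which is why it is the usual fix for 'rogue' (stale-config or misbehaving) appservers; a minimal sketch looping over the hosts named above:

  for h in mw1070 mw1089 mw1104 mw1111; do
    ssh "$h" 'apache2ctl graceful'    # reload config, let current requests finish
  done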
00:37 <demon> synchronized wmf-config/jobqueue-eqiad.php 'Fixing $wgJobTypesExcludedFromDefaultQueue config' [production]
00:34 <manybubbles> [Elasticsearch upgrade] Running puppet everywhere to make sure we have the newest config [production]
00:33 <mutante> graceful mw1040 [production]
00:03 <manybubbles> synchronized wmf-config/InitialiseSettings.php 'Turn Cirrus off for the duration of the upgrade' [production]
00:02 <manybubbles> synchronized wmf-config/jobqueue-eqiad.php 'Pausing Cirrus jobs for the duration of the upgrade.' [production]
00:02 <manybubbles> Starting Elasticsearch upgrade [production]
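The 'synchronized wmf-config/...' entries throughout this log are produced by the deployment tooling on tin; a config push of this kind looked roughly like the sketch below, run from the deployment root, with the free-form message ending up here in the log.

  # a minimal sketch, assuming the sync-file wrapper of that era
  sync-file wmf-config/jobqueue-eqiad.php 'Pausing Cirrus jobs for the duration of the upgrade.'
  sync-file wmf-config/InitialiseSettings.php 'Turn Cirrus off for the duration of the upgrade'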
2014-03-06
22:21 <^d> restarting jenkins on gallium. It's totally hung and nothing's getting done. Jobs will probably need retriggering. [production]
21:23 <bd808> "No space left on device" errors from snapshot1004.eqiad.wmnet during scap [production]
21:21 <bd808> Finished scap: php-1.23wmf17 l10n cache rebuild (duration: 11m 20s) [production]