2013-07-16
09:27 <hashar> upgrading packages on gallium [production]
09:12 <apergos> restarting gerrit [production]
02:44 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Jul 16 02:43:55 UTC 2013 [production]
02:28 <LocalisationUpdate> completed (1.22wmf9) at Tue Jul 16 02:28:30 UTC 2013 [production]
02:15 <LocalisationUpdate> completed (1.22wmf10) at Tue Jul 16 02:15:23 UTC 2013 [production]
01:00 <catrope> Finished syncing Wikimedia installation... : Update VisualEditor to master again [production]
00:52 <catrope> Started syncing Wikimedia installation... : Update VisualEditor to master again [production]
2013-07-15
23:38 <cmjohnson1> reinstalling carbon [production]
23:05 <csteipp> synchronized php-1.22wmf10/extensions/CentralAuth 'Fix sul2 regression' [production]
23:02 <gwicke> synchronized wmf-config/CommonSettings.php 'Slightly increase Parsoid dequeue rate' [production]
22:29 <kaldari> synchronized php-1.22wmf10/extensions/WikiLove/WikiLove.hooks.php 'Fixing WikiLove regression on wmf10' [production]
21:50 <ori-l> All EventLogging services start/running; data looks good. [production]
20:46 <catrope> synchronized wmf-config/InitialiseSettings.php 'Enable VisualEditor for all users (anons too) on enwiki' [production]
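The 20:46 entry above is a wmf-config/InitialiseSettings.php change. Purely as an illustration of how such per-wiki overrides are laid out (the actual setting name is not recorded in this log; wmgVisualEditorDefault below is a hypothetical placeholder), the change likely resembled:

    // Hypothetical sketch of an InitialiseSettings.php per-wiki override;
    // the real setting name is not given in the log entry above.
    'wmgVisualEditorDefault' => array(
        'default' => false,
        'enwiki'  => true, // enable VisualEditor for all users, including anons
    ),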
20:44 <Krinkle> Graceful reload of Zuul to fast-forward deployment to Ia53d412b029205 [production]
20:40 <catrope> Finished syncing Wikimedia installation... : Updating VisualEditor to master [production]
20:29 <catrope> Started syncing Wikimedia installation... : Updating VisualEditor to master [production]
20:24 <ori-l> Disabled MediaWiki errors Ganglia module on vanadium [production]
20:02 <ori-l> Rebooting vanadium to complete kernel upgrade to 3.2.0-49. [production]
19:48 <ori-l> Shutting down EventLogging services on vanadium ahead of Iba8cc5d7b deployment. [production]
18:58 <hashar> labstore3 system CPU rocketed from ~10% to ~60% [production]
18:42 <cmjohnson1> powercycling mw1163 [production]
18:41 <cmjohnson1> depooling mw1163 to troubleshoot DIMM error [production]
18:25 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: everything non-Wikipedia to 1.22wmf10 [production]
18:22 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikiquote and wiktionary to 1.22wmf10 [production]
18:20 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikibooks, wikivoyage, wikiversity to 1.22wmf10 [production]
18:16 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikimedia, closed, special to 1.22wmf10 [production]
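The four 18:16–18:25 entries above moved every non-Wikipedia wiki group to 1.22wmf10. wikiversions.cdb is the constant database that maps each wiki's database name to the MediaWiki branch directory serving it; a minimal read-side sketch using PHP's dba/cdb handler (the file path and key scheme are assumptions, and this is not the production multiversion code) would be:

    // Minimal sketch only: look up which MediaWiki branch serves a given wiki.
    // The file path and key scheme here are assumptions for illustration.
    $handle  = dba_open( '/usr/local/apache/common/wikiversions.cdb', 'r', 'cdb' );
    $version = dba_fetch( 'enwiki', $handle ); // e.g. "php-1.22wmf10"
    dba_close( $handle );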
17:54 <cmjohnson1> powering down mw1173 [production]
17:34 <cmjohnson1> depooling mw1173 to troubleshoot DIMM Failure [production]
17:08 <RobH> Fixing blog theme setup; the theme may reset to defaults for the next few minutes as I tinker with it. [production]
15:26 <MaxSem> Recreating Solr index [production]
12:40 <apergos> rebooting kaulen to pick up some upgrades, per rt 5460 [production]
02:42 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Jul 15 02:42:20 UTC 2013 [production]
02:28 <LocalisationUpdate> completed (1.22wmf10) at Mon Jul 15 02:27:57 UTC 2013 [production]
02:14 <LocalisationUpdate> completed (1.22wmf9) at Mon Jul 15 02:14:48 UTC 2013 [production]
2013-07-14
18:43 <gwicke> synchronized wmf-config/CommonSettings.php 'Throttle titles per template update job further from 10 to 6' [production]
17:49 <gwicke> synchronized wmf-config/CommonSettings.php 'Re-enable Parsoid updates after throttling template update rate' [production]
16:54 <reedy> synchronized php-1.22wmf10/extensions/Gadgets/ [production]
16:52 <reedy> synchronized php-1.22wmf9/extensions/Gadgets/ [production]
14:04 <catrope> synchronized wmf-config/CommonSettings.php 'Resync for boxes that came back up' [production]
14:01 <mark> Repooled stolen apaches back in the appserver cluster [production]
13:59 <catrope> synchronized php-1.22wmf9/resources/startup.js 'touch' [production]
13:49 <mark> Depooled stolen apaches from the api cluster [production]
13:20 <mark> Upping PyBal weight to 15 for stolen appservers [production]
13:19 <RoanKattouw> Hand-syncing CommonSettings.php with dsh to set $wgParsoidSkipRatio = 0 [production]
13:17 <mark> Stealing 10 app servers, for the API pool [production]
13:05 <catrope> synchronized wmf-config/CommonSettings.php 'Set $wgParsoidSkipRatio to 1 to let the API cluster breathe' [production]
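For context on the 13:05 and 13:19 $wgParsoidSkipRatio changes above: this is a wmf-config/CommonSettings.php switch, and the semantics sketched below (a ratio of Parsoid update jobs to skip) are inferred from these log messages rather than taken from the extension source:

    // Sketch only; the variable name comes from the log entries above, the
    // meaning (fraction of Parsoid update jobs to skip) is an assumption.
    $wgParsoidSkipRatio = 1; // 13:05  shed Parsoid update load so the API cluster can recover
    $wgParsoidSkipRatio = 0; // 13:19  back to normal after extra app servers were added to the API pool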
12:54 <RoanKattouw> Restarted Apache on mw1118, had to stop by hand using killall -9 (/etc/init.d/apache stop didn't work) [production]
12:38 <RoanKattouw> Restarted pybal on lvs1003 [production]
12:37 <RoanKattouw> Restarted pybal on lvs1006 (set api depool threshold to .4 from .6) [production]
06:37 <apergos> powercycled mc1005, which was unreachable via console; judging from Ganglia it was likely in swap death [production]