2013-07-15
18:58 <hashar> labstore3 system CPU rocketed from ~10% to ~60% [production]
18:42 <cmjohnson1> powercycling mw1163 [production]
18:41 <cmjohnson1> depooling mw1163 to troubleshoot DIMM error [production]
18:25 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: everything non-wikipedia to 1.22wmf10 [production]
18:22 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikiquote and wiktionary to 1.22wmf10 [production]
18:20 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikibooks, wikivoyage, wikiversity to 1.22wmf10 [production]
18:16 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: wikimedia, closed, special to 1.22wmf10 [production]
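
The four reedy entries above walk wiki families onto 1.22wmf10 in stages, rebuilding wikiversions.cdb each time. That file is a map from each wiki's database name to the MediaWiki version directory that should serve it; rebuilding and syncing it is what actually flips wikis to the new branch. A minimal Python sketch of that staged-migration idea (the names and helper are hypothetical, not the real WMF tooling, which uses a CDB file and dedicated sync scripts):

```python
# Hypothetical model of the wikiversions map: dbname -> version directory.
# The real deployment rebuilds a CDB file and syncs it to all app servers;
# this only illustrates the staged rollout visible in the log above.

wikiversions = {
    "enwikiquote": "php-1.22wmf9",
    "enwiktionary": "php-1.22wmf9",
    "enwikibooks": "php-1.22wmf9",
    "enwiki": "php-1.22wmf9",
}

def migrate_family(versions: dict, suffix: str, new_version: str) -> None:
    """Move every wiki whose dbname ends with `suffix` to `new_version`."""
    for dbname in versions:
        if dbname.endswith(suffix):
            versions[dbname] = new_version

# Staged rollout, one family at a time; Wikipedias stay behind for now.
for family in ("wikiquote", "wiktionary", "wikibooks"):
    migrate_family(wikiversions, family, "php-1.22wmf10")

print(wikiversions)  # enwiki still on php-1.22wmf9, the rest moved

```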
17:54 <cmjohnson1> powering down mw1173 [production]
17:34 <cmjohnson1> depooling mw1173 to troubleshoot DIMM Failure [production]
17:08 <RobH> Fixing blog theme setup; the theme may reset to defaults for the next few minutes as I tinker with it. [production]
15:26 <MaxSem> Recreating Solr index [production]
12:40 <apergos> rebooting kaulen to pick up some upgrades, per rt 5460 [production]
02:42 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Jul 15 02:42:20 UTC 2013 [production]
02:28 <LocalisationUpdate> completed (1.22wmf10) at Mon Jul 15 02:27:57 UTC 2013 [production]
02:14 <LocalisationUpdate> completed (1.22wmf9) at Mon Jul 15 02:14:48 UTC 2013 [production]
2013-07-14
18:43 <gwicke> synchronized wmf-config/CommonSettings.php 'Throttle titles per template update job further from 10 to 6' [production]
17:49 <gwicke> synchronized wmf-config/CommonSettings.php 'Re-enable Parsoid updates after throttling template update rate' [production]
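
The two gwicke entries above tune how many page titles each Parsoid template-update job processes: when a widely used template changes, every page using it needs a re-render, and shrinking the batch (10 to 6) spreads that load across more, smaller jobs. A sketch of that batching idea, assuming a simple chunking scheme (the function name is hypothetical):

```python
# Hypothetical sketch of throttling titles per template update job:
# chunk the affected pages into jobs of at most `batch_size` titles,
# so lowering the batch size smooths load on the API cluster.

from itertools import islice

def title_batches(titles, batch_size=6):
    """Yield lists of at most `batch_size` titles, one list per job."""
    it = iter(titles)
    while batch := list(islice(it, batch_size)):
        yield batch

affected = [f"Page_{i}" for i in range(20)]
for job in title_batches(affected, batch_size=6):
    print("enqueue Parsoid update job for:", job)

```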
16:54 <reedy> synchronized php-1.22wmf10/extensions/Gadgets/ [production]
16:52 <reedy> synchronized php-1.22wmf9/extensions/Gadgets/ [production]
14:04 <catrope> synchronized wmf-config/CommonSettings.php 'Resync for boxes that came back up' [production]
14:01 <mark> Repooled stolen apaches back in the appserver cluster [production]
13:59 <catrope> synchronized php-1.22wmf9/resources/startup.js 'touch' [production]
13:49 <mark> Depooled stolen apaches from the api cluster [production]
13:20 <mark> Upping PyBal weight to 15 for stolen appservers [production]
13:19 <RoanKattouw> Hand-syncing CommonSettings.php with dsh to set $wgParsoidSkipRatio = 0 [production]
13:17 <mark> Stealing 10 app servers, for the API pool [production]
13:05 <catrope> synchronized wmf-config/CommonSettings.php 'Set $wgParsoidSkipRatio to 1 to let the API cluster breathe' [production]
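
$wgParsoidSkipRatio shows up twice in this incident: set to 1 here to shed all Parsoid update load off the API cluster, then back to 0 at 13:19 once stolen app servers were in place. A guess at its semantics, consistent with the 0/1 endpoints in these entries, is a probabilistic drop rate for update jobs (this model is an assumption, not the extension's actual code):

```python
# Assumed model of $wgParsoidSkipRatio: the probability that a given
# Parsoid update is dropped. At 1.0 every update is skipped (full load
# shedding, as during the incident); at 0.0 none are.

import random

def should_skip(skip_ratio: float) -> bool:
    """Return True if this Parsoid update should be dropped."""
    return random.random() < skip_ratio

for ratio in (1.0, 0.0):
    skipped = sum(should_skip(ratio) for _ in range(1000))
    print(f"skip_ratio={ratio}: dropped {skipped}/1000 updates")

```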
12:54 <RoanKattouw> Restarted Apache on mw1118; had to stop it by hand with killall -9 (/etc/init.d/apache stop didn't work) [production]
12:38 <RoanKattouw> Restarted pybal on lvs1003 [production]
12:37 <RoanKattouw> Restarted pybal on lvs1006 (set api depool threshold to .4 from .6) [production]
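
The depool threshold being lowered here (.6 to .4) is PyBal's safety valve: as I understand it, PyBal refuses to depool a failing server if doing so would leave fewer than threshold × total servers in the pool, so a lower threshold lets monitoring take more sick servers out of rotation. A small sketch of that rule (function name hypothetical):

```python
# Sketch of PyBal depool-threshold semantics as I understand them:
# a server may only be depooled if the pool stays at or above
# threshold * total. Lowering .6 -> .4 permits deeper depooling.

def can_depool(pooled: int, total: int, threshold: float) -> bool:
    """True if removing one more server keeps the pool above threshold."""
    return (pooled - 1) >= total * threshold

total_servers = 10
for threshold in (0.6, 0.4):
    pooled = total_servers
    while can_depool(pooled, total_servers, threshold):
        pooled -= 1
    print(f"threshold={threshold}: pool can shrink to {pooled}/{total_servers}")

```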
06:37 <apergos> powercycled mc1005; unreachable via console, and from ganglia it looks like it was in swap death [production]
02:07 <LocalisationUpdate> ResourceLoader cache refresh completed at Sun Jul 14 02:07:10 UTC 2013 [production]
02:02 <LocalisationUpdate> completed (1.22wmf10) at Sun Jul 14 02:02:31 UTC 2013 [production]
02:01 <LocalisationUpdate> completed (1.22wmf9) at Sun Jul 14 02:01:43 UTC 2013 [production]
2013-07-13
02:29 <mutante> kernel upgrade, rebooting zirconium [production]
02:28 <mutante> copied etherpad-lite from lucid to precise [production]
02:22 <LocalisationUpdate> ResourceLoader cache refresh completed at Sat Jul 13 02:22:51 UTC 2013 [production]
02:14 <LocalisationUpdate> completed (1.22wmf10) at Sat Jul 13 02:13:55 UTC 2013 [production]
02:07 <LocalisationUpdate> completed (1.22wmf9) at Sat Jul 13 02:07:36 UTC 2013 [production]
00:15 <mutante> swift install on iron (swift cleaner & utils only) had a dependency issue: python-swift (= 1.5.0-3) required, but 1.7.4-0ubuntu2+wmf1 is installed; upgraded anyway with -f [production]
00:05 <mutante> installing package upgrades on iron [production]
2013-07-12
23:39 <gwicke> updated Parsoid to f6d3742 [production]
23:37 <springle> restarted db32 slave [production]
23:31 <springle> finished tag_summary rebuild [production]
22:39 <aaron> cleared profiling data [production]
22:33 <mutante> completely stopped and then started the squid frontend and backend on cp1001 (had 'socket already in use' errors) [production]
22:25 <springle> rebuilding tag_summary on enwiki [production]
22:24 <mutante> restarting squid on cp1001 [production]
22:06 <mutante> restarting job runners [production]
21:06 <springle> stopped db32 mysql slave threads [production]