2012-02-14
02:19 <reedy> synchronized wmf-config/ExtensionMessages-1.19.php 'Remove variablepage' [production]
01:54 <Reedy> Make that ddsh -F5 [production]
01:53 <Ryan_Lane> when rebooting hume I also applied security updates [production]
01:52 <Tim> started indexer on searchidx2 with /home/rainman/scripts/search-restart-indexer per docs [production]
01:52 <Reedy> running ddsh -F30 -cM -g mediawiki-installation /usr/bin/sync-common [production]
01:47 <Tim> rebooting srv193 [production]
01:45 <Tim> on searchidx2: doing apt-get upgrade and rebooting [production]
01:44 <Ryan_Lane> rebooting hume [production]
01:28 <binasher> resuming 1.19 schema migrations after fenari reboot (on first s4 commons slave, db22) [production]
01:19 <Tim> rebooting fenari for kernel upgrades [production]
01:14 <Tim> doing apt-get upgrade on fenari [production]
01:12 <Tim> rebooted fenari to fix stale NFS file handle [production]
00:57 <LeslieCarr> rebooted nfs1 as it was unresponsive on console and via IP [production]
00:37 <Reedy> killed /usr/local/apache/common/php-1.19 from apaches [production]
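The fan-out deploy logged at 01:52 above pushes MediaWiki code to every app server in parallel. A hedged sketch of that invocation, assuming `ddsh` is a thin wrapper around dancer's shell (`dsh`) so the flags carry dsh semantics:

```shell
# Fan-out sync across the MediaWiki app servers, as logged at 01:52.
#   -F30  limit the fork fan-out to 30 hosts at a time
#   -c    run on all hosts concurrently
#   -M    prefix each output line with the originating host name
#   -g    target the named dsh host group (a list of hosts)
ddsh -F30 -cM -g mediawiki-installation /usr/bin/sync-common
```

The 01:54 entry ("Make that ddsh -F5") suggests the fan-out was then throttled from 30 concurrent hosts down to 5, presumably to reduce load on the sync source.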
2012-02-13
23:29 <reedy> ran sync-common-all [production]
23:13 <reedy> synchronized wmf-config/abusefilter.php 'Hard code wgAbuseFilterStyleVersion as it went away in 1.19' [production]
23:05 <reedy> synchronizing Wikimedia installation... : For good measure [production]
23:01 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Switch test2wiki to 1.19wmf1 [production]
22:57 <Tim> increased concurrency on the image scalers from 10 to 15 [production]
22:33 <reedy> synchronized php-1.19/includes/api/ApiWatch.php '[[rev:111422|r111422]]' [production]
22:29 <Tim> on pdf1: killed a convert process that had been running since Jan 6 [production]
22:20 <tstarling> synchronized wmf-config/InitialiseSettings.php 'disabling the collection extension due to image scaler overload' [production]
21:28 <LeslieCarr> reloading brewster [production]
21:17 <LeslieCarr> copied a resolv.conf to brewster, apt-get upgrade on brewster and restarted lighttpd and squid on brewster [production]
20:48 <Ryan_Lane> rebooting brewster [production]
17:37 <reedy> synchronizing Wikimedia installation... : Rebuilt trusted-xff.cdb [production]
17:12 <mutante> mailman: deleting test-list [production]
16:31 <reedy> synchronized php-1.19/extensions/OggHandler/ '[[rev:111385|r111385]]' [production]
16:31 <reedy> synchronized php-1.19/extensions/PagedTiffHandler/ '[[rev:111385|r111385]]' [production]
16:20 <reedy> synchronized php-1.19/includes/ '[[rev:111382|r111382]]' [production]
16:19 <reedy> synchronized php-1.19/extensions/CategoryTree/ '[[rev:111382|r111382]]' [production]
15:11 <reedy> synchronized php-1.18//includes/ 'Bringing across 1.18wmf1 livehacks' [production]
15:03 <reedy> synchronizing Wikimedia installation... : Reverting Roan's live hacks for [[bugzilla:31576|bug 31576]] [production]
14:29 <reedy> synchronized wikimedia.dblist 'Fix double bewikimedia' [production]
14:28 <reedy> synchronized s3.dblist 'Fix double bewikimedia' [production]
14:28 <reedy> synchronized pmtpa.dblist 'Fix double bewikimedia' [production]
14:28 <reedy> synchronized all.dblist 'Fix double bewikimedia' [production]
13:44 <reedy> synchronized php-1.19/extensions/MobileFrontend [production]
02:32 <Tim> on kaulen: increased MaxClients to 500 to better deal with the connection flood [production]
02:23 <Tim> bugzilla is mostly working now, although it's very slow. The DDoS requests are blocked after connection setup using <Location> [production]
02:21 <Tim> on kaulen: restored MaxClients [production]
02:17 <LocalisationUpdate> completed (1.18) at Mon Feb 13 02:17:50 UTC 2012 [production]
01:46 <Tim> temporarily moved bugzilla to port 444 until the connection flood (~1k req/s) subsides [production]
01:15 <Tim> started apache with MaxClients=30 [production]
00:59 <Tim> after kaulen came back up, it was immediately overloaded with jsonrpc.cgi. Stopped apache. [production]
00:54 <Tim> kaulen is not responding on ssh, web down, rebooting [production]
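The kaulen incident above (00:54–02:32, read bottom-up) combines two Apache-level mitigations: throttling `MaxClients` while the box recovered, then raising it once the flood of `jsonrpc.cgi` requests was being rejected "after connection setup using &lt;Location&gt;". A hedged reconstruction of what that Apache 2.2-era configuration might have looked like; the exact path and directives are assumptions, not taken from the log:

```apache
# Prefork MPM: cap concurrent workers (set to 30 during recovery,
# raised to 500 once the flood was being rejected cheaply).
MaxClients 500

# Accept the TCP connection, then refuse the abusive endpoint so each
# flood request ties up a worker only briefly.
<Location "/jsonrpc.cgi">
    Order deny,allow
    Deny from all
</Location>
```

Rejecting inside a `<Location>` block is cheaper than serving the CGI: the request still consumes a connection slot, but returns a 403 immediately instead of forking a Bugzilla process, which is why `MaxClients` could safely go back up to 500.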
2012-02-12
12:09 <mark> Killed lsearchd processes on search8, restarted [production]
12:07 <mark> Rebalanced mw API app servers from load 120 to 150 in pybal list [production]
10:08 <mark> Increased MaxClients to 100 on API apaches in Puppet [production]
09:45 <mark> Restricted only opensearch API requests to the API squids [production]