2012-02-14
01:14 <Tim> doing apt-get upgrade on fenari [production]
01:12 <Tim> rebooted fenari to fix stale NFS file handle [production]
00:57 <LeslieCarr> rebooted nfs1 as it was unresponsive on console and via IP [production]
00:37 <Reedy> killed /usr/local/apache/common/php-1.19 from apaches [production]
2012-02-13
23:29 <reedy> ran sync-common-all [production]
23:13 <reedy> synchronized wmf-config/abusefilter.php 'Hard code wgAbuseFilterStyleVersion as it went away in 1.19' [production]
23:05 <reedy> synchronizing Wikimedia installation... : For good measure [production]
23:01 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: Switch test2wiki to 1.19wmf1 [production]
22:57 <Tim> increased concurrency on the image scalers from 10 to 15 [production]
22:33 <reedy> synchronized php-1.19/includes/api/ApiWatch.php '[[rev:111422|r111422]]' [production]
22:29 <Tim> on pdf1: killed a convert process that had been running since Jan 6 [production]
22:20 <tstarling> synchronized wmf-config/InitialiseSettings.php 'disabling the collection extension due to image scaler overload' [production]
21:28 <LeslieCarr> reloading brewster [production]
21:17 <LeslieCarr> copied a resolv.conf to brewster, apt-get upgrade on brewster and restarted lighttpd and squid on brewster [production]
20:48 <Ryan_Lane> rebooting brewster [production]
17:37 <reedy> synchronizing Wikimedia installation... : Rebuilt trusted-xff.cdb [production]
17:12 <mutante> mailman: deleting test-list [production]
16:31 <reedy> synchronized php-1.19/extensions/OggHandler/ '[[rev:111385|r111385]]' [production]
16:31 <reedy> synchronized php-1.19/extensions/PagedTiffHandler/ '[[rev:111385|r111385]]' [production]
16:20 <reedy> synchronized php-1.19/includes/ '[[rev:111382|r111382]]' [production]
16:19 <reedy> synchronized php-1.19/extensions/CategoryTree/ '[[rev:111382|r111382]]' [production]
15:11 <reedy> synchronized php-1.18//includes/ 'Bringing across 1.18wmf1 livehacks' [production]
15:03 <reedy> synchronizing Wikimedia installation... : Reverting Roan's live hacks for [[bugzilla:31576|bug 31576]] [production]
14:29 <reedy> synchronized wikimedia.dblist 'Fix double bewikimedia' [production]
14:28 <reedy> synchronized s3.dblist 'Fix double bewikimedia' [production]
14:28 <reedy> synchronized pmtpa.dblist 'Fix double bewikimedia' [production]
14:28 <reedy> synchronized all.dblist 'Fix double bewikimedia' [production]
13:44 <reedy> synchronized php-1.19/extensions/MobileFrontend [production]
02:32 <Tim> on kaulen: increased MaxClients to 500 to better deal with the connection flood [production]
02:23 <Tim> bugzilla is mostly working now, although it's very slow. The DDoS requests are blocked after connection setup using <Location> [production]
02:21 <Tim> on kaulen: restored MaxClients [production]
02:17 <LocalisationUpdate> completed (1.18) at Mon Feb 13 02:17:50 UTC 2012 [production]
01:46 <Tim> temporarily moved bugzilla to port 444 until the connection flood (~1k req/s) subsides [production]
01:15 <Tim> started apache with MaxClients=30 [production]
00:59 <Tim> after kaulen came back up, it was immediately overloaded with jsonrpc.cgi. Stopped apache. [production]
00:54 <Tim> kaulen is not responding on ssh, web down, rebooting [production]
2012-02-12
12:09 <mark> Killed lsearchd processes on search8, restarted [production]
12:07 <mark> Rebalanced mw API app servers from load 120 to 150 in pybal list [production]
10:08 <mark> Increased MaxClients to 100 on API apaches in Puppet [production]
09:45 <mark> Restricted only opensearch API requests to the API squids [production]
09:43 <mark> Restricted only opensearch API requests to the API backend apaches, other API requests now hit the main mediawiki cluster [production]
08:44 <mark> maximum_forwards change deployed to all squids [production]
08:42 <mark> Set maximum_forwards 2 in squid.conf, deployed to the API squids only so far, rest is pending [production]
07:52 <binasher> restarted lsearchd on search{3,4,9} [production]
02:19 <LocalisationUpdate> completed (1.18) at Sun Feb 12 02:19:17 UTC 2012 [production]
2012-02-11
20:31 <apergos> restarted lighttpd on dataset2 [production]
17:28 <RobH> manual test of each affected service complete, db9 fully online. [production]
17:26 <RobH> db9 moved, all systems online [production]
17:08 <RobH> db9 shutting down to move racks, offline during this includes: blogs, bugzilla, racktables, rt, survey, etherpad, observium [production]
02:18 <LocalisationUpdate> completed (1.18) at Sat Feb 11 02:18:36 UTC 2012 [production]