2009-04-28
01:48 <Andrew> Updating configuration to change tor settings. [production]
2009-04-27
23:42 <tstarling> synchronized php-1.5/db.php 'gave the current ES masters some read load' [production]
23:05 <Tim> increased connection limit on temp-es* from 100 to 500 [production]
18:31 <Rob> srv138, srv139, & srv145 reinstalled and online. [production]
18:24 <brion> stopped apache and unmounted amane from srv184 (ES slave). load is way too high on this box for some reason [production]
18:24 <Rob> removed amane from mounts on srv184 [production]
18:01 <Rob> srv145 reinstalling [production]
17:58 <Rob> some quirks from various memcached hosts being reinstalled; issues seem to be resolved now. [production]
17:56 <robh> synchronized php-1.5/mc-pmtpa.php 'removing reinstalling servers' [production]
17:54 <robh> synchronized php-1.5/mc-pmtpa.php 'removing reinstalling servers' [production]
17:43 <Rob> srv129 back online [production]
17:43 <Rob> reinstalling srv138 and srv139 [production]
17:24 <Rob> srv126 up and online [production]
17:11 <Rob> srv126 and srv129 being reinstalled. [production]
17:09 <Rob> srv86 and srv87 up and online [production]
16:49 <Rob> srv86 and srv87 upgrading to ubuntu [production]
16:42 <Rob> srv107 online [production]
16:38 <robh> synchronized php-1.5/mc-pmtpa.php 'Removing srv120-srv123 for other testing' [production]
16:35 <robh> synchronized php-1.5/mc-pmtpa.php 'removing srv156' [production]
16:22 <Rob> srv120-srv123 reinstalled, NOT online. Base OS, nothing else, passed on to mark for his testing. (Puppet I assume.) [production]
15:48 <Rob> srv120-123 going down for reinstallation [production]
15:45 <Rob> srv108 and srv109 up and online [production]
15:06 <Rob> srv108 and srv109 are in mid-install for ubuntu [production]
15:06 <Rob> srv107 won't restart for some reason; adding to tasks to troubleshoot. [production]
15:04 <Rob> srv105 and srv106 back up and online [production]
14:56 <Rob> srv107-srv109 going down [production]
14:54 <Rob> srv104 back online [production]
14:48 <Rob> srv102 and srv103 back up and online [production]
14:43 <Rob> srv102-106 reinstalling. [production]
14:29 <Rob> srv53 has a bad fan, shutting down until it's replaced. [production]
14:20 <Rob> srv102-srv109 being upgraded to ubuntu. [production]
11:43 <andrew> synchronized php-1.5/InitialiseSettings.php 'Updated $wgSitename for ukwikimedia in accordance with IRC request from Michael Peel, a board member' [production]
02:20 <Tim> srv53 down, took it out of memcached rotation. Updating the memcached spare list. [production]
02:20 <tstarling> synchronized php-1.5/mc-pmtpa.php [production]
02:12 <Tim> fixed rc1 slaves, broken by expire_logs_days on ms3 [production]
01:59 <Tim> Shut down srv217 for maintenance. Similar timer interrupt issue observed as before: select() syscalls running indefinitely despite a short timeout specified. [production]
01:53 <tstarling> synchronized php-1.5/db.php [production]
01:52 <Tim> repooled ms3 rc1 instance [production]
01:49 <Tim> reset slave on db21, was running out of disk space due to relay logs [production]
01:42 <Tim> fixed nagios for srv99, still had its apache check command set to my CGI security vulnerability demonstration, permanently saved in retention.dat despite config changes [production]
01:17 <Tim> enabled apport on srv99, to see if I can track down the nagios flapping [production]
00:52 <Tim> restarted trackBlobs.php [production]
2009-04-25
23:31 <Tim-away> experimentally stopping replication on db3 to check disk load [production]
22:51 <tstarling> synchronized php-1.5/db.php 'reduced load on db3' [production]
18:50 <mark> Killed long-running TrackBlobs::trackRevisions SQL query from hume that was causing db3 to lag heavily [production]
17:22 <mark> Stopped Apaches on srv32/srv33 again, as syncs will fail in most cases [production]
16:36 <mark> Started /home-less apache on srv33 [production]
13:23 <mark> Started /home-less apache on srv32 [production]
11:03 <mark> Kicked srv99 back into submission [production]
10:56 <mark> Squid-blocked high-rate scraper which was overloading ES [production]