2010-11-04
17:59 <RobH> doing puppet runs and final setup for srv290-srv301 [production]
16:56 <rfaulk> Added numpy Python package to grosley.wikimedia.org with apt-get ... For use in the 2010/11 fundraiser to facilitate stats gathering by providing scientific computing functionality in Python [production]
16:43 <rfaulk> Added MySQLdb Python package on grosley.wikimedia.org with apt-get ... This package will be used to access fundraising databases to facilitate the gathering and synthesis of relevant statistics for the 2010/11 Wikimedia fundraiser [production]
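For reference, installs like the two above would normally map to the stock Ubuntu packages; a minimal sketch, assuming the standard python-numpy and python-mysqldb package names:
    apt-get update
    apt-get install python-numpy python-mysqldb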
16:23 <mark> Set storage1 (varnish) as upload backend on sq41-50, instead of ms4 [production]
16:14 <RobH> sq59 is being bitchy and won't clean the cache, possible hdd issue? will investigate later [production]
15:42 <RobH> sq35 back in rotation [production]
15:34 <mark> Added storage1 (varnish->ms4) as an HTTP backend to sq45's squid config [production]
15:34 <RobH> commenting out sq35, trying to make it work again in pybal [production]
15:16 <RobH> poking at sq59 [production]
15:06 <RobH> sq35 back online, pushed into lvs, partially up - may need to wait up to 5 for idleconnect timer [production]
14:46 <RobH> pushed dns updates for new payments boxes and correcting owadb1/2 to db31/32 [production]
14:28 <RobH> sq35 set to false in pybal until I determine what's wrong with it [production]
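A sketch of what depooling sq35 looks like on the PyBal side, assuming the usual one-server-per-line pool file format (hostname and weight here are illustrative, not copied from the real config):
    # upload squid pool file
    { 'host': 'sq35.wikimedia.org', 'weight': 10, 'enabled': False }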
14:09 <mark> Reduced CARP weight of sq41-50 from 10 to 5 [production]
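For context on the storage1 backend and CARP weight entries above: both changes live in squid.conf cache_peer lines, roughly as sketched below, with hostnames and ports illustrative rather than taken from the deployed config:
    # frontend squid: one CARP member with its load-share weight halved
    cache_peer sq45.wikimedia.org parent 3128 0 carp weight=5
    # backend squid: storage1 added as an origin-server parent for upload traffic
    cache_peer storage1.wikimedia.org parent 80 0 no-query originserver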
13:37 <RobH> sq35 may flag, disregard [production]
13:30 <RoanKattouw> Removed uploadwizard test wiki on prototype, gonna set it up on the Commons prototype instead [production]
04:17 <atglenn> ganglia 3.1 now running on ms4 and ms5 [production]
01:44 <RobH> srv217 back in cluster [production]
00:36 <RobH> torrus back online [production]
00:29 <RobH> fixing torrus deadlock, no touchy [production]
00:18 <tomaszf> upped open fd's on loudon to 4096 [production]
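A sketch of raising the open-file limit as in the loudon entry above; the exact mechanism isn't recorded here, so this assumes pam_limits (or a one-off ulimit):
    # /etc/security/limits.conf
    *    soft    nofile    4096
    *    hard    nofile    4096
    # or, for the current shell only:
    ulimit -n 4096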
00:17 <RobH> kicking srv217 for reinstall [production]
2010-11-03
21:22 <RobH> updated puppet to properly remove memcached from memcached::false entries and removed the host memcached check for servers no longer running memcached, hup'd nagios to take the change [production]
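Roughly what the puppet side of the memcached cleanup above could look like; this is a sketch only, the memcached::false name comes from the entry above and the resource bodies are assumptions:
    # stop and remove memcached on hosts marked memcached::false
    service { 'memcached': ensure => stopped, enable => false }
    package { 'memcached': ensure => absent, require => Service['memcached'] }
The nagios side is just the HUP mentioned in the entry, so nagios re-reads its config without the stale host check.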
21:21 <atglenn> rebooting ms5 after OS update. Note that we were unable to get some of the more recent patches; they are probably from after the Sun->Oracle transition [production]
21:02 <nimishg> synchronized php-1.5/extensions/LandingCheck/LandingCheck.i18n.php 'r75890' [production]
21:02 <nimishg> synchronized php-1.5/extensions/LandingCheck/LandingCheck.alias.php 'r75890' [production]
21:01 <nimishg> synchronized php-1.5/extensions/LandingCheck/SpecialLandingCheck.php 'r75890' [production]
21:01 <nimishg> synchronized php-1.5/extensions/LandingCheck/LandingCheck.php 'r75890' [production]
20:31 <atglenn> removed about 1.5T of stuff off of /export on ms4 (old backups, solaris isos, etc) [production]
19:41 <catrope> synchronized php-1.5/README 'Dummy sync so I can document what the errors look like' [production]
19:32 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'Backing out config change for stats fix' [production]
19:31 <RobH> srv281 still down, setting to false in pybal just so it doesn't keep trying to use it [production]
18:31 <RobH> reinstalling srv281, tired of looking at it in red [production]
17:18 <mark> Upgraded storage1 to Lucid [production]
16:42 <mark> Removing 2010-03 snapshots on ms4 [production]
16:01 <mark> Fixed sshd on ms4 [production]
15:46 <mark> Removing 2010-02 snapshots on ms4 [production]
15:45 <mark> Disabled gmetric cron jobs on ms4 [production]
15:43 <mark> Disabled daily snapshot generation on ms4 [production]
15:27 <mark> Restarted gmond on ms4 [production]
15:24 <mark> Upgraded puppet on ms4 [production]
15:13 <mark> Powercycled knsq2 [production]
14:52 <mark> Removing daily snapshots for 2010-10 on ms4 [production]
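The ms4 snapshot work above is almost certainly ZFS housekeeping (ms4 is the Solaris media-storage box referenced elsewhere in this log); a sketch of the kind of commands involved, with the dataset and snapshot names invented for illustration:
    # list a month's worth of snapshots, then destroy them one by one
    zfs list -t snapshot -o name | grep 2010-10
    zfs destroy export/upload@daily-2010-10-01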
14:24 <mark> Restored /etc/sudoers file on DB machines butchered by old versions of wikimedia-raid-utils [production]
05:34 <tstarling> synchronized php-1.5/includes/Math.php 'r75909' [production]
04:52 <apergos> oh btw, I notice that when / on the squids fills, we don't see it in ganglia, it must report an aggregate or something. it would sure be nice to get notified. [production]
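On the point above about a full / going unnoticed: the stock nagios check_disk plugin is the kind of check that would give that notification; thresholds and mount point below are illustrative:
    /usr/lib/nagios/plugins/check_disk -w 10% -c 5% -p /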
04:18 <apergos> lather rinse repeat for sq47, I hope that's all of 'em [production]
03:46 <apergos> repeated on sq45... [production]
03:13 <apergos> same old story on sq46... restarted syslog, reloaded squid, got back some space on / [production]
02:41 <apergos> er... and deleted the log file :-P [production]
02:38 <apergos> moved ginormous cache.log out of the way on sq48 and reloaded squid over there since it wasn't done earlier [production]
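For the record, the cache.log cleanup described in the sq45-sq48 entries above boils down to a sequence like this (paths assumed to be the stock squid locations, not confirmed from the hosts):
    mv /var/log/squid/cache.log /var/log/squid/cache.log.old   # get the huge file off /
    /etc/init.d/squid reload                                   # reload squid, as done in the entries above
    /etc/init.d/sysklogd restart                               # or rsyslog; syslog was also pinning space on /
    rm /var/log/squid/cache.log.old
    df -h /                                                    # confirm the space came back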