2011-04-08
17:10 <notpeter> pushing out new dns zones. forgot to change ptr record for yvon... [production]
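(A quick way to confirm whether a PTR record went out with a zone push is a reverse lookup; the address below is only a placeholder, not yvon's real IP:
    dig -x 10.0.0.1 +short    # should print yvon's hostname once the PTR is fixed
)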
15:09 <RobH> updating dns with testblog info [production]
13:36 <mark> Added swap on /dev/sdc1 and /dev/sdd1 on ms5 [production]
13:34 <mark> Stopped RAID10 array /dev/md2 again, sync takes too long [production]
13:30 <mark> Created RAID10 array for swap across first partition of 46 drives on ms5 [production]
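(Roughly what this sequence would look like on the command line; the array name, level, and drive count come from the entries above, while the device globs, swap priorities, and exact flags are guesses:
    # build a RAID10 array for swap from the first partition of the 46 drives
    mdadm --create /dev/md2 --level=10 --raid-devices=46 /dev/sd[a-z]1 /dev/sda[a-t]1
    mkswap /dev/md2 && swapon /dev/md2
    # after the resync proved too slow (13:34), fall back to plain swap partitions (13:36)
    mdadm --stop /dev/md2
    mkswap /dev/sdc1 && swapon /dev/sdc1
    mkswap /dev/sdd1 && swapon /dev/sdd1
)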
13:21 <mark> Stopped all rsyncs to investigate ms5's sudden kswapd system cpu load [production]
07:57 <apergos> assigned snapshot1 internal ip in dns [production]
06:11 <apergos> moving snapshot1 to internal vlan etc [production]
04:15 <notpeter> pushing new dns w/ eixamanius as cname for hooper and yvon as new name for box that was previously eixamanius [production]
04:15 <notpeter> stopping etherpad [production]
2011-04-07
21:43 <notpeter> removed a silly check for hooper that I made and restarted nagios [production]
19:06 <Ryan_Lane> switching openstack deb repo back to trunk, and upgrading packages on nova-controller, since we are likely to target cactus now [production]
15:40 <mark> Restarted rsyncs [production]
15:26 <mark> Created a test btrfs snapshot of /export on ms6 [production]
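(A test snapshot like this is a one-liner; the subvolume layout and snapshot name here are assumptions:
    btrfs subvolume snapshot /export /export/snap-test    # names are illustrative
)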
15:12 <mark> Temporarily stopped the rsyncs on ms5 to test zfs send performance [production]
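(A zfs send throughput test typically means snapshotting a dataset and streaming it to a sink; the pool/dataset name below is an assumption:
    zfs snapshot export/thumbs@perftest
    time zfs send export/thumbs@perftest > /dev/null    # rough send-rate measurement
)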
13:56 <mark> Reenabled ms6 as backend on esams.upload squids [production]
13:11 <apergos> replaced ms4 in fstab on fenari with ms5 so we have thumbs mounted there [production]
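(Swapping the NFS server in fstab is usually a one-line edit plus a remount; the mount point is illustrative, only the ms4-to-ms5 substitution comes from the entry above:
    sed -i 's/ms4:/ms5:/' /etc/fstab
    umount /mnt/thumbs && mount /mnt/thumbs    # remount so fenari picks up ms5
)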
12:08 <mark> Restarted rsyncs on ms5 [production]
12:07 <apergos> nginx conf file change to "thumb" added to puppet [production]
12:00 <mark> Removed the test snapshot on ms5 [production]
11:47 <apergos> edited in place /etc/nginx/sites-available/thumbs and /export/thumbs/scripts/thumb-handler.php to make thumbs generated on the fly return 200. they were returning 404 [production]
10:25 <catrope> synchronized php-1.17/thumb.php 'Attempted fix for wrong temp/thumb paths' [production]
10:20 <apergos> (after reports from en vp that the search index has not been updated for over 4 days) [production]
10:19 <apergos> restarting search indexer on searchidx1 [production]
09:35 <catrope> synchronized php-1.17/includes/specials/SpecialUploadStash.php 'debugging' [production]
09:08 <catrope> synchronized php-1.17/includes/specials/SpecialUploadStash.php 'r85612' [production]
08:23 <catrope> synchronized php-1.17/extensions/UploadWizard/UploadWizard.php 'Fix fatal due to missing API module' [production]
08:17 <catrope> ran sync-common-all [production]
08:16 <RoanKattouw> I meant srv196, not srv193 [production]
08:15 <RoanKattouw> Deploying UploadWizard for real this time, forgot to svn up first. sync-common-all then clearMessageBlobs.php [production]
08:14 <RoanKattouw> Commenting out srv193 in mediawiki-installation node list because its timeouts take forever [production]
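(Commenting a host out of the dsh node list is a one-line edit; the group file path follows dsh's usual layout and is an assumption:
    sed -i 's/^srv193$/# srv193/' /etc/dsh/group/mediawiki-installation
)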
08:10 <RoanKattouw> srv196 is not responding to SSH or syncs from fenari (they time out after a looong time) but Nagios says SSH is fine. Should be fixed or temporarily depooled [production]
08:08 <RoanKattouw> Clearing message blobs [production]
08:07 <catrope> ran sync-common-all [production]
08:04 <RoanKattouw> Scap broke with sudo stuff AGAIN, running sync-common-all [production]
08:01 <RoanKattouw> Running scap to deploy UploadWizard changes [production]
07:11 <apergos> turned em off again, started seeing timeouts. bah [production]
06:39 <apergos> and two more... [production]
06:31 <apergos> restarted two of the 8 rsyncs on ms5, keeping an eye on them [production]
01:31 <domas> added nobarrier to xfs mount options on db32 and db37 [production]
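(This is normally an fstab edit so the option survives reboots, plus a remount cycle to apply it; the device and mount point below are illustrative:
    # /etc/fstab options field gains ",nobarrier", e.g.:
    #   /dev/sda4  /a  xfs  defaults,noatime,nobarrier  0 0
    umount /a && mount /a    # with mysqld stopped, so the new options take effect
)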
2011-04-06
20:38 <RobH> updated puppet with a svn::client class (rt#721) [production]
20:18 <RobH> pulled wm09schols, wm10schols, and wm10reg out of enabled sites on singer [production]
20:05 <apergos> suspended all rsyncs on ms5, we were seeing nfs timeouts on the renderers all of a sudden [production]
18:50 <apergos> killed morebots and let the restart script start it up again [production]
2011-04-05
23:00 <Ryan_Lane> restarting search indexer on searchidx to free space held by deleted logs [production]
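(Space "held by deleted logs" shows up as open handles on unlinked files, which is why a restart frees it; lsof makes them easy to spot first:
    lsof +L1    # lists open-but-deleted files still pinning disk space
)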
22:58 <Ryan_Lane> clearing up some space on searchidx1 [production]
22:20 <notpeter> crammed an etherpad db into db9's mysql hole. [production]
17:57 <Ryan_Lane> restarting llsearchd on all search boxes [production]
17:45 <RoanKattouw> Restarted morebots, running on wikitech as catrope [production]
17:45 <Ryan_Lane> changing the udp log location for search to emery [production]