2011-04-09
07:58 <apergos> power cycling searchidx1, load was at 60, unresponsive to commands after login from mgmt console [production]
02:46 <RobH> troubleshooting a couple new wikis, had to sync-apaches and restart them gracefully [production]
01:01 <notpeter> changed my.cnf on storage3 to replicate-do-db= drupal,mysql,civicrm [production]
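For reference: MySQL's replicate-do-db option does not accept a comma-separated list; each database needs its own occurrence of the option, so the change on storage3 presumably amounts to a [mysqld] stanza along these lines (file path and surrounding settings assumed):

    # /etc/mysql/my.cnf on storage3 (path assumed), [mysqld] section
    replicate-do-db = drupal
    replicate-do-db = mysql
    replicate-do-db = civicrm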
00:50 <Ryan_Lane> installing nova-ajax-console-proxy on nova-controller.tesla [production]
2011-04-08
23:19 <laner> ran sync-common-all [production]
23:18 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:50 <Ryan_Lane> that graceful was me [production]
22:28 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:16 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:14 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:07 <laner> ran sync-common-all [production]
22:06 <Ryan_Lane> gave myself deploy access in svn [production]
17:10 <notpeter> pushing out new dns zones. forgot to change ptr record for yvon... [production]
15:09 <RobH> updating dns with testblog info [production]
13:36 <mark> Added swap on /dev/sdc1 and /dev/sdd1 on ms5 [production]
13:34 <mark> Stopped RAID10 array /dev/md2 again, sync takes too long [production]
13:30 <mark> Created RAID10 array for swap across first partition of 46 drives on ms5 [production]
13:21 <mark> Stopped all rsyncs to investigate ms5's sudden kswapd system cpu load [production]
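Read bottom-up, the 13:21–13:36 entries above describe trying a RAID10 swap device on ms5 and then falling back to plain swap partitions when the initial array sync proved too slow. A rough sketch of those steps, with the device names taken from the log and everything else (drive naming scheme, mdadm options) assumed:

    # create a RAID10 array for swap across the first partition of 46 drives (13:30)
    mdadm --create /dev/md2 --level=10 --raid-devices=46 /dev/sd[a-z]1 /dev/sda[a-t]1
    mkswap /dev/md2 && swapon /dev/md2
    # abandon the array because the initial sync takes too long (13:34)
    swapoff /dev/md2 && mdadm --stop /dev/md2
    # use two plain partitions as swap instead (13:36)
    mkswap /dev/sdc1 && swapon /dev/sdc1
    mkswap /dev/sdd1 && swapon /dev/sdd1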
07:57 <apergos> assigned snapshot1 internal ip in dns [production]
06:11 <apergos> moving snapshot1 to internal vlan etc [production]
04:15 <notpeter> pushing new dns w/ eixamanius as cname for hooper and yvon as new name for box that was previously eixamanius [production]
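Together with the 17:10 entry above, this amounts to a rename in DNS: the old eixamanius box becomes yvon, eixamanius itself becomes a CNAME for hooper, and the reverse (PTR) record for yvon was forgotten on the first push. In zone-file terms the change looks roughly like this (addresses are documentation-range placeholders; zone names and TTLs omitted):

    ; forward zone
    yvon        IN  A      192.0.2.10        ; box previously named eixamanius
    eixamanius  IN  CNAME  hooper            ; eixamanius now an alias for hooper
    ; reverse zone -- the PTR record initially forgotten for yvon
    10          IN  PTR    yvon.wikimedia.org.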
04:15 <notpeter> stopping etherpad [production]
2011-04-07
21:43 <notpeter> removed a silly check for hooper that I made and restarted nagios [production]
19:06 <Ryan_Lane> switching openstack deb repo back to trunk, and upgrading packages on nova-controller, since we are likely to target cactus now [production]
15:40 <mark> Restarted rsyncs [production]
15:26 <mark> Created a test btrfs snapshot of /export on ms6 [production]
15:12 <mark> Temporarily stopped the rsyncs on ms5 to test zfs send performance [production]
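The 15:12 and 15:26 entries above are snapshot experiments on the two media storage boxes: a zfs send throughput test on ms5 and a btrfs snapshot of /export on ms6. Roughly, with dataset and snapshot names assumed:

    # ms5 (ZFS): snapshot the export dataset and time a send to /dev/null
    zfs snapshot export/thumbs@perftest
    time zfs send export/thumbs@perftest > /dev/null
    # ms6 (btrfs): test snapshot of /export (destination path assumed)
    btrfs subvolume snapshot /export /export/snap-test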
13:56 <mark> Reenabled ms6 as backend on esams.upload squids [production]
13:11 <apergos> replaced ms4 in fstab on fenari with ms5 so we have thumbs mounted there [production]
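The 13:11 entry swaps the NFS source of the thumbs mount on fenari from ms4 to ms5; in /etc/fstab terms that is a one-line change along these lines (export path as in the 11:47 entry below; mount point and options assumed):

    # /etc/fstab on fenari -- thumbs mount now served by ms5 instead of ms4
    ms5:/export/thumbs   /mnt/thumbs   nfs   ro,hard,intr   0   0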
12:08 <mark> Restarted rsyncs on ms5 [production]
12:07 <apergos> nginx conf file change to "thumb" added to puppet [production]
12:00 <mark> Removed the test snapshot on ms5 [production]
11:47 <apergos> edited in place /etc/nginx/sites-available/thumbs and /export/thumbs/scripts/thumb-handler.php to make thumbs generated on the fly return 200. they were returning 404 [production]
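The setup implied by the 11:47 entry is that nginx serves existing thumbnails directly and hands misses to thumb-handler.php, which renders the thumbnail on the fly; the bug was that the freshly generated image went out with the 404 status of the original miss. A sketch of the shape of that handoff (file names from the log, everything else assumed):

    # /etc/nginx/sites-available/thumbs -- sketch only, real config differs
    location /thumbs/ {
        root /export;
        # serve the file if it already exists, otherwise fall through to the handler
        try_files $uri @thumb_handler;
    }
    location @thumb_handler {
        # thumb-handler.php must answer 200 for a successfully generated thumb,
        # not pass through the 404 from the cache miss
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /export/thumbs/scripts/thumb-handler.php;
        fastcgi_pass unix:/var/run/php-fastcgi.sock;
    }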
10:25 <catrope> synchronized php-1.17/thumb.php 'Attempted fix for wrong temp/thumb paths' [production]
10:20 <apergos> (after reports on the English Wikipedia Village Pump that the search index has not been updated for over 4 days) [production]
10:19 <apergos> restarting search indexer on searchidx1 [production]
09:35 <catrope> synchronized php-1.17/includes/specials/SpecialUploadStash.php 'debugging' [production]
09:08 <catrope> synchronized php-1.17/includes/specials/SpecialUploadStash.php 'r85612' [production]
08:23 <catrope> synchronized php-1.17/extensions/UploadWizard/UploadWizard.php 'Fix fatal due to missing API module' [production]
08:17 <catrope> ran sync-common-all [production]
08:16 <RoanKattouw> I meant srv196, not srv193 [production]
08:15 <RoanKattouw> Deploying UploadWizard for real this time, forgot to svn up first. sync-common-all then clearMessageBlobs.php [production]
08:14 <RoanKattouw> Commenting out srv193 in mediawiki-installation node list because its timeouts take forever [production]
08:10 <RoanKattouw> srv196 is not responding to SSH or syncs from fenari (they time out after a looong time) but Nagios says SSH is fine. Should be fixed or temporarily depooled [production]
08:08 <RoanKattouw> Clearing message blobs [production]
08:07 <catrope> ran sync-common-all [production]
08:04 <RoanKattouw> Scap broke with sudo stuff AGAIN, running sync-common-all [production]
08:01 <RoanKattouw> Running scap to deploy UploadWizard changes [production]
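Read bottom-up, the 08:01–08:23 entries above are one UploadWizard deployment: scap fails on sudo, sync-common-all is used instead, message blobs are cleared, a forgotten svn up forces a second pass, and a fatal from a missing API module gets a follow-up sync. The intended sequence from fenari is roughly the following (deployment root and maintenance-script invocation assumed):

    cd /home/wikipedia/common/php-1.17          # deployment root on fenari (path assumed)
    svn up extensions/UploadWizard               # the step forgotten on the first attempt
    sync-common-all                              # push the updated tree to every apache
    mwscript clearMessageBlobs.php --wiki=commonswiki   # clear cached message blobs; wrapper and args assumed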
07:11 <apergos> turned em off again, started seeing timeouts. bah [production]
06:39 <apergos> and two more... [production]