2011-04-12
02:11 <tstarling> synchronized php-1.17/includes/Sanitizer.php 'r85859' [production]
01:30 <root> synchronizing Wikimedia installation... Revision: 85853: [production]
01:28 <Tim> svn up/scap to deploy r85848 [production]
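For context, the two deploy entries above reflect the tooling of the period: scap synced the full tree to all apaches after an svn update, while single-file pushes (the 02:11 entry) were done with sync-file, which also writes the quoted log message. A rough sketch, assuming the then-current /home/wikipedia/common checkout path and plain invocations (path and arguments are assumptions):

    # svn up/scap deploy noted at 01:28 (checkout path is an assumption)
    cd /home/wikipedia/common/php-1.17
    svn up -r 85848      # bring the working copy to the target revision
    scap                 # rebuild localisation cache and rsync to all apaches

    # single-file push like the 02:11 entry; the message lands in this log
    sync-file php-1.17/includes/Sanitizer.php 'r85859'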
00:31 <Ryan_Lane> installing glance on nova-controller.tesla [production]
2011-04-11
22:13 <Ryan_Lane> installing python-nova.adminclient on nova-controller.tesla [production]
21:19 <notpeter> added lag check to storage3. restarting nagios [production]
20:33 <Ryan_Lane> pushing squid-2.7.9-1wm1 to wikimedia-lucid repo [production]
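Pushing a rebuilt package into the wikimedia-lucid repo would typically be a reprepro import; a minimal sketch, where the repo base directory and distribution codename are assumptions:

    # import the rebuilt squid package (repo path and codename assumed)
    reprepro -b /srv/wikimedia includedeb lucid-wikimedia squid_2.7.9-1wm1_amd64.deb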
05:15 <Tim> did squid configuration update for bug 28235 [production]
04:40 <aaron> synchronized php-1.17/wmf-config/flaggedrevs.php 'Made categories reviewable on kawiki' [production]
03:23 <aaron> synchronized php-1.17/wmf-config/flaggedrevs.php [production]
03:21 <aaron> synchronized php-1.17/wmf-config/InitialiseSettings.php 'flaggedrevs for kawiki' [production]
03:13 <AaronSchulz> Enabled FlaggedRevs for Georgian Wikipedia [production]
03:11 <aaron> ran sync-common-all [production]
2011-04-10
05:22 <ariel> synchronized php-1.17/wmf-config/CommonSettings.php 'increase account creation throttle value for el wiki for editing workshop (can't wait for local sysadmins to be able to do this :-P)' [production]
2011-04-09
17:07 <ariel> ran sync-common-all 'sync for elwikinews round 2, let's get the import right this time folks cause this is too nerve-wracking' [production]
16:40 <ariel> synchronized all.dblist 'remove elwikinews, need to drop and recreate after borked import' [production]
16:29 <robh> synchronized php-1.17/wmf-config/InitialiseSettings.php 'logo updates for a couple wikis for phillipe' [production]
10:10 <mark> Changed dataset1's clock source to HPET, synced it with ntpdate and restarted ntpd [production]
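The clock-source change above maps onto the standard Linux sysfs interface; a sketch, assuming ntpd is managed by the ntp init script and stepping against an internal time server (the server name is a placeholder):

    cat /sys/devices/system/clocksource/clocksource0/available_clocksource
    echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource
    /etc/init.d/ntp stop
    ntpdate ntp.example.org    # step the clock while ntpd is down
    /etc/init.d/ntp start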
07:58 <apergos> power cycling searchidx1, load was at 60, unresponsive to commands after login from mgmt console [production]
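If the management console here was IPMI-based, the power cycle would look roughly like this; the log doesn't record the exact console tooling, so the host and credentials below are placeholders:

    ipmitool -I lanplus -H searchidx1.mgmt.example -U root chassis power cycle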
02:46 <RobH> troubleshooting a couple new wikis, had to sync-apaches and restart them gracefully [production]
01:01 <notpeter> changed my.cnf on storage3 to replicate-do-db= drupal,mysql,civicrm [production]
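Worth noting: MySQL's replicate-do-db option takes a single database per directive; a comma-separated value is parsed as one database name containing commas. The intended my.cnf stanza is therefore three lines, read at mysqld startup:

    [mysqld]
    replicate-do-db = drupal
    replicate-do-db = mysql
    replicate-do-db = civicrm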
00:50 <Ryan_Lane> installing nova-ajax-console-proxy on nova-controller.tesla [production]
2011-04-08
23:19 <laner> ran sync-common-all [production]
23:18 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:50 <Ryan_Lane> that graceful was me [production]
22:28 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:16 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:14 <laner> synchronized php-1.17/wmf-config/InitialiseSettings.php [production]
22:07 <laner> ran sync-common-all [production]
22:06 <Ryan_Lane> gave myself deploy access in svn [production]
17:10 <notpeter> pushing out new dns zones. forgot to change ptr record for yvon... [production]
15:09 <RobH> updating dns with testblog info [production]
13:36 <mark> Added swap on /dev/sdc1 and /dev/sdd1 on ms5 [production]
13:34 <mark> Stopped RAID10 array /dev/md2 again, sync takes too long [production]
13:30 <mark> Created RAID10 array for swap across first partition of 46 drives on ms5 [production]
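Reading the three ms5 swap entries above bottom-up (create the RAID10, abandon it mid-resync, fall back to plain partitions), the sequence corresponds roughly to the following; md2, sdc1, and sdd1 come from the log, the 46-drive list is an assumption:

    # create a 46-drive RAID10 across each disk's first partition (drive list assumed)
    mdadm --create /dev/md2 --level=10 --raid-devices=46 /dev/sd{a..z}1 /dev/sda{a..t}1
    mkswap /dev/md2 && swapon /dev/md2

    # the initial resync was too slow, so tear the array down again
    swapoff /dev/md2
    mdadm --stop /dev/md2

    # fall back to plain swap partitions on two drives
    mkswap /dev/sdc1 && swapon /dev/sdc1
    mkswap /dev/sdd1 && swapon /dev/sdd1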
13:21 <mark> Stopped all rsyncs to investigate ms5's sudden kswapd system cpu load [production]
07:57 <apergos> assigned snapshot1 internal ip in dns [production]
06:11 <apergos> moving snapshot1 to internal vlan etc [production]
04:15 <notpeter> pushing new dns w/ eixamanius as cname for hooper and yvon as new name for box that was previously eixamanius [production]
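The rename described here, plus the PTR record the 17:10 entry above notes was missed at first, amounts to zone changes along these lines; the address is a documentation placeholder:

    ; forward zone (wikimedia.org)
    yvon        IN  A      192.0.2.10   ; placeholder address
    eixamanius  IN  CNAME  hooper

    ; reverse zone -- the PTR that was initially forgotten (see the 17:10 entry)
    10          IN  PTR    yvon.wikimedia.org.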
04:15 <notpeter> stopping etherpad [production]
2011-04-07
21:43 <notpeter> removed a silly check for hooper that I made and restarted nagios [production]
19:06 <Ryan_Lane> switching openstack deb repo back to trunk, and upgrading packages on nova-controller, since we are likely to target cactus now [production]
15:40 <mark> Restarted rsyncs [production]
15:26 <mark> Created a test btrfs snapshot of /export on ms6 [production]
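Creating a test btrfs snapshot of /export is a one-liner, assuming /export is itself a subvolume (the snapshot name is an assumption):

    btrfs subvolume snapshot /export /export/snapshot-test
    # and later, to discard it:
    btrfs subvolume delete /export/snapshot-test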
15:12 <mark> Temporarily stopped the rsyncs on ms5 to test zfs send performance [production]
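Since ms5 served ZFS, the send-performance test presumably followed the usual snapshot/send/destroy pattern; the dataset name is an assumption:

    zfs snapshot export/upload@perftest
    time zfs send export/upload@perftest > /dev/null   # measure raw send throughput
    zfs destroy export/upload@perftest                 # cf. the 12:00 snapshot removal below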
13:56 <mark> Reenabled ms6 as backend on esams.upload squids [production]
13:11 <apergos> replaced ms4 in fstab on fenari with ms5 so we have thumbs mounted there [production]
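Swapping ms4 for ms5 in fenari's fstab is a one-field change in the NFS entry; the export path, mountpoint, and mount options below are assumptions:

    # before:  ms4:/export/thumbs  /mnt/thumbs  nfs  ro,bg,intr  0  0
    # after:
    ms5:/export/thumbs  /mnt/thumbs  nfs  ro,bg,intr  0  0

followed by a remount to pick up the new server:

    umount /mnt/thumbs && mount /mnt/thumbs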
12:08 <mark> Restarted rsyncs on ms5 [production]
12:07 <apergos> nginx conf file change to "thumb" added to puppet [production]
12:00 <mark> Removed the test snapshot on ms5 [production]