| 2012-12-18
  | 15:45 | <MaxSem> | Testing done, 40 concurrent processes hitting around the worst-case point kept the load on yttrium at 20%. Average response time ~430ms | [production] | 
            
  | 15:19 | <MaxSem> | Load-testing spatial search | [production] | 
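  A minimal sketch of the kind of concurrency test behind the two entries above, assuming a hypothetical GeoData search URL and using threads in place of the 40 separate processes mentioned in the log; the real query mix, target, and tooling are not recorded here.

      # Load-test sketch. Assumptions: the URL below is illustrative only, and
      # threads stand in for the "40 concurrent processes" from the log entry.
      import statistics
      import time
      from concurrent.futures import ThreadPoolExecutor
      from urllib.request import urlopen

      URL = ("https://en.wikipedia.org/w/api.php?action=query&list=geosearch"
             "&gscoord=37.78%7C-122.39&gsradius=10000&format=json")  # hypothetical query
      CONCURRENCY = 40          # matches the log entry
      REQUESTS_PER_WORKER = 25  # arbitrary for the sketch

      def worker(_):
          # Each worker issues its requests serially and records per-request latency.
          timings = []
          for _ in range(REQUESTS_PER_WORKER):
              start = time.monotonic()
              urlopen(URL, timeout=10).read()
              timings.append(time.monotonic() - start)
          return timings

      if __name__ == "__main__":
          with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
              samples = [t for ts in pool.map(worker, range(CONCURRENCY)) for t in ts]
          print(f"{len(samples)} requests, mean {statistics.mean(samples) * 1000:.0f} ms")

  A mean latency in the vicinity of the ~430ms noted at 15:45, with server load staying low, is the kind of result that entry summarizes.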
            
  | 14:14 | <hashar> | restarting Zuul with  https://gerrit.wikimedia.org/r/39082 so it starts voting Verified+2 | [production] | 
            
  | 14:13 | <^demon> | restarting gerrit on manganese to pick up VRIF+2 | [production] | 
            
  | 13:50 | <hashar> | restarted puppet on gallium (some apt-get process was a zombie) | [production] | 
            
  | 09:25 | <nikerabbit> | synchronized wmf-config/CommonSettings.php  'Bug 43075' | [production] | 
            
  | 06:30 | <andrewbogott> | switched all labs instances to mount /home via gluster on next reboot | [production] | 
            
  | 06:29 | <andrewbogott> | rsynced all labs homedirs to gluster volumes | [production] | 
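  A sketch of the two-step $HOME migration those two entries describe (copy the data first, then switch the mount so it takes effect on the next reboot), with hypothetical paths, volume name, and fstab handling; the actual tooling used is not recorded in the log.

      # Migration sketch. Assumptions: the paths, Gluster volume/server names and
      # the fstab line below are illustrative, not the real labs configuration.
      import subprocess

      NFS_HOME = "/home"                     # NFS-backed home directories (assumed)
      GLUSTER_TARGET = "/mnt/gluster-homes"  # already-mounted Gluster volume (assumed)
      FSTAB_LINE = "labstore.example.wmnet:/home-volume /home glusterfs defaults,_netdev 0 0\n"

      def rsync_homes():
          # Step 1 (06:29 entry): copy every home directory onto the Gluster volume,
          # preserving permissions, ownership, ACLs and extended attributes.
          subprocess.run(
              ["rsync", "-aHAX", "--numeric-ids", NFS_HOME + "/", GLUSTER_TARGET + "/"],
              check=True,
          )

      def switch_fstab():
          # Step 2 (06:30 entry): point /home at Gluster so the change takes effect
          # on the next reboot, leaving the currently running NFS mount untouched.
          with open("/etc/fstab") as f:
              kept = [line for line in f if " /home " not in line]
          kept.append(FSTAB_LINE)
          with open("/etc/fstab", "w") as f:
              f.writelines(kept)

      if __name__ == "__main__":
          rsync_homes()
          switch_fstab()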
            
  | 02:46 | <LocalisationUpdate> | completed (1.21wmf5) at Tue Dec 18 02:45:58 UTC 2012 | [production] | 
            
  | 02:25 | <LocalisationUpdate> | completed (1.21wmf6) at Tue Dec 18 02:25:22 UTC 2012 | [production] | 
            
  | 01:28 | <mutante> | fixing duplicate UID issue on stat1 for maryana | [production] | 
            
  | 00:57 | <LeslieCarr> | asw-c-eqiad unreachable due to lacp issue | [production] | 
            
  | 00:32 | <mutante> | fixing fenari permissions for gwicke.. (pre-puppet age UID) | [production] | 
            
  | 00:26 | <LeslieCarr> | starting upgrade of asw-c-eqiad.mgmt - connectivity to row c machines may be affected | [production] | 
            
  | 00:11 | <notpeter> | temp stopping slave on es1009 and es1010 for upcoming networking downtime | [production] | 
            
  
  | 2012-12-17
  | 23:12 | <LeslieCarr> | restarted pybal on lvs1001-1003 in order to restart their bgp peering | [production] | 
            
  | 22:44 | <notpeter> | taking fenari down for upgrade to precise (not upgrading, not reimaging) | [production] | 
            
  | 22:37 | <LeslieCarr> | cr1-eqiad being upgraded and rebooted | [production] | 
            
  | 22:30 | <Ryan_Lane> | labstore1 is locked up, powercycling | [production] | 
            
  | 22:29 | <Nemo_bis> | en.wiki job queue spiked from 1 to 3 million in the last 3 hours | [production] | 
            
  | 21:22 | <Ryan_Lane> | rebooting labstore4 | [production] | 
            
  | 21:15 | <reedy> | synchronized php-1.21wmf6/includes/filebackend/FSFileBackend.php | [production] | 
            
  | 21:11 | <Ryan_Lane> | rebooting labstore3 | [production] | 
            
  | 21:03 | <Ryan_Lane> | rebooting labstore2 | [production] | 
            
  | 20:48 | <Ryan_Lane> | restarting labstore1 | [production] | 
            
  | 20:33 | <cmjohnson1> | auth-dns update to add internal IPs for solr1-3 | [production] | 
            
  | 19:35 | <hashar> | regenerating Jenkins job mediawiki-core-install-sqlite | [production] | 
            
  | 19:12 | <reedy> | rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.21wmf6 | [production] | 
            
  | 18:16 | <andrewbogott> | beginning labs $HOME migration from NFS to gluster | [production] | 
            
  | 16:17 | <cmjohnson1> | auth-dns update adding mgmt for solr1-3, solr1001-3 and internal IPs for solr1001-3 | [production] | 
            
  | 13:13 | <hashar> | set Jenkins to use /bin/bash as the default shell (instead of /bin/sh) | [production] | 
            
  | 04:02 | <paravoid> | killall -9 convert on imagescalers; uploading 120px generated thumbnail directly to swift | [production] | 
            
  | 02:50 | <LocalisationUpdate> | completed (1.21wmf5) at Mon Dec 17 02:50:11 UTC 2012 | [production] | 
            
  | 02:27 | <LocalisationUpdate> | completed (1.21wmf6) at Mon Dec 17 02:27:34 UTC 2012 | [production] |