2010-08-11

21:03 <jeluf> synchronized php-1.5/wmf-config/InitialiseSettings.php '24570 - Request' [production]
20:42 <mark> Fixed ganglia mess on ms1 [production]
20:37 <mark> Started rsync of ms2:/a to ms1:/a [production]
20:32 <jeluf> synchronized php-1.5/wmf-config/InitialiseSettings.php '24685 - Author, Index, Page namespace for id.wikisource' [production]
20:30 <mark> FLUSH TABLES WITH READ LOCK on ms2 [production]
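The 20:30 and 20:37 entries above are the standard consistent-copy pattern: quiesce writes with FLUSH TABLES WITH READ LOCK, rsync the data, then release the lock. A minimal dry-run sketch of that sequence (hosts and path taken from the log; the helper itself is hypothetical, and note that the read lock only holds while the session that issued it stays open, so in practice the lock must be held in one open connection for the whole rsync):

```python
def consistent_copy_plan(src_host, dst_host, path):
    """Return the command sequence for a consistent copy (sketch only).

    FLUSH TABLES WITH READ LOCK is released when the issuing session
    exits, so `mysql -e` as shown here is illustrative; a real run keeps
    one connection open across the rsync.
    """
    return [
        ["mysql", "-h", src_host, "-e", "FLUSH TABLES WITH READ LOCK"],
        ["rsync", "-a", f"{src_host}:{path}/", f"{dst_host}:{path}/"],
        ["mysql", "-h", src_host, "-e", "UNLOCK TABLES"],
    ]

plan = consistent_copy_plan("ms2", "ms1", "/a")
```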
            
20:27 <mark> Readded spare /dev/mdak1 to /dev/md1 on ms1. Why do spares go missing all the time... [production]
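A missing array member like the one re-added above shows up in /proc/mdstat as a gap (`_`) in the status bitmap, and `mdadm /dev/md1 --add <device>` is the usual fix. A small sketch of spotting degraded md arrays by parsing mdstat-style text (the sample output below is made up for the example):

```python
import re

# Fabricated /proc/mdstat-style sample: md1 is missing a member ([U_]).
SAMPLE = """\
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]
md1 : active raid1 sda2[0]
      4194240 blocks [2/1] [U_]
"""

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose status bitmap shows a missing member."""
    degraded, current = [], None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+) :", line)
        if m:
            current = m.group(1)                      # start of a new array stanza
        elif current and re.search(r"\[U*_+U*\]", line):
            degraded.append(current)                  # bitmap contains an '_' gap
    return degraded
```

Running `degraded_arrays(SAMPLE)` flags only `md1`, the array with the `[U_]` bitmap.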
            
20:26 <mark> Upgraded ms1 to Lucid, rebooted it [production]
20:26 <RobH> working on db16 [production]
20:24 <jeluf> synchronized php-1.5/wmf-config/InitialiseSettings.php '24719 - Extension' [production]
20:07 <midom> synchronized php-1.5/wmf-config/db.php [production]
20:07 <domas> promoted db5 to slave on s4 [production]
19:53 <mark> Upgrading ms1 to Lucid [production]
19:44 <mark> Readded missing spare drive to /dev/md1 on ms1 [production]
17:25 <robh> synchronized php-1.5/wmf-config/mc.php 'removed srv95 as it has temp warnings and is going to go away soon.' [production]
16:41 <RobH_dc> pulled network on srv110 and started wipe, byebye [production]
16:35 <RobH_dc> db19 eth1 disconnected per dc tasks [production]
16:02 <RobH_dc> working on ms1 [production]
15:22 <RobH_dc> bad disk replaced in db7, raid is currently rebuilding, system still online. [production]
15:10 <RobH_dc> pulled hdd5 from db7 for replacement [production]
15:02 <mark> Shutdown clematis for decommissioning [production]
14:18 <RobH> knsq11,knsq12,knsq13 are post os reinstall, pre squid deployment config, will finish them in a bit [production]
14:07 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Bug 24441 - Enable Rollback in Quechua Wikipedia' [production]
13:33 <RobH> knsq11-knsq13 coming down for reinstallation [production]
12:23 <Tim> deployed non-threaded version of imagemagick on all image scalers [production]
11:44 <tstarling> synchronized php-1.5/includes/media/Bitmap.php 'OMP_NUM_THREADS=1' [production]
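The 12:23 and 11:44 entries pin ImageMagick to a single OpenMP thread by exporting OMP_NUM_THREADS=1 into the environment of the spawned converter. A runnable sketch of that mechanism, with a child Python process standing in for `convert` so the example works anywhere:

```python
import os
import subprocess
import sys

# Copy the parent environment and force single-threaded OpenMP in the child.
env = dict(os.environ, OMP_NUM_THREADS="1")

# The child just echoes the variable back; a real scaler would exec
# `convert` here, and its OpenMP runtime would read the same variable.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OMP_NUM_THREADS'])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # -> 1
```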
            
11:21 <mark> Reconfigured wikimedia-lvs-realserver on hume, so wikimedia-task-appserver install succeeds [production]
11:19 <tstarling> synchronized php-1.5/includes/media/Bitmap.php 'reduced magick memory limit from 100M to 50M to stop hanging with vsize limit 300M' [production]
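ImageMagick exposes this kind of memory cap through its `-limit` resource options (or equivalently the MAGICK_MEMORY_LIMIT environment variable). The log does not show how Bitmap.php passed the limit, so this argv builder is only an illustration of the CLI form:

```python
def convert_cmd(src, dst, mem_mb=50):
    """Build a `convert` invocation capped by ImageMagick's own resource
    limits (illustrative; the 50M value mirrors the log entry)."""
    return [
        "convert",
        "-limit", "memory", f"{mem_mb}MB",    # pixel cache held in RAM
        "-limit", "map", f"{mem_mb * 2}MB",   # memory-mapped spillover cap
        src, dst,
    ]

cmd = convert_cmd("page.pdf", "thumb.png")
```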
            
10:46 <mark> Removed pattern check from nagios check_http [production]
09:42 <tstarling> synchronized php-1.5/wmf-config/CommonSettings.php [production]
09:38 <tstarling> synchronized php-1.5/wmf-config/CommonSettings.php [production]
09:35 <Tim> rebooting srv223, went OOM and mostly died [production]
09:32 <tstarling> synchronized php-1.5/includes/media/Bitmap.php 'temporary patch to stop scalers going OOM' [production]
09:19 <Tim> temporarily increased memory limit on the image scalers, since the new convert tends to hang when it runs out of memory instead of crashing nicely [production]
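A converter that hangs on OOM rather than crashing, as the 09:19 entry describes, is the worst failure mode for a scaler pool, because worker slots stay wedged. One generic defense (not necessarily what was deployed here) is a wall-clock timeout on the child process; a runnable sketch, with a sleeping Python child standing in for a wedged `convert`:

```python
import subprocess
import sys

# The child simulates a hung converter; timeout= turns the hang into a
# catchable exception and kills the child.
try:
    subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(30)"],
        timeout=1,
    )
    result = "completed"
except subprocess.TimeoutExpired:
    result = "killed after timeout"

print(result)
```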
            
09:17 <tstarling> synchronized php-1.5/wmf-config/CommonSettings.php 'more memory for image scalers' [production]
08:56 <Tim> upgrading imagemagick on image scalers to 6.6.2.6-1wm1, package recently committed to svn [production]
02:48 <Tim> on techblog, disabled WP_DEBUG since it was messing up the admin panels with E_NOTICE messages [production]
02:42 <Tim> disabled WP-SpamFree on techblog due to bug 19540 [production]
  
2010-08-10

23:12 <Fred> upgraded Tridge to Lucid. Now rebooting. [production]
22:04 <RobH> knsq10 back online [production]
20:59 <RobH> knsq10 reinstalling [production]
20:44 <RobH> knsq9 online [production]
19:37 <RobH> handed off knsq8 to mark, reinstalling knsq9 [production]
19:02 <^demon> disabled svn post-commit hook for parser tests, long-since broken [production]
18:57 <mark> Stopping backend squid on amssq60 for testing [production]
15:24 <RobH> knsq8 reinstalled, not yet online, will push online shortly [production]
14:56 <mark> Setup RT on rt.wikimedia.org (streber) [production]
14:32 <RobH> knsq30 online and in cluster, knsq8 coming down for work [production]
14:18 <RobH> updated wordpress versions on blog.wikimedia.org and techblog.wikimedia.org [production]
13:35 <RobH> finishing install on knsq30 [production]
12:50 <Tim> installed schroot on stafford, for hardy versions of uupdate etc. [production]