2016-02-09
  | 17:07 | <godog> | start cassandra-a on restbase1007 with replace_address=10.64.0.230 | [production] | 
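The replace_address referenced above is Cassandra's standard mechanism for bootstrapping a replacement node onto a dead node's tokens; a minimal sketch of how the flag is usually passed via the JVM options, assuming a per-instance config under /etc/cassandra-a (path and service name are assumptions, not taken from this log):

    # Hand the old node's IP to the replacement instance as a JVM flag (assumed path):
    echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.64.0.230"' \
        >> /etc/cassandra-a/cassandra-env.sh
    service cassandra-a start
    # The flag is only needed for the first boot and is dropped once streaming completes.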
            
  | 16:57 | <thcipriani@mira> | Finished scap: SWAT: Clarify and expand messages mentioning loss of session data [[gerrit:269424]] (duration: 27m 36s) | [production] | 
            
  | 16:53 | <bblack> | rebooting cp1008/pinkunicorn for 4.4 kernel | [production] | 
            
  | 16:34 | <jynus> | reimage db2012 | [production] | 
            
  | 16:30 | <thcipriani@mira> | Started scap: SWAT: Clarify and expand messages mentioning loss of session data [[gerrit:269424]] | [production] | 
            
  | 16:18 | <thcipriani@mira> | Synchronized wmf-config: SWAT: Enable ArticlePlaceholder on test wikis [[gerrit:269399]] (duration: 01m 19s) | [production] | 
            
  | 16:15 | <thcipriani> | mw1037.eqiad.wmnet error during SWAT rsync: failed to set times on "/srv/mediawiki/.": Read-only file system (30) | [production] | 
            
  | 16:09 | <thcipriani@mira> | Synchronized wmf-config/InitialiseSettings.php: SWAT: Enable math data type on Wikidata and everywhere [[gerrit:269398]] (duration: 02m 31s) | [production] | 
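Entries of the form "Synchronized <path>: <message> (duration: ...)" come from scap's single-file sync run on the deployment host; a minimal sketch, assuming the sync-file form in use at the time (the exact command name has varied across scap releases):

    # Run from the deployment host (mira); the message is logged to the SAL automatically.
    sync-file wmf-config/InitialiseSettings.php 'SWAT: Enable math data type on Wikidata and everywhere [[gerrit:269398]]'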
            
  | 15:59 | <elukey> | puppet re-enabled on kafka1012 | [production] | 
            
  | 15:56 | <paravoid> | "power"cycling alsafi | [production] | 
            
  | 15:55 | <moritzm> | uploaded linux 4.4-1~wmf1 (jessie-wikimedia/experimental) to carbon | [production] | 
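Uploading to carbon means importing the built kernel packages into the local apt repository so the lvs hosts can install them; a minimal reprepro sketch, where the .changes filename is an assumption:

    # Import the signed build into the experimental component of jessie-wikimedia (assumed filename):
    reprepro -C experimental include jessie-wikimedia linux_4.4-1~wmf1_amd64.changes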
            
  | 15:47 | <_joe_> | re-removed the puppet facts for protactinium | [production] | 
            
  | 15:40 | <paravoid> | echo 1 > /proc/sys/net/ipv4/vs/schedule_icmp on lvs3001 | [production] | 
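The same IPVS toggle can be expressed through the sysctl interface; the persistence step below is an assumption, not something recorded in this log:

    # One-off toggle, equivalent to the echo above:
    sysctl -w net.ipv4.vs.schedule_icmp=1
    # Keeping it across reboots would typically mean a sysctl.d snippet (assumed filename):
    echo 'net.ipv4.vs.schedule_icmp = 1' > /etc/sysctl.d/70-ipvs-schedule-icmp.conf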
            
  | 15:36 | <elukey> | disabled puppet on kafka1012, temporarily changing kafka retention to purge some extra logs | [production] | 
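Temporarily shrinking retention on a Kafka topic while keeping puppet from reverting the change is typically a two-step operation; a minimal sketch, where the zookeeper address, topic name and retention value are assumptions:

    # Keep puppet from restoring the managed retention setting while the override is in place:
    puppet agent --disable 'temporary kafka retention change'
    # Shrink retention on the affected topic so older segments get purged (pre-0.11 kafka-topics syntax):
    kafka-topics.sh --zookeeper conf1001.eqiad.wmnet:2181 --alter \
        --topic webrequest_text --config retention.ms=43200000
    # Once the extra logs are gone, restore the setting and re-enable puppet:
    puppet agent --enable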
            
  | 15:17 | <cmjohnson1> | snapshot1002 mistakenly taken offline -- booting now | [production] | 
            
  | 15:15 | <paravoid> | upgrading lvs4001/4002 to linux 4.4.0 | [production] | 
            
  | 15:07 | <godog> | stop cassandra on restbase1007, cpu/mem upgrade and reimage | [production] | 
            
  | 14:59 | <paravoid> | upgrading lvs3001/3002 to linux 4.4.0 | [production] | 
            
  | 14:53 | <godog> | reboot ms-be1004, xfs hosed | [production] | 
            
  | 14:51 | <hashar> | Cutting branches 1.27.0-wmf.13 | [production] | 
            
  | 14:46 | <elukey> | re-enabled puppet on mc1004.eqiad | [production] | 
            
  | 14:45 | <bblack> | resuming cpNNNN rolling kernel reboots | [production] | 
            
  | 14:41 | <_joe_> | setting mw1026-1050 as inactive in the appservers pool (T126242) | [production] | 
            
  | 13:58 | <hashar> | finally shutting down Jenkins and restarting it | [production] | 
            
  | 13:51 | <hashar> | Restarting Jenkins. It cannot manage to add slaves | [production] | 
            
  | 13:15 | <paravoid> | upgrading lvs1001/lvs1007/lvs1002/lvs1008/lvs1003/lvs1009 to 4.4.0 | [production] | 
            
  | 13:11 | <akosiaris> | reboot serpens to apply memory increase of 2G | [production] | 
            
  | 13:07 | <paravoid> | installing linux 4.4.0 on lvs1001 | [production] | 
            
  | 13:01 | <hashar> | Jenkins disabled again :( | [production] | 
            
  | 12:53 | <akosiaris> | reboot seaborgium to apply memory increase of 2G | [production] | 
            
  | 12:47 | <hashar> | Updated the faulty script that caused 'php' to loop infinitely. Jenkins back up. | [production] | 
            
  | 12:36 | <hashar> | Jenkins no longer accepts new jobs until the slaves are fixed :/ | [production] | 
            
  | 12:33 | <hashar> | all CI slaves looping to death because of a php loop | [production] | 
            
  | 11:43 | <paravoid> | upgrading lvs2001, lvs2002, lvs2003 to kernel 4.4.0 | [production] | 
            
  | 11:36 | <paravoid> | reverting lvs2005 to 3.19 and rebooting, test is over and was successful | [production] | 
            
  | 11:19 | <paravoid> | stopping pybal on lvs2002 | [production] | 
            
  | 11:05 | <paravoid> | installing linux-image-4.4.0 on lvs2005 and rebooting for testing | [production] | 
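The per-host flow implied by these lvs entries (depool, install the new kernel, reboot) might look like the following sketch; the package name and the pybal stop step are assumptions rather than commands taken from the log:

    service pybal stop                                # stop the balancer before taking the host down
    apt-get update
    apt-get -y install linux-image-4.4.0-1-amd64      # assumed package name from jessie-wikimedia/experimental
    reboot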
            
  | 10:53 | <apergos> | salt minions on labs instances that respond to labcontrol1001 will be coming back up over the next 1/2 hour as puppet runs (salt master key fixes) | [production] | 
            
  | 10:45 | <elukey> | disabled puppet, redis and memcached on mc1004 for jessie migration | [production] | 
            
  | 10:33 | <_joe_> | pybal updated everywhere | [production] | 
            
  | 10:32 | <gehel> | elasticsearch codfw: cleanup leftover logs /var/log/elasticsearch/*.[2-7] | [production] | 
            
  | 10:24 | <gehel> | elasticsearch eqiad: cleanup leftover logs /var/log/elasticsearch/*.[2-7] | [production] | 
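The cleanup noted for codfw and eqiad amounts to deleting the rotated copies matching that glob; a minimal sketch, assuming the numbered files are safe to remove outright:

    # Remove the leftover numbered rotations (2 through 7) under /var/log/elasticsearch:
    rm -f /var/log/elasticsearch/*.[2-7]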
            
  | 10:09 | <_joe_> | upgrading pybal on active nodes in esams and eqiad | [production] | 
            
  | 10:04 | <_joe_> | depooling elastic1021.eqiad.wmnet as RAM has failed | [production] | 
            
  | 09:56 | <jynus> | running table engine conversion script on db1069 (potential small lag on labs for 1 day) | [production] | 
            
  | 09:40 | <moritzm> | restarted cassandra-a service on praseodymium | [production] | 
            
  | 09:21 | <ema> | restarted hhvm on mw1132 | [production] | 
            
  | 08:49 | <_joe_> | installing the new pybal package on the backup nodes in esams and eqiad | [production] | 
            
  | 08:23 | <moritzm> | restarted cassandra-a service on praseodymium | [production] | 
            
  | 07:11 | <_joe_> | manually touched (with -h) the wmf-config/PrivateSettings.php symlink on all mw* hosts | [production] |
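touch -h updates the timestamps of the symlink itself rather than of its target; run fleet-wide the way the salt entries earlier in this log suggest, it could look like the following sketch (the full path and the mw* target glob are assumptions):

    # From the salt master: bump the symlink's own mtime on every mw* minion.
    salt 'mw*' cmd.run 'touch -h /srv/mediawiki/wmf-config/PrivateSettings.php'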