2019-01-31
| 17:44 | <jynus> | running alter table on metawiki.revision_actor_temp, trying to fix TokuDB horrible bugs | [production] |
| 15:54 | <jynus> | stop, upgrade and restart db1117 | [production] |
| 13:34 | <mvolz@deploy1001> | scap-helm zotero finished | [production] |
| 13:34 | <mvolz@deploy1001> | scap-helm zotero cluster codfw completed | [production] |
| 13:34 | <mvolz@deploy1001> | scap-helm zotero upgrade production -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] | [production] |
| 13:31 | <mvolz@deploy1001> | scap-helm zotero finished | [production] |
| 13:31 | <mvolz@deploy1001> | scap-helm zotero cluster eqiad completed | [production] |
| 13:31 | <mvolz@deploy1001> | scap-helm zotero upgrade production -f zotero-values-eqiad.yaml stable/zotero [namespace: zotero, clusters: eqiad] | [production] |
| 13:19 | <mvolz@deploy1001> | scap-helm zotero finished | [production] |
| 13:19 | <mvolz@deploy1001> | scap-helm zotero cluster staging completed | [production] |
| 13:19 | <mvolz@deploy1001> | scap-helm zotero upgrade staging -f zotero-values-staging.yaml --version=0.0.1 stable/zotero [namespace: zotero, clusters: staging] | [production] |
| 13:18 | <mvolz@deploy1001> | scap-helm zotero upgrade staging -f zotero-values-staging.yaml stable/zotero [namespace: zotero, clusters: staging] | [production] |
| 12:54 | <jynus> | stop, upgrade and restart db2044 | [production] |
| 12:12 | <jynus> | apply new grants to m5-master with replication T214740 | [production] |
| 11:30 | <arturo> | T215012 icinga downtime cloudvirt1015 for 4h while investigating issues | [production] |
| 11:24 | <arturo> | T215012 reboot cloudvirt1015 | [production] |
| 11:24 | <jynus> | restart eventstreams on scb1002,3,4 | [production] |
| 11:22 | <jynus> | restart eventstreams on scb1001 | [production] |
| 10:22 | <jynus> | resetting to defaults innodb consistency options for db2048 T188327 | [production] |
| 10:00 | <jynus> | restarting pdfrender on scb1002,3,4 | [production] |
| 09:54 | <jynus> | restarting pdfrender on scb1001 | [production] |
| 02:01 | <gtirloni> | T215004 restarted gerrit (using 1200% cpu, 71% mem) | [production] |
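A note on the 17:44 entry above: the log records only that an ALTER TABLE was run on metawiki.revision_actor_temp to work around TokuDB bugs, not the exact statement. A minimal sketch of what such a fix might look like, assuming the goal was to move the table off TokuDB and onto InnoDB (an assumption, not something the log confirms):

    -- Hypothetical: the actual statement is not recorded in the log.
    -- Converting the table away from TokuDB is assumed here.
    ALTER TABLE metawiki.revision_actor_temp ENGINE = InnoDB;

    -- Verify which storage engine the table now uses.
    SELECT table_name, engine
      FROM information_schema.tables
     WHERE table_schema = 'metawiki'
       AND table_name = 'revision_actor_temp';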
            
  
2019-01-30
| 20:28 | <bawolff_> | reset 2FA@wikitech for [[User:deigo]] | [production] |
| 18:25 | <ladsgroup@deploy1001> | Finished deploy [ores/deploy@ad160b0]: (no justification provided) (duration: 12m 46s) | [production] |
| 18:12 | <ladsgroup@deploy1001> | Started deploy [ores/deploy@ad160b0]: (no justification provided) | [production] |
| 18:03 | <jynus> | reducing innodb consistency options for db2048 T188327 | [production] |
| 17:36 | <XioNoX> | deactivate/activate cr2-esams:xe-0/1/3 | [production] |
| 17:28 | <akosiaris> | restart pdfrender on scb1003, scb1004 | [production] |
| 16:19 | <akosiaris> | restart proton on proton1002 | [production] |
| 15:52 | <jynus> | stop, upgrade and restart db2037 | [production] |
| 15:24 | <jynus> | stop, upgrade and restart db2042 | [production] |
| 14:27 | <jynus> | stop, upgrade and restart db2034, this will cause some lag on x1-codfw | [production] |
| 13:53 | <jynus> | stop, upgrade and restart db2069 | [production] |
| 11:20 | <jynus> | stop, upgrade and restart db2045, this will cause some lag on s8-codfw | [production] |
| 10:54 | <jynus> | stop, upgrade and restart db2079 | [production] |
| 10:33 | <jynus> | stop, upgrade and restart db2039, this will cause some lag on s6-codfw | [production] |
| 10:03 | <jynus> | stop, upgrade and restart db2052, this will cause some lag on s5-codfw | [production] |
| 09:31 | <jynus> | stop, upgrade and restart db2089 (s5/s6) | [production] |
| 08:58 | <jynus> | stop, upgrade and restart db2051, this will cause some lag on s4-codfw | [production] |
| 08:44 | <jynus> | stop, upgrade and restart db2090 | [production] |
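For context on the 18:03 entry ("reducing innodb consistency options for db2048") and the matching 10:22 "resetting to defaults" entry from 2019-01-31: this pattern usually means relaxing durability settings on a replica while a long maintenance task runs, then restoring them afterwards. The exact variables are not recorded; a hedged sketch using settings commonly adjusted for this purpose (assumed, not confirmed by the log):

    -- Assumed variables; the log does not name them.
    -- Relax durability while the maintenance runs (the "reducing" step):
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;
    SET GLOBAL sync_binlog = 0;

    -- Restore fully durable settings afterwards (the "resetting to defaults" step):
    SET GLOBAL innodb_flush_log_at_trx_commit = 1;
    SET GLOBAL sync_binlog = 1;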