2019-01-31
      
    
| 17:44 | <jynus> | running alter table on metawiki.revision_actor_temp, trying to fix horrible TokuDB bugs | [production] |
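
The entry names only the table; fixing TokuDB trouble typically means rebuilding the table onto another storage engine. A minimal sketch, assuming the goal was to move the table off TokuDB and onto InnoDB (the host placeholder and the target engine are assumptions, not from the log):

    # hypothetical: rebuild revision_actor_temp away from TokuDB
    # <db-host> stands in for the actual database master
    mysql -h <db-host> metawiki -e "ALTER TABLE revision_actor_temp ENGINE=InnoDB;"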
            
  | 15:54 | <jynus> | stop, upgrade and restart db1117 | [production] | 
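
"Stop, upgrade and restart" entries like this one recur throughout the log. A minimal sketch of the pattern, assuming a MariaDB server managed with systemd and apt (package and service names are assumptions; WMF uses its own packaging and automation for this):

    # hypothetical sketch of one maintenance pass on a database host
    ssh db1117
    sudo systemctl stop mariadb                          # stop the server cleanly
    sudo apt-get update
    sudo apt-get install --only-upgrade mariadb-server   # pull in the new package
    sudo systemctl start mariadb                         # restart and let replication catch up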
            
  | 13:34 | <mvolz@deploy1001> | scap-helm zotero finished | [production] | 
            
  | 13:34 | <mvolz@deploy1001> | scap-helm zotero cluster codfw completed | [production] | 
            
  | 13:34 | <mvolz@deploy1001> | scap-helm zotero upgrade production -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] | [production] | 
            
  | 13:31 | <mvolz@deploy1001> | scap-helm zotero finished | [production] | 
            
  | 13:31 | <mvolz@deploy1001> | scap-helm zotero cluster eqiad completed | [production] | 
            
  | 13:31 | <mvolz@deploy1001> | scap-helm zotero upgrade production -f zotero-values-eqiad.yaml stable/zotero [namespace: zotero, clusters: eqiad] | [production] | 
            
  | 13:19 | <mvolz@deploy1001> | scap-helm zotero finished | [production] | 
            
  | 13:19 | <mvolz@deploy1001> | scap-helm zotero cluster staging completed | [production] | 
            
  | 13:19 | <mvolz@deploy1001> | scap-helm zotero upgrade staging -f zotero-values-staging.yaml --version=0.0.1 stable/zotero [namespace: zotero, clusters: staging] | [production] | 
            
  | 13:18 | <mvolz@deploy1001> | scap-helm zotero upgrade staging -f zotero-values-staging.yaml stable/zotero [namespace: zotero, clusters: staging] | [production] | 
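
Read bottom-up, the zotero entries above are one deployment rolled through staging, then eqiad, then codfw. Collected into execution order (the commands are taken verbatim from the log; the cluster targeting shown in brackets comes from scap-helm's own configuration and is not repeated here):

    scap-helm zotero upgrade staging -f zotero-values-staging.yaml stable/zotero
    scap-helm zotero upgrade staging -f zotero-values-staging.yaml --version=0.0.1 stable/zotero
    scap-helm zotero upgrade production -f zotero-values-eqiad.yaml stable/zotero
    scap-helm zotero upgrade production -f zotero-values-codfw.yaml stable/zotero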
            
  | 12:54 | <jynus> | stop, upgrade and restart db2044 | [production] | 
            
  | 12:12 | <jynus> | apply new grants to m5-master with replication T214740 | [production] | 
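
Grants of this kind are ordinary SQL run once on the master and carried to the replicas by replication. A minimal sketch with invented names (the real grants are in T214740 and are not reproduced here):

    # hypothetical: user, host pattern and database are placeholders only
    mysql -h m5-master -e "GRANT SELECT, INSERT, UPDATE, DELETE ON exampledb.* TO 'example_user'@'10.64.%' IDENTIFIED BY '********';"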
            
  | 11:30 | <arturo> | T215012 icinga downtime cloudvirt1015 for 4h while investigating issues | [production] | 
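
Scheduling a fixed four-hour downtime keeps Icinga from alerting while the host is investigated. A minimal sketch using Icinga's external command interface (the command-file path is an assumption, and WMF normally drives this through wrapper tooling rather than by hand):

    # hypothetical: schedule a fixed 4h downtime for cloudvirt1015
    now=$(date +%s); end=$((now + 4*3600))
    printf '[%s] SCHEDULE_HOST_DOWNTIME;cloudvirt1015;%s;%s;1;0;14400;arturo;T215012 investigating issues\n' \
        "$now" "$now" "$end" >> /var/lib/icinga/rw/icinga.cmd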
            
  | 11:24 | <arturo> | T215012 reboot cloudvirt1015 | [production] | 
            
  | 11:24 | <jynus> | restart eventstreams on scb1002,3,4 | [production] | 
            
  | 11:22 | <jynus> | restart eventstreams on scb1001 | [production] | 
            
  | 10:22 | <jynus> | resetting to defaults innodb consistency options for db2048 T188327 | [production] | 
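
This entry undoes the "reducing innodb consistency options" change logged for db2048 the previous day (18:03 on 2019-01-30). Which variables were touched is not recorded; a minimal sketch assuming the usual durability pair:

    # hypothetical: relax durability for heavy maintenance, then restore it
    mysql -h db2048 -e "SET GLOBAL innodb_flush_log_at_trx_commit = 0; SET GLOBAL sync_binlog = 0;"
    # ... maintenance runs ...
    mysql -h db2048 -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1; SET GLOBAL sync_binlog = 1;"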
            
  | 10:00 | <jynus> | restarting pdfrender on scb1002,3,4 | [production] | 
            
  | 09:54 | <jynus> | restarting pdfrender on scb1001 | [production] | 
            
  | 02:01 | <gtirloni> | T215004 restarted gerrit (using 1200% cpu, 71% mem) | [production] | 
            
  
2019-01-30
      
    
  | 20:28 | <bawolff_> | reset 2FA@wikitech for [[User:deigo]] | [production] | 
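
The log does not say how the reset was done; a minimal sketch of doing it from the command line, assuming the OATHAuth extension's maintenance script (the script name and wiki ID here are from memory and may not match what was actually used):

    # hypothetical: clear the user's OATH (2FA) enrolment on wikitech (labswiki)
    mwscript extensions/OATHAuth/maintenance/disableOATHAuthForUser.php --wiki=labswiki 'Deigo'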
            
  | 18:25 | <ladsgroup@deploy1001> | Finished deploy [ores/deploy@ad160b0]: (no justification provided) (duration: 12m 46s) | [production] | 
            
  | 18:12 | <ladsgroup@deploy1001> | Started deploy [ores/deploy@ad160b0]: (no justification provided) | [production] | 
            
  | 18:03 | <jynus> | reducing innodb consistency options for db2048 T188327 | [production] | 
            
  | 17:36 | <XioNoX> | deactivate/activate cr2-esams:xe-0/1/3 | [production] | 
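
Deactivating and re-activating an interface's configuration is the Junos way to bounce it without deleting anything. A sketch of the steps (shown as comments since they run in the router CLI, not a shell; the router FQDN is an assumption):

    ssh cr2-esams.wikimedia.org
    # then, in the Junos CLI:
    #   configure
    #   deactivate interfaces xe-0/1/3
    #   commit
    #   activate interfaces xe-0/1/3
    #   commit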
            
  | 17:28 | <akosiaris> | restart pdfrender on scb1003, scb1004 | [production] | 
            
  | 16:19 | <akosiaris> | restart proton on proton1002 | [production] | 
            
  | 15:52 | <jynus> | stop, upgrade and restart db2037 | [production] | 
            
  | 15:24 | <jynus> | stop, upgrade and restart db2042 | [production] | 
            
  | 14:27 | <jynus> | stop, upgrade and restart db2034, this will cause some lag on x1-codfw | [production] | 
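
When a codfw replica is stopped for an upgrade it falls behind its master and needs time to catch up after the restart; that is the lag these entries warn about. A minimal sketch for watching it by hand (host taken from the entry above; at WMF this is normally read from monitoring instead):

    # hypothetical: poll replication lag on db2034 until it returns to 0
    while true; do
        mysql -h db2034 -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master
        sleep 10
    done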
            
  | 13:53 | <jynus> | stop, upgrade and restart db2069 | [production] | 
            
  | 11:20 | <jynus> | stop, upgrade and restart db2045, this will cause some lag on s8-codfw | [production] | 
            
  | 10:54 | <jynus> | stop, upgrade and restart db2079 | [production] | 
            
  | 10:33 | <jynus> | stop, upgrade and restart db2039, this will cause some lag on s6-codfw | [production] | 
            
  | 10:03 | <jynus> | stop, upgrade and restart db2052, this will cause some lag on s5-codfw | [production] | 
            
  | 09:31 | <jynus> | stop, upgrade and restart db2089 (s5/s6) | [production] | 
            
  | 08:58 | <jynus> | stop, upgrade and restart db2051, this will cause some lag on s4-codfw | [production] | 
            
  | 08:44 | <jynus> | stop, upgrade and restart db2090 | [production] | 
            
  
2019-01-29
      
    
  | 21:52 | <jijiki> | Depooling thumbor2002 due to disc failure - T214813 | [production] | 
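
Depooling takes the host out of the load balancer so traffic stops reaching it while the failed disk is dealt with. A minimal sketch using conftool (the selector and FQDN are assumptions):

    # hypothetical: mark thumbor2002 as not pooled
    sudo confctl select 'name=thumbor2002.codfw.wmnet' set/pooled=no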
            
  | 16:51 | <arturo> | T214499 update Netbox status for cloudvirt1023/1024/1025/1026/1027 from PLANNED to ACTIVE. These servers are actually providing services already. | [production] | 
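
The same status change can be made through Netbox's REST API instead of the web UI. A minimal sketch for one device (URL, device ID, token and the status value format are all assumptions; status values differ between Netbox versions):

    # hypothetical: mark one cloudvirt device as active in Netbox
    curl -s -X PATCH "https://netbox.example.org/api/dcim/devices/1234/" \
        -H "Authorization: Token $NETBOX_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"status": "active"}'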
            
  | 10:05 | <jynus> | stop, upgrade and restart db2065 | [production] | 
            
  | 09:28 | <jynus> | stop, upgrade and restart db2058 | [production] | 
            
  | 09:12 | <jynus> | stopping, upgrading and restarting db2035, this will cause lag on codfw-s2 | [production] | 
            
  | 08:58 | <jynus> | stop, upgrade and restart db2041 | [production] | 
            
  | 08:38 | <jynus> | stop, upgrade and restart db2056 | [production] | 
            
  | 08:17 | <jynus@deploy1001> | Synchronized wmf-config/db-eqiad.php: Repool db1114 after crash (duration: 00m 52s) | [production] |
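
The "Synchronized" line is what scap logs for a config deploy: the weight for db1114 is edited back into wmf-config/db-eqiad.php in the mediawiki-config repository and the file is then pushed to the fleet. The sync step, reconstructed from the entry above (file path and log message are from the log):

    scap sync-file wmf-config/db-eqiad.php 'Repool db1114 after crash'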