2021-01-27
  | 12:19 | <awight@deploy1001> | Synchronized php-1.36.0-wmf.28/extensions/CodeMirror: Backport: [[gerrit:658815|Improve matchbrackets performance when moving the cursor (T270317)]] (duration: 01m 14s) | [production] | 
            
  | 12:17 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1143 (re)pooling @ 75%: After upgrading the kernel', diff saved to https://phabricator.wikimedia.org/P13988 and previous config saved to /var/cache/conftool/dbconfig/20210127-121756-root.json | [production] | 
            
  | 12:02 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1143 (re)pooling @ 50%: After upgrading the kernel', diff saved to https://phabricator.wikimedia.org/P13987 and previous config saved to /var/cache/conftool/dbconfig/20210127-120253-root.json | [production] | 
            
  | 11:47 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1143 (re)pooling @ 25%: After upgrading the kernel', diff saved to https://phabricator.wikimedia.org/P13986 and previous config saved to /var/cache/conftool/dbconfig/20210127-114749-root.json | [production] | 
            
  | 11:32 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1143 (re)pooling @ 10%: After upgrading the kernel', diff saved to https://phabricator.wikimedia.org/P13985 and previous config saved to /var/cache/conftool/dbconfig/20210127-113245-root.json | [production] | 
            
  | 10:57 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Depool db1143 for kernel upgrade and enablement of report_host', diff saved to https://phabricator.wikimedia.org/P13984 and previous config saved to /var/cache/conftool/dbconfig/20210127-105735-marostegui.json | [production] | 
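The db1143 entries above follow the standard staged-repool pattern: depool for maintenance, then restore traffic in steps (10% → 25% → 50% → 75%) while watching the host. A minimal dry-run sketch of that pattern — the `dbctl` subcommands mirror the log messages, but the wrapper function and the echo-only dry run are illustrative assumptions, not the actual tooling:

```shell
set -euo pipefail

repool_in_stages() {
  # Gradually restore a database host to service after maintenance,
  # mirroring the 10% -> 25% -> 50% -> 75% -> 100% steps in the log.
  # Dry run: commands are printed, not executed.
  local host=$1 reason=$2
  local pct
  for pct in 10 25 50 75 100; do
    echo "dbctl instance ${host} pool -p ${pct}"
    echo "dbctl config commit -m '${host} (re)pooling @ ${pct}%: ${reason}'"
    # In real use: wait between stages and watch replication lag / errors.
  done
}

repool_in_stages db1143 "After upgrading the kernel"
```

The gradual ramp keeps a freshly rebooted replica from taking full query load with a cold buffer pool.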
            
  | 10:36 | <elukey@cumin1001> | END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host schema2004.codfw.wmnet | [production] | 
            
  | 10:23 | <elukey@cumin1001> | START - Cookbook sre.hosts.reboot-single for host schema2004.codfw.wmnet | [production] | 
            
  | 10:23 | <elukey@cumin1001> | END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host schema2003.codfw.wmnet | [production] | 
            
  | 10:20 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Pool db1160 with final weight T258361', diff saved to https://phabricator.wikimedia.org/P13982 and previous config saved to /var/cache/conftool/dbconfig/20210127-102042-marostegui.json | [production] | 
            
  | 10:18 | <elukey@cumin1001> | START - Cookbook sre.hosts.reboot-single for host schema2003.codfw.wmnet | [production] | 
            
  | 10:17 | <elukey@cumin1001> | END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host schema1004.eqiad.wmnet | [production] | 
            
  | 10:15 | <elukey@cumin1001> | START - Cookbook sre.hosts.reboot-single for host schema1004.eqiad.wmnet | [production] | 
            
  | 10:14 | <elukey@cumin1001> | END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host schema1003.eqiad.wmnet | [production] | 
            
  | 10:12 | <elukey@cumin1001> | START - Cookbook sre.hosts.reboot-single for host schema1003.eqiad.wmnet | [production] | 
            
  | 10:05 | <elukey> | reboot matomo1002 for kernel upgrades | [production] | 
            
  | 10:02 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Pool db1160 with more weight T258361', diff saved to https://phabricator.wikimedia.org/P13981 and previous config saved to /var/cache/conftool/dbconfig/20210127-100220-marostegui.json | [production] | 
            
  | 09:38 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Pool db1160 with more weight T258361', diff saved to https://phabricator.wikimedia.org/P13980 and previous config saved to /var/cache/conftool/dbconfig/20210127-093802-marostegui.json | [production] | 
            
  | 09:19 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Pool db1160 with more weight T258361', diff saved to https://phabricator.wikimedia.org/P13979 and previous config saved to /var/cache/conftool/dbconfig/20210127-091909-marostegui.json | [production] | 
            
  | 09:04 | <jbond42> | deploy fix to enable-puppet | [production] | 
            
  | 09:03 | <godog> | swift codfw-prod decrease SSD weight for ms-be20[16-27] - T272837 | [production] | 
            
  | 08:36 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Pool db1160 with more weight T258361', diff saved to https://phabricator.wikimedia.org/P13978 and previous config saved to /var/cache/conftool/dbconfig/20210127-083618-marostegui.json | [production] | 
            
  | 08:29 | <marostegui> | Stop mysql on db1089 to clone db1169 T258361 | [production] | 
            
  | 08:28 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Depool db1089 to clone db1169 T258361', diff saved to https://phabricator.wikimedia.org/P13976 and previous config saved to /var/cache/conftool/dbconfig/20210127-082826-marostegui.json | [production] | 
            
  | 08:11 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Repool db1121', diff saved to https://phabricator.wikimedia.org/P13975 and previous config saved to /var/cache/conftool/dbconfig/20210127-081150-marostegui.json | [production] | 
            
  | 08:07 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Depool db1121', diff saved to https://phabricator.wikimedia.org/P13974 and previous config saved to /var/cache/conftool/dbconfig/20210127-080753-marostegui.json | [production] | 
            
  | 08:06 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1085 (re)pooling @ 100%: After moving clouddb replicas', diff saved to https://phabricator.wikimedia.org/P13973 and previous config saved to /var/cache/conftool/dbconfig/20210127-080645-root.json | [production] | 
            
  | 07:57 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Give db1160 some more small weight T258361', diff saved to https://phabricator.wikimedia.org/P13972 and previous config saved to /var/cache/conftool/dbconfig/20210127-075715-marostegui.json | [production] | 
            
  | 07:51 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1085 (re)pooling @ 75%: After moving clouddb replicas', diff saved to https://phabricator.wikimedia.org/P13971 and previous config saved to /var/cache/conftool/dbconfig/20210127-075142-root.json | [production] | 
            
  | 07:36 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1085 (re)pooling @ 50%: After moving clouddb replicas', diff saved to https://phabricator.wikimedia.org/P13970 and previous config saved to /var/cache/conftool/dbconfig/20210127-073638-root.json | [production] | 
            
  | 07:26 | <elukey> | powercycle analytics1073 - kernel soft lock up bug registered, os needs a reboot | [production] | 
            
  | 07:21 | <marostegui@cumin1001> | dbctl commit (dc=all): 'db1085 (re)pooling @ 25%: After moving clouddb replicas', diff saved to https://phabricator.wikimedia.org/P13969 and previous config saved to /var/cache/conftool/dbconfig/20210127-072135-root.json | [production] | 
            
  | 07:05 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Depool db1085 T272008', diff saved to https://phabricator.wikimedia.org/P13968 and previous config saved to /var/cache/conftool/dbconfig/20210127-070502-marostegui.json | [production] | 
            
  | 06:57 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Give db1160 some more small weight T258361', diff saved to https://phabricator.wikimedia.org/P13967 and previous config saved to /var/cache/conftool/dbconfig/20210127-065715-marostegui.json | [production] | 
            
  | 06:39 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Give db1160 some more small weight T258361', diff saved to https://phabricator.wikimedia.org/P13966 and previous config saved to /var/cache/conftool/dbconfig/20210127-063930-marostegui.json | [production] | 
            
  | 06:13 | <marostegui@cumin1001> | dbctl commit (dc=all): 'Pool db1160 with minimal weight T258361', diff saved to https://phabricator.wikimedia.org/P13965 and previous config saved to /var/cache/conftool/dbconfig/20210127-061336-marostegui.json | [production] | 
            
  | 06:03 | <twentyafterfour> | phabricator appears to be up and running fine | [production] | 
            
  | 06:03 | <twentyafterfour> | phabricator is read-write | [production] | 
            
  | 06:01 | <twentyafterfour> | phabricator is read-only | [production] | 
            
  | 06:00 | <marostegui> | m3 master restart, phabricator will go on read only - T272596 | [production] | 
            
  | 05:50 | <marostegui> | Deploy schema change on s3 T270055 | [production] | 
            
  | 03:48 | <ryankemper> | (Restarted `wdqs-blazegraph` on `wdqs1012`) | [production] | 
            
  | 02:24 | <ebernhardson@deploy1001> | Finished deploy [wikimedia/discovery/analytics@9c85a21]: transfer_to_es: start date 2020 -> 2021 (duration: 02m 59s) | [production] | 
            
  | 02:21 | <ebernhardson@deploy1001> | Started deploy [wikimedia/discovery/analytics@9c85a21]: transfer_to_es: start date 2020 -> 2021 | [production] | 
            
  | 01:58 | <ryankemper> | [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` | [production] | 
            
  | 01:57 | <ryankemper> | [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` | [production] | 
            
  | 01:57 | <ryankemper> | [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` | [production] | 
            
  | 01:56 | <ryankemper@deploy1001> | Finished deploy [wdqs/wdqs@6c6b2cb]: 0.3.61 (duration: 07m 50s) | [production] | 
            
  | 01:50 | <ryankemper> | [WDQS Deploy] Tests passing following deploy of `0.3.61` on canary `wdqs1003`; proceeding to rest of fleet | [production] | 
            
  | 01:48 | <ryankemper@deploy1001> | Started deploy [wdqs/wdqs@6c6b2cb]: 0.3.61 | [production] |
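The WDQS deploy above restarts `wdqs-categories` on LVS-managed hosts one node at a time, depooling each host before the restart and repooling it after. A dry-run sketch of that depool/restart/pool loop — `depool`/`pool` are the conftool wrapper commands available on Wikimedia LVS-managed hosts, but the function, the echo-only dry run, and the example hostnames here are illustrative assumptions:

```shell
set -euo pipefail

rolling_restart() {
  # Restart a systemd unit across hosts one at a time, taking each host
  # out of the load balancer first (as with `cumin -b 1 ... 'depool && ...'`).
  # Dry run: steps are printed, not executed.
  local unit=$1; shift
  local host
  for host in "$@"; do
    echo "[${host}] depool"
    echo "[${host}] systemctl restart ${unit}"
    echo "[${host}] pool"
  done
}

# Hypothetical host list for illustration only.
rolling_restart wdqs-categories wdqs1004.eqiad.wmnet wdqs1005.eqiad.wmnet
```

The `sleep 45` pauses in the logged cumin command give the service time to drain before the restart and to warm up before repooling; test hosts, not being behind LVS, can be restarted all at once.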