2020-03-04
14:07 <liw@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.35.0-wmf.22 [production]
13:47 <addshore> START warm cache for db1111 & db1126 for Q25-30 million T219123 (pass 3) [production]
13:33 <godog> disable puppet on install1002 to test partman on theemin [production]
13:19 <vgutierrez> upgrading ATS to version 8.0.6 on esams [production]
13:14 <marostegui> Drop fixcopyrightwiki from sanitarium hosts (db1112, db2074) to avoid getting the data alert - T246055 [production]
12:55 <urbanecm@deploy1001> Synchronized wmf-config/throttle.php: 37db2a1: Add new throttle rule for WikiGap Göteborg 2020-03-06 (T246888) (duration: 01m 04s) [production]
12:23 <XioNoX> add flowspec rule on cr3-knams - T243482 [production]
12:20 <Urbanecm> EU SWAT done [production]
12:19 <moritzm> installing 4.9.210-1~deb8u1 kernel on jessie hosts (no reboots, just the upgrade) [production]
12:19 <urbanecm@deploy1001> Synchronized php-1.35.0-wmf.22/extensions/GrowthExperiments/includes/HelpPanel/QuestionStore.php: SWAT: d495f4c: Replace loadRevisionFromId which has been removed in I0c8fe834da79c (duration: 01m 06s) [production]
12:14 <urbanecm@deploy1001> Synchronized wmf-config/throttle.php: SWAT: 1fa9dda: IP Cap Lift for University of Mannheim Wikimedia Event (2020-04-01) (T246832) (duration: 01m 06s) [production]
12:11 <moritzm> imported linux-meta 1.23 to apt.wikimedia.org for jessie-wikimedia [production]
12:04 <urbanecm@deploy1001> Synchronized wmf-config/throttle.php: SWAT: 85a5c05: Add throttle exempt for 2020-03-07 GenderGap Event (T246813) (duration: 01m 05s) [production]
11:51 <addshore> START warm cache for db1111 & db1126 for Q25-30 million T219123 (pass 2) [production]
11:19 <vgutierrez> upgrading ATS to version 8.0.6 on eqsin [production]
11:01 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Write to new term store up to Q86 million, was 84 (T219123) cache bust (duration: 01m 03s) [production]
11:00 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Write to new term store up to Q86 million, was 84 (T219123) (duration: 01m 04s) [production]
10:52 <vgutierrez> upgrading ATS to version 8.0.6 on ulsfo [production]
10:41 <addshore> START warm cache for db1111 & db1126 for Q25-30 million T219123 (pass 1) [production]
10:38 <vgutierrez> upload trafficserver 8.0.6-1wm1 to apt.wm.o (buster) [production]
10:38 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q25M for the new term store everywhere (was Q20M) + warm db1126 & db1111 caches (T219123) cache bust (duration: 01m 04s) [production]
10:36 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q25M for the new term store everywhere (was Q20M) + warm db1126 & db1111 caches (T219123) (duration: 01m 05s) [production]
10:20 <marostegui> Remove es2 eqiad and codfw from zarcillo.masters table - T246072 [production]
10:10 <marostegui> Update shards table to set es2 display=0 - T246072 [production]
10:05 <marostegui> es2 maintenance window over T246072 [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Give some weight to es2 master es1015 and es2016, now standalone - T246072', diff saved to https://phabricator.wikimedia.org/P10609 and previous config saved to /var/cache/conftool/dbconfig/20200304-095919-marostegui.json [production]
09:55 <marostegui> Reset replication on es2 hosts - T246072 [production]
09:44 <moritzm> installing python-bleach security updates [production]
09:43 <marostegui> Set es1015 (es2 master) on read_only - T246072 [production]
09:38 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 3 today) [production]
09:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Set es2 as RO - T246072 (duration: 01m 04s) [production]
09:13 <_joe_> removing nginx from servers where it was just used for service proxying. [production]
09:09 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Set es2 as RO - T246072 (duration: 01m 14s) [production]
08:58 <akosiaris> release Giant Puppet Lock across the fleet. https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/ has made its way to all PoPs and most of codfw without issues, and will reach the rest of the fleet in the next 30 mins [production]
08:54 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 2 today) [production]
08:45 <akosiaris> running puppet on first mw host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, mw2269, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, es2019, rescheduling icinga checks as well (correction) [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2019, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first db host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2086, rescheduling icinga checks as well [production]
08:13 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 1 today) [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10608 and previous config saved to /var/cache/conftool/dbconfig/20200304-073721-marostegui.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10607 and previous config saved to /var/cache/conftool/dbconfig/20200304-071443-marostegui.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10606 and previous config saved to /var/cache/conftool/dbconfig/20200304-070048-marostegui.json [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10605 and previous config saved to /var/cache/conftool/dbconfig/20200304-064520-marostegui.json [production]
06:30 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
06:22 <cdanis> ✔️ cdanis@prometheus2004.codfw.wmnet ~ 🕝☕ sudo systemctl restart prometheus@ops [production]
06:21 <cdanis> ✔️ cdanis@prometheus2004.codfw.wmnet ~ 🕝☕ sudo systemctl reload prometheus@ops [production]
06:10 <marostegui> Stop MySQL on db1098:3316, db1098:3317 for upgrade - T246604 [production]
01:56 <mutante> mw2178 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]