| 2021-06-08 |

  | 19:46 | <bblack> | dns[1235]001: update gdnsd to 3.7.0-2~wmf1 | [production] | 
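
The gdnsd 3.7.0 rollout was canary-first: dns4001 at 18:47 (below), then dns400[12] at 19:26, and finally the remaining sites' hosts (dns[1235]001) here. A minimal per-host verification sketch, assuming standard Debian tooling and the `gdnsdctl` control client that gdnsd 3.x ships; the actual checks used are not logged:

```bash
# Confirm the installed version matches the intended build.
dpkg-query -W -f='${Version}\n' gdnsd    # expect 3.7.0-2~wmf1

# Ask the running daemon for its status over the control socket.
sudo gdnsdctl status

# Spot-check an authoritative answer from the local daemon.
dig +short @127.0.0.1 wikipedia.org
```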
            
  | 19:43 | <ryankemper@cumin1001> | START - Cookbook sre.wdqs.data-transfer | [production] | 
            
  | 19:36 | <ryankemper@cumin1001> | END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) | [production] | 
            
  | 19:36 | <ryankemper> | T280382 Cancelling the data-transfer run to restart it; realized that the cookbook will start up the `wdqs-updater` again so will locally hack the cookbook on `cumin1001` to prevent that | [production] | 
            
  | 19:32 | <ladsgroup@deploy1002> | Synchronized php-1.37.0-wmf.9/extensions/Echo/modules/nojs/mw.echo.alert.monobook.less: Backport: [[gerrit:698848|Fix MonoBook orange banner hover styles (T284496)]] (duration: 01m 08s) | [production] | 
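
The line above is standard `scap sync-file` output for a backport. On the deploy host the flow is roughly the sketch below; the change number and message come from the log, and `/srv/mediawiki-staging` is the conventional staging path:

```bash
cd /srv/mediawiki-staging
# The merged change (gerrit 698848) is first pulled into the staging copy,
# then the single file is synced to the fleet with the message seen above.
scap sync-file \
  php-1.37.0-wmf.9/extensions/Echo/modules/nojs/mw.echo.alert.monobook.less \
  'Backport: [[gerrit:698848|Fix MonoBook orange banner hover styles (T284496)]]'
```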
            
  | 19:26 | <bblack> | dns400[12]: update gdnsd to 3.7.0-2~wmf1 | [production] | 
            
  | 19:25 | <bblack> | apt: update gdnsd package to gdnsd-3.7.0-2~wmf1 (fix systemd reload issues) | [production] | 
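
The apt-repo import here precedes (in time) the per-host upgrades logged above it. Wikimedia's internal repositories are reprepro-managed; a sketch of the import, where the distribution codename and component are assumptions:

```bash
# On the apt server: import the fixed build into the buster distribution.
sudo reprepro -C main include buster-wikimedia gdnsd_3.7.0-2~wmf1_amd64.changes

# Confirm the repo now serves the new version.
reprepro list buster-wikimedia gdnsd
```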
            
  | 19:20 | <ryankemper> | T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1009.eqiad.wmnet --dest wdqs1010.eqiad.wmnet --reason "transferring skolemized wikidata.jnl so we can reimage wdqs1009" --blazegraph_instance blazegraph --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009` | [production] | 
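
The cookbook automates what would otherwise be a manual journal copy between WDQS hosts. A conceptual sketch of the manual equivalent, assuming the journal lives at `/srv/wdqs/wikidata.jnl` (a path assumption; the cookbook itself uses the internal transfer framework rather than rsync):

```bash
# On both wdqs1009 (source) and wdqs1010 (destination): quiesce Blazegraph
# so the journal file is not being written during the copy.
sudo systemctl stop wdqs-updater wdqs-blazegraph

# On the destination: pull the skolemized journal across.
rsync -av --progress wdqs1009.eqiad.wmnet:/srv/wdqs/wikidata.jnl /srv/wdqs/

# Bring Blazegraph back, but leave wdqs-updater stopped; restarting it
# is exactly what the 19:36 entry's local hack to the cookbook avoids.
sudo systemctl start wdqs-blazegraph
```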
            
  | 19:20 | <ryankemper@cumin1001> | START - Cookbook sre.wdqs.data-transfer | [production] | 
            
  | 19:19 | <ryankemper@cumin1001> | END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) | [production] | 
            
  | 19:19 | <ryankemper@cumin1001> | START - Cookbook sre.wdqs.data-transfer | [production] | 
            
  | 19:18 | <ryankemper> | T280382 `sudo systemctl stop wdqs-updater wdqs-blazegraph` on `wdqs1010` in preparation for transfer | [production] | 
            
  | 19:08 | <ryankemper> | [WDQS] `ryankemper@wdqs1005:~$ sudo pool` (all caught up on lag) | [production] | 
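
`pool` and `depool` on production hosts are thin wrappers around conftool that act on the local host. A sketch of the explicit equivalent of the command above, using confctl:

```bash
# Pool this host (what `sudo pool` does under the hood)...
sudo confctl select "name=$(hostname -f)" set/pooled=yes

# ...and read back the current state to verify.
sudo confctl select "name=$(hostname -f)" get
```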
            
  | 18:47 | <bblack> | dns4001: update gdnsd to 3.7.0-1~wmf1 | [production] | 
            
  | 18:43 | <bblack> | apt: update gdnsd package to gdnsd-3.7.0-1~wmf1 | [production] | 
            
  | 17:49 | <jgiannelos@deploy1002> | helmfile [eqiad] Ran 'sync' command on namespace 'proton' for release 'production'. | [production] | 

  | 17:36 | <jgiannelos@deploy1002> | helmfile [codfw] Ran 'sync' command on namespace 'proton' for release 'production'. | [production] | 

  | 17:25 | <jgiannelos@deploy1002> | helmfile [staging] Ran 'sync' command on namespace 'proton' for release 'production'. | [production] | 
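
Read bottom-up, the three proton syncs follow the usual staging → codfw → eqiad progression for Kubernetes services. On the deploy host this is driven from the service's helmfile directory; a sketch, with the path per the standard deployment-charts layout:

```bash
cd /srv/deployment-charts/helmfile.d/services/proton

# Review the pending change, then roll it out one environment at a time.
helmfile -e staging diff
helmfile -e staging sync
helmfile -e codfw sync
helmfile -e eqiad sync
```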
            
  | 17:10 | <elukey> | fix dbstore1007's IP address in analytics-in4 on cr{1,2}-eqiad | [production] | 
            
  | 17:06 | <jhuneidi@deploy1002> | Finished scap: testwikis wikis to 1.37.0-wmf.9 refs T281150 (duration: 34m 12s) | [production] | 
            
  | 16:32 | <jhuneidi@deploy1002> | Started scap: testwikis wikis to 1.37.0-wmf.9 refs T281150 | [production] | 
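
The Started/Finished pair above is a full `scap sync` moving the testwikis group to the new train branch. A rough sketch of the deploy-host steps, hedged because scap subcommands changed over this era; wikiversions.json is updated to point testwikis at the new branch before the sync:

```bash
cd /srv/mediawiki-staging

# Stage the new branch on the deploy host (exact prep tooling varied
# around this time; `scap prep` is the modern form).
scap prep 1.37.0-wmf.9

# A full sync distributes the branch and the updated wikiversions.
scap sync "testwikis wikis to 1.37.0-wmf.9 refs T281150"
```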
            
  | 16:27 | <papaul> | powerdown moss-fe2002 for relocation | [production] | 
            
  | 16:06 | <papaul> | powerdown ms-backup2002 for relocation | [production] | 
            
  | 16:02 | <oblivian@deploy1002> | helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. | [production] | 
            
  | 15:40 | <papaul> | powerdown ms-be2061 for relocation | [production] | 
            
  | 15:40 | <bblack@cumin1001> | conftool action : set/pooled=yes; selector: name=cp203[34].codfw.wmnet | [production] | 
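
This repool and the 14:59 depool further down bracket the physical relocation of cp2033/cp2034 (powered down at 15:04 and 15:13). The logged conftool actions correspond to confctl invocations along these lines:

```bash
# Before the move: drain both cache hosts.
sudo confctl select 'name=cp203[34].codfw.wmnet' set/pooled=no

# After racking and health checks: return them to service.
sudo confctl select 'name=cp203[34].codfw.wmnet' set/pooled=yes
```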
            
  | 15:33 | <papaul> | powerdown thanos-fe2003 for relocation | [production] | 
            
  | 15:23 | <Krinkle> | mwmaint1002: Running purge-parsercache-now.php on server 4/4 (pc1009) ref P16060, T280605, T282761. | [production] | 
            
  | 15:19 | <kormat@cumin1001> | END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on pc2009.codfw.wmnet,pc1009.eqiad.wmnet with reason: Purging parsercache pc3 T282761 | [production] | 
            
  | 15:19 | <kormat@cumin1001> | START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on pc2009.codfw.wmnet,pc1009.eqiad.wmnet with reason: Purging parsercache pc3 T282761 | [production] | 
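
The START/END pair above comes from the Icinga-downtime cookbook. A sketch of the invocation that produces it, assuming the standard `sre.hosts.downtime` flags:

```bash
sudo cookbook sre.hosts.downtime --days 5 \
    --reason 'Purging parsercache pc3 T282761' \
    'pc2009.codfw.wmnet,pc1009.eqiad.wmnet'
```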
            
  | 15:13 | <papaul> | powerdown cp2034 for relocation | [production] | 
            
  | 15:04 | <papaul> | powerdown cp2033 for relocation | [production] | 
            
  | 14:59 | <bblack@cumin1001> | conftool action : set/pooled=no; selector: name=cp203[34].codfw.wmnet | [production] | 
            
  | 14:43 | <moritzm> | cleanup of now-unused nginx mods and former deps (various X11 libs and libxslt) on testreduce1001/scandium after the switch to nginx-light T164456 | [production] | 
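
nginx-light provides the core server without the dynamic modules whose dependency chains drag in X11 libraries and libxslt; once it replaces the full flavour, apt can sweep the orphans. A sketch of the cleanup, assuming plain apt on the hosts:

```bash
# Swap to the lighter flavour (apt removes the conflicting full package).
sudo apt-get install nginx-light

# Purge the now-orphaned modules and their dependency chains.
sudo apt-get autoremove --purge
```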
            
  | 14:08 | <marostegui> | Restart sanitarium hosts (db2094, db2095, db1154, db1155) to pick up new filters T284106 | [production] | 
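
Sanitarium instances apply the replication filters that strip private data before it reaches the Wiki Replicas; filters set in server config are only read at daemon startup, hence the restarts. A verification sketch for one instance, where the per-section unit and socket names follow the usual multi-instance convention (`s1` here is illustrative):

```bash
sudo systemctl restart mariadb@s1

# Confirm the new filters are active on the replication stream.
sudo mysql -S /run/mysqld/mysqld.s1.sock \
    -e 'SHOW SLAVE STATUS\G' | grep -E 'Replicate_(Wild_)?Ignore_Table'
```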
            
  | 14:05 | <kormat@deploy1002> | Synchronized wmf-config/db-eqiad.php: Set pc1010 as pc3 master T282761 (duration: 00m 57s) | [production] | 
            
  | 14:05 | <kormat> | setting pc1010 as pc3 primary T282761 | [production] | 
            
  | 13:51 | <jbond@deploy1002> | Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next  (duration: 00m 42s) | [production] | 
            
  | 13:51 | <jbond@deploy1002> | Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next | [production] | 
            
  | 13:48 | <otto@cumin1001> | END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) | [production] | 
            
  | 13:41 | <otto@cumin1001> | START - Cookbook sre.zookeeper.roll-restart-zookeeper | [production] | 
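
The ZooKeeper cookbook restarts quorum members one at a time, waiting for each to rejoin before proceeding. The standard health probe is ZooKeeper's four-letter-word interface; a sketch of the per-node check:

```bash
# After restarting a member's service:
sudo systemctl restart zookeeper

# The node should answer `imok` and report its quorum role.
echo ruok | nc localhost 2181              # expect: imok
echo stat | nc localhost 2181 | grep Mode  # leader or follower
```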
            
  | 13:40 | <jbond@deploy1002> | Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next  (duration: 00m 47s) | [production] | 
            
  | 13:39 | <jbond@deploy1002> | Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next | [production] | 
            
  | 13:36 | <jbond@deploy1002> | Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next  (duration: 01m 03s) | [production] | 
            
  | 13:35 | <jbond@deploy1002> | Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next | [production] | 
            
  | 13:33 | <otto@cumin1001> | END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto analytics cluster: Roll restart of all Presto JVM daemons. - otto@cumin1001 | [production] | 
            
  | 13:22 | <otto@cumin1001> | START - Cookbook sre.presto.roll-restart-workers for Presto analytics cluster: Roll restart of all Presto JVM daemons. - otto@cumin1001 | [production] | 
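
A Presto roll-restart bounces each worker's JVM and should confirm workers re-register with the coordinator before moving on. A sketch of that check using the Presto CLI (`system.runtime.nodes` is a built-in table; having the `presto` client on PATH is an assumption):

```bash
# Active node count should return to the full worker set after each restart.
presto --execute "SELECT count(*) FROM system.runtime.nodes WHERE state = 'active'"
```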
            
  | 12:15 | <kormat@deploy1002> | Synchronized wmf-config/db-eqiad.php: Repool pc1008 as pc2 master T282761 (duration: 00m 57s) | [production] | 
            
  | 12:14 | <kormat> | setting pc1008 back as pc2 primary T282761 | [production] | 
            
  | 11:54 | <urbanecm@deploy1002> | Synchronized wmf-config/InitialiseSettings.php: ef49422b162ab0161bc39da857b3230175ac4492: enwiki: Disable indexing on the Book namespace (T283522) (duration: 00m 56s) | [production] |