2018-03-16
| 16:29 | <jynus> | updating wikireplica_dns 2/3 | [production] |
| 16:22 | <moritzm> | installing curl security updates | [production] |
| 16:09 | <marostegui> | Stop MySQL on db1020 - T189773 | [production] |
| 14:48 | <andrewbogott> | reset contintcloud quotas as per https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS/Admin/Troubleshooting#incorrect_quota_violations | [production] |
| 14:48 | <jynus> | reimage dbproxy1011 | [production] |
| 14:27 | <andrewbogott> | restarting nodepool on nodepool1001 | [production] |
| 14:25 | <elukey> | reboot druid1002 for kernel updates | [production] |
| 14:14 | <andrewbogott> | restarting rabbitmq on labcontrol1001 | [production] |
| 13:57 | <andrewbogott> | stopping nodepool temporarily during changes to nova.conf | [production] |
| 13:41 | <marostegui@tin> | Synchronized wmf-config/db-codfw.php: Repool db2050 (duration: 00m 58s) | [production] |
| 13:15 | <chasemp> | disable puppet across cloud things for safe rollout | [production] |
| 12:52 | <moritzm> | uploaded libsodium23/php-apcu/php-mailparse to thirdparty/php72 (deps/extensions needed by Phabricator) | [production] |
| 12:51 | <ema> | text-esams: reboot for kernel upgrades T188092 and to mitigate https://grafana.wikimedia.org/dashboard/db/varnish-failed-fetches?panelId=7&fullscreen&orgId=1&from=1518746284946&to=1521204628041 | [production] |
| 12:12 | <marostegui> | Reboot dbproxy1005 for kernel upgrade | [production] |
| 12:02 | <marostegui> | Run pt-table-checksum on m2 | [production] |
| 12:00 | <marostegui> | Run pt-table-checksum on m5 | [production] |
| 11:11 | <hashar> | zuul: reenqueue all coverage jobs lost when restarting Zuul | [production] |
| 10:53 | <hashar> | Upgrading zuul to zuul_2.5.1-wmf4 to resolve a mutex deadlock T189859 | [production] |
| 10:45 | <jynus> | disable puppet and load balance between 3 wikireplicas on dbproxy1010 | [production] |
| 10:19 | <jynus> | upgrade and restart of dbproxy1009 (passive) | [production] |
| 10:01 | <elukey> | restart eventlogging_sync on db1108 (eventlogging db slave) as a precaution after the change of m4-master.eqiad.wmnet's CNAME | [production] |
| 10:00 | <moritzm> | reverting the HHVM/ICU 57 setup on mwdebug2001 which was used for the dry run tests | [production] |
| 09:57 | <elukey> | restart eventlogging-consumer@mysql-eventbus on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) | [production] |
| 09:56 | <hashar> | Zuul coverage pipeline is deadlocked on an unreleased mutex. Will need a new Zuul version. | [production] |
| 09:51 | <elukey> | restart eventlogging-consumer@mysql-m4 on eventlog1002 to force the DNS resolution of m4-master (changed from dbproxy1009 -> dbproxy1004) | [production] |
| 09:31 | <marostegui@tin> | Synchronized wmf-config/db-eqiad.php: Restore original weight for es1015 after kernel, mariadb and socket upgrade (duration: 00m 57s) | [production] |
| 09:27 | <oblivian@tin> | Finished deploy [netbox/deploy@ccc342a]: Re-deploying with the newly built artifacts/2 (duration: 00m 29s) | [production] |
| 09:26 | <oblivian@tin> | Started deploy [netbox/deploy@ccc342a]: Re-deploying with the newly built artifacts/2 | [production] |
| 09:17 | <oblivian@tin> | (no justification provided) | [production] |
| 09:17 | <oblivian@tin> | Finished deploy [netbox/deploy@f3e0159]: Re-deploying with the newly built artifacts (duration: 00m 47s) | [production] |
| 09:16 | <oblivian@tin> | Started deploy [netbox/deploy@f3e0159]: Re-deploying with the newly built artifacts | [production] |
| 09:15 | <marostegui@tin> | Synchronized wmf-config/db-eqiad.php: Depool db1106 - T183469 (duration: 00m 57s) | [production] |
| 08:58 | <marostegui@tin> | Synchronized wmf-config/db-eqiad.php: Slowly repool es1015 after kernel, mariadb and socket upgrade (duration: 00m 56s) | [production] |
| 08:49 | <jynus> | upgrade and restart of dbproxy1004 (passive) | [production] |
| 08:41 | <marostegui> | Stop MySQL on es1015 for maintenance | [production] |
| 08:40 | <marostegui@tin> | Synchronized wmf-config/db-eqiad.php: Depool es1015 for kernel, mariadb and socket upgrade (duration: 00m 58s) | [production] |
| 08:40 | <elukey> | reboot druid1006 for kernel updates | [production] |
| 08:29 | <elukey> | reboot druid1005 for kernel updates | [production] |
| 07:53 | <moritzm> | reimage mc2036 after mainboard replacement (T185587) | [production] |
| 07:15 | <marostegui> | Stop MySQL on es2017 (es3 codfw master) for maintenance | [production] |
| 07:06 | <marostegui> | Stop MySQL on es2016 (es2 codfw master) for maintenance | [production] |
| 06:52 | <marostegui> | Stop MySQL on db2048 (s1 codfw master) for maintenance | [production] |
| 06:41 | <marostegui> | Stop MySQL on db2051 (s4 codfw master) for maintenance | [production] |
| 06:28 | <marostegui> | Stop MySQL on db2045 (s8 codfw master) for maintenance | [production] |
| 06:21 | <marostegui@tin> | Synchronized wmf-config/db-eqiad.php: Repool db1084 (duration: 00m 58s) | [production] |
| 01:46 | <XioNoX> | librenms IRC bot moved to -operations channel. Doc on how to turn it off is on https://wikitech.wikimedia.org/wiki/LibreNMS#IRC_Alerting | [production] |
| 01:00 | <reedy@tin> | Synchronized php-1.31.0-wmf.25/includes/specials/pagers/NewFilesPager.php: Fix T189846 (duration: 00m 58s) | [production] |