2020-04-01
ยง
|
12:14 |
<tgr@deploy1001> |
Synchronized wmf-config/InitialiseSettings.php: re-sync (duration: 01m 06s) |
[production] |
12:12 |
<tgr@deploy1001> |
Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:585059|Enable password-reset-update on all other than Wikipedias (T245791)]] (duration: 01m 07s) |
[production] |
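The two "Synchronized" entries above are written to the log by scap when a config file is synced from the deployment host. A minimal sketch of the operator's side, assuming a standard /srv/mediawiki-staging checkout; the message passed to scap becomes the log line, and scap appends the duration:

    $ cd /srv/mediawiki-staging
    $ scap sync-file wmf-config/InitialiseSettings.php \
          'SWAT: [[gerrit:585059|Enable password-reset-update on all other than Wikipedias (T245791)]]'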
12:09 <marostegui> Deploy schema change on db1116:3318 [production]
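The schema-change entries in this log never record the DDL itself, so the following is illustration only: a change applied directly to the s8 instance on db1116 (port 3318 conventionally maps to s8/wikidatawiki; the table and column names here are hypothetical) would look roughly like:

    $ mysql -h db1116.eqiad.wmnet -P 3318 wikidatawiki \
          -e 'ALTER TABLE example_table ADD example_col INT UNSIGNED NOT NULL DEFAULT 0'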
12:05 <cparle@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Revert enabling WikibaseQualityConstraints on Commons take 2 (duration: 01m 08s) [production]
12:04 <cparle@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Revert enabling WikibaseQualityConstraints on Commons (duration: 01m 05s) [production]
11:54 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 4968501: Restrict short URL management log to stewards (T221073; take II) (duration: 01m 05s) [production]
11:53 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 4968501: Restrict short URL management log to stewards (T221073) (duration: 01m 07s) [production]
11:48 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Enable WikibaseQualityConstraints on Commons take II (duration: 01m 06s) [production]
11:44 <cparle@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Enable WikibaseQualityConstraints on Commons (duration: 01m 18s) [production]
11:20 <cormacparle__> created table wbqc_constraints on commonswiki [production]
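wbqc_constraints is the cache table used by the WikibaseQualityConstraints extension being enabled above. A sketch of its creation via a heredoc, with the column list reproduced from memory of the extension's schema of that era, so treat the details as approximate:

    $ sudo mysql commonswiki <<'SQL'
    CREATE TABLE IF NOT EXISTS wbqc_constraints (
        constraint_guid       VARBINARY(63) NOT NULL PRIMARY KEY, -- statement GUID of the constraint
        pid                   INT NOT NULL,                       -- numeric ID of the constrained property
        constraint_type_qid   VARBINARY(25) NOT NULL,             -- item ID of the constraint type
        constraint_parameters TEXT                                -- JSON-serialized constraint parameters
    );
    CREATE INDEX wbqc_constraints_pid_index ON wbqc_constraints (pid);
    SQL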
11:03 <jbond42> install bluez update on ganeti-canary and cloudvirt/cloudcontrol-dev [production]
11:01 <mutante> planet1001 - reinstall OS to test install_server switch, ATS switched to planet1002 earlier [production]
10:47 <marostegui> Deploy schema change on dbstore1005:3318 [production]
10:25 <vgutierrez> pool cp2040 - T248816 [production]
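Pooling and depooling cache hosts such as cp2040 is normally done with the conftool wrapper scripts on the host itself (an assumption here; confctl from a cumin host works as well):

    $ sudo depool   # set pooled=no for this host's services in etcd and drain traffic
    $ sudo pool     # set pooled=yes again, as in the cp2040 entry above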
10:16 <oblivian@puppetmaster1001> conftool action : set/pooled=yes:weight=1; selector: service=canary [production]
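That conftool line is the tool's own audit format; it maps one-to-one onto a confctl invocation, here repooling the canary service:

    $ sudo confctl select 'service=canary' set/pooled=yes:weight=1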
09:55 <dzahn@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
09:46 <dzahn@cumin1001> START - Cookbook sre.ganeti.makevm [production]
09:37 <marostegui> Deploy schema change on s8 codfw; this will generate lag on codfw [production]
09:35 <XioNoX> Update install servers IPs (DHCP helpers + firewall rules) - T224576 [production]
09:34 <mutante> install_servers: DHCP relay in routers and TFTP server in DHCP server config have been switched from install1002/2002 to install1003/2003; doing a test install, report any issues on T224576 [production]
09:26 <marostegui> last entry was for db2093 [production]
09:26 <marostegui> Downgrade mariadb package from 10.4.12-2 to 10.4.12-1 [production]
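A package downgrade like the one above (for db2093, per the clarifying entry) is a plain apt operation. A sketch, assuming WMF's wmf-mariadb104 package name (an assumption) and apt-get's --allow-downgrades flag:

    $ sudo apt-get update
    $ sudo apt-get install --allow-downgrades wmf-mariadb104=10.4.12-1   # pin the older version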
09:09 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:07 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
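START/END pairs like these are written automatically by the cookbook runner on the cumin hosts. The downtime cookbook is invoked roughly as below; exact flags may differ by version, and the host and reason are hypothetical:

    $ sudo cookbook sre.hosts.downtime --hours 2 -r 'pool new cp hosts - T248816' 'cp2040.codfw.wmnet'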
09:05 <mutante> planet - the backend server has been switched from planet1001 (stretch) to planet1002 (buster) - T247651 [production]
08:46 <mutante> deneb, boron: systemctl reset-failed to clear up systemd state alerts [production]
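systemctl reset-failed, as in the deneb/boron entry above, clears units stuck in the failed state so state-based monitoring stops alerting; it only resets bookkeeping and starts nothing:

    $ sudo systemctl reset-failed                 # clear all failed units
    $ sudo systemctl reset-failed example.service # or a single unit (name hypothetical)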
08:43 <marostegui> Stop haproxy on dbproxy1010 T248944 [production]
08:37 <jynus> restart bacula at backup1001 [production]
08:30 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
08:30 <vgutierrez@cumin1001> START - Cookbook sre.hosts.decommission [production]
08:28 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:28 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:28 <vgutierrez> depool & decommission cp2017 - T249084 [production]
08:21 <vgutierrez> pool cp2039 - T248816 [production]
08:09 <marostegui> Deploy schema change on db1138 (s4 primary master) [production]
08:06 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:04 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
07:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1121 after schema change', diff saved to https://phabricator.wikimedia.org/P10841 and previous config saved to /var/cache/conftool/dbconfig/20200401-071339-marostegui.json [production]
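dbctl commit entries like this one (and the matching depool at 05:38 below) come from dbctl's two-step workflow: change the desired state, then commit it, which publishes the diff as a Phabricator paste. A sketch, assuming dbctl's instance/config subcommands:

    $ sudo dbctl instance db1121 pool                                  # mark the replica pooled again
    $ sudo dbctl config commit -m 'Repool db1121 after schema change'  # apply and log the diff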
07:12 <vgutierrez> pool cp2038 - T248816 [production]
06:38 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
06:38 <vgutierrez@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:36 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:36 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
06:36 <vgutierrez> depool & decommission cp2012 - T249080 [production]
06:24 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:22 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
05:39 <marostegui> Deploy schema change on db1121 (this will create lag on s4 labs) [production]
05:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1121 for schema change', diff saved to https://phabricator.wikimedia.org/P10840 and previous config saved to /var/cache/conftool/dbconfig/20200401-053827-marostegui.json [production]
00:39 <reedy@deploy1001> Synchronized docroot/mediawiki.org/xml/: Update http and protocol-relative links to https, fix link to sitelist in MW Core (duration: 01m 06s) [production]
00:12 <reedy@deploy1001> Synchronized docroot/mediawiki.org/xml/: Add export-0.11 (duration: 01m 05s) [production]