2020-04-01
14:46 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:46 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
14:43 <vgutierrez> depool && decommission cp[2018,2020,2022,2024-2026].codfw.wmnet - T249115 [production]
14:32 <gehel> depooling wdqs1006 to allow catching up on lag [production]
14:30 <vgutierrez> pool cp2042 - T248816 [production]
14:16 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:13 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
14:09 <XioNoX> remove AS-path prepending in esams [production]
13:47 <XioNoX> remove AS-path prepending in eqsin [production]
13:39 <vgutierrez> pool cp2041 - T248816 [production]
13:34 <mutante> sodium (mirror): sudo -u mirror ftpsync to get Debian mirror updated (Icinga says it's old) [production]
13:24 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
13:24 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime [production]
13:17 <marostegui> Deploy schema change on db1099:3318 [production]
13:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1099:3318 for schema change', diff saved to https://phabricator.wikimedia.org/P10843 and previous config saved to /var/cache/conftool/dbconfig/20200401-131719-marostegui.json [production]
13:13 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
13:10 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
12:19 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
12:19 <vgutierrez@cumin1001> START - Cookbook sre.hosts.decommission [production]
12:19 <tgr@deploy1001> Synchronized wmf-config/config: SWAT: [[gerrit:584579|Sync growthexperiments dblist with actual state of wmgUseGrowthExperiments (T248844)]] (duration: 01m 06s) [production]
12:18 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
12:18 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
12:17 <tgr@deploy1001> Synchronized dblists/growthexperiments.dblist: SWAT: [[gerrit:584579|Sync growthexperiments dblist with actual state of wmgUseGrowthExperiments (T248844)]] (duration: 01m 05s) [production]
12:17 <XioNoX> restart nfacct on netflow4001 for kafka tls tests - T248980 [production]
12:15 <vgutierrez> depool & decommission cp2013 - T249088 [production]
12:14 <tgr@deploy1001> Synchronized wmf-config/InitialiseSettings.php: re-sync (duration: 01m 06s) [production]
12:12 <tgr@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:585059|Enable password-reset-update on all other than Wikipedias (T245791)]] (duration: 01m 07s) [production]
12:09 <marostegui> Deploy schema change on db1116:3318 [production]
12:05 <cparle@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Revert enabling WikibaseQualityConstraints on Commons take 2 (duration: 01m 08s) [production]
12:04 <cparle@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Revert enabling WikibaseQualityConstraints on Commons (duration: 01m 05s) [production]
11:54 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 4968501: Restrict short URL management log to stewards (T221073; take II) (duration: 01m 05s) [production]
11:53 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 4968501: Restrict short URL management log to stewards (T221073) (duration: 01m 07s) [production]
11:48 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Enable WikibaseQualityConstraints on Commons take II (duration: 01m 06s) [production]
11:44 <cparle@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [SDC] Enable WikibaseQualityConstraints on Commons (duration: 01m 18s) [production]
11:20 <cormacparle__> created table wbqc_constraints on commonswiki [production]
11:03 <jbond42> install bluez update on ganeti-canary and cloudvirt/cloudcontrol-dev [production]
11:01 <mutante> planet1001 - reinstall OS to test install_server switch, ATS switched to planet1002 earlier [production]
10:47 <marostegui> Deploy schema change on dbstore1005:3318 [production]
10:25 <vgutierrez> pool cp2040 - T248816 [production]
10:16 <oblivian@puppetmaster1001> conftool action : set/pooled=yes:weight=1; selector: service=canary [production]
09:55 <dzahn@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
09:46 <dzahn@cumin1001> START - Cookbook sre.ganeti.makevm [production]
09:37 <marostegui> Deploy schema change on s8 codfw, this will generate lag on codfw [production]
09:35 <XioNoX> Update install servers IPs (dhcp helpers + firewall rules) - T224576 [production]
09:34 <mutante> install_servers: DHCP_relay in routers and TFTP server in DHCP server config have been switched from install1002/2002 to install1003/2003 - doing a test install, but if any issues report on T224576 [production]
09:26 <marostegui> last entry was for db2093 [production]
09:26 <marostegui> Downgrade mariadb package from 10.4.12-2 to 10.4.12-1 [production]
09:09 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:07 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:05 <mutante> planet - the backend server has been switched from planet1001 (stretch) to planet1002 (buster) - T247651 [production]