2021-01-13
09:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1020', diff saved to https://phabricator.wikimedia.org/P13747 and previous config saved to /var/cache/conftool/dbconfig/20210113-095834-marostegui.json [production]
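For context, the depool above is the usual two-step dbctl workflow: stage the instance as depooled, then commit so the change propagates to MediaWiki. A minimal sketch, assuming standard dbctl usage on a cumin host (the diff step and commit message are illustrative):

```bash
# Sketch of a dbctl depool, assuming standard dbctl usage on a cumin host.
# Mark the instance as depooled in the staged configuration.
sudo dbctl instance es1020 depool

# Review the staged change, then commit it so it is distributed.
sudo dbctl config diff
sudo dbctl config commit -m 'Depool es1020'
```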
09:49 <marostegui> Enable report_host on all codfw sby masters - T271106 [production]
09:42 <godog> swift eqiad-prod: add weight to ms-be106[0-3] - T268435 [production]
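Adding weight to new Swift backends amounts to raising their device weights in the ring and rebalancing. A rough sketch with the stock swift-ring-builder CLI; the builder file name, device id and weight below are illustrative assumptions, not the actual eqiad-prod ring contents, and WMF normally drives this through its own ring-management tooling:

```bash
# Illustrative only: raise the weight of a newly added device, then rebalance.
# "object.builder", device id d60 and weight 2000 are assumptions.
swift-ring-builder object.builder set_weight d60 2000
swift-ring-builder object.builder rebalance
```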
09:05 <ayounsi@deploy1001> Finished deploy [homer/deploy@723ebfe]: Netbox 2.9 changes (duration: 03m 11s) [production]
09:03 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-restart (exit_code=99) [production]
09:02 <ayounsi@deploy1001> Started deploy [homer/deploy@723ebfe]: Netbox 2.9 changes [production]
09:02 <moritzm> installing efivar bugfix update [production]
09:00 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:54 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
08:47 <moritzm> draining ganeti4003 for eventual reboot [production]
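The drain-then-reboot pattern above repeats for each ganeti40xx node: move running instances off the node, then reboot it via the cookbook logged above. A rough sketch, assuming stock Ganeti CLI and a host argument to the cookbook (the exact arguments are not shown in the log, so treat them as assumptions):

```bash
# Live-migrate instances whose primary node is ganeti4003 to their secondaries
# (assumption: done with stock gnt-node rather than a site-specific wrapper).
sudo gnt-node migrate -f ganeti4003.ulsfo.wmnet

# Reboot the drained node via the cookbook; host argument is an assumption.
sudo cookbook sre.hosts.reboot-single ganeti4003.ulsfo.wmnet
```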
08:46 <ema> cp5008: re-enable puppet to undo JIT tslua experiment T265625 [production]
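Re-enabling Puppet and letting it converge undoes the locally applied experiment. A minimal sketch using the stock Puppet CLI (WMF wrapper scripts may differ):

```bash
# Re-enable the agent, then run it so cp5008 converges back to the
# puppetized configuration, reverting the JIT tslua experiment.
sudo puppet agent --enable
sudo puppet agent --test
```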
08:35 <moritzm> failover ganeti master in ulsfo to ganeti4002 [production]
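A Ganeti master failover is initiated from the node that should become the new master. A short sketch with the standard Ganeti commands:

```bash
# Run on the master candidate that should take over (ganeti4002 here).
sudo gnt-cluster master-failover

# Confirm which node is now the cluster master.
sudo gnt-cluster getmaster
```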
08:29 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:23 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
08:19 <moritzm> draining ganeti4002 for eventual reboot [production]
08:17 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:13 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
08:04 <ryankemper> [WDQS Deploy] Deploy is complete, and the WDQS service is healthy [production]
07:59 <moritzm> draining ganeti4001 for eventual reboot [production]
07:29 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
07:29 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
07:28 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts simultaneously: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
07:28 <ryankemper@deploy1001> Finished deploy [wdqs/wdqs@fdd2c2f]: 0.3.59 (duration: 14m 23s) [production]
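The wdqs deploy entries above are driven by scap from the deployment host. A minimal sketch of the step that produces the "Started/Finished deploy" lines; the checkout path is an assumption based on the conventional /srv/deployment layout:

```bash
# On deploy1001, from the wdqs deployment checkout (path is an assumption).
cd /srv/deployment/wdqs/wdqs
scap deploy '0.3.59'
```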
07:15 <ryankemper> [WDQS Deploy] All tests passing on canary instance `wdqs1003` following canary deploy. Proceeding to rest of fleet... [production]
07:13 <ryankemper@deploy1001> Started deploy [wdqs/wdqs@fdd2c2f]: 0.3.59 [production]
07:13 <ryankemper> [WDQS Deploy] All tests passing on canary instance `wdqs1003` prior to start of deploy. Proceeding with canary deploy of version `0.3.59`... [production]
07:04 <ryankemper> T266492 T268779 T265699 Restarting cloudelastic to apply new readahead changes; this will also verify that cloudelastic support works in our elasticsearch spicerack code. Only going one node at a time because cloudelastic elasticsearch indices only have 1 replica shard per index. [production]
07:03 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
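A rough sketch of how such a rolling restart is kicked off via the spicerack cookbook on a cumin host; the cluster selector, reason string and flag spelling below are assumptions, not the exact invocation recorded by the cookbook runner:

```bash
# Hypothetical invocation; actual cookbook arguments may differ.
# One node per batch matches the single-replica constraint noted above.
sudo cookbook sre.elasticsearch.rolling-restart \
    cloudelastic 'apply readahead changes' \
    --nodes-per-run 1
```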
06:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1079 (re)pooling @ 100%: After cloning db1155:3317', diff saved to https://phabricator.wikimedia.org/P13745 and previous config saved to /var/cache/conftool/dbconfig/20210113-065535-root.json [production]
06:40 <marostegui@cumin1001> dbctl commit (dc=all): 'db1079 (re)pooling @ 75%: After cloning db1155:3317', diff saved to https://phabricator.wikimedia.org/P13744 and previous config saved to /var/cache/conftool/dbconfig/20210113-064031-root.json [production]
06:25 <marostegui@cumin1001> dbctl commit (dc=all): 'db1079 (re)pooling @ 50%: After cloning db1155:3317', diff saved to https://phabricator.wikimedia.org/P13743 and previous config saved to /var/cache/conftool/dbconfig/20210113-062528-root.json [production]
06:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1079 (re)pooling @ 25%: After cloning db1155:3317', diff saved to https://phabricator.wikimedia.org/P13742 and previous config saved to /var/cache/conftool/dbconfig/20210113-061024-root.json [production]
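The four commits above are a staged repool of db1079, raising its pooled percentage in steps (25% → 50% → 75% → 100%) with a pause between each. A sketch of one step, assuming dbctl's percentage option (the flag spelling is an assumption):

```bash
# One step of a gradual repool; repeated at 25/50/75/100% with waits between.
sudo dbctl instance db1079 pool -p 25
sudo dbctl config commit -m 'db1079 (re)pooling @ 25%: After cloning db1155:3317'
```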
2021-01-12
22:55 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2225.codfw.wmnet [production]
22:55 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2224.codfw.wmnet [production]
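These conftool actions pool the two reimaged appservers back into service; the logged "selector"/"set" pair maps directly onto a confctl invocation:

```bash
# Pool the reimaged codfw appservers back in via conftool.
sudo confctl select 'name=mw2225.codfw.wmnet' set/pooled=yes
sudo confctl select 'name=mw2224.codfw.wmnet' set/pooled=yes
```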
22:46 <crusnov@deploy1001> Finished deploy [netbox/deploy@b17db99]: Rerun production deploy of Netbox 2.9 just in case T266487 (duration: 00m 05s) [production]
22:46 <crusnov@deploy1001> Started deploy [netbox/deploy@b17db99]: Rerun production deploy of Netbox 2.9 just in case T266487 [production]
22:37 <chaomodus> Upgrade of Netbox to 2.9 complete, checking support software. T266487 [production]
22:32 <crusnov@deploy1001> Finished deploy [netbox/deploy@b17db99]: Deploy Netbox 2.9.10 to production T266487 (duration: 02m 33s) [production]
22:30 <crusnov@deploy1001> Started deploy [netbox/deploy@b17db99]: Deploy Netbox 2.9.10 to production T266487 [production]
22:12 <chaomodus> Merged Netbox 2.9 related changes in puppet and -extras; testing on -next T266487 [production]
22:07 <bblack> reboot authdns1001 - T266746#6741647 [production]
22:04 <chaomodus> proceeding with Netbox 2.9 upgrade T266487 [production]
22:02 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw2225.codfw.wmnet with reason: REIMAGE [production]
22:00 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw2225.codfw.wmnet with reason: REIMAGE [production]
21:57 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw2224.codfw.wmnet with reason: REIMAGE [production]
21:55 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw2224.codfw.wmnet with reason: REIMAGE [production]
21:50 <jforrester@deploy1001> Synchronized php-1.36.0-wmf.25/extensions/AbuseFilter/modules/mode-abusefilter.js: T271487 Don't pass protocol-relative URLs to the Ace worker (duration: 01m 06s) [production]
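The "Synchronized" line above is what scap prints when a single file is synced to the app servers; a minimal sketch from the deployment host, with the log message reused as the sync annotation:

```bash
# Sync one backported file on the wmf.25 branch to production app servers.
scap sync-file php-1.36.0-wmf.25/extensions/AbuseFilter/modules/mode-abusefilter.js \
    "T271487 Don't pass protocol-relative URLs to the Ace worker"
```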
21:41 <ottomata> rolling restart of eventgate-analytics-external pods [production]
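One way to express a rolling restart of those pods is a plain kubectl rollout restart; the namespace and deployment names below are assumptions, since the log does not show the actual command or tooling used:

```bash
# Hypothetical: namespace and deployment names are assumptions.
kubectl -n eventgate-analytics-external rollout restart deployment eventgate-analytics-external
kubectl -n eventgate-analytics-external rollout status deployment eventgate-analytics-external
```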
20:40 <tgr_> running 'mwscript extensions/ORES/maintenance/PopulateDatabase.php --wiki=ukwiki' on terbium [production]
19:57 <tgr_> backports done [production]