2021-02-09
09:22 <godog> swift eqiad-prod: decrease weight for SSDs on ms-be[1019-1026] - T272836 [production]
08:44 <XioNoX> repool esams - T272342 [production]
08:30 <XioNoX> rollback redirect ns2 to authdns1001 - T252631 [production]
08:09 <XioNoX> alright, brace yourself, esams switch stack is going to go down [production]
08:03 <ayounsi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:30:00 on 32 hosts with reason: switch upgrade [production]
08:02 <ayounsi@cumin1001> START - Cookbook sre.hosts.downtime for 1:30:00 on 32 hosts with reason: switch upgrade [production]
07:54 <XioNoX> redirect ns2 to authdns1001 - T252631 [production]
07:47 <hashar@deploy1001> Finished deploy [integration/docroot@672e79f]: build: Add /scap/log to gitignore (duration: 00m 06s) [production]
07:47 <hashar@deploy1001> Started deploy [integration/docroot@672e79f]: build: Add /scap/log to gitignore [production]
07:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1081 from dbctl T273040', diff saved to https://phabricator.wikimedia.org/P14241 and previous config saved to /var/cache/conftool/dbconfig/20210209-073455-marostegui.json [production]
07:20 <ryankemper> [WDQS Deploy] Deploy complete. Successful test query placed on query.wikidata.org, there's no relevant criticals in Icinga, and Grafana looks good [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1111 (re)pooling @ 100%: Slowly repooling db1111 after onsite maintenance', diff saved to https://phabricator.wikimedia.org/P14240 and previous config saved to /var/cache/conftool/dbconfig/20210209-072038-root.json [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1111 (re)pooling @ 75%: Slowly repooling db1111 after onsite maintenance', diff saved to https://phabricator.wikimedia.org/P14239 and previous config saved to /var/cache/conftool/dbconfig/20210209-070534-root.json [production]
07:04 <XioNoX> depool disable 2 uplinks on asw2-esams - T272342 [production]
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1111 (re)pooling @ 50%: Slowly repooling db1111 after onsite maintenance', diff saved to https://phabricator.wikimedia.org/P14238 and previous config saved to /var/cache/conftool/dbconfig/20210209-065031-root.json [production]
06:48 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
06:48 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
06:48 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
06:47 <ryankemper@deploy1001> Finished deploy [wdqs/wdqs@582b070]: 0.3.63 (duration: 06m 46s) [production]
06:44 <XioNoX> depool esams for network maintenance - T272342 [production]
06:41 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.63` on canary `wdqs1003`; proceeding to rest of fleet [production]
06:40 <ryankemper@deploy1001> Started deploy [wdqs/wdqs@582b070]: 0.3.63 [production]
06:40 <ryankemper> Pooled `wdqs1007` and depooled `wdqs1005` (`1005` is ~12 hours behind) [production]
06:38 <ryankemper> [WDQS Deploy] Gearing up for deploy of wdqs `0.3.63`. Pre-deploy tests passing on canary `wdqs1003` [production]
06:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1111 (re)pooling @ 25%: Slowly repooling db1111 after onsite maintenance', diff saved to https://phabricator.wikimedia.org/P14237 and previous config saved to /var/cache/conftool/dbconfig/20210209-063527-root.json [production]
06:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1111 (re)pooling @ 10%: Slowly repooling db1111 after onsite maintenance', diff saved to https://phabricator.wikimedia.org/P14236 and previous config saved to /var/cache/conftool/dbconfig/20210209-062024-root.json [production]
06:20 <marostegui> Stop mysql on s2 and s7 on db1090 to clone db1170 T258361 [production]
06:18 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1090:3312, db1090:3317 T258361', diff saved to https://phabricator.wikimedia.org/P14234 and previous config saved to /var/cache/conftool/dbconfig/20210209-061822-marostegui.json [production]
06:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1111 (re)pooling @ 5%: Slowly repooling db1111 after onsite maintenance', diff saved to https://phabricator.wikimedia.org/P14233 and previous config saved to /var/cache/conftool/dbconfig/20210209-060520-root.json [production]
05:02 <krinkle@deploy1001> Finished deploy [integration/docroot@fdfb265]: I271e6054880, T273247 (duration: 00m 06s) [production]
05:02 <krinkle@deploy1001> Started deploy [integration/docroot@fdfb265]: I271e6054880, T273247 [production]
01:56 <tstarling@deploy1001> Synchronized php-1.36.0-wmf.29/extensions/FeaturedFeeds: probable fix for UBN T273242 (duration: 01m 06s) [production]
01:46 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1302.eqiad.wmnet [production]
01:46 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1301.eqiad.wmnet [production]
00:48 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1302.eqiad.wmnet [production]
00:48 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1301.eqiad.wmnet [production]
00:28 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1387.eqiad.wmnet [production]
00:24 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1386.eqiad.wmnet [production]
00:22 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1386.eqiad.wmnet [production]
00:22 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1387.eqiad.wmnet [production]
00:02 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1301.eqiad.wmnet with reason: REIMAGE [production]
00:00 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1302.eqiad.wmnet with reason: REIMAGE [production]
2021-02-08
23:59 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1301.eqiad.wmnet with reason: REIMAGE [production]
23:58 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1302.eqiad.wmnet with reason: REIMAGE [production]
23:52 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on mw2220.codfw.wmnet with reason: T273803 [production]
23:52 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on mw2220.codfw.wmnet with reason: T273803 [production]
23:50 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw2220.codfw.wmnet [production]
23:49 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2220.codfw.wmnet [production]
23:49 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1386.eqiad.wmnet with reason: REIMAGE [production]
23:47 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1386.eqiad.wmnet with reason: REIMAGE [production]