2020-06-29
12:32 <jayme> deleted all tags for docker-registry.wikimedia.org/envoy-tls-local-proxy from docker registry - T253396 [production]
12:20 <marostegui> Stop MySQL on db2096 (codfw x1 master) for reimage T254871 [production]
12:03 <cdanis> re-pool eqiad T256512 [production]
11:59 <cdanis> deployed I132075ee on cr1-eqiad T256512 [production]
11:58 <cdanis> deployed I132075ee on cr2-eqiad T256512 [production]
11:58 <cdanis> deployed I132075ee on cr2-eqiad [production]
11:41 <cdanis> depool eqiad T256512 [production]
11:15 <awight> EU BACON cooked [production]
11:08 <marostegui> Deploy schema change on db1095:3312 (lag will show up) [production]
10:41 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:608284| Bumping portals to master (608284)]] (duration: 00m 57s) [production]
10:41 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:608284| Bumping portals to master (608284)]] (duration: 00m 58s) [production]
10:29 <gehel> restart blazegraph on wdqs1004 + depool to catch up on lag [production]
09:59 <ema> cp2040: upgrade purged to 0.16 T256479 [production]
09:59 <jbond42> switch idp to memcached [production]
09:47 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:47 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
09:45 <marostegui> Deploy schema change on dbstore1004:3312 [production]
09:11 <jbond42> deploying shellcheck CI https://gerrit.wikimedia.org/r/c/operations/puppet/+/602693 [production]
08:59 <marostegui> Compress InnoDB on db1089 (this will cause lag and will take a few days) - T254462 [production]
08:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1089 for InnoDB compression T254462', diff saved to https://phabricator.wikimedia.org/P11690 and previous config saved to /var/cache/conftool/dbconfig/20200629-085854-marostegui.json [production]
08:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11688 and previous config saved to /var/cache/conftool/dbconfig/20200629-084827-marostegui.json [production]
08:40 <ema> cp2034: restart purged T256444 [production]
08:36 <ema> cp4025: restart purged T256444 [production]
08:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11687 and previous config saved to /var/cache/conftool/dbconfig/20200629-083631-marostegui.json [production]
08:33 <ema> cp1087, cp2033, cp2037, cp2039: repool after spending (way) more than 24h depooled T256444 [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11686 and previous config saved to /var/cache/conftool/dbconfig/20200629-082635-marostegui.json [production]
08:24 <marostegui> Deploy schema change on s2 codfw (lag will show up) T253276 [production]
08:04 <XioNoX> add term selected-paths to policy BGP_IXP_in on all routers [production]
08:03 <godog> prometheus eqiad -- lvextend --resizefs --size +200G vg-ssd/prometheus-ops [production]
08:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11685 and previous config saved to /var/cache/conftool/dbconfig/20200629-080253-marostegui.json [production]
07:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1135 (depooled) to s1 T253217', diff saved to https://phabricator.wikimedia.org/P11684 and previous config saved to /var/cache/conftool/dbconfig/20200629-074611-marostegui.json [production]
07:16 <XioNoX> push new pfw firewall rules - T256170 [production]
07:13 <marostegui> Deploy schema change on db1085 with replication to labs T253276 [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1085', diff saved to https://phabricator.wikimedia.org/P11683 and previous config saved to /var/cache/conftool/dbconfig/20200629-071236-marostegui.json [production]
06:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1080 from MW', diff saved to https://phabricator.wikimedia.org/P11682 and previous config saved to /var/cache/conftool/dbconfig/20200629-065335-marostegui.json [production]
06:50 <elukey> execute gnt-instance remove an-launcher1001.eqiad.wmnet on ganeti1011 - T256363 [production]
06:47 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
06:46 <elukey@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:45 <marostegui> Deploy MCR schema change on db1090:3312 [production]
06:35 <elukey> force puppet run on ores* to overcome celery OOMs on some nodes [production]
04:57 <marostegui> Stop MySQL on db1080 to clone db1135 T253217 [production]
04:56 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
04:53 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]