2020-06-29
10:41 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:608284| Bumping portals to master (608284)]] (duration: 00m 58s) [production]
10:29 <gehel> restart blazegraph on wdqs1004 + depool to catchup on lag [production]
09:59 <ema> cp2040: upgrade purged to 0.16 T256479 [production]
09:59 <jbond42> switch idp to memcached [production]
09:47 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:47 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
09:45 <marostegui> Deploy schema change on dbstore1004:3312 [production]
09:11 <jbond42> deploying shellcheck CI https://gerrit.wikimedia.org/r/c/operations/puppet/+/602693 [production]
08:59 <marostegui> Compress InnoDB on db1089 (this will cause lag and will take a few days) - T254462 [production]
08:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1089 for InnoDB compression T254462', diff saved to https://phabricator.wikimedia.org/P11690 and previous config saved to /var/cache/conftool/dbconfig/20200629-085854-marostegui.json [production]
08:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11688 and previous config saved to /var/cache/conftool/dbconfig/20200629-084827-marostegui.json [production]
08:40 <ema> cp2034: restart purged T256444 [production]
08:36 <ema> cp4025: restart purged T256444 [production]
08:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11687 and previous config saved to /var/cache/conftool/dbconfig/20200629-083631-marostegui.json [production]
08:33 <ema> cp1087, cp2033, cp2037, cp2039: repool after spending (way) more than 24h depooled T256444 [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11686 and previous config saved to /var/cache/conftool/dbconfig/20200629-082635-marostegui.json [production]
08:24 <marostegui> Deploy schema change on s2 codfw (lag will show up) T253276 [production]
08:04 <XioNoX> add term selected-paths to policy BGP_IXP_in on all routers [production]
08:03 <godog> prometheus eqiad -- lvextend --resizefs --size +200G vg-ssd/prometheus-ops [production]
08:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly pool db1135 into s1 T253217', diff saved to https://phabricator.wikimedia.org/P11685 and previous config saved to /var/cache/conftool/dbconfig/20200629-080253-marostegui.json [production]
07:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1135 (depooled) to s1 T253217', diff saved to https://phabricator.wikimedia.org/P11684 and previous config saved to /var/cache/conftool/dbconfig/20200629-074611-marostegui.json [production]
07:16 <XioNoX> push new pfw firewall rules - T256170 [production]
07:13 <marostegui> Deploy schema change on db1085 with replication to labs T253276 [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1085', diff saved to https://phabricator.wikimedia.org/P11683 and previous config saved to /var/cache/conftool/dbconfig/20200629-071236-marostegui.json [production]
06:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1080 from MW', diff saved to https://phabricator.wikimedia.org/P11682 and previous config saved to /var/cache/conftool/dbconfig/20200629-065335-marostegui.json [production]
06:50 <elukey> execute gnt-instance remove an-launcher1001.eqiad.wmnet on ganeti1011 - T256363 [production]
06:47 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
06:46 <elukey@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:45 <marostegui> Deploy MCR schema change on db1090:3312 [production]
06:35 <elukey> force puppet run on ores* to overcome celery OOMs on some nodes [production]
04:57 <marostegui> Stop MySQL on db1080 to clone db1135 T253217 [production]
04:56 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
04:53 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
2020-06-28
21:43 <krinkle@deploy1001> Synchronized wmf-config/CommonSettings.php: no-op I56eb4a802 (duration: 00m 58s) [production]
21:38 <krinkle@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: beta-only I56eb4a802 (duration: 01m 00s) [production]
2020-06-27
20:22 <qchris> Gerrit upgrade done. [production]
19:49 <mutante> removed 2620:0:861:3:208:80:154:136 from /etc/network/interfaces on gerrit1001, rebooting [production]
19:27 <mutante> rebooting gerrit1001 one more time [production]
19:24 <mutante> restarted ferm on gerrit1001 [production]
19:19 <mutante> rebooting gerrit1001 one more time [production]
19:05 <mutante> rebooting gerrit1001 [production]
18:58 <mutante> rebooting gerrit2001 [production]
18:49 <hashar> Enabling beta cluster update job (gerrit maintenance) https://integration.wikimedia.org/ci/view/Beta/job/beta-code-update-eqiad/ [production]
18:35 <qchris@deploy1001> Finished deploy [gerrit/gerrit@da40615]: Gerrit to v3.2.2-98-g98d827eaa3 on gerrit2001 (duration: 00m 10s) [production]
18:34 <qchris@deploy1001> Started deploy [gerrit/gerrit@da40615]: Gerrit to v3.2.2-98-g98d827eaa3 on gerrit2001 [production]
18:27 <qchris@deploy1001> Finished deploy [gerrit/gerrit@da40615]: Gerrit to v3.2.2-98-g98d827eaa3 on gerrit1001 (duration: 00m 08s) [production]
18:27 <qchris@deploy1001> Started deploy [gerrit/gerrit@da40615]: Gerrit to v3.2.2-98-g98d827eaa3 on gerrit1001 [production]
17:25 <hashar> Disabled beta cluster update job (gerrit maintenance) https://integration.wikimedia.org/ci/view/Beta/job/beta-code-update-eqiad/ [production]
17:19 <qchris> Stopping gerrit on gerrit1001 for the Gerrit upgrade [production]
17:14 <qchris> Duplicating reviewdb changes so we get a cheap and quick rollback [production]