2020-11-30
10:06 <moritzm> installing NSS security updates [production]
09:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1134 for schema change', diff saved to https://phabricator.wikimedia.org/P13469 and previous config saved to /var/cache/conftool/dbconfig/20201130-095729-marostegui.json [production]
09:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1089 (re)pooling @ 100%: After schema change', diff saved to https://phabricator.wikimedia.org/P13468 and previous config saved to /var/cache/conftool/dbconfig/20201130-095621-root.json [production]
09:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1089 (re)pooling @ 75%: After schema change', diff saved to https://phabricator.wikimedia.org/P13467 and previous config saved to /var/cache/conftool/dbconfig/20201130-094117-root.json [production]
09:40 <marostegui> Stop MySQL on db1087 to clone clouddb1016:3318 (T267090) [production]
09:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1087 from s8 and pool db1092 instead temporarily on vslow T267090', diff saved to https://phabricator.wikimedia.org/P13466 and previous config saved to /var/cache/conftool/dbconfig/20201130-093909-marostegui.json [production]
09:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1089 (re)pooling @ 50%: After schema change', diff saved to https://phabricator.wikimedia.org/P13465 and previous config saved to /var/cache/conftool/dbconfig/20201130-092614-root.json [production]
09:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1089 (re)pooling @ 25%: After schema change', diff saved to https://phabricator.wikimedia.org/P13464 and previous config saved to /var/cache/conftool/dbconfig/20201130-092154-root.json [production]
08:51 <marostegui> Deploy schema change on db1089 [production]
08:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1089', diff saved to https://phabricator.wikimedia.org/P13463 and previous config saved to /var/cache/conftool/dbconfig/20201130-085101-marostegui.json [production]
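The db1089 entries above follow the usual pattern for a schema change on a replica: depool, apply the change, then repool in stages. A minimal sketch of the dbctl side, assuming the standard instance/config subcommands (the percentages are taken from the log; exact flags may differ):

    # depool the replica before applying the schema change (assumed dbctl syntax)
    dbctl instance db1089 depool
    dbctl config commit -m 'Depool db1089'
    # ...apply the schema change on the host...
    # repool gradually; the '(re)pooling @ N%' entries correspond to steps like
    dbctl instance db1089 pool -p 25
    dbctl config commit -m 'db1089 (re)pooling @ 25%: After schema change'
    dbctl instance db1089 pool -p 100
    dbctl config commit -m 'db1089 (re)pooling @ 100%: After schema change'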
08:41 <godog> swift eqiad-prod: add weight to ms-be106[0-3] - T268435 [production]
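For the ms-be weight change, WMF drives Swift ring weights through its own tooling; purely as a generic illustration, the underlying ring operation would look roughly like this (builder file, device search pattern and target weight are placeholders):

    # raise the weight of the new backends' devices, then rebalance the ring
    swift-ring-builder object.builder set_weight <device-search> 2000
    swift-ring-builder object.builder rebalance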
08:36 <marostegui> Compare data between clouddb1016:3315 and labsdb1012 T267090 [production]
07:45 <marostegui@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
07:41 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
07:25 <marostegui@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
07:18 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
07:11 <marostegui> Deploy schema change on s1 codfw - T268004 [production]
07:05 <marostegui> Stop mysql on db1124:3318 to clone clouddb1016:3318; lag will show up on wikireplicas on s8 T267090 [production]
06:47 <marostegui@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
06:43 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
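The repeated START/END (FAIL) pairs above are retries of the host decommission cookbook; exit_code=1 means a step did not complete and the run was attempted again. The invocation shape, with a placeholder host and task id (exact options are an assumption):

    # run from a cumin host; spicerack cookbooks emit the START/END lines seen above
    sudo cookbook sre.hosts.decommission db1234.eqiad.wmnet -t T123456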
04:26 <kart_> Updated cxserver to 2020-11-23-050106-production (T262253, T268410) [production]
04:18 <kartik@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'cxserver' for release 'production' . [production]
04:14 <kartik@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'cxserver' for release 'production' . [production]
04:11 <kartik@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'cxserver' for release 'staging' . [production]
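The cxserver releases above were rolled out per environment (staging, then codfw, then eqiad). A sketch of the per-service deploy on the deployment host, assuming the usual deployment-charts layout:

    # each "Ran 'sync' command" entry corresponds to one environment
    cd /srv/deployment-charts/helmfile.d/services/cxserver
    helmfile -e staging sync
    helmfile -e codfw sync
    helmfile -e eqiad sync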
2020-11-27
17:30 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:30 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime [production]
15:50 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
15:50 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime [production]
15:13 <elukey@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) [production]
15:06 <elukey@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper [production]
14:56 <elukey@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) [production]
14:50 <elukey> roll restart zookeeper on druid* nodes for openjdk upgrades [production]
14:50 <elukey@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper [production]
10:52 <jayme> updated helmfile to 0.135.0-1 on deploy*,contint* [production]
10:51 <jayme> updated helm-diff to 3.1.3-1 on contint* [production]
10:49 <jayme> updated helm to 2.17.0-1 on deploy*,contint*,chartmuseum* [production]
10:06 <jayme> updated helm and helmfile on deploy2001 [production]
10:04 <jayme@deploy2001> helmfile [staging] Ran 'sync' command on namespace 'blubberoid' for release 'staging' . [production]
10:00 <jayme> imported helm 2.17.0 into buster-wikimedia and stretch-wikimedia [production]
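The helm/helmfile updates above are ordinary Debian package upgrades: the new version is first imported into the apt.wikimedia.org distributions (buster-wikimedia, stretch-wikimedia) and then installed across the deployment and CI hosts. A rough sketch of the rollout step, assuming cumin hostname globs and the package names from the log:

    # upgrade the packages on the deployment and CI hosts (illustrative host selection)
    sudo cumin 'deploy* or contint*' 'apt-get update && apt-get install -y helm helmfile helm-diff'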
08:55 <elukey@cumin1001> END (PASS) - Cookbook sre.druid.roll-restart-workers (exit_code=0) [production]
08:05 <elukey> roll restart druid public cluster for openjdk upgrades [production]
08:04 <elukey@cumin1001> START - Cookbook sre.druid.roll-restart-workers [production]
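The zookeeper and druid roll-restart runs above are rolling service restarts driven by SRE cookbooks, used here to pick up the openjdk upgrades one node at a time. The general shape, with the cluster argument and option names as assumptions:

    # restart one member at a time so the ensemble/cluster stays available (options assumed)
    sudo cookbook sre.zookeeper.roll-restart-zookeeper --reason 'openjdk upgrades' <cluster>
    sudo cookbook sre.druid.roll-restart-workers --reason 'openjdk upgrades' <cluster>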
06:39 <marostegui> Stop mysql on es1015 T268810 [production]
06:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es1015 from dbctl', diff saved to https://phabricator.wikimedia.org/P13454 and previous config saved to /var/cache/conftool/dbconfig/20201127-063846-marostegui.json [production]
06:30 <marostegui> Remove es1016 from tendril and zarcillo T268812 [production]
06:29 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
06:25 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1015 for decommissioning T268810', diff saved to https://phabricator.wikimedia.org/P13453 and previous config saved to /var/cache/conftool/dbconfig/20201127-061929-marostegui.json [production]
2020-11-26
17:18 <jayme> downgrade helmfile to 0.125.2-1 on deploy* [production]
17:05 <jayme> updated helm-diff and helmfile on deploy100* and deploy200* [production]