2022-08-24
ยง
|
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1120 (T312975)', diff saved to https://phabricator.wikimedia.org/P32914 and previous config saved to /var/cache/conftool/dbconfig/20220824-124905-ladsgroup.json [production]
12:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1174 (re)pooling @ 25%: Repooling after cloning db1191', diff saved to https://phabricator.wikimedia.org/P32913 and previous config saved to /var/cache/conftool/dbconfig/20220824-124354-root.json [production]
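(Note: the "dbctl commit (dc=all)" entries throughout this log are the audit trail of conftool's dbctl CLI, run from a cumin host; the stepped 5% → 10% → 25% → 75% → 100% commits suggest a gradual-repool helper. A minimal sketch of the kind of invocation behind such an entry, with host, percentage and message taken from the log; the exact flags are an assumption and may differ from the operators' wrappers:)

    # adjust the pooling percentage of one replica (flags illustrative)
    sudo dbctl instance db1174 pool -p 25
    # commit the change for all datacenters; this is what emits the
    # "dbctl commit (dc=all)" line with a diff saved to Phabricator
    sudo dbctl config commit -m 'db1174 (re)pooling @ 25%: Repooling after cloning db1191'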
12:43 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1104 (T314041)', diff saved to https://phabricator.wikimedia.org/P32912 and previous config saved to /var/cache/conftool/dbconfig/20220824-124346-ladsgroup.json [production]
12:43 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1104.eqiad.wmnet with reason: Maintenance [production]
12:43 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1104.eqiad.wmnet with reason: Maintenance [production]
12:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1171.eqiad.wmnet with reason: Maintenance [production]
12:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1171.eqiad.wmnet with reason: Maintenance [production]
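(Note: the START/END pairs above are logged automatically by the sre.hosts.downtime Spicerack cookbook, which silences monitoring for a host ahead of maintenance. A hedged sketch of an invocation that would produce the db1104 pair; the flag names are an assumption:)

    # downtime db1104 for one day before depooling it for maintenance
    sudo cookbook sre.hosts.downtime --days 1 -r "Maintenance" 'db1104.eqiad.wmnet'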
12:33 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1120', diff saved to https://phabricator.wikimedia.org/P32911 and previous config saved to /var/cache/conftool/dbconfig/20220824-123358-ladsgroup.json [production]
12:28 <marostegui@cumin1001> dbctl commit (dc=all): 'db1174 (re)pooling @ 10%: Repooling after cloning db1191', diff saved to https://phabricator.wikimedia.org/P32910 and previous config saved to /var/cache/conftool/dbconfig/20220824-122848-root.json [production]
12:24 <moritzm> installing containerd security updates [production]
12:18 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1120', diff saved to https://phabricator.wikimedia.org/P32909 and previous config saved to /var/cache/conftool/dbconfig/20220824-121852-ladsgroup.json [production]
12:13 <marostegui@cumin1001> dbctl commit (dc=all): 'db1174 (re)pooling @ 5%: Repooling after cloning db1191', diff saved to https://phabricator.wikimedia.org/P32908 and previous config saved to /var/cache/conftool/dbconfig/20220824-121343-root.json [production]
12:03 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1120 (T312975)', diff saved to https://phabricator.wikimedia.org/P32907 and previous config saved to /var/cache/conftool/dbconfig/20220824-120346-ladsgroup.json [production]
12:01 <Amir1> killed refresh links-recomm scripts in rowiki, cswiki, simplewiki, frwiki (T299021) [production]
11:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1120 (T312975)', diff saved to https://phabricator.wikimedia.org/P32906 and previous config saved to /var/cache/conftool/dbconfig/20220824-115935-ladsgroup.json [production]
11:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1120.eqiad.wmnet with reason: Maintenance [production]
11:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1120.eqiad.wmnet with reason: Maintenance [production]
11:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
11:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
11:46 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
11:46 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
11:42 <klausman@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching ml-cache*: Rolling restart to activate new JRE - klausman@cumin1001 [production]
11:38 <slyngs> Migrate mdadm array checks to systemd timers. Gerrit: 819577 [production]
11:29 <marostegui@cumin1001> dbctl commit (dc=all): 'db1147 (re)pooling @ 100%: Repooling after cloning db1190', diff saved to https://phabricator.wikimedia.org/P32905 and previous config saved to /var/cache/conftool/dbconfig/20220824-112938-root.json [production]
11:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1147 (re)pooling @ 75%: Repooling after cloning db1190', diff saved to https://phabricator.wikimedia.org/P32904 and previous config saved to /var/cache/conftool/dbconfig/20220824-111433-root.json [production]
11:07 <klausman@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching ml-cache*: Rolling restart to activate new JRE - klausman@cumin1001 [production]
10:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1147 (re)pooling @ 50%: Repooling after cloning db1190', diff saved to https://phabricator.wikimedia.org/P32903 and previous config saved to /var/cache/conftool/dbconfig/20220824-105928-root.json [production]
10:52 <vgutierrez> disable origin coalescing in ats@cp600[78] - T315911 [production]
10:46 <hnowlan@deploy1002> helmfile [eqiad] DONE helmfile.d/services/api-gateway: sync [production]
10:46 <hnowlan@deploy1002> helmfile [eqiad] START helmfile.d/services/api-gateway: sync [production]
10:44 <marostegui@cumin1001> dbctl commit (dc=all): 'db1147 (re)pooling @ 25%: Repooling after cloning db1190', diff saved to https://phabricator.wikimedia.org/P32902 and previous config saved to /var/cache/conftool/dbconfig/20220824-104424-root.json [production]
10:36 <hnowlan@deploy1002> helmfile [codfw] DONE helmfile.d/services/api-gateway: sync [production]
10:35 <hnowlan@deploy1002> helmfile [codfw] START helmfile.d/services/api-gateway: sync [production]
10:32 <hnowlan@deploy1002> helmfile [staging] DONE helmfile.d/services/api-gateway: sync [production]
10:32 <hnowlan@deploy1002> helmfile [staging] START helmfile.d/services/api-gateway: sync [production]
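(Note: the helmfile START/DONE pairs above record a Kubernetes service deploy from the deployment host, rolled out one environment at a time, here staging first, then codfw, then eqiad. A rough sketch of the underlying command; the chart directory on deploy1002 is an assumption:)

    # from the service's helmfile directory on the deployment host
    cd /srv/deployment-charts/helmfile.d/services/api-gateway   # path is an assumption
    helmfile -e staging sync    # then repeat with -e codfw and -e eqiad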
10:29 <marostegui@cumin1001> dbctl commit (dc=all): 'db1147 (re)pooling @ 10%: Repooling after cloning db1190', diff saved to https://phabricator.wikimedia.org/P32901 and previous config saved to /var/cache/conftool/dbconfig/20220824-102919-root.json [production]
10:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1147 (re)pooling @ 5%: Repooling after cloning db1190', diff saved to https://phabricator.wikimedia.org/P32900 and previous config saved to /var/cache/conftool/dbconfig/20220824-101414-root.json [production]
09:46 <vgutierrez> Restart incremental roll-out of query-sorting at 1% - T314868 [production]
08:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1129 (re)pooling @ 100%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P32899 and previous config saved to /var/cache/conftool/dbconfig/20220824-085902-root.json [production]
08:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1119 (re)pooling @ 100%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P32898 and previous config saved to /var/cache/conftool/dbconfig/20220824-085639-root.json [production]
08:49 <jayme> jayme@builder-envoy-03:~$ sudo apt-get remove --purge linux-image-4.19.0-6-amd64-dbg linux-image-4.19.0-14-amd64-dbg [production]
08:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1129 (re)pooling @ 75%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P32897 and previous config saved to /var/cache/conftool/dbconfig/20220824-084357-root.json [production]
08:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1119 (re)pooling @ 75%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P32896 and previous config saved to /var/cache/conftool/dbconfig/20220824-084134-root.json [production]
08:28 <marostegui@cumin1001> dbctl commit (dc=all): 'db1129 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P32895 and previous config saved to /var/cache/conftool/dbconfig/20220824-082852-root.json [production]
08:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1174', diff saved to https://phabricator.wikimedia.org/P32893 and previous config saved to /var/cache/conftool/dbconfig/20220824-082809-root.json [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1119 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P32892 and previous config saved to /var/cache/conftool/dbconfig/20220824-082630-root.json [production]
08:21 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
08:20 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
08:20 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
08:19 <hashar@deploy1002> Synchronized php: group1 wikis to 1.39.0-wmf.26 refs T314187 (duration: 02m 46s) [production]