2023-08-07
ยง
|
13:13 <urbanecm@deploy1002> anzx and dreamyjazz and stang and urbanecm: Continuing with sync [production]
13:06 <urbanecm@deploy1002> anzx and dreamyjazz and stang and urbanecm: Backport for [[gerrit:945939|Update knwiktionary logos (T343662)]], [[gerrit:946527|Write new for event table migration on all wikis (T330158)]], [[gerrit:946540|zhwiki: Grant "suppressredirect" to autoreviewer (T343711)]] synced to the testservers mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
13:05 <urbanecm@deploy1002> Started scap: Backport for [[gerrit:945939|Update knwiktionary logos (T343662)]], [[gerrit:946527|Write new for event table migration on all wikis (T330158)]], [[gerrit:946540|zhwiki: Grant "suppressredirect" to autoreviewer (T343711)]] [production]
12:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
12:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
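(Note: the START/END pairs above are logged automatically by the spicerack cookbook runner. An operator typically schedules such an Icinga downtime with something like the following; this is a hedged sketch, and the exact option names may differ between cookbook versions:

    sudo cookbook sre.hosts.downtime --days 1 --reason "Maintenance" 'db2105.codfw.wmnet'
)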
12:17 <dcausse> repooling wdqs1004 [production]
11:54 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
11:54 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:53 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:946521|Stop writing to the old externallinks columns in testwiki (T342683)]] (duration: 08m 06s) [production]
10:48 <ladsgroup@deploy1002> ladsgroup: Continuing with sync [production]
10:47 <ladsgroup@deploy1002> ladsgroup: Backport for [[gerrit:946521|Stop writing to the old externallinks columns in testwiki (T342683)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
10:45 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:946521|Stop writing to the old externallinks columns in testwiki (T342683)]] [production]
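(Note: the 10:45–10:53 entries above trace one complete scap backport cycle: the deployer starts the backport, the change is synced to the mwdebug test servers for verification, and the sync then continues to the full fleet. On the deployment host this is normally driven by a single command, roughly as follows; a sketch, with the change number taken from the entry above:

    scap backport 946521
)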
10:26 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:26 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:24 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
10:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:23 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
10:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1138 (T342617)', diff saved to https://phabricator.wikimedia.org/P50158 and previous config saved to /var/cache/conftool/dbconfig/20230807-100805-ladsgroup.json [production]
10:08 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2099.codfw.wmnet with reason: Maintenance [production]
10:08 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1138.eqiad.wmnet with reason: Maintenance [production]
10:07 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2099.codfw.wmnet with reason: Maintenance [production]
10:07 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1138.eqiad.wmnet with reason: Maintenance [production]
09:23 <dcausse> restarting blazegraph on wdqs1004 [production]
08:31 <elukey@deploy1002> Finished scap: Backport for [[gerrit:946510|ext-ORES: force cswiki to use the ORES settings/backend (T343308)]] (duration: 14m 50s) [production]
08:25 <elukey@deploy1002> elukey: Continuing with sync [production]
08:24 <elukey@deploy1002> elukey: Backport for [[gerrit:946510|ext-ORES: force cswiki to use the ORES settings/backend (T343308)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
08:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 100%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50157 and previous config saved to /var/cache/conftool/dbconfig/20230807-081639-root.json [production]
08:16 <elukey@deploy1002> Started scap: Backport for [[gerrit:946510|ext-ORES: force cswiki to use the ORES settings/backend (T343308)]] [production]
08:08 <godog> start docker-image-prune-old on alert hosts - T329939 [production]
08:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 75%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50156 and previous config saved to /var/cache/conftool/dbconfig/20230807-080133-root.json [production]
07:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 50%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50155 and previous config saved to /var/cache/conftool/dbconfig/20230807-074628-root.json [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 25%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50154 and previous config saved to /var/cache/conftool/dbconfig/20230807-073123-root.json [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 10%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50153 and previous config saved to /var/cache/conftool/dbconfig/20230807-071618-root.json [production]
07:11 <marostegui> Depool clouddb1015 T334650 [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 5%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50152 and previous config saved to /var/cache/conftool/dbconfig/20230807-070113-root.json [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 3%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50151 and previous config saved to /var/cache/conftool/dbconfig/20230807-064608-root.json [production]
06:33 <kart_> Updated cxserver to 2023-08-03-132800-production (T338602, T333969, T343211) [production]
06:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 1%: Repooling after migration', diff saved to https://phabricator.wikimedia.org/P50150 and previous config saved to /var/cache/conftool/dbconfig/20230807-063104-root.json [production]
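(Note: the 'db1224 (re)pooling' entries above record the standard gradual repool after maintenance: the replica's pooled weight is raised in steps (1% → 3% → 5% → 10% → 25% → 50% → 75% → 100%) so traffic warms the host slowly. Each step amounts to a dbctl change plus a config commit, roughly as below; a sketch only, with the percentage option and commit message shown for illustration:

    dbctl instance db1224 pool -p 25
    dbctl config commit -m 'db1224 (re)pooling @ 25%: Repooling after migration'
)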
06:28 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/cxserver: apply [production]
06:28 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/cxserver: apply [production]
06:26 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/cxserver: apply [production]
06:25 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/cxserver: apply [production]
06:22 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/cxserver: apply [production]
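(Note: the helmfile entries above show the usual service deployment order for cxserver: staging first, then codfw, then eqiad. On the deployment host each step corresponds roughly to the following; a sketch, assuming the chart directory named in the log lives under the usual /srv/deployment-charts checkout:

    cd /srv/deployment-charts/helmfile.d/services/cxserver
    helmfile -e staging apply
)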