2026-02-02
ยง
|
12:46 <marostegui@cumin1003> END (FAIL) - Cookbook sre.mysql.newpool (exit_code=99) pool db1193: After schema change [production]
12:45 <marostegui@cumin1003> END (PASS) - Cookbook sre.mysql.newpool (exit_code=0) pool db1222: After schema change [production]
12:37 <marostegui@cumin1003> dbctl commit (dc=all): 'Depooling db2177 (T415786)', diff saved to https://phabricator.wikimedia.org/P88389 and previous config saved to /var/cache/conftool/dbconfig/20260202-123726-marostegui.json [production]
12:37 <marostegui@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2177.codfw.wmnet with reason: Maintenance [production]
12:37 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2156 (T415786)', diff saved to https://phabricator.wikimedia.org/P88388 and previous config saved to /var/cache/conftool/dbconfig/20260202-123712-marostegui.json [production]
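The three 12:37 entries above show one step of the routine database maintenance rotation driven from a cumin host: db2177 is downtimed and depooled for maintenance while db2156, which has just finished, is repooled. A minimal sketch of the dbctl side that produces the "dbctl commit (dc=all)" lines, assuming the documented dbctl instance/config subcommands (hosts and task ID taken from the log; the pooling percentage is illustrative):

 # take the replica out of rotation and record why; the commit is what
 # produces the Phabricator diff paste and the conftool config backup
 dbctl instance db2177 depool
 dbctl config commit -m "Depooling db2177 (T415786)"
 # ... schema change / maintenance runs here ...
 # bring the finished host back, typically ramping the weight up in steps
 dbctl instance db2156 pool -p 100
 dbctl config commit -m "Repooling after maintenance db2156 (T415786)"

The repeated "Repooling after maintenance db2156" commits at 12:37, 12:22, 12:06 and 11:51 are consistent with that gradual ramp-up, one commit per percentage step.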
12:37 <slyngshede@cumin1003> DONE (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Samuel (WMF) out of all services on: 2487 hosts [production]
12:33 <moritzm> restarting nginx on puppetdb hosts [production]
12:31 <jmm@cumin2002> DONE (PASS) - Cookbook sre.debmonitor.remove-hosts (exit_code=0) for 1 hosts: sretest2006.codfw.wmnet [production]
12:30 <slyngshede@dns1004> END - running authdns-update [production]
12:29 <slyngshede@dns1004> START - running authdns-update [production]
12:27 <jmm@cumin2002> END (PASS) - Cookbook sre.misc-clusters.roll-restart-reboot-docker-registry (exit_code=0) rolling restart_daemons on A:docker-registry [production]
12:22 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2156', diff saved to https://phabricator.wikimedia.org/P88385 and previous config saved to /var/cache/conftool/dbconfig/20260202-122203-marostegui.json [production]
12:17 <marostegui@cumin1003> dbctl commit (dc=all): 'Depooling db1166 (T415786)', diff saved to https://phabricator.wikimedia.org/P88384 and previous config saved to /var/cache/conftool/dbconfig/20260202-121735-marostegui.json [production]
12:17 <marostegui@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1166.eqiad.wmnet with reason: Maintenance [production]
12:17 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1157 (T415786)', diff saved to https://phabricator.wikimedia.org/P88383 and previous config saved to /var/cache/conftool/dbconfig/20260202-121707-marostegui.json [production]
12:08 <jmm@cumin2002> START - Cookbook sre.misc-clusters.roll-restart-reboot-docker-registry rolling restart_daemons on A:docker-registry [production]
12:06 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2156', diff saved to https://phabricator.wikimedia.org/P88380 and previous config saved to /var/cache/conftool/dbconfig/20260202-120654-marostegui.json [production]
12:02 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1157', diff saved to https://phabricator.wikimedia.org/P88379 and previous config saved to /var/cache/conftool/dbconfig/20260202-120157-marostegui.json [production]
12:00 <marostegui@cumin1003> START - Cookbook sre.mysql.newpool pool db1193: After schema change [production]
12:00 <marostegui@cumin1003> START - Cookbook sre.mysql.newpool pool db1222: After schema change [production]
11:58 <marostegui@cumin1003> END (FAIL) - Cookbook sre.mysql.newpool (exit_code=99) pool db1222: After schema change [production]
11:57 <marostegui@cumin1003> START - Cookbook sre.mysql.newpool pool db1222: After schema change [production]
11:51 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2156 (T415786)', diff saved to https://phabricator.wikimedia.org/P88376 and previous config saved to /var/cache/conftool/dbconfig/20260202-115142-marostegui.json [production]
11:46 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1157', diff saved to https://phabricator.wikimedia.org/P88375 and previous config saved to /var/cache/conftool/dbconfig/20260202-114648-marostegui.json [production]
11:46 <slyngshede@cumin1003> DONE (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging AUgolnikova out of all services on: 2487 hosts [production]
11:31 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1157 (T415786)', diff saved to https://phabricator.wikimedia.org/P88374 and previous config saved to /var/cache/conftool/dbconfig/20260202-113139-marostegui.json [production]
11:16 <jmm@cumin2002> END (PASS) - Cookbook sre.misc-clusters.roll-restart-reboot-eventschemas (exit_code=0) rolling restart_daemons on A:schema-eqiad [production]
11:14 <jmm@cumin2002> START - Cookbook sre.misc-clusters.roll-restart-reboot-eventschemas rolling restart_daemons on A:schema-eqiad [production]
11:07 <jmm@cumin2002> END (PASS) - Cookbook sre.misc-clusters.roll-restart-reboot-eventschemas (exit_code=0) rolling restart_daemons on A:schema-codfw [production]
11:06 <jmm@cumin2002> START - Cookbook sre.misc-clusters.roll-restart-reboot-eventschemas rolling restart_daemons on A:schema-codfw [production]
10:45 <moritzm> restarting Bitu on idm* [production]
10:36 <marostegui@cumin1003> END (PASS) - Cookbook sre.mysql.newpool (exit_code=0) pool db2249: After reimage [production]
10:20 <dpogorzelski@cumin1003> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster ml-staging-codfw: Kubernetes upgrade [production]
10:16 <marostegui@cumin1003> dbctl commit (dc=all): 'Depooling db1157 (T415786)', diff saved to https://phabricator.wikimedia.org/P88371 and previous config saved to /var/cache/conftool/dbconfig/20260202-101658-marostegui.json [production]
10:16 <marostegui@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1157.eqiad.wmnet with reason: Maintenance [production]
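The downtime lines ("DONE (PASS) - Cookbook sre.hosts.downtime ... for 4:00:00 ... with reason: Maintenance") come from the Spicerack cookbook runner on the cumin hosts, which schedules monitoring downtime for the host before it is depooled. A minimal sketch of an equivalent invocation; the option names (--hours, --reason) are assumptions from memory and may differ from the cookbook's actual argument parser:

 # schedule 4 hours of monitoring downtime for the replica before maintenance (flag names assumed)
 sudo cookbook sre.hosts.downtime --hours 4 --reason "Maintenance" 'db1157.eqiad.wmnet'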
09:51 <marostegui@cumin1003> START - Cookbook sre.mysql.newpool pool db2249: After reimage [production]
09:50 <dpogorzelski@cumin1003> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster ml-staging-codfw: Kubernetes upgrade [production]
09:46 <marostegui@cumin1003> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2249.codfw.wmnet with OS trixie [production]
09:44 <ihurbain@deploy2002> Finished scap sync-world: Backport for [[gerrit:1235418|Upgrading psy/psysh (v0.12.10 => v0.12.19) (T416050)]], [[gerrit:1235386|Bump wikimedia/parsoid to 0.23.0-a13.1 (T415328)]], [[gerrit:1235384|Bump wikimedia/parsoid to 0.23.0-a13.1 (T415888 T415328)]] (duration: 06m 36s) [production]
09:40 <ihurbain@deploy2002> reedy, cscott, ihurbain: Continuing with sync [production]
09:40 <ihurbain@deploy2002> reedy, cscott, ihurbain: Backport for [[gerrit:1235418|Upgrading psy/psysh (v0.12.10 => v0.12.19) (T416050)]], [[gerrit:1235386|Bump wikimedia/parsoid to 0.23.0-a13.1 (T415328)]], [[gerrit:1235384|Bump wikimedia/parsoid to 0.23.0-a13.1 (T415888 T415328)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
09:39 <dpogorzelski@cumin1003> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster ml-staging-codfw: Kubernetes upgrade [production]
09:38 <ihurbain@deploy2002> Started scap sync-world: Backport for [[gerrit:1235418|Upgrading psy/psysh (v0.12.10 => v0.12.19) (T416050)]], [[gerrit:1235386|Bump wikimedia/parsoid to 0.23.0-a13.1 (T415328)]], [[gerrit:1235384|Bump wikimedia/parsoid to 0.23.0-a13.1 (T415888 T415328)]] [production]
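The 09:38-09:44 entries from deploy2002 are a single backport window: the changes are first synced to the mwdebug test servers, the deployers verify them there, and scap then continues with the full sync-world. A minimal sketch of the command that drives this flow, assuming scap backport is given the Gerrit change numbers from the log:

 # run on the deployment host; syncs to testservers, pauses for verification, then syncs world
 scap backport 1235418 1235386 1235384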
09:35 <dpogorzelski@cumin1003> END (PASS) - Cookbook sre.k8s.pool-depool-cluster (exit_code=0) depool all services in codfw/ml-staging-codfw: maintenance [production]
09:35 <dpogorzelski@cumin1003> START - Cookbook sre.k8s.pool-depool-cluster depool all services in codfw/ml-staging-codfw: maintenance [production]
09:34 <marostegui@cumin1003> dbctl commit (dc=all): 'Depooling db2156 (T415786)', diff saved to https://phabricator.wikimedia.org/P88368 and previous config saved to /var/cache/conftool/dbconfig/20260202-093418-marostegui.json [production]
09:34 <marostegui@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2156.codfw.wmnet with reason: Maintenance [production]
09:33 <marostegui@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2149 (T415786)', diff saved to https://phabricator.wikimedia.org/P88367 and previous config saved to /var/cache/conftool/dbconfig/20260202-093354-marostegui.json [production]
09:33 <dpogorzelski@cumin1003> END (PASS) - Cookbook sre.k8s.pool-depool-cluster (exit_code=0) depool all services in codfw/ml-staging-codfw: maintenance [production]
09:33 <dpogorzelski@cumin1003> START - Cookbook sre.k8s.pool-depool-cluster depool all services in codfw/ml-staging-codfw: maintenance [production]