2026-04-20
12:12 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1227', diff saved to https://phabricator.wikimedia.org/P91190 and previous config saved to /var/cache/conftool/dbconfig/20260420-121247-fceratto.json [production]
12:11 <mvernon@cumin2002> START - Cookbook sre.hosts.decommission for hosts moss-be[1001-1002].eqiad.wmnet [production]
12:10 <moritzm> installing edk2 security updates [production]
12:02 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1227', diff saved to https://phabricator.wikimedia.org/P91189 and previous config saved to /var/cache/conftool/dbconfig/20260420-120239-fceratto.json [production]
11:52 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1227 (T419635)', diff saved to https://phabricator.wikimedia.org/P91188 and previous config saved to /var/cache/conftool/dbconfig/20260420-115231-fceratto.json [production]
11:49 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti5006.eqsin.wmnet [production]
11:17 <fnegri@cumin1003> conftool action : set/pooled=no; selector: name=clouddb1025.eqiad.wmnet,service=x4 [production]
10:52 <fceratto@cumin1003> dbctl commit (dc=all): 'Depooling db1227 (T419635)', diff saved to https://phabricator.wikimedia.org/P91187 and previous config saved to /var/cache/conftool/dbconfig/20260420-105213-fceratto.json [production]
10:52 <fceratto@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1227.eqiad.wmnet with reason: Maintenance [production]
10:51 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1202 (T419635)', diff saved to https://phabricator.wikimedia.org/P91186 and previous config saved to /var/cache/conftool/dbconfig/20260420-105148-fceratto.json [production]
10:41 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1202', diff saved to https://phabricator.wikimedia.org/P91185 and previous config saved to /var/cache/conftool/dbconfig/20260420-104141-fceratto.json [production]
10:32 <trueg@deploy1003> helmfile [staging] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
10:32 <trueg@deploy1003> helmfile [staging] START helmfile.d/services/rdf-streaming-updater: apply [production]
10:31 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1202', diff saved to https://phabricator.wikimedia.org/P91184 and previous config saved to /var/cache/conftool/dbconfig/20260420-103133-fceratto.json [production]
10:26 <kamila@deploy1003> Finished scap sync-world: ICU 72 upgrade (duration: 51m 35s) [production]
10:24 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts bast1003.wikimedia.org [production]
10:24 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
10:24 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: bast1003.wikimedia.org decommissioned, removing all IPs except the asset tag one - jmm@cumin2002" [production]
10:21 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1202 (T419635)', diff saved to https://phabricator.wikimedia.org/P91183 and previous config saved to /var/cache/conftool/dbconfig/20260420-102125-fceratto.json [production]
10:19 <fceratto@cumin1003> dbctl commit (dc=all): 'Depooling db1202 (T419635)', diff saved to https://phabricator.wikimedia.org/P91182 and previous config saved to /var/cache/conftool/dbconfig/20260420-101913-fceratto.json [production]
10:19 <fceratto@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1202.eqiad.wmnet with reason: Maintenance [production]
10:18 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1194 (T419635)', diff saved to https://phabricator.wikimedia.org/P91181 and previous config saved to /var/cache/conftool/dbconfig/20260420-101847-fceratto.json [production]
10:15 <jmm@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: bast1003.wikimedia.org decommissioned, removing all IPs except the asset tag one - jmm@cumin2002" [production]
10:14 <kamila@deploy1003> kamila: Continuing with sync [production]
10:08 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1194', diff saved to https://phabricator.wikimedia.org/P91180 and previous config saved to /var/cache/conftool/dbconfig/20260420-100839-fceratto.json [production]
10:07 <jmm@cumin2002> START - Cookbook sre.dns.netbox [production]
10:04 <fceratto@cumin1003> dbctl commit (dc=all): 'Depooling db2190 (T419961)', diff saved to https://phabricator.wikimedia.org/P91179 and previous config saved to /var/cache/conftool/dbconfig/20260420-100423-fceratto.json [production]
10:04 <fceratto@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2190.codfw.wmnet with reason: Maintenance [production]
10:04 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2177 (T419961)', diff saved to https://phabricator.wikimedia.org/P91178 and previous config saved to /var/cache/conftool/dbconfig/20260420-100402-fceratto.json [production]
10:02 <Emperor> ceph orch host drain moss-be1002 T418901 [production]
10:02 <marostegui@cumin1003> END (PASS) - Cookbook sre.mysql.pool (exit_code=0) pool db1165: after reimage to trixie [production]
09:58 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1194', diff saved to https://phabricator.wikimedia.org/P91176 and previous config saved to /var/cache/conftool/dbconfig/20260420-095831-fceratto.json [production]
09:58 <jmm@cumin2002> START - Cookbook sre.hosts.decommission for hosts bast1003.wikimedia.org [production]
09:53 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2177', diff saved to https://phabricator.wikimedia.org/P91175 and previous config saved to /var/cache/conftool/dbconfig/20260420-095354-fceratto.json [production]
09:52 <kamila@deploy1003> kamila: ICU 72 upgrade synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
09:48 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1194 (T419635)', diff saved to https://phabricator.wikimedia.org/P91174 and previous config saved to /var/cache/conftool/dbconfig/20260420-094823-fceratto.json [production]
09:48 <klausman@deploy1003> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'llm' for release 'main'. [production]
09:46 <fceratto@cumin1003> dbctl commit (dc=all): 'Depooling db1194 (T419635)', diff saved to https://phabricator.wikimedia.org/P91172 and previous config saved to /var/cache/conftool/dbconfig/20260420-094612-fceratto.json [production]
09:46 <fceratto@cumin1003> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1194.eqiad.wmnet with reason: Maintenance [production]
09:45 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1191 (T419635)', diff saved to https://phabricator.wikimedia.org/P91171 and previous config saved to /var/cache/conftool/dbconfig/20260420-094546-fceratto.json [production]
09:43 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2177', diff saved to https://phabricator.wikimedia.org/P91170 and previous config saved to /var/cache/conftool/dbconfig/20260420-094345-fceratto.json [production]
09:43 <Emperor> ceph orch host drain moss-be1001 T418901 [production]
09:40 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM puppetboard1003.eqiad.wmnet [production]
09:36 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM puppetboard1003.eqiad.wmnet [production]
09:35 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db1191', diff saved to https://phabricator.wikimedia.org/P91169 and previous config saved to /var/cache/conftool/dbconfig/20260420-093538-fceratto.json [production]
09:35 <kamila@deploy1003> Started scap sync-world: ICU 72 upgrade [production]
09:33 <fceratto@cumin1003> dbctl commit (dc=all): 'Repooling after maintenance db2177 (T419961)', diff saved to https://phabricator.wikimedia.org/P91168 and previous config saved to /var/cache/conftool/dbconfig/20260420-093337-fceratto.json [production]
09:33 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM puppetboard2003.codfw.wmnet [production]
09:29 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM puppetboard2003.codfw.wmnet [production]
09:26 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti5006.eqsin.wmnet [production]