2025-02-26
ยง
|
11:45 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/shellbox-timeline: apply [production]
11:45 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2180.codfw.wmnet with reason: Index rebuild [production]
11:45 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1168.eqiad.wmnet [production]
11:45 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/shellbox-timeline: apply [production]
11:43 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2180.codfw.wmnet [production]
11:42 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1024.eqiad.wmnet [production]
11:39 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db1206.eqiad.wmnet [production]
11:39 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db2170.codfw.wmnet [production]
11:39 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1206 db2170 T385561', diff saved to https://phabricator.wikimedia.org/P73674 and previous config saved to /var/cache/conftool/dbconfig/20250226-113935-root.json [production]
11:37 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db1168.eqiad.wmnet [production]
11:37 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db2180.codfw.wmnet [production]
11:36 <marostegui@cumin1002> dbctl commit (dc=all): 'Pool db2169 with 100%', diff saved to https://phabricator.wikimedia.org/P73673 and previous config saved to /var/cache/conftool/dbconfig/20250226-113613-marostegui.json [production]
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1168 db2180 T386242', diff saved to https://phabricator.wikimedia.org/P73671 and previous config saved to /var/cache/conftool/dbconfig/20250226-113453-root.json [production]
11:32 <hnowlan@deploy1003> helmfile [staging] DONE helmfile.d/services/mobileapps: sync [production]
11:32 <hnowlan@deploy1003> helmfile [staging] START helmfile.d/services/mobileapps: sync [production]
11:31 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73670 and previous config saved to /var/cache/conftool/dbconfig/20250226-113134-root.json [production]
11:29 <marostegui@cumin1002> dbctl commit (dc=all): 'db2169 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73669 and previous config saved to /var/cache/conftool/dbconfig/20250226-112915-root.json [production]
11:21 <vgutierrez> repooling lvs7001 running liberica - T384477 [production]
11:16 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73668 and previous config saved to /var/cache/conftool/dbconfig/20250226-111629-root.json [production]
11:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73667 and previous config saved to /var/cache/conftool/dbconfig/20250226-111421-root.json [production]
11:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db2169 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73666 and previous config saved to /var/cache/conftool/dbconfig/20250226-111410-root.json [production]
11:04 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 27 hosts with reason: Schema change [production]
11:04 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs7001.magru.wmnet with OS bookworm [production]
11:03 <marostegui> Drop schema change on s7 codfw master with replication dbmaint T385645 [production]
11:02 <marostegui@cumin1002> dbctl commit (dc=all): 'db1195 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73665 and previous config saved to /var/cache/conftool/dbconfig/20250226-110209-root.json [production]
11:01 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73664 and previous config saved to /var/cache/conftool/dbconfig/20250226-110124-root.json [production]
10:59 <marostegui> Drop schema change on s3 codfw master with replication dbmaint T385645 [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db2153 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73663 and previous config saved to /var/cache/conftool/dbconfig/20250226-105937-root.json [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73662 and previous config saved to /var/cache/conftool/dbconfig/20250226-105916-root.json [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db2169 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73661 and previous config saved to /var/cache/conftool/dbconfig/20250226-105905-root.json [production]
10:47 <marostegui@cumin1002> dbctl commit (dc=all): 'db1195 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73660 and previous config saved to /var/cache/conftool/dbconfig/20250226-104704-root.json [production]
10:46 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73659 and previous config saved to /var/cache/conftool/dbconfig/20250226-104619-root.json [production]
10:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db2153 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73658 and previous config saved to /var/cache/conftool/dbconfig/20250226-104433-root.json [production]
10:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73656 and previous config saved to /var/cache/conftool/dbconfig/20250226-104411-root.json [production]
10:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db2169 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73655 and previous config saved to /var/cache/conftool/dbconfig/20250226-104359-root.json [production]
10:42 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on lvs7001.magru.wmnet with reason: host reimage [production]
10:39 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on dbstore1009.eqiad.wmnet with reason: Index rebuild [production]
10:39 <vgutierrez@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on lvs7001.magru.wmnet with reason: host reimage [production]
10:36 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on an-redacteddb1001.eqiad.wmnet with reason: Index rebuild [production]
10:32 <marostegui@cumin1002> dbctl commit (dc=all): 'db1195 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73654 and previous config saved to /var/cache/conftool/dbconfig/20250226-103159-root.json [production]
10:29 <marostegui@cumin1002> dbctl commit (dc=all): 'db2153 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73653 and previous config saved to /var/cache/conftool/dbconfig/20250226-102927-root.json [production]
10:29 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73652 and previous config saved to /var/cache/conftool/dbconfig/20250226-102906-root.json [production]
10:18 <vgutierrez@cumin1002> START - Cookbook sre.hosts.reimage for host lvs7001.magru.wmnet with OS bookworm [production]
10:16 <marostegui@cumin1002> dbctl commit (dc=all): 'db1195 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73651 and previous config saved to /var/cache/conftool/dbconfig/20250226-101654-root.json [production]
10:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db2153 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73650 and previous config saved to /var/cache/conftool/dbconfig/20250226-101422-root.json [production]
10:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73649 and previous config saved to /var/cache/conftool/dbconfig/20250226-101401-root.json [production]
10:08 <vgutierrez> depooling lvs7001 before reimaging - T384477 [production]
10:01 <marostegui@cumin1002> dbctl commit (dc=all): 'db1195 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73648 and previous config saved to /var/cache/conftool/dbconfig/20250226-100148-root.json [production]
09:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db2153 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73647 and previous config saved to /var/cache/conftool/dbconfig/20250226-095917-root.json [production]
09:52 <hashar> Restarting Gerrit on gerrit2002 and gerrit1003 [production]