2025-03-02
22:07 <marostegui@cumin1002> dbctl commit (dc=all): 'db1248 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73921 and previous config saved to /var/cache/conftool/dbconfig/20250302-220727-root.json [production]
21:52 <marostegui@cumin1002> dbctl commit (dc=all): 'db1248 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73920 and previous config saved to /var/cache/conftool/dbconfig/20250302-215221-root.json [production]
21:37 <marostegui@cumin1002> dbctl commit (dc=all): 'db1248 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73919 and previous config saved to /var/cache/conftool/dbconfig/20250302-213716-root.json [production]
21:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db1248 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73918 and previous config saved to /var/cache/conftool/dbconfig/20250302-212211-root.json [production]
21:07 <marostegui@cumin1002> dbctl commit (dc=all): 'db1248 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73917 and previous config saved to /var/cache/conftool/dbconfig/20250302-210705-root.json [production]
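
Note: the five entries above are a staged repool of db1248 after its index rebuild, raising its share of traffic through 10/25/50/75/100% in roughly 15-minute steps. A minimal sketch of one step, assuming the standard dbctl CLI on a cumin host (run via sudo here, judging by the -root.json config snapshots; flag names are from memory, not from this log):

    # Raise db1248's pooled percentage to the next step, then commit the change
    # so it is applied to the live configuration (dc=all).
    sudo dbctl instance db1248 pool -p 25
    sudo dbctl config commit -m 'db1248 (re)pooling @ 25%: Repooling after rebuild index'
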
20:52 <mvernon@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on db1246.eqiad.wmnet with reason: crashed [production]
20:51 <mvernon@cumin1002> dbctl commit (dc=all): 'Depool db1246', diff saved to https://phabricator.wikimedia.org/P73916 and previous config saved to /var/cache/conftool/dbconfig/20250302-205123-mvernon.json [production]
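
Note: db1246 crashed and was pulled out of rotation, then downtimed for two days while it is investigated. A sketch of the equivalent commands, assuming dbctl and the sre.hosts.downtime cookbook; the exact cookbook flags are an assumption and may differ:

    # Remove the crashed replica from the pooled configuration and push the change.
    sudo dbctl instance db1246 depool
    sudo dbctl config commit -m 'Depool db1246'
    # Silence alerting for the host for two days.
    sudo cookbook sre.hosts.downtime --days 2 --reason 'crashed' db1246.eqiad.wmnet
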
16:24 <marostegui@cumin1002> dbctl commit (dc=all): 'db2163 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73915 and previous config saved to /var/cache/conftool/dbconfig/20250302-162421-root.json [production]
16:09 <marostegui@cumin1002> dbctl commit (dc=all): 'db2163 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73914 and previous config saved to /var/cache/conftool/dbconfig/20250302-160915-root.json [production]
15:54 <marostegui@cumin1002> dbctl commit (dc=all): 'db2163 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73913 and previous config saved to /var/cache/conftool/dbconfig/20250302-155410-root.json [production]
15:39 <marostegui@cumin1002> dbctl commit (dc=all): 'db2163 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73912 and previous config saved to /var/cache/conftool/dbconfig/20250302-153904-root.json [production]
15:23 <marostegui@cumin1002> dbctl commit (dc=all): 'db2163 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73911 and previous config saved to /var/cache/conftool/dbconfig/20250302-152359-root.json [production]
10:17 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1248.eqiad.wmnet with reason: Index rebuild [production]
10:17 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1248.eqiad.wmnet [production]
10:11 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db1248.eqiad.wmnet [production]
10:11 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2163.codfw.wmnet with reason: Index rebuild [production]
10:08 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2163.codfw.wmnet [production]
10:04 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2163.codfw.wmnet with reason: Setup [production]
10:03 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2167', diff saved to https://phabricator.wikimedia.org/P73910 and previous config saved to /var/cache/conftool/dbconfig/20250302-100324-marostegui.json [production]
10:00 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db2163.codfw.wmnet [production]
09:58 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2163', diff saved to https://phabricator.wikimedia.org/P73909 and previous config saved to /var/cache/conftool/dbconfig/20250302-095839-root.json [production]
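
Note: this block is the usual maintenance sequence for db2163 and db1248: depool with dbctl, run the MariaDB upgrade cookbook, downtime the host for the index rebuild, then repool gradually (see the entries later in the day). A minimal sketch of the upgrade step, assuming the cookbook takes the target host as its only argument (an assumption, not confirmed by this log):

    # Upgrade MariaDB on the depooled replica; downtime and repooling are handled separately.
    sudo cookbook sre.mysql.upgrade db2163.codfw.wmnet
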
06:04 <_joe_> started replication on db2167 [production]
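
Note: replication on db2167 was restarted by hand. On a MariaDB replica this is a single statement; a minimal sketch, assuming direct use of the mysql client on the host:

    # Resume the I/O and SQL replication threads on db2167.
    sudo mysql -e "START SLAVE;"
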
05:44 <tchin@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich: apply [production]
05:44 <tchin@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich: apply [production]
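
Note: the paired START/DONE lines come from the helmfile deploy wrapper on the deployment host. A minimal sketch of the equivalent manual invocation, assuming the charts checkout lives under /srv/deployment-charts and that the helmfile environment matches the name in brackets (both are assumptions):

    # Apply the current chart values for the service in the dse-k8s-eqiad environment.
    cd /srv/deployment-charts/helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich
    helmfile -e dse-k8s-eqiad apply
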
00:32 <reedy@deploy2002> Finished scap sync-world: Backport for [[gerrit:1123772|UserGroupsHookHandler: Return early if performer is false (T387523)]] (duration: 10m 33s) [production]
00:25 <reedy@deploy2002> reedy, dreamyjazz: Continuing with sync [production]
00:25 <reedy@deploy2002> reedy, dreamyjazz: Backport for [[gerrit:1123772|UserGroupsHookHandler: Return early if performer is false (T387523)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
00:21 <reedy@deploy2002> Started scap sync-world: Backport for [[gerrit:1123772|UserGroupsHookHandler: Return early if performer is false (T387523)]] [production]
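
Note: these four entries are the standard scap backport flow: the change is first synced to the mwdebug testservers, the deployers confirm it there, and scap then continues with the full sync-world. A minimal sketch of the driving command, assuming the gerrit change number is passed directly to scap (usual practice, but not shown in this log):

    # Cherry-pick gerrit change 1123772 onto the deployment host, stage it on
    # the testservers, and prompt before syncing it to the full fleet.
    scap backport 1123772
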
2025-03-01
23:59 <tchin@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich: apply [production]
23:59 <tchin@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich: apply [production]
18:37 <dcausse> disabling the saneitizer on the cirrus streaming updater for consumer-search@eqiad & consumer-cloudelastic (pre-emptive hotfix for T387625) [production]
18:37 <dcausse@deploy2002> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
18:37 <dcausse@deploy2002> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
18:36 <dcausse@deploy2002> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
18:35 <dcausse@deploy2002> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
18:30 <dcausse> disabling the saneitizer on the cirrus streaming updater in codfw (hotfix for T387625) [production]
18:29 <dcausse@deploy2002> helmfile [codfw] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
18:29 <dcausse@deploy2002> helmfile [codfw] START helmfile.d/services/cirrus-streaming-updater: apply [production]
17:47 <godog> bounce mtail on centrallog2002 [production]
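
Note: "bounce mtail" means restarting the mtail log-metrics exporter on centrallog2002. A minimal sketch, assuming mtail runs as a systemd unit named mtail (an assumption; the actual unit name is not recorded here):

    # Restart the mtail service on the host.
    sudo systemctl restart mtail
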
17:22 <tchin@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich: apply [production]
17:22 <tchin@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/mw-content-history-reconcile-enrich: apply [production]
14:00 <andrewbogott> rebooting wikitech-static; the entire server was intermittently locking up [production]
2025-02-28
22:30 <inflatador> bking@elastic1103 restart elastic-chi to apply thread pool settings T387176 [production]
22:13 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in search_eqiad [production]
22:13 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in search_eqiad [production]
22:04 <dzahn@cumin1002> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host doc2003.codfw.wmnet [production]
22:04 <dzahn@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host doc2003.codfw.wmnet with OS bookworm [production]
22:02 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic1103*,elastic1107* for ban hosts to change threadpool settings - bking@cumin2002 - T387176 [production]
22:02 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic1103*,elastic1107* for ban hosts to change threadpool settings - bking@cumin2002 - T387176 [production]
21:46 <dzahn@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on doc2003.codfw.wmnet with reason: host reimage [production]