2024-03-27
23:21 <TimStarling> on releases1003: uploaded 80 missing old MediaWiki releases T190369 [production]
23:15 <eevans@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 30 days, 0:00:00 on restbase1026.eqiad.wmnet with reason: Decommissioning — T354561 [production]
23:15 <eevans@cumin1002> START - Cookbook sre.hosts.downtime for 30 days, 0:00:00 on restbase1026.eqiad.wmnet with reason: Decommissioning — T354561 [production]
23:04 <bking@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host elastic2088.codfw.wmnet with OS bullseye [production]
22:30 <ryankemper> T360993 [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
22:30 <ryankemper> T360993 [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
22:30 <ryankemper> T360993 [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
22:28 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@143ca33]: 0.3.138 (duration: 11m 24s) [production]
22:17 <ryankemper> T360993 [WDQS Deploy] Tests passing following deploy of `0.3.138` on canary `wdqs1003`; proceeding to rest of fleet [production]
22:17 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@143ca33]: 0.3.138 [production]
22:16 <ryankemper> T360993 [WDQS Deploy] Gearing up for deploy of wdqs `0.3.138`. Pre-deploy tests passing on canary `wdqs1003` [production]
21:46 <bking@cumin2002> START - Cookbook sre.hosts.decommission for hosts elastic[2038-2048,2050-2054].codfw.wmnet [production]
21:41 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host elastic2088.codfw.wmnet with OS bullseye [production]
20:26 <jhuneidi@deploy1002> Finished scap: Backport for [[gerrit:1015072|Scope temp user reserved pattern to temp users (T361021 T349506)]], [[gerrit:1015095|Updates config to deploy vector 2022 (T360628)]] (duration: 18m 57s) [production]
20:15 <jhuneidi@deploy1002> ksarabia and jhuneidi and tchanders: Continuing with sync [production]
20:10 <jhuneidi@deploy1002> ksarabia and jhuneidi and tchanders: Backport for [[gerrit:1015072|Scope temp user reserved pattern to temp users (T361021 T349506)]], [[gerrit:1015095|Updates config to deploy vector 2022 (T360628)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
20:07 <jhuneidi@deploy1002> Started scap: Backport for [[gerrit:1015072|Scope temp user reserved pattern to temp users (T361021 T349506)]], [[gerrit:1015095|Updates config to deploy vector 2022 (T360628)]] [production]
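The four scap entries above (20:07 through 20:26) trace one backport window: sync to the testservers, verify on mwdebug, then continue to the full fleet. As a hedged sketch only, this is roughly how such a window is started from the deployment host, assuming the `scap backport` subcommand accepts the Gerrit change numbers cited in the log:

```bash
# On the deployment host (e.g. deploy1002); change numbers are the
# Gerrit IDs referenced in the log entries above (assumption: positional args).
scap backport 1015072 1015095

# scap syncs the patches to the testservers (mwdebug) and pauses for
# verification; confirming the prompt continues the full sync, which
# corresponds to the "Continuing with sync" / "Finished scap" entries.
```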
19:41 <mutante> ticket.wikimedia.org - replacing envoy cert on backends [production]
18:54 <jynus> increasing volume size of backup2011 T334069 [production]
18:38 <jhuneidi@deploy1002> Synchronized php: group1 wikis to 1.42.0-wmf.24 refs T360156 (duration: 12m 38s) [production]
18:34 <vriley@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host dbprov1006.eqiad.wmnet with OS bullseye [production]
18:25 <jhuneidi@deploy1002> rebuilt and synchronized wikiversions files: group1 wikis to 1.42.0-wmf.24 refs T360156 [production]
17:12 <vriley@cumin1002> START - Cookbook sre.hosts.reimage for host dbprov1006.eqiad.wmnet with OS bullseye [production]
16:38 <Emperor> depool and restart swift-proxy on ms-fe2013 then repool T360913 [production]
16:37 <Emperor> depool and restart swift-proxy on ms-fe2012 then repool T360913 [production]
16:37 <Emperor> depool and restart swift-proxy on ms-fe2011 then repool T360913 [production]
16:34 <Emperor> restart swift-proxy on ms-fe2010 then repool T360913 [production]
16:31 <Emperor> depool and restart swift-proxy on moss-fe2001 then repool T360913 [production]
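The five entries above describe a one-host-at-a-time rolling restart of swift-proxy on the Swift frontends for T360913. A minimal sketch of a single iteration of that depool/restart/repool pattern, using the same `depool`/`pool` wrappers quoted in the wdqs cumin command earlier in this log; the unit name `swift-proxy` and the grace period are assumptions:

```bash
# Run on each frontend (ms-fe2010..2013, moss-fe2001) in turn, one host at a time.
depool                               # take the host out of the load balancer
sleep 30                             # assumed grace period for in-flight requests
sudo systemctl restart swift-proxy   # assumed unit name for the Swift proxy service
pool                                 # return the host to service before moving on
```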
16:28 <denisse@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-host (exit_code=0) for host alert2001.wikimedia.org [production]
16:22 <denisse@cumin2002> START - Cookbook sre.puppet.migrate-host for host alert2001.wikimedia.org [production]
16:21 <denisse@cumin2002> END (FAIL) - Cookbook sre.puppet.migrate-host (exit_code=99) for host alert2001.wikimedia.org [production]
16:21 <denisse@cumin2002> START - Cookbook sre.puppet.migrate-host for host alert2001.wikimedia.org [production]
16:12 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 22:00:00 on db[2115,2215].codfw.wmnet with reason: Downtime for analysis [production]
16:12 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 22:00:00 on db[2115,2215].codfw.wmnet with reason: Downtime for analysis [production]
16:10 <jayme@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
16:09 <jayme@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
16:08 <jayme@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
16:07 <jayme@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
16:06 <jayme@deploy1002> helmfile [staging] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
16:05 <jayme@deploy1002> helmfile [staging] START helmfile.d/services/changeprop-jobqueue: apply [production]
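The six helmfile entries above show the usual staging → codfw → eqiad progression for the changeprop-jobqueue Kubernetes service, each START/DONE pair being one environment's apply. As a hedged sketch, assuming the deploy is driven from the service's helmfile.d directory with stock helmfile environments (the exact wrapper on the deployment host may differ):

```bash
# From the deployment host, inside helmfile.d/services/changeprop-jobqueue:
# apply to one environment at a time, promoting only after the previous one is healthy.
helmfile -e staging apply
helmfile -e codfw apply
helmfile -e eqiad apply
```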
15:55 <inflatador> bking@cumin2002 running puppet against A:wdqs-main to apply nginx changes T360993 [production]
15:53 <jayme@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop: apply [production]
15:53 <jayme@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop: apply [production]
15:51 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12 days, 0:00:00 on elastic2038.codfw.wmnet with reason: T358882 [production]
15:51 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 12 days, 0:00:00 on elastic2038.codfw.wmnet with reason: T358882 [production]
15:51 <arnaudb@cumin1002> END (ERROR) - Cookbook sre.mysql.clone (exit_code=97) Will create a clone of db2115.codfw.wmnet onto db2215.codfw.wmnet [production]
15:51 <jayme@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]
15:51 <claime> 50% of backend RESTbase traffic to mw-api-int - T358213 [production]
15:50 <jayme@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop: apply [production]
15:50 <jayme@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop: apply [production]