2022-03-15
ยง
|
14:52 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
14:52 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
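The downtime entries above are written to the log automatically by the Spicerack cookbook runner, which records START and END (with exit code) around each run. A minimal sketch of the kind of invocation behind a 2:00:00 downtime, assuming the usual duration and reason options; exact flag names may differ between cookbook versions:

    sudo cookbook sre.hosts.downtime --hours 2 --reason "Maintenance" db1144.eqiad.wmnet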
14:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T298743)', diff saved to https://phabricator.wikimedia.org/P22562 and previous config saved to /var/cache/conftool/dbconfig/20220315-145238-ladsgroup.json [production]
14:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 75%: After schema change', diff saved to https://phabricator.wikimedia.org/P22561 and previous config saved to /var/cache/conftool/dbconfig/20220315-145146-root.json [production]
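The "(re)pooling @ N%" entries are produced by dbctl, the conftool front end for the MediaWiki database configuration: after maintenance an instance is brought back in steps (10%, 25%, 50%, 75%, then full weight), and each step is committed, which writes the Phabricator diff and backup-config paths logged above. A hedged sketch of one such step, assuming the usual pool-then-commit flow (option names may vary by version):

    sudo dbctl instance db1096:3315 pool -p 75    # repool the instance at 75% of its weight (assumed flag)
    sudo dbctl config commit -m 'db1096:3315 (re)pooling @ 75%: After schema change'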
14:50 <moritzm> installing postgresql-11 security updates [production]
14:49 <ntsako@deploy1002> Finished deploy [airflow-dags/analytics@88d5618]: (no justification provided) (duration: 00m 07s) [production]
14:49 <ntsako@deploy1002> Started deploy [airflow-dags/analytics@88d5618]: (no justification provided) [production]
14:43 <otto@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
14:42 <ottomata> I read the cumin output wrong: kafka-jumbo1001 and 1002 restarted successfully before the accidental ctrl-c on the cumin command. Restarting the full jumbo roll-restart to cover them all - T303324 [production]
14:40 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ms-be1068.eqiad.wmnet with reason: host reimage [production]
14:39 <aikochou@deploy1002> helmfile [ml-serve-codfw] Ran 'sync' command on namespace 'revscoring-editquality-goodfaith' for release 'main'. [production]
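The helmfile entries are logged by the Kubernetes deployment tooling after syncing a single release in a namespace. A minimal sketch of the kind of invocation behind this entry; the chart directory is an assumption, and only the environment (ml-serve-codfw) and release name (main) come from the log line itself:

    cd /srv/deployment-charts/helmfile.d/services/revscoring-editquality-goodfaith   # assumed path
    helmfile -e ml-serve-codfw --selector name=main sync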
14:38 <ottomata> all brokers except kafka-jumbo1001 were successfully roll restarted, doing kafka-jumbo1001 manually - T303324 [production]
14:37 <ottomata> accidental cancel of roll restart brokers, re-doing - T303324 [production]
14:37 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P22560 and previous config saved to /var/cache/conftool/dbconfig/20220315-143733-ladsgroup.json [production]
14:37 <otto@cumin1001> END (ERROR) - Cookbook sre.kafka.roll-restart-brokers (exit_code=97) for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
14:37 <cmjohnson@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on ms-be1068.eqiad.wmnet with reason: host reimage [production]
14:36 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 50%: After schema change', diff saved to https://phabricator.wikimedia.org/P22559 and previous config saved to /var/cache/conftool/dbconfig/20220315-143642-root.json [production]
14:32 <ntsako@deploy1002> Finished deploy [airflow-dags/analytics@2924232]: (no justification provided) (duration: 00m 08s) [production]
14:32 <ntsako@deploy1002> Started deploy [airflow-dags/analytics@2924232]: (no justification provided) [production]
14:24 <cmjohnson@cumin1001> START - Cookbook sre.hosts.reimage for host ms-be1068.eqiad.wmnet with OS stretch [production]
14:23 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudvirt1023.eqiad.wmnet with OS bullseye [production]
14:22 <inflatador> T303256 bking@cumin1001 restarting wdqs services `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-blazegraph'` [production]
14:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P22558 and previous config saved to /var/cache/conftool/dbconfig/20220315-142228-ladsgroup.json [production]
14:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 25%: After schema change', diff saved to https://phabricator.wikimedia.org/P22557 and previous config saved to /var/cache/conftool/dbconfig/20220315-142138-root.json [production]
14:10 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 10 hosts with reason: Maintenance [production]
14:10 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 10 hosts with reason: Maintenance [production]
14:10 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
14:10 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
14:07 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T298743)', diff saved to https://phabricator.wikimedia.org/P22556 and previous config saved to /var/cache/conftool/dbconfig/20220315-140723-ladsgroup.json [production]
14:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 10%: After schema change', diff saved to https://phabricator.wikimedia.org/P22555 and previous config saved to /var/cache/conftool/dbconfig/20220315-140634-root.json [production]
14:05 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
14:05 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
14:05 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T298557)', diff saved to https://phabricator.wikimedia.org/P22554 and previous config saved to /var/cache/conftool/dbconfig/20220315-140520-marostegui.json [production]
14:03 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1143 (T298743)', diff saved to https://phabricator.wikimedia.org/P22553 and previous config saved to /var/cache/conftool/dbconfig/20220315-140259-ladsgroup.json [production]
14:02 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
14:02 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
14:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142 (T298743)', diff saved to https://phabricator.wikimedia.org/P22552 and previous config saved to /var/cache/conftool/dbconfig/20220315-140252-ladsgroup.json [production]
14:01 <elukey@deploy1002> helmfile [ml-serve-codfw] Ran 'sync' command on namespace 'revscoring-editquality-goodfaith' for release 'main'. [production]
14:00 <otto@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
13:59 <ottomata> roll restarting kafka jumbo brokers to set max.incremental.fetch.session.cache.slots=2000 - T303324 [production]
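max.incremental.fetch.session.cache.slots is a standard Kafka broker setting: the number of incremental fetch sessions the broker will cache (the upstream default is 1000). It is a static broker property, so raising it needs the rolling restart recorded above. A minimal sketch of the change as it would appear in a plain server.properties; at WMF the broker configuration is actually templated through Puppet:

    # Kafka broker config (server.properties): allow more cached incremental fetch sessions
    max.incremental.fetch.session.cache.slots=2000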
13:58 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudvirt1023.eqiad.wmnet with reason: host reimage [production]
13:54 <andrew@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudvirt1023.eqiad.wmnet with reason: host reimage [production]
13:50 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P22551 and previous config saved to /var/cache/conftool/dbconfig/20220315-135015-marostegui.json [production]
13:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142', diff saved to https://phabricator.wikimedia.org/P22550 and previous config saved to /var/cache/conftool/dbconfig/20220315-134747-ladsgroup.json [production]
13:43 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
13:42 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
13:42 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
13:41 <andrew@cumin1001> START - Cookbook sre.hosts.reimage for host cloudvirt1023.eqiad.wmnet with OS bullseye [production]
13:41 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
13:37 <awight> EU deployment complete [production]