2022-03-15
ยง
|
15:18 <moritzm> installing Java updates on wcqs*/wdqs* hosts [production]
15:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314', diff saved to https://phabricator.wikimedia.org/P22567 and previous config saved to /var/cache/conftool/dbconfig/20220315-151621-ladsgroup.json [production]
15:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1181 (T298563)', diff saved to https://phabricator.wikimedia.org/P22566 and previous config saved to /var/cache/conftool/dbconfig/20220315-151206-marostegui.json [production]
15:12 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:12 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:09 <ebysans@deploy1002> Finished deploy [airflow-dags/analytics@f01214c]: (no justification provided) (duration: 00m 07s) [production]
15:09 <ebysans@deploy1002> Started deploy [airflow-dags/analytics@f01214c]: (no justification provided) [production]
15:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 100%: After schema change', diff saved to https://phabricator.wikimedia.org/P22565 and previous config saved to /var/cache/conftool/dbconfig/20220315-150649-root.json [production]
15:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1144:3314 (T298743)', diff saved to https://phabricator.wikimedia.org/P22564 and previous config saved to /var/cache/conftool/dbconfig/20220315-150116-ladsgroup.json [production]
14:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1144:3314 (T298743)', diff saved to https://phabricator.wikimedia.org/P22563 and previous config saved to /var/cache/conftool/dbconfig/20220315-145246-ladsgroup.json [production]
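For context, depool/repool entries like the ones above are driven by dbctl on a cumin host; a minimal sketch of the usual command sequence (illustrative only, assuming the standard dbctl subcommands, not a verbatim record of this maintenance) looks like:

    # hedged sketch: take the replica out of rotation, commit and log the change,
    # then repool it after maintenance (each commit produces a SAL entry like the ones above)
    dbctl instance db1144:3314 depool
    dbctl config commit -m 'Depooling db1144:3314 (T298743)'
    # ... maintenance on the host ...
    dbctl instance db1144:3314 pool
    dbctl config commit -m 'Repooling after maintenance db1144:3314 (T298743)'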
14:52 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
14:52 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1144.eqiad.wmnet with reason: Maintenance [production]
14:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T298743)', diff saved to https://phabricator.wikimedia.org/P22562 and previous config saved to /var/cache/conftool/dbconfig/20220315-145238-ladsgroup.json [production]
14:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 75%: After schema change', diff saved to https://phabricator.wikimedia.org/P22561 and previous config saved to /var/cache/conftool/dbconfig/20220315-145146-root.json [production]
14:50 <moritzm> installing postgresql-11 security updates [production]
14:49 <ntsako@deploy1002> Finished deploy [airflow-dags/analytics@88d5618]: (no justification provided) (duration: 00m 07s) [production]
14:49 <ntsako@deploy1002> Started deploy [airflow-dags/analytics@88d5618]: (no justification provided) [production]
14:43 <otto@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
14:42 <ottomata> I read the cumin output wrong, kafka-jumbo1001 and 1002 restarted successfully before accidental ctrl-c on cumin command. Restarting the full jumbo roll-restart to thoroughly do them all - T303324 [production]
14:40 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ms-be1068.eqiad.wmnet with reason: host reimage [production]
14:39 <aikochou@deploy1002> helmfile [ml-serve-codfw] Ran 'sync' command on namespace 'revscoring-editquality-goodfaith' for release 'main'. [production]
14:38 <ottomata> all brokers except kafka-jumbo1001 were successfully roll restarted, doing kafka-jumbo1001 manually - T303324 [production]
14:37 <ottomata> accidental cancel of roll restart brokers, re-doing - T303324 [production]
14:37 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P22560 and previous config saved to /var/cache/conftool/dbconfig/20220315-143733-ladsgroup.json [production]
14:37 <otto@cumin1001> END (ERROR) - Cookbook sre.kafka.roll-restart-brokers (exit_code=97) for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
14:37 <cmjohnson@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on ms-be1068.eqiad.wmnet with reason: host reimage [production]
14:36 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 50%: After schema change', diff saved to https://phabricator.wikimedia.org/P22559 and previous config saved to /var/cache/conftool/dbconfig/20220315-143642-root.json [production]
14:32 <ntsako@deploy1002> Finished deploy [airflow-dags/analytics@2924232]: (no justification provided) (duration: 00m 08s) [production]
14:32 <ntsako@deploy1002> Started deploy [airflow-dags/analytics@2924232]: (no justification provided) [production]
14:24 <cmjohnson@cumin1001> START - Cookbook sre.hosts.reimage for host ms-be1068.eqiad.wmnet with OS stretch [production]
14:23 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudvirt1023.eqiad.wmnet with OS bullseye [production]
14:22 <inflatador> T303256 bking@cumin1001 restarting wdqs services `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-blazegraph'` [production]
14:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P22558 and previous config saved to /var/cache/conftool/dbconfig/20220315-142228-ladsgroup.json [production]
14:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 25%: After schema change', diff saved to https://phabricator.wikimedia.org/P22557 and previous config saved to /var/cache/conftool/dbconfig/20220315-142138-root.json [production]
14:10 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 10 hosts with reason: Maintenance [production]
14:10 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 10 hosts with reason: Maintenance [production]
14:10 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
14:10 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
14:07 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T298743)', diff saved to https://phabricator.wikimedia.org/P22556 and previous config saved to /var/cache/conftool/dbconfig/20220315-140723-ladsgroup.json [production]
14:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3315 (re)pooling @ 10%: After schema change', diff saved to https://phabricator.wikimedia.org/P22555 and previous config saved to /var/cache/conftool/dbconfig/20220315-140634-root.json [production]
14:05 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
14:05 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
14:05 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T298557)', diff saved to https://phabricator.wikimedia.org/P22554 and previous config saved to /var/cache/conftool/dbconfig/20220315-140520-marostegui.json [production]
14:03 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1143 (T298743)', diff saved to https://phabricator.wikimedia.org/P22553 and previous config saved to /var/cache/conftool/dbconfig/20220315-140259-ladsgroup.json [production]
14:02 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
14:02 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
14:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142 (T298743)', diff saved to https://phabricator.wikimedia.org/P22552 and previous config saved to /var/cache/conftool/dbconfig/20220315-140252-ladsgroup.json [production]
14:01 <elukey@deploy1002> helmfile [ml-serve-codfw] Ran 'sync' command on namespace 'revscoring-editquality-goodfaith' for release 'main'. [production]
14:00 <otto@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
13:59 <ottomata> roll restarting kafka jumbo brokers to set max.incremental.fetch.session.cache.slots=2000 - T303324 [production]
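For context, max.incremental.fetch.session.cache.slots is a standard Kafka broker setting (default 1000) that caps how many incremental fetch sessions a broker keeps cached; per the entry above, applying the new value required the rolling broker restart logged throughout this afternoon. An illustrative server.properties line (assumed placement, not a copy of the actual cluster configuration):

    # allow up to 2000 cached incremental fetch sessions per broker (Kafka default: 1000)
    max.incremental.fetch.session.cache.slots=2000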