2022-05-19
19:01 <dzahn@cumin2002> START - Cookbook sre.dns.netbox [production]
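For context, sre.dns.netbox is the cookbook that regenerates DNS records from Netbox data and pushes them to the authoritative name servers. A minimal sketch of the invocation from a cumin host; the free-form message argument is an assumption, not the text actually used for this run:

    # Sketch only: run from a cumin host; the commit message below is illustrative.
    sudo cookbook sre.dns.netbox "propagate updated host records from Netbox"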
18:49 <ryankemper> [WDQS Deploy] `Unknown` status resolved following deploy of https://gerrit.wikimedia.org/r/793530 ; wdqs categories monitoring is healthy again. We're done here [production]
18:45 <ryankemper> [WDQS Deploy] Deployed https://gerrit.wikimedia.org/r/793530; ran puppet agent across wdqs* and just kicked off a re-check of the NRPE alerts. We'll see if that clears up the Unknown state [production]
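A sketch of how that fleet-wide puppet run was most likely issued; the `wdqs*` host glob and the use of the run-puppet-agent wrapper are assumptions based on the cumin commands quoted elsewhere in this log:

    # Sketch: force an immediate Puppet run on all wdqs hosts so the new plugin path lands.
    sudo -E cumin 'wdqs*' 'run-puppet-agent'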
18:29 <ryankemper> [WDQS Deploy] Okay, so a recent refactor changed where `check_categories.py` lives. Previously it was `/usr/lib/nagios/plugins/check_categories.py` and now it's `/usr/local/lib/nagios/plugins/check_categories.py`. So https://gerrit.wikimedia.org/r/793530 should fix things now [production]
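For background, NRPE checks reference plugins by absolute path in their command definitions (managed by Puppet at WMF), which is why moving the script broke the check until the definition was updated. An illustrative, assumed NRPE command entry; the real check name and arguments may differ:

    # Illustrative only; the actual definition is generated by Puppet.
    # Before the refactor:
    command[check_categories]=/usr/lib/nagios/plugins/check_categories.py
    # After https://gerrit.wikimedia.org/r/793530:
    command[check_categories]=/usr/local/lib/nagios/plugins/check_categories.py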
18:18 <ryankemper> [WDQS Deploy] Traced the failure back to https://gerrit.wikimedia.org/r/c/operations/puppet/+/792700, presumably; trying to see what we can do to fix up the patch without having to revert it, since it touches things besides the query service [production]
17:55 <ryankemper> [WDQS Deploy] Slight amendment to the above: we're seeing status `Unknown` for `Categories endpoint` and `Categories update lag`. They've been warning for ~24h, so this isn't something the deploy surfaced, but looking into it now [production]
17:51 <ryankemper> T306899 Rolled `wdqs` and `wcqs` deploys to adjust logging settings. Hoping this gives us more visibility into the 500 errors WCQS users have been experiencing. [production]
17:50 <ryankemper> [WDQS Deploy] Deploy complete. Successful test query placed on query.wikidata.org, there are no relevant criticals in Icinga, and Grafana looks good [production]
17:30 <ryankemper> [WCQS Deploy] Successful test query placed on commons-query.wikimedia.org, there are no relevant criticals in Icinga, and Grafana looks good. WCQS deploy complete [production]
17:30 <ryankemper> [WCQS Deploy] Restarted `wcqs-updater` across all hosts: `sudo -E cumin 'A:wcqs-public' 'systemctl restart wcqs-updater'` [production]
17:29 <ryankemper> [WCQS Deploy] Tests looked good following deploy of `0.3.111` to canary `wcqs1002.eqiad.wmnet`; proceeded to the rest of the fleet [production]
17:29 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@a493d7f] (wcqs): Deploy 0.3.111 to WCQS (duration: 03m 03s) [production]
17:26 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@a493d7f] (wcqs): Deploy 0.3.111 to WCQS [production]
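The Started/Finished deploy lines above are logged automatically by scap. A rough sketch of the command behind this WCQS run, executed from the repository's deploy directory on deploy1002; the directory path and the use of scap's environment flag are assumptions:

    # Sketch: deploy the wdqs/wdqs repo at a493d7f using the wcqs environment overrides.
    cd /srv/deployment/wdqs/wdqs
    scap deploy -e wcqs 'Deploy 0.3.111 to WCQS'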
17:26 <ryankemper> [WCQS Deploy] Gearing up for deploy of wcqs `0.3.111` [production]
17:24 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
17:24 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
17:23 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
17:22 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@a493d7f]: 0.3.111 (duration: 08m 11s) [production]
17:16 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.111` on canary `wdqs1003`; proceeding to the rest of the fleet [production]
17:14 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@a493d7f]: 0.3.111 [production]
17:14 <ryankemper> [WDQS Deploy] Gearing up for deploy of wdqs `0.3.111`. Pre-deploy tests passing on canary `wdqs1003` [production]
17:03 <otto@deploy1002> Finished deploy [airflow-dags/analytics@95c1f50]: (no justification provided) (duration: 00m 21s) [production]
17:03 <otto@deploy1002> Started deploy [airflow-dags/analytics@95c1f50]: (no justification provided) [production]
16:56 <otto@deploy1002> Finished deploy [airflow-dags/analytics_test@95c1f50]: (no justification provided) (duration: 00m 12s) [production]
16:55 <otto@deploy1002> Started deploy [airflow-dags/analytics_test@95c1f50]: (no justification provided) [production]
16:37 <dcaro@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudgw1002.eqiad.wmnet [production]
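sre.hosts.reboot-single downtimes a single host, reboots it, and waits for it to come back before reporting PASS/FAIL. A minimal sketch of the invocation, assuming the host FQDN is the only required argument:

    # Sketch (argument form assumed): reboot one host and wait for it to return.
    sudo cookbook sre.hosts.reboot-single cloudgw1002.eqiad.wmnet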
16:35 <dcaro@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudgw1001.eqiad.wmnet [production]
16:31 <dcaro@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudgw1001.eqiad.wmnet [production]
16:15 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host gerrit2002.wikimedia.org with OS bullseye [production]
16:10 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T298560)', diff saved to https://phabricator.wikimedia.org/P28155 and previous config saved to /var/cache/conftool/dbconfig/20220519-161022-ladsgroup.json [production]
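The dbctl lines in this log are emitted when a configuration change is committed to conftool/etcd. A hedged sketch of the depool step behind the 16:10 entry; the exact subcommand forms are assumptions about dbctl's CLI:

    # Sketch: take db1182 out of rotation, then commit the change.
    # The commit is what produces the "dbctl commit (dc=all)" log line above.
    dbctl instance db1182 depool
    dbctl config commit -m 'Depooling db1182 (T298560)'

The later "Repooling after maintenance" entries are the mirror image, with the repeated commits for db1129 and db1147 restoring those instances to the pool in stages.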
16:10 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 16:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
16:10 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 16:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
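The downtime entries come from the sre.hosts.downtime cookbook, which schedules monitoring downtime for a host for a fixed duration (16:00:00 here, i.e. 16 hours). A sketch of what this run may have looked like; the flag names are assumptions:

    # Sketch (flag names assumed): 16 hours of downtime on db1182 with reason "Maintenance".
    sudo cookbook sre.hosts.downtime --hours 16 -r "Maintenance" 'db1182.eqiad.wmnet'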
16:10 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T298560)', diff saved to https://phabricator.wikimedia.org/P28154 and previous config saved to /var/cache/conftool/dbconfig/20220519-161014-ladsgroup.json [production]
16:01 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on gerrit2002.wikimedia.org with reason: host reimage [production]
15:58 <pt1979@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on gerrit2002.wikimedia.org with reason: host reimage [production]
15:57 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host gerrit2002.wikimedia.org with OS bullseye [production]
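The reimage entries for gerrit2002 (the 15:37 attempt that ended with exit_code=99 at 15:54, and this retry that finished successfully at 16:15) come from the sre.hosts.reimage cookbook, which reinstalls the host with the named OS and, as the 15:58 entry shows, schedules its own downtime along the way. A hedged sketch of the invocation; the exact arguments are assumptions:

    # Sketch (arguments assumed): reimage gerrit2002 onto Debian bullseye.
    sudo cookbook sre.hosts.reimage --os bullseye gerrit2002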
15:55 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P28153 and previous config saved to /var/cache/conftool/dbconfig/20220519-155509-ladsgroup.json [production]
15:54 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host gerrit2002.wikimedia.org with OS bullseye [production]
15:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1147 (T303603)', diff saved to https://phabricator.wikimedia.org/P28152 and previous config saved to /var/cache/conftool/dbconfig/20220519-154124-ladsgroup.json [production]
15:40 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P28151 and previous config saved to /var/cache/conftool/dbconfig/20220519-154003-ladsgroup.json [production]
15:37 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host gerrit2002.wikimedia.org with OS bullseye [production]
15:28 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1147', diff saved to https://phabricator.wikimedia.org/P28150 and previous config saved to /var/cache/conftool/dbconfig/20220519-152618-ladsgroup.json [production]
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T298560)', diff saved to https://phabricator.wikimedia.org/P28149 and previous config saved to /var/cache/conftool/dbconfig/20220519-152457-ladsgroup.json [production]
15:24 <ariel@deploy1002> Finished deploy [dumps/dumps@cd30939]: use dbgroupdefault for most jobs (duration: 00m 04s) [production]
15:24 <ariel@deploy1002> Started deploy [dumps/dumps@cd30939]: use dbgroupdefault for most jobs [production]
15:23 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:20 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on ganeti5003.eqsin.wmnet with reason: Remove from cluster for firmware update and eventual reimage [production]
15:20 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on ganeti5003.eqsin.wmnet with reason: Remove from cluster for firmware update and eventual reimage [production]
15:19 <oblivian@deploy1002> Synchronized README: null sync-file to verify the switch to the deployment group (duration: 00m 50s) [production]
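The 15:19 entry is the standard log line produced by scap sync-file. A sketch of the underlying command, run from the MediaWiki staging directory on the deploy host; the staging path is an assumption:

    # Sketch: sync a single file to the app servers, with the quoted text as the log message.
    cd /srv/mediawiki-staging
    scap sync-file README 'null sync-file to verify the switch to the deployment group'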