2024-06-04
ยง
|
19:06 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
19:06 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
19:06 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
19:00 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@43b966f]: 0.3.142 (duration: 12m 53s) [production]
18:53 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1238', diff saved to https://phabricator.wikimedia.org/P64033 and previous config saved to /var/cache/conftool/dbconfig/20240604-185358-marostegui.json [production]
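dbctl entries like the one above are emitted automatically when an operator commits a pooling change from a cumin host. A minimal sketch of that flow, assuming the documented `dbctl instance ... pool` and `dbctl config commit` subcommands; in practice repools after maintenance are usually done in gradual weight steps rather than one jump:

  sudo dbctl instance db1238 pool -p 100                              # mark the instance pooled (percentage assumed)
  sudo dbctl config commit -m "Repooling after maintenance db1238"    # writes the config and produces the SAL/dbconfig entry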
18:48 <ryankemper> [WDQS Deploy] Forgot to run the command to set the git hash to the tip of origin/master, so the deploy was a partial no-op. Re-rolling... [production]
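Re-rolling after the entry above generally means pointing the deploy checkout at the tip of origin/master before running scap again. A sketch under assumed paths (the wdqs working copy on deploy1002), not the exact commands used:

  cd /srv/deployment/wdqs/wdqs                       # deployment working copy (path assumed)
  git fetch origin && git checkout origin/master     # set the checkout to the tip of origin/master
  scap deploy '0.3.142'                              # re-roll; produces the Started/Finished deploy entries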
18:47 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@43b966f]: 0.3.142 [production]
18:46 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@143ca33]: 0.3.142 (duration: 02m 02s) [production]
18:45 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.142` on canary `wdqs1016`; proceeding to rest of fleet [production]
18:44 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@143ca33]: 0.3.142 [production]
18:41 <ryankemper> [WDQS Deploy] Gearing up for deploy of wdqs `0.3.142`. Pre-deploy tests passing on canary `wdqs1016` [production]
18:38 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1238', diff saved to https://phabricator.wikimedia.org/P64032 and previous config saved to /var/cache/conftool/dbconfig/20240604-183850-marostegui.json [production]
18:35 <mutante> aphlict - (phab realtime notifications) - reboots [production]
18:30 <mutante> doc.wikimedia.org - very short downtime for maintenance [production]
18:28 <dzahn@cumin1002> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) for 0:10:00 on doc1003.eqiad.wmnet with reason: reboot T366555 [production]
18:28 <dzahn@cumin1002> START - Cookbook sre.hosts.downtime for 0:10:00 on doc1003.eqiad.wmnet with reason: reboot T366555 [production]
18:28 <dzahn@cumin1002> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) for 0:10:00 on doc.wikimedia.org with reason: reboot T366555 [production]
18:28 <dzahn@cumin1002> START - Cookbook sre.hosts.downtime for 0:10:00 on doc.wikimedia.org with reason: reboot T366555 [production]
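Both 18:28 downtime runs ended with exit_code=99, which for spicerack cookbooks generally indicates the run raised an exception rather than completing. A sketch of the invocation behind these entries, run from a cumin host; the flag names are assumed from the usual sre.hosts.downtime interface:

  sudo cookbook sre.hosts.downtime --minutes 10 -r "reboot T366555" 'doc1003.eqiad.wmnet'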
18:26 <dduvall@deploy1002> rebuilt and synchronized wikiversions files: group0 wikis to 1.43.0-wmf.8 refs T361402 [production]
18:23 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1238 (T364069)', diff saved to https://phabricator.wikimedia.org/P64031 and previous config saved to /var/cache/conftool/dbconfig/20240604-182342-marostegui.json [production]
18:15 <kamila@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host wikikube-ctrl1001.eqiad.wmnet with OS bullseye [production]
18:04 <sukhe@cumin1002> END (PASS) - Cookbook sre.cdn.roll-reboot (exit_code=0) rolling reboot on P{cp7014*} and A:cp [production]
17:54 <sukhe@cumin1002> START - Cookbook sre.cdn.roll-reboot rolling reboot on P{cp7014*} and A:cp [production]
17:53 <sukhe> sudo cumin 'A:cp-upload and A:magru' "sed -i '/\sup ethtool -A eno12399np0/d' /etc/network/interfaces" [production]
17:51 <sukhe> sudo cumin 'A:cp-text and A:magru' "sed -i '/\sup ethtool -A eno12399np0/d' /etc/network/interfaces" [production]
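The two one-liners above delete any `up ethtool -A eno12399np0 ...` hook lines from /etc/network/interfaces on the magru upload and text caches ahead of the rolling reboots. A quick check that no such line remains, reusing the same Cumin selectors (a sketch; success on a host here means grep found no match):

  sudo cumin 'A:cp and A:magru' "! grep -q 'ethtool -A eno12399np0' /etc/network/interfaces"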
17:49 <sukhe@cumin1002> END (PASS) - Cookbook sre.cdn.roll-reboot (exit_code=0) rolling reboot on P{cp7002*} and A:cp [production]
17:39 <sukhe@cumin1002> START - Cookbook sre.cdn.roll-reboot rolling reboot on P{cp7002*} and A:cp [production]
17:23 <kamila@cumin1002> START - Cookbook sre.hosts.reimage for host wikikube-ctrl1001.eqiad.wmnet with OS bullseye [production]
17:22 <sukhe> sudo cumin 'A:cp and A:magru' 'run-puppet-agent' [production]
17:15 <kamila@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
17:15 <kamila@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Moved wikikube-ctrl1001 to a new rack - kamila@cumin1002" [production]
17:14 <kamila@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Moved wikikube-ctrl1001 to a new rack - kamila@cumin1002" [production]
17:11 <kamila@cumin1002> START - Cookbook sre.dns.netbox [production]
16:53 <sukhe@puppetmaster1001> conftool action : set/pooled=yes; selector: name=cp700[12].magru.wmnet,service=(cdn|ats-be) [production]
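conftool actions like the one above are logged automatically and are usually issued with confctl. A sketch of the equivalent command, assuming the standard select/set syntax:

  sudo confctl select 'name=cp700[12].magru.wmnet,service=(cdn|ats-be)' set/pooled=yes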
16:52 <swfrench@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
16:51 <swfrench@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
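The START/DONE pair above corresponds to applying the changeprop-jobqueue release on the eqiad wikikube cluster. A minimal sketch of the operator-side steps, assuming the usual deployment-charts checkout on the deploy host:

  cd /srv/deployment-charts/helmfile.d/services/changeprop-jobqueue   # path assumed
  helmfile -e eqiad diff     # review the pending change
  helmfile -e eqiad apply    # roll it out; emits the START/DONE entries above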
16:41 <elukey> delete other 2 pods in eventgate-main on wikikube-eqiad to test if envoy on them is in a weird state [production]
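Recycling the eventgate-main pods above is plain kubectl work: deleting a pod lets its ReplicaSet recreate it with a fresh envoy sidecar. A sketch with the cluster helper, namespace, and pod name assumed:

  kube_env eventgate-main eqiad                     # select cluster credentials/namespace (WMF helper, assumed)
  kubectl get pods -n eventgate-main                # pick the pods whose envoy looks wedged
  kubectl delete pod -n eventgate-main <pod-name>   # ReplicaSet recreates it with a fresh sidecar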
16:36 <dcaro@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudcephosd1010.eqiad.wmnet [production]
16:31 <elukey> delete 3 pods in eventgate-main on wikikube-eqiad to test if envoy on them is in a weird state [production]
16:29 <dcaro@cumin1002> START - Cookbook sre.hosts.reboot-single for host cloudcephosd1010.eqiad.wmnet [production]
16:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db2203 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P64028 and previous config saved to /var/cache/conftool/dbconfig/20240604-162241-root.json [production]
16:22 <fabfur@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cp7002.magru.wmnet [production]
16:15 <fabfur@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cp7001.magru.wmnet [production]
16:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2137 (T364299)', diff saved to https://phabricator.wikimedia.org/P64025 and previous config saved to /var/cache/conftool/dbconfig/20240604-161233-marostegui.json [production]
16:12 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2137.codfw.wmnet with reason: Maintenance [production]
16:12 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db2137.codfw.wmnet with reason: Maintenance [production]
16:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2136 (T364299)', diff saved to https://phabricator.wikimedia.org/P64024 and previous config saved to /var/cache/conftool/dbconfig/20240604-161210-marostegui.json [production]
16:11 <fabfur@cumin1002> START - Cookbook sre.hosts.reboot-single for host cp7002.magru.wmnet [production]
16:10 <fabfur@cumin1002> START - Cookbook sre.hosts.reboot-single for host cp7001.magru.wmnet [production]
16:10 <fnegri@cumin1002> conftool action : set/pooled=yes; selector: name=clouddb1013.eqiad.wmnet,service=s1 [production]