2023-09-12
ยง
|
11:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52471 and previous config saved to /var/cache/conftool/dbconfig/20230912-111702-root.json [production]
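The repooling entries above and below follow the standard dbctl ramp: the instance's pooled percentage is raised in steps (1%, 3%, 5%, ...) and each step is committed, which saves the diff to Phabricator automatically. A minimal sketch of one step, run from a cumin host; exact flags may differ between dbctl versions:

    # Raise db2158 to 50% of its target weight, then commit the new config.
    dbctl instance db2158 pool -p 50
    dbctl config commit -m 'db2158 (re)pooling @ 50%: Repooling after cloning another host'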
11:03 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-web: apply [production]
11:03 <cgoubert@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-web: apply [production]
11:03 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]
11:02 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/mw-web: apply [production]
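The four mw-web lines above are two START/DONE pairs from the helmfile-based deployment flow on the deploy host: one apply per datacenter, run from the service's directory under helmfile.d. A rough sketch, assuming the usual /srv/deployment-charts checkout (path and flags are illustrative):

    cd /srv/deployment-charts/helmfile.d/services/mw-web
    helmfile -e codfw apply    # logged as START ... apply, then DONE on success
    helmfile -e eqiad apply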
11:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 25%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52470 and previous config saved to /var/cache/conftool/dbconfig/20230912-110157-root.json [production]
10:54 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudservices1004.wikimedia.org [production]
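Cookbook START/END lines like the one above are written to the log automatically by Spicerack; the operator only runs a single command on a cumin host. A hedged sketch of the likely invocation:

    sudo cookbook sre.hosts.reboot-single cloudservices1004.wikimedia.org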
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 10%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52468 and previous config saved to /var/cache/conftool/dbconfig/20230912-104652-root.json [production]
10:45 <moritzm> rebalance Ganeti cluster in eqiad/C following node reboots [production]
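Rebalancing a Ganeti node group after a round of reboots is typically done with the htools balancer hbal on the cluster master. A sketch, assuming eqiad row C corresponds to a node group named row_C (the group name is a guess):

    # Compute a balancing plan over the Luxi backend and execute it.
    hbal -L -G row_C -X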
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1028.eqiad.wmnet [production]
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1028.eqiad.wmnet [production]
10:37 <taavi@cumin1001> conftool action : set/pooled=yes:weight=10; selector: cluster=cloudweb [production]
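The conftool line above is the audit-log form of a confctl call; the equivalent command would look roughly like this (the logged selector carries no name= filter, so it matches every host in the cluster):

    sudo confctl select 'cluster=cloudweb' set/pooled=yes:weight=10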
10:32 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1028.eqiad.wmnet [production]
10:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 5%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52467 and previous config saved to /var/cache/conftool/dbconfig/20230912-103148-root.json [production]
10:25 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1028.eqiad.wmnet [production]
10:23 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pki-root1002.eqiad.wmnet [production]
10:21 <jgiannelos@deploy1002> helmfile [eqiad] DONE helmfile.d/services/wikifeeds: apply [production]
10:21 <jgiannelos@deploy1002> helmfile [eqiad] START helmfile.d/services/wikifeeds: apply [production]
10:16 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host pki-root1002.eqiad.wmnet [production]
10:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 3%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52466 and previous config saved to /var/cache/conftool/dbconfig/20230912-101643-root.json [production]
10:13 <jmm@cumin2002> END (PASS) - Cookbook sre.pki.restart-reboot (exit_code=0) rolling reboot on A:pki [production]
10:13 <moritzm> disabled nginx/puppetdb/postgresql/microservice on puppetdb1002/2002 to ensure nothing hits the old endpoints anymore [production]
10:09 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anymore [production]
10:09 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anymore [production]
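Each downtime pair above is again a single cookbook run, which silences monitoring for the host for the given duration. A sketch of the likely invocation; the flag names are assumed from the current sre.hosts.downtime cookbook:

    sudo cookbook sre.hosts.downtime --days 3 \
        --reason 'Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anymore' \
        puppetdb2002.codfw.wmnet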
10:09 <jgiannelos@deploy1002> helmfile [staging] DONE helmfile.d/services/wikifeeds: apply [production]
10:08 <jgiannelos@deploy1002> helmfile [staging] START helmfile.d/services/wikifeeds: apply [production]
10:05 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on puppetdb1002.eqiad.wmnet with reason: Disable puppetdb/postgres on old nodes to ensure nothing hits them anymore [production]
10:05 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on puppetdb1002.eqiad.wmnet with reason: Disable puppetdb/postgres on old nodes to ensure nothing hits them anymore [production]
10:02 <hnowlan> enabling puppet on A:cp [production]
10:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 1%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52465 and previous config saved to /var/cache/conftool/dbconfig/20230912-100138-root.json [production]
09:59 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) pki.discovery.wmnet. on all recursors [production]
09:59 <jmm@cumin2002> START - Cookbook sre.dns.wipe-cache pki.discovery.wmnet. on all recursors [production]
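The wipe-cache runs here and below accompany the PKI rolling reboot: clients reach the PKI hosts via the pki.discovery.wmnet service record, so after traffic moves the stale record is flushed from all DNS recursors. The cookbook takes the record to wipe as an argument; a sketch:

    sudo cookbook sre.dns.wipe-cache pki.discovery.wmnet.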
09:53 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) pki.discovery.wmnet. on all recursors [production]
09:52 <jmm@cumin2002> START - Cookbook sre.dns.wipe-cache pki.discovery.wmnet. on all recursors [production]
09:52 <jmm@cumin2002> START - Cookbook sre.pki.restart-reboot rolling reboot on A:pki [production]
09:32 <hnowlan> disabled puppet on A:cp [production]
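Disabling Puppet across the cache proxies (the A:cp Cumin alias) and re-enabling it afterwards, as in the 10:02 entry above, is normally driven through Cumin with the standard disable-puppet/enable-puppet wrappers. A sketch, with an illustrative reason string:

    sudo cumin 'A:cp' 'disable-puppet "hnowlan: traffic change in flight"'
    # ...change rolls out, then:
    sudo cumin 'A:cp' 'enable-puppet "hnowlan: traffic change in flight"'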
09:26 <arnaudb@cumin1001> dbctl commit (dc=all): 'Depooling db1170:3317 (T343198)', diff saved to https://phabricator.wikimedia.org/P52464 and previous config saved to /var/cache/conftool/dbconfig/20230912-092639-arnaudb.json [production]
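Depooling one section of a multi-instance database host, as above, addresses the instance as host:port; the same hedges apply as in the repooling sketch near the top:

    dbctl instance db1170:3317 depool
    dbctl config commit -m 'Depooling db1170:3317 (T343198)'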
09:26 <jmm@cumin2002> END (FAIL) - Cookbook sre.pki.restart-reboot (exit_code=99) rolling reboot on A:pki [production]
09:26 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
09:26 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
09:26 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T343198)', diff saved to https://phabricator.wikimedia.org/P52463 and previous config saved to /var/cache/conftool/dbconfig/20230912-092618-arnaudb.json [production]
09:26 <jmm@cumin2002> START - Cookbook sre.pki.restart-reboot rolling reboot on A:pki [production]
09:15 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2115.codfw.wmnet with reason: Maintenance [production]
09:15 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2115.codfw.wmnet with reason: Maintenance [production]
09:12 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1179.eqiad.wmnet with reason: Maintenance [production]
09:12 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1179.eqiad.wmnet with reason: Maintenance [production]
09:11 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P52461 and previous config saved to /var/cache/conftool/dbconfig/20230912-091112-arnaudb.json [production]
08:58 <claime> Running puppet on cp-text P:trafficserver::backend - T341780 [production]
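A targeted Puppet run over the text caches, as in the entry above, would typically go through Cumin by combining the cluster alias with the puppet-class selector; a sketch (aliases and the run-puppet-agent wrapper assumed from standard WMF tooling):

    sudo cumin 'A:cp-text and P:trafficserver::backend' 'run-puppet-agent'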
08:58 <claime> Sending 5% of global traffic to mw-on-k8s - T341780 [production]
08:56 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P52460 and previous config saved to /var/cache/conftool/dbconfig/20230912-085606-arnaudb.json [production]