2022-12-08
01:56 <bblack> lvs1017 - manually setting BGP MED to 101 and starting pybal (should come back and speak BGP, but not steal traffic from lvs1020) [production]
01:29 <bblack> lvs1017 - disable puppet and stop pybal to fix ipv6 for now [production]
01:27 <bblack> lvs1017: restart pybal, attempt to fix text-ipv6 service [production]
01:05 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "high-traffic1" lvs at all sites - T324336 [production]
01:00 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "high-traffic2" lvs at all sites - T324336 [production]
00:47 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "secondary" lvs at all sites - T324336 (5 hosts, ulsfo completed previously) [production]
00:45 <cwhite@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host logstash1012.eqiad.wmnet with OS bullseye [production]
00:29 <bblack> lvs4010: restart pybal to test etcd key changes - T324336 [production]
00:16 <bblack> disabling puppet on all cp and lvs hosts for conftool key changes. Please coordinate if any lvs/pybal/cpNNNN depooling/work is needed during this transition! [production]
00:12 <bblack@cumin1001> conftool action : set/pooled=yes; selector: service=cdn [production]
00:12 <bblack@cumin1001> conftool action : set/weight=1; selector: service=cdn [production]
00:07 <cwhite@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on logstash1012.eqiad.wmnet with reason: host reimage [production]
00:04 <cwhite@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on logstash1012.eqiad.wmnet with reason: host reimage [production]
2022-12-07
23:38 <cwhite@cumin2002> START - Cookbook sre.hosts.reimage for host logstash1012.eqiad.wmnet with OS bullseye [production]
23:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
23:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
23:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201 (T322618)', diff saved to https://phabricator.wikimedia.org/P42545 and previous config saved to /var/cache/conftool/dbconfig/20221207-233130-ladsgroup.json [production]
23:24 <mutante> mx1001 about to run out of disk again - apt-get clean; gzip /var/log/exim4/mainlog.1; find -mtime +31 -delete in /var/log/exim4 - deleting old logs to prevent mail server running out of disk - it was alerting in Icinga but same as conf* - monitoring works, alerting does not T305567 [production]
23:23 <mutante> mx1001 - apt-get clean; gzip /var/log/exim4/mainlog.1; find -mtime +31 -delete in /var/log/exim4 - deleting old logs to prevent mail server running out of disk - it was alerting in Icinga but same as conf* - monitoring works, alerting does not [production]
23:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201', diff saved to https://phabricator.wikimedia.org/P42544 and previous config saved to /var/cache/conftool/dbconfig/20221207-231623-ladsgroup.json [production]
23:14 <samtar@deploy1002> Finished scap: Backport for [[gerrit:865749|Make parsoid accept all content models. (T324711)]] (duration: 13m 57s) [production]
23:02 <samtar@deploy1002> samtar and samtar: Backport for [[gerrit:865749|Make parsoid accept all content models. (T324711)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet [production]
23:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201', diff saved to https://phabricator.wikimedia.org/P42543 and previous config saved to /var/cache/conftool/dbconfig/20221207-230116-ladsgroup.json [production]
23:00 <samtar@deploy1002> Started scap: Backport for [[gerrit:865749|Make parsoid accept all content models. (T324711)]] [production]
22:51 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:51 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:51 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:50 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
22:49 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:49 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
22:49 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:49 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
22:48 <bking@cumin2002> END (ERROR) - Cookbook sre.wdqs.data-reload (exit_code=97) [production]
22:48 <TheresNoTime> Going to backport [[gerrit:865749]] to wmf/1.40.0-wmf.13 for T324711 [production]
22:47 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:46 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201 (T322618)', diff saved to https://phabricator.wikimedia.org/P42542 and previous config saved to /var/cache/conftool/dbconfig/20221207-224610-ladsgroup.json [production]
22:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1201 (T322618)', diff saved to https://phabricator.wikimedia.org/P42541 and previous config saved to /var/cache/conftool/dbconfig/20221207-224502-ladsgroup.json [production]
22:44 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1201.eqiad.wmnet with reason: Maintenance [production]
22:44 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1201.eqiad.wmnet with reason: Maintenance [production]
22:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1187 (T322618)', diff saved to https://phabricator.wikimedia.org/P42540 and previous config saved to /var/cache/conftool/dbconfig/20221207-224440-ladsgroup.json [production]
22:41 <ryankemper> T301167 Downtimed `wdqs20[09-12]` for 7 days [production]
22:37 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:36 <ryankemper@puppetmaster1001> conftool action : set/weight=10:pooled=no; selector: name=wdqs2009.* [production]
22:36 <ryankemper@puppetmaster1001> conftool action : set/weight=10:pooled=no; selector: name=wdqs2010.* [production]
22:35 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:32 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:30 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:29 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:29 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1187', diff saved to https://phabricator.wikimedia.org/P42539 and previous config saved to /var/cache/conftool/dbconfig/20221207-222934-ladsgroup.json [production]