2022-12-08
05:26 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1156.eqiad.wmnet with reason: Maintenance [production]
05:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db2112 (re)pooling @ 10%: Maint done', diff saved to https://phabricator.wikimedia.org/P42546 and previous config saved to /var/cache/conftool/dbconfig/20221208-052036-ladsgroup.json [production]
05:19 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:19 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:17 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:17 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:14 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:14 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:06 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:06 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
03:37 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
03:36 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
02:24 <bblack> lvs1017 - restart pybal manually again, back on bgp_med=101 (traffic goes back to lvs1020) [production]
02:21 <bblack> restarting pybal on lvs1017 manually again with bgp_med=0 (should take traffic, may or may not do so very usefully!) [production]
02:05 <bblack> sretest1001 - puppet disabled, manipulating routing on this host to conduct tests... [production]
01:56 <bblack> lvs1017 - manually setting BGP MED to 101 and starting pybal (should come back and speak BGP, but not steal traffic from lvs1020) [production]
01:29 <bblack> lvs1017 - disable puppet and stop pybal to fix ipv6 for now [production]
01:27 <bblack> lvs1017: restart pybal, attempt to fix text-ipv6 service [production]
01:05 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "high-traffic1" lvs at all sites - T324336 [production]
01:00 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "high-traffic2" lvs at all sites - T324336 [production]
00:47 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "secondary" lvs at all sites - T324336 (5 hosts, ulsfo completed previously) [production]
00:45 <cwhite@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host logstash1012.eqiad.wmnet with OS bullseye [production]
00:29 <bblack> lvs4010: restart pybal to test etcd key changes - T324336 [production]
00:16 <bblack> disabling puppet on all cp and lvs hosts for conftool key changes. Please coordinate if any lvs/pybal/cpNNNN depooling/work is needed during this transition! [production]
00:12 <bblack@cumin1001> conftool action : set/pooled=yes; selector: service=cdn [production]
00:12 <bblack@cumin1001> conftool action : set/weight=1; selector: service=cdn [production]
00:07 <cwhite@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on logstash1012.eqiad.wmnet with reason: host reimage [production]
00:04 <cwhite@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on logstash1012.eqiad.wmnet with reason: host reimage [production]
2022-12-07
23:38 <cwhite@cumin2002> START - Cookbook sre.hosts.reimage for host logstash1012.eqiad.wmnet with OS bullseye [production]
23:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
23:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
23:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201 (T322618)', diff saved to https://phabricator.wikimedia.org/P42545 and previous config saved to /var/cache/conftool/dbconfig/20221207-233130-ladsgroup.json [production]
23:24 <mutante> mx1001 about to run out of disk again - apt-get clean; gzip /var/log/exim4/mainlog.1; find -mtime +31 -delete in /var/log/exim4 - deleting old logs to prevent the mail server running out of disk - it was alerting in Icinga but, same as conf*, monitoring works, alerting does not T305567 [production]
23:23 <mutante> mx1001 - apt-get clean; gzip /var/log/exim4/mainlog.1; find -mtime +31 -delete in /var/log/exim4 - deleting old logs to prevent the mail server running out of disk - it was alerting in Icinga but, same as conf*, monitoring works, alerting does not [production]
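The mx1001 entries above use the standard age-based `find -mtime +31 -delete` cleanup pattern. A minimal sketch of that pattern, run against a scratch directory rather than /var/log/exim4 (the directory and filenames here are illustrative, not from the actual host; GNU find and touch are assumed):

```shell
# Demonstrate age-based log deletion in a throwaway directory.
dir=$(mktemp -d)
touch "$dir/mainlog.1"                    # recent rotated log: must survive
touch -d '40 days ago' "$dir/mainlog.40"  # older than 31 days: gets deleted
find "$dir" -type f -mtime +31 -delete    # same predicate as the log entry
ls "$dir"                                 # only mainlog.1 remains
```

On the real host this ran alongside `apt-get clean` and compression of the newest rotated log, which free space without touching recent history.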
23:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201', diff saved to https://phabricator.wikimedia.org/P42544 and previous config saved to /var/cache/conftool/dbconfig/20221207-231623-ladsgroup.json [production]
23:14 <samtar@deploy1002> Finished scap: Backport for [[gerrit:865749|Make parsoid accept all content models. (T324711)]] (duration: 13m 57s) [production]
23:02 <samtar@deploy1002> samtar and samtar: Backport for [[gerrit:865749|Make parsoid accept all content models. (T324711)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet [production]
23:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201', diff saved to https://phabricator.wikimedia.org/P42543 and previous config saved to /var/cache/conftool/dbconfig/20221207-230116-ladsgroup.json [production]
23:00 <samtar@deploy1002> Started scap: Backport for [[gerrit:865749|Make parsoid accept all content models. (T324711)]] [production]
22:51 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:51 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:51 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]
22:50 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
22:49 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:49 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
22:49 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
22:49 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
22:48 <bking@cumin2002> END (ERROR) - Cookbook sre.wdqs.data-reload (exit_code=97) [production]
22:48 <TheresNoTime> Going to backport [[gerrit:865749]] to wmf/1.40.0-wmf.13 for T324711 [production]
22:47 <bking@cumin2002> START - Cookbook sre.wdqs.data-reload [production]