2022-12-08
05:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3317 (T322618)', diff saved to https://phabricator.wikimedia.org/P42551 and previous config saved to /var/cache/conftool/dbconfig/20221208-053447-ladsgroup.json [production]
05:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2108 (T322618)', diff saved to https://phabricator.wikimedia.org/P42550 and previous config saved to /var/cache/conftool/dbconfig/20221208-053253-ladsgroup.json [production]
05:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2108.codfw.wmnet with reason: Maintenance [production]
05:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1098:3317 (T322618)', diff saved to https://phabricator.wikimedia.org/P42549 and previous config saved to /var/cache/conftool/dbconfig/20221208-053236-ladsgroup.json [production]
05:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
05:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2108.codfw.wmnet with reason: Maintenance [production]
05:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
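The depool → maintain → repool cycle recorded in the entries above roughly corresponds to a pair of dbctl calls on either side of the maintenance window. dbctl is WMF's database pooling tool and is not available outside that infrastructure, so this sketch only echoes each command; the exact subcommand syntax is an assumption reconstructed from the logged commit messages.

```shell
# Dry-run sketch of the depool/repool cycle above. 'dbctl' is not installed
# here, so a stub echoes each call instead of executing it; the subcommand
# names are an assumption based on the log messages, not verified syntax.
dbctl() { echo "dbctl $*"; }

dbctl instance db1098:3317 depool
dbctl config commit -m 'Depooling db1098:3317 (T322618)'
# ... maintenance runs under the 6:00:00 downtime window set by the cookbook ...
dbctl instance db1098:3317 pool
dbctl config commit -m 'Repooling after maintenance db1098:3317 (T322618)'
```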
05:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2100.codfw.wmnet with reason: Maintenance [production]
05:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2100.codfw.wmnet with reason: Maintenance [production]
05:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2098.codfw.wmnet with reason: Maintenance [production]
05:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2098.codfw.wmnet with reason: Maintenance [production]
05:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1156 (T322618)', diff saved to https://phabricator.wikimedia.org/P42548 and previous config saved to /var/cache/conftool/dbconfig/20221208-052917-ladsgroup.json [production]
05:27 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1156 (T322618)', diff saved to https://phabricator.wikimedia.org/P42547 and previous config saved to /var/cache/conftool/dbconfig/20221208-052705-ladsgroup.json [production]
05:26 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
05:26 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
05:26 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1156.eqiad.wmnet with reason: Maintenance [production]
05:26 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1156.eqiad.wmnet with reason: Maintenance [production]
05:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db2112 (re)pooling @ 10%: Maint done', diff saved to https://phabricator.wikimedia.org/P42546 and previous config saved to /var/cache/conftool/dbconfig/20221208-052036-ladsgroup.json [production]
05:19 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:19 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:17 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:17 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:14 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:14 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:06 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:06 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
03:37 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
03:36 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
02:24 <bblack> lvs1017 - restart pybal manually again, back on bgp_med=101 (traffic goes back to lvs1020) [production]
02:21 <bblack> restarting pybal on lvs1017 manually again with bgp_med=0 (should take traffic, may or may not do so very usefully!) [production]
02:05 <bblack> sretest1001 - puppet disabled, manipulating routing on this host to conduct tests... [production]
01:56 <bblack> lvs1017 - manually setting BGP MED to 101 and starting pybal (should come back and speak BGP, but not steal traffic from lvs1020) [production]
01:29 <bblack> lvs1017 - disable puppet and stop pybal to fix ipv6 for now [production]
01:27 <bblack> lvs1017: restart pybal, attempt to fix text-ipv6 service [production]
01:05 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "high-traffic1" lvs at all sites - T324336 [production]
01:00 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "high-traffic2" lvs at all sites - T324336 [production]
00:47 <bblack> lvsNNNN: restart pybal to apply etcd key changes on all "secondary" lvs at all sites - T324336 (5 hosts, ulsfo completed previously) [production]
00:45 <cwhite@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host logstash1012.eqiad.wmnet with OS bullseye [production]
00:29 <bblack> lvs4010: restart pybal to test etcd key changes - T324336 [production]
00:16 <bblack> disabling puppet on all cp and lvs hosts for conftool key changes. Please coordinate if any lvs/pybal/cpNNNN depooling/work is needed during this transition! [production]
00:12 <bblack@cumin1001> conftool action : set/pooled=yes; selector: service=cdn [production]
00:12 <bblack@cumin1001> conftool action : set/weight=1; selector: service=cdn [production]
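The two conftool actions above (weight reset, then pooling, across the `service=cdn` selector) can be expressed as confctl invocations. confctl is conftool's CLI and is not available here, so the stub below just echoes the commands; the `select`-based syntax is an assumption reconstructed from the logged action/selector pairs.

```shell
# Dry-run sketch of the logged conftool actions. 'confctl' is not installed
# here, so a stub echoes each call; the command shape is an assumption based
# on the 'conftool action : ...; selector: ...' form in the log.
confctl() { echo "confctl $*"; }

confctl select 'service=cdn' set/weight=1    # set weight=1 on everything matching the selector
confctl select 'service=cdn' set/pooled=yes  # then pool everything matching it
```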
00:07 <cwhite@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on logstash1012.eqiad.wmnet with reason: host reimage [production]
00:04 <cwhite@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on logstash1012.eqiad.wmnet with reason: host reimage [production]
2022-12-07
23:38 <cwhite@cumin2002> START - Cookbook sre.hosts.reimage for host logstash1012.eqiad.wmnet with OS bullseye [production]
23:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
23:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
23:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1201 (T322618)', diff saved to https://phabricator.wikimedia.org/P42545 and previous config saved to /var/cache/conftool/dbconfig/20221207-233130-ladsgroup.json [production]
23:24 <mutante> mx1001 about to run out of disk again - apt-get clean; gzip /var/log/exim4/mainlog.1; find -mtime +31 -delete in /var/log/exim4 - deleting old logs to prevent the mail server running out of disk - it was alerting in Icinga but, same as conf*, monitoring works, alerting does not T305567 [production]
23:23 <mutante> mx1001 - apt-get clean; gzip /var/log/exim4/mainlog.1; find -mtime +31 -delete in /var/log/exim4 - deleting old logs to prevent the mail server running out of disk - it was alerting in Icinga but, same as conf*, monitoring works, alerting does not [production]
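The cleanup sequence in the two entries above (compress the freshly rotated log, then delete anything older than 31 days) can be sketched safely against a scratch directory; the real target was /var/log/exim4, and the file names below are illustrative.

```shell
# Sketch of the logged cleanup, pointed at a scratch directory so it is safe
# to run. The 31-day retention comes from the log entry; everything else
# here (file names, ages) is illustrative setup.
logdir=$(mktemp -d)
printf 'recent rotated log\n' > "$logdir/mainlog.1"   # fresh rotation: compress, keep
printf 'stale log\n' > "$logdir/mainlog.35"
touch -d '40 days ago' "$logdir/mainlog.35"           # 40 days old: should be deleted

gzip -f "$logdir/mainlog.1"                # compress the recent rotated log
find "$logdir" -type f -mtime +31 -delete  # then drop files older than 31 days
ls "$logdir"                               # prints only: mainlog.1.gz
```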