2023-09-12
11:41 <brouberol@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on 7 hosts with reason: Mute initial failures of hadoop-hdfs-datanode.service [production]
11:40 <filippo@cumin1001> conftool action : set/pooled=no; selector: name=titan1002.eqiad.wmnet,service=thanos-web [production]
11:40 <filippo@cumin1001> conftool action : set/pooled=yes; selector: name=titan1002.eqiad.wmnet,service=thanos-web [production]
11:39 <filippo@cumin1001> conftool action : set/pooled=no; selector: name=titan1001.eqiad.wmnet,service=thanos-web [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan2002.codfw.wmnet [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan2001.codfw.wmnet [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan1002.eqiad.wmnet [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan1001.eqiad.wmnet [production]
11:36 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan* [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan2002.codfw.wmnet [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan2001.codfw.wmnet [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan1002.eqiad.wmnet [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan1001.eqiad.wmnet [production]
11:32 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 75%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52472 and previous config saved to /var/cache/conftool/dbconfig/20230912-113207-root.json [production]
11:18 <aborrero@cumin1001> END (ERROR) - Cookbook sre.hosts.reboot-single (exit_code=97) for host cloudservices1004.wikimedia.org [production]
11:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52471 and previous config saved to /var/cache/conftool/dbconfig/20230912-111702-root.json [production]
11:03 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-web: apply [production]
11:03 <cgoubert@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-web: apply [production]
11:03 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]
11:02 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/mw-web: apply [production]
11:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 25%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52470 and previous config saved to /var/cache/conftool/dbconfig/20230912-110157-root.json [production]
10:54 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudservices1004.wikimedia.org [production]
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 10%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52468 and previous config saved to /var/cache/conftool/dbconfig/20230912-104652-root.json [production]
10:45 <moritzm> rebalance Ganeti cluster in eqiad/C following node reboots [production]
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1028.eqiad.wmnet [production]
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1028.eqiad.wmnet [production]
10:37 <taavi@cumin1001> conftool action : set/pooled=yes:weight=10; selector: cluster=cloudweb [production]
10:32 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1028.eqiad.wmnet [production]
10:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 5%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52467 and previous config saved to /var/cache/conftool/dbconfig/20230912-103148-root.json [production]
10:25 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1028.eqiad.wmnet [production]
10:23 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pki-root1002.eqiad.wmnet [production]
10:21 <jgiannelos@deploy1002> helmfile [eqiad] DONE helmfile.d/services/wikifeeds: apply [production]
10:21 <jgiannelos@deploy1002> helmfile [eqiad] START helmfile.d/services/wikifeeds: apply [production]
10:16 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host pki-root1002.eqiad.wmnet [production]
10:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 3%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52466 and previous config saved to /var/cache/conftool/dbconfig/20230912-101643-root.json [production]
10:13 <jmm@cumin2002> END (PASS) - Cookbook sre.pki.restart-reboot (exit_code=0) rolling reboot on A:pki [production]
10:13 <moritzm> disabled nginx/puppetdb/postgresql/microservice on puppetdb1002/2002 to ensure nothing hits the old endpoints anymore [production]
10:09 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
10:09 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
10:09 <jgiannelos@deploy1002> helmfile [staging] DONE helmfile.d/services/wikifeeds: apply [production]
10:08 <jgiannelos@deploy1002> helmfile [staging] START helmfile.d/services/wikifeeds: apply [production]
10:05 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on puppetdb1002.eqiad.wmnet with reason: Disable puppetdb/postgres on old nodes to ensure nothing hits them anyway [production]
10:05 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on puppetdb1002.eqiad.wmnet with reason: Disable puppetdb/postgres on old nodes to ensure nothing hits them anyway [production]
10:02 <hnowlan> enabling puppet on A:cp [production]
10:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 1%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52465 and previous config saved to /var/cache/conftool/dbconfig/20230912-100138-root.json [production]
09:59 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) pki.discovery.wmnet. on all recursors [production]
09:59 <jmm@cumin2002> START - Cookbook sre.dns.wipe-cache pki.discovery.wmnet. on all recursors [production]
09:53 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) pki.discovery.wmnet. on all recursors [production]
09:52 <jmm@cumin2002> START - Cookbook sre.dns.wipe-cache pki.discovery.wmnet. on all recursors [production]
09:52 <jmm@cumin2002> START - Cookbook sre.pki.restart-reboot rolling reboot on A:pki [production]