2023-09-12
12:11 <aborrero@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
12:11 <aborrero@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudservices1004.wikimedia.org decommissioned, removing all IPs except the asset tag one - aborrero@cumin1001" [production]
12:09 <aborrero@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudservices1004.wikimedia.org decommissioned, removing all IPs except the asset tag one - aborrero@cumin1001" [production]
12:07 <aborrero@cumin1001> START - Cookbook sre.dns.netbox [production]
11:59 <aborrero@cumin1001> START - Cookbook sre.hosts.decommission for hosts cloudservices1004.wikimedia.org [production]
11:57 <godog> pool thanos[12]001 for thanos.w.o - T341999 [production]
11:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 100%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52473 and previous config saved to /var/cache/conftool/dbconfig/20230912-114711-root.json [production]
11:43 <godog> pool titan hosts alongside thanos-fe for thanos-query / thanos-web services - T341999 [production]
11:42 <filippo@cumin1001> conftool action : set/pooled=no; selector: name=titan1001.eqiad.wmnet,service=thanos-web [production]
11:42 <filippo@cumin1001> conftool action : set/pooled=no; selector: name=titan1002.eqiad.wmnet,service=thanos-web [production]
11:41 <brouberol@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on 7 hosts with reason: Mute initial failures of hadoop-hdfs-datanode.service [production]
11:41 <brouberol@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on 7 hosts with reason: Mute initial failures of hadoop-hdfs-datanode.service [production]
11:40 <filippo@cumin1001> conftool action : set/pooled=no; selector: name=titan1002.eqiad.wmnet,service=thanos-web [production]
11:40 <filippo@cumin1001> conftool action : set/pooled=yes; selector: name=titan1002.eqiad.wmnet,service=thanos-web [production]
11:39 <filippo@cumin1001> conftool action : set/pooled=no; selector: name=titan1001.eqiad.wmnet,service=thanos-web [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan2002.codfw.wmnet [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan2001.codfw.wmnet [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan1002.eqiad.wmnet [production]
11:37 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan1001.eqiad.wmnet [production]
11:36 <filippo@cumin1001> conftool action : set/weight=100; selector: name=titan* [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan2002.codfw.wmnet [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan2001.codfw.wmnet [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan1002.eqiad.wmnet [production]
11:35 <filippo@cumin1001> conftool action : set/weight=10; selector: name=titan1001.eqiad.wmnet [production]
11:32 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 75%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52472 and previous config saved to /var/cache/conftool/dbconfig/20230912-113207-root.json [production]
11:18 <aborrero@cumin1001> END (ERROR) - Cookbook sre.hosts.reboot-single (exit_code=97) for host cloudservices1004.wikimedia.org [production]
11:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52471 and previous config saved to /var/cache/conftool/dbconfig/20230912-111702-root.json [production]
11:03 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-web: apply [production]
11:03 <cgoubert@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-web: apply [production]
11:03 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]
11:02 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/mw-web: apply [production]
11:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 25%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52470 and previous config saved to /var/cache/conftool/dbconfig/20230912-110157-root.json [production]
10:54 <aborrero@cumin1001> START - Cookbook sre.hosts.reboot-single for host cloudservices1004.wikimedia.org [production]
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 10%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52468 and previous config saved to /var/cache/conftool/dbconfig/20230912-104652-root.json [production]
10:45 <moritzm> rebalance Ganeti cluster in eqiad/C following node reboots [production]
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1028.eqiad.wmnet [production]
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1028.eqiad.wmnet [production]
10:37 <taavi@cumin1001> conftool action : set/pooled=yes:weight=10; selector: cluster=cloudweb [production]
10:32 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1028.eqiad.wmnet [production]
10:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 5%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52467 and previous config saved to /var/cache/conftool/dbconfig/20230912-103148-root.json [production]
10:25 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1028.eqiad.wmnet [production]
10:23 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pki-root1002.eqiad.wmnet [production]
10:21 <jgiannelos@deploy1002> helmfile [eqiad] DONE helmfile.d/services/wikifeeds: apply [production]
10:21 <jgiannelos@deploy1002> helmfile [eqiad] START helmfile.d/services/wikifeeds: apply [production]
10:16 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host pki-root1002.eqiad.wmnet [production]
10:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db2158 (re)pooling @ 3%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52466 and previous config saved to /var/cache/conftool/dbconfig/20230912-101643-root.json [production]
10:13 <jmm@cumin2002> END (PASS) - Cookbook sre.pki.restart-reboot (exit_code=0) rolling reboot on A:pki [production]
10:13 <moritzm> disabled nginx/puppetdb/postgresql/microservice on puppetdb1002/2002 to ensure nothing hits the old endpoints anymore [production]
10:09 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
10:09 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]