2023-08-30 §
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52084 and previous config saved to /var/cache/conftool/dbconfig/20230830-124933-ladsgroup.json [production]
12:47 <taavi@deploy1002> sukhe and taavi: Continuing with sync [production]
12:46 <taavi@deploy1002> sukhe and taavi: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:46 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/linkrecommendation: apply [production]
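Service-level deployments like the linkrecommendation apply above are run per chart from the deployment server. A minimal sketch, assuming the standard deployment-charts checkout path (the path is inferred from the 'helmfile.d/services/linkrecommendation' fragment in the log, not confirmed by it):

  $ cd /srv/deployment-charts/helmfile.d/services/linkrecommendation
  $ helmfile -e staging apply    # diff the staging release, then apply it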
12:45 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1009.eqiad.wmnet [production]
12:43 <taavi@deploy1002> Started scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] [production]
12:43 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] (duration: 25m 53s) [production]
12:38 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1009.eqiad.wmnet [production]
12:37 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1008.eqiad.wmnet [production]
12:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P52083 and previous config saved to /var/cache/conftool/dbconfig/20230830-123427-ladsgroup.json [production]
12:33 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1034.eqiad.wmnet [production]
12:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2170:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52082 and previous config saved to /var/cache/conftool/dbconfig/20230830-123001-ladsgroup.json [production]
12:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2170.codfw.wmnet with reason: Maintenance [production]
12:29 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1008.eqiad.wmnet [production]
12:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2170.codfw.wmnet with reason: Maintenance [production]
12:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148 (T343718)', diff saved to https://phabricator.wikimedia.org/P52081 and previous config saved to /var/cache/conftool/dbconfig/20230830-122940-ladsgroup.json [production]
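The ladsgroup entries above and below follow the standard depool, downtime, maintain, repool cycle for a database instance. A hedged sketch of the commands behind such log lines, run from a cumin host; the exact flags (notably the pooling percentage) are assumptions from memory of the dbctl CLI, not taken from this log:

  # Take the multi-instance replica out of rotation and commit to etcd:
  $ sudo dbctl instance db2170:3312 depool
  $ sudo dbctl config commit -m 'Depooling db2170:3312 (T343718)'
  # Silence alerting for the maintenance window:
  $ sudo cookbook sre.hosts.downtime --days 1 -r 'Maintenance' db2170.codfw.wmnet
  # Afterwards, repool in steps rather than all at once:
  $ sudo dbctl instance db2170:3312 pool -p 25
  $ sudo dbctl config commit -m 'Repooling after maintenance db2170 (T343718)'

Each commit is what produces the 'diff saved to ... and previous config saved to ...' lines in the log.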
12:29 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
12:28 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
12:28 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
12:27 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
12:27 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
12:26 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
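The six elukey helmfile entries above are admin-level syncs of the three ML Kubernetes clusters, one environment at a time. A rough sketch of the invocation, assuming the usual deployment-charts checkout on deploy1002 (the directory is inferred from the 'helmfile.d/admin' fragment in the log):

  $ cd /srv/deployment-charts/helmfile.d/admin
  $ helmfile -e ml-staging-codfw sync
  $ helmfile -e ml-serve-codfw sync
  $ helmfile -e ml-serve-eqiad sync

Each sync produces one paired START/DONE line in the log.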
12:25 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1007.eqiad.wmnet [production]
12:19 <ladsgroup@deploy1002> isaranto and ladsgroup: Continuing with sync [production]
12:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P52080 and previous config saved to /var/cache/conftool/dbconfig/20230830-121921-ladsgroup.json [production]
12:19 <ladsgroup@deploy1002> isaranto and ladsgroup: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:19 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1007.eqiad.wmnet [production]
12:17 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] [production]
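The ladsgroup scap entries above (and the matching taavi ones earlier) are a single scap backport run: the tool first syncs the change to the mwdebug testservers and the mw-debug Kubernetes deployment, waits for the deployer to verify via the X-Wikimedia-Debug (XWD) browser option, then proceeds to the full fleet, which is what 'Continuing with sync' marks. A minimal sketch, with the change number taken from the log:

  # On deploy1002; scap backport accepts Gerrit change numbers:
  $ scap backport 953590
  # ...verify on the mwdebug hosts via XWD, then confirm the prompt
  # to continue the fleet-wide sync.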
12:16 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudservices1006.eqiad.wmnet with reason: host reimage [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1033.eqiad.wmnet [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1033.eqiad.wmnet [production]
12:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148', diff saved to https://phabricator.wikimedia.org/P52079 and previous config saved to /var/cache/conftool/dbconfig/20230830-121433-ladsgroup.json [production]
12:13 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudservices1006.eqiad.wmnet with reason: host reimage [production]
12:12 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:10 <aborrero@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:10 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
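The cloudservices1006 entries show a reimage attempt failing (exit_code=99 is the cookbook runner's generic failure code) and being re-run immediately; the 2:00:00 downtime is set by the cookbook itself. A sketch of the invocation, with the option spelling assumed rather than confirmed:

  $ sudo cookbook sre.hosts.reimage --os bullseye cloudservices1006
  # On an exit_code=99 failure, the usual remedy is to run it again,
  # as the repeated START/FAIL/START sequence above shows.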
12:09 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1033.eqiad.wmnet [production]
12:08 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host sretest1001.eqiad.wmnet with OS bullseye [production]
12:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1220 (T344589)', diff saved to https://phabricator.wikimedia.org/P52078 and previous config saved to /var/cache/conftool/dbconfig/20230830-120511-ladsgroup.json [production]
12:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52077 and previous config saved to /var/cache/conftool/dbconfig/20230830-120415-ladsgroup.json [production]
11:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148', diff saved to https://phabricator.wikimedia.org/P52076 and previous config saved to /var/cache/conftool/dbconfig/20230830-115927-ladsgroup.json [production]
11:59 <aborrero@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
11:57 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1033.eqiad.wmnet [production]
11:56 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1032.eqiad.wmnet [production]
11:56 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1032.eqiad.wmnet [production]
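jmm's ganeti1032/ganeti1033 entries show the rolling-reboot pattern for Ganeti nodes: sre.ganeti.drain-node migrates the VMs off a node and, judging by the reboot-single START/END pairs nested inside each drain, reboots it before finishing. The same reboot-single cookbook also appears standalone for the ores hosts earlier in the log. A sketch, with arguments assumed:

  $ sudo cookbook sre.ganeti.drain-node ganeti1033.eqiad.wmnet
  # Standalone reboot of a single host:
  $ sudo cookbook sre.hosts.reboot-single ores1009.eqiad.wmnet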
11:52 <cmooney@cumin1001> END (FAIL) - Cookbook sre.network.provision (exit_code=99) for device ssw1-a1-codfw.mgmt.codfw.wmnet [production]
11:52 <cmooney@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:52 <cmooney@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Remove management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
11:51 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on sretest1001.eqiad.wmnet with reason: host reimage [production]
11:51 <cmooney@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Remove management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
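The cmooney entries above are one Netbox-driven DNS change: after the failed sre.network.provision run for ssw1-a1-codfw, its management record was removed in Netbox, sre.dns.netbox propagated the data to the authoritative DNS, and that in turn triggered sre.puppet.sync-netbox-hiera to regenerate the Netbox-derived hiera data. A sketch, assuming the cookbook takes a free-form commit message as its argument:

  $ sudo cookbook sre.dns.netbox 'Remove management record for ssw1-a1-codfw'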