2023-08-30
12:56 <elukey> restart kubelet on ml-serve1001 to clear prometheus metrics [production]
12:55 <taavi@deploy1002> Finished scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] (duration: 11m 28s) [production]
12:54 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
12:54 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
12:54 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1034.eqiad.wmnet [production]
12:53 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
12:53 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
12:53 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
12:52 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
12:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1188 (T343718)', diff saved to https://phabricator.wikimedia.org/P52086 and previous config saved to /var/cache/conftool/dbconfig/20230830-125206-ladsgroup.json [production]
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1188 (T343718)', diff saved to https://phabricator.wikimedia.org/P52085 and previous config saved to /var/cache/conftool/dbconfig/20230830-124954-ladsgroup.json [production]
12:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
12:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52084 and previous config saved to /var/cache/conftool/dbconfig/20230830-124933-ladsgroup.json [production]
12:47 <taavi@deploy1002> sukhe and taavi: Continuing with sync [production]
12:46 <taavi@deploy1002> sukhe and taavi: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:46 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/linkrecommendation: apply [production]
12:45 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1009.eqiad.wmnet [production]
12:43 <taavi@deploy1002> Started scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] [production]
12:43 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] (duration: 25m 53s) [production]
12:38 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1009.eqiad.wmnet [production]
12:37 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1008.eqiad.wmnet [production]
12:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P52083 and previous config saved to /var/cache/conftool/dbconfig/20230830-123427-ladsgroup.json [production]
12:33 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1034.eqiad.wmnet [production]
12:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2170:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52082 and previous config saved to /var/cache/conftool/dbconfig/20230830-123001-ladsgroup.json [production]
12:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2170.codfw.wmnet with reason: Maintenance [production]
12:29 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1008.eqiad.wmnet [production]
12:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2170.codfw.wmnet with reason: Maintenance [production]
12:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148 (T343718)', diff saved to https://phabricator.wikimedia.org/P52081 and previous config saved to /var/cache/conftool/dbconfig/20230830-122940-ladsgroup.json [production]
12:29 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
12:28 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
12:28 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
12:27 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
12:27 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
12:26 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
12:25 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1007.eqiad.wmnet [production]
12:19 <ladsgroup@deploy1002> isaranto and ladsgroup: Continuing with sync [production]
12:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P52080 and previous config saved to /var/cache/conftool/dbconfig/20230830-121921-ladsgroup.json [production]
12:19 <ladsgroup@deploy1002> isaranto and ladsgroup: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:19 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1007.eqiad.wmnet [production]
12:17 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] [production]
12:16 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudservices1006.eqiad.wmnet with reason: host reimage [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1033.eqiad.wmnet [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1033.eqiad.wmnet [production]
12:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148', diff saved to https://phabricator.wikimedia.org/P52079 and previous config saved to /var/cache/conftool/dbconfig/20230830-121433-ladsgroup.json [production]
12:13 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudservices1006.eqiad.wmnet with reason: host reimage [production]
12:12 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:10 <aborrero@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:10 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:09 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1033.eqiad.wmnet [production]