2022-04-06
11:24 <mmandere> depool cp4033 for reimage - T290005 [production]
11:23 <marostegui> dbmaint s3@eqiad T297189 [production]
11:23 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudgw1002.eqiad.wmnet with OS bullseye [production]
11:22 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
11:20 <mmandere> pool cp4027 with HAProxy as TLS termination layer - T290005 [production]
11:12 <mmandere@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp4027.ulsfo.wmnet with OS buster [production]
11:10 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
11:03 <mmandere> pool cp3052 with HAProxy as TLS termination layer - T290005 [production]
11:01 <mmandere@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp3052.esams.wmnet with OS buster [production]
11:00 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
10:57 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
10:48 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:47 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
10:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1163 (T298565)', diff saved to https://phabricator.wikimedia.org/P24150 and previous config saved to /var/cache/conftool/dbconfig/20220406-103929-ladsgroup.json [production]
10:39 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
10:39 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
10:38 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
10:32 <mmandere@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cp3052.esams.wmnet with reason: host reimage [production]
10:30 <jynus> rerunning es4 dump on backup2002 [production]
10:29 <mmandere@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cp4027.ulsfo.wmnet with reason: host reimage [production]
10:29 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:28 <mmandere@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cp3052.esams.wmnet with reason: host reimage [production]
10:27 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host deploy2002.codfw.wmnet [production]
10:25 <mmandere@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cp4027.ulsfo.wmnet with reason: host reimage [production]
10:24 <aborrero@cumin1001> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host cloudgw1002.eqiad.wmnet with OS bullseye [production]
10:23 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
10:19 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
10:16 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host deploy2002.codfw.wmnet [production]
10:13 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
10:10 <mmandere@cumin1001> START - Cookbook sre.hosts.reimage for host cp4027.ulsfo.wmnet with OS buster [production]
10:07 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
10:06 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudgw1002.eqiad.wmnet with reason: host reimage [production]
10:03 <mmandere> depool cp4027 for reimage - T290005 [production]
10:02 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudgw1002.eqiad.wmnet with reason: host reimage [production]
09:58 <mmandere@cumin1001> START - Cookbook sre.hosts.reimage for host cp3052.esams.wmnet with OS buster [production]
09:57 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
09:57 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host xhgui1001.eqiad.wmnet [production]
09:55 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host xhgui1001.eqiad.wmnet [production]
09:54 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
09:52 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on dbstore1003.eqiad.wmnet with reason: Maintenance [production]
09:52 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on dbstore1003.eqiad.wmnet with reason: Maintenance [production]
09:51 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudgw1002.eqiad.wmnet with OS bullseye [production]
09:50 <mmandere> depool cp3052 for reimage - T290005 [production]
09:47 <moritzm> installing mariadb-10.3 updates from the buster 10.12 point release (different from wmf-mariadb packages) [production]
09:44 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
09:24 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudgw1002.eqiad.wmnet with OS bullseye [production]
09:21 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host webperf1003.eqiad.wmnet [production]
09:19 <btullis@cumin1001> END (PASS) - Cookbook sre.presto.reboot-workers (exit_code=0) for Presto analytics cluster: Reboot Presto nodes [production]
09:17 <klausman@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
09:17 <klausman@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]