2022-07-20
04:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db2168 to dbctl in s7 and s8 T311493', diff saved to https://phabricator.wikimedia.org/P31473 and previous config saved to /var/cache/conftool/dbconfig/20220720-045918-marostegui.json [production]
04:57 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REIMAGE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reimage (bullseye upgrade) - bking@cumin1001 - T289135 [production]
04:50 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3317 (T312990)', diff saved to https://phabricator.wikimedia.org/P31472 and previous config saved to /var/cache/conftool/dbconfig/20220720-045004-marostegui.json [production]
04:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3317 (T312990)', diff saved to https://phabricator.wikimedia.org/P31471 and previous config saved to /var/cache/conftool/dbconfig/20220720-044729-marostegui.json [production]
04:47 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
04:47 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
04:43 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2123.codfw.wmnet with reason: Maintenance [production]
04:43 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2123.codfw.wmnet with reason: Maintenance [production]
04:43 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 8 hosts with reason: Maintenance [production]
04:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 8 hosts with reason: Maintenance [production]
04:10 <rzl> rzl@kubemaster1001:~$ sudo systemctl restart kube-apiserver [production]
04:08 <rzl> rzl@kubemaster1002:~$ sudo systemctl restart kube-apiserver [production]
03:48 <rzl> rzl@cumin2002:~$ sudo cumin dbproxy[1019,1020,1021].eqiad.wmnet 'systemctl reload haproxy' [production]
03:37 <rzl> rzl@dbproxy1018:~$ sudo systemctl reload haproxy [production]
03:30 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REIMAGE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reimage (bullseye upgrade) - bking@cumin1001 - T289135 [production]
03:19 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host elastic2060.codfw.wmnet with OS bullseye [production]
03:19 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2060.codfw.wmnet with OS bullseye [production]
03:10 <tstarling@deploy1002> Finished scap: revert yue -> zh fallback, needs LC rebuild in both branches T296188 (duration: 19m 41s) [production]
02:59 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
02:58 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
02:58 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
02:54 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
02:51 <tstarling@deploy1002> Started scap: revert yue -> zh fallback, needs LC rebuild in both branches T296188 [production]
02:29 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
02:25 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
02:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
02:19 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
01:49 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2052.codfw.wmnet with OS bullseye [production]
01:27 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2052.codfw.wmnet with reason: host reimage [production]
01:24 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2052.codfw.wmnet with reason: host reimage [production]
01:04 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2052.codfw.wmnet with OS bullseye [production]
01:00 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2051.codfw.wmnet with OS bullseye [production]
00:43 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2051.codfw.wmnet with reason: host reimage [production]
00:39 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2051.codfw.wmnet with reason: host reimage [production]
00:22 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2051.codfw.wmnet with OS bullseye [production]
2022-07-19
22:59 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on 8 hosts with reason: Maintenance [production]
22:59 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on 8 hosts with reason: Maintenance [production]
22:59 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
22:59 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
22:58 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on dbstore1007.eqiad.wmnet with reason: Maintenance [production]
22:58 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on dbstore1007.eqiad.wmnet with reason: Maintenance [production]
22:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1156 (T312990)', diff saved to https://phabricator.wikimedia.org/P31470 and previous config saved to /var/cache/conftool/dbconfig/20220719-225828-marostegui.json [production]
22:57 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2050.codfw.wmnet with OS bullseye [production]
22:51 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
22:45 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
22:45 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
22:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1156', diff saved to https://phabricator.wikimedia.org/P31469 and previous config saved to /var/cache/conftool/dbconfig/20220719-224323-marostegui.json [production]
22:38 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
22:35 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2050.codfw.wmnet with reason: host reimage [production]
22:31 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2050.codfw.wmnet with reason: host reimage [production]