2024-02-10
02:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1230 (T352010)', diff saved to https://phabricator.wikimedia.org/P56610 and previous config saved to /var/cache/conftool/dbconfig/20240210-021119-ladsgroup.json [production]
01:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1230', diff saved to https://phabricator.wikimedia.org/P56609 and previous config saved to /var/cache/conftool/dbconfig/20240210-015612-ladsgroup.json [production]
01:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1230', diff saved to https://phabricator.wikimedia.org/P56608 and previous config saved to /var/cache/conftool/dbconfig/20240210-014106-ladsgroup.json [production]
01:25 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1230 (T352010)', diff saved to https://phabricator.wikimedia.org/P56607 and previous config saved to /var/cache/conftool/dbconfig/20240210-012559-ladsgroup.json [production]
2024-02-09
23:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1230 (T352010)', diff saved to https://phabricator.wikimedia.org/P56606 and previous config saved to /var/cache/conftool/dbconfig/20240209-230425-ladsgroup.json [production]
23:04 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1230.eqiad.wmnet with reason: Maintenance [production]
23:04 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1230.eqiad.wmnet with reason: Maintenance [production]
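The db1230 entries (downtime and depool here, staged repool under 2024-02-10 above) trace a routine database maintenance cycle: downtime the host, depool it, run the maintenance, then repool in steps. A minimal sketch of the commands behind these log lines, assuming the standard spicerack cookbook runner and conftool's dbctl; the host, task ID and commit messages are taken from the log, while the flag spellings, percentages and the loop are assumptions:

  # Silence alerting for one day (flag names assumed)
  sudo cookbook sre.hosts.downtime --days 1 -r "Maintenance" db1230.eqiad.wmnet

  # Take the replica out of rotation and commit the change to both datacenters
  sudo dbctl instance db1230 depool
  sudo dbctl config commit -m "Depooling db1230 (T352010)"

  # ...maintenance runs here...

  # Return traffic gradually; each iteration matches one 'Repooling' commit above
  for pct in 25 50 75 100; do
      sudo dbctl instance db1230 pool -p "$pct"
      sudo dbctl config commit -m "Repooling after maintenance db1230 (T352010)"
      sleep 900   # ~15 minutes between steps, matching the log timestamps
  done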
21:38 <inflatador> bking@deploy2002 install 'python3-boto3' pkg T348685 [production]
21:36 <inflatador> bking@deploy2002 install 'python3-plac' pkg T348685 [production]
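The two entries above record ad-hoc package installs on deploy2002 for T348685. The log does not show the actual command; assuming plain apt, it amounts to something like:

  # Hypothetical reconstruction; only the package names and the task are logged
  sudo apt-get install -y python3-plac python3-boto3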
21:09 <bking@cumin2002> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply new systemd settings - bking@cumin2002 - T355617 [production]
21:06 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply new systemd settings - bking@cumin2002 - T355617 [production]
20:55 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster relforge: apply new systemd settings - bking@cumin2002 - T355617 [production]
20:46 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster relforge: apply new systemd settings - bking@cumin2002 - T355617 [production]
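The relforge UPGRADE run above passed, while the cloudelastic RESTART run ended with exit code 97 and is not retried within this window. The log does not show the invocation; a rough sketch of how such a rolling-operation cookbook run is started from a cumin host, with argument spellings that are assumptions rather than confirmed flags:

  # Hypothetical invocation; the cluster, operation, batch size and task come
  # from the log, the flag names are assumptions
  sudo cookbook sre.elasticsearch.rolling-operation relforge \
      "apply new systemd settings" --upgrade --nodes-per-run 1 --task-id T355617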
20:28 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1216.eqiad.wmnet with reason: Maintenance [production]
20:28 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1216.eqiad.wmnet with reason: Maintenance [production]
20:28 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1213:3315 (T352010)', diff saved to https://phabricator.wikimedia.org/P56605 and previous config saved to /var/cache/conftool/dbconfig/20240209-202830-ladsgroup.json [production]
20:13 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1213:3315', diff saved to https://phabricator.wikimedia.org/P56604 and previous config saved to /var/cache/conftool/dbconfig/20240209-201324-ladsgroup.json [production]
19:58 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1213:3315', diff saved to https://phabricator.wikimedia.org/P56603 and previous config saved to /var/cache/conftool/dbconfig/20240209-195817-ladsgroup.json [production]
19:43 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1213:3315 (T352010)', diff saved to https://phabricator.wikimedia.org/P56602 and previous config saved to /var/cache/conftool/dbconfig/20240209-194310-ladsgroup.json [production]
19:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1175 (T352010)', diff saved to https://phabricator.wikimedia.org/P56601 and previous config saved to /var/cache/conftool/dbconfig/20240209-193452-ladsgroup.json [production]
19:34 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1175.eqiad.wmnet with reason: Maintenance [production]
19:34 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1175.eqiad.wmnet with reason: Maintenance [production]
19:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T352010)', diff saved to https://phabricator.wikimedia.org/P56600 and previous config saved to /var/cache/conftool/dbconfig/20240209-193430-ladsgroup.json [production]
19:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P56599 and previous config saved to /var/cache/conftool/dbconfig/20240209-191923-ladsgroup.json [production]
19:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166', diff saved to https://phabricator.wikimedia.org/P56598 and previous config saved to /var/cache/conftool/dbconfig/20240209-190416-ladsgroup.json [production]
18:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1166 (T352010)', diff saved to https://phabricator.wikimedia.org/P56597 and previous config saved to /var/cache/conftool/dbconfig/20240209-184910-ladsgroup.json [production]
18:49 <dzahn@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host etherpad1004.eqiad.wmnet with OS bookworm [production]
18:39 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
18:38 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
18:37 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
18:37 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
18:36 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudelastic1008.eqiad.wmnet with OS bullseye [production]
18:36 <bking@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
18:36 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
18:36 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
18:35 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
18:35 <cdanis@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
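The repeated helmfile START/DONE pairs above are successive 'apply' runs of the admin release set against the aux-k8s-eqiad cluster. Each pair corresponds to roughly the following, assuming the standard helmfile CLI and the usual deployment-charts checkout on the deploy host (the working directory is an assumption; the log only names helmfile.d/admin):

  # Hypothetical path; the environment name matches the [aux-k8s-eqiad] tag above
  cd /srv/deployment-charts/helmfile.d/admin
  helmfile -e aux-k8s-eqiad apply   # render, diff and sync the admin releases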
18:35 <dzahn@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on etherpad1004.eqiad.wmnet with reason: host reimage [production]
18:32 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin2002" [production]
18:32 <dzahn@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on etherpad1004.eqiad.wmnet with reason: host reimage [production]
18:19 <dzahn@cumin1002> START - Cookbook sre.hosts.reimage for host etherpad1004.eqiad.wmnet with OS bookworm [production]
18:14 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudelastic1008.eqiad.wmnet with reason: host reimage [production]
18:11 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudelastic1008.eqiad.wmnet with reason: host reimage [production]
17:54 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host cloudelastic1008.eqiad.wmnet with OS bullseye [production]
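A reimage is itself a cookbook run: it schedules its own downtime (the 18:11/18:14 entries above), reinstalls the OS, runs Puppet, and triggers the sre.puppet.sync-netbox-hiera run logged at 18:32/18:36. A sketch of the top-level invocation behind the 17:54 START, with the flag spelling and hostname form as assumptions (the host and OS are from the log):

  # Hypothetical reconstruction of the cloudelastic1008 reimage
  sudo cookbook sre.hosts.reimage --os bullseye cloudelastic1008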
17:43 <dzahn@cumin1002> END (FAIL) - Cookbook sre.ganeti.makevm (exit_code=99) for new host etherpad1004.eqiad.wmnet [production]
17:43 <dzahn@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host etherpad1004.eqiad.wmnet with OS bookworm [production]
17:43 <dzahn@cumin1002> START - Cookbook sre.hosts.reimage for host etherpad1004.eqiad.wmnet with OS bookworm [production]
17:41 <dzahn@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM etherpad1004.eqiad.wmnet - dzahn@cumin1002" [production]
17:41 <dzahn@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM etherpad1004.eqiad.wmnet - dzahn@cumin1002" [production]
17:40 <dzahn@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) etherpad1004.eqiad.wmnet on all recursors [production]