2023-10-09
09:07 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host analytics1072.eqiad.wmnet [production]
09:07 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host analytics1071.eqiad.wmnet [production]
09:01 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host analytics1071.eqiad.wmnet [production]
09:01 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host analytics1070.eqiad.wmnet [production]
08:55 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host analytics1070.eqiad.wmnet [production]
08:53 <moritzm> rebuilt bookworm d-i image for the Bookworm 12.2 point release T348326 [production]
08:23 <moritzm> rebuilt bullseye d-i image for the Bullseye 11.8 point release T348327 [production]
07:06 <taavi> kill stuck updateSpecialPages.php process on mwmaint2002 which was trying to re-connect to an unreachable db host [production]
07:02 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8 days, 0:00:00 on db2109.codfw.wmnet with reason: investigating db2109 [production]
07:01 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 8 days, 0:00:00 on db2109.codfw.wmnet with reason: investigating db2109 [production]
2023-10-08
22:58 <ryankemper> [WDQS] Depooled `wdqs1014` while it catches up on a day of lag [production]
22:57 <ryankemper> [WDQS] Restarted `wdqs1014`; blazegraph has been deadlocked since `2023-10-07 12:30:00` [production]
2023-10-07
09:22 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2181 (T343198)', diff saved to https://phabricator.wikimedia.org/P52863 and previous config saved to /var/cache/conftool/dbconfig/20231007-092249-arnaudb.json [production]
09:07 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2181', diff saved to https://phabricator.wikimedia.org/P52862 and previous config saved to /var/cache/conftool/dbconfig/20231007-090742-arnaudb.json [production]
08:52 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2181', diff saved to https://phabricator.wikimedia.org/P52861 and previous config saved to /var/cache/conftool/dbconfig/20231007-085236-arnaudb.json [production]
08:37 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2181 (T343198)', diff saved to https://phabricator.wikimedia.org/P52860 and previous config saved to /var/cache/conftool/dbconfig/20231007-083729-arnaudb.json [production]
02:33 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for restbase1030.eqiad.wmnet [production]
02:33 <eevans@cumin1001> START - Cookbook sre.hosts.remove-downtime for restbase1030.eqiad.wmnet [production]
2023-10-06
23:04 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host kubernetes2054.codfw.wmnet with OS bullseye [production]
23:04 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
23:03 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
22:50 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubernetes2054.codfw.wmnet with reason: host reimage [production]
22:47 <pt1979@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on kubernetes2054.codfw.wmnet with reason: host reimage [production]
22:43 <arnaudb@cumin1001> dbctl commit (dc=all): 'Depooling db2181 (T343198)', diff saved to https://phabricator.wikimedia.org/P52859 and previous config saved to /var/cache/conftool/dbconfig/20231006-224306-arnaudb.json [production]
22:43 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2181.codfw.wmnet with reason: Maintenance [production]
22:42 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2181.codfw.wmnet with reason: Maintenance [production]
22:42 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2168:3318 (T343198)', diff saved to https://phabricator.wikimedia.org/P52858 and previous config saved to /var/cache/conftool/dbconfig/20231006-224245-arnaudb.json [production]
22:27 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2168:3318', diff saved to https://phabricator.wikimedia.org/P52857 and previous config saved to /var/cache/conftool/dbconfig/20231006-222738-arnaudb.json [production]
22:26 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host kubernetes2054.codfw.wmnet with OS bullseye [production]
22:12 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2168:3318', diff saved to https://phabricator.wikimedia.org/P52856 and previous config saved to /var/cache/conftool/dbconfig/20231006-221232-arnaudb.json [production]
21:57 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2168:3318 (T343198)', diff saved to https://phabricator.wikimedia.org/P52855 and previous config saved to /var/cache/conftool/dbconfig/20231006-215725-arnaudb.json [production]
20:45 <bking@deploy2002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
20:45 <bking@deploy2002> helmfile [staging] START helmfile.d/services/cirrus-streaming-updater: apply [production]
20:35 <bking@deploy2002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
20:34 <bking@deploy2002> helmfile [staging] START helmfile.d/services/cirrus-streaming-updater: apply [production]
20:29 <bking@deploy2002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
20:29 <bking@deploy2002> helmfile [staging] START helmfile.d/services/cirrus-streaming-updater: apply [production]
20:11 <bking@deploy2002> helmfile [staging] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
20:10 <bking@deploy2002> helmfile [staging] START helmfile.d/services/cirrus-streaming-updater: apply [production]
19:46 <bking@deploy2002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
19:45 <bking@deploy2002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
19:44 <bking@deploy2002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
19:43 <bking@deploy2002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
19:43 <bking@deploy2002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
19:41 <bking@deploy2002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
19:40 <bking@deploy2002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
19:39 <bking@deploy2002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
18:43 <ebernhardson@deploy2002> Finished deploy [airflow-dags/search@3b7df78]: Update rdf-spark-tools to 0.3.135 to fix query mapping job failure (duration: 00m 29s) [production]
18:42 <ebernhardson@deploy2002> Started deploy [airflow-dags/search@3b7df78]: Update rdf-spark-tools to 0.3.135 to fix query mapping job failure [production]
18:42 <vriley@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host cp1101.mgmt.eqiad.wmnet with reboot policy FORCED [production]