2024-05-09
20:18 <jhuneidi@deploy1002> rebuilt and synchronized wikiversions files: group2 wikis to 1.43.0-wmf.4 refs T361398 [production]
19:59 <jhuneidi@deploy1002> Finished scap: Backport for [[gerrit:1029562|Revert "Migrate to IReadableDatabase::newSelectQueryBuilder" (T312418 T364499)]] (duration: 17m 37s) [production]
19:46 <jhuneidi@deploy1002> jhuneidi and zabe: Continuing with sync [production]
19:44 <jhuneidi@deploy1002> jhuneidi and zabe: Backport for [[gerrit:1029562|Revert "Migrate to IReadableDatabase::newSelectQueryBuilder" (T312418 T364499)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
19:42 <jhuneidi@deploy1002> Started scap: Backport for [[gerrit:1029562|Revert "Migrate to IReadableDatabase::newSelectQueryBuilder" (T312418 T364499)]] [production]
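The backport entries above follow scap's standard flow: the change is first synced to the test servers, the deployer confirms ("Continuing with sync"), and scap then syncs it fleet-wide and logs the total duration. As a rough sketch of how such a deploy is started on the deployment host (the change number is the one named in the log; anything else here is an assumption, not taken from the log):

    jhuneidi@deploy1002:~$ scap backport 1029562

scap backport drives the whole sequence from that one invocation, which is why the same "Backport for [[gerrit:1029562|...]]" message appears at the start, test-server, and finish stages. The 20:18 wikiversions entry is the separate step that moves group2 wikis onto the 1.43.0-wmf.4 branch.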
19:29 <andrew@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts cloudcontrol2001-dev.codfw.wmnet [production]
19:29 <andrew@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
19:29 <andrew@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudcontrol2001-dev.codfw.wmnet decommissioned, removing all IPs except the asset tag one - andrew@cumin1002" [production]
19:29 <eileen> civicrm upgraded from 6256c944 to c0d2fa95 [production]
19:28 <andrew@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudcontrol2001-dev.codfw.wmnet decommissioned, removing all IPs except the asset tag one - andrew@cumin1002" [production]
19:26 <andrew@cumin1002> START - Cookbook sre.dns.netbox [production]
19:19 <andrew@cumin1002> START - Cookbook sre.hosts.decommission for hosts cloudcontrol2001-dev.codfw.wmnet [production]
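The decommission entries above show the usual chain: sre.hosts.decommission runs, and along the way triggers sre.dns.netbox and sre.puppet.sync-netbox-hiera to remove the host's DNS records and regenerate the Netbox-derived Hiera data (the "removing all IPs except the asset tag one" message). A minimal sketch of how such a run is launched from the cumin host; the task ID and exact flags are assumptions, not in the log:

    andrew@cumin1002:~$ sudo cookbook sre.hosts.decommission -t T000000 cloudcontrol2001-dev.codfw.wmnet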
19:09 <denisse> Restarting `pyrra-filesystem-notify-thanos.path`, and `reset-failed thanos-rule-reload.service` units on titan1001 [production]
19:08 <denisse> Reset failed `pyrra-filesystem-notify-thanos.path`, and `reset-failed thanos-rule-reload.service` units on titan1001 [production]
17:58 <jforrester@deploy1002> Finished scap: Backport for [[gerrit:1029556|Revert "Action APIs: Set most of our APIs to emit a cache header for 24 hours" (T364567)]] (duration: 17m 17s) [production]
17:45 <jforrester@deploy1002> jforrester: Continuing with sync [production]
17:44 <ejegg> SmashPig (standalone IPN listener) upgraded from 67db9d96 to 82392d54 [production]
17:43 <jforrester@deploy1002> jforrester: Backport for [[gerrit:1029556|Revert "Action APIs: Set most of our APIs to emit a cache header for 24 hours" (T364567)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
17:41 <jforrester@deploy1002> Started scap: Backport for [[gerrit:1029556|Revert "Action APIs: Set most of our APIs to emit a cache header for 24 hours" (T364567)]] [production]
17:37 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1198 (T352010)', diff saved to https://phabricator.wikimedia.org/P62263 and previous config saved to /var/cache/conftool/dbconfig/20240509-173728-ladsgroup.json [production]
17:37 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1198.eqiad.wmnet with reason: Maintenance [production]
17:37 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1198.eqiad.wmnet with reason: Maintenance [production]
17:37 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1189 (T352010)', diff saved to https://phabricator.wikimedia.org/P62262 and previous config saved to /var/cache/conftool/dbconfig/20240509-173705-ladsgroup.json [production]
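The dbctl entries here record routine replica rotation for T352010: db1198 is downtimed and depooled for maintenance while db1189, already done, is repooled in several steps (the repeated "Repooling after maintenance db1189" commits further down are the successive stages of that ramp-up). A minimal sketch of the underlying commands, with instance names and messages from the log and the final pool percentage assumed:

    ladsgroup@cumin1002:~$ sudo dbctl instance db1198 depool
    ladsgroup@cumin1002:~$ sudo dbctl config commit -m 'Depooling db1198 (T352010)'
    ladsgroup@cumin1002:~$ sudo dbctl instance db1189 pool -p 100
    ladsgroup@cumin1002:~$ sudo dbctl config commit -m 'Repooling after maintenance db1189 (T352010)'

Each config commit is what produces the diff paste on Phabricator and the previous-config snapshot under /var/cache/conftool/dbconfig/ referenced in these entries.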
17:34 <andrew@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudcontrol2006-dev.codfw.wmnet with OS bookworm [production]
17:21 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1189', diff saved to https://phabricator.wikimedia.org/P62261 and previous config saved to /var/cache/conftool/dbconfig/20240509-172157-ladsgroup.json [production]
17:16 <andrew@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudcontrol2006-dev.codfw.wmnet with reason: host reimage [production]
17:13 <andrew@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudcontrol2006-dev.codfw.wmnet with reason: host reimage [production]
17:06 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1189', diff saved to https://phabricator.wikimedia.org/P62260 and previous config saved to /var/cache/conftool/dbconfig/20240509-170649-ladsgroup.json [production]
16:56 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host kafka-main2010.codfw.wmnet with OS bullseye [production]
16:55 <sukhe> sudo cumin -b30 'A:cp' 'run-puppet-agent --enable "merging CR 1029614"' [production]
16:53 <andrew@cumin1002> START - Cookbook sre.hosts.reimage for host cloudcontrol2006-dev.codfw.wmnet with OS bookworm [production]
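The cloudcontrol2006-dev reimage above (started 16:53, finished 17:34) also accounts for the 2:00:00 downtime entries with reason "host reimage": the reimage cookbook downtimes the host itself while it reinstalls. A minimal sketch of how such a reimage is kicked off from the cumin host; the task ID and exact flag spelling are assumptions:

    andrew@cumin1002:~$ sudo cookbook sre.hosts.reimage --os bookworm -t T000000 cloudcontrol2006-dev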
16:51 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1189 (T352010)', diff saved to https://phabricator.wikimedia.org/P62259 and previous config saved to /var/cache/conftool/dbconfig/20240509-165141-ladsgroup.json [production]
16:49 <sukhe> sudo cumin 'A:cp' 'disable-puppet "merging CR 1029614"' [production]
16:47 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
16:35 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host kafka-main2008.codfw.wmnet with OS bullseye [production]
16:35 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) cloudcontrol2006-dev.private.codfw.wikimedia.cloud on all recursors [production]
16:35 <cmooney@cumin1002> START - Cookbook sre.dns.wipe-cache cloudcontrol2006-dev.private.codfw.wikimedia.cloud on all recursors [production]
16:34 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host kafka-main2009.codfw.wmnet with OS bullseye [production]
16:32 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host kafka-main2006.codfw.wmnet with OS bullseye [production]
16:32 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host kafka-main2007.codfw.wmnet with OS bullseye [production]
16:32 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:32 <cmooney@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add entries for new codfw cloudcontrol nodes - cmooney@cumin1002" [production]
16:31 <cmooney@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add entries for new codfw cloudcontrol nodes - cmooney@cumin1002" [production]
16:29 <cmooney@cumin1002> START - Cookbook sre.dns.netbox [production]
16:20 <elukey@deploy1002> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'llm' for release 'main' . [production]
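The two helmfile entries (16:47 for ml-staging-codfw/experimental and 16:20 for ml-serve-eqiad/llm) are Kubernetes chart deployments driven from the deployment host. A rough sketch for the first of the two, where the repository path and selector are assumptions beyond what the log states:

    elukey@deploy1002:~$ cd /srv/deployment-charts/helmfile.d/ml-services/experimental   # path assumed
    elukey@deploy1002:~$ helmfile -e ml-staging-codfw --selector name=main sync

The log's own wording ("Ran 'sync' command on namespace 'experimental' for release 'main'") appears to be the message the deployment tooling records once the sync completes.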
15:36 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host kafka-main2010.codfw.wmnet with OS bullseye [production]
15:36 <jhancock@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['kafka-main2010'] [production]
15:35 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kafka-main2010'] [production]
15:35 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['kafka-main2010'] [production]
15:35 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kafka-main2010'] [production]
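The kafka-main entries from 15:35 onward show the pattern of upgrading firmware before attempting a reimage: sre.hardware.upgrade-firmware on kafka-main2010 failed once and was immediately retried until it passed, after which sre.hosts.reimage was started; that reimage, like the other kafka-main reimages in the 16:32-16:56 FAIL entries above, did not complete on this pass. A minimal sketch of the firmware step as it would typically be launched from the cumin host; the argument form is an assumption:

    jhancock@cumin2002:~$ sudo cookbook sre.hardware.upgrade-firmware kafka-main2010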