2024-01-09
10:45 <ayounsi@cumin1002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=93) for host ganeti2033.codfw.wmnet with OS bookworm [production]
10:38 <btullis@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-jumbo-eqiad [production]
10:22 <sfaci@deploy2002> helmfile [staging] DONE helmfile.d/services/edit-analytics: apply [production]
10:21 <sfaci@deploy2002> helmfile [staging] START helmfile.d/services/edit-analytics: apply [production]
10:19 <btullis@cumin1002> END (PASS) - Cookbook sre.opensearch.roll-restart-reboot (exit_code=0) rolling restart_daemons on A:datahubsearch [production]
10:11 <btullis@cumin1002> START - Cookbook sre.opensearch.roll-restart-reboot rolling restart_daemons on A:datahubsearch [production]
10:00 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_drmrs and A:cp [production]
09:59 <ayounsi@cumin1002> START - Cookbook sre.hosts.reimage for host ganeti2033.codfw.wmnet with OS bookworm [production]
09:54 <oblivian@deploy2002> Finished scap: Backport for [[gerrit:987033|Always process media files via shellbox on k8s (T352515)]] (duration: 11m 03s) [production]
09:52 <ayounsi@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:52 <ayounsi@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: ganeti2033/2034 move - ayounsi@cumin1002" [production]
09:48 <ayounsi@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: ganeti2033/2034 move - ayounsi@cumin1002" [production]
09:47 <oblivian@deploy2002> oblivian: Continuing with sync [production]
09:46 <ayounsi@cumin1002> START - Cookbook sre.dns.netbox [production]
09:44 <oblivian@deploy2002> oblivian: Backport for [[gerrit:987033|Always process media files via shellbox on k8s (T352515)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
09:42 <oblivian@deploy2002> Started scap: Backport for [[gerrit:987033|Always process media files via shellbox on k8s (T352515)]] [production]
09:39 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-upload_drmrs and A:cp [production]
09:34 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-text_codfw and A:cp [production]
09:27 <oblivian@deploy2002> Finished scap: Backport for [[gerrit:987032|Use shellbox for djvu handling on kubernetes (T352515)]] (duration: 23m 56s) [production]
09:20 <oblivian@deploy2002> oblivian: Continuing with sync [production]
09:15 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-text_codfw and A:cp [production]
09:14 <moritzm> prune obsolete nginx packages from ncredir hosts after migration to new library scheme T329529 [production]
09:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_codfw and A:cp [production]
09:06 <arnaudb> upload wmfdb 0.1.4 from https://gitlab.wikimedia.org/repos/sre/wmfdb/-/tree/dgit/bookworm-wikimedia to fix default ca bundle [production]
09:05 <oblivian@deploy2002> oblivian: Backport for [[gerrit:987032|Use shellbox for djvu handling on kubernetes (T352515)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
09:03 <oblivian@deploy2002> Started scap: Backport for [[gerrit:987032|Use shellbox for djvu handling on kubernetes (T352515)]] [production]
08:59 <ayounsi@cumin1002> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 45287 [production]
08:54 <ayounsi@cumin1002> START - Cookbook sre.network.peering with action 'configure' for AS: 45287 [production]
08:54 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-upload_codfw and A:cp [production]
08:49 <oblivian@deploy2002> Finished scap: Backport for [[gerrit:987031|Remove throttle exception (T352569)]] (duration: 09m 01s) [production]
08:48 <ayounsi@cumin1002> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 9902 [production]
08:47 <ayounsi@cumin1002> START - Cookbook sre.network.peering with action 'configure' for AS: 9902 [production]
08:42 <oblivian@deploy2002> oblivian: Continuing with sync [production]
08:42 <oblivian@deploy2002> oblivian: Backport for [[gerrit:987031|Remove throttle exception (T352569)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:40 <oblivian@deploy2002> Started scap: Backport for [[gerrit:987031|Remove throttle exception (T352569)]] [production]
08:22 <kartik@deploy2002> Finished scap: Backport for [[gerrit:988493|testwiki: Enable Section translation on WPs with potential to be supported with MinT using MADLAD-400 (T353510)]] (duration: 15m 54s) [production]
08:21 <marostegui@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host db2143.codfw.wmnet with OS bookworm [production]
08:20 <godog> set aside WAL for prometheus@k8s in codfw and restart - T354399 [production]
08:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 100%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54567 and previous config saved to /var/cache/conftool/dbconfig/20240109-081946-root.json [production]
08:11 <kartik@deploy2002> kartik: Continuing with sync [production]
08:10 <kartik@deploy2002> kartik: Backport for [[gerrit:988493|testwiki: Enable Section translation on WPs with potential to be supported with MinT using MADLAD-400 (T353510)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:06 <kartik@deploy2002> Started scap: Backport for [[gerrit:988493|testwiki: Enable Section translation on WPs with potential to be supported with MinT using MADLAD-400 (T353510)]] [production]
08:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 100%: After a crash', diff saved to https://phabricator.wikimedia.org/P54566 and previous config saved to /var/cache/conftool/dbconfig/20240109-080558-root.json [production]
08:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 75%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54565 and previous config saved to /var/cache/conftool/dbconfig/20240109-080441-root.json [production]
07:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 75%: After a crash', diff saved to https://phabricator.wikimedia.org/P54564 and previous config saved to /var/cache/conftool/dbconfig/20240109-075053-root.json [production]
07:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 50%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54563 and previous config saved to /var/cache/conftool/dbconfig/20240109-074936-root.json [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 50%: After a crash', diff saved to https://phabricator.wikimedia.org/P54562 and previous config saved to /var/cache/conftool/dbconfig/20240109-073548-root.json [production]
07:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 25%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54561 and previous config saved to /var/cache/conftool/dbconfig/20240109-073431-root.json [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1224 (re)pooling @ 25%: After a crash', diff saved to https://phabricator.wikimedia.org/P54560 and previous config saved to /var/cache/conftool/dbconfig/20240109-072043-root.json [production]
07:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 10%: Upgrade to 10.6.16 and bookworm', diff saved to https://phabricator.wikimedia.org/P54559 and previous config saved to /var/cache/conftool/dbconfig/20240109-071926-root.json [production]