2022-12-02
15:28 <isaranto@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-draftquality' for release 'main' . [production]
15:22 <isaranto@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-articletopic' for release 'main' . [production]
15:22 <bking@cumin2002> START - Cookbook sre.wdqs.restart [production]
15:16 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dns5004.wikimedia.org with reason: host reimage [production]
15:13 <isaranto@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-articlequality' for release 'main' . [production]
15:12 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on dns5004.wikimedia.org with reason: host reimage [production]
15:06 <volans> run `git gc` on /srv/netbox-exports/dns.git on netbox[12]002 - T324334 [production]
14:48 <sukhe@cumin1001> START - Cookbook sre.hosts.reimage for host lvs5004.eqsin.wmnet with OS buster [production]
14:38 <sukhe@cumin2002> START - Cookbook sre.hosts.reimage for host dns5004.wikimedia.org with OS buster [production]
12:09 <jynus> dropping all databases from db1133 [production]
11:16 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts ganeti5001.eqsin.wmnet [production]
11:16 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:16 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: ganeti5001.eqsin.wmnet decommissioned, removing all IPs except the asset tag one - jmm@cumin2002" [production]
11:12 <jmm@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: ganeti5001.eqsin.wmnet decommissioned, removing all IPs except the asset tag one - jmm@cumin2002" [production]
11:02 <jmm@cumin2002> START - Cookbook sre.dns.netbox [production]
10:57 <jmm@cumin2002> START - Cookbook sre.hosts.decommission for hosts ganeti5001.eqsin.wmnet [production]
10:56 <isaranto@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-articlequality' for release 'main' . [production]
10:34 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on ganeti5001.eqsin.wmnet with reason: Remove from cluster for decom [production]
10:34 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on ganeti5001.eqsin.wmnet with reason: Remove from cluster for decom [production]
10:01 <vgutierrez> upload acme-chief 0.36 to apt.wm.o (bullseye) - T321309 [production]
09:58 <moritzm> installing publicsuffix updates from bullseye/buster point releases [production]
09:54 <moritzm> installing debootstrap updates from bullseye point release [production]
09:53 <moritzm> rebalance ganeti codfw/C T323222 [production]
09:52 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.addnode (exit_code=0) for new host ganeti2013.codfw.wmnet to cluster codfw and group C [production]
09:51 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti2013.codfw.wmnet to cluster codfw and group C [production]
09:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1134 (re)pooling @ 100%: After cloning db1206', diff saved to https://phabricator.wikimedia.org/P42215 and previous config saved to /var/cache/conftool/dbconfig/20221202-091126-root.json [production]
08:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1134 (re)pooling @ 75%: After cloning db1206', diff saved to https://phabricator.wikimedia.org/P42214 and previous config saved to /var/cache/conftool/dbconfig/20221202-085621-root.json [production]
08:41 <jayme@deploy1002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
08:41 <jayme@deploy1002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
08:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1134 (re)pooling @ 50%: After cloning db1206', diff saved to https://phabricator.wikimedia.org/P42213 and previous config saved to /var/cache/conftool/dbconfig/20221202-084116-root.json [production]
08:41 <jayme@deploy1002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
08:40 <jayme@deploy1002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
08:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1134 (re)pooling @ 25%: After cloning db1206', diff saved to https://phabricator.wikimedia.org/P42212 and previous config saved to /var/cache/conftool/dbconfig/20221202-082611-root.json [production]
08:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1134 (re)pooling @ 10%: After cloning db1206', diff saved to https://phabricator.wikimedia.org/P42211 and previous config saved to /var/cache/conftool/dbconfig/20221202-081106-root.json [production]
07:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1134 (re)pooling @ 5%: After cloning db1206', diff saved to https://phabricator.wikimedia.org/P42210 and previous config saved to /var/cache/conftool/dbconfig/20221202-075601-root.json [production]
07:49 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
07:49 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
07:49 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
07:49 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
07:49 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:49 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:43 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:43 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:43 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1163 (re)pooling @ 100%: Maint done', diff saved to https://phabricator.wikimedia.org/P42209 and previous config saved to /var/cache/conftool/dbconfig/20221202-074300-ladsgroup.json [production]
07:41 <moritzm> draining ganeti5001 for eventual decom T322048 [production]
07:41 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
07:41 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
07:27 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1163 (re)pooling @ 75%: Maint done', diff saved to https://phabricator.wikimedia.org/P42208 and previous config saved to /var/cache/conftool/dbconfig/20221202-072755-ladsgroup.json [production]
07:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1163 (re)pooling @ 25%: Maint done', diff saved to https://phabricator.wikimedia.org/P42207 and previous config saved to /var/cache/conftool/dbconfig/20221202-071250-ladsgroup.json [production]
06:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1163 (re)pooling @ 10%: Maint done', diff saved to https://phabricator.wikimedia.org/P42206 and previous config saved to /var/cache/conftool/dbconfig/20221202-065745-ladsgroup.json [production]