2025-06-23
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242', diff saved to https://phabricator.wikimedia.org/P78633 and previous config saved to /var/cache/conftool/dbconfig/20250623-113443-marostegui.json [production]
11:34 <cgoubert@deploy1003> conftool action : set/pooled=false; selector: dnsdisc=thumbor.*,name=codfw [production]
11:33 <jmm@cumin1003> START - Cookbook sre.hosts.downtime for 2:00:00 on debmonitor-dev2001.codfw.wmnet with reason: host reimage [production]
11:23 <kamila@cumin1003> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster wikikube-codfw: Kubernetes upgrade [production]
11:23 <kamila@cumin1003> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster wikikube-codfw: Kubernetes upgrade [production]
11:19 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242', diff saved to https://phabricator.wikimedia.org/P78631 and previous config saved to /var/cache/conftool/dbconfig/20250623-111935-marostegui.json [production]
11:18 <mvernon@cumin2002> END (PASS) - Cookbook sre.swift.roll-restart-reboot-swift-ms-proxies (exit_code=0) rolling restart_daemons on A:swift-fe-codfw [production]
11:15 <jmm@cumin1003> START - Cookbook sre.hosts.reimage for host debmonitor-dev2001.codfw.wmnet with OS bookworm [production]
11:14 <mvernon@cumin2002> START - Cookbook sre.swift.roll-restart-reboot-swift-ms-proxies rolling restart_daemons on A:swift-fe-codfw [production]
11:10 <jmm@cumin1003> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM debmonitor-dev2001.codfw.wmnet - jmm@cumin1003" [production]
11:10 <jmm@cumin1003> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM debmonitor-dev2001.codfw.wmnet - jmm@cumin1003" [production]
11:10 <jmm@cumin1003> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) debmonitor-dev2001.codfw.wmnet on all recursors [production]
11:10 <jmm@cumin1003> START - Cookbook sre.dns.wipe-cache debmonitor-dev2001.codfw.wmnet on all recursors [production]
11:10 <jmm@cumin1003> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:10 <jmm@cumin1003> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM debmonitor-dev2001.codfw.wmnet - jmm@cumin1003" [production]
11:10 <jmm@cumin1003> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM debmonitor-dev2001.codfw.wmnet - jmm@cumin1003" [production]
11:04 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242 (T396130)', diff saved to https://phabricator.wikimedia.org/P78630 and previous config saved to /var/cache/conftool/dbconfig/20250623-110428-marostegui.json [production]
10:59 <jmm@cumin1003> START - Cookbook sre.dns.netbox [production]
10:59 <jmm@cumin1003> START - Cookbook sre.ganeti.makevm for new host debmonitor-dev2001.codfw.wmnet [production]
10:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1242 (T396130)', diff saved to https://phabricator.wikimedia.org/P78629 and previous config saved to /var/cache/conftool/dbconfig/20250623-105746-marostegui.json [production]
10:57 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1242.eqiad.wmnet with reason: Maintenance [production]
10:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1241 (T396130)', diff saved to https://phabricator.wikimedia.org/P78628 and previous config saved to /var/cache/conftool/dbconfig/20250623-105722-marostegui.json [production]
10:49 <taavi> deploy deny-all robots.txt file T397502 [quarry]
10:44 <cgoubert@deploy1003> Forcefully removing global lock: Kubernetes upgrade [production]
10:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1241', diff saved to https://phabricator.wikimedia.org/P78627 and previous config saved to /var/cache/conftool/dbconfig/20250623-104214-marostegui.json [production]
10:41 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.mysql.pool (exit_code=0) db2155 gradually with 4 steps - Work done [production]
10:39 <claime> cookbook sre.k8s.wipe-cluster --k8s-cluster wikikube-codfw -H 2 --reason "Kubernetes upgrade" - T397148 [production]
10:38 <kamila@cumin1003> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster wikikube-codfw: Kubernetes upgrade [production]
10:36 <kamila@cumin1003> END (PASS) - Cookbook sre.k8s.pool-depool-cluster (exit_code=0) depool all services in codfw/codfw: maintenance [production]
10:35 <claime> scap lock --all "Kubernetes upgrade" [production]
10:30 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P78625 and previous config saved to /var/cache/conftool/dbconfig/20250623-103036-root.json [production]
10:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1241', diff saved to https://phabricator.wikimedia.org/P78623 and previous config saved to /var/cache/conftool/dbconfig/20250623-102706-marostegui.json [production]
10:27 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for lvs6002.drmrs.wmnet [production]
10:26 <vgutierrez@cumin1002> START - Cookbook sre.hosts.remove-downtime for lvs6002.drmrs.wmnet [production]
10:24 <kamila@cumin1003> START - Cookbook sre.k8s.pool-depool-cluster depool all services in codfw/codfw: maintenance [production]
10:23 <kamila@cumin1003> END (ERROR) - Cookbook sre.k8s.pool-depool-cluster (exit_code=93) depool all services in codfw/codfw: maintenance [production]
10:19 <vgutierrez@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on lvs6002.drmrs.wmnet with reason: switching to katran [production]
10:18 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.admin (exit_code=0) depooling P{lvs6002.drmrs.wmnet} and A:liberica (T396561) [production]
10:18 <marostegui@cumin1002> dbctl commit (dc=all): 'db1220 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P78621 and previous config saved to /var/cache/conftool/dbconfig/20250623-101848-root.json [production]
10:18 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.admin depooling P{lvs6002.drmrs.wmnet} and A:liberica (T396561) [production]
10:17 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.upgrade (exit_code=0) upgrading P{lvs7001.magru.wmnet} and A:liberica [production]
10:16 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.upgrade upgrading P{lvs7001.magru.wmnet} and A:liberica [production]
10:15 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.upgrade (exit_code=0) upgrading P{lvs7002.magru.wmnet} and A:liberica [production]
10:15 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P78620 and previous config saved to /var/cache/conftool/dbconfig/20250623-101530-root.json [production]
10:14 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.upgrade upgrading P{lvs7002.magru.wmnet} and A:liberica [production]
10:13 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.upgrade (exit_code=0) upgrading P{lvs1013.eqiad.wmnet} and A:liberica [production]
10:13 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.upgrade upgrading P{lvs1013.eqiad.wmnet} and A:liberica [production]
10:11 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1241 (T396130)', diff saved to https://phabricator.wikimedia.org/P78619 and previous config saved to /var/cache/conftool/dbconfig/20250623-101159-marostegui.json [production]
10:11 <kamila@cumin1003> START - Cookbook sre.k8s.pool-depool-cluster depool all services in codfw/codfw: maintenance [production]
10:10 <vgutierrez> upload liberica 0.22 to apt.wm.o (bookworm-wikimedia) [production]