2024-05-22
ยง
|
15:53 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2130 (re)pooling @ 1%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P62916 and previous config saved to /var/cache/conftool/dbconfig/20240522-155315-arnaudb.json [production]
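A phased repool like the one above is normally driven by dbctl on a cumin host; a minimal sketch, assuming the usual dbctl subcommands and that the 1% step maps to a percentage option:

  # set db2130 to 1% of its configured weight, then commit the change with a message
  sudo dbctl instance db2130 pool -p 1
  sudo dbctl config commit -m 'db2130 (re)pooling @ 1%: post reimage repool'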
15:50 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2130.codfw.wmnet with OS bookworm [production]
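Reimage entries like this one come from the sre.hosts.reimage cookbook run on a cumin host; a hedged sketch of the invocation (exact flags and hostname form assumed):

  # reinstall db2130 with Debian bookworm; the cookbook takes care of downtime,
  # the PXE reinstall and the first Puppet run
  sudo cookbook sre.hosts.reimage --os bookworm db2130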
15:44 <elukey> upload to bookworm-wikimedia dragonfly-{dfdaemon,dfget}, calicoctl, calico-cni - T365253 [production]
15:42 <kamila@deploy1002> helmfile [staging] DONE helmfile.d/services/recommendation-api: apply [production]
15:42 <kamila@deploy1002> helmfile [staging] START helmfile.d/services/recommendation-api: apply [production]
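The helmfile START/DONE pairs above are logged automatically when a service is deployed from the deployment host; a rough sketch, assuming the standard /srv/deployment-charts layout:

  # apply the staging release of recommendation-api
  cd /srv/deployment-charts/helmfile.d/services/recommendation-api
  helmfile -e staging apply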
15:42 <elukey@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ml-staging2001.codfw.wmnet with OS bookworm [production]
15:40 <kamila@deploy1002> helmfile [staging] DONE helmfile.d/services/recommendation-api: apply [production]
15:40 <kamila@deploy1002> helmfile [staging] START helmfile.d/services/recommendation-api: apply [production]
15:39 <kamila@deploy1002> helmfile [staging] DONE helmfile.d/services/recommendation-api: apply [production]
15:39 <kamila@deploy1002> helmfile [staging] START helmfile.d/services/recommendation-api: apply [production]
15:34 <damilare> civicrm upgraded from 8c5fee40 to b0a3965a [production]
15:32 <hnowlan@deploy1002> helmfile [staging] DONE helmfile.d/services/sessionstore: apply [production]
15:32 <hnowlan@deploy1002> helmfile [staging] START helmfile.d/services/sessionstore: apply [production]
15:27 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2130.codfw.wmnet with reason: host reimage [production]
15:24 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2130.codfw.wmnet with reason: host reimage [production]
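The downtime START/END pairs are produced by the sre.hosts.downtime cookbook; a minimal sketch of an equivalent manual run (flag names assumed):

  # silence alerting for db2130 for two hours while it is being reimaged
  sudo cookbook sre.hosts.downtime --hours 2 -r 'host reimage' 'db2130.codfw.wmnet'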
15:22 <elukey@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-staging2001.codfw.wmnet with reason: host reimage [production]
15:19 <elukey@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on ml-staging2001.codfw.wmnet with reason: host reimage [production]
15:19 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2220 (T352010)', diff saved to https://phabricator.wikimedia.org/P62915 and previous config saved to /var/cache/conftool/dbconfig/20240522-151923-ladsgroup.json [production]
15:16 <vgutierrez> repool upload@drmrs with IPIP encapsulation enabled - T357257 [production]
15:16 <fabfur> enabling puppet on all cp-ulsfo (T365566) [production]
15:16 <dzahn@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts contint1003.eqiad.wmnet [production]
15:16 <dzahn@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:16 <dzahn@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: contint1003.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - dzahn@cumin1002" [production]
15:14 <dzahn@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: contint1003.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - dzahn@cumin1002" [production]
15:10 <dzahn@cumin1002> START - Cookbook sre.dns.netbox [production]
15:10 <vgutierrez> rolling restart of pybal on lvs6003 and lvs6002 - T357257 [production]
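A rolling pybal restart like the one noted above can be done one host at a time with cumin; a sketch, assuming the host globs match only the two drmrs load balancers:

  # restart pybal on lvs6002 and lvs6003 in batches of one, pausing 30s between hosts
  sudo cumin -b 1 -s 30 'lvs6002* or lvs6003*' 'systemctl restart pybal.service'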
15:06 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2130.codfw.wmnet with reason: reimage [production]
15:06 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2130.codfw.wmnet with reason: reimage [production]
15:06 <arnaudb@cumin1002> START - Cookbook sre.hosts.reimage for host db2130.codfw.wmnet with OS bookworm [production]
15:05 <dzahn@cumin1002> START - Cookbook sre.hosts.decommission for hosts contint1003.eqiad.wmnet [production]
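This decommission run is what triggered the sre.dns.netbox and sre.puppet.sync-netbox-hiera entries logged between 15:10 and 15:16; a hedged sketch of the initiating command (flag spelling assumed, T123456 is a placeholder task ID):

  # wipe and power down contint1003 and clean it out of Netbox, DNS and Puppet;
  # the cookbook chains the dns.netbox and sync-netbox-hiera runs itself
  sudo cookbook sre.hosts.decommission -t T123456 contint1003.eqiad.wmnet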
15:05 <arnaudb@cumin1002> dbctl commit (dc=all): 'T364290 db2130', diff saved to https://phabricator.wikimedia.org/P62914 and previous config saved to /var/cache/conftool/dbconfig/20240522-150516-arnaudb.json [production]
15:04 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2220', diff saved to https://phabricator.wikimedia.org/P62913 and previous config saved to /var/cache/conftool/dbconfig/20240522-150415-ladsgroup.json [production]
15:03 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2213 (T352010)', diff saved to https://phabricator.wikimedia.org/P62912 and previous config saved to /var/cache/conftool/dbconfig/20240522-150333-ladsgroup.json [production]
15:01 <jynus> stopping eqiad mediabackups to clean up missing files - T365607 [production]
14:58 <elukey@cumin1002> START - Cookbook sre.hosts.reimage for host ml-staging2001.codfw.wmnet with OS bookworm [production]
14:57 <hnowlan> running `puppet cert revoke sessionstore.discovery.wmnet` - T363996 [production]
14:49 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2220', diff saved to https://phabricator.wikimedia.org/P62911 and previous config saved to /var/cache/conftool/dbconfig/20240522-144907-ladsgroup.json [production]
14:48 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2213', diff saved to https://phabricator.wikimedia.org/P62910 and previous config saved to /var/cache/conftool/dbconfig/20240522-144826-ladsgroup.json [production]
14:43 <vgutierrez> depool upload@drmrs before enabling IPIP encapsulation - T357257 [production]
14:34 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2220 (T352010)', diff saved to https://phabricator.wikimedia.org/P62909 and previous config saved to /var/cache/conftool/dbconfig/20240522-143359-ladsgroup.json [production]
14:33 <jayme> drained, cordoned, and set pooled=inactive on kubernetes2023 and kubernetes2032 for cookbook testing - T350152 T365571 [production]
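Draining and cordoning the nodes mentioned above uses standard kubectl; a minimal sketch for one of the two nodes (node names assumed to be the FQDNs, kubeconfig/context selection omitted):

  # stop new pods from being scheduled onto the node, then evict the running ones
  kubectl cordon kubernetes2023.codfw.wmnet
  kubectl drain kubernetes2023.codfw.wmnet --ignore-daemonsets --delete-emptydir-data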
14:33 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2213', diff saved to https://phabricator.wikimedia.org/P62908 and previous config saved to /var/cache/conftool/dbconfig/20240522-143318-ladsgroup.json [production]
14:32 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2145 (re)pooling @ 100%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P62907 and previous config saved to /var/cache/conftool/dbconfig/20240522-143238-arnaudb.json [production]
14:32 <jayme@cumin1002> conftool action : set/pooled=inactive; selector: name=kubernetes20(23|32).codfw.wmnet [production]
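The conftool action above corresponds to a confctl call roughly like this (selector copied from the entry):

  # mark both kubernetes nodes as inactive in the load balancer state
  sudo confctl select 'name=kubernetes20(23|32).codfw.wmnet' set/pooled=inactive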
14:28 <elukey> copy calico, istio-cni, kubernetes-node packages from bullseye-wikimedia to bookworm-wikimedia - T365253 [production]
14:28 <fabfur> disabling puppet on all cp-ulsfo to apply https://gerrit.wikimedia.org/r/c/operations/puppet/+/1034852 selectively (T365566) [production]
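Disabling puppet across the ulsfo cache hosts before a selective rollout is typically done fleet-wide with cumin; a sketch, assuming an A:cp-ulsfo alias and the disable-puppet wrapper are available as written:

  # disable puppet with a reason so change 1034852 can be rolled out host by host
  sudo cumin 'A:cp-ulsfo' 'disable-puppet "applying 1034852 selectively - T365566"'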
14:18 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2213 (T352010)', diff saved to https://phabricator.wikimedia.org/P62906 and previous config saved to /var/cache/conftool/dbconfig/20240522-141809-ladsgroup.json [production]
14:17 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2145 (re)pooling @ 75%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P62905 and previous config saved to /var/cache/conftool/dbconfig/20240522-141732-arnaudb.json [production]
14:14 <logmsgbot> lucaswerkmeister-wmde@deploy1002 Finished scap: Backport for [[gerrit:1034878|PrefixSearch: Make sure $prefix is a string (T365565)]] (duration: 14m 58s) [production]
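The scap backport above is started from the deployment host; a minimal sketch, using the change number from the entry:

  # cherry-pick gerrit change 1034878 onto the deployed branch and sync it out
  scap backport 1034878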
14:02 <logmsgbot> lucaswerkmeister-wmde@deploy1002 lucaswerkmeister-wmde: Continuing with sync [production]