2023-06-07
16:02 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host lvs2014.mgmt.codfw.wmnet with reboot policy FORCED [production]
16:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P49126 and previous config saved to /var/cache/conftool/dbconfig/20230607-160234-ladsgroup.json [production]
16:00 <jmm@cumin2002> END (ERROR) - Cookbook sre.hosts.reboot-single (exit_code=97) for host lists1003.wikimedia.org [production]
15:57 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:56 <urandom> Beginning (3 hour) generated traffic testing of sessionstore.svc.codfw.wmnet — T337426 [production]
15:56 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:53 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P49125 and previous config saved to /var/cache/conftool/dbconfig/20230607-155345-ladsgroup.json [production]
15:52 <urandom> Upgrading Cassandra to 4.1.1, sessionstore2003 — T337426 [production]
15:51 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host lists1003.wikimedia.org [production]
15:50 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host testvm2005.codfw.wmnet [production]
15:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P49124 and previous config saved to /var/cache/conftool/dbconfig/20230607-154727-ladsgroup.json [production]
15:47 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host testvm2005.codfw.wmnet [production]
15:44 <urandom> Upgrading Cassandra to 4.1.1, sessionstore2002 — T337426 [production]
15:43 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host lvs2014.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:42 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:42 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add DNS entry for lvs2014 - pt1979@cumin2002" [production]
15:41 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add DNS entry for lvs2014 - pt1979@cumin2002" [production]
15:40 <jbond@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on puppetserver2001.codfw.wmnet with reason: host reimage [production]
15:39 <moritzm> installing isc-dhcp bugfix updates from Bullseye 11.7 point release [production]
15:38 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P49123 and previous config saved to /var/cache/conftool/dbconfig/20230607-153839-ladsgroup.json [production]
15:37 <jbond@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on puppetserver2001.codfw.wmnet with reason: host reimage [production]
15:34 <pt1979@cumin2002> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
15:33 <jiji@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
15:33 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T336886)', diff saved to https://phabricator.wikimedia.org/P49122 and previous config saved to /var/cache/conftool/dbconfig/20230607-153221-ladsgroup.json [production]
15:26 <moritzm> rolling restart of FPM on mw canaries to pick up libwebp security updates [production]
15:26 <pt1979@cumin2002> END (ERROR) - Cookbook sre.dns.netbox (exit_code=97) [production]
15:26 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T336886)', diff saved to https://phabricator.wikimedia.org/P49121 and previous config saved to /var/cache/conftool/dbconfig/20230607-152456-ladsgroup.json [production]
15:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
15:24 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312 (T336886)', diff saved to https://phabricator.wikimedia.org/P49120 and previous config saved to /var/cache/conftool/dbconfig/20230607-152425-ladsgroup.json [production]
15:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T336886)', diff saved to https://phabricator.wikimedia.org/P49119 and previous config saved to /var/cache/conftool/dbconfig/20230607-152333-ladsgroup.json [production]
15:23 <elukey> all varnishkafka instances on caching nodes are getting restarted due to https://gerrit.wikimedia.org/r/c/operations/puppet/+/928087 - T337825 [production]
15:22 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:22 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
15:22 <elukey> re-enable puppet on caching nodes [production]
15:22 <cgoubert@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
15:21 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
15:21 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
15:21 <claime> Bumping prewarmparsoid concurrency to 45 in changeprop-jobqueue - T320534 [production]
15:18 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1143 (T336886)', diff saved to https://phabricator.wikimedia.org/P49118 and previous config saved to /var/cache/conftool/dbconfig/20230607-151835-ladsgroup.json [production]
15:18 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
15:18 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
15:18 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142 (T336886)', diff saved to https://phabricator.wikimedia.org/P49117 and previous config saved to /var/cache/conftool/dbconfig/20230607-151815-ladsgroup.json [production]
15:17 <moritzm> installing libwebp security updates on buster [production]
15:17 <jbond@cumin2002> START - Cookbook sre.hosts.reimage for host puppetserver2001.codfw.wmnet with OS bookworm [production]
15:17 <jbond@cumin2002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host puppetserver2001.codfw.wmnet with OS bookworm [production]
15:14 <urandom> Upgrading Cassandra to 4.1.1, sessionstore2001 — T337426 [production]