2023-06-07
15:14 <isaranto@deploy1002> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'experimental' for release 'main'. [production]
15:10 <elukey> disable puppet on all caching nodes to roll out a varnishkafka change (ref: https://gerrit.wikimedia.org/r/c/operations/puppet/+/928087) [production]
15:09 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312', diff saved to https://phabricator.wikimedia.org/P49116 and previous config saved to /var/cache/conftool/dbconfig/20230607-150919-ladsgroup.json [production]
15:08 <jbond@cumin2002> START - Cookbook sre.hosts.reimage for host puppetserver2001.codfw.wmnet with OS bookworm [production]
15:07 <eevans@cumin1001> END (PASS) - Cookbook sre.discovery.service-route (exit_code=0) depool sessionstore in codfw: maintenance [production]
15:06 <jbond@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) puppetserver2001.mgmt.codfw.wmnet on all recursors [production]
15:06 <jbond@cumin2002> START - Cookbook sre.dns.wipe-cache puppetserver2001.mgmt.codfw.wmnet on all recursors [production]
15:03 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142', diff saved to https://phabricator.wikimedia.org/P49115 and previous config saved to /var/cache/conftool/dbconfig/20230607-150309-ladsgroup.json [production]
15:02 <eevans@cumin1001> START - Cookbook sre.discovery.service-route depool sessionstore in codfw: maintenance [production]
15:02 <urandom> de-pooling sessionstore/codfw — T337426 [production]
14:56 <sukhe> homer "cr*-codfw*" commit "Gerrit: 928068 remove decommissioned host lvs2010" [production]
14:54 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host puppetserver1001.eqiad.wmnet with OS bookworm [production]
14:54 <jbond@cumin1001> END (FAIL) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=99) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jbond@cumin1001" [production]
14:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312', diff saved to https://phabricator.wikimedia.org/P49114 and previous config saved to /var/cache/conftool/dbconfig/20230607-145413-ladsgroup.json [production]
14:54 <moritzm> installing postgresql 13 security updates (clients/libs, server instances all updated already) [production]
14:53 <jbond@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jbond@cumin1001" [production]
14:51 <jbond@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:50 <jbond@cumin2002> START - Cookbook sre.dns.netbox [production]
14:49 <jbond@cumin2002> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
14:49 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts lvs2010.codfw.wmnet [production]
14:49 <sukhe@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:48 <sukhe@cumin2002> START - Cookbook sre.dns.netbox [production]
14:48 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142', diff saved to https://phabricator.wikimedia.org/P49112 and previous config saved to /var/cache/conftool/dbconfig/20230607-144803-ladsgroup.json [production]
14:43 <jbond@cumin2002> START - Cookbook sre.dns.netbox [production]
14:40 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on puppetserver1001.eqiad.wmnet with reason: host reimage [production]
14:40 <fabfur@cumin1001> END (PASS) - Cookbook sre.cdn.run-puppet-restart-varnish (exit_code=0) rolling custom on A:cp-upload_eqiad and A:cp [production]
14:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312 (T336886)', diff saved to https://phabricator.wikimedia.org/P49111 and previous config saved to /var/cache/conftool/dbconfig/20230607-143907-ladsgroup.json [production]
14:39 <sukhe@cumin2002> START - Cookbook sre.hosts.decommission for hosts lvs2010.codfw.wmnet [production]
14:37 <jbond@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on puppetserver1001.eqiad.wmnet with reason: host reimage [production]
14:36 <cgoubert@deploy1002> helmfile [staging] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
14:33 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
14:33 <fabfur@cumin1001> END (PASS) - Cookbook sre.cdn.run-puppet-restart-varnish (exit_code=0) rolling custom on A:cp-text_eqiad and A:cp [production]
14:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142 (T336886)', diff saved to https://phabricator.wikimedia.org/P49110 and previous config saved to /var/cache/conftool/dbconfig/20230607-143256-ladsgroup.json [production]
14:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1170:3312 (T336886)', diff saved to https://phabricator.wikimedia.org/P49109 and previous config saved to /var/cache/conftool/dbconfig/20230607-143235-ladsgroup.json [production]
14:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
14:32 <cgoubert@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
14:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
14:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1156 (T336886)', diff saved to https://phabricator.wikimedia.org/P49108 and previous config saved to /var/cache/conftool/dbconfig/20230607-143215-ladsgroup.json [production]
14:32 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
14:31 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
14:27 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1142 (T336886)', diff saved to https://phabricator.wikimedia.org/P49107 and previous config saved to /var/cache/conftool/dbconfig/20230607-142756-ladsgroup.json [production]
14:27 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1142.eqiad.wmnet with reason: Maintenance [production]
14:27 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1142.eqiad.wmnet with reason: Maintenance [production]
14:27 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1141 (T336886)', diff saved to https://phabricator.wikimedia.org/P49106 and previous config saved to /var/cache/conftool/dbconfig/20230607-142736-ladsgroup.json [production]
14:26 <cgoubert@deploy1002> helmfile [staging] START helmfile.d/services/changeprop-jobqueue: apply [production]
14:25 <jbond@cumin1001> START - Cookbook sre.hosts.reimage for host puppetserver1001.eqiad.wmnet with OS bookworm [production]
14:24 <jbond@cumin1001> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host puppetserver1001.eqiad.wmnet with OS bookworm [production]
14:23 <jclark@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host dbproxy1027.eqiad.wmnet with OS bullseye [production]
14:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1156', diff saved to https://phabricator.wikimedia.org/P49104 and previous config saved to /var/cache/conftool/dbconfig/20230607-141709-ladsgroup.json [production]
14:17 <aborrero@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cloudnet2006-dev [production]