2023-06-07
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T336886)', diff saved to https://phabricator.wikimedia.org/P49121 and previous config saved to /var/cache/conftool/dbconfig/20230607-152456-ladsgroup.json [production]
15:24 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
15:24 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
15:24 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312 (T336886)', diff saved to https://phabricator.wikimedia.org/P49120 and previous config saved to /var/cache/conftool/dbconfig/20230607-152425-ladsgroup.json [production]
15:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T336886)', diff saved to https://phabricator.wikimedia.org/P49119 and previous config saved to /var/cache/conftool/dbconfig/20230607-152333-ladsgroup.json [production]
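The dbctl entries above (and the matching ones further down for db1143 and db1142) are iterations of the rolling maintenance cycle for T336886: depool one replica, downtime it, run the maintenance, then repool it. A minimal sketch of one iteration, assuming the standard dbctl and cookbook invocations; the exact flags are not shown in the log:

    # Depool the replica and commit the change to both datacenters:
    dbctl instance db1182 depool
    dbctl config commit -m "Depooling db1182 (T336886)"
    # Silence alerting for the maintenance window (12 hours above):
    sudo cookbook sre.hosts.downtime --hours 12 -r "Maintenance" 'db1182.eqiad.wmnet'
    # ...perform the maintenance...
    # Repool afterwards; the commit is what produces the Phabricator diff paste
    # and the cached previous config under /var/cache/conftool/dbconfig/ above.
    dbctl instance db1182 pool
    dbctl config commit -m "Repooling after maintenance db1182 (T336886)"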
15:23 <elukey> all varnishkafka instances on caching nodes are getting restarted due to https://gerrit.wikimedia.org/r/c/operations/puppet/+/928087 - T337825 [production]
15:22 <jiji@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
15:22 <cgoubert@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
15:22 <elukey> re-enable puppet on caching nodes [production]
15:22 <cgoubert@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
15:21 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
15:21 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
15:21 <claime> Bumping prewarmparsoid concurrency to 45 in changeprop-jobqueue - T320534 [production]
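The changeprop-jobqueue entries at 15:21-15:22 are the deployment of that concurrency bump from deploy1002, applied to codfw first and then eqiad. A rough sketch of such a deploy, assuming the usual deployment-charts layout on the deploy host; paths and flags are assumptions, not taken from the log:

    cd /srv/deployment-charts/helmfile.d/services/changeprop-jobqueue
    helmfile -e codfw diff    # review the rendered change first
    helmfile -e codfw apply   # logged above as START/DONE for codfw
    helmfile -e eqiad apply   # then the same for eqiad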
15:18 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1143 (T336886)', diff saved to https://phabricator.wikimedia.org/P49118 and previous config saved to /var/cache/conftool/dbconfig/20230607-151835-ladsgroup.json [production]
15:18 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
15:18 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
15:18 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142 (T336886)', diff saved to https://phabricator.wikimedia.org/P49117 and previous config saved to /var/cache/conftool/dbconfig/20230607-151815-ladsgroup.json [production]
15:17 <moritzm> installing libwebp security updates on buster [production]
15:17 <jbond@cumin2002> START - Cookbook sre.hosts.reimage for host puppetserver2001.codfw.wmnet with OS bookworm [production]
15:17 <jbond@cumin2002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host puppetserver2001.codfw.wmnet with OS bookworm [production]
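The puppetserver2001 reimage that started at 15:08 (further down) exited with error code 97 at 15:17 and was immediately retried, which is the START entry just above. A minimal sketch of the invocation, assuming the cookbook's usual interface; only the target OS is taken from the log, the host argument form is an assumption:

    sudo cookbook sre.hosts.reimage --os bookworm puppetserver2001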
15:14 <urandom> Upgrading Cassandra to 4.1.1, sessionstore2001 — T337426 [production]
15:14 <isaranto@deploy1002> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'experimental' for release 'main'. [production]
15:10 <elukey> disable puppet on all caching nodes to roll out a varnishkafka change (ref: https://gerrit.wikimedia.org/r/c/operations/puppet/+/928087) [production]
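The varnishkafka change (15:10-15:23) follows the usual pattern of disabling puppet fleet-wide, merging the change, then re-enabling so the restarts roll out under control. A sketch assuming Cumin with the standard puppet helper scripts; the host alias and batch size are assumptions:

    # Disable puppet on all caching nodes before merging the change:
    sudo cumin 'A:cp' 'disable-puppet "rolling out varnishkafka change - elukey"'
    # ...merge https://gerrit.wikimedia.org/r/c/operations/puppet/+/928087...
    # Re-enable and run puppet in small batches so varnishkafka instances
    # restart gradually across the fleet:
    sudo cumin -b 10 'A:cp' 'run-puppet-agent --enable "rolling out varnishkafka change - elukey"'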
15:09 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312', diff saved to https://phabricator.wikimedia.org/P49116 and previous config saved to /var/cache/conftool/dbconfig/20230607-150919-ladsgroup.json [production]
15:08 <jbond@cumin2002> START - Cookbook sre.hosts.reimage for host puppetserver2001.codfw.wmnet with OS bookworm [production]
15:07 <eevans@cumin1001> END (PASS) - Cookbook sre.discovery.service-route (exit_code=0) depool sessionstore in codfw: maintenance [production]
15:06 <jbond@cumin2002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) puppetserver2001.mgmt.codfw.wmnet on all recursors [production]
15:06 <jbond@cumin2002> START - Cookbook sre.dns.wipe-cache puppetserver2001.mgmt.codfw.wmnet on all recursors [production]
15:03 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142', diff saved to https://phabricator.wikimedia.org/P49115 and previous config saved to /var/cache/conftool/dbconfig/20230607-150309-ladsgroup.json [production]
15:02 <eevans@cumin1001> START - Cookbook sre.discovery.service-route depool sessionstore in codfw: maintenance [production]
15:02 <urandom> de-pooling sessionstore/codfw — T337426 [production]
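The sessionstore work (15:02-15:07) depools the codfw side of the service at the discovery layer ahead of the Cassandra 4.1.1 upgrade noted at 15:14. A sketch of the cookbook call, reconstructed from the log message; the argument order and flag names are assumptions:

    # Depool sessionstore in codfw at the DNS discovery layer:
    sudo cookbook sre.discovery.service-route depool sessionstore codfw --reason maintenance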
14:56 <sukhe> homer "cr*-codfw*" commit "Gerrit: 928068 remove decommissioned host lvs2010" [production]
14:54 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host puppetserver1001.eqiad.wmnet with OS bookworm [production]
14:54 <jbond@cumin1001> END (FAIL) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=99) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jbond@cumin1001" [production]
14:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312', diff saved to https://phabricator.wikimedia.org/P49114 and previous config saved to /var/cache/conftool/dbconfig/20230607-145413-ladsgroup.json [production]
14:54 <moritzm> installing postgresql 13 security updates (clients/libs, server instances all updated already) [production]
14:53 <jbond@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jbond@cumin1001" [production]
14:51 <jbond@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:50 <jbond@cumin2002> START - Cookbook sre.dns.netbox [production]
14:49 <jbond@cumin2002> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
14:49 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts lvs2010.codfw.wmnet [production]
14:49 <sukhe@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:48 <sukhe@cumin2002> START - Cookbook sre.dns.netbox [production]
14:48 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1142', diff saved to https://phabricator.wikimedia.org/P49112 and previous config saved to /var/cache/conftool/dbconfig/20230607-144803-ladsgroup.json [production]
14:43 <jbond@cumin2002> START - Cookbook sre.dns.netbox [production]
14:40 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on puppetserver1001.eqiad.wmnet with reason: host reimage [production]
14:40 <fabfur@cumin1001> END (PASS) - Cookbook sre.cdn.run-puppet-restart-varnish (exit_code=0) rolling custom on A:cp-upload_eqiad and A:cp [production]
14:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312 (T336886)', diff saved to https://phabricator.wikimedia.org/P49111 and previous config saved to /var/cache/conftool/dbconfig/20230607-143907-ladsgroup.json [production]
14:39 <sukhe@cumin2002> START - Cookbook sre.hosts.decommission for hosts lvs2010.codfw.wmnet [production]
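Read bottom-to-top, the lvs2010 decommission runs as: the sre.hosts.decommission cookbook starts at 14:39, sre.dns.netbox regenerates DNS from Netbox (14:48-14:49), the decommission completes at 14:49, and the core router configuration is then cleaned up with the homer commit at 14:56. A sketch of the operator-driven pieces; the task placeholder and decommission flags are assumptions, the homer line is quoted from the 14:56 entry:

    # Decommission the host (removes it from Netbox, DNS, puppet, and related
    # services, among other steps); T<task-id> is a placeholder, the actual
    # task is not in the log:
    sudo cookbook sre.hosts.decommission -t T<task-id> lvs2010.codfw.wmnet
    # Remove the host from the codfw core routers' configuration:
    homer "cr*-codfw*" commit "Gerrit: 928068 remove decommissioned host lvs2010"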
14:37 <jbond@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on puppetserver1001.eqiad.wmnet with reason: host reimage [production]