2022-07-21
23:55 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316', diff saved to https://phabricator.wikimedia.org/P31681 and previous config saved to /var/cache/conftool/dbconfig/20220721-235551-ladsgroup.json [production]
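The recurring "Depooling …" / "Repooling after maintenance …" commits in this log are the standard dbctl maintenance cycle: take the replica out of rotation, do the work, then pool it back in steps. A minimal sketch of the underlying commands (flags from memory; the stepped repooling seen at 15-minute intervals here is driven by a wrapper script, not typed by hand):

  # take the replica out of rotation and commit the change (T312863)
  dbctl instance db1098:3316 depool
  dbctl config commit -m 'Depooling db1098:3316 (T312863)'
  # after maintenance, return it gradually, e.g. 25% -> 75% -> 100%
  dbctl instance db1098:3316 pool -p 25
  dbctl config commit -m 'Repooling after maintenance db1098:3316 (T312863)'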
23:53 <mutante> https://policy.wikimedia.org moved from WordPress DNS back to WMF DNS; it now redirects to https://wikimediafoundation.org/advocacy/ as requested in T310738. This might also resolve T132104, or not, since wikimediafoundation.org is also on WordPress VIP. [production]
23:40 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31680 and previous config saved to /var/cache/conftool/dbconfig/20220721-234045-ladsgroup.json [production]
23:22 <mutante> [cumin2002:~] $ sudo cumin 'C:profile::httpbb' "rm /srv/deployment/httpbb-tests/appserver/test_search.yaml" [production]
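The file removed above is one of the httpbb test suites kept on the cumin hosts; those suites are normally run against an appserver roughly like this (a sketch assuming the usual --hosts invocation):

  # run the appserver suite against a debug host
  httpbb /srv/deployment/httpbb-tests/appserver/*.yaml --hosts mwdebug1001.eqiad.wmnet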
23:12 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2045.codfw.wmnet with OS bullseye [production]
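The elastic2045 reimage above (started at 22:26 below, after the failed 22:05 attempt) is driven by the sre.hosts.reimage cookbook; a typical invocation looks roughly like the following sketch (the hostname form and exact flags are assumptions), and the 2:00:00 "host reimage" downtime entries below are scheduled by the cookbook itself:

  # reimage the host to Debian bullseye, run from a cumin host
  sudo cookbook sre.hosts.reimage --os bullseye elastic2045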
22:55 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2045.codfw.wmnet with reason: host reimage [production]
22:52 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2045.codfw.wmnet with reason: host reimage [production]
22:30 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on dbstore1003.eqiad.wmnet with reason: Maintenance [production]
22:30 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on dbstore1003.eqiad.wmnet with reason: Maintenance [production]
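The START/END pair above is the sre.hosts.downtime cookbook silencing alerts for the maintenance window; roughly (a sketch, flag names from memory):

  # downtime dbstore1003 for 10 hours while maintenance runs
  sudo cookbook sre.hosts.downtime --hours 10 -r "Maintenance" 'dbstore1003.eqiad.wmnet'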
22:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T312984)', diff saved to https://phabricator.wikimedia.org/P31678 and previous config saved to /var/cache/conftool/dbconfig/20220721-223048-ladsgroup.json [production]
22:30 <mutante> re-enabling puppet on all remaining 'C:profile::mediawiki::httpd' [production]
22:26 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2045.codfw.wmnet with OS bullseye [production]
22:15 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P31677 and previous config saved to /var/cache/conftool/dbconfig/20220721-221543-ladsgroup.json [production]
22:09 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host elastic2045.codfw.wmnet with OS bullseye [production]
22:05 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2045.codfw.wmnet with OS bullseye [production]
22:02 <dancy@deploy1002> Installation of scap version "4.11.3" completed for 559 hosts [production]
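The two scap entries here record scap 4.11.3 being pushed from the deploy host to its 559 targets; assuming this used scap's self-install subcommand, the command is roughly:

  # install the current scap version on all scap targets
  # (assumes the subcommand is install-world; verify with `scap --help` on deploy1002)
  scap install-world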
22:02 <dancy@deploy1002> Installing scap version "4.11.3" for 559 hosts [production]
22:00 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P31676 and previous config saved to /var/cache/conftool/dbconfig/20220721-220038-ladsgroup.json [production]
21:56 <mutante> re-enabling puppet on mw2 in groups (codfw) [production]
21:48 <mutante> re-enabling puppet on parsoid (wtp*) [production]
21:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T312984)', diff saved to https://phabricator.wikimedia.org/P31675 and previous config saved to /var/cache/conftool/dbconfig/20220721-214532-ladsgroup.json [production]
21:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1098:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31674 and previous config saved to /var/cache/conftool/dbconfig/20220721-213246-ladsgroup.json [production]
21:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
21:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
21:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31673 and previous config saved to /var/cache/conftool/dbconfig/20220721-213237-ladsgroup.json [production]
21:17 <mutante> puppet re-enabled on mw-api-canary and parsoid-canary [production]
21:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316', diff saved to https://phabricator.wikimedia.org/P31672 and previous config saved to /var/cache/conftool/dbconfig/20220721-211732-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316', diff saved to https://phabricator.wikimedia.org/P31671 and previous config saved to /var/cache/conftool/dbconfig/20220721-210226-ladsgroup.json [production]
20:52 <mutante> deploying apache config change on the cluster, slowly: puppet disabled on C:profile::mediawiki::httpd, then re-enabling starting with mwdebug and using httpbb to test, then re-enabling puppet on more hosts. https://gerrit.wikimedia.org/r/c/operations/puppet/+/809324 Bug: T310738 [production]
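The staged rollout described in the 20:52 entry roughly follows this pattern; the disable-puppet/enable-puppet wrappers and the mwdebug host selection below are assumptions reconstructed from the log, not the exact commands used:

  # freeze apache config on all MediaWiki appservers before merging the change
  sudo cumin 'C:profile::mediawiki::httpd' 'disable-puppet "deploying gerrit:809324 - T310738"'
  # let the debug/canary hosts pick it up first, then verify with httpbb
  sudo cumin 'A:mw-debug' 'enable-puppet "deploying gerrit:809324 - T310738"; run-puppet-agent'
  httpbb /srv/deployment/httpbb-tests/appserver/*.yaml --hosts mwdebug1001.eqiad.wmnet
  # then widen in groups: canaries, parsoid (wtp*), codfw, and finally the rest (see the 21:17-22:30 entries above)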
20:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31670 and previous config saved to /var/cache/conftool/dbconfig/20220721-204721-ladsgroup.json [production]
20:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1158 (T312984)', diff saved to https://phabricator.wikimedia.org/P31669 and previous config saved to /var/cache/conftool/dbconfig/20220721-204518-ladsgroup.json [production]
20:45 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 20:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
20:45 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 20:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
20:45 <dancy@deploy1002> backport aborted: (duration: 00m 02s) [production]
20:44 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
20:44 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
20:39 <mutante> disabling puppet on mw appservers to deploy gerrit:809324 - T310738 [production]
20:34 <cjming> end of UTC late backport window [production]
20:34 <bd808> Proof of life for stashbot processing !logs [production]
20:33 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:32 <cjming@deploy1002> Synchronized wmf-config: Config: [[gerrit:814907|Deploy grid to all wikis (T312241)]] (duration: 03m 13s) [production]
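The "Synchronized wmf-config" entry above is the output of a scap backport during the UTC late window; assuming the change-number form is accepted, the deployer's command is roughly:

  # cherry-pick gerrit:814907 to the deployment branch and sync it (T312241)
  scap backport 814907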
20:31 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
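The mwdebug-deploy entries above correspond, per datacenter, to a helmfile apply of the mwdebug service; on the deploy host this is roughly the following (the deployment-charts path and -e environment flag are assumptions):

  cd /srv/deployment-charts/helmfile.d/services/mwdebug
  helmfile -e eqiad apply    # repeated with -e codfw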