2022-07-21
22:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T312984)', diff saved to https://phabricator.wikimedia.org/P31678 and previous config saved to /var/cache/conftool/dbconfig/20220721-223048-ladsgroup.json [production]
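Note: the "Depooling" / "Repooling after maintenance" dbctl entries throughout this page follow the standard cycle of taking a replica out of rotation, running the maintenance, then restoring traffic in steps. A minimal sketch of the commands behind such entries, run on a cumin host; the percentage and commit messages are illustrative rather than taken from this log:
    sudo dbctl instance db1158 depool
    sudo dbctl config commit -m "Depooling db1158 (T312984)"
    # ... maintenance on the host ...
    sudo dbctl instance db1158 pool -p 25
    sudo dbctl config commit -m "Repooling after maintenance db1158 (T312984)"
    # repeated with increasing percentages until the instance is fully pooled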
22:30 <mutante> re-enabling puppet on all remaining 'C:profile::mediawiki::httpd' [production]
22:26 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2045.codfw.wmnet with OS bullseye [production]
22:15 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P31677 and previous config saved to /var/cache/conftool/dbconfig/20220721-221543-ladsgroup.json [production]
22:09 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host elastic2045.codfw.wmnet with OS bullseye [production]
22:05 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2045.codfw.wmnet with OS bullseye [production]
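Note: the elastic2045 entries come from the sre.hosts.reimage cookbook, which reinstalls the host with the named OS and re-runs Puppet; the 22:05 attempt failed at 22:09 with exit_code=99 and was retried at 22:26. A hedged sketch of a typical invocation from a cumin host, assuming the cookbook's usual options (exact flags may differ by version):
    sudo cookbook sre.hosts.reimage --os bullseye -t T289135 elastic2045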
22:02 <dancy@deploy1002> Installation of scap version "4.11.3" completed for 559 hosts [production]
22:02 <dancy@deploy1002> Installing scap version "4.11.3" for 559 hosts [production]
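Note: the two scap lines record the deployment tooling updating itself from deploy1002 to all scap targets. Assuming current scap tooling, this roughly corresponds to running the following on the deploy host; the subcommand is an assumption, since the log only shows the resulting install messages:
    scap install-world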
22:00 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P31676 and previous config saved to /var/cache/conftool/dbconfig/20220721-220038-ladsgroup.json [production]
21:56 <mutante> re-enabling puppet on mw2 in groups (codfw) [production]
21:48 <mutante> re-enabling puppet on parsoid (wtp*) [production]
21:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T312984)', diff saved to https://phabricator.wikimedia.org/P31675 and previous config saved to /var/cache/conftool/dbconfig/20220721-214532-ladsgroup.json [production]
21:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1098:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31674 and previous config saved to /var/cache/conftool/dbconfig/20220721-213246-ladsgroup.json [production]
21:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
21:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
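Note: the START/END pairs for sre.hosts.downtime come from the cookbook that silences monitoring for a host before maintenance. A rough sketch of the invocation behind the entry above, with flag names assumed from the cookbook's usual interface:
    sudo cookbook sre.hosts.downtime --days 1 -r "Maintenance" 'db1098.eqiad.wmnet'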
21:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31673 and previous config saved to /var/cache/conftool/dbconfig/20220721-213237-ladsgroup.json [production]
21:17 <mutante> puppet re-enabled on mw-api-canary and parsoid-canary [production]
21:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316', diff saved to https://phabricator.wikimedia.org/P31672 and previous config saved to /var/cache/conftool/dbconfig/20220721-211732-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316', diff saved to https://phabricator.wikimedia.org/P31671 and previous config saved to /var/cache/conftool/dbconfig/20220721-210226-ladsgroup.json [production]
20:52 <mutante> deploying apache config change on cluster, slowly..puppet disabled on C:profile::mediawiki::httpd .. then re-enabling starting with mwdebug.. using httpbb to test it.. then re-enabling puppet on more hosts https://gerrit.wikimedia.org/r/c/operations/puppet/+/809324 Bug: T310738 [production]
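Note: the rollout described in this entry is staged: puppet is disabled on every host carrying C:profile::mediawiki::httpd, the Apache change is merged, puppet is re-enabled on mwdebug first, the result is smoke-tested with httpbb, and only then is puppet re-enabled on the remaining groups (the canary, parsoid, codfw and appserver entries above). A hedged sketch of the commands involved, run from a cumin host; the Cumin aliases, test-suite paths and reason strings are illustrative:
    sudo cumin 'C:profile::mediawiki::httpd' 'disable-puppet "mutante - deploying gerrit:809324 - T310738"'
    sudo cumin 'A:mw-debug' 'enable-puppet "mutante - deploying gerrit:809324 - T310738"'
    sudo cumin 'A:mw-debug' 'run-puppet-agent'
    httpbb /srv/deployment/httpbb-tests/appserver/*.yaml --host mwdebug1001.eqiad.wmnet
    # then enable-puppet and run-puppet-agent on the canary, api, appserver and parsoid groups in turn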
20:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316 (T312863)', diff saved to https://phabricator.wikimedia.org/P31670 and previous config saved to /var/cache/conftool/dbconfig/20220721-204721-ladsgroup.json [production]
20:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1158 (T312984)', diff saved to https://phabricator.wikimedia.org/P31669 and previous config saved to /var/cache/conftool/dbconfig/20220721-204518-ladsgroup.json [production]
20:45 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 20:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
20:45 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 20:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
20:45 <dancy@deploy1002> backport aborted: (duration: 00m 02s) [production]
20:44 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
20:44 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on db1158.eqiad.wmnet with reason: Maintenance [production]
20:39 <mutante> disabling puppet on mw appservers to deploy gerrit:809324 - T310738 [production]
20:34 <cjming> end of UTC late backport window [production]
20:34 <bd808> Proof of life for stashbot processing !logs [production]
20:33 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:32 <cjming@deploy1002> Synchronized wmf-config: Config: [[gerrit:814907|Deploy grid to all wikis (T312241)]] (duration: 03m 13s) [production]
20:31 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
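Note: the mwdebug-deploy entries that bracket each config sync record an automatic redeploy of the mwdebug service on Kubernetes in both datacenters. Done by hand, the equivalent on the deployment host would be roughly the following; the chart directory is an assumption:
    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e eqiad apply
    helmfile -e codfw apply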
20:28 <andrewbogott> testing the log by logging a test [production]
20:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1161 (T312863)', diff saved to https://phabricator.wikimedia.org/P31668 and previous config saved to /var/cache/conftool/dbconfig/20220721-202348-ladsgroup.json [production]
20:23 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on clouddb[1016,1020-1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
20:23 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on clouddb[1016,1020-1021].eqiad.wmnet,db1154.eqiad.wmnet with reason: Maintenance [production]
20:23 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1161.eqiad.wmnet with reason: Maintenance [production]
20:23 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1161.eqiad.wmnet with reason: Maintenance [production]
20:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1110 (T312863)', diff saved to https://phabricator.wikimedia.org/P31667 and previous config saved to /var/cache/conftool/dbconfig/20220721-202311-ladsgroup.json [production]
20:21 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:20 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:20 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:20 <cjming@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:814909|Revert "cirrus: Dont recycle completion suggester indices"]] (duration: 02m 56s) [production]
20:19 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1110', diff saved to https://phabricator.wikimedia.org/P31666 and previous config saved to /var/cache/conftool/dbconfig/20220721-200806-ladsgroup.json [production]
19:56 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REIMAGE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reimage (bullseye upgrade) - bking@cumin1001 - T289135 [production]
19:54 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REIMAGE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reimage (bullseye upgrade) - bking@cumin1001 - T289135 [production]