2022-01-26
00:11 <ryankemper> T294805 Reverted https://gerrit.wikimedia.org/r/c/operations/puppet/+/757003 (elasticsearch-oss dependency issues, will pick this back up tomorrow); re-enabling puppet across elastic1* [production]
00:03 <ryankemper> T294805 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/757003; running puppet on `elastic1068` to make it join the fleet [production]
2022-01-25
23:42 <ryankemper> T294805 [Elastic] Step 2: Disabling puppet in advance of merge of https://gerrit.wikimedia.org/r/c/operations/puppet/+/736117 [production]
23:20 <ryankemper> T294805 [Elastic] Merged https://gerrit.wikimedia.org/r/736116, step 1 of bringing new eqiad 10G refresh hosts into service [production]
21:20 <bblack@cumin1001> conftool action : set/weight=100; selector: dc=drmrs,service=ats-be [production]
21:20 <bblack@cumin1001> conftool action : set/weight=1; selector: dc=drmrs,service=varnish-fe [production]
21:20 <bblack@cumin1001> conftool action : set/weight=1; selector: dc=drmrs,service=ats-tls [production]
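(The three conftool actions above are the kind of change made with confctl; a minimal sketch, assuming the commands were run from a cumin host — selectors are copied from the log entries, the exact invocation is an assumption:)
  # Hedged sketch: set drmrs ats-be weight to 100 and the TLS/frontend layers to weight 1.
  confctl select 'dc=drmrs,service=ats-be' set/weight=100
  confctl select 'dc=drmrs,service=varnish-fe' set/weight=1
  confctl select 'dc=drmrs,service=ats-tls' set/weight=1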
21:03 <cwhite> end transition to logstash output opensearch plugin T299168 [production]
20:41 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
20:35 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
20:35 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
20:29 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
20:18 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
20:17 <cwhite> begin transition to logstash output opensearch plugin T299168 [production]
20:12 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
20:12 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
20:08 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
20:05 <brennen@deploy1002> rebuilt and synchronized wikiversions files: group0 wikis to 1.38.0-wmf.19 refs T293960 [production]
20:03 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host backup1008.eqiad.wmnet with OS buster [production]
20:01 <brennen> train 1.38.0-wmf.19 (T293960): testwiki sync finished, still no open blockers, proceeding to group0 [production]
19:50 <brennen@deploy1002> Finished scap: testwikis wikis to 1.38.0-wmf.19 refs T293960 (duration: 52m 01s) [production]
19:38 <cmjohnson@cumin1001> START - Cookbook sre.hosts.reimage for host backup1008.eqiad.wmnet with OS buster [production]
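(The START/END lines for backup1008 record a run of the sre.hosts.reimage cookbook; a minimal sketch of how such a run is launched from a cumin host — host and OS are taken from the log, other details are assumptions:)
  # Hedged sketch: reimage backup1008 with Debian buster via the spicerack cookbook.
  sudo cookbook sre.hosts.reimage --os buster backup1008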
19:37 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:35 <cmjohnson1> updating firmware ganeti1006 T299527 [production]
19:31 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
19:31 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
19:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Make es1028 master of es3 T299911', diff saved to https://phabricator.wikimedia.org/P19221 and previous config saved to /var/cache/conftool/dbconfig/20220125-191238-ladsgroup.json [production]
19:09 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1028 (T299911)', diff saved to https://phabricator.wikimedia.org/P19220 and previous config saved to /var/cache/conftool/dbconfig/20220125-190949-ladsgroup.json [production]
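(The dbctl entries above pair an instance change with a config commit; a minimal sketch of the corresponding commands on a cumin host — instance name and message come from the log, the percentage flag is an assumption:)
  # Hedged sketch: repool es1028 and commit the change with the message seen in the log.
  dbctl instance es1028 pool -p 100
  dbctl config commit -m 'Repooling after maintenance es1028 (T299911)'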
19:04 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:04 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on ganeti1006.eqiad.wmnet with reason: Remove from Ganeti cluster for reimage [production]
19:04 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on ganeti1006.eqiad.wmnet with reason: Remove from Ganeti cluster for reimage [production]
19:03 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
19:03 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
19:02 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
18:58 <brennen@deploy1002> Started scap: testwikis wikis to 1.38.0-wmf.19 refs T293960 [production]
18:57 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
18:56 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
18:56 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
18:55 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
18:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1028', diff saved to https://phabricator.wikimedia.org/P19219 and previous config saved to /var/cache/conftool/dbconfig/20220125-185444-ladsgroup.json [production]
18:47 <marostegui@cumin1001> dbctl commit (dc=all): 'es1022 (re)pooling @ 100%: repooling after reimage', diff saved to https://phabricator.wikimedia.org/P19218 and previous config saved to /var/cache/conftool/dbconfig/20220125-184714-root.json [production]
18:44 <jelto@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host gitlab-runner1001.eqiad.wmnet [production]
18:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1028', diff saved to https://phabricator.wikimedia.org/P19217 and previous config saved to /var/cache/conftool/dbconfig/20220125-183940-ladsgroup.json [production]
18:38 <jgiannelos@deploy1002> helmfile [eqiad] DONE helmfile.d/services/proton: sync on production [production]
18:34 <jgiannelos@deploy1002> helmfile [eqiad] START helmfile.d/services/proton: apply on production [production]
18:33 <jgiannelos@deploy1002> helmfile [codfw] DONE helmfile.d/services/proton: sync on production [production]
18:32 <marostegui@cumin1001> dbctl commit (dc=all): 'es1022 (re)pooling @ 75%: repooling after reimage', diff saved to https://phabricator.wikimedia.org/P19216 and previous config saved to /var/cache/conftool/dbconfig/20220125-183210-root.json [production]
18:31 <jgiannelos@deploy1002> helmfile [codfw] START helmfile.d/services/proton: apply on production [production]
18:30 <jgiannelos@deploy1002> helmfile [staging] DONE helmfile.d/services/proton: sync on production [production]
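(The helmfile entries in this section, for mwdebug and proton, record per-environment apply/sync runs from deploy1002; a minimal sketch of such a deployment, assuming the usual deployment-charts layout — the chart name comes from the log, the working directory is an assumption:)
  # Hedged sketch: apply the proton service chart in the eqiad environment.
  cd /srv/deployment-charts/helmfile.d/services/proton
  helmfile -e eqiad apply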