2022-02-08
07:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1123 (T300402)', diff saved to https://phabricator.wikimedia.org/P20238 and previous config saved to /var/cache/conftool/dbconfig/20220208-070339-marostegui.json [production]
07:03 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
07:03 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
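(The 07:03 entries pair an Icinga downtime cookbook run with a `dbctl` depool. A minimal sketch of that sequence, assuming current flag names, which the log does not show; host and task id are from the entries above:

```
# Hedged sketch of a depool-plus-downtime sequence; --hours/--reason spellings
# are assumptions, not verified against the cookbook version in use here.
sudo cookbook sre.hosts.downtime --hours 6 --reason "Maintenance" 'db1123.eqiad.wmnet'
sudo dbctl instance db1123 depool
sudo dbctl config commit -m "Depooling db1123 (T300402)"
```
)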
06:55 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2134.codfw.wmnet with OS bullseye [production]
06:25 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on 6 hosts with reason: Maintenance [production]
06:25 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on 6 hosts with reason: Maintenance [production]
06:25 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
06:25 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
06:22 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host db2134.codfw.wmnet with OS bullseye [production]
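(The 06:22/06:55 pair is one `sre.hosts.reimage` run for db2134. Probable shape of the invocation, with the exact flag spelling an assumption; OS and host are from the log:

```
# Hedged sketch: reimage db2134 onto Debian bullseye via the cookbook runner.
sudo cookbook sre.hosts.reimage --os bullseye db2134
```
)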
06:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1113:3316 (T300775)', diff saved to https://phabricator.wikimedia.org/P20237 and previous config saved to /var/cache/conftool/dbconfig/20220208-060943-marostegui.json [production]
06:09 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1113.eqiad.wmnet with reason: Maintenance [production]
06:09 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1113.eqiad.wmnet with reason: Maintenance [production]
06:04 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
06:04 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
06:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove contributions group from s1 eqiad T263127', diff saved to https://phabricator.wikimedia.org/P20236 and previous config saved to /var/cache/conftool/dbconfig/20220208-060310-marostegui.json [production]
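(One plausible way a replica-group change like the 06:03 commit is made: edit the instance record, then commit. `db1163` is a hypothetical host stand-in, and the `edit` subcommand usage is an assumption:

```
# Hedged sketch; the host is hypothetical, the commit message is from the log.
sudo dbctl instance db1163 edit   # drop 'contributions' from the s1 groups stanza
sudo dbctl config commit -m "Remove contributions group from s1 eqiad T263127"
```
)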
02:30 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
02:29 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
02:29 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
02:28 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
02:07 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
02:05 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
02:05 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
02:03 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
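(Each START/DONE pair above is one helmfile deploy of the mwdebug chart per datacenter. A sketch of what that corresponds to on deploy1002; the chart path is from the log, the working directory and `-e` environment-flag usage are assumptions:

```
# Hedged sketch of one deploy cycle; "apply" is logged at START and the
# resulting "sync" at DONE once the diff is confirmed.
cd /srv/deployment-charts/helmfile.d/services/mwdebug
helmfile -e eqiad apply
helmfile -e codfw apply
```
)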
00:12 <ryankemper> T294805 Re-enabling puppet across eqiad elastic fleet: `ryankemper@cumin1001:~$ sudo cumin -b 8 'elastic1*' 'sudo enable-puppet "Add new eqiad replacement hosts elastic10[68-83] - T294805 - root" && sudo run-puppet-agent'` tmux session `elastic` [production]
00:12 <ryankemper> T294805 old psi masters are out, done with all elastic master operations [production]
00:05 <ryankemper> T294805 new psi masters `elastic1073`, `elastic1075`, and `elastic1083` are in [production]
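(A quick way to confirm a master swap like the one logged at 00:05/00:12 took effect is the standard Elasticsearch cat API; the psi cluster's HTTP port here is an assumption:

```
# Hedged sketch: show the currently elected master node (port is a guess for
# the psi cluster; adjust to the cluster's actual HTTP port).
curl -s 'http://localhost:9400/_cat/master?v'
```
)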
2022-02-07
23:39 <ryankemper> T294805 Removed old masters `elastic1034` and `elastic1038` (and `elastic1040` was removed earlier) [production]
23:35 <ryankemper> T294805 Bringing in new omega master `elastic1057` [production]
23:31 <ryankemper> T294805 Bringing in new omega master `elastic1076` [production]
23:27 <ryankemper> T294805 Bringing in new master `elastic1068` [production]
23:27 <ryankemper> T294805 Main search cluster all done, proceeding to `omega` cluster [production]
23:19 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host mc2053.mgmt.codfw.wmnet with reboot policy FORCED [production]
23:17 <cwhite> end opensearch upgrade (eqiad) T299168 [production]
23:09 <ryankemper> T294805 Kicking out the final master `elastic1036` (which is also the currently elected leader); after this we'll be back to 3 masters as intended [production]
23:06 <ryankemper> T294805 Running puppet and restarting elasticsearch services on `elastic1040` to make it no longer a master [production]
23:04 <ryankemper> T294805 Bringing in new master `elastic1081`: `sudo systemctl restart elasticsearch_6@production-search-eqiad.service elasticsearch_6@production-search-psi-eqiad.service` [production]
23:04 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host mc2053.mgmt.codfw.wmnet with reboot policy FORCED [production]
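(The mc205x entries are `sre.hosts.provision` runs against the hosts' mgmt interfaces. The cookbook's own flags, including how "reboot policy FORCED" is passed, are not shown in the log, so only the outer runner invocation is sketched, and the positional host argument is an assumption:

```
# Hedged sketch of the generic cookbook runner shape only.
sudo cookbook sre.hosts.provision mc2053
```
)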
23:04 <ryankemper> T294805 Bringing in new master `elastic1081`: `sudo enable-puppet "Add new eqiad replacement hosts elastic10[68-83] - T294805 - root" && sudo run-puppet-agent` [production]
22:59 <ryankemper> T294805 `sudo systemctl restart elasticsearch_6@production-search-eqiad.service elasticsearch_6@production-search-omega-eqiad.service` on `elastic1074` [production]
22:59 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host mc2052.mgmt.codfw.wmnet with reboot policy FORCED [production]
22:57 <ryankemper> T294805 Running puppet agent on new master elastic1074.eqiad.wmnet: `sudo enable-puppet "Add new eqiad replacement hosts elastic10[68-83] - T294805 - root" && sudo run-puppet-agent` [production]
22:48 <ryankemper> T294805 Disabled puppet across all of elastic1* in preparation for bringing new master hosts in [production]
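(The 22:48 disable is the counterpart of the 00:12 enable on 2022-02-08; the WMF `enable-puppet` wrapper must be given the same reason string used at disable time, so the disable presumably looked like this, with the cumin batching an assumption:

```
# Hedged sketch; reason string inferred from the matching enable command
# logged at 00:12 on 2022-02-08.
sudo cumin 'elastic1*' 'disable-puppet "Add new eqiad replacement hosts elastic10[68-83] - T294805 - root"'
```
)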
22:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3317 (T298554)', diff saved to https://phabricator.wikimedia.org/P20235 and previous config saved to /var/cache/conftool/dbconfig/20220207-224733-ladsgroup.json [production]
22:45 <inflatador> T294805 puppet-merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/736118 [production]
22:44 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host mc2052.mgmt.codfw.wmnet with reboot policy FORCED [production]
22:35 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host mc2051.mgmt.codfw.wmnet with reboot policy FORCED [production]
22:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3317', diff saved to https://phabricator.wikimedia.org/P20234 and previous config saved to /var/cache/conftool/dbconfig/20220207-223228-ladsgroup.json [production]
22:25 <cwhite> begin opensearch upgrade (eqiad) T299168 [production]
22:21 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host mc2051.mgmt.codfw.wmnet with reboot policy FORCED [production]
22:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3317', diff saved to https://phabricator.wikimedia.org/P20233 and previous config saved to /var/cache/conftool/dbconfig/20220207-221723-ladsgroup.json [production]
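(The 22:17, 22:32, and 22:47 commits look like a staged repool of db1098:3317, raising its pooled percentage in steps after maintenance. A sketch of one such step; the percentage flag spelling and value are assumptions, the instance and task id are from the log:

```
# Hedged sketch of one step of a gradual repool.
sudo dbctl instance db1098:3317 pool -p 75
sudo dbctl config commit -m "Repooling after maintenance db1098:3317 (T298554)"
```
)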