2024-03-27
08:38 <hashar> UTC morning backport window completed [production]
08:37 <hashar@deploy1002> Finished scap: Backport for [[gerrit:983905|Add webrequest.frontend.rc0 stream (T314956 T351117)]] (duration: 20m 59s) [production]
08:25 <hashar@deploy1002> otto and hashar: Continuing with sync [production]
08:20 <hashar@deploy1002> otto and hashar: Backport for [[gerrit:983905|Add webrequest.frontend.rc0 stream (T314956 T351117)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:16 <hashar@deploy1002> Started scap: Backport for [[gerrit:983905|Add webrequest.frontend.rc0 stream (T314956 T351117)]] [production]
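For reference, a backport like the one logged above is started on the deployment host with scap's backport command; a minimal sketch, assuming only the change number taken from the Gerrit link above:

 # On deploy1002: fetch the Gerrit change, sync it to the mwdebug
 # testservers for verification, then prompt before syncing everywhere.
 scap backport 983905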
07:14 <fabfur@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on 8 hosts with reason: preparing for new disk [production]
07:14 <fabfur@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on 8 hosts with reason: preparing for new disk [production]
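The START/END pairs above are emitted by the Spicerack cookbook runner on the cumin hosts. A hedged sketch of the kind of invocation behind them; the flag names and the host query are assumptions (the 8 hosts are not named in the log), so check the cookbook's --help for the real interface:

 # From a cumin host: silence alerting for one day on the affected hosts.
 # 'example100[1-8].eqiad.wmnet' is a placeholder Cumin query, not the
 # real host list.
 sudo cookbook sre.hosts.downtime --days 1 --reason "preparing for new disk" 'example100[1-8].eqiad.wmnet'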
07:11 <kart_> Updated MinT to 2024-03-26-120044-production (T347930, T355304, T349487) [production]
07:09 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
07:00 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/machinetranslation: apply [production]
06:57 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/machinetranslation: apply [production]
06:48 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/machinetranslation: apply [production]
06:38 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/machinetranslation: apply [production]
06:32 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/machinetranslation: apply [production]
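The helmfile START/DONE lines above record a Kubernetes deploy of the machinetranslation (MinT) service, rolled out staging first and then one production datacenter at a time. A minimal sketch of the underlying commands, assuming the usual /srv/deployment-charts layout on deploy1002 rather than anything stated in this log:

 # On deploy1002: apply the chart per environment, staging first.
 cd /srv/deployment-charts/helmfile.d/services/machinetranslation
 helmfile -e staging -i apply   # 06:32-06:38 above
 helmfile -e codfw -i apply     # 06:48-06:57
 helmfile -e eqiad -i apply     # 07:00-07:09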
05:57 <fabfur> running authdns-update on dns1004 to depool ESAMS (T360430) [production]
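As I understand the workflow (an assumption, not something this log states), depooling an edge site such as esams from GeoDNS means marking it down in the operations/dns repository and then pushing the change from an authoritative DNS host:

 # On dns1004, after the esams depool change has been merged in the
 # operations/dns repo: rebuild and deploy the DNS/GeoDNS configuration.
 sudo authdns-update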
04:55 <eileen> civicrm upgraded from 143aa0bf to 2e0ac12f [production]
01:35 <ryankemper@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2037*,elastic2038*,elastic2041*,elastic2042*,elastic2045*,elastic2046*,elastic2047*,elastic2050*,elastic2051*,elastic2052*,elastic2039*,elastic2040*,elastic2043*,elastic2044*,elastic2048*,elastic2053*,elastic2054* for prepare for decom of hosts - ryankemper@cumin2002 - T358882 [production]
01:35 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2037*,elastic2038*,elastic2041*,elastic2042*,elastic2045*,elastic2046*,elastic2047*,elastic2050*,elastic2051*,elastic2052*,elastic2039*,elastic2040*,elastic2043*,elastic2044*,elastic2048*,elastic2053*,elastic2054* for prepare for decom of hosts - ryankemper@cumin2002 - T358882 [production]
01:31 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dbprov2005.codfw.wmnet with OS bullseye [production]
01:10 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dbprov2005.codfw.wmnet with reason: host reimage [production]
01:07 <pt1979@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on dbprov2005.codfw.wmnet with reason: host reimage [production]
01:06 <ryankemper> T358882 Updated remote cluster seeds for new master state [production]
01:06 <ryankemper> [WDQS] Restarted `wdqs-blazegraph` and `wdqs-updater` on `wdqs1013` and depooled to catch up on lag [production]
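A sketch of the restart-and-depool pattern behind the WDQS entry above, assuming the systemd unit names match the backticked service names and that the standard conftool pool/depool helpers are available on the host; the exact commands are not recorded in the log:

 # On wdqs1013: take the host out of rotation, restart Blazegraph and the
 # updater, and leave it depooled until it has caught up on lag.
 sudo depool
 sudo systemctl restart wdqs-blazegraph wdqs-updater
 # Later, once lag has recovered:
 # sudo pool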
00:53 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host dbprov2005.codfw.wmnet with OS bullseye [production]
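The dbprov2005 reimage entries here and in the 2024-03-26 section below are likewise cookbook runs. A hedged sketch of the sort of command that produces them; the argument names are from memory rather than from this log, so verify with the cookbook's --help:

 # From a cumin host: reinstall dbprov2005 with Debian bullseye. The
 # cookbook schedules its own 'host reimage' downtime (see the 01:07-01:10
 # entries above) and brings the host back up under Puppet when done.
 sudo cookbook sre.hosts.reimage --os bullseye dbprov2005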
2024-03-26
23:48 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db1170 (T352010)', diff saved to https://phabricator.wikimedia.org/P58936 and previous config saved to /var/cache/conftool/dbconfig/20240326-234806-ladsgroup.json [production]
23:47 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
23:47 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
23:47 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T352010)', diff saved to https://phabricator.wikimedia.org/P58935 and previous config saved to /var/cache/conftool/dbconfig/20240326-234743-ladsgroup.json [production]
23:32 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P58934 and previous config saved to /var/cache/conftool/dbconfig/20240326-233235-ladsgroup.json [production]
23:30 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host dbprov2005.codfw.wmnet with OS bullseye [production]
23:17 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P58932 and previous config saved to /var/cache/conftool/dbconfig/20240326-231728-ladsgroup.json [production]
23:02 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T352010)', diff saved to https://phabricator.wikimedia.org/P58931 and previous config saved to /var/cache/conftool/dbconfig/20240326-230220-ladsgroup.json [production]
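The dbctl lines above are the commit messages conftool's dbctl records each time the MediaWiki database configuration changes; the staged "Repooling after maintenance db1158" commits suggest the replica was brought back gradually. A minimal sketch of the flow, with the subcommand spelling an assumption on my part:

 # On a cumin host: depool db1170 for maintenance, then commit. Each
 # commit saves the diff to a Phabricator paste and the previous config
 # under /var/cache/conftool/dbconfig/, exactly as logged above.
 sudo dbctl instance db1170 depool
 sudo dbctl config commit -m 'Depooling db1170 (T352010)'

 # Once maintenance on db1158 is finished, pool it again and commit:
 sudo dbctl instance db1158 pool
 sudo dbctl config commit -m 'Repooling after maintenance db1158 (T352010)'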
22:58 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
22:56 <reedy@deploy1002> Finished scap: SecurePoll PopulateEditCount fix (duration: 25m 49s) [production]
22:55 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
22:39 <btullis@deploy1002> helmfile [eqiad] DONE helmfile.d/services/datahub: sync on main [production]
22:32 <btullis@deploy1002> helmfile [eqiad] START helmfile.d/services/datahub: apply on main [production]
22:32 <btullis@deploy1002> helmfile [codfw] DONE helmfile.d/services/datahub: sync on main [production]
22:30 <reedy@deploy1002> Started scap: SecurePoll PopulateEditCount fix [production]
22:24 <btullis@deploy1002> helmfile [codfw] START helmfile.d/services/datahub: apply on main [production]
22:24 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
22:20 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
22:15 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host dbprov2005.codfw.wmnet with OS bullseye [production]
21:45 <ryankemper@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: cycle some masters - ryankemper@cumin2002 - T358882 [production]
21:38 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host dbprov2005.codfw.wmnet with OS bullseye [production]
21:38 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
21:07 <vriley@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host dbprov1006.eqiad.wmnet with OS bullseye [production]
21:03 <catrope@deploy1002> Finished scap: Backport for [[gerrit:1010938|Add autopatrolled, rollbacker and suppressredirect user groups for ckbwiktionary (T360228)]] (duration: 17m 37s) [production]
20:57 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
20:57 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]