2022-09-29
ยง
|
10:50 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on 30 hosts with reason: Primary switchover s8 T318892 [production]
10:50 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 30 hosts with reason: Primary switchover s7 T318892 [production]
10:50 <ayounsi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on cr2-eqord,cr2-eqord IPv6 with reason: router upgrade [production]
10:50 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on 30 hosts with reason: Primary switchover s7 T318892 [production]
10:50 <ayounsi@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on cr2-eqord,cr2-eqord IPv6 with reason: router upgrade [production]
10:40 <XioNoX> repool cr2-eqiad - T295690 [production]
10:36 <moritzm> installing poppler security updates [production]
10:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2174 (T314041)', diff saved to https://phabricator.wikimedia.org/P35153 and previous config saved to /var/cache/conftool/dbconfig/20220929-100849-ladsgroup.json [production]
10:08 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2174.codfw.wmnet with reason: Maintenance [production]
10:08 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2174.codfw.wmnet with reason: Maintenance [production]
10:08 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2173 (T314041)', diff saved to https://phabricator.wikimedia.org/P35152 and previous config saved to /var/cache/conftool/dbconfig/20220929-100828-ladsgroup.json [production]
10:07 <XioNoX> second (and longest) cr2-eqiad RE switchover - T295690 [production]
09:53 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2173', diff saved to https://phabricator.wikimedia.org/P35150 and previous config saved to /var/cache/conftool/dbconfig/20220929-095321-ladsgroup.json [production]
09:45 <moritzm> restarting superset to pick up expat security update [production]
09:43 <kharlan@deploy1002> helmfile [codfw] DONE helmfile.d/services/linkrecommendation: apply [production]
09:42 <XioNoX> first cr2-eqiad RE switchover - T295690 [production]
09:41 <kharlan@deploy1002> helmfile [codfw] START helmfile.d/services/linkrecommendation: apply [production]
09:38 <kharlan@deploy1002> helmfile [eqiad] DONE helmfile.d/services/linkrecommendation: apply [production]
09:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2173', diff saved to https://phabricator.wikimedia.org/P35149 and previous config saved to /var/cache/conftool/dbconfig/20220929-093815-ladsgroup.json [production]
09:36 <kharlan@deploy1002> helmfile [eqiad] START helmfile.d/services/linkrecommendation: apply [production]
09:34 <kharlan@deploy1002> helmfile [staging] DONE helmfile.d/services/linkrecommendation: apply [production]
09:33 <kharlan@deploy1002> helmfile [staging] START helmfile.d/services/linkrecommendation: apply [production]
09:33 <XioNoX> drain cr2-eqiad - T295690 [production]
09:29 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
09:29 <ayounsi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on cr2-eqiad,cr2-eqiad IPv6,re0.cr2-eqiad.mgmt with reason: router upgrade [production]
09:28 <ayounsi@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on cr2-eqiad,cr2-eqiad IPv6,re0.cr2-eqiad.mgmt with reason: router upgrade [production]
09:26 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
09:26 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
09:26 <jynus@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2098.codfw.wmnet with OS bullseye [production]
09:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2173 (T314041)', diff saved to https://phabricator.wikimedia.org/P35148 and previous config saved to /var/cache/conftool/dbconfig/20220929-092308-ladsgroup.json [production]
09:21 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
09:16 <XioNoX> repool cr1-eqiad - T295690 [production]
09:11 <jnuche@deploy1002> rebuilt and synchronized wikiversions files: Revert "group1 wikis to 1.40.0-wmf.3" [production]
09:07 <jynus@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2098.codfw.wmnet with reason: host reimage [production]
09:04 <jynus@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2098.codfw.wmnet with reason: host reimage [production]
08:52 <jynus@cumin2002> START - Cookbook sre.hosts.reimage for host db2098.codfw.wmnet with OS bullseye [production]
08:43 <XioNoX> second cr1-eqiad RE switchover - T295690 [production]
08:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1177 (re)pooling @ 100%: After upgrade', diff saved to https://phabricator.wikimedia.org/P35146 and previous config saved to /var/cache/conftool/dbconfig/20220929-082757-root.json [production]
08:26 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
08:26 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
08:26 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
08:26 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
08:22 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
08:21 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
08:15 <XioNoX> first cr1-eqiad RE switchover (for NVM firmware) - T295690 [production]
08:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1177 (re)pooling @ 75%: After upgrade', diff saved to https://phabricator.wikimedia.org/P35145 and previous config saved to /var/cache/conftool/dbconfig/20220929-081252-root.json [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db2121 (re)pooling @ 100%: After upgrade', diff saved to https://phabricator.wikimedia.org/P35144 and previous config saved to /var/cache/conftool/dbconfig/20220929-080340-root.json [production]
07:57 <XioNoX> drain traffic away from cr1-eqiad - T295690 [production]
07:57 <marostegui@cumin1001> dbctl commit (dc=all): 'db1177 (re)pooling @ 50%: After upgrade', diff saved to https://phabricator.wikimedia.org/P35143 and previous config saved to /var/cache/conftool/dbconfig/20220929-075747-root.json [production]
07:49 <ayounsi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on cr1-eqiad,cr1-eqiad IPv6,re0.cr1-eqiad.mgmt with reason: router upgrade [production]