2023-05-10
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1212 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48075 and previous config saved to /var/cache/conftool/dbconfig/20230510-071449-root.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 25%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48074 and previous config saved to /var/cache/conftool/dbconfig/20230510-071443-root.json [production]
07:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P48073 and previous config saved to /var/cache/conftool/dbconfig/20230510-070824-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1212 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48072 and previous config saved to /var/cache/conftool/dbconfig/20230510-065944-root.json [production]
06:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 10%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48071 and previous config saved to /var/cache/conftool/dbconfig/20230510-065938-root.json [production]
06:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 25%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P48070 and previous config saved to /var/cache/conftool/dbconfig/20230510-065319-root.json [production]
06:52 <jmm@cumin2002> START - Cookbook sre.ganeti.reimage for host testvm2005.codfw.wmnet with OS bookworm [production]
06:44 <marostegui@cumin1001> dbctl commit (dc=all): 'db1212 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48069 and previous config saved to /var/cache/conftool/dbconfig/20230510-064439-root.json [production]
06:44 <marostegui@cumin1001> dbctl commit (dc=all): 'db1112 (re)pooling @ 5%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48068 and previous config saved to /var/cache/conftool/dbconfig/20230510-064433-root.json [production]
06:44 <marostegui> dbmaint eqiad failover s3 sanitarium master T336252 [production]
06:41 <marostegui@cumin2002> dbctl commit (dc=all): 'Depool db1112 db1212 T336252', diff saved to https://phabricator.wikimedia.org/P48067 and previous config saved to /var/cache/conftool/dbconfig/20230510-064119-marostegui.json [production]
06:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P48066 and previous config saved to /var/cache/conftool/dbconfig/20230510-063814-root.json [production]
06:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 5%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P48065 and previous config saved to /var/cache/conftool/dbconfig/20230510-062309-root.json [production]
06:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 3%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P48064 and previous config saved to /var/cache/conftool/dbconfig/20230510-060805-root.json [production]
06:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2180', diff saved to https://phabricator.wikimedia.org/P48063 and previous config saved to /var/cache/conftool/dbconfig/20230510-060656-root.json [production]
05:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db2180 (re)pooling @ 1%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P48062 and previous config saved to /var/cache/conftool/dbconfig/20230510-055929-root.json [production]
05:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db2151 (re)pooling @ 1%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P48061 and previous config saved to /var/cache/conftool/dbconfig/20230510-055300-root.json [production]
05:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2151', diff saved to https://phabricator.wikimedia.org/P48060 and previous config saved to /var/cache/conftool/dbconfig/20230510-054833-root.json [production]
05:42 <kart_> Updated MinT to 2023-05-10-045734-production (T331505) [production]
05:42 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
05:37 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/machinetranslation: apply [production]
05:35 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/machinetranslation: apply [production]
05:32 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/machinetranslation: apply [production]
05:28 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/machinetranslation: apply [production]
05:26 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/machinetranslation: apply [production]
04:10 <mutante> gerrit1001 - rsyncing data over to gerrit1003, as root in a screen, but slowly with bwlimit 5m [production]
01:34 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudcontrol2001-dev.codfw.wmnet with OS bullseye [production]
01:34 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
2023-05-09
23:43 <brett@cumin2002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_eqiad [production]
23:25 <brett@cumin2002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-upload_eqiad [production]
23:22 <brett@cumin2002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-text_eqiad [production]
23:02 <brett@cumin2002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-text_eqiad [production]
23:00 <brett@cumin2002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_codfw [production]
22:46 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - pt1979@cumin2002" [production]
22:42 <brett@cumin2002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-upload_codfw [production]
22:38 <brett@cumin2002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-text_codfw [production]
22:33 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2131 (T335845)', diff saved to https://phabricator.wikimedia.org/P48058 and previous config saved to /var/cache/conftool/dbconfig/20230509-223346-ladsgroup.json [production]
22:32 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudcontrol2001-dev.codfw.wmnet with reason: host reimage [production]
22:28 <pt1979@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudcontrol2001-dev.codfw.wmnet with reason: host reimage [production]
22:23 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host cloudcontrol2001-dev.codfw.wmnet with OS bullseye [production]
22:18 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2131', diff saved to https://phabricator.wikimedia.org/P48057 and previous config saved to /var/cache/conftool/dbconfig/20230509-221840-ladsgroup.json [production]
22:18 <brett@cumin2002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on A:cp-text_codfw [production]
22:06 <inflatador> bking@wcqs1002 depool wcqs1002 while it catches up on lag [production]
22:03 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2131', diff saved to https://phabricator.wikimedia.org/P48056 and previous config saved to /var/cache/conftool/dbconfig/20230509-220333-ladsgroup.json [production]
21:48 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2131 (T335845)', diff saved to https://phabricator.wikimedia.org/P48055 and previous config saved to /var/cache/conftool/dbconfig/20230509-214827-ladsgroup.json [production]
21:45 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudcontrol2001-dev.codfw.wmnet with OS bullseye [production]
21:42 <brett@cumin2002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-upload_esams [production]
21:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2131 (T335845)', diff saved to https://phabricator.wikimedia.org/P48054 and previous config saved to /var/cache/conftool/dbconfig/20230509-213834-ladsgroup.json [production]
21:38 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2131.codfw.wmnet with reason: Maintenance [production]
21:38 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2131.codfw.wmnet with reason: Maintenance [production]