2022-11-29
ยง
|
16:18 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
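(The mw-debug helmfile entries in this log are the deployment tooling's automatic logging. A rough manual equivalent from the deployment host, assuming the usual deployment-charts checkout path, would be:
    cd /srv/deployment-charts/helmfile.d/services/mw-debug   # path assumed
    helmfile -e codfw apply                                  # logged as "helmfile [codfw] ... apply"
)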
16:18 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5027 [production]
16:18 <robh@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cp5026 [production]
16:18 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5026 [production]
16:18 <robh@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cp5025 [production]
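(The sre.network.configure-switch-interfaces entries for cp5021-cp5027 are spicerack cookbook runs from a cumin host, one per eqsin cache node. A hedged sketch, assuming the cookbook takes the host name as its argument:
    sudo cookbook sre.network.configure-switch-interfaces cp5027   # repeated per host
)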
16:18 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
16:18 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
16:18 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5025 [production]
16:18 <robh@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cp5024 [production]
16:18 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5024 [production]
16:18 <robh@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cp5023 [production]
16:18 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5023 [production]
16:18 <robh@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cp5022 [production]
16:17 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5022 [production]
16:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 100%: Maint done', diff saved to https://phabricator.wikimedia.org/P41752 and previous config saved to /var/cache/conftool/dbconfig/20221129-161604-ladsgroup.json [production]
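(The 'db1123 (re)pooling @ 10/25/75/100%: Maint done' commits are a staged repool after maintenance. A minimal sketch of the underlying workflow, assuming the documented dbctl CLI:
    dbctl instance db1123 pool -p 100                                 # raise the pooled percentage
    dbctl config commit -m 'db1123 (re)pooling @ 100%: Maint done'    # saves the diff as a Phabricator paste
)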
16:17 <robh@cumin2002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cp5021 [production]
16:17 <robh@cumin2002> START - Cookbook sre.network.configure-switch-interfaces for host cp5021 [production]
16:16 <robh@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:14 <robh@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: eqsin hosts - robh@cumin2002" [production]
16:14 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
16:14 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 28s) [production]
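(The 'Synchronized wmf-config/reverse-proxy.php' entries are scap logging a config file sync from the deployment host. A sketch, assuming scap's sync-file subcommand:
    scap sync-file wmf-config/reverse-proxy.php 'test deployment'
)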
16:13 <robh@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: eqsin hosts - robh@cumin2002" [production]
16:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184', diff saved to https://phabricator.wikimedia.org/P41751 and previous config saved to /var/cache/conftool/dbconfig/20221129-161329-marostegui.json [production]
16:12 <oblivian@cumin1001> conftool action : set/pooled=yes; selector: dc=eqiad,name=mw14(89|9).* [production]
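(The 'conftool action' lines record pooling and depooling of the matching eqiad appservers via confctl; a hedged equivalent of the command that produced the entry above:
    sudo confctl select 'dc=eqiad,name=mw14(89|9).*' set/pooled=yes
)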
16:11 <robh@cumin2002> START - Cookbook sre.dns.netbox [production]
16:09 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 35s) [production]
16:09 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159', diff saved to https://phabricator.wikimedia.org/P41750 and previous config saved to /var/cache/conftool/dbconfig/20221129-160907-ladsgroup.json [production]
16:08 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
16:07 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
16:06 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
16:04 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 36s) [production]
16:03 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
16:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 75%: Maint done', diff saved to https://phabricator.wikimedia.org/P41749 and previous config saved to /var/cache/conftool/dbconfig/20221129-160059-ladsgroup.json [production]
15:58 <oblivian@cumin1001> conftool action : set/pooled=no; selector: dc=eqiad,name=mw14(89|9).* [production]
15:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184', diff saved to https://phabricator.wikimedia.org/P41748 and previous config saved to /var/cache/conftool/dbconfig/20221129-155822-marostegui.json [production]
15:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159 (T323907)', diff saved to https://phabricator.wikimedia.org/P41747 and previous config saved to /var/cache/conftool/dbconfig/20221129-155401-ladsgroup.json [production]
15:47 <pt1979@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['db1204'] [production]
15:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 25%: Maint done', diff saved to https://phabricator.wikimedia.org/P41746 and previous config saved to /var/cache/conftool/dbconfig/20221129-154554-ladsgroup.json [production]
15:45 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db1204'] [production]
15:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184 (T321126)', diff saved to https://phabricator.wikimedia.org/P41745 and previous config saved to /var/cache/conftool/dbconfig/20221129-154316-marostegui.json [production]
15:42 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['db1204'] [production]
15:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1184 (T321126)', diff saved to https://phabricator.wikimedia.org/P41744 and previous config saved to /var/cache/conftool/dbconfig/20221129-154055-marostegui.json [production]
15:40 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1184.eqiad.wmnet with reason: Maintenance [production]
15:40 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1184.eqiad.wmnet with reason: Maintenance [production]
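(The db1184 maintenance window above follows the usual pattern: set Icinga downtime with the sre.hosts.downtime cookbook, then depool the replica with dbctl. A sketch, with the exact cookbook flags assumed rather than confirmed:
    sudo cookbook sre.hosts.downtime --hours 5 -r 'Maintenance' 'db1184.eqiad.wmnet'
    dbctl instance db1184 depool
    dbctl config commit -m 'Depooling db1184 (T321126)'
)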
15:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169 (T321126)', diff saved to https://phabricator.wikimedia.org/P41743 and previous config saved to /var/cache/conftool/dbconfig/20221129-154033-marostegui.json [production]
15:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 10%: Maint done', diff saved to https://phabricator.wikimedia.org/P41742 and previous config saved to /var/cache/conftool/dbconfig/20221129-153049-ladsgroup.json [production]
15:25 <Emperor> set thanos ring replicas to 3.0 T311690 [production]
15:25 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169', diff saved to https://phabricator.wikimedia.org/P41741 and previous config saved to /var/cache/conftool/dbconfig/20221129-152526-marostegui.json [production]
15:20 <pt1979@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['db1205'] [production]
15:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2159 (T323907)', diff saved to https://phabricator.wikimedia.org/P41740 and previous config saved to /var/cache/conftool/dbconfig/20221129-151647-ladsgroup.json [production]