2024-09-10
08:59 <jayme@deploy1003> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
08:59 <jayme@deploy1003> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
08:59 <jayme@deploy1003> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
08:58 <jayme@deploy1003> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
08:58 <arnaudb@cumin1002> dbctl commit (dc=all): 'Set db2205 with weight 0 T374421', diff saved to https://phabricator.wikimedia.org/P68761 and previous config saved to /var/cache/conftool/dbconfig/20240910-085854-arnaudb.json [production]
08:58 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 25 hosts with reason: Primary switchover s3 T374421 [production]
08:58 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on 25 hosts with reason: Primary switchover s3 T374421 [production]
08:51 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub: sync on production [production]
08:47 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub: apply on production [production]
08:46 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub: sync on production [production]
08:46 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub: apply on production [production]
08:45 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub-next: sync on staging [production]
08:44 <jayme@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-main-codfw [production]
08:41 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub-next: apply on staging [production]
08:39 <moritzm> installing Java security updates on puppetservers [production]
08:23 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on kafka-main[2002,2007].codfw.wmnet with reason: Hardware refresh [production]
08:23 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on kafka-main[2002,2007].codfw.wmnet with reason: Hardware refresh [production]
08:13 <dcausse@deploy1003> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
08:12 <dcausse@deploy1003> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
08:09 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on db1246.eqiad.wmnet with reason: https://phabricator.wikimedia.org/T374215 → server depooled has hardware issues [production]
08:09 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on db1246.eqiad.wmnet with reason: https://phabricator.wikimedia.org/T374215 → server depooled has hardware issues [production]
08:08 <dcausse@deploy1003> helmfile [codfw] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
08:07 <dcausse@deploy1003> helmfile [codfw] START helmfile.d/services/cirrus-streaming-updater: apply [production]
07:55 <jayme> evacuating leadership for all partitions assigned to broker id 2002 on kafka-main-codfw - T363210 [production]
07:20 <dcausse@deploy1003> Finished scap sync-world: Backport for [[gerrit:1060433|search: use the stem field when searching mul labels (T371401)]] (duration: 17m 22s) [production]
07:15 <dcausse@deploy1003> dcausse: Continuing with sync [production]
07:10 <dcausse@deploy1003> dcausse: Backport for [[gerrit:1060433|search: use the stem field when searching mul labels (T371401)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
07:03 <dcausse@deploy1003> Started scap sync-world: Backport for [[gerrit:1060433|search: use the stem field when searching mul labels (T371401)]] [production]
06:57 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host pc2017.codfw.wmnet with OS bookworm [production]
06:37 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on pc2017.codfw.wmnet with reason: host reimage [production]
06:34 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on pc1017.eqiad.wmnet with reason: host reimage [production]
06:31 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on pc1017.eqiad.wmnet with reason: host reimage [production]
06:18 <arnaudb@cumin1002> START - Cookbook sre.hosts.reimage for host pc2017.codfw.wmnet with OS bookworm [production]
06:16 <arnaudb@cumin1002> START - Cookbook sre.hosts.reimage for host pc1017.eqiad.wmnet with OS bookworm [production]
06:11 <kart_> Updated cxserver to 2024-08-28-053620-production [production]
06:11 <kartik@deploy1003> helmfile [eqiad] DONE helmfile.d/services/cxserver: apply [production]
06:10 <kartik@deploy1003> helmfile [eqiad] START helmfile.d/services/cxserver: apply [production]
05:47 <kartik@deploy1003> helmfile [codfw] DONE helmfile.d/services/cxserver: apply [production]
05:46 <kartik@deploy1003> helmfile [codfw] START helmfile.d/services/cxserver: apply [production]
05:37 <kartik@deploy1003> helmfile [staging] DONE helmfile.d/services/cxserver: apply [production]
05:36 <kartik@deploy1003> helmfile [staging] START helmfile.d/services/cxserver: apply [production]
04:01 <mwpresync@deploy1003> Pruned MediaWiki: 1.43.0-wmf.19 (duration: 00m 58s) [production]
03:47 <mwpresync@deploy1003> Finished scap sync-world: testwikis to 1.43.0-wmf.22 refs T373641 (duration: 45m 06s) [production]
03:02 <mwpresync@deploy1003> Started scap sync-world: testwikis to 1.43.0-wmf.22 refs T373641 [production]