2022-01-12
ยง
|
15:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T297191)', diff saved to https://phabricator.wikimedia.org/P18669 and previous config saved to /var/cache/conftool/dbconfig/20220112-152121-marostegui.json [production]
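The dbctl commits in this log are the final step of depool/repool cycles driven from a cumin host. A minimal sketch of that sequence, assuming standard dbctl usage and an illustrative pooling percentage:

    dbctl instance db1163 depool
    dbctl config commit -m 'Depooling db1163 (T297191)'
    # ... maintenance on db1163 ...
    dbctl instance db1163 pool -p 100
    dbctl config commit -m 'Repooling after maintenance db1163 (T297191)'

Each commit saves the diff to a Phabricator paste and the previous config under /var/cache/conftool/dbconfig/, as recorded in the entries above and below.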
15:14 <elukey> stop kafka* on kafka-main1001 to allow dcops maintenance (nic/bios upgrades) - T298867 [production]
15:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P18668 and previous config saved to /var/cache/conftool/dbconfig/20220112-150616-marostegui.json [production]
14:59 <moritzm> switch kubestagetcd1005 to DRBD (needed to be able to shuffle instances around for the Ganeti buster update) [production]
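"Switch to DRBD" here means changing the Ganeti instance's disk template from plain to drbd so the instance can be moved between nodes during the buster update. A sketch using stock Ganeti tooling, assuming the instance is stopped for the conversion and <secondary-node> is a placeholder for the chosen secondary:

    gnt-instance shutdown kubestagetcd1005.eqiad.wmnet
    gnt-instance modify -t drbd -n <secondary-node> kubestagetcd1005.eqiad.wmnet
    gnt-instance start kubestagetcd1005.eqiad.wmnet

The sre.hosts.downtime cookbook runs just below silence alerting for the host while the conversion happens.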
14:59 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd1005.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:59 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd1005.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:56 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: sync on main [production]
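The helmfile START/DONE pairs come from the Kubernetes service deployment workflow on the deploy host: each service has a helmfile.d/services/<name> directory and is applied per environment. A rough sketch of what produces an "[eqiad] ... apply"/"sync" pair, assuming the usual /srv/deployment-charts checkout on deploy1002:

    cd /srv/deployment-charts/helmfile.d/services/shellbox-media
    helmfile -e eqiad -i apply    # show the diff, confirm, then sync
    # or, non-interactively:
    helmfile -e eqiad sync

The mwdebug/pinkunicorn entries further down follow the same pattern for the mwdebug release.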
14:55 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
14:54 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: apply on main [production]
14:54 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
14:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P18667 and previous config saved to /var/cache/conftool/dbconfig/20220112-145111-marostegui.json [production]
14:42 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: sync on main [production]
14:42 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
14:40 <jelto> remove helm2 from deployment_server T251305 https://gerrit.wikimedia.org/r/c/operations/puppet/+/753026 [production]
14:37 <jelto@deploy1002> helmfile [staging] DONE helmfile.d/services/blubberoid: sync on staging [production]
14:37 <jelto@deploy1002> helmfile [staging] DONE helmfile.d/services/blubberoid: apply on production [production]
14:37 <jelto@deploy1002> helmfile [staging] START helmfile.d/services/blubberoid: apply on staging [production]
14:36 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM netflow1002.eqiad.wmnet [production]
14:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T297191)', diff saved to https://phabricator.wikimedia.org/P18666 and previous config saved to /var/cache/conftool/dbconfig/20220112-143606-marostegui.json [production]
14:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1163 (T297191)', diff saved to https://phabricator.wikimedia.org/P18665 and previous config saved to /var/cache/conftool/dbconfig/20220112-143258-marostegui.json [production]
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
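The START/END pairs for sre.hosts.downtime are Spicerack cookbook runs that set a monitoring downtime on the host before maintenance. A sketch of the kind of invocation behind the db1163/db1140/db1139 entries; the exact option names are assumed, so treat the flags as illustrative:

    sudo cookbook sre.hosts.downtime --hours 6 -r "Maintenance" 'db1163*'

The cookbook logs START when it begins and END (PASS, exit_code=0) once the downtime is in place.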
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1128 (T297191)', diff saved to https://phabricator.wikimedia.org/P18664 and previous config saved to /var/cache/conftool/dbconfig/20220112-143241-marostegui.json [production]
14:30 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM netflow1002.eqiad.wmnet [production]
14:30 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:26 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:26 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:23 <moritzm> switch kubestagetcd1004 to DRBD (needed to be able to shuffle instances around for the Ganeti buster update) [production]
14:22 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd1004.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:22 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd1004.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:20 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:19 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:19 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1128', diff saved to https://phabricator.wikimedia.org/P18663 and previous config saved to /var/cache/conftool/dbconfig/20220112-141736-marostegui.json [production]
14:17 <ladsgroup@deploy1002> Synchronized wmf-config: Config: [[gerrit:702421|Merge db-codfw.php and db-eqiad.php into db-production.php (T260297)]], Part III (duration: 01m 07s) [production]
14:15 <ladsgroup@deploy1002> Synchronized wmf-config/CommonSettings.php: Config: [[gerrit:702421|Merge db-codfw.php and db-eqiad.php into db-production.php (T260297)]], Part II (duration: 01m 08s) [production]
14:15 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM webperf1002.eqiad.wmnet [production]
14:14 <ladsgroup@deploy1002> Synchronized wmf-config/db-production.php: Config: [[gerrit:702421|Merge db-codfw.php and db-eqiad.php into db-production.php (T260297)]], Part I (duration: 01m 07s) [production]
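The "Synchronized wmf-config/..." lines are scap syncs of MediaWiki configuration from the deploy host; the log text is the summary scap records, including the duration. A sketch of the kind of command behind Part I, assuming standard scap sync-file usage:

    scap sync-file wmf-config/db-production.php 'Config: [[gerrit:702421|Merge db-codfw.php and db-eqiad.php into db-production.php (T260297)]], Part I'

Parts II and III then sync CommonSettings.php and the rest of wmf-config, presumably so the new db-production.php is already in place before anything references it.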
14:13 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:09 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM webperf1002.eqiad.wmnet [production]
14:07 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM webperf1001.eqiad.wmnet [production]
14:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1128', diff saved to https://phabricator.wikimedia.org/P18662 and previous config saved to /var/cache/conftool/dbconfig/20220112-140232-marostegui.json [production]
14:02 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM webperf1001.eqiad.wmnet [production]
13:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Give more traffic to db1128 in s1 T295965', diff saved to https://phabricator.wikimedia.org/P18661 and previous config saved to /var/cache/conftool/dbconfig/20220112-135858-marostegui.json [production]
13:53 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]