2022-01-12
ยง
|
17:08 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
17:08 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
17:07 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
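The mwdebug entries above are emitted automatically around each helmfile deploy of the mwdebug service ("pinkunicorn" is the release name). Driven by hand from the deployment host it would look roughly like the sketch below; the repository path is an assumption, and note that helmfile's apply is a diff followed by a sync, which is why a "START ... apply" line is paired with a "DONE ... sync" line.

    # a minimal sketch, assuming the deployment-charts checkout lives at this path
    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e eqiad diff     # review pending changes for the eqiad environment
    helmfile -e eqiad apply    # diff + sync; the tooling logs START/DONE around it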
17:06 <akosiaris@deploy1002> Synchronized wmf-config/ProductionServices.php: (no justification provided) (duration: 01m 21s) [production]
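The "Synchronized wmf-config/..." lines come from scap syncing a single configuration file out from deploy1002; "(no justification provided)" means the sync message was left empty. A minimal sketch, assuming scap's sync-file subcommand run from the MediaWiki staging directory:

    scap sync-file wmf-config/ProductionServices.php 'reason for the change (shows up in this log)'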
17:00 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host db1169.eqiad.wmnet with OS bullseye [production]
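The reimage entries are produced by the spicerack cookbook runner on the cumin hosts. The invocation is along the lines below; the exact flags and host-argument form are assumptions, so check the cookbook's --help before copying:

    sudo cookbook sre.hosts.reimage --os bullseye db1169    # hypothetical flag/argument form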
16:58 <akosiaris@cumin1001> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM poolcounter1005.eqiad.wmnet [production]
16:55 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3311 (T297191)', diff saved to https://phabricator.wikimedia.org/P18681 and previous config saved to /var/cache/conftool/dbconfig/20220112-165542-marostegui.json [production]
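The dbctl lines (read bottom-up; the log is reverse-chronological) record the standard depool → maintain → repool cycle for a database replica, each step committed so that the diff and the previous config are archived as shown. A sketch assuming the documented dbctl subcommands; repooling is often done in percentage steps rather than all at once:

    dbctl instance db1099:3311 depool                          # take the instance out of rotation
    dbctl config commit -m 'Depooling db1099:3311 (T297191)'   # push the change; the diff is saved for the record
    # ... run the maintenance ...
    dbctl instance db1099:3311 pool                            # bring it back
    dbctl config commit -m 'Repooling after maintenance db1099:3311 (T297191)'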
16:54 <btullis@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
16:54 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1099:3311 (T297191)', diff saved to https://phabricator.wikimedia.org/P18680 and previous config saved to /var/cache/conftool/dbconfig/20220112-165434-marostegui.json [production]
16:54 <akosiaris@cumin1001> START - Cookbook sre.ganeti.reboot-vm for VM poolcounter1005.eqiad.wmnet [production]
16:54 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1099.eqiad.wmnet with reason: Maintenance [production]
16:54 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1099.eqiad.wmnet with reason: Maintenance [production]
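The paired START/END downtime entries wrap each maintenance window in Icinga downtime via a cookbook; roughly as below, with the duration and reason flags being assumptions:

    sudo cookbook sre.hosts.downtime --hours 6 -r "Maintenance" 'db1099.eqiad.wmnet'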
16:53 <hnowlan> Decommissioning cassandra instance restbase2009-c via nodetool [production]
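Decommissioning a Cassandra instance streams its token ranges to the remaining replicas before it leaves the ring. With stock nodetool (multi-instance hosts like restbase2009 go through their per-instance wrapper) that is approximately:

    nodetool decommission    # run against the restbase2009-c instance; streams its data to peers
    nodetool netstats        # watch streaming progress
    nodetool status          # the instance should eventually drop out of the ring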
16:48 <btullis@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
16:47 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
16:46 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
16:46 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
16:46 <akosiaris@deploy1002> Synchronized wmf-config/ProductionServices.php: (no justification provided) (duration: 01m 21s) [production]
16:45 <elukey> elukey@prometheus2004:~$ sudo apt-get remove linux-image-4.9.0-8-amd64 linux-image-4.9.0-9-amd64 linux-image-4.9.0-11-amd64 linux-image-4.9.0-12-amd64 linux-image-4.9.0-13-amd64 [production]
16:45 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
16:44 <elukey> elukey@prometheus2003:~$ sudo apt-get remove linux-image-4.9.0-8-amd64 linux-image-4.9.0-9-amd64 linux-image-4.9.0-11-amd64 linux-image-4.9.0-12-amd64 linux-image-4.9.0-13-amd64 [production]
16:40 <elukey> elukey@prometheus1004:~$ sudo apt-get remove linux-image-4.9.0-8-amd64 linux-image-4.9.0-9-amd64 linux-image-4.9.0-11-amd64 linux-image-4.9.0-12-amd64 linux-image-4.9.0-13-amd64 [production]
16:39 <elukey> elukey@prometheus1003:~$ sudo apt-get remove linux-image-4.9.0-11-amd64 linux-image-4.9.0-12-amd64 linux-image-4.9.0-13-amd64 linux-image-4.9.0-8-amd64 linux-image-4.9.0-9-amd64 [production]
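The prometheus entries above are routine cleanup of stale 4.9 kernel image packages. The usual pattern is to confirm what is installed and what is running before removing anything:

    uname -r                    # make sure the running kernel is not in the removal list
    dpkg -l 'linux-image-*'     # list installed kernel image packages
    sudo apt-get remove linux-image-4.9.0-8-amd64 ...   # then remove the stale ones, as logged above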
16:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1135', diff saved to https://phabricator.wikimedia.org/P18678 and previous config saved to /var/cache/conftool/dbconfig/20220112-163919-marostegui.json [production]
16:39 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM mx1001.wikimedia.org [production]
16:36 <akosiaris@cumin1001> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM poolcounter1004.eqiad.wmnet [production]
16:35 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM mx1001.wikimedia.org [production]
16:35 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
16:31 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
16:31 <akosiaris@cumin1001> START - Cookbook sre.ganeti.reboot-vm for VM poolcounter1004.eqiad.wmnet [production]
16:31 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
16:27 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
16:25 <akosiaris@deploy1002> Synchronized wmf-config/ProductionServices.php: (no justification provided) (duration: 01m 16s) [production]
16:25 <elukey> stop kafka* on kafka-main1003 to allow dcops maintenance (nic/bios upgrades) - T298867 [production]
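Stopping "kafka*" ahead of the NIC/BIOS work maps to stopping the matching systemd units on the broker; the exact unit names are an assumption, the glob is what the log suggests:

    sudo systemctl stop 'kafka*'      # systemctl accepts glob patterns for loaded units
    systemctl list-units 'kafka*'     # verify nothing matching is still running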
16:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1135', diff saved to https://phabricator.wikimedia.org/P18677 and previous config saved to /var/cache/conftool/dbconfig/20220112-162414-marostegui.json [production]
16:20 <moritzm> switch kubestagetcd1006 to DRBD (needed to be able to shuffle instances around for the Ganeti buster update) [production]
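Converting a Ganeti instance from plain to DRBD storage (so it can be moved between nodes during the buster upgrade) is done with gnt-instance modify while the instance is stopped. A sketch, with the secondary node left as a placeholder:

    gnt-instance shutdown kubestagetcd1006.eqiad.wmnet
    gnt-instance modify -t drbd -n <secondary-node> kubestagetcd1006.eqiad.wmnet   # -n selects the DRBD secondary
    gnt-instance startup kubestagetcd1006.eqiad.wmnet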
16:19 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd1006.eqiad.wmnet with reason: switch to DRBD disk storage [production]
16:19 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd1006.eqiad.wmnet with reason: switch to DRBD disk storage [production]
16:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1135 (T297191)', diff saved to https://phabricator.wikimedia.org/P18676 and previous config saved to /var/cache/conftool/dbconfig/20220112-160910-marostegui.json [production]
16:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1135 (T297191)', diff saved to https://phabricator.wikimedia.org/P18675 and previous config saved to /var/cache/conftool/dbconfig/20220112-160802-marostegui.json [production]
16:08 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1135.eqiad.wmnet with reason: Maintenance [production]
16:07 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1135.eqiad.wmnet with reason: Maintenance [production]
16:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134 (T297191)', diff saved to https://phabricator.wikimedia.org/P18674 and previous config saved to /var/cache/conftool/dbconfig/20220112-160755-marostegui.json [production]
16:02 <elukey> stop kafka* on kafka-main1002 to allow dcops maintenance (nic/bios upgrades) - T298867 [production]
15:57 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: sync on main [production]
15:56 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
15:56 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host elastic2051.codfw.wmnet with OS stretch [production]
15:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134', diff saved to https://phabricator.wikimedia.org/P18673 and previous config saved to /var/cache/conftool/dbconfig/20220112-155250-marostegui.json [production]
15:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134', diff saved to https://phabricator.wikimedia.org/P18672 and previous config saved to /var/cache/conftool/dbconfig/20220112-153745-marostegui.json [production]
15:23 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2051.codfw.wmnet with OS stretch [production]