2023-08-30
ยง
|
13:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1188', diff saved to https://phabricator.wikimedia.org/P52090 and previous config saved to /var/cache/conftool/dbconfig/20230830-132218-ladsgroup.json [production]
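(The dbctl entries in this log are the commit step of the conftool-based database pooling workflow: the operator edits pooling state, then commits it to etcd, which saves the diff to a Phabricator paste and the previous config under /var/cache/conftool/dbconfig/. A minimal sketch of the usual depool/repool sequence on a cumin host; the -p percentage flag and the step values are assumptions, not taken from this log:

  # take a replica out of rotation before maintenance, then commit
  dbctl instance db1188 depool
  dbctl config commit -m 'Depooling db1188 (T343718)'
  # after maintenance, repool in steps rather than all at once
  dbctl instance db1188 pool -p 25
  dbctl config commit -m 'Repooling after maintenance db1188'
)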
13:21 <elukey@cumin1001> START - Cookbook sre.kafka.reboot-workers for Kafka main-codfw cluster: Reboot kafka nodes [production]
13:21 <jiji@cumin1001> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:wikikube-worker-codfw [production]
13:20 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: apply security updates - bking@cumin1001 - T344587 [production]
13:18 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/linkrecommendation: apply [production]
13:15 <samtar@deploy1002> Finished scap: Backport for [[gerrit:951042|IS: Enable Phonos on all projects (T336763)]] (duration: 09m 29s) [production]
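(The scap lines in this log trace one full backport cycle: started, synced to the mwdebug testservers, operator confirmation ("Continuing with sync"), then the fleet-wide sync. On the deploy host this is a single command, sketched here with the change number from the log:

  # deploy Gerrit change 951042 as a backport; scap stages it on the
  # mwdebug testservers first and prompts before syncing everywhere
  scap backport 951042
)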
13:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2170:3312', diff saved to https://phabricator.wikimedia.org/P52089 and previous config saved to /var/cache/conftool/dbconfig/20230830-131157-ladsgroup.json [production]
13:10 <samtar@deploy1002> samtar: Continuing with sync [production]
13:09 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kubernetes2031'] [production]
13:09 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kubernetes2030'] [production]
13:09 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kubernetes2029'] [production]
13:09 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kubernetes2028'] [production]
13:09 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kubernetes2027'] [production]
13:09 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['kubernetes2026'] [production]
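(The six 13:09 entries are one sre.hardware.upgrade-firmware cookbook run per host, all started from cumin2002. A hedged sketch of one such invocation; the positional host argument is an assumption, since the cookbook's exact flags are not shown in the log:

  # run the firmware-upgrade cookbook against a single host
  sudo cookbook sre.hardware.upgrade-firmware kubernetes2026
)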
13:08 <samtar@deploy1002> samtar: Backport for [[gerrit:951042|IS: Enable Phonos on all projects (T336763)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
13:07 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1188', diff saved to https://phabricator.wikimedia.org/P52088 and previous config saved to /var/cache/conftool/dbconfig/20230830-130712-ladsgroup.json [production]
13:06 <samtar@deploy1002> Started scap: Backport for [[gerrit:951042|IS: Enable Phonos on all projects (T336763)]] [production]
13:04 <samtar@deploy1002> backport Cancelled [production]
13:03 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host kubernetes2034.mgmt.codfw.wmnet with reboot policy FORCED [production]
13:02 <cmooney@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
13:02 <cmooney@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
13:02 <ayounsi@cumin1001> END (FAIL) - Cookbook sre.network.tls (exit_code=99) for network device asw2-22-ulsfo [production]
13:02 <ayounsi@cumin1001> START - Cookbook sre.network.tls for network device asw2-22-ulsfo [production]
13:01 <cmooney@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
13:00 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host kubernetes2034.mgmt.codfw.wmnet with reboot policy FORCED [production]
12:59 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1034.eqiad.wmnet [production]
12:58 <cmooney@cumin1001> START - Cookbook sre.dns.netbox [production]
12:58 <cmooney@cumin1001> START - Cookbook sre.network.provision for device ssw1-a1-codfw.mgmt.codfw.wmnet [production]
12:58 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1034.eqiad.wmnet [production]
12:57 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/linkrecommendation: apply [production]
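(The helmfile START/DONE pairs in this log correspond to running helmfile against one environment from the service's directory in the deployment-charts checkout on the deploy host. A minimal sketch, assuming the conventional /srv/deployment-charts path; the -i flag asks for interactive confirmation of the diff:

  cd /srv/deployment-charts/helmfile.d/services/linkrecommendation
  # diff against the staging environment, confirm, then apply
  helmfile -e staging -i apply
)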
12:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2170:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52087 and previous config saved to /var/cache/conftool/dbconfig/20230830-125650-ladsgroup.json [production]
12:56 <elukey> restart kubelet on ml-serve1001 to clear prometheus metrics [production]
12:55 <taavi@deploy1002> Finished scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] (duration: 11m 28s) [production]
12:54 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
12:54 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
12:54 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1034.eqiad.wmnet [production]
12:53 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
12:53 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
12:53 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
12:52 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
12:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1188 (T343718)', diff saved to https://phabricator.wikimedia.org/P52086 and previous config saved to /var/cache/conftool/dbconfig/20230830-125206-ladsgroup.json [production]
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1188 (T343718)', diff saved to https://phabricator.wikimedia.org/P52085 and previous config saved to /var/cache/conftool/dbconfig/20230830-124954-ladsgroup.json [production]
12:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
12:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
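(Read bottom-up, the 12:49 entries show the standard preamble for database maintenance: downtime the host so alerting stays quiet, then depool it; the repool commits higher up in the log close the loop. A sketch of the downtime step, with flag names as assumptions matching the logged "1 day" duration and "Maintenance" reason:

  # silence alerting for db1188 for one day
  sudo cookbook sre.hosts.downtime --days 1 --reason 'Maintenance' 'db1188.eqiad.wmnet'
)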
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52084 and previous config saved to /var/cache/conftool/dbconfig/20230830-124933-ladsgroup.json [production]
12:47 <taavi@deploy1002> sukhe and taavi: Continuing with sync [production]
12:46 <taavi@deploy1002> sukhe and taavi: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:46 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/linkrecommendation: apply [production]
12:45 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1009.eqiad.wmnet [production]
12:43 <taavi@deploy1002> Started scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] [production]