2023-08-30
ยง
|
13:06 <samtar@deploy1002> Started scap: Backport for [[gerrit:951042|IS: Enable Phonos on all projects (T336763)]] [production]
13:04 <samtar@deploy1002> backport Cancelled [production]
13:03 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host kubernetes2034.mgmt.codfw.wmnet with reboot policy FORCED [production]
13:02 <cmooney@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
13:02 <cmooney@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
13:02 <ayounsi@cumin1001> END (FAIL) - Cookbook sre.network.tls (exit_code=99) for network device asw2-22-ulsfo [production]
13:02 <ayounsi@cumin1001> START - Cookbook sre.network.tls for network device asw2-22-ulsfo [production]
13:01 <cmooney@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
13:00 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host kubernetes2034.mgmt.codfw.wmnet with reboot policy FORCED [production]
12:59 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1034.eqiad.wmnet [production]
12:58 <cmooney@cumin1001> START - Cookbook sre.dns.netbox [production]
12:58 <cmooney@cumin1001> START - Cookbook sre.network.provision for device ssw1-a1-codfw.mgmt.codfw.wmnet [production]
12:58 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1034.eqiad.wmnet [production]
12:57 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/linkrecommendation: apply [production]
12:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2170:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52087 and previous config saved to /var/cache/conftool/dbconfig/20230830-125650-ladsgroup.json [production]
12:56 <elukey> restart kubelet on ml-serve1001 to clear prometheus metrics [production]
12:55 <taavi@deploy1002> Finished scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] (duration: 11m 28s) [production]
12:54 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
12:54 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
12:54 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1034.eqiad.wmnet [production]
12:53 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
12:53 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
12:53 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
12:52 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
12:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1188 (T343718)', diff saved to https://phabricator.wikimedia.org/P52086 and previous config saved to /var/cache/conftool/dbconfig/20230830-125206-ladsgroup.json [production]
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1188 (T343718)', diff saved to https://phabricator.wikimedia.org/P52085 and previous config saved to /var/cache/conftool/dbconfig/20230830-124954-ladsgroup.json [production]
12:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
12:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1188.eqiad.wmnet with reason: Maintenance [production]
12:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52084 and previous config saved to /var/cache/conftool/dbconfig/20230830-124933-ladsgroup.json [production]
12:47 <taavi@deploy1002> sukhe and taavi: Continuing with sync [production]
12:46 <taavi@deploy1002> sukhe and taavi: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] synced to the testservers mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:46 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/linkrecommendation: apply [production]
12:45 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1009.eqiad.wmnet [production]
12:43 <taavi@deploy1002> Started scap: Backport for [[gerrit:951591|wmf-config: remove public subnets from reverse-proxy.php (T344704 T329219)]] [production]
12:43 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] (duration: 25m 53s) [production]
12:38 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1009.eqiad.wmnet [production]
12:37 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1008.eqiad.wmnet [production]
12:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P52083 and previous config saved to /var/cache/conftool/dbconfig/20230830-123427-ladsgroup.json [production]
12:33 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1034.eqiad.wmnet [production]
12:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2170:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52082 and previous config saved to /var/cache/conftool/dbconfig/20230830-123001-ladsgroup.json [production]
12:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2170.codfw.wmnet with reason: Maintenance [production]
12:29 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1008.eqiad.wmnet [production]
12:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2170.codfw.wmnet with reason: Maintenance [production]
12:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148 (T343718)', diff saved to https://phabricator.wikimedia.org/P52081 and previous config saved to /var/cache/conftool/dbconfig/20230830-122940-ladsgroup.json [production]
12:29 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
12:28 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
12:28 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
12:27 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
12:27 <elukey@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
12:26 <elukey@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]