2022-09-06
ยง
|
12:04 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2180.codfw.wmnet with reason: Maintenance [production]
12:04 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2180.codfw.wmnet with reason: Maintenance [production]
12:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2171:3316 (T314041)', diff saved to https://phabricator.wikimedia.org/P33947 and previous config saved to /var/cache/conftool/dbconfig/20220906-120412-ladsgroup.json [production]
12:03 <cgoubert@puppetmaster1001> conftool action : set/pooled=inactive; selector: dc=eqiad,cluster=parsoid,name=wtp1040.eqiad.wmnet [production]
12:03 <cgoubert@puppetmaster1001> conftool action : set/pooled=inactive; selector: dc=eqiad,cluster=parsoid,name=wtp1039.eqiad.wmnet [production]
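(The two "conftool action : set/pooled=inactive" entries above are the lines conftool logs when an object's pooled state is changed via confctl. A minimal sketch of the kind of invocation that produces such an entry, assuming the selector syntax shown in the log and confctl flags that may vary by installed version:

    # mark the parsoid backend inactive, addressing it by its conftool tags
    # (selector copied from the log entry above; verify flags against the local confctl)
    sudo confctl select 'dc=eqiad,cluster=parsoid,name=wtp1040.eqiad.wmnet' set/pooled=inactive
)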
12:03 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on wtp[1039-1040].eqiad.wmnet with reason: Downtiming replaced wtp servers [production]
12:02 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on wtp[1039-1040].eqiad.wmnet with reason: Downtiming replaced wtp servers [production]
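(The START/END pair above comes from the Spicerack sre.hosts.downtime cookbook, which silences monitoring for the listed hosts for the given duration. A minimal sketch of an invocation that would produce it, run from a cumin host; the flag spellings are from memory of the cookbook and may differ slightly:

    # downtime the two replaced parsoid hosts for 7 days
    # (duration/reason flags assumed; host query copied from the log entry)
    sudo cookbook sre.hosts.downtime --days 7 -r "Downtiming replaced wtp servers" 'wtp[1039-1040].eqiad.wmnet'
)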
12:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33946 and previous config saved to /var/cache/conftool/dbconfig/20220906-120135-root.json [production]
12:01 <claime> depooled wtp1042.eqiad.wmnet from parsoid cluster T307219 [production]
11:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 25%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33945 and previous config saved to /var/cache/conftool/dbconfig/20220906-114631-root.json [production]
11:35 <jayme@deploy1002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
11:34 <jayme@deploy1002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
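(The "helmfile [codfw] START/DONE helmfile.d/admin 'apply'" lines above are logged when the cluster-admin Helmfile state is applied to an environment from the deployment host. A minimal sketch of the underlying helmfile usage; the working directory is an assumption and the deploy hosts typically wrap this in their own tooling:

    # from the admin helmfile directory on the deployment host (path is an assumption),
    # apply the desired state for the codfw environment
    cd /srv/deployment-charts/helmfile.d/admin_ng
    helmfile -e codfw apply
)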
11:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 10%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33944 and previous config saved to /var/cache/conftool/dbconfig/20220906-113126-root.json [production]
11:27 <claime> pooled parse1009.eqiad.wmnet (php 7.4 only) in parsoid cluster T307219 [production]
11:26 <jbond@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "sync data - jbond@cumin2002" [production]
11:26 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on 12 hosts with reason: Downtime pending inclusion in production [production]
11:26 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on 12 hosts with reason: Downtime pending inclusion in production [production]
11:25 <jbond@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "sync data - jbond@cumin2002" [production]
11:17 <XioNoX> put cr4-ulsfo back in service - T295690 [production]
11:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 5%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33943 and previous config saved to /var/cache/conftool/dbconfig/20220906-111621-root.json [production]
11:12 <jayme@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:12 <jayme@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
11:12 <jayme@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
11:11 <jayme@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
11:11 <moritzm> installing ghostscript updates on stretch [production]
11:06 <XioNoX> restart cr4-ulsfo for software upgrade - T295690 [production]
11:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 4%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33942 and previous config saved to /var/cache/conftool/dbconfig/20220906-110116-root.json [production]
10:58 <marostegui@cumin1001> dbctl commit (dc=all): 'db1189 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33941 and previous config saved to /var/cache/conftool/dbconfig/20220906-105841-root.json [production]
10:58 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for parse1009.eqiad.wmnet [production]
10:57 <cgoubert@cumin1001> START - Cookbook sre.hosts.remove-downtime for parse1009.eqiad.wmnet [production]
10:52 <moritzm> uploaded ghostscript 9.26a~dfsg-0+deb9u9+wmf1 to apt.wikimedia.org [production]
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 3%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33940 and previous config saved to /var/cache/conftool/dbconfig/20220906-104611-root.json [production]
10:44 <btullis@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:44 <btullis@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'sync'. [production]
10:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1189 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33939 and previous config saved to /var/cache/conftool/dbconfig/20220906-104336-root.json [production]
10:42 <XioNoX> drain traffic from cr4-ulsfo - T295690 [production]
10:40 <jayme> switched primary kube-controller-manager from kubemaster1001 to kubemaster1002 [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1188 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33938 and previous config saved to /var/cache/conftool/dbconfig/20220906-103402-root.json [production]
10:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 2%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33937 and previous config saved to /var/cache/conftool/dbconfig/20220906-103104-root.json [production]
10:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db1174 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33936 and previous config saved to /var/cache/conftool/dbconfig/20220906-103017-root.json [production]
10:29 <marostegui@cumin1001> dbctl commit (dc=all): 'db1119 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33935 and previous config saved to /var/cache/conftool/dbconfig/20220906-102919-root.json [production]
10:28 <marostegui@cumin1001> dbctl commit (dc=all): 'db1189 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33934 and previous config saved to /var/cache/conftool/dbconfig/20220906-102831-root.json [production]
10:27 <btullis@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:27 <btullis@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'sync'. [production]
10:26 <XioNoX> put cr3-ulsfo back in service - T295690 [production]
10:25 <btullis@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:25 <btullis@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'sync'. [production]
10:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1103 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33932 and previous config saved to /var/cache/conftool/dbconfig/20220906-102152-root.json [production]
10:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1188 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33931 and previous config saved to /var/cache/conftool/dbconfig/20220906-101858-root.json [production]
10:15 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 1%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33930 and previous config saved to /var/cache/conftool/dbconfig/20220906-101559-root.json [production]
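(The db1138 entries from 10:15 through 12:01 trace a gradual repool, 1% → 2% → 3% → 4% → 5% → 10% → 25% → 50%, with the commit message "Repooling after cloning another host". Each step is a dbctl weight change followed by a config commit; a minimal sketch of one such step, with flag spellings from memory of dbctl that may not match the exact installed version, and noting the stepping itself is normally driven by automation on the cumin hosts:

    # pool db1138 at 10% of its configured weight, then commit the change
    # (subcommand and -p/-m flags assumed; the commit message mirrors the log entry)
    sudo dbctl instance db1138 pool -p 10
    sudo dbctl config commit -m 'db1138 (re)pooling @ 10%: Repooling after cloning another host'
)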