2022-09-06
13:16 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
13:16 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
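
(The helmfile START/DONE pairs in this log are emitted by the deployment tooling on the deploy hosts. A minimal sketch of the underlying invocation, assuming the standard /srv/deployment-charts checkout on deploy1002:

    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e codfw apply   # logged as "START ... apply" / "DONE ... apply"
)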
13:15 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134', diff saved to https://phabricator.wikimedia.org/P33952 and previous config saved to /var/cache/conftool/dbconfig/20220906-131510-ladsgroup.json [production]
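
(dbctl entries record commits to the database load-balancer configuration; the Phabricator paste and the JSON backup of the previous config are generated by dbctl itself. A hedged sketch of the repool sequence behind an entry like the one above, assuming dbctl's instance/commit subcommands:

    dbctl instance db1134 pool
    dbctl config commit -m 'Repooling after maintenance db1134'
)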
13:12 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
13:00 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134 (T312863)', diff saved to https://phabricator.wikimedia.org/P33951 and previous config saved to /var/cache/conftool/dbconfig/20220906-130004-ladsgroup.json [production]
12:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 100%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33950 and previous config saved to /var/cache/conftool/dbconfig/20220906-123145-root.json [production]
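
(The "db1138 (re)pooling @ N%" entry above and its lower-percentage counterparts further down trace a gradual repool: the host's share of traffic is raised stepwise, 3% through 100% here, with roughly fifteen minutes between commits. A rough sketch of the pattern, assuming dbctl takes a percentage flag; in practice a wrapper script drives this rather than an operator typing each step:

    for pct in 3 4 5 10 25 50 75 100; do
        dbctl instance db1138 pool -p "$pct"
        dbctl config commit -m "db1138 (re)pooling @ ${pct}%: Repooling after cloning another host"
        sleep 900   # let traffic settle before the next step
    done
)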
12:29 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:15:00 on puppetdb2002.codfw.wmnet with reason: Temporarily stop puppetdb/postgres [production]
12:29 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 0:15:00 on puppetdb2002.codfw.wmnet with reason: Temporarily stop puppetdb/postgres [production]
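
(sre.hosts.downtime is a Spicerack cookbook that schedules a monitoring downtime for the target hosts and logs the START/END pair seen above. A sketch of the matching invocation, assuming the cookbook's duration and reason flags:

    sudo cookbook sre.hosts.downtime --minutes 15 \
        -r 'Temporarily stop puppetdb/postgres' 'puppetdb2002.codfw.wmnet'
)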
12:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 75%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33949 and previous config saved to /var/cache/conftool/dbconfig/20220906-121640-root.json [production]
12:15 <XioNoX> repool ulsfo - T295690 [production]
12:14 <cgoubert@puppetmaster1001> conftool action : set/pooled=no:weight=10; selector: dc=eqiad,cluster=parsoid,name=parse1010.eqiad.wmnet [production]
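
("conftool action" lines are confctl's audit trail; the selector and action recorded above map directly onto the command line. Reconstructed from the entry (the sudo wrapper is an assumption):

    sudo confctl select 'dc=eqiad,cluster=parsoid,name=parse1010.eqiad.wmnet' \
        set/pooled=no:weight=10
)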
12:05 <claime> Set wtp10[38-40].eqiad.wmnet inactive pending decommission T317025 [production]
12:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2180 (T314041)', diff saved to https://phabricator.wikimedia.org/P33948 and previous config saved to /var/cache/conftool/dbconfig/20220906-120433-ladsgroup.json [production]
12:04 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2180.codfw.wmnet with reason: Maintenance [production]
12:04 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2180.codfw.wmnet with reason: Maintenance [production]
12:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2171:3316 (T314041)', diff saved to https://phabricator.wikimedia.org/P33947 and previous config saved to /var/cache/conftool/dbconfig/20220906-120412-ladsgroup.json [production]
12:03 <cgoubert@puppetmaster1001> conftool action : set/pooled=inactive; selector: dc=eqiad,cluster=parsoid,name=wtp1040.eqiad.wmnet [production]
12:03 <cgoubert@puppetmaster1001> conftool action : set/pooled=inactive; selector: dc=eqiad,cluster=parsoid,name=wtp1039.eqiad.wmnet [production]
12:03 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on wtp[1039-1040].eqiad.wmnet with reason: Downtiming replaced wtp servers [production]
12:02 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on wtp[1039-1040].eqiad.wmnet with reason: Downtiming replaced wtp servers [production]
12:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 50%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33946 and previous config saved to /var/cache/conftool/dbconfig/20220906-120135-root.json [production]
12:01 <claime> depooled wtp1042.eqiad.wmnet from parsoid cluster T307219 [production]
11:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 25%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33945 and previous config saved to /var/cache/conftool/dbconfig/20220906-114631-root.json [production]
11:35 <jayme@deploy1002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
11:34 <jayme@deploy1002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
11:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 10%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33944 and previous config saved to /var/cache/conftool/dbconfig/20220906-113126-root.json [production]
11:27 <claime> pooled parse1009.eqiad.wmnet (php 7.4 only) in parsoid cluster T307219 [production]
11:26 <jbond@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "sync data - jbond@cumin2002" [production]
11:26 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on 12 hosts with reason: Downtime pending inclusion in production [production]
11:26 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on 12 hosts with reason: Downtime pending inclusion in production [production]
11:25 <jbond@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "sync data - jbond@cumin2002" [production]
11:17 <XioNoX> put cr4-ulsfo back in service - T295690 [production]
11:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 5%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33943 and previous config saved to /var/cache/conftool/dbconfig/20220906-111621-root.json [production]
11:12 <jayme@deploy1002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
11:12 <jayme@deploy1002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
11:12 <jayme@deploy1002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
11:11 <jayme@deploy1002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
11:11 <moritzm> installing ghostscript updates on stretch [production]
11:06 <XioNoX> restart cr4-ulsfo for software upgrade - T295690 [production]
11:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 4%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33942 and previous config saved to /var/cache/conftool/dbconfig/20220906-110116-root.json [production]
10:58 <marostegui@cumin1001> dbctl commit (dc=all): 'db1189 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33941 and previous config saved to /var/cache/conftool/dbconfig/20220906-105841-root.json [production]
10:58 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for parse1009.eqiad.wmnet [production]
10:57 <cgoubert@cumin1001> START - Cookbook sre.hosts.remove-downtime for parse1009.eqiad.wmnet [production]
10:52 <moritzm> uploaded ghostscript 9.26a~dfsg-0+deb9u9+wmf1 to apt.wikimedia.org [production]
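
(Packages reach apt.wikimedia.org through a reprepro-managed repository. A hedged sketch of an import along these lines, with a hypothetical .changes filename for the stretch-wikimedia distribution:

    reprepro include stretch-wikimedia ghostscript_9.26a~dfsg-0+deb9u9+wmf1_amd64.changes
)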
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 3%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P33940 and previous config saved to /var/cache/conftool/dbconfig/20220906-104611-root.json [production]
10:44 <btullis@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:44 <btullis@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'sync'. [production]
10:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1189 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33939 and previous config saved to /var/cache/conftool/dbconfig/20220906-104336-root.json [production]
10:42 <XioNoX> drain traffic from cr4-ulsfo - T295690 [production]
10:40 <jayme> switched primary kube-controller-manager from kubemaster1001 to kubemaster1002 [production]
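
(On the Kubernetes versions deployed at the time, kube-controller-manager leader election was recorded as an annotation on an Endpoints object in kube-system, so a switchover like the one above can be checked with something like the following; a sketch, and newer clusters use a Lease object instead:

    kubectl -n kube-system get ep kube-controller-manager \
        -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
)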