2022-06-13
ยง
|
11:18 <jbond@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=netbox,name=codfw [production]
11:18 <marostegui> Reboot db1131 for kernel upgrade T310485 [production]
11:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1131 for kernel upgrade', diff saved to https://phabricator.wikimedia.org/P29645 and previous config saved to /var/cache/conftool/dbconfig/20220613-111621-root.json [production]
11:15 <jbond@cumin2002> START - Cookbook sre.hosts.reboot-single for host netbox2002.codfw.wmnet [production]
11:15 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host people2002.codfw.wmnet [production]
11:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P29644 and previous config saved to /var/cache/conftool/dbconfig/20220613-111459-marostegui.json [production]
11:14 <jbond@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host netboxdb2002.codfw.wmnet [production]
11:12 <jbond@cumin2002> START - Cookbook sre.hosts.reboot-single for host netboxdb2002.codfw.wmnet [production]
11:12 <jbond@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pki2002.codfw.wmnet [production]
11:11 <jbond@cumin1001> START - Cookbook sre.hosts.reboot-single for host people2002.codfw.wmnet [production]
11:10 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host people1003.eqiad.wmnet [production]
11:08 <jbond@cumin1001> START - Cookbook sre.hosts.reboot-single for host people1003.eqiad.wmnet [production]
11:07 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host puppetboard1002.eqiad.wmnet [production]
11:07 <jbond@cumin2002> START - Cookbook sre.hosts.reboot-single for host pki2002.codfw.wmnet [production]
11:04 <jbond@cumin1001> START - Cookbook sre.hosts.reboot-single for host puppetboard1002.eqiad.wmnet [production]
11:03 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host puppetboard2002.codfw.wmnet [production]
11:02 <jmm@cumin2002> END (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Dsharpe out of all services on: 1219 hosts [production]
11:00 <jmm@cumin2002> START - Cookbook sre.idm.logout Logging Dsharpe out of all services on: 1219 hosts [production]
11:00 <jbond@cumin1001> START - Cookbook sre.hosts.reboot-single for host puppetboard2002.codfw.wmnet [production]
10:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143', diff saved to https://phabricator.wikimedia.org/P29643 and previous config saved to /var/cache/conftool/dbconfig/20220613-105954-marostegui.json [production]
10:56 <jmm@cumin2002> END (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Dsharpe out of all services on: 609 hosts [production]
10:56 <jmm@cumin2002> START - Cookbook sre.idm.logout Logging Dsharpe out of all services on: 609 hosts [production]
10:52 <klausman@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:52 <klausman@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:52 <klausman@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:51 <klausman@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:51 <klausman@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:50 <klausman@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:50 <klausman@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:50 <klausman@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:44 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1143 (T310011)', diff saved to https://phabricator.wikimedia.org/P29642 and previous config saved to /var/cache/conftool/dbconfig/20220613-104449-marostegui.json [production]
10:38 <klausman@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:38 <klausman@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:37 <klausman@deploy1002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:37 <klausman@deploy1002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1143 (T310011)', diff saved to https://phabricator.wikimedia.org/P29641 and previous config saved to /var/cache/conftool/dbconfig/20220613-101537-marostegui.json [production]
10:15 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
10:15 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1143.eqiad.wmnet with reason: Maintenance [production]
10:13 <moritzm> installing 5.10.120 kernel updates on bullseye hosts [production]
09:53 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
09:53 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
09:12 <moritzm> drain ganeti3001 for firmware update/reimage T308238 [production]
09:07 <moritzm> installing ntfs-3g security updates [production]
07:54 <moritzm> failover ganeti master in esams to ganeti3003 T308238 [production]
07:18 <joal> Manually rerun webrequest_text load for hour 2022-06-12T08:00 [production]
06:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1172 (re)pooling @ 100%: After schema change', diff saved to https://phabricator.wikimedia.org/P29640 and previous config saved to /var/cache/conftool/dbconfig/20220613-064109-root.json [production]
06:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1172 (re)pooling @ 75%: After schema change', diff saved to https://phabricator.wikimedia.org/P29639 and previous config saved to /var/cache/conftool/dbconfig/20220613-062605-root.json [production]
06:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1172 (re)pooling @ 50%: After schema change', diff saved to https://phabricator.wikimedia.org/P29638 and previous config saved to /var/cache/conftool/dbconfig/20220613-061101-root.json [production]
05:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1172 (re)pooling @ 25%: After schema change', diff saved to https://phabricator.wikimedia.org/P29637 and previous config saved to /var/cache/conftool/dbconfig/20220613-055557-root.json [production]
05:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172', diff saved to https://phabricator.wikimedia.org/P29636 and previous config saved to /var/cache/conftool/dbconfig/20220613-054623-marostegui.json [production]