2023-02-10
ยง
|
08:29 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1165', diff saved to https://phabricator.wikimedia.org/P44140 and previous config saved to /var/cache/conftool/dbconfig/20230210-082923-marostegui.json [production]
08:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120', diff saved to https://phabricator.wikimedia.org/P44139 and previous config saved to /var/cache/conftool/dbconfig/20230210-081612-marostegui.json [production]
08:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1165', diff saved to https://phabricator.wikimedia.org/P44138 and previous config saved to /var/cache/conftool/dbconfig/20230210-081417-marostegui.json [production]
08:12 <moritzm> installing virglrenderer security updates [production]
08:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P44137 and previous config saved to /var/cache/conftool/dbconfig/20230210-080841-root.json [production]
08:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120', diff saved to https://phabricator.wikimedia.org/P44136 and previous config saved to /var/cache/conftool/dbconfig/20230210-080106-marostegui.json [production]
07:59 <elukey@cumin1001> END (FAIL) - Cookbook sre.k8s.upgrade-cluster (exit_code=99) Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
07:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1165 (T329203)', diff saved to https://phabricator.wikimedia.org/P44135 and previous config saved to /var/cache/conftool/dbconfig/20230210-075911-marostegui.json [production]
07:59 <elukey@cumin1001> START - Cookbook sre.k8s.upgrade-cluster Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
07:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1165 (T329203)', diff saved to https://phabricator.wikimedia.org/P44134 and previous config saved to /var/cache/conftool/dbconfig/20230210-075702-marostegui.json [production]
07:56 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
07:56 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]
07:56 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1165.eqiad.wmnet with reason: Maintenance [production]
07:56 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1165.eqiad.wmnet with reason: Maintenance [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P44133 and previous config saved to /var/cache/conftool/dbconfig/20230210-075336-root.json [production]
07:53 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
07:53 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316 (T329203)', diff saved to https://phabricator.wikimedia.org/P44132 and previous config saved to /var/cache/conftool/dbconfig/20230210-075314-marostegui.json [production]
07:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2120 (T328817)', diff saved to https://phabricator.wikimedia.org/P44131 and previous config saved to /var/cache/conftool/dbconfig/20230210-074600-marostegui.json [production]
07:43 <elukey@cumin1001> END (FAIL) - Cookbook sre.k8s.upgrade-cluster (exit_code=99) Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
07:41 <elukey@cumin1001> START - Cookbook sre.k8s.upgrade-cluster Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
07:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2120 (T328817)', diff saved to https://phabricator.wikimedia.org/P44130 and previous config saved to /var/cache/conftool/dbconfig/20230210-073902-marostegui.json [production]
07:38 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2120.codfw.wmnet with reason: Maintenance [production]
07:38 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2120.codfw.wmnet with reason: Maintenance [production]
07:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2108 (T328817)', diff saved to https://phabricator.wikimedia.org/P44129 and previous config saved to /var/cache/conftool/dbconfig/20230210-073841-marostegui.json [production]
07:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P44128 and previous config saved to /var/cache/conftool/dbconfig/20230210-073831-root.json [production]
07:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316', diff saved to https://phabricator.wikimedia.org/P44127 and previous config saved to /var/cache/conftool/dbconfig/20230210-073808-marostegui.json [production]
07:38 <moritzm> installing wireshark security updates [production]
07:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2108', diff saved to https://phabricator.wikimedia.org/P44126 and previous config saved to /var/cache/conftool/dbconfig/20230210-072335-marostegui.json [production]
07:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P44125 and previous config saved to /var/cache/conftool/dbconfig/20230210-072327-root.json [production]
07:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316', diff saved to https://phabricator.wikimedia.org/P44124 and previous config saved to /var/cache/conftool/dbconfig/20230210-072301-marostegui.json [production]
07:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2108', diff saved to https://phabricator.wikimedia.org/P44123 and previous config saved to /var/cache/conftool/dbconfig/20230210-070829-marostegui.json [production]
07:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P44122 and previous config saved to /var/cache/conftool/dbconfig/20230210-070822-root.json [production]
07:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3316 (T329203)', diff saved to https://phabricator.wikimedia.org/P44121 and previous config saved to /var/cache/conftool/dbconfig/20230210-070755-marostegui.json [production]
06:57 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db1098.eqiad.wmnet [production]
06:57 <marostegui@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
06:57 <marostegui@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db1098.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - marostegui@cumin1001" [production]
06:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2108 (T328817)', diff saved to https://phabricator.wikimedia.org/P44120 and previous config saved to /var/cache/conftool/dbconfig/20230210-065322-marostegui.json [production]
06:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P44119 and previous config saved to /var/cache/conftool/dbconfig/20230210-065317-root.json [production]
06:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2108 (T328817)', diff saved to https://phabricator.wikimedia.org/P44118 and previous config saved to /var/cache/conftool/dbconfig/20230210-064728-marostegui.json [production]
06:47 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2108.codfw.wmnet with reason: Maintenance [production]
06:47 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2108.codfw.wmnet with reason: Maintenance [production]
06:46 <marostegui@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db1098.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - marostegui@cumin1001" [production]
06:44 <marostegui@cumin1001> START - Cookbook sre.dns.netbox [production]
06:43 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2100.codfw.wmnet with reason: Maintenance [production]
06:43 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2100.codfw.wmnet with reason: Maintenance [production]
06:40 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission for hosts db1098.eqiad.wmnet [production]
06:38 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2098.codfw.wmnet with reason: Maintenance [production]
06:38 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2098.codfw.wmnet with reason: Maintenance [production]
06:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P44117 and previous config saved to /var/cache/conftool/dbconfig/20230210-063812-root.json [production]