2023-08-28
ยง
|
15:03 <wm-bot2> Restarting openstack services on cloudvirt1045: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1040: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1036: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1034: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1039: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1037: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1035: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1033: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1031: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1032: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudcontrol1005: ['nova-conductor', 'nova-scheduler', 'nova-api', 'nova-api-metadata', 'cinder-volume', 'cinder-scheduler', 'neutron-api', 'neutron-rpc-server', 'trove-api', 'trove-conductor', 'trove-taskmanager', 'keystone', 'keystone-admin', 'glance-api', 'magnum-api', 'magnum-conductor', 'heat-api', 'heat-api-cfn', 'heat-engine'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1028: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1030: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:02 <wm-bot2> Restarting openstack services on cloudvirt1027: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:01 <wm-bot2> Restarting openstack services on cloudvirt1026: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:01 <wm-bot2> Restarting openstack services on cloudvirt1029: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:01 <wm-bot2> Restarting openstack services on cloudvirt1025: ['nova-compute', 'neutron-linuxbridge-agent'] (T345084) - cookbook ran by root@cloudcumin1001 [admin]
15:01 <fnegri@cloudcumin1001> START - Cookbook wmcs.openstack.restart_openstack [admin]
14:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2129', diff saved to https://phabricator.wikimedia.org/P51662 and previous config saved to /var/cache/conftool/dbconfig/20230828-145940-ladsgroup.json [production]
14:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2106 (T343718)', diff saved to https://phabricator.wikimedia.org/P51661 and previous config saved to /var/cache/conftool/dbconfig/20230828-145921-ladsgroup.json [production]
14:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1027', diff saved to https://phabricator.wikimedia.org/P51660 and previous config saved to /var/cache/conftool/dbconfig/20230828-145912-ladsgroup.json [production]
14:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2106.codfw.wmnet with reason: Maintenance [production]
14:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2106.codfw.wmnet with reason: Maintenance [production]
14:58 <wm-bot2> deployed kubernetes component envvars-api (90055b5) (T344502) - cookbook ran by dcaro@urcuchillay [tools]
14:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1173', diff saved to https://phabricator.wikimedia.org/P51659 and previous config saved to /var/cache/conftool/dbconfig/20230828-145730-ladsgroup.json [production]
14:55 <elukey@cumin1001> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:ml-serve-worker-eqiad [production]
14:54 <claime> bounced ferm.service on ml-serve1008 [production]
14:54 <wm-bot2> deployed kubernetes component envvars-api (90055b5) (T344502) - cookbook ran by dcaro@urcuchillay [toolsbeta]
14:53 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2026.codfw.wmnet [production]
14:53 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2026.codfw.wmnet [production]
14:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T343718)', diff saved to https://phabricator.wikimedia.org/P51658 and previous config saved to /var/cache/conftool/dbconfig/20230828-145116-ladsgroup.json [production]
14:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2167:3311 (T343718)', diff saved to https://phabricator.wikimedia.org/P51657 and previous config saved to /var/cache/conftool/dbconfig/20230828-144924-ladsgroup.json [production]
14:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2167.codfw.wmnet with reason: Maintenance [production]
14:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2167.codfw.wmnet with reason: Maintenance [production]
14:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2153 (T343718)', diff saved to https://phabricator.wikimedia.org/P51656 and previous config saved to /var/cache/conftool/dbconfig/20230828-144903-ladsgroup.json [production]
14:47 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2026.codfw.wmnet [production]
14:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2129', diff saved to https://phabricator.wikimedia.org/P51655 and previous config saved to /var/cache/conftool/dbconfig/20230828-144433-ladsgroup.json [production]
14:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1027 (T344589)', diff saved to https://phabricator.wikimedia.org/P51654 and previous config saved to /var/cache/conftool/dbconfig/20230828-144406-ladsgroup.json [production]
14:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1173', diff saved to https://phabricator.wikimedia.org/P51653 and previous config saved to /var/cache/conftool/dbconfig/20230828-144224-ladsgroup.json [production]
14:40 <fabfur> enable puppet and start pybal on lvs6002 (T344587) [production]
14:40 <fabfur@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs6002.drmrs.wmnet [production]
14:39 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2026.codfw.wmnet [production]
14:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling es1027 (T344589)', diff saved to https://phabricator.wikimedia.org/P51652 and previous config saved to /var/cache/conftool/dbconfig/20230828-143808-ladsgroup.json [production]
14:38 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on es1027.eqiad.wmnet with reason: Maintenance [production]
14:37 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on es1027.eqiad.wmnet with reason: Maintenance [production]
14:37 <fabfur@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs6002.drmrs.wmnet [production]
14:36 <jbond@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host puppetserver1002.eqiad.wmnet with OS bookworm [production]
14:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1030 (T344589)', diff saved to https://phabricator.wikimedia.org/P51651 and previous config saved to /var/cache/conftool/dbconfig/20230828-143453-ladsgroup.json [production]
14:33 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2153', diff saved to https://phabricator.wikimedia.org/P51650 and previous config saved to /var/cache/conftool/dbconfig/20230828-143357-ladsgroup.json [production]
14:32 <bblack> esams cp clusters: rolling restarts of varnish-frontend ~1h apart over the next ~8h, to apply memory sizing change from: https://gerrit.wikimedia.org/r/c/operations/puppet/+/952866/ (earlier run only did 1 host per cluster before we changed direction!) [production]