2023-08-28 §
15:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1027', diff saved to https://phabricator.wikimedia.org/P51666 and previous config saved to /var/cache/conftool/dbconfig/20230828-151418-ladsgroup.json [production]
15:13 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1180 (T344589)', diff saved to https://phabricator.wikimedia.org/P51665 and previous config saved to /var/cache/conftool/dbconfig/20230828-151300-ladsgroup.json [production]
15:12 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1180.eqiad.wmnet with reason: Maintenance [production]
15:12 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1180.eqiad.wmnet with reason: Maintenance [production]
15:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1173 (T344589)', diff saved to https://phabricator.wikimedia.org/P51664 and previous config saved to /var/cache/conftool/dbconfig/20230828-151236-ladsgroup.json [production]
15:09 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host moss-be2003.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:07 <isaranto@deploy1002> helmfile [ml-serve-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
15:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P51663 and previous config saved to /var/cache/conftool/dbconfig/20230828-150622-ladsgroup.json [production]
15:06 <isaranto@deploy1002> helmfile [ml-serve-eqiad] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
15:05 <isaranto@deploy1002> helmfile [ml-staging-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
14:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2129', diff saved to https://phabricator.wikimedia.org/P51662 and previous config saved to /var/cache/conftool/dbconfig/20230828-145940-ladsgroup.json [production]
14:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2106 (T343718)', diff saved to https://phabricator.wikimedia.org/P51661 and previous config saved to /var/cache/conftool/dbconfig/20230828-145921-ladsgroup.json [production]
14:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1027', diff saved to https://phabricator.wikimedia.org/P51660 and previous config saved to /var/cache/conftool/dbconfig/20230828-145912-ladsgroup.json [production]
14:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2106.codfw.wmnet with reason: Maintenance [production]
14:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2106.codfw.wmnet with reason: Maintenance [production]
14:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1173', diff saved to https://phabricator.wikimedia.org/P51659 and previous config saved to /var/cache/conftool/dbconfig/20230828-145730-ladsgroup.json [production]
14:55 <elukey@cumin1001> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:ml-serve-worker-eqiad [production]
14:54 <claime> bounced ferm.service on ml-serve1008 [production]
14:53 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2026.codfw.wmnet [production]
14:53 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2026.codfw.wmnet [production]
14:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T343718)', diff saved to https://phabricator.wikimedia.org/P51658 and previous config saved to /var/cache/conftool/dbconfig/20230828-145116-ladsgroup.json [production]
14:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2167:3311 (T343718)', diff saved to https://phabricator.wikimedia.org/P51657 and previous config saved to /var/cache/conftool/dbconfig/20230828-144924-ladsgroup.json [production]
14:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2167.codfw.wmnet with reason: Maintenance [production]
14:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2167.codfw.wmnet with reason: Maintenance [production]
14:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2153 (T343718)', diff saved to https://phabricator.wikimedia.org/P51656 and previous config saved to /var/cache/conftool/dbconfig/20230828-144903-ladsgroup.json [production]
14:47 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2026.codfw.wmnet [production]
14:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2129', diff saved to https://phabricator.wikimedia.org/P51655 and previous config saved to /var/cache/conftool/dbconfig/20230828-144433-ladsgroup.json [production]
14:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1027 (T344589)', diff saved to https://phabricator.wikimedia.org/P51654 and previous config saved to /var/cache/conftool/dbconfig/20230828-144406-ladsgroup.json [production]
14:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1173', diff saved to https://phabricator.wikimedia.org/P51653 and previous config saved to /var/cache/conftool/dbconfig/20230828-144224-ladsgroup.json [production]
14:40 <fabfur> enable puppet and start pybal on lvs6002 (T344587) [production]
14:40 <fabfur@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs6002.drmrs.wmnet [production]
14:39 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2026.codfw.wmnet [production]
14:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling es1027 (T344589)', diff saved to https://phabricator.wikimedia.org/P51652 and previous config saved to /var/cache/conftool/dbconfig/20230828-143808-ladsgroup.json [production]
14:38 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on es1027.eqiad.wmnet with reason: Maintenance [production]
14:37 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on es1027.eqiad.wmnet with reason: Maintenance [production]
14:37 <fabfur@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs6002.drmrs.wmnet [production]
14:36 <jbond@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host puppetserver1002.eqiad.wmnet with OS bookworm [production]
14:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es1030 (T344589)', diff saved to https://phabricator.wikimedia.org/P51651 and previous config saved to /var/cache/conftool/dbconfig/20230828-143453-ladsgroup.json [production]
14:33 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2153', diff saved to https://phabricator.wikimedia.org/P51650 and previous config saved to /var/cache/conftool/dbconfig/20230828-143357-ladsgroup.json [production]
14:32 <bblack> esams cp clusters: rolling restarts of varnish-frontend ~1h apart over the next ~8h, to apply memory sizing change from: https://gerrit.wikimedia.org/r/c/operations/puppet/+/952866/ (earlier run only did 1 host per cluster before we changed direction!) [production]
14:31 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for restbase1027.eqiad.wmnet [production]
14:31 <eevans@cumin1001> START - Cookbook sre.hosts.remove-downtime for restbase1027.eqiad.wmnet [production]
14:29 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host moss-be2003.mgmt.codfw.wmnet with reboot policy FORCED [production]
14:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2129 (T344589)', diff saved to https://phabricator.wikimedia.org/P51649 and previous config saved to /var/cache/conftool/dbconfig/20230828-142927-ladsgroup.json [production]
14:28 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host moss-be2003.mgmt.codfw.wmnet with reboot policy FORCED [production]
14:27 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1173 (T344589)', diff saved to https://phabricator.wikimedia.org/P51648 and previous config saved to /var/cache/conftool/dbconfig/20230828-142718-ladsgroup.json [production]
14:25 <claime> bounced ferm.service on ml-serve1007 [production]
14:21 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2129 (T344589)', diff saved to https://phabricator.wikimedia.org/P51647 and previous config saved to /var/cache/conftool/dbconfig/20230828-142105-ladsgroup.json [production]
14:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1173 (T344589)', diff saved to https://phabricator.wikimedia.org/P51646 and previous config saved to /var/cache/conftool/dbconfig/20230828-142056-ladsgroup.json [production]
14:20 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2129.codfw.wmnet with reason: Maintenance [production]