2022-03-08
11:53 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2105.codfw.wmnet with reason: Maintenance [production]
11:51 <btullis@cumin2002> START - Cookbook sre.druid.roll-restart-workers for Druid public cluster: Roll restart of Druid jvm daemons. [production]
11:50 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp1083.eqiad.wmnet with OS buster [production]
11:48 <vgutierrez> pool cp1083 with HAProxy as TLS termination layer - T290005 [production]
11:44 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1131', diff saved to https://phabricator.wikimedia.org/P22088 and previous config saved to /var/cache/conftool/dbconfig/20220308-114434-marostegui.json [production]
11:41 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
11:41 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
11:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1162 (re)pooling @ 100%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P22086 and previous config saved to /var/cache/conftool/dbconfig/20220308-113424-root.json [production]
11:31 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-serve2008.codfw.wmnet [production]
11:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1123 (T300381)', diff saved to https://phabricator.wikimedia.org/P22085 and previous config saved to /var/cache/conftool/dbconfig/20220308-113110-marostegui.json [production]
11:31 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
11:31 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
11:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112 (T300381)', diff saved to https://phabricator.wikimedia.org/P22084 and previous config saved to /var/cache/conftool/dbconfig/20220308-113102-marostegui.json [production]
11:30 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cp1083.eqiad.wmnet with reason: host reimage [production]
11:29 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1131 (T298294)', diff saved to https://phabricator.wikimedia.org/P22083 and previous config saved to /var/cache/conftool/dbconfig/20220308-112929-marostegui.json [production]
11:29 <btullis@cumin1001> END (PASS) - Cookbook sre.druid.roll-restart-workers (exit_code=0) for Druid test cluster: Roll restart of Druid jvm daemons. [production]
11:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1131 (T298294)', diff saved to https://phabricator.wikimedia.org/P22082 and previous config saved to /var/cache/conftool/dbconfig/20220308-112811-marostegui.json [production]
11:28 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1131.eqiad.wmnet with reason: Maintenance [production]
11:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1131.eqiad.wmnet with reason: Maintenance [production]
11:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316 (T298294)', diff saved to https://phabricator.wikimedia.org/P22081 and previous config saved to /var/cache/conftool/dbconfig/20220308-112804-marostegui.json [production]
11:27 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cp1083.eqiad.wmnet with reason: host reimage [production]
11:25 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ml-serve2008.codfw.wmnet [production]
11:25 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-serve2007.codfw.wmnet [production]
11:20 <btullis@cumin1001> START - Cookbook sre.druid.roll-restart-workers for Druid test cluster: Roll restart of Druid jvm daemons. [production]
11:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1162 (re)pooling @ 75%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P22080 and previous config saved to /var/cache/conftool/dbconfig/20220308-111920-root.json [production]
11:18 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ml-serve2007.codfw.wmnet [production]
11:17 <hnowlan@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: sync [production]
11:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112', diff saved to https://phabricator.wikimedia.org/P22079 and previous config saved to /var/cache/conftool/dbconfig/20220308-111558-marostegui.json [production]
11:15 <XioNoX> Cleanup transport-in filters for codfw/eqiad (CR747551) [production]
11:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316', diff saved to https://phabricator.wikimedia.org/P22078 and previous config saved to /var/cache/conftool/dbconfig/20220308-111259-marostegui.json [production]
11:12 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-serve2006.codfw.wmnet [production]
11:11 <hnowlan@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: sync [production]
11:11 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reimage for host cp1083.eqiad.wmnet with OS buster [production]
11:10 <vgutierrez@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cp1083.eqiad.wmnet with OS buster [production]
11:09 <hnowlan@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: sync [production]
11:08 <klausman@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-cache2003.codfw.wmnet [production]
11:06 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host datahubsearch1003.eqiad.wmnet [production]
11:06 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cp1083.eqiad.wmnet with reason: host reimage [production]
11:05 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ml-serve2006.codfw.wmnet [production]
11:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1162 (re)pooling @ 50%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P22077 and previous config saved to /var/cache/conftool/dbconfig/20220308-110416-root.json [production]
11:03 <klausman@cumin2002> START - Cookbook sre.hosts.reboot-single for host ml-cache2003.codfw.wmnet [production]
11:03 <klausman@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-cache2002.codfw.wmnet [production]
11:03 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cp1083.eqiad.wmnet with reason: host reimage [production]
11:02 <btullis@cumin1001> END (PASS) - Cookbook sre.aqs.roll-restart (exit_code=0) for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. [production]
11:02 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-serve2005.codfw.wmnet [production]
11:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112', diff saved to https://phabricator.wikimedia.org/P22076 and previous config saved to /var/cache/conftool/dbconfig/20220308-110053-marostegui.json [production]
10:59 <btullis@cumin1001> START - Cookbook sre.aqs.roll-restart for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. [production]
10:59 <hnowlan@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: sync [production]
10:59 <klausman@cumin2002> START - Cookbook sre.hosts.reboot-single for host ml-cache2002.codfw.wmnet [production]
10:59 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host datahubsearch1003.eqiad.wmnet [production]