2022-02-07
18:02 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on restbase2020.codfw.wmnet with reason: Firmware upgrade [production]
18:02 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on restbase2019.codfw.wmnet with reason: Firmware upgrade [production]
18:02 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on restbase2019.codfw.wmnet with reason: Firmware upgrade [production]
18:01 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
17:56 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
17:56 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=restbase2020.wmnet [production]
17:56 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=restbase2019.wmnet [production]
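The two conftool entries above depool both restbase hosts ahead of the firmware upgrade. A minimal sketch of the confctl invocation behind such entries, reusing the selectors exactly as logged (the command form follows standard conftool usage and is not taken verbatim from this log):

    confctl select 'name=restbase2019.wmnet' set/pooled=no
    confctl select 'name=restbase2020.wmnet' set/pooled=no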
17:53 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317', diff saved to https://phabricator.wikimedia.org/P20212 and previous config saved to /var/cache/conftool/dbconfig/20220207-175352-ladsgroup.json [production]
17:51 <elukey@cumin1001> START - Cookbook sre.hosts.reimage for host ml-serve2005.codfw.wmnet with OS buster [production]
17:42 <volans@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host mc2042.mgmt.codfw.wmnet with reboot policy FORCED [production]
17:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317', diff saved to https://phabricator.wikimedia.org/P20211 and previous config saved to /var/cache/conftool/dbconfig/20220207-173848-ladsgroup.json [production]
17:26 <volans@cumin2002> START - Cookbook sre.hosts.provision for host mc2042.mgmt.codfw.wmnet with reboot policy FORCED [production]
17:26 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ganeti2030.codfw.wmnet with OS buster [production]
17:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3317 (T298554)', diff saved to https://phabricator.wikimedia.org/P20210 and previous config saved to /var/cache/conftool/dbconfig/20220207-172343-ladsgroup.json [production]
16:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1170:3317 (T298554)', diff saved to https://phabricator.wikimedia.org/P20209 and previous config saved to /var/cache/conftool/dbconfig/20220207-165952-ladsgroup.json [production]
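This depool entry, together with the staged repool commits logged above it, reflects the usual dbctl depool/commit cycle for a multi-instance replica. A hedged sketch of the commands behind such a log line, with the commit message taken from the entry itself (subcommand names follow the documented dbctl workflow and are an assumption here):

    dbctl instance db1170:3317 depool
    dbctl config commit -m 'Depooling db1170:3317 (T298554)'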
16:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
16:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1170.eqiad.wmnet with reason: Maintenance [production]
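The START/END pair above records a six-hour Icinga downtime set through the sre.hosts.downtime cookbook before the depool. A rough sketch of the invocation on a cumin host (the duration and reason options are assumptions; exact flag names can differ between cookbook versions):

    sudo cookbook sre.hosts.downtime --hours 6 -r 'Maintenance' db1170.eqiad.wmnet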
16:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T298554)', diff saved to https://phabricator.wikimedia.org/P20208 and previous config saved to /var/cache/conftool/dbconfig/20220207-165944-ladsgroup.json [production]
16:55 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host ganeti2030.codfw.wmnet with OS buster [production]
16:52 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ganeti2029.codfw.wmnet with OS buster [production]
16:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P20207 and previous config saved to /var/cache/conftool/dbconfig/20220207-164439-ladsgroup.json [production]
16:41 <moritzm> switch kubestagetcd2003 to plain disk storage [production]
16:39 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd2003.codfw.wmnet with reason: Switch to plain disk storage [production]
16:38 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd2003.codfw.wmnet with reason: Switch to plain disk storage [production]
16:30 <moritzm> switch kubestagetcd2002 to plain disk storage [production]
16:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P20206 and previous config saved to /var/cache/conftool/dbconfig/20220207-162935-ladsgroup.json [production]
16:29 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd2002.codfw.wmnet with reason: Switch to plain disk storage [production]
16:29 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd2002.codfw.wmnet with reason: Switch to plain disk storage [production]
16:24 <moritzm> switch kubestagetcd2001 to plain disk storage [production]
16:22 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd2001.codfw.wmnet with reason: Switch to plain disk storage [production]
16:22 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd2001.codfw.wmnet with reason: Switch to plain disk storage [production]
16:22 <pt1979@cumin2002> START - Cookbook sre.hosts.reimage for host ganeti2029.codfw.wmnet with OS buster [production]
16:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T298554)', diff saved to https://phabricator.wikimedia.org/P20205 and previous config saved to /var/cache/conftool/dbconfig/20220207-161430-ladsgroup.json [production]
16:05 <moritzm> migrating instances off ganeti1021 [production]
16:04 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ml-serve2005.codfw.wmnet with OS bullseye [production]
16:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1174 (T298554)', diff saved to https://phabricator.wikimedia.org/P20204 and previous config saved to /var/cache/conftool/dbconfig/20220207-160441-ladsgroup.json [production]
16:04 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1174.eqiad.wmnet with reason: Maintenance [production]
16:04 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1174.eqiad.wmnet with reason: Maintenance [production]
16:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T298554)', diff saved to https://phabricator.wikimedia.org/P20203 and previous config saved to /var/cache/conftool/dbconfig/20220207-160433-ladsgroup.json [production]
15:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P20201 and previous config saved to /var/cache/conftool/dbconfig/20220207-154928-ladsgroup.json [production]
15:47 <moritzm> installing pillow security updates [production]
15:44 <jayme@deploy1002> Finished deploy [restbase/deploy@0848b15] (dev-cluster): (no justification provided) (duration: 02m 30s) [production]
15:41 <jayme@deploy1002> Started deploy [restbase/deploy@0848b15] (dev-cluster): (no justification provided) [production]
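The Started/Finished pair above is a scap deploy of restbase/deploy to the dev cluster; "(no justification provided)" means the deploy was run without a log message. A hedged sketch of such an invocation from the repository checkout on the deployment host (the message argument is illustrative, and option handling may vary by scap version):

    scap deploy 'dev-cluster redeploy of restbase/deploy@0848b15'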
15:40 <jayme> updated scap to 4.3.0 on A:mw-canary, A:parsoid-canary, A:mw-jobrunner-canary, A:restbase-canary - T300804 [production]
15:36 <jayme> uploaded scap 4.3-0 to apt.w.o - T300804 [production]
15:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158', diff saved to https://phabricator.wikimedia.org/P20200 and previous config saved to /var/cache/conftool/dbconfig/20220207-153424-ladsgroup.json [production]
15:30 <elukey@cumin1001> START - Cookbook sre.hosts.reimage for host ml-serve2005.codfw.wmnet with OS bullseye [production]
15:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1158 (T298554)', diff saved to https://phabricator.wikimedia.org/P20199 and previous config saved to /var/cache/conftool/dbconfig/20220207-151917-ladsgroup.json [production]
15:10 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1158 (T298554)', diff saved to https://phabricator.wikimedia.org/P20198 and previous config saved to /var/cache/conftool/dbconfig/20220207-151018-ladsgroup.json [production]
15:10 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance [production]