2023-02-13
20:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1111', diff saved to https://phabricator.wikimedia.org/P44495 and previous config saved to /var/cache/conftool/dbconfig/20230213-205211-marostegui.json [production]
20:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2109', diff saved to https://phabricator.wikimedia.org/P44494 and previous config saved to /var/cache/conftool/dbconfig/20230213-204920-ladsgroup.json [production]
20:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159', diff saved to https://phabricator.wikimedia.org/P44493 and previous config saved to /var/cache/conftool/dbconfig/20230213-204348-marostegui.json [production]
20:39 <cmooney@cumin1001> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudcephosd1001.eqiad.wmnet'] [production]
20:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1111', diff saved to https://phabricator.wikimedia.org/P44492 and previous config saved to /var/cache/conftool/dbconfig/20230213-203704-marostegui.json [production]
20:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2109 (T328255)', diff saved to https://phabricator.wikimedia.org/P44491 and previous config saved to /var/cache/conftool/dbconfig/20230213-203413-ladsgroup.json [production]
20:32 <dcausse> restarting blazegraph on wdqs1012 (BlazegraphFreeAllocatorsDecreasingRapidly) [production]
20:30 <cmooney@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudcephosd1001.eqiad.wmnet'] [production]
20:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159', diff saved to https://phabricator.wikimedia.org/P44490 and previous config saved to /var/cache/conftool/dbconfig/20230213-202842-marostegui.json [production]
20:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2109 (T328255)', diff saved to https://phabricator.wikimedia.org/P44489 and previous config saved to /var/cache/conftool/dbconfig/20230213-202656-ladsgroup.json [production]
20:26 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
20:26 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
20:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2105 (T328255)', diff saved to https://phabricator.wikimedia.org/P44488 and previous config saved to /var/cache/conftool/dbconfig/20230213-202635-ladsgroup.json [production]
20:24 <cmooney@cumin1001> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['cloudcephosd1001.eqiad.wmnet'] [production]
20:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1111 (T328817)', diff saved to https://phabricator.wikimedia.org/P44487 and previous config saved to /var/cache/conftool/dbconfig/20230213-202157-marostegui.json [production]
20:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159 (T329203)', diff saved to https://phabricator.wikimedia.org/P44486 and previous config saved to /var/cache/conftool/dbconfig/20230213-201336-marostegui.json [production]
20:13 <cmooney@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudcephosd1001.eqiad.wmnet'] [production]
20:12 <elukey@cumin1001> END (PASS) - Cookbook sre.k8s.upgrade-cluster (exit_code=0) Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
20:12 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ml-staging2002.codfw.wmnet with OS bullseye [production]
20:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2105', diff saved to https://phabricator.wikimedia.org/P44485 and previous config saved to /var/cache/conftool/dbconfig/20230213-201129-ladsgroup.json [production]
20:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2159 (T329203)', diff saved to https://phabricator.wikimedia.org/P44484 and previous config saved to /var/cache/conftool/dbconfig/20230213-200742-marostegui.json [production]
20:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
20:07 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
20:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
20:07 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
20:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T329203)', diff saved to https://phabricator.wikimedia.org/P44483 and previous config saved to /var/cache/conftool/dbconfig/20230213-200654-marostegui.json [production]
19:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1111 (T328817)', diff saved to https://phabricator.wikimedia.org/P44482 and previous config saved to /var/cache/conftool/dbconfig/20230213-195743-marostegui.json [production]
19:57 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1111.eqiad.wmnet with reason: Maintenance [production]
19:57 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1111.eqiad.wmnet with reason: Maintenance [production]
19:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1104 (T328817)', diff saved to https://phabricator.wikimedia.org/P44481 and previous config saved to /var/cache/conftool/dbconfig/20230213-195722-marostegui.json [production]
19:56 <cmooney@cumin1001> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts ['cloudcephosd1001.eqiad.wmnet'] [production]
19:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2105', diff saved to https://phabricator.wikimedia.org/P44480 and previous config saved to /var/cache/conftool/dbconfig/20230213-195623-ladsgroup.json [production]
19:56 <cmooney@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudcephosd1001.eqiad.wmnet'] [production]
16:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2108', diff saved to https://phabricator.wikimedia.org/P44436 and previous config saved to /var/cache/conftool/dbconfig/20230213-162456-marostegui.json [production]
16:23 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host mw2438.mgmt.codfw.wmnet with reboot policy FORCED [production]
16:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P44435 and previous config saved to /var/cache/conftool/dbconfig/20230213-161824-root.json [production]
16:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2168:3318 (T328817)', diff saved to https://phabricator.wikimedia.org/P44434 and previous config saved to /var/cache/conftool/dbconfig/20230213-161605-marostegui.json [production]
16:15 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2168.codfw.wmnet with reason: Maintenance [production]
16:15 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2168.codfw.wmnet with reason: Maintenance [production]
16:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2167:3318 (T328817)', diff saved to https://phabricator.wikimedia.org/P44433 and previous config saved to /var/cache/conftool/dbconfig/20230213-161543-marostegui.json [production]
16:10 <jmm@cumin2002> END (PASS) - Cookbook sre.elasticsearch.restart-nginx (exit_code=0) rolling restart_daemons on A:cloudelastic [production]
16:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2108', diff saved to https://phabricator.wikimedia.org/P44432 and previous config saved to /var/cache/conftool/dbconfig/20230213-160950-marostegui.json [production]
16:07 <jmm@cumin2002> START - Cookbook sre.elasticsearch.restart-nginx rolling restart_daemons on A:cloudelastic [production]
16:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1130 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P44431 and previous config saved to /var/cache/conftool/dbconfig/20230213-160320-root.json [production]
16:02 <jmm@cumin2002> END (PASS) - Cookbook sre.elasticsearch.restart-nginx (exit_code=0) rolling restart_daemons on A:relforge [production]
16:02 <elukey@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts cloudcephosd1001.eqiad.wmnet [production]
16:02 <elukey@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:02 <elukey@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudcephosd1001.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - elukey@cumin1001" [production]
16:01 <jmm@cumin2002> START - Cookbook sre.elasticsearch.restart-nginx rolling restart_daemons on A:relforge [production]
16:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2167:3318', diff saved to https://phabricator.wikimedia.org/P44429 and previous config saved to /var/cache/conftool/dbconfig/20230213-160037-marostegui.json [production]