2022-11-29
15:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T323907)', diff saved to https://phabricator.wikimedia.org/P41739 and previous config saved to /var/cache/conftool/dbconfig/20221129-151609-ladsgroup.json [production]
15:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169', diff saved to https://phabricator.wikimedia.org/P41737 and previous config saved to /var/cache/conftool/dbconfig/20221129-151020-marostegui.json [production]
15:07 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on an-worker1089.eqiad.wmnet with reason: replacing RAID controller battery [production]
15:06 <btullis@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on an-worker1089.eqiad.wmnet with reason: replacing RAID controller battery [production]
15:03 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db1205'] [production]
15:03 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db1204'] [production]
15:02 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
15:01 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
15:01 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
15:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P41735 and previous config saved to /var/cache/conftool/dbconfig/20221129-150103-ladsgroup.json [production]
15:00 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
15:00 <hnowlan> removing /srv/cassandra on all maps hosts [production]
15:00 <oblivian@cumin1001> conftool action : set/pooled=inactive; selector: dc=eqiad,name=mw14(89|9).* [production]
14:58 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 13s) [production]
14:55 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
14:55 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169 (T321126)', diff saved to https://phabricator.wikimedia.org/P41734 and previous config saved to /var/cache/conftool/dbconfig/20221129-145513-marostegui.json [production]
14:54 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on 6 hosts with reason: replacing RAID controller battery [production]
14:54 <btullis@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on 6 hosts with reason: replacing RAID controller battery [production]
14:51 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
14:51 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
14:51 <taavi@deploy1002> Finished scap: testing a scap sync (duration: 05m 17s) [production]
14:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1169 (T321126)', diff saved to https://phabricator.wikimedia.org/P41732 and previous config saved to /var/cache/conftool/dbconfig/20221129-144952-marostegui.json [production]
14:49 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
14:49 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
14:49 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
14:49 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
14:49 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:49 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:49 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
14:48 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
14:48 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
14:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1135 (T321126)', diff saved to https://phabricator.wikimedia.org/P41731 and previous config saved to /var/cache/conftool/dbconfig/20221129-144831-marostegui.json [production]
14:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P41730 and previous config saved to /var/cache/conftool/dbconfig/20221129-144556-ladsgroup.json [production]
14:45 <taavi@deploy1002> Started scap: testing a scap sync [production]
14:43 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host db1205.mgmt.eqiad.wmnet with reboot policy FORCED [production]
14:43 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host db1204.mgmt.eqiad.wmnet with reboot policy FORCED [production]
14:37 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:37 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 10:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:35 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:34 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:33 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1135', diff saved to https://phabricator.wikimedia.org/P41729 and previous config saved to /var/cache/conftool/dbconfig/20221129-143324-marostegui.json [production]
14:33 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1123.eqiad.wmnet with reason: Maintenance [production]
14:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T323907)', diff saved to https://phabricator.wikimedia.org/P41728 and previous config saved to /var/cache/conftool/dbconfig/20221129-143049-ladsgroup.json [production]
14:29 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
14:28 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
14:28 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
14:27 <taavi@deploy1002> Finished scap: re-syncing the backport to see if the errors fix themself (duration: 04m 58s) [production]
14:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
14:22 <taavi@deploy1002> Started scap: re-syncing the backport to see if the errors fix themself [production]