2022-11-29
16:14 <robh@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: eqsin hosts - robh@cumin2002" [production]
16:14 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
16:14 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 28s) [production]
16:13 <robh@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: eqsin hosts - robh@cumin2002" [production]
16:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184', diff saved to https://phabricator.wikimedia.org/P41751 and previous config saved to /var/cache/conftool/dbconfig/20221129-161329-marostegui.json [production]
16:12 <oblivian@cumin1001> conftool action : set/pooled=yes; selector: dc=eqiad,name=mw14(89|9).* [production]
16:11 <robh@cumin2002> START - Cookbook sre.dns.netbox [production]
16:09 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 35s) [production]
16:09 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159', diff saved to https://phabricator.wikimedia.org/P41750 and previous config saved to /var/cache/conftool/dbconfig/20221129-160907-ladsgroup.json [production]
16:08 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
16:07 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
16:06 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
16:04 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 36s) [production]
16:03 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
16:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 75%: Maint done', diff saved to https://phabricator.wikimedia.org/P41749 and previous config saved to /var/cache/conftool/dbconfig/20221129-160059-ladsgroup.json [production]
15:58 <oblivian@cumin1001> conftool action : set/pooled=no; selector: dc=eqiad,name=mw14(89|9).* [production]
15:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184', diff saved to https://phabricator.wikimedia.org/P41748 and previous config saved to /var/cache/conftool/dbconfig/20221129-155822-marostegui.json [production]
15:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2159 (T323907)', diff saved to https://phabricator.wikimedia.org/P41747 and previous config saved to /var/cache/conftool/dbconfig/20221129-155401-ladsgroup.json [production]
15:47 <pt1979@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['db1204'] [production]
15:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 25%: Maint done', diff saved to https://phabricator.wikimedia.org/P41746 and previous config saved to /var/cache/conftool/dbconfig/20221129-154554-ladsgroup.json [production]
15:45 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db1204'] [production]
15:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184 (T321126)', diff saved to https://phabricator.wikimedia.org/P41745 and previous config saved to /var/cache/conftool/dbconfig/20221129-154316-marostegui.json [production]
15:42 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['db1204'] [production]
15:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1184 (T321126)', diff saved to https://phabricator.wikimedia.org/P41744 and previous config saved to /var/cache/conftool/dbconfig/20221129-154055-marostegui.json [production]
15:40 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1184.eqiad.wmnet with reason: Maintenance [production]
15:40 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1184.eqiad.wmnet with reason: Maintenance [production]
15:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169 (T321126)', diff saved to https://phabricator.wikimedia.org/P41743 and previous config saved to /var/cache/conftool/dbconfig/20221129-154033-marostegui.json [production]
15:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1123 (re)pooling @ 10%: Maint done', diff saved to https://phabricator.wikimedia.org/P41742 and previous config saved to /var/cache/conftool/dbconfig/20221129-153049-ladsgroup.json [production]
15:25 <Emperor> set thanos ring replicas to 3.0 T311690 [production]
15:25 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169', diff saved to https://phabricator.wikimedia.org/P41741 and previous config saved to /var/cache/conftool/dbconfig/20221129-152526-marostegui.json [production]
15:20 <pt1979@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['db1205'] [production]
15:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2159 (T323907)', diff saved to https://phabricator.wikimedia.org/P41740 and previous config saved to /var/cache/conftool/dbconfig/20221129-151647-ladsgroup.json [production]
15:16 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
15:16 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
15:16 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
15:16 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
15:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T323907)', diff saved to https://phabricator.wikimedia.org/P41739 and previous config saved to /var/cache/conftool/dbconfig/20221129-151609-ladsgroup.json [production]
15:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1169', diff saved to https://phabricator.wikimedia.org/P41737 and previous config saved to /var/cache/conftool/dbconfig/20221129-151020-marostegui.json [production]
15:07 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on an-worker1089.eqiad.wmnet with reason: replacing RAID controller battery [production]
15:06 <btullis@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on an-worker1089.eqiad.wmnet with reason: replacing RAID controller battery [production]
15:03 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db1205'] [production]
15:03 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db1204'] [production]
15:02 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
15:01 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
15:01 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
15:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P41735 and previous config saved to /var/cache/conftool/dbconfig/20221129-150103-ladsgroup.json [production]
15:00 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
15:00 <hnowlan> removing /srv/cassandra on all maps hosts [production]
15:00 <oblivian@cumin1001> conftool action : set/pooled=inactive; selector: dc=eqiad,name=mw14(89|9).* [production]
14:58 <oblivian@deploy1002> Synchronized wmf-config/reverse-proxy.php: test deployment (duration: 04m 13s) [production]