2022-02-02
15:25 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2121.codfw.wmnet with reason: Maintenance [production]
15:25 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1127 (T300402)', diff saved to https://phabricator.wikimedia.org/P19970 and previous config saved to /var/cache/conftool/dbconfig/20220202-152552-marostegui.json [production]
15:19 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host ganeti2029.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:16 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ganeti2029.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1127', diff saved to https://phabricator.wikimedia.org/P19969 and previous config saved to /var/cache/conftool/dbconfig/20220202-151047-marostegui.json [production]
15:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 100%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19968 and previous config saved to /var/cache/conftool/dbconfig/20220202-150832-root.json [production]
15:00 <XioNoX> esams: push Capirca generated loopback filters [production]
14:59 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host ganeti2029.mgmt.codfw.wmnet with reboot policy FORCED [production]
14:55 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1127', diff saved to https://phabricator.wikimedia.org/P19967 and previous config saved to /var/cache/conftool/dbconfig/20220202-145542-marostegui.json [production]
14:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 75%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19966 and previous config saved to /var/cache/conftool/dbconfig/20220202-145329-root.json [production]
14:47 <jayme@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:44 <XioNoX> codfw: push Capirca generated loopback filters [production]
14:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1127 (T300402)', diff saved to https://phabricator.wikimedia.org/P19965 and previous config saved to /var/cache/conftool/dbconfig/20220202-144038-marostegui.json [production]
14:39 <jayme@cumin1001> START - Cookbook sre.dns.netbox [production]
14:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 50%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19963 and previous config saved to /var/cache/conftool/dbconfig/20220202-143825-root.json [production]
14:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1127 (T300402)', diff saved to https://phabricator.wikimedia.org/P19962 and previous config saved to /var/cache/conftool/dbconfig/20220202-143221-marostegui.json [production]
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1127.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1127.eqiad.wmnet with reason: Maintenance [production]
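Note: the 14:32 db1127 entries above are the standard depool-for-maintenance sequence. A minimal sketch of the equivalent commands from a cumin host, with flag and subcommand names assumed from the public spicerack and dbctl documentation rather than taken from this log:
  # silence alerts for the host for 6 hours (cookbook name as logged; flags assumed)
  sudo cookbook sre.hosts.downtime --hours 6 --reason "Maintenance" db1127.eqiad.wmnet
  # take the replica out of rotation and commit the new config (dbctl syntax assumed)
  sudo dbctl instance db1127 depool
  sudo dbctl config commit -m "Depooling db1127 (T300402)"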
14:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3317 (T300402)', diff saved to https://phabricator.wikimedia.org/P19961 and previous config saved to /var/cache/conftool/dbconfig/20220202-143214-marostegui.json [production]
14:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 25%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19960 and previous config saved to /var/cache/conftool/dbconfig/20220202-142321-root.json [production]
14:21 <XioNoX> eqsin: push Capirca generated loopback filters [production]
14:19 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:18 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:18 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3317', diff saved to https://phabricator.wikimedia.org/P19959 and previous config saved to /var/cache/conftool/dbconfig/20220202-141709-marostegui.json [production]
14:16 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:15 <XioNoX> cr2-eqdfw: push Capirca generated loopback filters [production]
14:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove weight from es1020 - as it is the master', diff saved to https://phabricator.wikimedia.org/P19958 and previous config saved to /var/cache/conftool/dbconfig/20220202-141455-marostegui.json [production]
14:13 <vgutierrez> pool cp1087 running envoy as TLS terminator - T271421 [production]
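Note: a "pool" action like the cp1087 entry above is a conftool state change. A minimal sketch, assuming confctl is driven directly and that matching on the node name is enough; real selectors typically also name the specific service being pooled:
  # mark the host as pooled in etcd via conftool (selector fields assumed)
  sudo confctl select 'name=cp1087.eqiad.wmnet' set/pooled=yes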
14:09 <XioNoX> cr2-eqord: push Capirca generated loopback filters [production]
14:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 10%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19957 and previous config saved to /var/cache/conftool/dbconfig/20220202-140818-root.json [production]
14:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1179 schema change', diff saved to https://phabricator.wikimedia.org/P19956 and previous config saved to /var/cache/conftool/dbconfig/20220202-140317-marostegui.json [production]
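Note: the db1179 entries above show the usual schema-change pattern: depool, apply the change, then repool in steps (10% → 25% → 50% → 75% → 100%). One step of that ramp would look roughly like the following; the dbctl subcommands and the -p percentage flag are assumed from the public dbctl documentation:
  # restore part of the instance's weight, then commit the config change
  sudo dbctl instance db1179 pool -p 25
  sudo dbctl config commit -m "db1179 (re)pooling @ 25%: repooling after schema change"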
14:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1166 (re)pooling @ 100%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19955 and previous config saved to /var/cache/conftool/dbconfig/20220202-140239-root.json [production]
14:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3317', diff saved to https://phabricator.wikimedia.org/P19954 and previous config saved to /var/cache/conftool/dbconfig/20220202-140204-marostegui.json [production]
13:50 <elukey> move docker on ml-serve-ctrl* nodes from device mapper to overlay2 [production]
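Note: switching Docker's storage driver, as in the ml-serve-ctrl entry above, is normally a daemon.json change plus a daemon restart; existing images and containers do not carry over between devicemapper and overlay2. A minimal sketch using standard Docker configuration, not taken from this log:
  # set the storage driver (this overwrites any existing daemon.json)
  echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
  # restart the daemon so it re-initialises /var/lib/docker with overlay2
  sudo systemctl restart docker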
13:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1166 (re)pooling @ 75%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P19953 and previous config saved to /var/cache/conftool/dbconfig/20220202-134735-root.json [production]
13:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3317 (T300402)', diff saved to https://phabricator.wikimedia.org/P19952 and previous config saved to /var/cache/conftool/dbconfig/20220202-134659-marostegui.json [production]
13:40 <XioNoX> ULSFO routers: push Capirca generated loopback filters [production]
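Note: the recurring "push Capirca generated loopback filters" entries refer to router ACLs generated with Capirca and then deployed to the devices. Purely as an illustration, upstream Capirca's aclgen renders a policy file into vendor-specific filters roughly like this; the file names are hypothetical and the actual push to the routers is done by WMF's network automation, which this log does not show:
  # render a hypothetical loopback.pol policy into vendor ACLs (upstream aclgen CLI)
  aclgen --definitions_directory=def \
         --policy_file=policies/pol/loopback.pol \
         --output_directory=filters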
13:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3317 (T300402)', diff saved to https://phabricator.wikimedia.org/P19951 and previous config saved to /var/cache/conftool/dbconfig/20220202-133713-marostegui.json [production]
13:37 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
13:37 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
13:35 <otto@deploy1002> helmfile [eqiad] DONE helmfile.d/services/eventgate-main: sync on production [production]
13:34 <otto@deploy1002> helmfile [eqiad] DONE helmfile.d/services/eventgate-main: sync on canary [production]
13:34 <otto@deploy1002> helmfile [eqiad] START helmfile.d/services/eventgate-main: sync on production [production]
13:34 <otto@deploy1002> helmfile [eqiad] START helmfile.d/services/eventgate-main: sync on canary [production]
13:33 <otto@deploy1002> helmfile [codfw] DONE helmfile.d/services/eventgate-main: sync on canary [production]
13:33 <otto@deploy1002> helmfile [codfw] DONE helmfile.d/services/eventgate-main: sync on production [production]
13:32 <otto@deploy1002> helmfile [codfw] START helmfile.d/services/eventgate-main: sync on production [production]
13:32 <otto@deploy1002> helmfile [codfw] START helmfile.d/services/eventgate-main: sync on canary [production]
13:32 <ottomata> roll restarting eventgate-main to pick up stream-configs for rdf-streaming-updater.reconcile [production]
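Note: the eventgate-main entries above are a helmfile-driven roll restart, syncing the canary and production releases in codfw and then in eqiad. A sketch of roughly what runs on the deploy host for one datacenter; the chart path is an assumption (the log only shows the relative helmfile.d/services/eventgate-main), and -e / --selector are standard helmfile flags:
  cd /srv/deployment-charts/helmfile.d/services/eventgate-main   # path assumed
  helmfile -e codfw --selector name=canary apply       # canary release
  helmfile -e codfw --selector name=production apply   # production release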