2024-06-10
12:05 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2018.codfw.wmnet [production]
12:04 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2018.codfw.wmnet [production]
11:58 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2204 (re)pooling @ 50%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64530 and previous config saved to /var/cache/conftool/dbconfig/20240610-115834-arnaudb.json [production]
11:56 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2018.codfw.wmnet [production]
11:53 <oblivian@deploy1002> Started scap: Deploying change to base mediawiki image (take 2) [production]
11:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1180 (T364069)', diff saved to https://phabricator.wikimedia.org/P64528 and previous config saved to /var/cache/conftool/dbconfig/20240610-114957-marostegui.json [production]
11:49 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1180.eqiad.wmnet with reason: Maintenance [production]
11:49 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1180.eqiad.wmnet with reason: Maintenance [production]
11:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1173 (T364069)', diff saved to https://phabricator.wikimedia.org/P64527 and previous config saved to /var/cache/conftool/dbconfig/20240610-114934-marostegui.json [production]
11:49 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1016.eqiad.wmnet [production]
11:48 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1016.eqiad.wmnet [production]
11:44 <oblivian@deploy1002> sync-world aborted: Deploying change to base mediawiki image (duration: 10m 21s) [production]
11:43 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2018.codfw.wmnet [production]
11:43 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2204 (re)pooling @ 25%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64526 and previous config saved to /var/cache/conftool/dbconfig/20240610-114329-arnaudb.json [production]
11:43 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1016.eqiad.wmnet [production]
11:39 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub: sync on production [production]
11:36 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub: apply on production [production]
11:36 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2017.codfw.wmnet [production]
11:36 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2017.codfw.wmnet [production]
11:35 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub-next: sync on staging [production]
11:34 <oblivian@deploy1002> Started scap: Deploying change to base mediawiki image [production]
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1173', diff saved to https://phabricator.wikimedia.org/P64525 and previous config saved to /var/cache/conftool/dbconfig/20240610-113426-marostegui.json [production]
11:34 <oblivian@deploy1002> Unlocked for deployment [ALL REPOSITORIES]: setting global lock while working on mw-on-k8s --joe. Ping me if you need urgent deployments (duration: 10m 22s) [production]
11:32 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub-next: apply on staging [production]
11:29 <fabfur> restarting pybal on lvs6003,lvs6001 to apply https://gerrit.wikimedia.org/r/c/operations/puppet/+/1039947 (T366466) [production]
11:28 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1016.eqiad.wmnet [production]
11:28 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2204 (re)pooling @ 10%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64524 and previous config saved to /var/cache/conftool/dbconfig/20240610-112821-arnaudb.json [production]
11:28 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2017.codfw.wmnet [production]
11:26 <fabfur> enabling && running puppet on A:lvs-drmrs to apply https://gerrit.wikimedia.org/r/c/operations/puppet/+/1039947 (T366466) [production]
11:25 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1015.eqiad.wmnet [production]
11:25 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1015.eqiad.wmnet [production]
11:23 <oblivian@deploy1002> Locking from deployment [ALL REPOSITORIES]: setting global lock while working on mw-on-k8s --joe. Ping me if you need urgent deployments [production]
11:19 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
11:19 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1015.eqiad.wmnet [production]
11:19 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1173', diff saved to https://phabricator.wikimedia.org/P64523 and previous config saved to /var/cache/conftool/dbconfig/20240610-111917-marostegui.json [production]
11:19 <oblivian@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
11:19 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
11:18 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
11:13 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2204 (re)pooling @ 5%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64522 and previous config saved to /var/cache/conftool/dbconfig/20240610-111315-arnaudb.json [production]
10:47 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cloudgw1002.eqiad.wmnet [production]
10:43 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2204 (re)pooling @ 1%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64519 and previous config saved to /var/cache/conftool/dbconfig/20240610-104303-arnaudb.json [production]
10:41 <taavi@cumin1002> START - Cookbook sre.hosts.reboot-single for host cloudgw1002.eqiad.wmnet [production]
10:41 <fabfur> depooling text@drmrs to apply IPIP encapsulation patches (T366466) [production]
10:34 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2016.codfw.wmnet [production]
10:34 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2016.codfw.wmnet [production]
10:27 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2204.codfw.wmnet with reason: Maintenance [production]
10:27 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2204.codfw.wmnet with reason: Maintenance [production]
10:26 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2016.codfw.wmnet [production]
10:25 <isaranto@deploy1002> helmfile [ml-staging-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
10:25 <arnaudb@cumin1002> dbctl commit (dc=all): 'Depool db2204 T367019', diff saved to https://phabricator.wikimedia.org/P64518 and previous config saved to /var/cache/conftool/dbconfig/20240610-102511-arnaudb.json [production]