2024-10-23
08:07 <jelto@cumin1002> START - Cookbook sre.hosts.move-vlan for host gitlab-runner2004 [production]
08:07 <jelto@cumin1002> START - Cookbook sre.hosts.reimage for host gitlab-runner2004.codfw.wmnet with OS bullseye [production]
08:06 <jmm@cumin2002> START - Cookbook sre.cassandra.roll-restart for nodes matching A:cassandra-dev: new JDK - jmm@cumin2002 [production]
08:02 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1039.eqiad.wmnet [production]
07:56 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 10 hosts with reason: reboot [production]
07:56 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on 10 hosts with reason: reboot [production]
07:52 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host gitlab-runner2003.codfw.wmnet with OS bullseye [production]
07:35 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on gitlab-runner2003.codfw.wmnet with reason: host reimage [production]
07:33 <moritzm> installing perf updates on bookworm nodes [production]
07:32 <jelto@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on gitlab-runner2003.codfw.wmnet with reason: host reimage [production]
07:24 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2012.codfw.wmnet [production]
07:24 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.changedisk (exit_code=0) for changing disk type of ml-etcd2002.codfw.wmnet to plain [production]
07:23 <jmm@cumin2002> START - Cookbook sre.ganeti.changedisk for changing disk type of ml-etcd2002.codfw.wmnet to plain [production]
07:23 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2012.codfw.wmnet [production]
07:22 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2012.codfw.wmnet [production]
07:22 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.changedisk (exit_code=0) for changing disk type of ml-etcd2002.codfw.wmnet to drbd [production]
07:15 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.move-vlan (exit_code=0) for host gitlab-runner2003 [production]
07:15 <jelto@cumin1002> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host gitlab-runner2003 [production]
07:15 <jelto@cumin1002> START - Cookbook sre.network.configure-switch-interfaces for host gitlab-runner2003 [production]
07:15 <jelto@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) gitlab-runner2003.codfw.wmnet 93.32.192.10.in-addr.arpa 3.9.0.0.2.3.0.0.2.9.1.0.0.1.0.0.3.0.1.0.0.6.8.0.0.0.0.0.0.2.6.2.ip6.arpa on all recursors [production]
07:15 <jelto@cumin1002> START - Cookbook sre.dns.wipe-cache gitlab-runner2003.codfw.wmnet 93.32.192.10.in-addr.arpa 3.9.0.0.2.3.0.0.2.9.1.0.0.1.0.0.3.0.1.0.0.6.8.0.0.0.0.0.0.2.6.2.ip6.arpa on all recursors [production]
07:15 <jelto@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
07:15 <jelto@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Update records for host gitlab-runner2003 - jelto@cumin1002" [production]
07:15 <jelto@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Update records for host gitlab-runner2003 - jelto@cumin1002" [production]
07:12 <jmm@cumin2002> START - Cookbook sre.ganeti.changedisk for changing disk type of ml-etcd2002.codfw.wmnet to drbd [production]
07:11 <jelto@cumin1002> START - Cookbook sre.dns.netbox [production]
07:11 <jelto@cumin1002> START - Cookbook sre.hosts.move-vlan for host gitlab-runner2003 [production]
07:10 <jelto@cumin1002> START - Cookbook sre.hosts.reimage for host gitlab-runner2003.codfw.wmnet with OS bullseye [production]
06:48 <kart_> Updated cxserver to 2024-10-23-055433-production [production]
06:47 <kartik@deploy2002> helmfile [eqiad] DONE helmfile.d/services/cxserver: apply [production]
06:47 <kartik@deploy2002> helmfile [eqiad] START helmfile.d/services/cxserver: apply [production]
06:45 <kartik@deploy2002> helmfile [codfw] DONE helmfile.d/services/cxserver: apply [production]
06:44 <kartik@deploy2002> helmfile [codfw] START helmfile.d/services/cxserver: apply [production]
06:44 <kartik@deploy2002> helmfile [staging] DONE helmfile.d/services/cxserver: apply [production]
06:44 <kartik@deploy2002> helmfile [staging] START helmfile.d/services/cxserver: apply [production]
06:38 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2012.codfw.wmnet [production]
06:35 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2012.codfw.wmnet [production]
05:20 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host bast3007.wikimedia.org [production]
05:15 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host bast3007.wikimedia.org [production]
04:18 <eileen> civicrm upgraded from de642bea to ce44ce45 [production]
00:01 <ejegg> fundraising civicrm upgraded from 5463f37b to de642bea [production]
2024-10-22
23:32 <ejegg> fundraising civicrm upgraded from d9e85c3d to 5463f37b [production]
22:59 <ejegg> fundraising civicrm upgraded from 36660cb3 to d9e85c3d [production]
22:38 <ladsgroup@cumin1002> dbctl commit (dc=all): 'db1211 (re)pooling @ 100%: Maint over', diff saved to https://phabricator.wikimedia.org/P70562 and previous config saved to /var/cache/conftool/dbconfig/20241022-223858-ladsgroup.json [production]
22:23 <ladsgroup@cumin1002> dbctl commit (dc=all): 'db1211 (re)pooling @ 75%: Maint over', diff saved to https://phabricator.wikimedia.org/P70561 and previous config saved to /var/cache/conftool/dbconfig/20241022-222352-ladsgroup.json [production]
22:11 <zabe@deploy2002> Finished scap sync-world: Backport for [[gerrit:1082278|s1: Reduce revision-slots cache expiry to 60 seconds (T183490)]] (duration: 07m 17s) [production]
22:08 <ladsgroup@cumin1002> dbctl commit (dc=all): 'db1211 (re)pooling @ 25%: Maint over', diff saved to https://phabricator.wikimedia.org/P70560 and previous config saved to /var/cache/conftool/dbconfig/20241022-220847-ladsgroup.json [production]
22:07 <zabe@deploy2002> zabe: Continuing with sync [production]
22:06 <zabe@deploy2002> zabe: Backport for [[gerrit:1082278|s1: Reduce revision-slots cache expiry to 60 seconds (T183490)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
22:03 <zabe@deploy2002> Started scap sync-world: Backport for [[gerrit:1082278|s1: Reduce revision-slots cache expiry to 60 seconds (T183490)]] [production]