2025-06-09
07:41 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2169 (T396130)', diff saved to https://phabricator.wikimedia.org/P77213 and previous config saved to /var/cache/conftool/dbconfig/20250609-074112-marostegui.json [production]
07:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2169 (T396130)', diff saved to https://phabricator.wikimedia.org/P77211 and previous config saved to /var/cache/conftool/dbconfig/20250609-073403-marostegui.json [production]
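The depool/repool pair above follows the usual dbctl pattern: stage an instance change, then commit it to both datacenters with a message referencing the task; dbctl itself archives the diff and the previous config, as logged. A minimal sketch of that sequence for db2169/T396130 (pooling back at full weight is illustrative):

  # Stage the change locally, then commit it to etcd for dc=all.
  dbctl instance db2169 depool
  dbctl config commit -m "Depooling db2169 (T396130)"

  # After maintenance, repool and commit again.
  dbctl instance db2169 pool
  dbctl config commit -m "Repooling after maintenance db2169 (T396130)"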
07:33 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2169.codfw.wmnet with reason: Maintenance [production]
07:28 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2158.codfw.wmnet with reason: Maintenance [production]
07:23 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2151.codfw.wmnet with reason: Maintenance [production]
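These three entries are the sre.hosts.downtime cookbook silencing alerts for six hours ahead of maintenance, run from a cumin host (cumin1002 here). A hedged sketch of the invocation; the --hours/--reason option spellings are assumptions and may differ slightly from the cookbook's actual CLI:

  sudo cookbook sre.hosts.downtime --hours 6 --reason "Maintenance" 'db2169.codfw.wmnet'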
07:23 <marostegui@cumin1002> START - Cookbook sre.mysql.pool db2244 gradually with 4 steps - Pool db2244.codfw.wmnet in after cloning [production]
06:22 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.pool (exit_code=0) db2243 gradually with 4 steps - Pool db2243.codfw.wmnet in after cloning [production]
05:42 <marostegui> Add MariaDB 10.11.13 to the repo T395663 [production]
05:37 <marostegui@cumin1002> START - Cookbook sre.mysql.pool db2243 gradually with 4 steps - Pool db2243.codfw.wmnet in after cloning [production]
05:24 <marostegui@cumin1002> dbctl commit (dc=all): 'Add db2244 to dbctl depooled T393989', diff saved to https://phabricator.wikimedia.org/P77205 and previous config saved to /var/cache/conftool/dbconfig/20250609-052451-marostegui.json [production]
05:00 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.depool (exit_code=0) db2243 - Depool db2243.codfw.wmnet to then clone it to db2244.codfw.wmnet - marostegui@cumin1002 [production]
05:00 <marostegui@cumin1002> START - Cookbook sre.mysql.depool db2243 - Depool db2243.codfw.wmnet to then clone it to db2244.codfw.wmnet - marostegui@cumin1002 [production]
05:00 <marostegui@cumin1002> START - Cookbook sre.mysql.clone of db2243.codfw.wmnet onto db2244.codfw.wmnet [production]
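Read bottom-up, the db2243/db2244 entries trace the usual clone-and-pool cycle: depool the source, clone it onto the new host, register the new host in dbctl as depooled, then pool both hosts back gradually. A hedged sketch of that order, using the cookbook names from the log (exact arguments and option names are assumptions):

  sudo cookbook sre.mysql.depool db2243                          # take the source out of traffic before copying data off it
  sudo cookbook sre.mysql.clone db2243.codfw.wmnet db2244.codfw.wmnet   # copy the MariaDB data onto the new host
  # Register db2244 in dbctl in a depooled state and commit (the 05:24 dbctl commit above).
  sudo cookbook sre.mysql.pool db2243                            # repool gradually, in 4 steps per the log
  sudo cookbook sre.mysql.pool db2244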
2025-06-06
21:33 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
21:25 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
21:19 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
21:15 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
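These lines are logged automatically when a deployer runs helmfile against the aux-k8s-eqiad cluster from the deployment host (deploy1003). A minimal sketch of the underlying command, assuming the usual deployment-charts layout; the exact directory for the zarcillo release is an assumption:

  cd /srv/deployment-charts/helmfile.d/services/zarcillo   # assumed location of the zarcillo helmfile
  helmfile -e aux-k8s-eqiad sync                            # syncs the 'main' release in the zarcillo namespace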
21:02 <bking@cumin2002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 10 days, 0:00:00 on relforge[1003-1004].eqiad.wmnet with reason: downtime before decom [production]
20:42 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be2007.codfw.wmnet with reason: host reimage [production]
20:40 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
20:38 <jhancock@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be2007.codfw.wmnet with reason: host reimage [production]
20:35 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
20:16 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
20:15 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be2007.codfw.wmnet with OS bullseye [production]
20:06 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be2006.codfw.wmnet with reason: host reimage [production]
20:03 <jhancock@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be2006.codfw.wmnet with reason: host reimage [production]
19:45 <vriley@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host an-worker1185.eqiad.wmnet with OS bullseye [production]
19:31 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be2006.codfw.wmnet with OS bullseye [production]
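The thanos-be reimages, and the two-hour "host reimage" downtimes scheduled around them, all come from the sre.hosts.reimage cookbook. A hedged sketch of the invocation from a cumin host; the target OS matches the log, while the rest of the syntax is an assumption:

  sudo cookbook sre.hosts.reimage --os bullseye thanos-be2007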
19:11 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_eqiad: T383811 - bking@cumin2002 [production]
19:01 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2244.codfw.wmnet with OS bookworm [production]
19:01 <jhancock@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jhancock@cumin2002" [production]
18:25 <vriley@cumin1002> START - Cookbook sre.hosts.reimage for host an-worker1185.eqiad.wmnet with OS bullseye [production]
18:06 <jhancock@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jhancock@cumin2002" [production]
17:49 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2244.codfw.wmnet with reason: host reimage [production]
17:46 <jhancock@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2244.codfw.wmnet with reason: host reimage [production]
17:29 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host db2244.codfw.wmnet with OS bookworm [production]
17:29 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts ['db2244'] [production]
17:29 <jhancock@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['db2244'] [production]
17:20 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_eqiad: T383811 - bking@cumin2002 [production]
17:10 <jhancock@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host db2244.mgmt.codfw.wmnet with chassis set policy FORCE_RESTART and with Dell SCP reboot policy FORCED [production]
17:08 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_eqiad: T383811 - bking@cumin2002 [production]
17:06 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_eqiad: T383811 - bking@cumin2002 [production]
17:00 <sukhe> forced agent run on O:alerting_host to reload vopsbot to add cdobbins [production]
16:57 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host db2244.mgmt.codfw.wmnet with chassis set policy FORCE_RESTART and with Dell SCP reboot policy FORCED [production]
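Read bottom-up, the jhancock@cumin2002 entries show the bring-up of db2244 that precedes the 2025-06-09 clone above: provision the BIOS/iDRAC through the mgmt interface, attempt a firmware upgrade (which exited with code 1), then reimage with bookworm. A hedged sketch of that order using the cookbook names from the log; option spellings and positional arguments are assumptions:

  sudo cookbook sre.hosts.provision db2244                 # configures the hardware via db2244.mgmt.codfw.wmnet
  sudo cookbook sre.hardware.upgrade-firmware db2244       # failed in the log (exit_code=1); the reimage went ahead regardless
  sudo cookbook sre.hosts.reimage --os bookworm db2244     # fresh bookworm install, with its own 2h downtime and netbox hiera sync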