2024-05-20
ยง
|
13:24 <jhancock@cumin2002> START - Cookbook sre.dns.netbox [production]
13:24 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host sretest2002.mgmt.codfw.wmnet with reboot policy FORCED [production]
13:23 <reedy@deploy1002> Synchronized wmf-config/: T360989 T365323 (duration: 15m 35s) [production]
13:22 <hnowlan> migrating 80% of commons traffic to k8s [production]
13:19 <topranks> adding outbound ACL on irb.2002 on lsw1 switches in codfw to test DHCP function T365204 [production]
13:18 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host sretest2002.mgmt.codfw.wmnet with reboot policy FORCED [production]
13:18 <marostegui@cumin1002> dbctl commit (dc=all): 'db2181 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P62689 and previous config saved to /var/cache/conftool/dbconfig/20240520-131803-root.json [production]
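
Note on the 'dbctl commit' entries in this log: db2181 is depooled (11:02 below), reimaged to bookworm, then brought back in stages (1%, 5%, 10% here; the same pattern for db2175 from 10% to 100% further down). A minimal sketch of that depool/staged-repool workflow with conftool's dbctl follows; the flag names and the loop are assumptions recalled from the dbctl documentation, not the exact commands that were run, so check dbctl --help before relying on them.

  # Hedged sketch only: staged repool of a database host with dbctl (assumed flags).
  host=db2181
  task=T363792
  dbctl instance "$host" depool
  dbctl config commit -m "Depool $host $task"      # each commit emits a SAL line like the ones above
  # ... reimage / maintenance happens here ...
  for pct in 1 5 10 25 50 75 100; do
      dbctl instance "$host" pool -p "$pct"
      dbctl config commit -m "$host (re)pooling @ ${pct}%: Repooling"
      sleep 900                                     # let traffic and replication settle between steps
  done
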
13:17 <vgutierrez> depool upload@eqsin before enabling IPIP encapsulation - T357257 [production]
13:11 <vgutierrez> Re-enable puppet on A:ncredir && A:cp-upload_ulsfo - T365354 [production]
13:04 <Emperor> depool, restart swift-proxy, repool moss-fe1001 as ~12% connection failures reported by envoy since late 14th May T360913 [production]
13:02 <marostegui@cumin1002> dbctl commit (dc=all): 'db2181 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P62688 and previous config saved to /var/cache/conftool/dbconfig/20240520-130257-root.json [production]
12:59 <akosiaris@cumin1002> START - Cookbook sre.hosts.reimage for host kafka-main1006.eqiad.wmnet with OS bullseye [production]
12:54 <vgutierrez> disable puppet on A:ncredir && A:cp-upload_ulsfo before merging https://gerrit.wikimedia.org/r/c/operations/puppet/+/1034074 - T365354 [production]
12:52 <marostegui> Deploy schema change on s7 (only frwiktionary) eqiad with replication dbmaint T365352 [production]
12:48 <cmooney@cumin1002> START - Cookbook sre.hosts.reimage for host sretest2002.wikimedia.org with OS bookworm [production]
12:47 <marostegui@cumin1002> dbctl commit (dc=all): 'db2181 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P62687 and previous config saved to /var/cache/conftool/dbconfig/20240520-124749-root.json [production]
12:46 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
12:46 <cmooney@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Change mgmt dns for sretest2002 - cmooney@cumin1002" [production]
12:45 <cmooney@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Change mgmt dns for sretest2002 - cmooney@cumin1002" [production]
12:44 <cmooney@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) sretest2002.mgmt.codfw.wmnet on all recursors [production]
12:44 <cmooney@cumin1002> START - Cookbook sre.dns.wipe-cache sretest2002.mgmt.codfw.wmnet on all recursors [production]
12:22 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2181.codfw.wmnet with OS bookworm [production]
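
Note on the sre.hosts.reimage entries (db2181 and db2175 here, kafka-main1006 and sretest2002 above): these are Spicerack cookbooks run from a cumin host. A hedged sketch of the invocation follows, assuming the usual --os and -t (task) options and a short hostname argument; the cookbook's --help is authoritative.

  # Hedged sketch: reimage a host to Debian bookworm from a cumin host (assumed flags).
  sudo cookbook sre.hosts.reimage --os bookworm -t T363792 db2181
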
12:01 <marostegui> Deploy schema change on s4 eqiad with replication dbmaint T365352 [production]
11:59 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2181.codfw.wmnet with reason: host reimage [production]
11:56 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2181.codfw.wmnet with reason: host reimage [production]
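
Note on the sre.hosts.downtime entries: the 2:00:00 windows with reason "host reimage" appear to be opened automatically by the reimage cookbook, while the "Migration to bookworm" windows further down look like manual runs. A hedged sketch of a manual invocation follows; the option names are assumptions, so confirm them with the cookbook's --help.

  # Hedged sketch: silence alerts for a host ahead of maintenance (assumed flags).
  sudo cookbook sre.hosts.downtime --hours 2 --reason "Migration to bookworm" 'db2181.codfw.wmnet'
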
11:56 <marostegui> Deploy schema change on s5 eqiad with replication dbmaint T365352 [production]
11:47 <marostegui> Deploy urgent schema change on s8 eqiad with replication dbmaint T365352 [production]
11:40 <hnowlan> migrating 30% of commons traffic to k8s [production]
11:38 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db2181.codfw.wmnet with OS bookworm [production]
11:30 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 100%: After reimage', diff saved to https://phabricator.wikimedia.org/P62685 and previous config saved to /var/cache/conftool/dbconfig/20240520-113038-root.json [production]
11:22 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2181.codfw.wmnet with reason: Migration to bookworm [production]
11:22 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2181.codfw.wmnet with reason: Migration to bookworm [production]
11:15 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.ipmi-password-reset (exit_code=0) [production]
11:15 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 75%: After reimage', diff saved to https://phabricator.wikimedia.org/P62684 and previous config saved to /var/cache/conftool/dbconfig/20240520-111530-root.json [production]
11:15 <marostegui@cumin1002> Updating IPMI password on 1 hosts - marostegui@cumin1002 [production]
11:14 <marostegui@cumin1002> START - Cookbook sre.hosts.ipmi-password-reset [production]
11:14 <marostegui@cumin1002> END (FAIL) - Cookbook sre.hosts.ipmi-password-reset (exit_code=99) [production]
11:14 <marostegui@cumin1002> START - Cookbook sre.hosts.ipmi-password-reset [production]
11:03 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2181.codfw.wmnet with reason: Migration to bookworm [production]
11:02 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2181.codfw.wmnet with reason: Migration to bookworm [production]
11:02 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2181 T363792', diff saved to https://phabricator.wikimedia.org/P62682 and previous config saved to /var/cache/conftool/dbconfig/20240520-110217-root.json [production]
11:00 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 50%: After reimage', diff saved to https://phabricator.wikimedia.org/P62681 and previous config saved to /var/cache/conftool/dbconfig/20240520-110023-root.json [production]
10:46 <Dreamy_Jazz> Restarting MediaModeration scanning script - https://wikitech.wikimedia.org/wiki/MediaModeration [production]
10:45 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 25%: After reimage', diff saved to https://phabricator.wikimedia.org/P62680 and previous config saved to /var/cache/conftool/dbconfig/20240520-104517-root.json [production]
10:31 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2175.codfw.wmnet with OS bookworm [production]
10:30 <marostegui@cumin1002> dbctl commit (dc=all): 'db2175 (re)pooling @ 10%: After reimage', diff saved to https://phabricator.wikimedia.org/P62679 and previous config saved to /var/cache/conftool/dbconfig/20240520-103011-root.json [production]
10:18 <godog> bounce prometheus@k8s in eqiad - T343529 [production]
10:08 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db2175.codfw.wmnet with reason: host reimage [production]
10:05 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2175.codfw.wmnet with reason: host reimage [production]
09:57 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2182 (T352010)', diff saved to https://phabricator.wikimedia.org/P62678 and previous config saved to /var/cache/conftool/dbconfig/20240520-095729-ladsgroup.json [production]