2024-04-14
11:17 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 6 hosts with reason: Investigating [production]
11:17 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on 6 hosts with reason: Investigating [production]
2024-04-13
23:39 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1244 (T356166)', diff saved to https://phabricator.wikimedia.org/P60479 and previous config saved to /var/cache/conftool/dbconfig/20240413-233953-marostegui.json [production]
23:24 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1244', diff saved to https://phabricator.wikimedia.org/P60478 and previous config saved to /var/cache/conftool/dbconfig/20240413-232443-marostegui.json [production]
23:09 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1244', diff saved to https://phabricator.wikimedia.org/P60477 and previous config saved to /var/cache/conftool/dbconfig/20240413-230935-marostegui.json [production]
22:54 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1244 (T356166)', diff saved to https://phabricator.wikimedia.org/P60476 and previous config saved to /var/cache/conftool/dbconfig/20240413-225428-marostegui.json [production]
15:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1244 (T356166)', diff saved to https://phabricator.wikimedia.org/P60475 and previous config saved to /var/cache/conftool/dbconfig/20240413-154240-marostegui.json [production]
15:42 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1244.eqiad.wmnet with reason: Maintenance [production]
15:42 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1244.eqiad.wmnet with reason: Maintenance [production]
15:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242 (T356166)', diff saved to https://phabricator.wikimedia.org/P60474 and previous config saved to /var/cache/conftool/dbconfig/20240413-154217-marostegui.json [production]
15:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242', diff saved to https://phabricator.wikimedia.org/P60473 and previous config saved to /var/cache/conftool/dbconfig/20240413-152709-marostegui.json [production]
15:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242', diff saved to https://phabricator.wikimedia.org/P60472 and previous config saved to /var/cache/conftool/dbconfig/20240413-151201-marostegui.json [production]
14:56 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242 (T356166)', diff saved to https://phabricator.wikimedia.org/P60471 and previous config saved to /var/cache/conftool/dbconfig/20240413-145653-marostegui.json [production]
06:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1242 (T356166)', diff saved to https://phabricator.wikimedia.org/P60470 and previous config saved to /var/cache/conftool/dbconfig/20240413-060646-marostegui.json [production]
06:06 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1242.eqiad.wmnet with reason: Maintenance [production]
06:06 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1242.eqiad.wmnet with reason: Maintenance [production]
00:52 <pt1979@cumin2002> END (FAIL) - Cookbook sre.hosts.dhcp (exit_code=99) for host cp1115.eqiad.wmnet [production]
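The db1242 and db1244 entries above follow the usual depool → downtime → maintain → staged-repool cycle driven by dbctl and the sre.hosts.downtime cookbook. A minimal sketch of that sequence as run from a cumin host follows; the pool percentages, flag spellings, and reuse of T356166 are illustrative assumptions, not taken verbatim from the log:

    # Depool the replica and record the change in etcd
    sudo dbctl instance db1244 depool
    sudo dbctl config commit -m "Depooling db1244 (T356166)"

    # Downtime the host in monitoring for the maintenance window
    sudo cookbook sre.hosts.downtime --hours 12 -r "Maintenance" db1244.eqiad.wmnet

    # ...run the maintenance, then repool in steps...
    sudo dbctl instance db1244 pool -p 25
    sudo dbctl config commit -m "Repooling after maintenance db1244"
    # repeat with increasing -p values, which is what the successive
    # "Repooling after maintenance" commits above correspond to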
2024-04-12
21:03 <cdanis@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
21:03 <cdanis@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
20:43 <cdanis@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
20:43 <cdanis@deploy1002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
19:36 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in search_codfw [production]
19:36 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in search_codfw [production]
18:56 <andrew@cumin1002> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts cloudbackup2002.codfw.wmnet [production]
18:56 <andrew@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:56 <andrew@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudbackup2002.codfw.wmnet decommissioned, removing all IPs except the asset tag one - andrew@cumin1002" [production]
18:55 <andrew@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudbackup2002.codfw.wmnet decommissioned, removing all IPs except the asset tag one - andrew@cumin1002" [production]
18:52 <andrew@cumin1002> START - Cookbook sre.dns.netbox [production]
18:47 <andrew@cumin1002> START - Cookbook sre.hosts.decommission for hosts cloudbackup2002.codfw.wmnet [production]
18:46 <andrew@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts cloudbackup2001.codfw.wmnet [production]
18:46 <andrew@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:46 <andrew@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudbackup2001.codfw.wmnet decommissioned, removing all IPs except the asset tag one - andrew@cumin1002" [production]
18:44 <andrew@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudbackup2001.codfw.wmnet decommissioned, removing all IPs except the asset tag one - andrew@cumin1002" [production]
18:40 <andrew@cumin1002> START - Cookbook sre.dns.netbox [production]
18:35 <andrew@cumin1002> START - Cookbook sre.hosts.decommission for hosts cloudbackup2001.codfw.wmnet [production]
17:00 <mutante> crm2001 - on the initial puppet run adding envoy, build-envoy-config failed to build the config and the service failed due to a dependency issue. Fixed with a manual run of "sudo /usr/local/sbin/build-envoy-config -c /etc/envoy/" and a restart of envoyproxy.service [production]
16:19 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host matomo1003.eqiad.wmnet with OS bookworm [production]
16:16 <elukey> move cassandra instances on cassandra-dev to the new truststore (allowing PKI certs) - T352647 [production]
15:59 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revertrisk' for release 'main' . [production]
15:56 <sukhe@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4 days, 0:00:00 on cp1115.eqiad.wmnet with reason: testing PXE boot issues [production]
15:56 <sukhe@cumin1002> START - Cookbook sre.hosts.downtime for 4 days, 0:00:00 on cp1115.eqiad.wmnet with reason: testing PXE boot issues [production]
15:55 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'readability' for release 'main' . [production]
15:53 <isaranto@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
15:51 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on elastic2090.codfw.wmnet with reason: T353878 [production]
15:51 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on elastic2090.codfw.wmnet with reason: T353878 [production]
15:51 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-editquality-reverted' for release 'main' . [production]
15:50 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'revscoring-editquality-goodfaith' for release 'main' . [production]
15:50 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2090 for reboot to get rid of broken systemd units - bking@cumin2002 - T353878 [production]
15:50 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2090 for reboot to get rid of broken systemd units - bking@cumin2002 - T353878 [production]
15:50 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on matomo1003.eqiad.wmnet with reason: host reimage [production]
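The helmfile entries from deploy1002 above reflect the standard service-deploy flow on the deployment server: change into the service's helmfile.d directory and run helmfile against a single environment, either interactively ("apply", as in the mw-debug entries) or unconditionally ("sync", as in the ml-staging-codfw entries). A minimal sketch, assuming the conventional /srv/deployment-charts checkout path and directory layout (assumptions, not taken from the log):

    cd /srv/deployment-charts/helmfile.d/services/mw-debug
    helmfile -e eqiad -i apply    # show the diff, confirm, then apply to eqiad

    # for a non-interactive rollout of a release, as logged for revertrisk/readability:
    helmfile -e ml-staging-codfw sync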