2023-05-04
06:10 <slyngshede@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) for hosts test-reimage2001.codfw.wmnet [production]
06:10 <slyngshede@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
06:10 <slyngshede@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: test-reimage2001.codfw.wmnet decommissioned, removing all IPs except the asset tag one - slyngshede@cumin1001" [production]
06:08 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on sretest1002.eqiad.wmnet with reason: host reimage [production]
06:07 <slyngshede@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: test-reimage2001.codfw.wmnet decommissioned, removing all IPs except the asset tag one - slyngshede@cumin1001" [production]
06:05 <slyngshede@cumin1001> START - Cookbook sre.dns.netbox [production]
06:05 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on sretest1002.eqiad.wmnet with reason: host reimage [production]
06:01 <slyngshede@cumin1001> START - Cookbook sre.hosts.decommission for hosts test-reimage2001.codfw.wmnet [production]
05:59 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host bast5003.wikimedia.org [production]
05:54 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin1001 - T335835 [production]
05:53 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host bast5003.wikimedia.org [production]
05:51 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host sretest1002.eqiad.wmnet with OS bookworm [production]
04:54 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.REBOOT (1 nodes at a time) for ElasticSearch cluster relforge: relforge cluster reboot - ryankemper@cumin1001 - T335835 [production]
04:51 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin1001 - T335835 [production]
04:50 <ryankemper@cumin1001> START - Cookbook sre.wdqs.reboot [production]
04:47 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on 6 hosts with reason: Rolling reboot for T335835 [production]
04:47 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on 6 hosts with reason: Rolling reboot for T335835 [production]
04:45 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (1 nodes at a time) for ElasticSearch cluster relforge: relforge cluster reboot - ryankemper@cumin1001 - T335835 [production]
04:39 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on relforge[1003-1004].eqiad.wmnet with reason: Rolling reboot T335835 [production]
04:38 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on relforge[1003-1004].eqiad.wmnet with reason: Rolling reboot T335835 [production]
04:38 <ryankemper> [Elastic] Reboot operation failed w/ (likely transient) read timeouts, will try again in 10 mins [production]
04:37 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin1001 - T335835 [production]
04:36 <ryankemper> [Elastic] Beginning rolling reboot of eqiad elastic, 3 nodes at a time, `ryankemper@cumin1001` tmux session `reboot_eqiad` [production]
04:36 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin1001 - T335835 [production]
04:30 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on 50 hosts with reason: Rolling reboot of eqiad for T335835 [production]
04:29 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on 50 hosts with reason: Rolling reboot of eqiad for T335835 [production]
02:42 <eileen> config revision changed from 5ac52d82 to 7ac11236 reduce batch size, avoid failmail [production]
02:35 <eileen> config revision changed from 121a864a to 5ac52d82 [production]
02:33 <eileen> civicrm upgraded from b97aaa08 to 05523a9d [production]
01:29 <eileen> config revision changed from 26147e89 to 121a864a - disabling populate as it keeps rolling back so prob another overlong row [production]
2023-05-03
23:55 <eileen> config revision changed from 2995f558 to 26147e89 [production]
23:15 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reboot - bking@cumin1001 - T335835 [production]
23:10 <tzatziki> removing 1 file for legal compliance [production]
23:01 <eileen> config revision changed from 69f60bb9 to 2995f558 [production]
22:42 <zabe@deploy1002> Finished scap: Backport for [[gerrit:914903|Start writing to af_actor/afh_actor in group1 wikis (T334295)]] (duration: 07m 13s) [production]
22:37 <zabe@deploy1002> zabe: Backport for [[gerrit:914903|Start writing to af_actor/afh_actor in group1 wikis (T334295)]] synced to the testservers: mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet [production]
22:35 <zabe@deploy1002> Started scap: Backport for [[gerrit:914903|Start writing to af_actor/afh_actor in group1 wikis (T334295)]] [production]
22:34 <tzatziki> removing 12 files for legal compliance [production]
22:19 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching aqs1010.eqiad.wmnet: Upgrade Cassandra — T335383 - eevans@cumin1001 [production]
22:11 <eevans@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching aqs1010.eqiad.wmnet: Upgrade Cassandra — T335383 - eevans@cumin1001 [production]
22:08 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching aqs2001.codfw.wmnet: Upgrade Cassandra — T335383 - eevans@cumin1001 [production]
22:04 <eileen> civicrm upgraded from c6149ad2 to b97aaa08 [production]
22:00 <eevans@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching aqs2001.codfw.wmnet: Upgrade Cassandra — T335383 - eevans@cumin1001 [production]
21:55 <brett> Disable puppet on lvs4008 for new pybal deployment (just in case immediate config rollback is required) - T263797 [production]
21:43 <milimetric@deploy1002> Finished deploy [analytics/refinery@c53c095] (thin): Deploy THIN [analytics/refinery@c53c095] (duration: 00m 06s) [production]
21:43 <milimetric@deploy1002> Started deploy [analytics/refinery@c53c095] (thin): Deploy THIN [analytics/refinery@c53c095] [production]
21:31 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching restbase10[17-33].eqiad.wmnet: Upgrade Cassandra — T335383 - eevans@cumin1001 [production]
21:31 <brett> Uploaded pybal_1.15.11 to apt1001 via reprepro [production]
21:31 <milimetric@deploy1002> Finished deploy [analytics/refinery@c53c095]: Refinery deploy [analytics/refinery@c53c095] (duration: 08m 22s) [production]
21:22 <milimetric@deploy1002> Started deploy [analytics/refinery@c53c095]: Refinery deploy [analytics/refinery@c53c095] [production]