2023-01-10
07:16 <ayounsi@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: check if dns update is needed after change of rec-dns-lb IPs status - ayounsi@cumin1001" [production]
07:14 <ayounsi@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: check if dns update is needed after change of rec-dns-lb IPs status - ayounsi@cumin1001" [production]
07:11 <ayounsi@cumin1001> START - Cookbook sre.dns.netbox [production]
07:10 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
07:10 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
07:06 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depool db1130 T326133', diff saved to https://phabricator.wikimedia.org/P42941 and previous config saved to /var/cache/conftool/dbconfig/20230110-070628-ladsgroup.json [production]
07:03 <XioNoX> remove static routes for legacy dns-rec-lb IPs - T239993 [production]
07:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Promote db1100 to s5 primary and set section read-write T326133', diff saved to https://phabricator.wikimedia.org/P42940 and previous config saved to /var/cache/conftool/dbconfig/20230110-070223-ladsgroup.json [production]
07:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Set s5 eqiad as read-only for maintenance - T326133', diff saved to https://phabricator.wikimedia.org/P42939 and previous config saved to /var/cache/conftool/dbconfig/20230110-070152-ladsgroup.json [production]
07:01 <Amir1> Starting s5 eqiad failover from db1130 to db1100 - T326133 [production]
06:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Set db1100 with weight 0 T326133', diff saved to https://phabricator.wikimedia.org/P42938 and previous config saved to /var/cache/conftool/dbconfig/20230110-062309-ladsgroup.json [production]
06:22 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 25 hosts with reason: Primary switchover s5 T326133 [production]
06:22 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on 25 hosts with reason: Primary switchover s5 T326133 [production]
05:39 <slyngshede@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Sync idm-test1001 - slyngshede@cumin1001" [production]
05:38 <slyngshede@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Sync idm-test1001 - slyngshede@cumin1001" [production]
03:14 <eileen> civicrm upgraded from 391e8482 to 9afd2789 [production]
03:12 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: plugin upgrade - ryankemper@cumin1001 - T324247 [production]
02:46 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: plugin upgrade - ryankemper@cumin1001 - T324247 [production]
02:41 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: plugin upgrade - ryankemper@cumin1001 - T324247 [production]
02:08 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: plugin upgrade - ryankemper@cumin1001 - T324247 [production]
01:50 <krinkle@deploy1002> Finished deploy [integration/docroot@f59119c]: (no justification provided) (duration: 00m 14s) [production]
01:50 <krinkle@deploy1002> Started deploy [integration/docroot@f59119c]: (no justification provided) [production]
01:28 <eileen> civicrm upgraded from e3405a4e to 391e8482 [production]
00:48 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: plugin upgrade - bking@cumin1001 - T324247 [production]
2023-01-09
22:34 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc2043.codfw.wmnet [production]
22:33 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: plugin upgrade - bking@cumin1001 - T324247 [production]
22:32 <bking@cumin1001> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: plugin upgrade - bking@cumin1001 - T324247 [production]
22:28 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc2043.codfw.wmnet [production]
22:25 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mc2030.codfw.wmnet [production]
22:25 <jiji@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
22:25 <jiji@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: mc2030.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jiji@cumin1001" [production]
22:15 <jiji@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: mc2030.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jiji@cumin1001" [production]
22:11 <jiji@cumin1001> START - Cookbook sre.dns.netbox [production]
22:05 <jiji@cumin1001> START - Cookbook sre.hosts.decommission for hosts mc2030.codfw.wmnet [production]
22:03 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mc2029.codfw.wmnet [production]
22:03 <jiji@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
22:03 <jiji@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: mc2029.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jiji@cumin1001" [production]
22:00 <jiji@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: mc2029.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jiji@cumin1001" [production]
21:54 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc2042.codfw.wmnet [production]
21:52 <kindrobot> close UTC late backport window [production]
21:50 <jiji@cumin1001> START - Cookbook sre.dns.netbox [production]
21:47 <kindrobot@deploy1002> Sync cancelled. [production]
21:47 <kindrobot@deploy1002> kindrobot and trainbranchbot: Backport for [[gerrit:877260|Revert "[config]: Deploy GDI Safety Survey Wave 4"]] synced to the testservers: mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet [production]
21:47 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc2042.codfw.wmnet [production]
21:45 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: plugin upgrade - bking@cumin1001 - T324247 [production]
21:45 <kindrobot@deploy1002> Started scap: Backport for [[gerrit:877260|Revert "[config]: Deploy GDI Safety Survey Wave 4"]] [production]
21:39 <kindrobot@deploy1002> Sync cancelled. [production]
21:38 <jiji@cumin1001> START - Cookbook sre.hosts.decommission for hosts mc2029.codfw.wmnet [production]
21:37 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mc2027.codfw.wmnet [production]
21:37 <jiji@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]