2023-03-14
10:02 <claime> Locking scap deployment for service switchover - T331541 [production]
10:00 <claime> Locking scap deployment for service switchover - T330651 [production]
09:56 <jayme> disabling puppet on P:calico::kubernetes for T325268 [production]
09:54 <jayme@deploy2002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
09:53 <jayme@deploy2002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
09:51 <jayme@deploy2002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
09:51 <jayme@deploy2002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
09:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2149', diff saved to https://phabricator.wikimedia.org/P45849 and previous config saved to /var/cache/conftool/dbconfig/20230314-094828-marostegui.json [production]
09:42 <jayme@deploy2002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
09:36 <moritzm> installing NSS security updates [production]
09:33 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2149 (T329260)', diff saved to https://phabricator.wikimedia.org/P45848 and previous config saved to /var/cache/conftool/dbconfig/20230314-093321-marostegui.json [production]
09:32 <jayme@deploy2002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
09:23 <Emperor> reboot ms-be2040 T331860 [production]
09:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2149 (T329260)', diff saved to https://phabricator.wikimedia.org/P45847 and previous config saved to /var/cache/conftool/dbconfig/20230314-090649-marostegui.json [production]
09:06 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2149.codfw.wmnet with reason: Maintenance [production]
09:06 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2149.codfw.wmnet with reason: Maintenance [production]
08:43 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2139.codfw.wmnet with reason: Maintenance [production]
08:42 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2139.codfw.wmnet with reason: Maintenance [production]
08:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2127 (T329260)', diff saved to https://phabricator.wikimedia.org/P45846 and previous config saved to /var/cache/conftool/dbconfig/20230314-084249-marostegui.json [production]
08:38 <vgutierrez> test HAProxy 2.6.10 in cp4044 and cp4045 [production]
08:31 <vgutierrez> fetch haproxy 2.6.10 for thirdparty/haproxy26 (buster && bullseye) @ apt.wm.o [production]
08:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2127', diff saved to https://phabricator.wikimedia.org/P45845 and previous config saved to /var/cache/conftool/dbconfig/20230314-082743-marostegui.json [production]
08:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2127', diff saved to https://phabricator.wikimedia.org/P45843 and previous config saved to /var/cache/conftool/dbconfig/20230314-081236-marostegui.json [production]
07:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2127 (T329260)', diff saved to https://phabricator.wikimedia.org/P45842 and previous config saved to /var/cache/conftool/dbconfig/20230314-075730-marostegui.json [production]
07:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2127 (T329260)', diff saved to https://phabricator.wikimedia.org/P45841 and previous config saved to /var/cache/conftool/dbconfig/20230314-073210-marostegui.json [production]
07:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2127.codfw.wmnet with reason: Maintenance [production]
07:31 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2127.codfw.wmnet with reason: Maintenance [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2109 (T329260)', diff saved to https://phabricator.wikimedia.org/P45840 and previous config saved to /var/cache/conftool/dbconfig/20230314-073149-marostegui.json [production]
07:25 <marostegui> Migrate db1183 to mariadb m5 eqiad dbmaint 10.6 T322294 [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2109', diff saved to https://phabricator.wikimedia.org/P45839 and previous config saved to /var/cache/conftool/dbconfig/20230314-071643-marostegui.json [production]
07:13 <marostegui> Migrate db2135 to mariadb m5 codfw dbmaint 10.6 [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2109', diff saved to https://phabricator.wikimedia.org/P45838 and previous config saved to /var/cache/conftool/dbconfig/20230314-070137-marostegui.json [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2109 (T329260)', diff saved to https://phabricator.wikimedia.org/P45837 and previous config saved to /var/cache/conftool/dbconfig/20230314-064630-marostegui.json [production]
06:42 <denisse@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts centrallog1001 [production]
06:42 <denisse@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
06:42 <denisse@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: centrallog1001 decommissioned, removing all IPs except the asset tag one - denisse@cumin1001" [production]
06:41 <hashar> gerrit: changed `operations/puppet` merge strategy to allow "content merges" (see `ops` list for the rationale) [production]
06:36 <denisse@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: centrallog1001 decommissioned, removing all IPs except the asset tag one - denisse@cumin1001" [production]
06:34 <denisse@cumin1001> START - Cookbook sre.dns.netbox [production]
06:28 <denisse@cumin1001> START - Cookbook sre.hosts.decommission for hosts centrallog1001 [production]
06:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2109 (T329260)', diff saved to https://phabricator.wikimedia.org/P45836 and previous config saved to /var/cache/conftool/dbconfig/20230314-061633-marostegui.json [production]
06:16 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
06:16 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2109.codfw.wmnet with reason: Maintenance [production]
06:04 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
06:04 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2112.codfw.wmnet with reason: Maintenance [production]
05:07 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
05:07 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
05:07 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
05:05 <ryankemper@deploy2002> Finished deploy [wdqs/wdqs@61ef435]: 0.3.122 (duration: 08m 45s) [production]
04:57 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.122` on canary `wdqs1003`; proceeding to rest of fleet [production]