2021-03-05
21:40 <andrewbogott> replacing 'observer' role with 'reader' role in eqiad1 T276018 [admin]
21:21 <andrewbogott> replacing 'observer' role with 'reader' role in eqiad1 [admin]
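The role swap above amounts to re-granting every 'observer' assignment as 'reader'. A minimal sketch with the standard openstack CLI, assuming per-user, per-project assignments; the actual tooling used for T276018 is not shown in the log:

```
# List each user/project pair holding 'observer', grant 'reader' in its
# place, then drop 'observer'. Illustrative only.
openstack role assignment list --role observer -f value -c User -c Project |
while read -r user project; do
    openstack role add    --user "$user" --project "$project" reader
    openstack role remove --user "$user" --project "$project" observer
done
```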
16:23 <arturo> rebooting cloudvirt1036 for T275753 [admin]
12:30 <arturo> draining cloudvirt1036 for T275753 [admin]
12:25 <arturo> rebooting cloudvirt1035 for T275753 [admin]
10:49 <arturo> draining cloudvirt1035 for T275753 [admin]
10:47 <arturo> rebooting cloudvirt1034 for T275753 [admin]
10:26 <arturo> draining cloudvirt1034 for T275753 [admin]
10:25 <arturo> rebooting cloudvirt1033 for T275753 [admin]
09:18 <arturo> draining cloudvirt1033 for T275753 [admin]
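Each drain/reboot pair in this batch follows the same cycle. A sketch, assuming `wmcs-drain-hypervisor` (the WMCS helper named in the 2021-03-02 entries further down) takes the hypervisor name as its argument:

```
# Live-migrate every VM off the host, confirm it is empty, then reboot it.
wmcs-drain-hypervisor cloudvirt1036
openstack server list --all-projects --host cloudvirt1036   # expect no rows
ssh cloudvirt1036.eqiad.wmnet sudo reboot
```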
2021-03-04
18:36 <andrewbogott> rebooting cloudmetrics1002; the console is hanging [admin]
16:59 <arturo> rebooting cloudvirt1032 for T275753 [admin]
16:34 <arturo> draining cloudvirt1032 for T275753 [admin]
16:33 <arturo> rebooting cloudvirt1031 for T275753 [admin]
16:11 <arturo> draining cloudvirt1031 for T275753 [admin]
16:09 <arturo> rebooting cloudvirt1026 for T275753 [admin]
15:57 <arturo> draining cloudvirt1026 for T275753 [admin]
15:55 <arturo> rebooting cloudvirt1025 for T275753 [admin]
15:41 <arturo> draining cloudvirt1025 for T275753 [admin]
15:12 <arturo> rebooting cloudvirt1024 for T275753 [admin]
11:29 <arturo> draining cloudvirt1024 for T275753 [admin]
11:24 <dcaro> rebooted cloudvirt1022, re-adding to ceph and removing from maintenance host aggregate for T275753 [admin]
11:01 <dcaro> rebooting cloudvirt1022 for T275753 [admin]
09:12 <dcaro> draining cloudvirt1022 for T275753 [admin]
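Returning a host to service, as in the 11:24 entry, is the inverse of the drain. A sketch with the plain openstack aggregate commands; the aggregate names come from the log itself:

```
# Put cloudvirt1022 back in the scheduling pool after its reboot.
openstack aggregate remove host maintenance cloudvirt1022
openstack aggregate add host ceph cloudvirt1022
```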
2021-03-03
17:16 <andrewbogott> restarting rabbitmq-server on cloudcontrol1003,1004,1005; trying to explain AMQP errors in scheduler logs [admin]
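A sketch of that restart and the follow-up checks, run from one of the cloudcontrols; the systemd unit is the Debian default and the scheduler log path is an assumption about this deployment:

```
# Restart rabbitmq-server on each cloudcontrol, then verify the cluster.
for host in cloudcontrol1003 cloudcontrol1004 cloudcontrol1005; do
    ssh "$host" sudo systemctl restart rabbitmq-server
done
sudo rabbitmqctl cluster_status                      # did the nodes rejoin?
sudo grep -i amqp /var/log/nova/nova-scheduler.log   # are the errors gone?
```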
16:03 <dcaro> draining cloudvirt1022 for T275753 [admin]
16:00 <arturo> moved cloudvirt1013 into the 'toobusy' host aggregate; it has 221% CPU subscription and 82% memory subscription [admin]
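Those figures can be read off Nova's per-hypervisor counters, assuming subscription is computed as used over physical:

```
# 221% CPU subscription would mean vcpus_used / vcpus ≈ 2.21.
openstack hypervisor show cloudvirt1013 \
    -c vcpus -c vcpus_used -c memory_mb -c memory_mb_used
```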
15:34 <arturo> rebooting cloudvirt1021 for T275753 [admin]
14:31 <arturo> draining cloudvirt1021 for T275753 [admin]
13:59 <arturo> rebooting cloudvirt1018 for T275753 [admin]
13:28 <arturo> draining cloudvirt1018 for T275753 [admin]
12:49 <arturo> rebooting cloudvirt1017 for T275753 [admin]
12:22 <arturo> draining cloudvirt1017 for T275753 [admin]
12:20 <arturo> rebooting cloudvirt1016 for T275753 [admin]
12:01 <arturo> draining cloudvirt1016 for T275753 [admin]
11:59 <arturo> cloudvirt1014 now in the ceph host aggregate [admin]
11:58 <arturo> rebooting cloudvirt1014 for T275753 [admin]
11:50 <arturo> moved cloudvirt1023 out of the maintenance host aggregate, leaving it in the ceph aggregate (it was in both) [admin]
11:47 <arturo> moved cloudvirt1014 to the 'maintenance' host aggregate, draining it for T275753 [admin]
10:01 <arturo> icinga-downtime cloudnet1003 for 14 days because of a potential alerting storm due to firmware issues (T271058) [admin]
10:00 <arturo> rebooting cloudnet1003 again (no network failover) (T271058) [admin]
09:58 <arturo> updated firmware-bnx2x from 20190114-2 to 20200918-1~bpo10+1 on cloudnet1003 (T271058) [admin]
09:30 <arturo> installing Linux kernel 5.10.13-1~bpo10+1 on cloudnet1003 and rebooting it (network failover) (T271058) [admin]
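Read bottom-up, these four entries form one sequence: kernel, firmware, reboot, downtime. A sketch, assuming both packages came from buster-backports; the downtime cookbook name and flags are likewise assumptions, not taken from the log:

```
# On cloudnet1003: backported kernel, then the bnx2x firmware, then reboot.
sudo apt-get install -t buster-backports linux-image-amd64   # 5.10.13-1~bpo10+1
sudo apt-get install -t buster-backports firmware-bnx2x      # 20200918-1~bpo10+1
sudo reboot
# From a cumin host: silence Icinga for 14 days (T271058).
sudo cookbook sre.hosts.downtime --days 14 -r 'bnx2x firmware issues T271058' \
    'cloudnet1003.eqiad.wmnet'
```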
2021-03-02
17:16 <andrewbogott> rebooting cloudvirt1039 to see if I can trigger T276208 [admin]
16:10 <arturo> [codfw1dev] restarting nova-compute on cloudvirt2002-dev [admin]
11:59 <arturo> moved cloudvirt1012 to 'maintenance' host aggregate. Drain it with `wmcs-drain-hypervisor` to reboot it for T275753 [admin]
11:59 <arturo> cloudvirt1023 is affected by T276208 and cannot be rebooted. Put it back into the ceph host aggregate [admin]
10:43 <arturo> moved cloudvirt1013 cloudvirt1032 cloudvirt1037 back into the 'ceph' host aggregate [admin]
10:13 <arturo> moved cloudvirt1023 to 'maintenance' host aggregate. Drain it with `wmcs-drain-hypervisor` to reboot it for T275753 [admin]