2021-03-22
10:10 <arturo> cleanup conntrack table in standby node: aborrero@cloudnet1003:~ $ sudo ip netns exec qrouter-d93771ba-2711-4f88-804a-8df6fd03978a conntrack -F [admin]
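The flush above can be sketched as follows. The qrouter namespace name is the one from the log entry (substitute your own from `ip netns list`); `DRY_RUN=1`, the default here, only prints the commands instead of executing them, since both need root inside the namespace:

```shell
#!/bin/sh
# Flush the conntrack table inside a neutron router's network namespace.
# NETNS is the qrouter UUID from the log entry above; substitute your own.
NETNS="qrouter-d93771ba-2711-4f88-804a-8df6fd03978a"
DRY_RUN="${DRY_RUN:-1}"   # default to printing only; set DRY_RUN=0 to execute

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# -L lists the current entries, -F flushes the whole table.
run sudo ip netns exec "$NETNS" conntrack -L
run sudo ip netns exec "$NETNS" conntrack -F
```

Flushing on the standby node avoids stale connection-tracking state being replayed if the router fails over.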
2021-03-19
17:17 <bstorm> running `ALTER TABLE account MODIFY COLUMN type ENUM('user','tool','paws');` against the labsdbaccounts database on m5 T276284 [admin]
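A sketch of applying that schema change with the stock mysql client; the execution line is commented out because it needs credentials on the m5 database host. Extending the ENUM with 'paws' is additive, so existing 'user' and 'tool' rows are unaffected:

```shell
# Schema change from T276284: extend the account.type ENUM with 'paws'.
SQL="ALTER TABLE account MODIFY COLUMN type ENUM('user','tool','paws');"

# On the m5 host this would be piped into the client, e.g.:
#   echo "$SQL" | sudo mysql labsdbaccounts
# Here we only print the statement.
echo "$SQL"
```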
14:29 <andrewbogott> switching admin-monitoring project to use an upstream debian image; I want to see how this affects performance [admin]
00:30 <bstorm> downtimed labstore1004 to check some things in debug mode [admin]
2021-03-17
17:28 <bstorm> restarted the backup-glance-images job to clear errors in systemd T271782 [admin]
17:16 <andrewbogott> set default cinder quota for projects to 80GB with "update quota_classes set hard_limit=80 where resource='gigabytes';" on database 'cinder' [admin]
16:58 <andrewbogott> disabling all flavors with >20GB root storage with "update flavors set disabled=1 where root_gb>20;" in nova_eqiad1_api [admin]
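The two one-off statements above can be sketched together. The database names ('cinder' and 'nova_eqiad1_api') are the ones from the log entries; the mysql invocations are left commented because they require admin credentials on the database host:

```shell
# Cap the default per-project cinder quota at 80 GB of block storage...
QUOTA_SQL="UPDATE quota_classes SET hard_limit=80 WHERE resource='gigabytes';"
# ...and disable every nova flavor with more than 20 GB of root disk.
FLAVOR_SQL="UPDATE flavors SET disabled=1 WHERE root_gb>20;"

# On the database host these would be executed as, e.g.:
#   echo "$QUOTA_SQL"  | sudo mysql cinder
#   echo "$FLAVOR_SQL" | sudo mysql nova_eqiad1_api
printf '%s\n' "$QUOTA_SQL" "$FLAVOR_SQL"
```

Both statements are scoped by their WHERE clauses, so only the 'gigabytes' quota class and the large-root-disk flavors are touched.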
2021-03-10
16:51 <arturo> rebooting cloudvirt1030 for T275753 [admin]
13:14 <dcaro> starting manually the canary VM for cloudvirt1029 (nova start 349830f6-3b39-4a8c-ada4-a7439f65cffe) (T275753) [admin]
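Starting a canary VM by UUID, as logged, looks like this; the UUID is the one from the entry, and the unified `openstack` CLI equivalent is shown alongside the legacy `nova` client used in the log:

```shell
# Canary VM UUID from the log entry for cloudvirt1029.
VM_UUID="349830f6-3b39-4a8c-ada4-a7439f65cffe"

# Either client works; both need admin credentials sourced first:
#   nova start "$VM_UUID"              # legacy novaclient, as in the log
#   openstack server start "$VM_UUID"  # unified OpenStack CLI equivalent
echo "start requested for $VM_UUID"
```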
12:51 <arturo> draining cloudvirt1030 for T275753 [admin]
12:47 <arturo> rebooting cloudvirt1029 for T275753 [admin]
11:56 <arturo> [codfw1dev] restart rabbitmq-server in all 3 cloudcontrol servers for T276964 [admin]
11:53 <arturo> [codfw1dev] restart nova-conductor in all 3 cloudcontrol servers for T276964 [admin]
11:31 <arturo> draining cloudvirt1029 for T275753 [admin]
11:29 <arturo> rebooting cloudvirt1013 for T275753 [admin]
11:05 <arturo> draining cloudvirt1013 for T275753 [admin]
11:00 <arturo> rebooting cloudvirt1028 for T275753 [admin]
10:33 <arturo> draining cloudvirt1028 for T275753 [admin]
10:29 <arturo> rebooting cloudvirt1023 for T275753 [admin]
09:37 <arturo> draining cloudvirt1023 for T275753 [admin]
09:07 <arturo> [codfw1dev] reimaging cloudvirt2003-dev (T276964) [admin]
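The repeated drain-then-reboot entries above follow a standard hypervisor maintenance cycle: stop scheduling onto the host, live-migrate its VMs away, reboot, re-enable. A sketch with the generic OpenStack CLI (the actual WMCS drain tooling may wrap this differently; `DRY_RUN=1`, the default, only prints the mutating commands):

```shell
#!/bin/sh
# Drain one hypervisor, reboot it, and put it back in service.
# The host name is an example taken from the log entries above.
HOST="${1:-cloudvirt1013}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# 1. Stop the scheduler from placing new VMs on the host.
run openstack compute service set --disable "$HOST" nova-compute

# 2. Live-migrate every VM off the host (skipped in dry-run mode,
#    since listing the VMs requires a live API).
if [ "$DRY_RUN" != "1" ]; then
  for vm in $(openstack server list --all-projects --host "$HOST" -f value -c ID); do
    run openstack server migrate --live-migration "$vm"
  done
fi

# 3. Reboot the now-empty host, then re-enable it once it is back up.
run sudo reboot
run openstack compute service set --enable "$HOST" nova-compute
```

Draining before rebooting is why each host appears twice in the log, with the reboot entry following the drain once migrations have finished.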
2021-03-09
16:27 <arturo> rebooting cloudvirt1027 (T275753) [admin]
13:38 <arturo> draining cloudvirt1027 for T275753 [admin]
13:35 <arturo> icinga-downtime cloudvirt1038 for 30 days for T276922 [admin]
13:21 <arturo> add cloudvirt1039 to the ceph host aggregate (no longer a spare, we have cloudvirt1038 with HW failures) [admin]
12:52 <arturo> cloudvirt1038 hard powerdown / powerup for T276922 [admin]
12:33 <arturo> rebooting cloudvirt1038 (T275753) [admin]
10:58 <arturo> draining cloudvirt1038 (T275753) [admin]
10:54 <arturo> rebooting cloudvirt1037 (T275753) [admin]
09:59 <arturo> draining cloudvirt1037 (T275753) [admin]
09:12 <dcaro> restarted the wmcs-backup service on cloudvirt1024 to retry the backups (failed because a VM was removed in-between, T276892) [admin]
2021-03-05
21:40 <andrewbogott> replacing 'observer' role with 'reader' role in eqiad1 T276018 [admin]
21:21 <andrewbogott> replacing 'observer' role with 'reader' role in eqiad1 [admin]
16:23 <arturo> rebooting cloudvirt1036 for T275753 [admin]
12:30 <arturo> draining cloudvirt1036 for T275753 [admin]
12:25 <arturo> rebooting cloudvirt1035 for T275753 [admin]
10:49 <arturo> rebooting cloudvirt1035 for T275753 [admin]
10:47 <arturo> rebooting cloudvirt1034 for T275753 [admin]
10:26 <arturo> draining cloudvirt1034 for T275753 [admin]
10:25 <arturo> rebooting cloudvirt1033 for T275753 [admin]
09:18 <arturo> draining cloudvirt1033 for T275753 [admin]
2021-03-04
18:36 <andrewbogott> rebooting cloudmetrics1002; the console is hanging [admin]
16:59 <arturo> rebooting cloudvirt1032 for T275753 [admin]
16:34 <arturo> draining cloudvirt1032 for T275753 [admin]
16:33 <arturo> rebooting cloudvirt1031 for T275753 [admin]
16:11 <arturo> draining cloudvirt1031 for T275753 [admin]
16:09 <arturo> rebooting cloudvirt1026 for T275753 [admin]
15:57 <arturo> draining cloudvirt1026 for T275753 [admin]
15:55 <arturo> rebooting cloudvirt1025 for T275753 [admin]
15:41 <arturo> draining cloudvirt1025 for T275753 [admin]