2021-04-14
10:41 <dcaro> Upgrade of codfw ceph to octopus 15.2.20, mgrs upgraded, osds next (T274566) [admin]
10:37 <dcaro> Upgrade of codfw ceph to octopus 15.2.20, mons upgraded, mgrs next (T274566) [admin]
10:15 <dcaro> starting the upgrade of codfw ceph to octopus 15.2.20 (T274566) [admin]
10:07 <dcaro> Merged the ceph 15 (Octopus) repo deployment to codfw, only the repo, not the packages (T274566) [admin]
2021-04-13
16:42 <dcaro> Ceph balancer got the cluster to eval 0.014916, that is 88-77% usage for compute pool, and 28-19% usage for the cinder one \o/ (T274573) [admin]
15:08 <dcaro> Activating continuous upmap balancer, keeping a close eye (T274573) [admin]
15:03 <dcaro> Executing a second pass, there's still movements to improve the eval of 0.030075 (T274573) [admin]
15:02 <dcaro> First pass finished, improved eval to 0.030075 (T274573) [admin]
14:49 <dcaro> Running the first_pass balancing plan on ceph eqiad, current eval 0.030622 (T274573) [admin]
14:43 <dcaro> enabling ceph upmap pg balancer on eqiad (T274573) [admin]
14:36 <andrewbogott> upgrading codfw1dev to version Victoria, T261137 [admin]
13:11 <andrewbogott> upgrading eqiad1 designate to version Victoria, T261137 [admin]
10:43 <dcaro> enabled ceph upmap balancer on codfw (T274573) [admin]
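The balancer steps logged above follow the standard Ceph upmap workflow; a rough sketch with the stock ceph CLI (command names are standard, the plan name first_pass is taken from the log):

```shell
# Score the current PG distribution (lower is better; the log shows 0.030622 -> 0.014916)
ceph balancer eval

# Switch the balancer to upmap mode and build a named optimization plan
ceph balancer mode upmap
ceph balancer optimize first_pass

# Inspect and execute the plan, then re-evaluate
ceph balancer show first_pass
ceph balancer execute first_pass
ceph balancer eval

# Finally, enable continuous automatic balancing
ceph balancer on
```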
2021-04-07
21:33 <andrewbogott> upgrading codfw1dev designate to Victoria [admin]
2021-04-04
17:36 <andrewbogott> upgrading eqiad1 designate to Ussuri [admin]
2021-04-02
14:12 <andrewbogott> upgrading codfw1dev to OpenStack version Ussuri [admin]
2021-04-01
12:15 <dcaro> Restoring the 4.9 kernel on cloudcephosd2003-dev and upgrading (T274565) [admin]
10:29 <dcaro> Done restoring the 4.9 kernel on cloudcephosd2001-dev and upgrading; it requires logging into the console to boot the older kernel before removing the newer one (T274565) [admin]
10:10 <dcaro> Restoring the 4.9 kernel on cloudcephosd2001-dev and upgrading (T274565) [admin]
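Restoring an older kernel as logged above can be sketched roughly as follows; the package and GRUB menu-entry names are illustrative placeholders, not the exact ones used on these hosts:

```shell
# Install the older kernel alongside the current one (placeholder version)
apt-get install linux-image-4.9.0-XX-amd64

# Boot the older kernel once; grub-reboot needs the exact submenu entry,
# which is why a console login is required before removing the newer kernel
grub-reboot "Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 4.9.0-XX-amd64"
reboot

# Only once the host is running 4.9, purge the newer kernel and refresh GRUB
apt-get purge 'linux-image-5.*'
update-grub
```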
2021-03-31
08:47 <dcaro> upgrading cinder on codfw cloudcontrol2* nodes (T278845) [admin]
2021-03-30
09:53 <arturo> rebooting cloudnet1003 to clean up the conntrack table, it wouldn't clean up by hand ... [admin]
2021-03-28
15:42 <andrewbogott> updated debian-10.0-buster base image [admin]
2021-03-27
09:54 <arturo> cleaned up the conntrack table in the qrouter netns on cloudnet1003 (backup) [admin]
2021-03-25
19:03 <andrewbogott> deleting all unused (per wmcs-imageusage) Jessie base images from Glance [admin]
17:15 <andrewbogott> refreshing puppet compiler facts for tools project [admin]
10:31 <dcaro> kernel upgrade on osds on codfw done, running performance tests (T274565) [admin]
10:24 <dcaro> upgrading kernel on cloudcephosd2003-dev and reboot (T274565) [admin]
10:18 <dcaro> upgrading kernel on cloudcephosd2002-dev and reboot (T274565) [admin]
10:08 <dcaro> upgrading kernel on cloudcephmon2003-dev and reboot (T274565) [admin]
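The Jessie image cleanup earlier in this section can be sketched with the stock OpenStack CLI; wmcs-imageusage is a WMCS-specific helper for confirming an image is unused, and the grep filter here is an assumption:

```shell
# List Glance images whose name mentions jessie, with their IDs
openstack image list --format value -c ID -c Name | grep -i jessie

# After confirming via wmcs-imageusage that an image is unused, delete it by ID
openstack image delete <image-id>
```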
2021-03-24
09:19 <dcaro> restarted wmcs-backup on cloudvirt1024 as it failed due to an image being removed while running (T276892) [admin]
2021-03-23
11:33 <arturo> root@cloudcontrol1005:~# wmcs-novastats-dnsleaks --delete [admin]
2021-03-22
10:10 <arturo> cleanup conntrack table in standby node: aborrero@cloudnet1003:~ $ sudo ip netns exec qrouter-d93771ba-2711-4f88-804a-8df6fd03978a conntrack -F [admin]
2021-03-19
17:17 <bstorm> running `ALTER TABLE account MODIFY COLUMN type ENUM('user','tool','paws');` against the labsdbaccounts database on m5 T276284 [admin]
14:29 <andrewbogott> switching admin-monitoring project to use an upstream debian image; I want to see how this affects performance [admin]
00:30 <bstorm> downtimed labstore1004 to check some things in debug mode [admin]
2021-03-17
17:28 <bstorm> restarted the backup-glance-images job to clear errors in systemd T271782 [admin]
17:16 <andrewbogott> set default cinder quota for projects to 80GB with "update quota_classes set hard_limit=80 where resource='gigabytes';" on database 'cinder' [admin]
16:58 <andrewbogott> disabling all flavors with >20GB root storage with "update flavors set disabled=1 where root_gb>20;" in nova_eqiad1_api [admin]
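The two SQL tweaks above go straight to the databases, bypassing the API; a hedged way to verify the resulting state from the CLI (standard openstackclient commands):

```shell
# Confirm the new default gigabytes quota in the 'default' quota class
openstack quota show --class default

# List all flavors, including disabled ones; those with >20 GB root disk
# should now show as disabled
openstack flavor list --all
```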
2021-03-10
16:51 <arturo> rebooting cloudvirt1030 for T275753 [admin]
13:14 <dcaro> starting manually the canary VM for cloudvirt1029 (nova start 349830f6-3b39-4a8c-ada4-a7439f65cffe) (T275753) [admin]
12:51 <arturo> draining cloudvirt1030 for T275753 [admin]
12:47 <arturo> rebooting cloudvirt1029 for T275753 [admin]
11:56 <arturo> [codfw1dev] restart rabbitmq-server in all 3 cloudcontrol servers for T276964 [admin]
11:53 <arturo> [codfw1dev] restart nova-conductor in all 3 cloudcontrol servers for T276964 [admin]
11:31 <arturo> draining cloudvirt1029 for T275753 [admin]
11:29 <arturo> rebooting cloudvirt1013 for T275753 [admin]
11:05 <arturo> draining cloudvirt1013 for T275753 [admin]
11:00 <arturo> rebooting cloudvirt1028 for T275753 [admin]
10:33 <arturo> draining cloudvirt1028 for T275753 [admin]
10:29 <arturo> rebooting cloudvirt1023 for T275753 [admin]
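Draining a cloudvirt as logged above amounts to keeping new VMs off the host and live-migrating the existing ones away before the reboot; a rough sketch with plain openstackclient (WMCS has its own drain tooling, so this is an assumption about the underlying steps):

```shell
# Stop the scheduler from placing new VMs on the host
openstack compute service set --disable cloudvirt1023 nova-compute

# Live-migrate every VM off the host, letting the scheduler pick targets
for vm in $(openstack server list --all-projects --host cloudvirt1023 -f value -c ID); do
    openstack server migrate --live-migration "$vm"
done

# After the reboot, put the host back into service
openstack compute service set --enable cloudvirt1023 nova-compute
```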