2020-10-15
15:17 <arturo> [codfw1dev] try cleaning up anything related to address scopes in the neutron database (T261724) [admin]
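Address-scope cleanup like this can usually be surveyed from the CLI before touching the database directly; a minimal sketch with the stock openstack client (the actual cleanup may well have gone through the neutron DB instead, as the entry suggests):

    openstack address scope list
    openstack subnet pool list
    openstack address scope delete <scope-id>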
13:56 <arturo> [codfw1dev] drop neutron l3 agent hacks in cloudnet2002/2003-dev (T261724) [admin]
2020-10-13
17:54 <andrewbogott> rebuilding cloudvirt1021 for backy support [admin]
15:22 <andrewbogott> draining cloudvirt1021 so I can rebuild it with backy support [admin]
14:19 <andrewbogott> rebuilding cloudvirt1022 with backy support [admin]
14:03 <andrewbogott> draining cloudvirt1022 so I can rebuild it with backy support [admin]
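Draining a hypervisor generally means disabling scheduling on it and migrating its instances elsewhere; a generic sketch with plain openstack commands (WMCS has its own wrapper tooling, so the exact procedure here may have differed):

    # keep new VMs from landing on the host
    openstack compute service set --disable cloudvirt1022 nova-compute
    # see what is still running there, then migrate each instance away
    openstack server list --all-projects --host cloudvirt1022
    openstack server migrate --live-migration <server-id>   # flag name varies by client version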
11:19 <arturo> [codfw1dev] rebooting labtestvirt2003 [admin]
2020-10-09
10:15 <arturo> [codfw1dev] root@cloudcontrol2001-dev:~# openstack router set --disable-snat cloudinstances2b-gw --external-gateway wan-transport-codfw (T261724) [admin]
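The SNAT change can be confirmed from the router's gateway info; a quick check, assuming the same client:

    openstack router show cloudinstances2b-gw -c external_gateway_info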
09:22 <arturo> [codfw1dev] rebooting cloudnet boxes for bridge and vlan changes (T261724) [admin]
09:12 <arturo> [codfw1dev] root@cloudcontrol2001-dev:~# openstack subnet delete 31214392-9ca5-4256-bff5-1e19a35661de (cloud-instances-transport1-b-codfw - 208.80.153.184/29) (T261724) [admin]
09:10 <arturo> [codfw1dev] root@cloudcontrol2001-dev:~# openstack router set --external-gateway wan-transport-codfw --fixed-ip subnet=cloud-gw-transport-codfw,ip-address=185.15.57.10 cloudinstances2b-gw (T261724) [admin]
08:49 <arturo> [codfw1dev] root@cloudcontrol2001-dev:~# openstack subnet create --network wan-transport-codfw --gateway 185.15.57.9 --no-dhcp --subnet-range 185.15.57.8/30 cloud-gw-transport-codfw (T261724) [admin]
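For reference, the /30 above yields exactly four addresses, the usual minimum for a routed transport link: 185.15.57.8 is the network address, .9 the gateway, .10 the single remaining host address (handed to cloudinstances2b-gw in the 09:10 entry above), and .11 the broadcast address.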
08:47 <arturo> [codfw1dev] root@cloudcontrol2001-dev:~# openstack subnet delete a5ab5362-4ffb-4059-9ff7-391e22dcf3bc (T261724) [admin]
2020-10-08
16:17 <arturo> [codfw1dev] `root@cloudcontrol2001-dev:~# openstack subnet create --network wan-transport-codfw --gateway 185.15.57.8 --no-dhcp --subnet-range 185.15.57.8/31 cloud-gw-transport-codfw` (with a hack -- see task) (T263622) [admin]
16:03 <arturo> [codfw1dev] briefly live-hacked python3-neutron source code in all 3 cloudcontrol2xxx-dev servers to workaround /31 network definition issue (T263622) [admin]
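Some context on the hack: a /31 holds only 2^(32-31) = 2 addresses (here 185.15.57.8 and .9), and per the linked task Neutron's stock subnet validation would not accept such a definition, presumably because its IPAM wants addresses reserved for network and broadcast (RFC 3021 point-to-point semantics notwithstanding). Hence the live patch here, and the /30 created instead the following day (see the 2020-10-09 entries above).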
10:28 <arturo> [codfw1dev] reimaging labtestvirt2003 (cloudgw) T261724 [admin]
2020-10-06
21:30 <andrewbogott> moved cloudvirt1013 out of the 'ceph' aggregate and into the 'maintenance' aggregate for T243414 [admin]
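The move can be expressed with the same aggregate commands that appear elsewhere in this log; a sketch of the likely pair:

    openstack aggregate remove host ceph cloudvirt1013
    openstack aggregate add host maintenance cloudvirt1013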
21:29 <andrewbogott> draining cloudvirt1013 for upgrade to 10G networking [admin]
14:45 <arturo> icinga downtime every cloud* lab* host for 60 minutes for keystone maintenance [admin]
2020-10-05
17:40 <bd808> `service uwsgi-labspuppetbackend restart` on cloud-puppetmaster-03 (T264649) [admin]
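A restart like this is typically verified before moving on; assuming the unit name matches the service:

    systemctl status uwsgi-labspuppetbackend
    journalctl -u uwsgi-labspuppetbackend --since '5 minutes ago'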
2020-10-02
11:05 <arturo> [codfw1dev] restarting rabbitmq-server in all 3 control nodes, the l3 agent was misbehaving [admin]
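A rolling rabbitmq-server restart is usually followed by a cluster health check; a minimal sketch, run on any one of the control nodes:

    systemctl restart rabbitmq-server
    rabbitmqctl cluster_status
    rabbitmqctl list_queues name messages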
09:16 <arturo> [codfw1dev] trying the labtestvirt2003 (cloudgw) reimage again (T261724) [admin]
2020-10-01
16:06 <arturo> rebooting cloudvirt1024 to validate changes to /etc/network/interfaces file [admin]
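Post-reboot, the interface changes can be validated quickly with iproute2's brief output, for example:

    ip -br link show
    ip -br addr show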
15:36 <arturo> [codfw1dev] reimaging labtestvirt2003 [admin]
2020-09-30
16:47 <andrewbogott> rebooting cloudvirt1032, 1033, 1034 for T262979 [admin]
13:28 <arturo> enable puppet, reboot and pool back cloudvirt1031 [admin]
13:27 <arturo> extend icinga downtimes for another 120 mins [admin]
13:15 <arturo> `aborrero@cloudcontrol1003:~$ sudo nova-manage placement sync_aggregates` after reading a hint in nova-api.log [admin]
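`nova-manage placement sync_aggregates` mirrors nova's host aggregates into the Placement service and is safe to rerun, which makes it a reasonable step after the manual aggregate surgery above. With the osc-placement plugin installed, the result can be inspected per resource provider:

    openstack resource provider aggregate list <provider-uuid>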
13:02 <arturo> rebooting cloudvirt1016 and moving it to the ceph host aggregate [admin]
12:55 <arturo> rebooting cloudvirt1014 and moving it to the ceph host aggregate [admin]
12:51 <arturo> rebooting cloudvirt1013 and moving it to the ceph host aggregate [admin]
12:39 <arturo> root@cloudcontrol1005:~# openstack aggregate add host maintenance cloudvirt1031 [admin]
12:36 <arturo> rebooted cloudnet1003 (active) a couple of minutes ago [admin]
12:36 <arturo> move cloudvirt1012 and cloudvirt1039 to the ceph aggregate [admin]
11:49 <arturo> rebooting cloudvirt1039 [admin]
11:46 <arturo> rebooting cloudvirt1012 [admin]
11:40 <arturo> rebooting cloudnet1004 (standby) to pick up https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 (T262979) [admin]
11:38 <arturo> [codfw1dev] rebooting cloudnet2002-dev to pick up https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 [admin]
11:36 <arturo> [codfw1dev] rebooting cloudnet2003-dev to pick up https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 [admin]
11:33 <arturo> disabling puppet and downtiming every virt/net server in the fleet in preparation for merging https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 (T262979) [admin]
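Fleet-wide prep like this is commonly driven from a cumin host; a hedged sketch, assuming WMF's cumin setup and its disable-puppet wrapper (the exact invocation and host query may have differed):

    sudo cumin 'cloudvirt1* or cloudnet1*' 'disable-puppet "merging 631167 - T262979"'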
09:32 <arturo> rebooting cloudvirt1012 to investigate linuxbridge agent issues [admin]
2020-09-29
15:40 <arturo> downgrade linux kernel from linux-image-4.19.0-11-amd64 to linux-image-4.19.0-10-amd64 on cloudvirt1012 [admin]
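On Debian the downgrade pattern is typically to install the older image (if not already present) and remove or deprioritize the newer one, then reboot; a sketch with the package names from this entry:

    apt-get install linux-image-4.19.0-10-amd64
    apt-get remove linux-image-4.19.0-11-amd64
    reboot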
14:47 <arturo> rebooting cloudvirt1012, chasing config weirdness in the linuxbridge agent [admin]
14:05 <andrewbogott> reimaging 1014 over and over in an attempt to get partman right [admin]
13:51 <arturo> rebooting cloudvirt1012 [admin]
2020-09-28
14:55 <arturo> [jbond42] upgraded facter to v3 across the VM fleet [admin]
13:54 <andrewbogott> moving cloudvirt1035 from aggregate 'spare' to 'ceph'. We're going to need all the capacity we can get while converting older cloudvirts to ceph [admin]
2020-09-24
15:47 <arturo> stopping/restarting rabbitmq-server in all cloudcontrol servers [admin]
15:45 <arturo> restarting rabbitmq-server in cloudcontrol1003 [admin]
15:15 <arturo> restarting floating_ip_ptr_records_updater.service in all 3 cloudcontrol servers to reset state after a DNS failure [admin]
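A restart across all three nodes could be done in one pass from a cumin host; an illustrative one-liner, assuming the cloudcontrol naming visible elsewhere in this log:

    sudo cumin 'cloudcontrol1*' 'systemctl restart floating_ip_ptr_records_updater.service'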