2020-10-08
16:03 <arturo> [codfw1dev] briefly live-hacked python3-neutron source code in all 3 cloudcontrol2xxx-dev servers to workaround /31 network definition issue (T263622) [admin]
10:28 <arturo> [codfw1dev] reimaging labtestvirt2003 (cloudgw) T261724 [admin]
2020-10-06
21:30 <andrewbogott> moved cloudvirt1013 out of the 'ceph' aggregate and into the 'maintenance' aggregate for T243414 [admin]
21:29 <andrewbogott> draining cloudvirt1013 for upgrade to 10G networking [admin]
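The drain-then-reaggregate pattern in the two entries above can be sketched roughly as follows. This is an illustrative shape only, assuming admin-scoped OpenStack credentials; the exact migration flags vary by client version, and WMCS normally wraps this in its own tooling rather than running it by hand.

```shell
# Hedged sketch of draining a hypervisor and moving it between host
# aggregates (host/aggregate names taken from the log entries above).

# Stop the scheduler from placing new VMs on the host.
openstack compute service set --disable \
    --disable-reason "T243414 drain for 10G upgrade" \
    cloudvirt1013 nova-compute

# Live-migrate the remaining VMs off the host, one at a time.
# (--live-migration is the newer client spelling; older clients used --live.)
openstack server list --host cloudvirt1013 --all-projects -f value -c ID \
  | while read -r vm; do
      openstack server migrate --live-migration "$vm"
    done

# Move the now-empty host from the 'ceph' aggregate to 'maintenance'.
openstack aggregate remove host ceph cloudvirt1013
openstack aggregate add host maintenance cloudvirt1013
```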
14:45 <arturo> icinga downtime every cloud* lab* host for 60 minutes for keystone maintenance [admin]
2020-10-05
17:40 <bd808> `service uwsgi-labspuppetbackend restart` on cloud-puppetmaster-03 (T264649) [admin]
2020-10-02
11:05 <arturo> [codfw1dev] restarting rabbitmq-server in all 3 control nodes, the l3 agent was misbehaving [admin]
09:16 <arturo> [codfw1dev] trying the labtestvirt2003 (cloudgw) reimage again (T261724) [admin]
2020-10-01
16:06 <arturo> rebooting cloudvirt1024 to validate changes to /etc/network/interfaces file [admin]
15:36 <arturo> [codfw1dev] reimaging labtestvirt2003 [admin]
2020-09-30
16:47 <andrewbogott> rebooting cloudvirt1032, 1033, 1034 for T262979 [admin]
13:28 <arturo> enable puppet, reboot and pool back cloudvirt1031 [admin]
13:27 <arturo> extend icinga downtimes for another 120 mins [admin]
13:15 <arturo> `aborrero@cloudcontrol1003:~$ sudo nova-manage placement sync_aggregates` after reading a hint in nova-api.log [admin]
13:02 <arturo> rebooting cloudvirt1016 and moving it to the ceph host aggregate [admin]
12:55 <arturo> rebooting cloudvirt1014 and moving it to the ceph host aggregate [admin]
12:51 <arturo> rebooting cloudvirt1013 and moving it to the ceph host aggregate [admin]
12:39 <arturo> root@cloudcontrol1005:~# openstack aggregate add host maintenance cloudvirt1031 [admin]
12:36 <arturo> rebooted cloudnet1003 (active) a couple of minutes ago [admin]
12:36 <arturo> move cloudvirt1012 and cloudvirt1039 to the ceph aggregate [admin]
11:49 <arturo> rebooting cloudvirt1039 [admin]
11:46 <arturo> rebooting cloudvirt1012 [admin]
11:40 <arturo> rebooting cloudnet1004 (standby) to pick up https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 (T262979) [admin]
11:38 <arturo> [codfw1dev] rebooting cloudnet2002-dev to pick up https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 [admin]
11:36 <arturo> [codfw1dev] rebooting cloudnet2003-dev to pick up https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 [admin]
11:33 <arturo> disabling puppet and downtiming every virt/net server in the fleet in preparation for merging https://gerrit.wikimedia.org/r/c/operations/puppet/+/631167 (T262979) [admin]
09:32 <arturo> rebooting cloudvirt1012 to investigate linuxbridge agent issues [admin]
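The disable-merge-reenable cycle running through the entries above follows a standard shape: pause puppet fleet-wide with a reason, merge the change, then bring hosts back one at a time (standby cloudnet before active). A minimal per-host sketch, assuming plain `puppet agent` rather than any site-specific wrapper:

```shell
# Illustrative per-host steps for a coordinated puppet change; the
# downtime itself is set in Icinga, and the merge happens on the
# puppetmaster between the disable and enable steps.

# 1. On each affected host: pause puppet with an audit-friendly reason.
sudo puppet agent --disable "merging gerrit 631167 (T262979)"

# 2. (Merge the change on the puppetmaster.)

# 3. Per host, standby nodes first: re-enable, apply, then reboot to
#    pick up the new network configuration.
sudo puppet agent --enable
sudo puppet agent --test
sudo systemctl reboot
```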
2020-09-29
15:40 <arturo> downgrade linux kernel from linux-image-4.19.0-11-amd64 to linux-image-4.19.0-10-amd64 on cloudvirt1012 [admin]
14:47 <arturo> rebooting cloudvirt1012, chasing config weirdness in the linuxbridge agent [admin]
14:05 <andrewbogott> reimaging 1014 over and over in an attempt to get partman right [admin]
13:51 <arturo> rebooting cloudvirt1012 [admin]
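The kernel downgrade logged at 15:40 works because Debian ships each kernel ABI as its own package, so the previous image can be reinstalled alongside or instead of the suspect one. A hedged sketch of the steps, assuming both packages are still in the buster archive:

```shell
# Roll cloudvirt1012 back from the -11 to the -10 kernel ABI.
sudo apt-get install linux-image-4.19.0-10-amd64   # previous ABI, separate package
sudo apt-get remove linux-image-4.19.0-11-amd64    # drop the suspect kernel
sudo update-grub                                   # regenerate boot entries
sudo systemctl reboot

# After boot, confirm which kernel is running:
uname -r
```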
2020-09-28
14:55 <arturo> [jbond42] upgraded facter to v3 across the VM fleet [admin]
13:54 <andrewbogott> moving cloudvirt1035 from aggregate 'spare' to 'ceph'. We're going to need all the capacity we can get while converting older cloudvirts to ceph [admin]
2020-09-24
15:47 <arturo> stopping/restarting rabbitmq-server in all cloudcontrol servers [admin]
15:45 <arturo> restarting rabbitmq-server in cloudcontrol1003 [admin]
15:15 <arturo> restarting floating_ip_ptr_records_updater.service in all 3 cloudcontrol servers to reset state after a DNS failure [admin]
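One hedged way to perform the rolling rabbitmq-server restart from the entries above: restart one cluster node at a time and confirm it has rejoined before touching the next, so the cluster never loses quorum. Hostnames are the eqiad cloudcontrols mentioned elsewhere in this log.

```shell
# Rolling restart of a 3-node RabbitMQ cluster, one node at a time.
for host in cloudcontrol1003 cloudcontrol1004 cloudcontrol1005; do
    ssh "$host" 'sudo systemctl restart rabbitmq-server'
    # Verify the node rejoined the cluster before moving to the next one.
    ssh "$host" 'sudo rabbitmqctl cluster_status'
done
```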
2020-09-18
10:16 <arturo> cloudvirt1039 libvirtd service issues were fixed with a reboot [admin]
09:56 <arturo> rebooting cloudvirt1039 (spare) to try to fix some weird libvirtd failure [admin]
09:50 <arturo> enabling puppet in cloudvirts and effectively merging patches from T262979 [admin]
08:59 <arturo> disable puppet in all buster cloudvirts (cloudvirt[1024,1031-1039].eqiad.wmnet) to merge a patch for T263205 and T262979 [admin]
08:50 <arturo> installing iptables from buster-bpo in cloudvirt1036 (T263205 and T262979) [admin]
2020-09-15
20:32 <andrewbogott> rebooting cloudvirt1038 to see if it resolves T262979 [admin]
13:58 <andrewbogott> draining cloudvirt1002 with wmcs-ceph-migrate [admin]
2020-09-14
14:21 <andrewbogott> draining cloudvirt1001, migrating all VMs with wmcs-ceph-migrate [admin]
10:41 <arturo> [codfw1dev] trying to get the bonding working for labtestvirt2003 (T261724) [admin]
09:47 <arturo> installed qemu security update in eqiad1 cloudvirts (T262386) [admin]
09:43 <arturo> [codfw1dev] installed qemu security update in codfw1dev cloudvirts (T262386) [admin]
2020-09-09
18:13 <andrewbogott> restarting ceph-mon@cloudcephmon1003 in hopes that the slow ops reported are phantoms [admin]
18:01 <andrewbogott> restarting ceph-mgr@cloudcephmon1003 in hopes that the slow ops reported are phantoms (https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/EOWNO3MDYRUZKAK6RMQBQ5WBPQNLHOPV/) [admin]
17:40 <andrewbogott> giving ceph pg autoscale another chance: ceph osd pool set eqiad1-compute pg_autoscale_mode on [admin]
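The autoscaler command in the last entry above, plus the status check that shows what the autoscaler intends to do before it starts moving placement groups. Pool name is from the log entry; the status subcommand is available from Ceph Nautilus onward.

```shell
# Re-enable the PG autoscaler on one pool and watch what it plans.
ceph osd pool set eqiad1-compute pg_autoscale_mode on

# Shows current vs. proposed PG counts per pool (PG_NUM / NEW PG_NUM columns).
ceph osd pool autoscale-status

# Watch overall health while any resulting remap/backfill runs.
ceph -s
```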