2021-02-26 §
10:05 <dcaro> [codfw1dev] rebooting cloudcephosd2002-dev for kernel upgrade (T275753) [admin]
10:01 <arturo> [codfw1dev] rebooting cloudvirt200X-dev for kernel upgrade (T275753) [admin]
09:59 <arturo> [codfw1dev] rebooting cloudweb2001-dev for kernel upgrade (T275753) [admin]
09:53 <arturo> [codfw1dev] rebooting cloudservices2003-dev for kernel upgrade (T275753) [admin]
09:51 <arturo> [codfw1dev] rebooting cloudservices2002-dev for kernel upgrade (T275753) [admin]
09:45 <arturo> [codfw1dev] rebooting cloudcontrol2004-dev for kernel upgrade (T275753) [admin]
09:44 <arturo> [codfw1dev] rebooting cloudbackup[2001-2002].codfw.wmnet for kernel upgrade (T275753) [admin]
09:43 <dcaro> [codfw1dev] rebooting cloudcephosd2001-dev for kernel upgrade (T275753) [admin]
09:41 <arturo> [codfw1dev] rebooting cloudcontrol2003-dev for kernel upgrade (T275753) [admin]
09:33 <arturo> [codfw1dev] rebooting cloudcontrol2001-dev for kernel upgrade (T275753) [admin]
2021-02-25 §
14:56 <arturo> deployed wmcs-netns-events daemon to all cloudnet servers (T275483) [admin]
2021-02-24 §
11:07 <arturo> force-reboot cloudmetrics1002, add icinga downtime for 2 hours. Investigating some server issue [admin]
00:17 <bstorm> set --property hw_scsi_model=virtio-scsi and --property hw_disk_bus=scsi on the main stretch image in glance on eqiad1 T275430 [admin]
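The 00:17 entry above (and the matching change to the buster image logged at 22:43 on 2021-02-23 below) boils down to a single `openstack image set` call. A minimal sketch, assuming admin credentials are already loaded and using a hypothetical image name (an image ID works equally well):

    # Attach virtio-scsi disk properties to an existing Glance image.
    openstack image set \
        --property hw_scsi_model=virtio-scsi \
        --property hw_disk_bus=scsi \
        debian-9.13-stretch

    # Confirm the properties took effect.
    openstack image show debian-9.13-stretch -f value -c properties

Only instances created from the image after the change should pick up the virtio-scsi bus; already-running VMs keep whatever bus they booted with.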
2021-02-23 §
22:43 <bstorm> set --property hw_scsi_model=virtio-scsi and --property hw_disk_bus=scsi on the main buster image in glance on eqiad1 T275430 [admin]
20:36 <andrewbogott> adding r/o access to the eqiad1-glance-images ceph pool for the client.eqiad1-compute for T275430 [admin]
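A hedged sketch of how that 20:36 read-only grant might look with `ceph auth caps`. Note that `ceph auth caps` replaces the whole capability set rather than appending, so the existing caps have to be restated; the mon cap and the `eqiad1-compute` pool name below are assumptions:

    # Check the current capabilities first (ceph auth caps overwrites, it does not append).
    ceph auth get client.eqiad1-compute

    # Re-grant the assumed existing compute-pool access plus read-only access to the images pool.
    ceph auth caps client.eqiad1-compute \
        mon 'profile rbd' \
        osd 'profile rbd pool=eqiad1-compute, profile rbd-read-only pool=eqiad1-glance-images'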
10:49 <arturo> rebooting cloudnet1004 into the new kernel from buster-bpo (T271058) [admin]
10:48 <arturo> installing linux-image-amd64 from buster-bpo 5.10.13-1~bpo10+1 in cloudnet1004 (T271058) [admin]
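The 10:48/10:49 pair is the usual backports kernel upgrade; a sketch assuming "buster-bpo" refers to the standard buster-backports suite:

    # On cloudnet1004: pull the backported kernel metapackage and reboot into it.
    apt-get update
    apt-get install -t buster-backports linux-image-amd64   # 5.10.13-1~bpo10+1 at the time
    systemctl reboot

    # After the host comes back, confirm the running kernel version.
    uname -r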
2021-02-22 §
17:14 <bstorm> restarting nova-compute on cloudvirt1016 and cloudvirt1036 in case it helps T275411 [admin]
15:02 <dcaro> Re-uploaded the Debian Buster 10.0 image from RBD to Glance; that worked, so re-spawning all the broken instances (T275378) [admin]
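A rough sketch of that 15:02 re-upload, assuming the image data was still intact as an RBD image inside the glance pool; the pool name, the <image-id> placeholder, and the final image name are all assumptions:

    # Export the image out of Ceph, then register it in Glance again as a raw image.
    rbd export eqiad1-glance-images/<image-id> /tmp/debian-10.0-buster.raw
    openstack image create \
        --disk-format raw --container-format bare \
        --file /tmp/debian-10.0-buster.raw \
        "debian-10.0-buster"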
11:11 <dcaro> Refreshing all the canary instances (T275354) [admin]
2021-02-18 §
14:50 <arturo> rebooting cloudnet1004 for T271058 [admin]
10:25 <dcaro> Rebooting cloudmetrics1001 to apply new kernel (T275116) [admin]
10:16 <dcaro> Rebooting cloudmetrics1002 to apply new kernel (T275116) [admin]
10:14 <dcaro> Upgrading grafana on cloudmetrics1002 (T275116) [admin]
10:12 <dcaro> Upgrading grafana on cloudmetrics1001 (T275116) [admin]
2021-02-17 §
15:58 <arturo> deploying https://gerrit.wikimedia.org/r/c/operations/puppet/+/664845 to cloudnet servers (T268335) [admin]
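Deploying a merged puppet change to a set of hosts is normally just a fanned-out puppet agent run; a sketch using cumin with a guessed host selector and the run-puppet-agent wrapper, both of which are assumptions about how this particular deploy was done:

    # From a cluster-management host: apply the merged change (664845) on all cloudnet servers.
    sudo cumin 'cloudnet*' 'run-puppet-agent'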
2021-02-15 §
16:25 <arturo> [codfw1dev] rebooting all cloudgw200x-dev / cloudnet200x-dev servers (T272963) [admin]
15:45 <arturo> [codfw1dev] drop subnet definition for cloud-instances-transport1-b-codfw (T272963) [admin]
15:45 <arturo> [codfw1dev] connect virtual router cloudinstances2b-gw to vlan cloud-gw-transport-codfw (185.15.57.10) (T272963) [admin]
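A heavily hedged sketch of those two 15:45 Neutron changes; the port name is hypothetical, and the real change may have gone through a provider-network gateway rather than an explicit port:

    # Drop the old transport subnet definition.
    openstack subnet delete cloud-instances-transport1-b-codfw

    # Attach the virtual router to the cloud-gw transport vlan at the address from the log entry.
    openstack port create --network cloud-gw-transport-codfw \
        --fixed-ip ip-address=185.15.57.10 cloudinstances2b-gw-transport
    openstack router add port cloudinstances2b-gw cloudinstances2b-gw-transport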
2021-02-11 §
12:01 <arturo> [codfw1dev] drop instance `tools-codfw1dev-bastion-1` in `tools-codfw1dev` (was buster, cannot use it yet) [admin]
11:59 <arturo> [codfw1dev] create instance `tools-codfw1dev-bastion-2` (stretch) in `tools-codfw1dev` to test stuff related to T272397 [admin]
11:45 <arturo> [codfw1dev] create instance `tools-codfw1dev-bastion-1` in `tools-codfw1dev` to test stuff related to T272397 [admin]
11:42 <arturo> [codfw1dev] drop `tools` project, create `tools-codfw1dev` [admin]
11:38 <arturo> [codfw1dev] drop `cloudinfra` project (we are using `cloudinfra-codfw1dev` there) [admin]
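The 11:38–12:01 entries above are plain project and instance lifecycle operations; a sketch with a hypothetical flavor and network, and image names inferred from the entries:

    # Recreate the project under its codfw1dev name, then boot a stretch bastion in it.
    openstack project delete tools
    openstack project create tools-codfw1dev
    openstack server create \
        --image debian-9.13-stretch \
        --flavor g2.cores1.ram2.disk20 \
        --network lan-flat-cloudinstances2b \
        tools-codfw1dev-bastion-2

    # Drop the buster bastion that could not be used yet.
    openstack server delete tools-codfw1dev-bastion-1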
05:37 <bstorm> downtimed cloudnet1004 for another week T271058 [admin]
2021-02-09 §
15:23 <arturo> icinga-downtime for 2h on everything matching *labs and *cloud, for the openstack upgrades [admin]
11:14 <dcaro> Merged the osd scheduler change for all osds, applying on all cloudcephosd* (T273791) [admin]
2021-02-08 §
18:50 <bstorm> enabled puppet on cloudvirt1023 for now T274144 [admin]
18:44 <bstorm> restarted the backup_vms.service on cloudvirt1027 T274144 [admin]
17:51 <bstorm> deleted project pki T273175 [admin]
2021-02-05 §
10:59 <arturo> icinga-downtime labstore1004 tools share space check for 1 week (T272247) [admin]
10:21 <dcaro> The expired certs were affecting maps and several other projects; maps and project-proxy have been fixed (T273956) [admin]
09:19 <dcaro> Some certs around the infra are expired (T273956) [admin]
2021-02-04 §
10:12 <dcaro> Increasing the memory limit of osds in eqiad from 8589934592 (8G) to 12884901888 (12G) (T273851) [admin]
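Those raw byte counts correspond to the `osd_memory_target` setting; a sketch assuming the limit is changed cluster-wide through the ceph config database rather than per daemon (the 2021-02-02 entry further down, 4G to 8G, would be the same command with a different value):

    # Raise the OSD memory target for all OSDs to 12 GiB (the value is in bytes).
    ceph config set osd osd_memory_target 12884901888

    # Confirm the new value is active in the config database.
    ceph config get osd osd_memory_target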
2021-02-03 §
09:59 <dcaro> Doing a full vm backup on cloudvirt1024 with the new script (T260692) [admin]
01:50 <bstorm> icinga-downtime cloudnet1004 for a week T271058 [admin]
2021-02-02 §
17:14 <dcaro> Changed osd memory limit from 4G to 8G (T273649) [admin]
11:00 <arturo> icinga-downtime cloudvirt-wdqs1001 for 1 week (T273579) [admin]
03:12 <andrewbogott> running /usr/local/sbin/wmcs-purge-backups and /usr/local/sbin/wmcs-backup-instances on cloudvirt1024 to see why the backup job paged [admin]
2021-01-29 §
15:36 <andrewbogott> disabling puppet and some services on eqiad1 cloudcontrol nodes; replacing nova-placement-api with placement-api [admin]