2020-01-28
09:53 <arturo> restart apache2 on labweb1001/1002 because of Horizon errors [admin]
09:47 <arturo> created DNS zone wmcloud.org in eqiad1 and transferred it to the cloudinfra project (T242976); right now its only use is to delegate the codfw1dev.wmcloud.org subdomain to Designate in the other deployment [admin]
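A minimal sketch of how that zone creation, project transfer, and subdomain delegation can look with the Designate plugin for the OpenStack CLI; the email/TTL values mirror the codfw1dev zone entry further down, and the nameserver hostnames are assumptions:
  # create the zone in eqiad1 (email/TTL mirror the codfw1dev zone below)
  openstack zone create --description "wmcloud.org" --email root@wmflabs.org \
      --type PRIMARY --ttl 3600 wmcloud.org.
  # hand the zone over to the cloudinfra project (the receiving project then accepts the transfer)
  openstack zone transfer request create --target-project-id <cloudinfra-project-id> <zone-id>
  # delegate codfw1dev.wmcloud.org to the designate servers of the other
  # deployment (the nameserver names below are assumptions)
  openstack recordset create wmcloud.org. codfw1dev --type NS \
      --record ns0.codfw1dev.wikimediacloud.org. --record ns1.codfw1dev.wikimediacloud.org.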
2020-01-27
12:45 <arturo> [codfw1dev] manually move the new domain to the `cloudinfra-codfw1dev` project (on clouddb2001-dev): `[designate]> update zones set tenant_id='cloudinfra-codfw1dev' where id = '4c75410017904858a5839de93c9e8b3d';` T243556 [admin]
12:44 <arturo> [codfw1dev] `root@cloudcontrol2001-dev:~# openstack zone create --description "main DNS domain for VMs" --email "root@wmflabs.org" --type PRIMARY --ttl 3600 codfw1dev.wikimedia.cloud.` T243556 [admin]
2020-01-24
15:10 <jeh> remove icinga downtime for cloudvirt1013 T241313 [admin]
12:52 <arturo> repooling cloudvirt1013 after HW got fixed (T241313) [admin]
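Repooling a cloudvirt means letting the Nova scheduler place new VMs on it again; a minimal sketch, assuming this is done by re-enabling the nova-compute service and pulling the host out of a maintenance aggregate (the aggregate name and host FQDN are assumptions):
  # allow the scheduler to use the hypervisor again
  openstack compute service set --enable cloudvirt1013.eqiad.wmnet nova-compute
  # if the host had been parked in a maintenance aggregate, move it back (names assumed)
  openstack aggregate remove host maintenance cloudvirt1013.eqiad.wmnet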
2020-01-21
17:43 <bstorm_> remounting /mnt/nfs/dumps-labstore1007.wikimedia.org/ on all dumps-mounting projects [admin]
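A minimal sketch of the per-instance remount behind that entry; the mount point comes from the log, while the lazy unmount and the assumption that the mount is defined in fstab are mine. Fleet-wide this would normally be driven by a tool such as cumin rather than run by hand:
  # on each instance of a dumps-mounting project
  sudo umount -l /mnt/nfs/dumps-labstore1007.wikimedia.org/
  sudo mount /mnt/nfs/dumps-labstore1007.wikimedia.org/   # re-mounts from fstab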
10:24 <arturo> running `sudo systemctl restart apache2.service` in both labweb servers to try mitigating T240852 [admin]
2020-01-15
16:59 <bd808> Changed the config for the cloud-announce mailing list so that list admins do not get bounce unsubscribe notices [admin]
2020-01-14
14:03 <arturo> icinga downtime all cloudvirts for another 2h for fixing some icinga checks [admin]
12:04 <arturo> icinga downtime toolchecker for 2 hours for openstack upgrades T241347 [admin]
12:02 <arturo> icinga downtime cloud* labs* hosts for 2 hours for openstack upgrades T241347 [admin]
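These downtimes are scheduled on the Icinga server; a minimal sketch using the icinga-downtime helper that exists on Wikimedia's Icinga host (the exact flags are an assumption):
  # downtime one host for 2 hours (7200 s) during the upgrade window; flags are assumed
  sudo icinga-downtime -h cloudvirt1001 -d 7200 -r "openstack upgrades T241347"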
04:26 <andrewbogott> upgrading designate on cloudservices1003/1004 [admin]
2020-01-13
13:34 <arturo> [codfw1dev] prevent neutron from allocating floating IPs from the wrong subnet by doing `neutron subnet-update --allocation-pool start=208.80.153.190,end=208.80.153.190 cloud-instances-transport1-b-codfw` (T242594) [admin]
2020-01-10
13:27 <arturo> cloudvirt1009: virsh undefine i-000069b6. This is tools-elastic-01, which is running on cloudvirt1008 (so, leaked on cloudvirt1009) [admin]
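Before undefining a leaked libvirt domain it helps to confirm where Nova actually runs the instance; a minimal sketch of that check, with the instance UUID left as a placeholder:
  # where does nova think tools-elastic-01 lives? (should report cloudvirt1008)
  openstack server show <tools-elastic-01-uuid> -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:instance_name
  # the stale copy on cloudvirt1009 should not be running
  virsh list --all | grep i-000069b6     # run on cloudvirt1009
  virsh undefine i-000069b6              # remove only the leaked definition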
2020-01-09
11:12 <arturo> running `MariaDB [nova_eqiad1]> update quota_usages set in_use='0' where project_id='etytree';` (T242332) [admin]
11:11 <arturo> running `MariaDB [nova_eqiad1]> select * from quota_usages where project_id = 'etytree';` (T242332) [admin]
10:32 <arturo> ran `root@cloudcontrol1004:~# nova-manage project quota_usage_refresh --project etytree` [admin]
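Read bottom-up, this is a stuck-quota fix: a refresh attempt with nova-manage, then inspecting and zeroing the quota_usages rows by hand. A minimal sketch of checking the result afterwards with the standard OpenStack CLI:
  # compare the project's quota limits with what is actually running
  openstack quota show etytree
  openstack server list --all-projects --project etytree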
2020-01-08
10:53 <arturo> icinga downtime all cloudvirts for 30 minutes to re-create all canary VMs [admin]
2020-01-07
11:12 <arturo> icinga-downtime everything cloud* for 30 minutes to merge nova scheduler changes [admin]
10:02 <arturo> icinga downtime cloudvirt1009 for 30 minutes to re-create canary VM (T242078) [admin]
2020-01-06
13:45 <andrewbogott> restarting nova-api and nova-conductor on cloudcontrol1003 and 1004 [admin]
2020-01-04
16:34 <arturo> icinga downtime cloudvirt1024 for 2 months because of hardware errors (T241884) [admin]
2019-12-31
11:46 <andrewbogott> I couldn't! [admin]
11:39 <andrewbogott> restarting cloudservices2002-dev to see if I can reproduce an issue I saw earlier [admin]
2019-12-25
10:13 <arturo> icinga downtime the whole cloud* lab* fleet for 30 minutes to merge https://gerrit.wikimedia.org/r/c/operations/puppet/+/560575 (will restart some openstack components) [admin]
2019-12-24
15:13 <arturo> icinga downtime all the lab* fleet for nova password change for 1h [admin]
14:39 <arturo> icinga downtime all the cloud* fleet for nova password change for 1h [admin]
2019-12-23
11:13 <arturo> enable puppet in cloudcontrol1003/1004 [admin]
10:40 <arturo> disable puppet in cloudcontrol1003/1004 while doing changes related to python-ldap [admin]
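A minimal sketch of that disable/enable cycle on each cloudcontrol host, using the stock Puppet agent CLI; the reason string is taken from the log entry:
  sudo puppet agent --disable "changes related to python-ldap"
  # ... apply and verify the python-ldap related changes ...
  sudo puppet agent --enable
  sudo puppet agent --test    # one manual run to confirm a clean catalog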
2019-12-22
23:48 <andrewbogott> restarting nova-conductor and nova-api on cloudcontrol1003 and 1004 [admin]
09:45 <arturo> cloudvirt1013 is back (it came back on its own) T241313 [admin]
09:37 <arturo> cloudvirt1013 is down for good. Apparently powered off. I can't even reach it via iLO [admin]
2019-12-20
12:43 <arturo> icinga downtime cloudmetrics1001 for 128 hours [admin]
2019-12-18
12:55 <arturo> [codfw1dev] created a new neutron subnet object to hold the new CIDR for floating IPs (cloud-codfw1dev-floating - 185.15.57.0/29) T239347 [admin]
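A minimal sketch of creating that floating-IP subnet with the OpenStack CLI; the subnet name and CIDR come from the log entry, while the network placeholder and the no-DHCP/no-gateway settings are assumptions:
  openstack subnet create cloud-codfw1dev-floating \
      --network <transport-network> \
      --subnet-range 185.15.57.0/29 \
      --no-dhcp --gateway none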
2019-12-17
07:21 <andrewbogott> deploying horizon/train to labweb1001/1002 [admin]
2019-12-12
06:11 <arturo> schedule 4h downtime for labstores [admin]
05:57 <arturo> schedule 4h downtime for cloudvirts and other openstack components due to upgrade ops [admin]
2019-12-02
06:28 <andrewbogott> running nova-manage db sync on eqiad1 [admin]
06:27 <andrewbogott> running nova-manage cell_v2 map_cell0 on eqiad1 [admin]
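Read bottom-up, these are the database steps of a Nova upgrade; a minimal sketch of the typical order on a cloudcontrol node (only the last two commands appear in the log, the api_db step is an assumption):
  nova-manage api_db sync        # assumption: commonly run before the steps below
  nova-manage cell_v2 map_cell0
  nova-manage db sync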
2019-11-21
16:07 <jeh> created replica indexes and views for szywiki T237373 [admin]
15:48 <jeh> creating replica indexes and views for shywiktionary T238115 [admin]
15:48 <jeh> creating replica indexes and views for gcrwiki T238114 [admin]
15:46 <jeh> creating replica indexes and views for minwiktionary T238522 [admin]
15:36 <jeh> creating replica indexes and views for gewikimedia T236404 [admin]
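Exposing a newly created wiki on the Wiki Replicas follows the same two steps each time; a minimal sketch, assuming the maintain-replica-indexes and maintain-views scripts used on the replica hosts (flag names are assumptions):
  # for one of the new wikis, e.g. szywiki
  sudo maintain-replica-indexes --database szywiki --debug
  sudo maintain-views --databases szywiki --debug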
2019-11-18
19:27 <andrewbogott> repooling labsdb1011 [admin]
18:54 <andrewbogott> running maintain-views --all-databases --replace-all --clean on labsdb1011 T238480 [admin]
18:44 <andrewbogott> depooling labsdb1011 and killing remaining user queries T238480 [admin]
18:42 <andrewbogott> repooled labsdb1009 and 1010 T238480 [admin]
18:19 <andrewbogott> running maintain-views --all-databases --replace-all --clean on labsdb1010 T238480 [admin]
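Read bottom-up, each replica host goes through the same cycle: depool, rebuild all views, repool. A minimal sketch of that cycle for one host; the depool/repool mechanism (proxy configuration) and the query-kill step are only sketched as comments:
  # 1. depool labsdb1011 from the wiki-replica proxies (proxy config change, not shown)
  # 2. kill the remaining user queries on the host
  # 3. rebuild every view, replacing definitions and cleaning stale ones
  sudo maintain-views --all-databases --replace-all --clean
  # 4. repool the host once the rebuild finishes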