2020-07-14
04:00 <andrewbogott> shortened the ttl on .wmflabs.org. to 300 [admin]
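Cloud VPS DNS zones are served by Designate, so a zone TTL change like the one above can be made with the OpenStack DNS CLI. A minimal sketch, assuming the zone lives in Designate and admin credentials are already loaded; this may not be the exact mechanism that was used:

```
# Lower the TTL on the wmflabs.org. zone to 300 seconds.
openstack zone set --ttl 300 wmflabs.org.

# Confirm the new value took effect.
openstack zone show wmflabs.org. -c ttl
```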
2020-07-13
16:17 <arturo> icinga downtime cloudcontrol[1003-1005].wikimedia.org for 1h for galera database movements [admin]
2020-07-12
17:39 <andrewbogott> switched eqiad1 keystone from m5 to cloudcontrol galera [admin]
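Switching Keystone from the shared m5 database to the cloudcontrol Galera cluster comes down to repointing its SQLAlchemy connection string and restarting the service. A hedged illustration only; the hostnames and credentials below are placeholders, not the real production values:

```
# /etc/keystone/keystone.conf (illustrative placeholders only)
[database]
# before: shared m5 misc database cluster
#connection = mysql+pymysql://keystone:SECRET@m5-master.example.wmnet/keystone
# after: Galera running on the cloudcontrol hosts
connection = mysql+pymysql://keystone:SECRET@cloudcontrol-galera.example.wmnet/keystone
```

Keystone would then be restarted in whatever way it is served on the cloudcontrol hosts for the change to take effect.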
2020-07-10
20:26 <andrewbogott> disabling nova api to move database to galera [admin]
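Stopping the compute API while its database is copied prevents writes from landing in the old m5 tables mid-move. A rough sketch of the disable/re-enable around the move; the unit name is the standard Debian one and the sequence is an assumption:

```
# Stop the nova API so nothing writes to the old database during the move.
sudo systemctl stop nova-api.service

# ...dump and import the nova databases into Galera, update
# [database]/connection and [api_database]/connection in nova.conf...

# Bring the API back once nova.conf points at the Galera cluster.
sudo systemctl start nova-api.service
```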
2020-07-09
11:23 <arturo> [codfw1dev] rebooting cloudnet2003-dev again for testing sysctl/puppet behavior (T257552) [admin]
11:11 <arturo> [codfw1dev] rebooting cloudnet2003-dev for testing sysctl/puppet behavior (T257552) [admin]
09:16 <arturo> manually increasing sysctl value of net.nf_conntrack_max in cloudnet servers (T257552) [admin]
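Raising the conntrack limit is a single sysctl per host. The entry uses the legacy net.nf_conntrack_max key, which is the same tunable as net.netfilter.nf_conntrack_max; the value below is a placeholder, and in production the persistent setting would normally come from puppet rather than a hand edit:

```
# Raise the connection-tracking table limit at runtime (value is illustrative).
sudo sysctl -w net.netfilter.nf_conntrack_max=1048576

# Persist it across the reboots mentioned above (normally puppet-managed).
echo 'net.netfilter.nf_conntrack_max = 1048576' | sudo tee /etc/sysctl.d/90-conntrack.conf
```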
2020-07-06
15:16 <arturo> installing 'aptitude' in all cloudvirts [admin]
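Fleet-wide package installs like this are normally pushed with cumin rather than host by host. A hedged sketch; the host selector is an assumption and depends on the cumin backend and aliases in use:

```
# Install aptitude on all cloudvirt hosts (host query is illustrative).
sudo cumin 'cloudvirt1*.eqiad.wmnet' 'DEBIAN_FRONTEND=noninteractive apt-get -y install aptitude'
```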
2020-07-03
12:51 <arturo> [codfw1dev] galera cluster should be up and running, openstack happy (T256283) [admin]
11:44 <arturo> [codfw1dev] restoring glance database backup from bacula into cloudcontrol2001-dev (T256283) [admin]
11:39 <arturo> [codfw1dev] stopped mysql database in the galera cluster T256283 [admin]
11:36 <arturo> [codfw1dev] dropped glance database in the galera cluster T256283 [admin]
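Read bottom-up, the sequence is: drop the stale glance schema, stop MariaDB on the cluster, restore the dump recovered from Bacula, and confirm OpenStack is healthy again. A rough sketch of the database steps only; the dump path is a placeholder and the Bacula restore itself is omitted:

```
# Drop the stale schema on the Galera cluster (destructive; illustrative only).
sudo mysql -e "DROP DATABASE glance;"

# ...stop mysql, restore the dump from Bacula onto cloudcontrol2001-dev,
#    bring the cluster back up...

# Recreate the schema and load the restored dump (path is a placeholder).
sudo mysql -e "CREATE DATABASE glance;"
sudo mysql glance < /var/tmp/bacula-restore/glance.sql
```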
2020-07-02
15:41 <arturo> `sudo wmcs-openstack --os-compute-api-version 2.55 flavor create --private --vcpus 8 --disk 300 --ram 16384 --property aggregate_instance_extra_specs:ceph=true --description "for packaging envoy" bigdisk-ceph` (T256983) [admin]
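Because the flavor is created with --private, it also has to be granted to the target project before an instance can boot with it. A hedged follow-up sketch; the project, image and network names are placeholders:

```
# Grant the private flavor to a project, then boot an instance with it
# (project, image and network names are illustrative placeholders).
sudo wmcs-openstack flavor set --project some-project bigdisk-ceph
sudo wmcs-openstack server create --flavor bigdisk-ceph \
  --image debian-10.0-buster --network some-network envoy-packaging-01
```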
2020-06-29
14:24 <arturo> starting rabbitmq-server in all 3 cloudcontrol servers [admin]
14:23 <arturo> stopping rabbitmq-server in all 3 cloudcontrol servers [admin]
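Stopping and then starting rabbitmq-server on all three nodes amounts to a rolling restart of the message bus. A hedged sketch with cumin; the host range is inferred from other entries in this log and may not match the fleet exactly:

```
# Restart rabbitmq on the cloudcontrol nodes (host range is an assumption).
sudo cumin 'cloudcontrol[1003-1005].wikimedia.org' 'systemctl stop rabbitmq-server'
sudo cumin 'cloudcontrol[1003-1005].wikimedia.org' 'systemctl start rabbitmq-server'

# Check cluster membership afterwards on any one node.
sudo rabbitmqctl cluster_status
```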
2020-06-18
20:38 <andrewbogott> rebooting cloudservices2003-dev due to a mysterious 'host down' alert on a secondary ip [admin]
2020-06-16
15:38 <arturo> created by hand neutron port 9c0a9a13-e409-49de-9ba3-bc8ec4801dbf `paws-haproxy-vip` (T295217) [admin]
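Creating the port by hand reserves a fixed address that the PAWS haproxy instances can then share as a VIP. A hedged sketch of the kind of command involved; the network, subnet and IP address are placeholders, not the values actually used:

```
# Reserve a VIP port so haproxy/keepalived instances can share the address
# (network, subnet and IP are illustrative placeholders).
openstack port create --network some-network \
  --fixed-ip subnet=some-subnet,ip-address=172.16.0.10 \
  paws-haproxy-vip
```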
2020-06-12
13:23 <arturo> DNS zone `paws.wmcloud.org` transferred to the PAWS project (T195217) [admin]
13:20 <arturo> created DNS zone `paws.wmcloud.org` (T195217) [admin]
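Designate lets a zone be created in one project and handed to another through a transfer request/accept pair, which matches the two entries above. A hedged sketch; the contact email, project ID, request ID and key are placeholders:

```
# Create the zone (contact email is a placeholder).
openstack zone create --email cloud-admin@example.org paws.wmcloud.org.

# Offer the zone to the PAWS project, then accept the offer from that project
# (IDs and key come from the output of the previous commands).
openstack zone transfer request create --target-project-id <paws-project-id> paws.wmcloud.org.
openstack zone transfer accept request --transfer-id <request-id> --key <key>
```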
2020-06-11
19:19 <bstorm_> proceeding with failback to labstore1004 now that DRBD devices are consistent T224582 [admin]
17:22 <bstorm_> delaying failback to labstore1004 for drive syncs T224582 [admin]
17:17 <bstorm_> failing NFS back to labstore1004 to complete the upgrade process T224582 [admin]
16:15 <bstorm_> failing over NFS for labstore1004 to labstore1005 T224582 [admin]
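The 17:22 delay and the 19:19 go-ahead hinge on whether the DRBD volumes behind the NFS exports have finished resynchronizing. A hedged sketch of that consistency check on the labstore nodes; resource names are not shown in the log:

```
# Failback is safe once every DRBD resource reports UpToDate/UpToDate
# rather than SyncSource/SyncTarget.
cat /proc/drbd

# Per-resource summary with the DRBD 8.x tooling.
sudo drbd-overview
```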
2020-06-10
16:09 <andrewbogott> deleting all old cloud-ns0.wikimedia.org and cloud-ns1.wikimedia.org ns records in designate database T254496 [admin]
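The deletion itself was done directly in the Designate database, but the stale NS records can be enumerated from the API side first. A hedged sketch of that listing step; the zone name is a placeholder and the --type filter depends on the client version:

```
# List NS recordsets for a zone to find stale cloud-ns0/cloud-ns1 entries
# (zone name is illustrative; the actual cleanup was done in the database).
openstack recordset list example-project.wmflabs.org. --type NS
```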
2020-06-09
15:25 <arturo> icinga downtime everything cloud* lab* for 2h more (T253780) [admin]
14:09 <andrewbogott> stopping puppet, all designate services and all pdns services on cloudservices1004 for T253780 [admin]
14:01 <arturo> icinga downtime everything cloud* lab* for 2h (T253780) [admin]
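For the 14:09 entry, stopping "all designate services and all pdns services" means first keeping puppet from restarting them and then stopping the units. A hedged sketch; the exact unit set present on cloudservices1004 at the time is an assumption:

```
# Keep puppet from restarting the services mid-maintenance.
sudo puppet agent --disable "T253780: designate/pdns maintenance"

# Stop the Designate daemons and both PowerDNS services
# (unit names are the usual Debian ones and may not match the host exactly).
sudo systemctl stop 'designate-*'
sudo systemctl stop pdns pdns-recursor
```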
2020-06-05
15:08 <andrewbogott> trying to re-enable puppet without losing cumin contact, as per https://phabricator.wikimedia.org/T254589 [admin]
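Re-enabling puppet in small batches limits the blast radius if the recovered /labs/private change is still broken, which would otherwise also cut off cumin access. A hedged sketch; the host selector, batch size and sleep are illustrative, not what was actually run:

```
# Re-enable and run puppet in small batches with a pause between batches.
sudo cumin -b 10 -s 60 'some-host-selector' 'puppet agent --enable && puppet agent --test'
```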
2020-06-04
14:24 <andrewbogott> disabling puppet on all instances for /labs/private recovery [admin]
14:23 <arturo> disabling puppet on all instances for /labs/private recovery [admin]
2020-05-28
23:02 <bd808> `/usr/local/sbin/maintain-dbusers --debug harvest-replicas` (T253930) [admin]
13:36 <andrewbogott> rebuilding cloudservices2002-dev with Buster [admin]
00:33 <andrewbogott> shutting down cloudservices2002-dev to see if we can live without it. This is in anticipation of rebuilding it entirely for T253780 [admin]
2020-05-27
23:29 <andrewbogott> disabling the backup job on cloudbackup2001 (just like last week) so the backup doesn't start while Brooke is rebuilding labstore1004 tomorrow. [admin]
06:03 <bd808> `systemctl start mariadb` on clouddb1001 following reboot (take 2) [admin]
05:58 <bd808> `systemctl start mariadb` on clouddb1001 following reboot [admin]
05:53 <bd808> Hard reboot of clouddb1001 via Horizon. Console unresponsive. [admin]
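The 05:53 hard reboot was done through Horizon; the CLI equivalent against the Nova API would look roughly like this (a sketch, assuming admin credentials and that the instance name resolves unambiguously):

```
# Hard-reboot an unresponsive instance; same action as Horizon's "Hard Reboot Instance".
openstack server reboot --hard clouddb1001
```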
2020-05-25
16:35 <arturo> [codfw1dev] created zone `0-29.57.15.185.in-addr.arpa.` (T247972) [admin]
2020-05-21
19:23 <andrewbogott> disabling puppet on cloudbackup2001 to prevent the backup job from starting during maintenance [admin]
19:16 <andrewbogott> systemctl disable block_sync-tools-project.service on cloudbackup2001.codfw.wmnet to avoid stepping on current upgrade [admin]
15:48 <andrewbogott> re-imaging cloudnet1003 with Buster [admin]
2020-05-19
22:59 <bd808> `apt-get install mariadb-client` on cloudcontrol1003 [admin]
21:12 <bd808> Migrating wcdo.wcdo.eqiad.wmflabs to cloudvirt1023 (T251065) [admin]
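Moving wcdo's instance onto cloudvirt1023 is a Nova migration. A hedged sketch with the plain OpenStack client; the flag spelling differs between client versions (newer ones use --live-migration with --host), and WMCS also has wrapper scripts for this, so this may not be the exact command used:

```
# Live-migrate an instance to a named hypervisor (older client spelling;
# instance UUID is a placeholder).
openstack server migrate --live cloudvirt1023.eqiad.wmnet <instance-uuid>

# Watch until the instance reports ACTIVE on the new host.
openstack server show <instance-uuid> -c status -c OS-EXT-SRV-ATTR:host
```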
2020-05-18
21:37 <andrewbogott> rebuilding cloudnet2003-dev with Buster [admin]
2020-05-15
22:10 <bd808> Added reedy as projectadmin in cloudinfra project (T249774) [admin]
22:05 <bd808> Added reedy as projectadmin in admin project (T249774) [admin]
18:44 <bstorm_> rebooting cloudvirt-wdqs1003 T252831 [admin]
15:47 <bd808> Manually running wmcs-novastats-dnsleaks from cloudcontrol1003 (T252889) [admin]
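The two 22:xx role additions above are Keystone role assignments; done from the CLI rather than the Horizon/wikitech interface they would look roughly like this (a sketch; the actual grants may well have gone through the web UI):

```
# Grant the projectadmin role to a user in each project.
openstack role add --user reedy --project cloudinfra projectadmin
openstack role add --user reedy --project admin projectadmin
```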
2020-05-14
23:28 <bstorm_> downtimed cloudvirt1004/6 and cloudvirt-wdqs1003 until tomorrow around this time T252831 [admin]
22:21 <bstorm_> upgrading qemu-system-x86 on cloudvirt1006 to backports version T252831 [admin]
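Pulling a single package from backports is a targeted apt call. A hedged sketch for the 22:21 entry; the release name is an assumption about the host's Debian version at the time, and the backports repository must already be configured:

```
# Install qemu-system-x86 from backports on one hypervisor
# (release name is an assumption).
sudo apt-get update
sudo apt-get install -t buster-backports qemu-system-x86
```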