2023-01-04
14:12 <dhinus> deleted CNAME wikilabels.db.svc.eqiad.wmflabs (T307389) [clouddb-services]
2022-12-09
17:20 <wm-bot2> Increased quotas by 4000 gigabytes (T301949) - cookbook ran by fran@wmf3169 [clouddb-services]
13:50 <dhinus> clouddb100 set innodb_file_format=Barracuda and innodb_large_prefix=1 (those values were lost after a reboot) [clouddb-services]
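Note: values set with SET GLOBAL are dynamic only and do not survive a restart, which matches the "lost after a reboot" remark; keeping them permanently would also need the server config. A minimal sketch of restoring them from a shell, assuming root access to the local MariaDB socket:
  sudo mysql -e "SET GLOBAL innodb_file_format = 'Barracuda'; SET GLOBAL innodb_large_prefix = 1;"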
2022-12-07
22:51 <dhinus> remove read_only mode on clouddb1001 (SET GLOBAL read_only = 0;) [clouddb-services]
18:05 <wm-bot2> Increased quotas by 100 gigabytes (T301949) - cookbook ran by fran@wmf3169 [clouddb-services]
2022-11-11
17:37 <dhinus> mariadb restarted on clouddb1002 and again in sync with primary (took a bit to catch up) T301949 [clouddb-services]
16:14 <dhinus> rsyncing a single database from clouddb1002 to tools-db-2 (as a test for T301949) [clouddb-services]
16:12 <dhinus> stopping mariadb for a few minutes on replica server clouddb1002 (T301949) [clouddb-services]
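Note: the 16:12, 16:14 and 17:37 entries describe stopping the replica, copying one database directory, and letting replication catch up. A rough sketch of that sequence, assuming the data directory is /srv/labsdb/data and using a hypothetical database name (the log does not record the real path or which database was copied):
  # on clouddb1002: stop the replica so the on-disk files are consistent
  sudo systemctl stop mariadb
  # copy a single database directory to the new host (db name is hypothetical)
  sudo rsync -a /srv/labsdb/data/s12345__example/ tools-db-2:/srv/labsdb/data/s12345__example/
  # bring the replica back and watch it catch up with the primary
  sudo systemctl start mariadb
  sudo mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master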
2022-11-08
08:38 <taavi> delete clouddb-wikilabels-01,02 T307389 [clouddb-services]
2022-11-03
14:39 <dhinus> depooling dbproxy1019 'confctl select "service=wikireplicas-b,name=dbproxy1019" set/pooled=no' T313445 [clouddb-services]
13:58 <dhinus> shutting down dbproxy1019 T313445 [clouddb-services]
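Note: the 14:39 depool goes through conftool. A sketch of checking the proxy's state and repooling it afterwards with the same selector (assuming the same confctl object layout):
  sudo confctl select "service=wikireplicas-b,name=dbproxy1019" get
  sudo confctl select "service=wikireplicas-b,name=dbproxy1019" set/pooled=yes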
2022-11-02
09:37 <taavi> shut down the wikilabels servers T307389 [clouddb-services]
2022-07-02
05:56 <taavi> toolsdb: add s54518__mw to list of replication ignored databases, data mismatch between primary and replica [clouddb-services]
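Note: one way to add a database to a MariaDB replica's ignore list is a replicate_ignore_db entry in the server config followed by a restart. A sketch assuming a Debian-style include directory; the real change on toolsdb may well have been made through Puppet instead:
  printf '[mysqld]\nreplicate_ignore_db = s54518__mw\n' | sudo tee -a /etc/mysql/mariadb.conf.d/99-replication-filters.cnf
  sudo systemctl restart mariadb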
2022-06-28
16:30 <taavi> stopped mariadb on clouddb1002, starting rsync clouddb1002->tools-db-1 on a root screen session on tools-db-1 T301949 [clouddb-services]
2022-05-17
14:12 <taavi> enable gtid on toolsdb T301993 [clouddb-services]
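Note: the log does not record the exact statements; on MariaDB, switching an existing replica over to GTID typically looks like the sketch below, assuming the primary already writes GTIDs to its binlogs:
  sudo mysql -e "STOP SLAVE; CHANGE MASTER TO master_use_gtid=slave_pos; START SLAVE;"
  sudo mysql -e "SHOW SLAVE STATUS\G" | grep -iE 'gtid|running'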
2022-02-17
12:03 <taavi> add myself as a project member so I don't need to ssh in as root@ [clouddb-services]
2021-09-02
18:52 <bstorm> removed strange old duplicate cron for osmupdater T285668 [clouddb-services]
2021-08-31
20:19 <bstorm> attempting to resync OSMDB back to Feb 21st 2019 T285668 [clouddb-services]
2021-08-30
22:07 <bstorm> restarting osmdb on clouddb1003 to try to capture enough connections T285668 [clouddb-services]
21:53 <bstorm> disable puppet and osm updater script on clouddb1003 T285668 [clouddb-services]
2021-05-26
16:53 <bstorm> restarting postgresql since T220164 was closed. Hoping all connections don't get used up again. [clouddb-services]
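Note: the worry here is PostgreSQL exhausting its connection slots. A quick way to watch usage against the configured limit after the restart, assuming local access as the postgres user:
  sudo -u postgres psql -c "SELECT count(*) AS connections FROM pg_stat_activity;"
  sudo -u postgres psql -c "SHOW max_connections;"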
2021-05-25
14:35 <dcaro> taking down clouddb1002 replica for reboot of cloudvirt1020 (T275893) [clouddb-services]
2021-05-04
22:57 <bstorm> manually added cnames for toolsdb, osmdb and wikilabelsdb in db.svc.wikimedia.cloud zone T278252 [clouddb-services]
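Note: assuming the db.svc.wikimedia.cloud zone is served by OpenStack Designate, adding one of these CNAMEs from the CLI would look roughly like the template below; the target is a placeholder, as the log does not record it or the exact commands used:
  openstack recordset create db.svc.wikimedia.cloud. toolsdb --type CNAME --record <target-host>.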
2021-04-05
09:56 <arturo> make jhernandez (IRC joakino) projectadmin (T278975) [clouddb-services]
2021-03-10
12:44 <arturo> briefly stopping VM clouddb-wikireplicas-proxy-2 to migrate hypervisor [clouddb-services]
10:57 <arturo> briefly stopped VM clouddb-wikireplicas-proxy-1 to disable VMX cpu flag [clouddb-services]
2021-03-02
23:29 <bstorm> bringing toolsdb back up 😟 [clouddb-services]
2021-02-26
23:20 <bstorm> rebooting clouddb-wikilabels-02 for patches [clouddb-services]
22:55 <bstorm> rebooting clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 before (hopefully) many people are using them [clouddb-services]
2021-01-29
18:21 <bstorm> deleting clouddb-toolsdb-03 as it isn't used [clouddb-services]
2020-12-23
19:20 <bstorm> created clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 as well as the 16 neutron ports for wikireplicas proxying [clouddb-services]
2020-12-17
02:14 <bstorm> toolsdb is back and so is the replica T266587 [clouddb-services]
01:10 <bstorm> the sync is done and we have a good copy of the toolsdb data, proceeding with the upgrades and stuff to that hypervisor while configuring replication to work again T266587 [clouddb-services]
2020-12-16
18:34 <bstorm> restarted sync from toolsdb to its replica server after cleanup to prevent disk filling T266587 [clouddb-services]
17:31 <bstorm> sync started from toolsdb to its replica server T266587 [clouddb-services]
17:29 <bstorm> stopped mariadb on the replica T266587 [clouddb-services]
17:28 <bstorm> shutdown toolsdb T266587 [clouddb-services]
17:24 <bstorm> setting toolsdb to readonly to prepare for shutdown T266587 [clouddb-services]
17:06 <bstorm> switching the secondary config back to clouddb1002 in order to minimize concerns about affecting ceph performance T266587 [clouddb-services]
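Note: after a file-level sync like this, replication on the copy is normally re-pointed at the primary before the slave threads are started again. A MariaDB sketch with placeholder host, credentials and binlog coordinates, none of which are recorded in the log:
  sudo mysql -e "CHANGE MASTER TO master_host='<primary>', master_user='repl', master_password='<secret>', master_log_file='<binlog-file>', master_log_pos=<pos>; START SLAVE;"
  sudo mysql -e "SHOW SLAVE STATUS\G"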
2020-11-15
19:45 <bstorm> restarting the import to clouddb-toolsdb-03 with --max-allowed-packet=1G to rule out that as a problem entirely T266587 [clouddb-services]
19:36 <bstorm> set max_allowed_packet to 64MB on clouddb-toolsdb-03 T266587 [clouddb-services]
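Note: the packet limit exists on both the server and the client. A sketch of the two settings these entries refer to, with a hypothetical dump file name:
  # server side: raise the limit for the running instance
  sudo mysql -e "SET GLOBAL max_allowed_packet = 64*1024*1024;"
  # client side: re-run the import with a larger client packet limit
  mysql --max-allowed-packet=1G < dump.sql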
2020-11-11
08:20 <bstorm> loading dump on the replica T266587 [clouddb-services]
06:06 <bstorm> dump completed, transferring to replica to start things up again T266587 [clouddb-services]
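Note: a minimal sketch of the dump / transfer / load cycle these entries describe, with hypothetical file name and paths; the log does not record the exact commands or options:
  # on the primary: logical dump with the binlog position recorded as a comment
  sudo mysqldump --all-databases --single-transaction --master-data=2 > /srv/toolsdb.sql
  # ship it to the replica and load it there
  rsync -a /srv/toolsdb.sql clouddb-toolsdb-03:/srv/
  sudo mysql < /srv/toolsdb.sql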
2020-11-10
16:24 <bstorm> set toolsdb to read-only T266587 [clouddb-services]
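Note: making the server read-only before the dump is the counterpart of the SET GLOBAL read_only = 0 quoted in the 2022-12-07 entry above; a one-line sketch:
  sudo mysql -e "SET GLOBAL read_only = 1;"   # read_only = 0 re-enables writes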
2020-10-29
23:49 <bstorm> launching clouddb-toolsdb-03 (trying to make a better naming convention) to replace failed replica T266587 [clouddb-services]
2020-10-27
18:04 <bstorm> set clouddb1002 to read-only since it was not supposed to be writeable anyway T266587 [clouddb-services]
2020-10-20
18:36 <bstorm> brought up mariadb and replication on clouddb1002 T263677 [clouddb-services]
17:14 <bstorm> shutting down clouddb1003 T263677 [clouddb-services]
17:13 <bstorm> stopping postgresql on clouddb1003 T263677 [clouddb-services]
17:08 <bstorm> poweroff clouddb1002 T263677 [clouddb-services]