2023-02-21
17:18 <andrewbogott> shutting down postgres on clouddb1004/1003, then shutting down the vms [clouddb-services]

2023-02-20
20:19 <taavi> manually start prometheus-mysql-exporter on clouddb1001 [clouddb-services]
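
(For context, a minimal sketch of starting the exporter by hand, assuming it runs as a systemd unit named after the package; not necessarily the exact commands used here.)

    # start the metrics exporter and confirm it stayed up
    sudo systemctl start prometheus-mysql-exporter
    sudo systemctl status prometheus-mysql-exporter --no-pager
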
17:11 <dhinus> mariadb on clouddb1001 is running again [clouddb-services]
17:07 <dhinus> stopping mariadb on clouddb1001 for about 5 minutes (T329970) [clouddb-services]
11:23 <dhinus> mariadb on clouddb1001 is running again [clouddb-services]
11:16 <dhinus> stopping mariadb on clouddb1001 for about 2 minutes (T329970) [clouddb-services]

2023-02-17
16:13 <dhinus> drop unused database s51203__baglama2_p T323502 [clouddb-services]
16:12 <dhinus> drop unused database s52467__new_hashtags T329461 [clouddb-services]
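
(A hedged sketch of how such a drop might be done from a shell on the database host; the database names are from the log entries, the pre-check is illustrative.)

    # confirm the database really looks unused before removing it
    sudo mysql -e "SELECT table_name, update_time FROM information_schema.tables WHERE table_schema = 's52467__new_hashtags';"
    sudo mysql -e "DROP DATABASE s52467__new_hashtags;"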

2023-01-30
11:02 <dhinus> 'SET GLOBAL read_only = 0;' after restarting mariadb in clouddb1001 (T328273) [clouddb-services]
10:53 <dhinus> restarting mariadb in clouddb1001 to apply minor version upgrade (T328273) [clouddb-services]
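
(A hedged sketch of the restart-and-reopen sequence these two entries describe, assuming the service unit is named mariadb; T328273 has the actual steps.)

    # apply the minor version upgrade by restarting the server
    sudo systemctl restart mariadb
    # per the entry above, the server comes back read-only, so writes are re-enabled explicitly
    sudo mysql -e "SET GLOBAL read_only = 0;"
    sudo mysql -e "SELECT @@read_only;"   # expect 0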

2023-01-27
15:52 <taavi> repaired wmf-mariadb101 installation on clouddb1001 [clouddb-services]
10:35 <dhinus> clouddb1001 unlock tables; (T301949) [clouddb-services]
10:33 <dhinus> clouddb1001 flush tables with read lock; (T301949) [clouddb-services]
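
(A hedged sketch of the lock/unlock pattern in these entries. The key detail is that FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays open, so it is run interactively and released with UNLOCK TABLES once the consistent copy or snapshot is done.)

    sudo mysql
        FLUSH TABLES WITH READ LOCK;   -- blocks writes; keep this session open
        -- ... take the consistent copy/snapshot from another terminal ...
        UNLOCK TABLES;                 -- release the lock when finished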

2023-01-25
14:37 <dhinus> clouddb1001 unlock tables; (T301949) [clouddb-services]
14:34 <dhinus> clouddb1001 flush tables with read lock; (T301949) [clouddb-services]

2023-01-24
16:01 <dhinus> clouddb1001 unlock tables; (T301949) [clouddb-services]
15:46 <dhinus> clouddb1001 flush tables with read lock; (T301949) [clouddb-services]

2023-01-05
09:12 <dhinus> disabled puppet checks on clouddb100[3-4] (touch /.no-puppet-checks) T323159 [clouddb-services]
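
(A hedged sketch based on the flag file named in the entry; /.no-puppet-checks is understood here as a marker file that silences the puppet freshness check on those hosts.)

    # disable puppet checks on both replica hosts
    for host in clouddb1003 clouddb1004; do
        ssh "$host" sudo touch /.no-puppet-checks
    done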

2023-01-04
14:12 <dhinus> deleted CNAME wikilabels.db.svc.eqiad.wmflabs (T307389) [clouddb-services]

2022-12-09
17:20 <wm-bot2> Increased quotas by 4000 gigabytes (T301949) - cookbook ran by fran@wmf3169 [clouddb-services]
13:50 <dhinus> clouddb100 set innodb_file_format=Barracuda and innodb_large_prefix=1 (those values were lost after a reboot) [clouddb-services]
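
(A hedged sketch of re-applying those settings at runtime; on MariaDB 10.1-era servers both variables are dynamic. As the entry notes they do not survive a restart, so keeping them would also require the matching lines in the server config.)

    sudo mysql -e "SET GLOBAL innodb_file_format = 'Barracuda';"
    sudo mysql -e "SET GLOBAL innodb_large_prefix = 1;"
    sudo mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('innodb_file_format','innodb_large_prefix');"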

2022-12-07
22:51 <dhinus> remove read_only mode on clouddb1001 (SET GLOBAL read_only = 0;) [clouddb-services]
18:05 <wm-bot2> Increased quotas by 100 gigabytes (T301949) - cookbook ran by fran@wmf3169 [clouddb-services]

2022-11-11
17:37 <dhinus> mariadb restarted on clouddb1002 and again in sync with primary (took a bit to catch up) T301949 [clouddb-services]
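
(A hedged sketch of how catching up with the primary is typically confirmed on a MariaDB replica; not necessarily the exact check used here.)

    # Seconds_Behind_Master should fall to 0 once the replica has caught up
    sudo mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
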
16:14 <dhinus> rsyncing a single database from clouddb1002 to tools-db-2 (as a test for T301949) [clouddb-services]
16:12 <dhinus> stopping mariadb for a few minutes on replica server clouddb1002 (T301949) [clouddb-services]

2022-11-08
08:38 <taavi> delete clouddb-wikilabels-01,02 T307389 [clouddb-services]

2022-11-03
14:39 <dhinus> depooling dbproxy1019 'confctl select "service=wikireplicas-b,name=dbproxy1019" set/pooled=no' T313445 [clouddb-services]
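
(The depool command is quoted in the entry itself; a hedged sketch of the pair, with the repool counterpart assumed to be the symmetric set/pooled=yes.)

    # take dbproxy1019 out of the wikireplicas-b pool before shutting it down
    confctl select "service=wikireplicas-b,name=dbproxy1019" set/pooled=no
    # and later, to put it back:
    confctl select "service=wikireplicas-b,name=dbproxy1019" set/pooled=yes
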
13:58 <dhinus> shutting down dbproxy1019 T313445 [clouddb-services]

2022-11-02
09:37 <taavi> shut down the wikilabels servers T307389 [clouddb-services]

2022-07-02
05:56 <taavi> toolsdb: add s54518__mw to list of replication ignored databases, data mismatch between primary and replica [clouddb-services]
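
(A hedged sketch of one way to add a database to the replica's ignore list, assuming MariaDB's dynamic replication-filter variables; the log does not say whether it was done this way or via a replicate-ignore-db line in the replica's config.)

    # on the replica, with replication stopped; any existing entries must be
    # repeated in the comma-separated value
    sudo mysql -e "STOP SLAVE;"
    sudo mysql -e "SET GLOBAL replicate_ignore_db = 's54518__mw';"
    sudo mysql -e "START SLAVE;"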

2022-06-28
16:30 <taavi> stopped mariadb on clouddb1002, starting rsync clouddb1002->tools-db-1 on a root screen session on tools-db-1 T301949 [clouddb-services]
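
(A hedged sketch of the copy described above; the data directory path and rsync flags are assumptions, not taken from the log.)

    # on tools-db-1: keep the long copy alive in a root screen session
    sudo screen -S toolsdb-rsync
    # pull the stopped replica's data directory from clouddb1002
    # (the /srv/sqldata path is an assumption, not from the log)
    rsync -a --progress root@clouddb1002:/srv/sqldata/ /srv/sqldata/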

2022-05-17
14:12 <taavi> enable gtid on toolsdb T301993 [clouddb-services]
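
(A hedged sketch of switching a MariaDB replica to GTID-based replication, which is one reading of "enable gtid"; T301993 has the actual change.)

    # on the toolsdb replica: switch the replication connection to GTID positioning
    sudo mysql -e "STOP SLAVE;"
    sudo mysql -e "CHANGE MASTER TO MASTER_USE_GTID = slave_pos;"
    sudo mysql -e "START SLAVE;"
    sudo mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Using_Gtid|Gtid_IO_Pos'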

2022-02-17
12:03 <taavi> add myself as a project member so I don't need to ssh in as root@ [clouddb-services]

2021-09-02
18:52 <bstorm> removed strange old duplicate cron for osmupdater T285668 [clouddb-services]

2021-08-31
20:19 <bstorm> attempting to resync OSMDB back to Feb 21st 2019 T285668 [clouddb-services]

2021-08-30
22:07 <bstorm> restarting osmdb on clouddb1003 to try to capture enough connections T285668 [clouddb-services]
21:53 <bstorm> disable puppet and osm updater script on clouddb1003 T285668 [clouddb-services]

2021-05-26
16:53 <bstorm> restarting postgresql since T220164 was closed. Hoping all connections don't get used up again. [clouddb-services]

2021-05-25
14:35 <dcaro> taking down clouddb1002 replica for reboot of cloudvirt1020 (T275893) [clouddb-services]

2021-05-04
22:57 <bstorm> manually added cnames for toolsdb, osmdb and wikilabelsdb in db.svc.wikimedia.cloud zone T278252 [clouddb-services]
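
(A quick way to verify such records once they exist; the names are from the log entry, the dig usage is generic.)

    for name in toolsdb osmdb wikilabelsdb; do
        dig +short CNAME "${name}.db.svc.wikimedia.cloud"
    done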

2021-04-05
09:56 <arturo> make jhernandez (IRC joakino) projectadmin (T278975) [clouddb-services]

2021-03-10
12:44 <arturo> briefly stopping VM clouddb-wikireplicas-proxy-2 to migrate hypervisor [clouddb-services]
10:57 <arturo> briefly stopped VM clouddb-wikireplicas-proxy-1 to disable VMX cpu flag [clouddb-services]

2021-03-02
23:29 <bstorm> bringing toolsdb back up 😟 [clouddb-services]

2021-02-26
23:20 <bstorm> rebooting clouddb-wikilabels-02 for patches [clouddb-services]
22:55 <bstorm> rebooting clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 before (hopefully) many people are using them [clouddb-services]

2021-01-29
18:21 <bstorm> deleting clouddb-toolsdb-03 as it isn't used [clouddb-services]

2020-12-23
19:20 <bstorm> created clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 as well as the 16 neutron ports for wikireplicas proxying [clouddb-services]

2020-12-17
02:14 <bstorm> toolsdb is back and so is the replica T266587 [clouddb-services]