2022-07-02
05:56 <taavi> toolsdb: add s54518__mw to list of replication ignored databases, data mismatch between primary and replica [clouddb-services]
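For context on the entry above: on MariaDB, per-database replication filters are dynamic server variables, so a mismatched tool database can be excluded on the replica without a restart. A hedged sketch (the exact statements run on toolsdb are not recorded in this log, and the filter may equally be set in my.cnf):

```sql
-- Run on the replica; the filter only takes effect while the slave
-- threads are stopped. Sketch only, not the command actually used.
-- Backslashes keep the underscores literal in the wildcard pattern.
STOP SLAVE;
SET GLOBAL replicate_wild_ignore_table = 's54518\_\_mw.%';
START SLAVE;
```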
2022-06-28
16:30 <taavi> stopped mariadb on clouddb1002, starting rsync clouddb1002->tools-db-1 on a root screen session on tools-db-1 T301949 [clouddb-services]
2022-05-17
14:12 <taavi> enable gtid on toolsdb T301993 [clouddb-services]
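Enabling GTID for an existing MariaDB replica, as logged above, normally means switching the slave connection from binlog file/position coordinates to GTID tracking. A minimal sketch (the actual procedure on toolsdb lives in T301993, not here):

```sql
-- Run on the replica while replication is configured and caught up.
STOP SLAVE;
CHANGE MASTER TO master_use_gtid = slave_pos;
START SLAVE;
-- Verify: SHOW SLAVE STATUS should report Using_Gtid: Slave_Pos.
```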
2022-02-17
12:03 <taavi> add myself as a project member so I don't need to ssh in as root@ [clouddb-services]
2021-09-02
18:52 <bstorm> removed strange old duplicate cron for osmupdater T285668 [clouddb-services]
2021-08-31
20:19 <bstorm> attempting to resync OSMDB back to Feb 21st 2019 T285668 [clouddb-services]
2021-08-30
22:07 <bstorm> restarting osmdb on clouddb1003 to try to capture enough connections T285668 [clouddb-services]
21:53 <bstorm> disable puppet and osm updater script on clouddb1003 T285668 [clouddb-services]
2021-05-26
16:53 <bstorm> restarting postgresql since T220164 was closed. Hoping all connections don't get used up again. [clouddb-services]
2021-05-25
14:35 <dcaro> taking down clouddb1002 replica for reboot of cloudvirt1020 (T275893) [clouddb-services]
2021-05-04
22:57 <bstorm> manually added cnames for toolsdb, osmdb and wikilabelsdb in db.svc.wikimedia.cloud zone T278252 [clouddb-services]
2021-04-05
09:56 <arturo> make jhernandez (IRC joakino) projectadmin (T278975) [clouddb-services]
2021-03-10
12:44 <arturo> briefly stopping VM clouddb-wikireplicas-proxy-2 to migrate hypervisor [clouddb-services]
10:57 <arturo> briefly stopped VM clouddb-wikireplicas-proxy-1 to disable VMX cpu flag [clouddb-services]
2021-03-02
23:29 <bstorm> bringing toolsdb back up 😟 [clouddb-services]
2021-02-26
23:20 <bstorm> rebooting clouddb-wikilabels-02 for patches [clouddb-services]
22:55 <bstorm> rebooting clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 before (hopefully) many people are using them [clouddb-services]
2021-01-29
18:21 <bstorm> deleting clouddb-toolsdb-03 as it isn't used [clouddb-services]
2020-12-23
19:20 <bstorm> created clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 as well as the 16 neutron ports for wikireplicas proxying [clouddb-services]
2020-12-17
02:14 <bstorm> toolsdb is back and so is the replica T266587 [clouddb-services]
01:10 <bstorm> the sync is done and we have a good copy of the toolsdb data, proceeding with the upgrades and stuff to that hypervisor while configuring replication to work again T266587 [clouddb-services]
2020-12-16
18:34 <bstorm> restarted sync from toolsdb to its replica server after cleanup to prevent disk filling T266587 [clouddb-services]
17:31 <bstorm> sync started from toolsdb to its replica server T266587 [clouddb-services]
17:29 <bstorm> stopped mariadb on the replica T266587 [clouddb-services]
17:28 <bstorm> shutdown toolsdb T266587 [clouddb-services]
17:24 <bstorm> setting toolsdb to readonly to prepare for shutdown T266587 [clouddb-services]
17:06 <bstorm> switching the secondary config back to clouddb1002 in order to minimize concerns about affecting ceph performance T266587 [clouddb-services]
2020-11-15
19:45 <bstorm> restarting the import to clouddb-toolsdb-03 with --max-allowed-packet=1G to rule out that as a problem entirely T266587 [clouddb-services]
19:36 <bstorm> set max_allowed_packet to 64MB on clouddb-toolsdb-03 T266587 [clouddb-services]
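The two entries above tune the same limit on both sides of the import: `--max-allowed-packet` is the client flag, and the server has a matching `max_allowed_packet` system variable; rows larger than either limit make the load fail. A sketch of the server-side change (values as logged above):

```sql
-- Dynamic change on the running server; persist it in my.cnf
-- under [mysqld] to survive a restart.
SET GLOBAL max_allowed_packet = 1073741824;  -- 1G, matching the import flag
```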
2020-11-11
08:20 <bstorm> loading dump on the replica T266587 [clouddb-services]
06:06 <bstorm> dump completed, transferring to replica to start things up again T266587 [clouddb-services]
2020-11-10
16:24 <bstorm> set toolsdb to read-only T266587 [clouddb-services]
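Putting toolsdb into read-only mode ahead of the dump, as above, is typically a one-variable change on MariaDB. A sketch (note that accounts with the SUPER privilege bypass `read_only`):

```sql
SET GLOBAL read_only = 1;
-- Revert after the maintenance window:
-- SET GLOBAL read_only = 0;
```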
2020-10-29
23:49 <bstorm> launching clouddb-toolsdb-03 (trying to make a better naming convention) to replace failed replica T266587 [clouddb-services]
2020-10-27
18:04 <bstorm> set clouddb1002 to read-only since it was not supposed to be writeable anyway T266587 [clouddb-services]
2020-10-20
18:36 <bstorm> brought up mariadb and replication on clouddb1002 T263677 [clouddb-services]
17:14 <bstorm> shutting down clouddb1003 T263677 [clouddb-services]
17:13 <bstorm> stopping postgresql on clouddb1003 T263677 [clouddb-services]
17:08 <bstorm> poweroff clouddb1002 T263677 [clouddb-services]
17:08 <bstorm> stopping mariadb on clouddb1002 T263677 [clouddb-services]
17:07 <bstorm> shut down replication on clouddb1002 (now with task) T263677 [clouddb-services]
17:05 <bstorm> shut down replication on clouddb1002 [clouddb-services]
2020-09-29
23:48 <bstorm> restarting postgresql on clouddb1003 to clear the number of connections [clouddb-services]
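A restart clears PostgreSQL's connection slots wholesale; the same situation can be inspected, or handled more surgically, from SQL. A hedged sketch (not necessarily what was run on clouddb1003, which the log does not record):

```sql
-- How close are we to max_connections?
SELECT count(*) FROM pg_stat_activity;
-- Terminate only idle sessions instead of restarting the whole server:
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'idle';
```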
2020-09-08
18:21 <bstorm> copied the profile::mariadb::section_ports key into prefix puppet to fix puppet after the refactor for wmfmariadbpy [clouddb-services]
2020-07-16
17:56 <bstorm> Significantly lifted traffic shaping limits on clouddb1001/toolsdb to improve network performance T257884 [clouddb-services]
2020-06-09
22:55 <bstorm_> Reset the passwords for T254931 [clouddb-services]
2020-06-04
17:41 <bd808> Restarting mariadb after hand applying {{gerrit|602433}} (T253738) [clouddb-services]
2020-06-01
18:51 <bstorm_> restarting mariadb on clouddb1002 T253738 [clouddb-services]
2020-05-29
23:42 <bstorm_> stopped puppet and restarting mariadb on clouddb1002 after filtering out a table T253738 [clouddb-services]
2020-03-27
12:26 <arturo> add myself as project admin [clouddb-services]
2020-03-20
13:43 <jeh> upgrade puppetmaster to v5 T241719 [clouddb-services]