2021-05-04 §
22:57 <bstorm> manually added cnames for toolsdb, osmdb and wikilabelsdb in db.svc.wikimedia.cloud zone T278252 [clouddb-services]
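For context, adding a CNAME in an OpenStack Designate zone is typically done with the openstack CLI along these lines (a sketch only; the record target shown is a placeholder, since the log does not record it):
    # Create a CNAME for toolsdb in the db.svc.wikimedia.cloud. zone.
    # The --record target below is illustrative, not the value actually used.
    openstack recordset create db.svc.wikimedia.cloud. toolsdb \
        --type CNAME \
        --record clouddb1001.clouddb-services.eqiad1.wikimedia.cloud.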
2021-04-05 §
09:56 <arturo> make jhernandez (IRC joakino) projectadmin (T278975) [clouddb-services]
2021-03-10 §
12:44 <arturo> briefly stopping VM clouddb-wikireplicas-proxy-2 to migrate hypervisor [clouddb-services]
10:57 <arturo> briefly stopped VM clouddb-wikireplicas-proxy-1 to disable VMX cpu flag [clouddb-services]
2021-03-02 §
23:29 <bstorm> bringing toolsdb back up 😟 [clouddb-services]
2021-02-26 §
23:20 <bstorm> rebooting clouddb-wikilabels-02 for patches [clouddb-services]
22:55 <bstorm> rebooting clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 before (hopefully) many people are using them [clouddb-services]
2021-01-29 §
18:21 <bstorm> deleting clouddb-toolsdb-03 as it isn't used [clouddb-services]
2020-12-23 §
19:20 <bstorm> created clouddb-wikireplicas-proxy-1 and clouddb-wikireplicas-proxy-2 as well as the 16 neutron ports for wikireplicas proxying [clouddb-services]
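For reference, a Neutron port for this kind of proxying setup can be pre-created with the openstack CLI roughly as follows (a sketch; the network name and port name are assumptions, not taken from the log):
    # Pre-create a port on the project network so its address can be reserved
    # for the wikireplicas proxy. Network and port names are placeholders.
    openstack port create --network lan-flat-cloudinstances2b wikireplicas-proxy-port-1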
2020-12-17 §
02:14 <bstorm> toolsdb is back and so is the replica T266587 [clouddb-services]
01:10 <bstorm> the sync is done and we have a good copy of the toolsdb data, proceeding with the upgrades and stuff to that hypervisor while configuring replication to work again T266587 [clouddb-services]
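For context, re-pointing the replica at toolsdb after a copy like this is normally a CHANGE MASTER TO plus START SLAVE on the replica, roughly as sketched below (host, credentials, and binlog coordinates are placeholders; the real values are not in the log):
    # On the replica: point replication at the primary and start it.
    # All values below are illustrative.
    sudo mysql -e "
      CHANGE MASTER TO
        MASTER_HOST='clouddb1001.clouddb-services.eqiad1.wikimedia.cloud',
        MASTER_USER='repl',
        MASTER_PASSWORD='********',
        MASTER_LOG_FILE='mysql-bin.000123',
        MASTER_LOG_POS=4;
      START SLAVE;"
    # Confirm the replica threads are running and catching up.
    sudo mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'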
2020-12-16 §
18:34 <bstorm> restarted sync from toolsdb to its replica server after cleanup to prevent disk filling T266587 [clouddb-services]
17:31 <bstorm> sync started from toolsdb to its replica server T266587 [clouddb-services]
17:29 <bstorm> stopped mariadb on the replica T266587 [clouddb-services]
17:28 <bstorm> shutdown toolsdb T266587 [clouddb-services]
17:24 <bstorm> setting toolsdb to readonly to prepare for shutdown T266587 [clouddb-services]
17:06 <bstorm> switching the secondary config back to clouddb1002 in order to minimize concerns about affecting ceph performance T266587 [clouddb-services]
2020-11-15 §
19:45 <bstorm> restarting the import to clouddb-toolsdb-03 with --max-allowed-packet=1G to rule out that as a problem entirely T266587 [clouddb-services]
19:36 <bstorm> set max_allowed_packet to 64MB on clouddb-toolsdb-03 T266587 [clouddb-services]
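For context, max_allowed_packet can be raised either at runtime on the server or per client invocation; a sketch of both (the 1G value matches the log entry above, the database and dump file names are placeholders):
    # Raise the server-side limit at runtime (applies to new connections).
    sudo mysql -e "SET GLOBAL max_allowed_packet = 1024 * 1024 * 1024;"
    # Or pass the limit to the client doing the import.
    mysql --max-allowed-packet=1G some_database < dump.sql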
2020-11-11 §
08:20 <bstorm> loading dump on the replica T266587 [clouddb-services]
06:06 <bstorm> dump completed, transferring to replica to start things up again T266587 [clouddb-services]
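For reference, a dump-and-transfer of this kind typically looks like the sketch below (paths, options, and the exact dump method are assumptions; the log does not say how the dump was taken):
    # On the primary: take a consistent logical dump with binlog coordinates recorded.
    mysqldump --all-databases --single-transaction --master-data=2 \
        | gzip > /srv/backup/toolsdb-dump.sql.gz
    # Transfer to the replica and load it there.
    scp /srv/backup/toolsdb-dump.sql.gz clouddb-toolsdb-03:/srv/backup/
    ssh clouddb-toolsdb-03 'zcat /srv/backup/toolsdb-dump.sql.gz | mysql'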
2020-11-10 §
16:24 <bstorm> set toolsdb to read-only T266587 [clouddb-services]
2020-10-29 §
23:49 <bstorm> launching clouddb-toolsdb-03 (trying to make a better naming convention) to replace failed replica T266587 [clouddb-services]
2020-10-27 §
18:04 <bstorm> set clouddb1002 to read-only since it was not supposed to be writeable anyway T266587 [clouddb-services]
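For context, flipping a MariaDB instance to read-only is a one-liner; a minimal sketch, assuming root access over the local socket:
    # Make the server reject writes from non-SUPER users.
    sudo mysql -e "SET GLOBAL read_only = 1;"
    # Verify.
    sudo mysql -e "SELECT @@global.read_only;"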
2020-10-20 §
18:36 <bstorm> brought up mariadb and replication on clouddb1002 T263677 [clouddb-services]
17:14 <bstorm> shutting down clouddb1003 T263677 [clouddb-services]
17:13 <bstorm> stopping postgresql on clouddb1003 T263677 [clouddb-services]
17:08 <bstorm> poweroff clouddb1002 T263677 [clouddb-services]
17:08 <bstorm> stopping mariadb on clouddb1002 T263677 [clouddb-services]
17:07 <bstorm> shut down replication on clouddb1002 (now with task) T263677 [clouddb-services]
17:05 <bstorm> shut down replication on clouddb1002 [clouddb-services]
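For reference, the shutdown sequence logged above for clouddb1002 corresponds roughly to the following (a sketch; the service and unit names are the usual ones, not quoted from the log):
    # On clouddb1002: stop replication cleanly, stop the database, then power off.
    sudo mysql -e "STOP SLAVE;"
    sudo systemctl stop mariadb
    sudo poweroff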
2020-09-29 §
23:48 <bstorm> restarting postgresql on clouddb1003 to clear the number of connections [clouddb-services]
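For context, checking and then clearing the PostgreSQL connection count typically looks like this (a sketch; the exact service unit name on this host may differ):
    # Inspect how many client connections are open.
    sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_activity;"
    # Restart the service to drop them all, as in the log entry above.
    sudo systemctl restart postgresql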
2020-09-08 §
18:21 <bstorm> copied the profile::mariadb::section_ports key into prefix puppet to fix puppet after the refactor for wmfmariadbpy [clouddb-services]
2020-07-16 §
17:56 <bstorm> Significantly lifted traffic shaping limits on clouddb1001/toolsdb to improve network performance T257884 [clouddb-services]
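For context, the shaping limits here are presumably applied via Puppet, but the hand-run equivalent would be adjusting a tc qdisc roughly like this (interface name and rates are placeholders; the real limits are not in the log):
    # Replace the egress shaper on the primary interface with a higher rate cap.
    sudo tc qdisc replace dev eth0 root tbf rate 800mbit burst 512kbit latency 50ms
    # Show the active qdisc to confirm.
    sudo tc -s qdisc show dev eth0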
2020-06-09 §
22:55 <bstorm_> Reset the passwords for T254931 [clouddb-services]
2020-06-04 §
17:41 <bd808> Restarting mariadb after hand applying {{gerrit|602433}} (T253738) [clouddb-services]
2020-06-01 §
18:51 <bstorm_> restarting mariadb on clouddb1002 T253738 [clouddb-services]
2020-05-29 §
23:42 <bstorm_> stopped puppet and restarting mariadb on clouddb1002 after filtering out a table T253738 [clouddb-services]
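For context, "filtering out a table" on a replica is usually done with a replicate_wild_ignore_table rule in the MariaDB config before restarting; a sketch with a made-up database/table pattern and config path:
    # Add a replication ignore rule for one problematic table, then restart MariaDB.
    # The database/table pattern and file path below are purely illustrative.
    cat <<'EOF' | sudo tee /etc/mysql/conf.d/replication-filter.cnf
    [mysqld]
    replicate_wild_ignore_table = s12345__example.%bigtable%
    EOF
    sudo systemctl restart mariadb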
2020-03-27 §
12:26 <arturo> add myself as project admin [clouddb-services]
2020-03-20 §
13:43 <jeh> upgrade puppetmaster to v5 T241719 [clouddb-services]
2020-03-18 §
17:49 <bstorm_> updated the tools-prometheus security group with the current IP addresses for the prometheus servers [clouddb-services]
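For reference, refreshing such a security group usually means replacing the per-source-IP ingress rules; a sketch with placeholder values (the real prometheus addresses and scrape port are not in the log):
    # Allow a (hypothetical) prometheus host to scrape exporters in this project.
    openstack security group rule create tools-prometheus \
        --protocol tcp --dst-port 9100 --remote-ip 172.16.0.10/32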
2019-10-25 §
10:49 <arturo> (jynus) clouddb1002 mariadb (toolsdb secondary) upgrade from 10.1.38 to 10.1.39 is done (T236384, T236420) [clouddb-services]
10:46 <arturo> (jynus) clouddb1002 mariadb (toolsdb secondary) being upgraded from 10.1.38 to 10.1.39 (T236384, T236420) [clouddb-services]
10:45 <arturo> icinga downtime toolschecker for 1 to upgrade clouddb1002 mariadb (toolsdb secondary) (T236384, T236420) [clouddb-services]
10:37 <arturo> clouddb1002 downgrading wmf-mariadb101 from 10.1.41-1 to 10.1.39-1 (T236384, T236420) [clouddb-services]
08:13 <arturo> enable puppet in clouddb1001/clouddb1002 to deploy https://gerrit.wikimedia.org/r/c/operations/puppet/+/546102 (T236384, T236420) [clouddb-services]
2019-10-24 §
18:34 <bstorm_> downgraded clouddb1001 to 10.1.39 T236420 T236384 [clouddb-services]
18:19 <bstorm_> stopped puppet on clouddb1001 and removed unattended-upgrades for now T236384 [clouddb-services]
16:03 <bstorm_> stopped puppet on clouddb1002 and removed unattended-upgrades for now T236384 [clouddb-services]
13:27 <phamhi> add phamhi as user and projectadmin [clouddb-services]