2018-05-08

10:22 <jynus> stop mariadb on db1055 to clone it to db1064 [production]
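(The "stop X to clone it" entries follow a stop-and-copy pattern: quiesce the source, stream the datadir across, restart. A minimal sketch, assuming a plain tar-over-netcat transfer; the data directory, port, and tooling are all assumptions.)

    # On db1055 (source): stop MariaDB so the datadir is consistent on disk.
    systemctl stop mariadb
    tar -C /srv/sqldata -c . | nc -q 1 db1064.eqiad.wmnet 4444
    # On db1064 (destination), with the listener started first:
    nc -l -p 4444 | tar -C /srv/sqldata -x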
10:15 <moritzm> reimaging mw1310, mw1311 (job runners) to stretch [production]
09:58 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1055 (duration: 00m 54s) [production]
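(The "Synchronized wmf-config/..." lines are emitted by scap on the deploy host, tin. A sketch of the depool step, assuming scap's sync-file subcommand: remove db1055 from the pooled hosts in db-eqiad.php, then push the file to the fleet with the message logged above.)

    # On tin, after editing wmf-config/db-eqiad.php:
    scap sync-file wmf-config/db-eqiad.php 'Depool db1055'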
09:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1121 after alter table (duration: 01m 00s) [production]
09:20 <elukey> forced a BBU re-learn cycle on analytics1032 [production]
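(A BBU re-learn recalibrates the RAID controller's battery gauge; while it runs, the controller typically falls back to write-through caching. A sketch, assuming an LSI controller driven by MegaCli; the exact command path is an assumption.)

    # Force a battery learn cycle on adapter 0.
    megacli -AdpBbuCmd -BbuLearn -a0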
09:17 <gehel> reducing replication factor on cassandra v3 (unused) keyspace for maps [production]
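(Reducing a keyspace's replication factor is a CQL ALTER KEYSPACE; the keyspace and datacenter names below are hypothetical stand-ins.)

    # A sketch via cqlsh: drop the unused v3 keyspace to one copy per DC.
    cqlsh -e "ALTER KEYSPACE maps_v3 WITH replication =
      {'class': 'NetworkTopologyStrategy', 'eqiad': 1};"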
08:56 <moritzm> reimaging mw1345, mw1346 (API servers) to stretch [production]
08:30 <moritzm> reimaging mw2156, mw2157, mw2158 (job runners) to stretch [production]
08:27 <moritzm> reimaging mw1308, mw1309 (job runners) to stretch [production]
08:03 <marostegui> Stop MySQL on db1116 to transfer its content to db2092 - T190704 [production]
07:59 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2092 T190704 (duration: 00m 57s) [production]
07:53 <elukey> second attempt to remove the cassandra-metrics-collector (+ cleanup) from aqs* [production]
07:30 <jynus> cleaning up maintenance hosts (terbium, etc.) from tendril maintenance files [production]
06:51 <marostegui> Stop MySQL on db1060 as it will be decommissioned - T193732 [production]
06:50 <moritzm> reimaging mw1313, mw1343, mw1344 to stretch [production]
06:26 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1060 from config - T193732 (duration: 01m 01s) [production]
06:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1060 from config - T193732 (duration: 00m 59s) [production]
06:05 <marostegui> Read_only=off on db1069 to finish with the x1 failover [production]
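(Flipping read_only is the promote step itself: the old master was set read-only below at 06:00, and once replication catches up the new master is opened for writes.)

    # On db1069, the new x1 master:
    sudo mysql -e "SET GLOBAL read_only = OFF;"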
06:05 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Promote db1069 as new x1 master (duration: 01m 00s) [production]
06:00 <marostegui> Set db1055 read only [production]
06:00 <marostegui> Start x1 failover [production]
05:41 <marostegui> Move db2034 under db1069 for x1 failover - T186320 [production]
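("Moving" a replica means repointing its replication stream at the new master. A MariaDB sketch; the binlog file and position are placeholders for the coordinates captured during the switch.)

    # On db2034: repoint replication at db1069.
    sudo mysql -e "STOP SLAVE;
      CHANGE MASTER TO MASTER_HOST='db1069.eqiad.wmnet',
        MASTER_LOG_FILE='db1069-bin.000123', MASTER_LOG_POS=4;
      START SLAVE;"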
05:36 <marostegui> Move dbstore1002:x1 under db1069 for x1 failover - T186320 [production]
05:29 <marostegui> Disable puppet on db1055 and db1069 before x1 failover - T186320 [production]
05:28 <marostegui> Disable gtid on db1069 and db2034 before x1 failover - T186320 [production]
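(Disabling GTID pins the replicas to explicit binlog coordinates, which keeps the failover's CHANGE MASTER steps deterministic. A MariaDB sketch:)

    # On db1069 and db2034:
    sudo mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_USE_GTID=no; START SLAVE;"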
05:26 <marostegui> Deploy schema change on db1121 with replication (this will generate lag on labs on s4) - T191519 T188299 T190148 [production]
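("With replication" means binary logging is left on, so the ALTER applied on db1121 flows down to the replicas beneath it, hence the expected lag on the s4 labs hosts. The real DDL lives in the linked tasks; the database, table, and column below are hypothetical.)

    # Replicating variant: the statement is written to the binlog.
    sudo mysql example_db -e "ALTER TABLE example_table ADD example_col INT;"
    # A local-only change would instead disable binlogging for the session:
    # SET SESSION sql_log_bin = 0;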
05:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1121 for alter table (duration: 01m 00s) [production]
05:19 <marostegui> Reload haproxy on dbproxy1010 to repool labsdb1011 - https://phabricator.wikimedia.org/T174047 [production]
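(dbproxy1010 fronts the labs replicas with haproxy, so repooling labsdb1011 means re-adding it to the backend definition and reloading. A sketch:)

    # On dbproxy1010, after restoring labsdb1011 in the haproxy backend config:
    systemctl reload haproxy   # graceful; existing connections are kept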
05:18 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1097:3314 after alter table (duration: 01m 00s) [production]
04:27 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=mw2221.codfw.wmnet [production]
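(The "conftool action" entries correspond to confctl runs on the cluster-management host, neodymium. A sketch of this repool, assuming the confctl select syntax:)

    confctl select 'name=mw2221.codfw.wmnet' set/pooled=yes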
04:26 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=mw2220.codfw.wmnet [production]
04:24 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=mw2219.codfw.wmnet [production]
02:33 <l10nupdate@tin> scap sync-l10n completed (1.32.0-wmf.2) (duration: 05m 45s) [production]
00:12 <ejegg> updated CiviCRM from 9752607052 to 81e54c850d [production]
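(An update "from 9752607052 to 81e54c850d" reads like pinning a git checkout to a new revision; the deploy path and any post-checkout steps below are assumptions.)

    cd /srv/civicrm        # assumed checkout location
    git fetch origin
    git checkout 81e54c850d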

2018-05-07

23:41 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=mw2218.codfw.wmnet [production]
23:39 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=mw2217.codfw.wmnet [production]
23:39 <mutante> mw2219,mw2220,mw2221 - reinstall with stretch [production]
23:37 <dzahn@neodymium> conftool action : set/pooled=yes; selector: name=mw2216.codfw.wmnet [production]
22:31 <bstorm_> labsdb1009,labsdb1010,labsdb1011 are now on up-to-date views per T174047 [production]
22:23 <ppchelko@tin> Started restart [changeprop/deploy@7e86531]: Restart changeprop to try forcing it to rebalance topics [production]
21:28 <mutante> mw2216,mw2217,mw2218 - wmf-auto-reimage --conftool, reinstall with stretch [production]
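(wmf-auto-reimage drives the whole reinstall cycle: with --conftool it depools each host, reimages it with the installer's configured OS, stretch here, and repools it afterwards. A sketch of the logged invocation; anything beyond the --conftool flag is an assumption.)

    sudo wmf-auto-reimage --conftool mw2216.codfw.wmnet mw2217.codfw.wmnet mw2218.codfw.wmnet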
21:25 <XioNoX> re-pool eqsin - T193897 [production]
20:48 <imarlier@tin> Started restart [performance/coal@50fe0dd]: Restart coal-web service everywhere, hopefully [production]
20:48 <arlolra> Updated Parsoid to 6e38948 (T192909) [production]
20:41 <arlolra@tin> Finished deploy [parsoid/deploy@cd5e875]: Updating Parsoid to 6e38948 (duration: 12m 25s) [production]
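(The paired "Started deploy"/"Finished deploy" lines are emitted by scap3, which rolls out a pinned revision of the named repo. A sketch of the Parsoid deploy from tin; the deploy directory is an assumption.)

    cd /srv/deployment/parsoid/deploy
    scap deploy 'Updating Parsoid to 6e38948'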
20:30 <otto@tin> Finished deploy [statsv/statsv@c186340]: Configure api.version via CLI opt -- prep for Kafka main upgrade T167039 (duration: 00m 05s) [production]
20:30 <otto@tin> Started deploy [statsv/statsv@c186340]: Configure api.version via CLI opt -- prep for Kafka main upgrade T167039 [production]
20:29 <arlolra@tin> Started deploy [parsoid/deploy@cd5e875]: Updating Parsoid to 6e38948 [production]
20:25 <bsitzmann@tin> Finished deploy [mobileapps/deploy@e20f23d]: Update mobileapps to c1f4de6 (T191538) (duration: 06m 09s) [production]
20:19 <bsitzmann@tin> Started deploy [mobileapps/deploy@e20f23d]: Update mobileapps to c1f4de6 (T191538) [production]