2018-05-08
15:06 <ottomata> beginning Kafka upgrade of main-codfw: T167039 [production]
14:53 <XioNoX> re-enable pybal on lvs2004 - T193677 [production]
14:48 <XioNoX> disabling pybal on lvs2004 - T193677 [production]
14:37 <mutante> LDAP: added 'sbailey' to group 'wmf' (T194091) [production]
14:19 <ppchelko@tin> Started restart [changeprop/deploy@7e86531]: Restart changeprop to try forcing it rebalancing topics [production]
14:15 <mutante> mw2215,mw2222,mw2223 - reinstalling with stretch [production]
13:43 <zeljkof> EU SWAT finished [production]
13:42 <zfilipin@tin> Synchronized php-1.32.0-wmf.2/extensions/Translate: SWAT: [[gerrit:431744|Refactor TranslationUpdateJob to use only primitive types for parameters (T192111)]] (duration: 01m 11s) [production]
13:25 <zfilipin@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:431628|Enable maps i18n everywhere (T191655)]] (duration: 01m 00s) [production]
13:14 <zfilipin@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:430388|Enable AdvancedSearch BetaFeature on all wikis (T193182)]] (duration: 01m 00s) [production]
13:02 <marostegui> Manually fail disk #9 on db1073 to get it replaced [production]
12:20 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Remove db1055 (duration: 00m 59s) [production]
12:19 <moritzm> reimaging mw2159, mw2160, mw2161 (job runners) to stretch [production]
12:18 <jynus@tin> Synchronized wmf-config/db-codfw.php: Remove db1055 (duration: 00m 59s) [production]
12:17 <moritzm> upgrading app servers in beta to wikidiff 1.6.0 (T190717) [production]
12:16 <moritzm> upgrading app servers in beta to [production]
12:02 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Pool db1064 with low load (duration: 00m 59s) [production]
11:36 <marostegui> Deploy schema change on db1103:3314 - T191519 T188299 T190148 [production]
11:36 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1103:3314 for alter table (duration: 00m 59s) [production]
11:18 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Really depool db2092 (duration: 00m 53s) [production]
10:29 <moritzm> reimaging mw1347, mw1348 (API servers) to stretch (last two remaining API servers in eqiad) [production]
10:22 <jynus> stop mariadb on db1055 to clone it to db1064 [production]
10:15 <moritzm> reimaging mw1310, mw1311 (job runners) to stretch [production]
09:58 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1055 (duration: 00m 54s) [production]
09:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1121 after alter table (duration: 01m 00s) [production]
09:20 <elukey> forced a BBU re-learn cycle on analytics1032 [production]
09:17 <gehel> reducing replication factor on cassandra v3 (unused) keyspace for maps [production]
08:56 <moritzm> reimaging mw1345, mw1346 (API servers) to stretch [production]
08:30 <moritzm> reimaging mw2156, mw2157, mw2158 (job runners) to stretch [production]
08:27 <moritzm> reimaging mw1308, mw1309 (job runners) to stretch [production]
08:03 <marostegui> Stop MySQL on db1116 to transfer its content to db2092 - T190704 [production]
07:59 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2092 T190704 (duration: 00m 57s) [production]
07:53 <elukey> second attempt to remove the cassandra-metrics-collector (+ cleanup) from aqs* [production]
07:30 <jynus> cleaning up maintenance hosts (terbium, etc.) from tendril maintenance files [production]
06:51 <marostegui> Stop MySQL on db1060 as it will be decommissioned - T193732 [production]
06:50 <moritzm> reimaging mw1313, mw1343, mw1344 to stretch [production]
06:26 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1060 from config - T193732 (duration: 01m 01s) [production]
06:25 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1060 from config - T193732 (duration: 00m 59s) [production]
06:05 <marostegui> Read_only=off on db1069 to finish with the x1 failover [production]
06:05 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Promote db1069 as new x1 master (duration: 01m 00s) [production]
06:00 <marostegui> Set db1055 read only [production]
06:00 <marostegui> Start x1 failover [production]
05:41 <marostegui> Move db2034 under db1069 for x1 failover - T186320 [production]
05:36 <marostegui> Move dbstore1002:x1 under db1069 for x1 failover - T186320 [production]
05:29 <marostegui> Disable puppet on db1055 and db1069 before x1 failover - T186320 [production]
05:28 <marostegui> Disable gtid on db1069 and db2034 before x1 failover - T186320 [production]
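Read bottom-up (the log is newest-first), the 05:28–06:05 entries above trace a standard x1 master failover from db1055 to db1069. A minimal sketch of the core promotion steps, assuming direct mysql client access; the hostnames come from the log, but WMF actually drives this with switchover tooling, so the raw commands are illustrative only:

```shell
# 1. Make the old master (db1055) read-only so writes stop ("Set db1055 read only").
mysql -h db1055.eqiad.wmnet -e "SET GLOBAL read_only = 1;"

# 2. Once the candidate (db1069) has caught up, promote it:
#    stop replication and clear its replica configuration.
mysql -h db1069.eqiad.wmnet -e "STOP SLAVE; RESET SLAVE ALL;"

# 3. Repoint the remaining replicas (e.g. db2034) at the new master,
#    as in the 05:41 "Move db2034 under db1069" entry.
mysql -h db2034.codfw.wmnet -e "STOP SLAVE; \
  CHANGE MASTER TO MASTER_HOST='db1069.eqiad.wmnet'; START SLAVE;"

# 4. Open the new master for writes (the 06:05 "Read_only=off on db1069" entry).
mysql -h db1069.eqiad.wmnet -e "SET GLOBAL read_only = 0;"
```

Disabling GTID and puppet beforehand (the 05:28–05:29 entries) keeps automation and replication auto-positioning from interfering mid-swap.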
05:26 <marostegui> Deploy schema change on db1121 with replication (this will generate lag on labs on s4) - T191519 T188299 T190148 [production]
05:26 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1121 for alter table (duration: 01m 00s) [production]
05:19 <marostegui> Reload haproxy on dbproxy1010 to repool labsdb1011 - https://phabricator.wikimedia.org/T174047 [production]
05:18 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1097:3314 after alter table (duration: 01m 00s) [production]
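The recurring `<user>@tin> Synchronized wmf-config/db-eqiad.php: ...` entries are produced by scap, the MediaWiki deployment tool, run from the deploy host (tin). A hedged sketch of the depool workflow behind them, assuming the standard staging path; the specific edit is illustrative:

```shell
# On the deploy host: edit the pooled-database config, then push it out.
cd /srv/mediawiki-staging

# Edit wmf-config/db-eqiad.php by hand, e.g. remove db1121 from the
# s4 load array (or set its weight to 0) so traffic drains off it.

# Sync just that file to all app servers; the message becomes the SAL entry.
scap sync-file wmf-config/db-eqiad.php "Depool db1121 for alter table"
```

Repooling after the alter table is the same cycle with the weight restored, which is why each maintenance window shows a matching Depool/Repool pair.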