2019-06-15
20:16 <smalyshev@deploy1001> Finished deploy [wdqs/wdqs@55174a4]: deploy new pattern for bots (duration: 00m 54s) [production]
20:15 <smalyshev@deploy1001> Started deploy [wdqs/wdqs@55174a4]: deploy new pattern for bots [production]
19:14 <SMalyshev> repooled wdqs1004 [production]
17:35 <elukey> restart hadoop-yarn-resourcemanager on an-masters as attempt to fix yarn.w.o [production]
07:44 <SMalyshev> depooled wdqs1004 to catch it up [production]
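The wdqs depool/repool pairs above follow a recurring cycle: take a lagging host out of rotation, let its updater catch up, then pool it again. On WMF hosts this is driven by conftool; the helper below is a hypothetical illustration of the wait step only (`wait_until_caught_up`, the lag command, and the `depool`/`pool` wrapper names are assumptions, not the actual tooling):

```shell
# Sketch of the "depool until caught up" wait loop (hypothetical helper).
# $1 is any command that prints the current update lag in whole seconds,
# $2 the acceptable lag threshold in seconds.
wait_until_caught_up() {
    lag_cmd="$1"
    threshold="$2"
    while [ "$("$lag_cmd")" -gt "$threshold" ]; do
        sleep 30   # poll every 30s while the host catches up
    done
}

# Intended usage on a lagging host (command names illustrative):
#   depool
#   wait_until_caught_up query_wdqs_lag 600
#   pool
```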
2019-06-14
23:23 <ejegg> updated payments-wiki from 75abd71cc1 to 79d1822644 [production]
23:19 <SMalyshev> repooled wdqs1003 [production]
23:13 <SMalyshev> repooled wdqs2003 [production]
23:10 <_joe_> set cpufreq governor for mw1348 to performance [production]
19:56 <SMalyshev> depooled wdqs2003 to catch up [production]
19:17 <SMalyshev> depooled wdqs1003 to catch up [production]
15:56 <gehel> repooling wdqs1003, not catching up anyway (high edit load) [production]
15:24 <godog> test setting 'performance' governor on ms-be2035 - T210723 [production]
14:35 <godog> powercycle mw1294, down and no console [production]
13:26 <gehel> depooling wdqs1003 to allow it to catch up on lag [production]
13:22 <joal@deploy1001> Started restart [analytics/aqs/deploy@fc1d232]: (no justification provided) [production]
12:38 <godog> test setting 'performance' governor on ms-be2032 - T210723 [production]
11:36 <godog> test setting 'performance' governor on ms-be2034 - T210723 [production]
10:22 <marostegui> Optimize tables on pc2008 - T210725 [production]
10:17 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1077 after recovering from a crash (duration: 00m 49s) [production]
10:14 <godog> test setting 'performance' governor on ms-be2031 - T210723 [production]
09:44 <godog> test setting 'performance' governor on ms-be2037 - T210723 [production]
09:43 <godog> test setting 'performance' governor on ms-be2033 - T210723 [production]
09:28 <godog> test setting 'performance' governor on ms-be2038 - T210723 [production]
09:26 <godog> test setting 'performance' governor on ms-be2016 - T210723 [production]
03:57 <SMalyshev> repooled wdqs1005 [production]
00:11 <SMalyshev> depooled wdqs1005 - let it catch up [production]
00:10 <SMalyshev> repooled wdqs1006 - caught up [production]
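Several entries above (the ms-be* hosts, mw1348, labstore1004/1005) set the CPU frequency scaling governor to `performance`. A minimal sketch of what that involves on Linux, assuming the standard sysfs cpufreq interface; the helper name and its mock-root parameter are illustrative only, not the actual commands run:

```shell
# Sketch: force the "performance" cpufreq governor on every CPU.
# set_performance_governor is a hypothetical helper; the root argument
# exists so it can also be exercised against a mock sysfs tree.
set_performance_governor() {
    root="${1:-/sys/devices/system/cpu}"
    for gov in "$root"/cpu[0-9]*/cpufreq/scaling_governor; do
        # Writable only as root on a real host; skip silently otherwise.
        [ -w "$gov" ] && echo performance > "$gov"
    done
    return 0
}

# On a real host (as root): set_performance_governor
# Verify with: cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```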
2019-06-13
23:25 <SMalyshev> depooled wdqs1006 to let it catch up quicker [production]
18:10 <fdans@deploy1001> Finished deploy [analytics/refinery@67b34fe]: retrying deployment of analytics refinery (duration: 00m 19s) [production]
18:10 <fdans@deploy1001> Started deploy [analytics/refinery@67b34fe]: retrying deployment of analytics refinery [production]
18:01 <fdans@deploy1001> Finished deploy [analytics/refinery@67b34fe]: deploying refinery source 0.0.92 into refinery (duration: 16m 45s) [production]
17:44 <fdans@deploy1001> Started deploy [analytics/refinery@67b34fe]: deploying refinery source 0.0.92 into refinery [production]
17:34 <bstorm_> T203254 set cpu scaling governor to performance on labstore1004 and labstore1005 [production]
16:02 <gehel> restart blazegraph on wdqs public cluster completed [production]
15:58 <gehel> restart blazegraph on wdqs public cluster [production]
15:36 <gehel> restarting blazegraph on wdqs-internal / eqiad (just in case) [production]
08:09 <jynus> reloading proxies for wikireplicas to rebalance load [production]
07:00 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1077 after recovering from a crash (duration: 00m 50s) [production]
00:45 <paravoid> setting the CPU governor to performance for ms-be1036 (a while ago) [production]
2019-06-12
18:15 <krinkle@deploy1001> Synchronized php-1.34.0-wmf.8/thumb.php: T225197 / 06b631fae5 (duration: 00m 47s) [production]
18:13 <krinkle@deploy1001> Synchronized php-1.34.0-wmf.8/extensions/ArticlePlaceholder/includes/: T207235 / a42aa1599a131c55304 (duration: 00m 49s) [production]
16:06 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-restart (exit_code=0) [production]
15:49 <gehel@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
15:37 <legoktm> re-enabled bawolff's gerrit account [production]
15:14 <gehel@cumin1001> END (ERROR) - Cookbook sre.elasticsearch.rolling-restart (exit_code=97) [production]
14:38 <marostegui> Start replication on all threads on labsdb1010 - T222978 [production]
14:35 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1077 after recovering from a crash (duration: 00m 47s) [production]
13:19 <gehel@cumin1001> START - Cookbook sre.elasticsearch.rolling-restart [production]
11:55 <godog> swift eqiad-prod: put back ms-be1033 - T223518 [production]