2019-11-05 §
21:04 <ottomata> re-enabling refine jobs after spark 2.4.4 upgrade [analytics]
20:57 <joal> Starting denormalize-check one month in advance to enforce a running job with new spark [analytics]
20:37 <ottomata> roll restarting hadoop-yarn-nodemanagers to pick up spark 2.4.4 shuffle lib [analytics]
20:21 <ottomata> install spark 2.4.4-bin-hadoop2.6-1 cluster wide using debdeploy - T222253 [analytics]
20:18 <joal> Deploying refinery onto HDFS [analytics]
20:12 <ottomata> stopped refine jobs for Spark 2.4 upgrade - T222253 [analytics]
20:09 <joal> Deploying refinery using scap with missing patch [analytics]
20:00 <joal> Deploying refinery using scap [analytics]
18:49 <joal> Make Jenkins release refinery-source v0.0.105 to archiva [analytics]
17:12 <ottomata> 2019-11-05T17:11:50.239 INFO HDFSCleaner Deleted 872360 files and directories in tmp [analytics]
17:01 <ottomata> first run of HDFSCleaner on /tmp, should delete files older than 31 days [analytics]
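The HDFSCleaner runs above delete /tmp contents older than 31 days. The real tool works against HDFS via the Hadoop FileSystem API; as a rough local-filesystem analogue of the age-based cleanup idea (function and parameter names here are illustrative, not HDFSCleaner's actual interface):

```python
import os
import time

def clean_old_files(root, max_age_days=31, dry_run=True):
    """Report (and optionally delete) files under `root` whose mtime is
    older than max_age_days. Local analogue of an age-based tmp cleaner;
    the actual HDFSCleaner operates on HDFS paths, not the local disk."""
    cutoff = time.time() - max_age_days * 86400
    deleted = []
    # Walk bottom-up so files are handled before their parent directories.
    for dirpath, _dirnames, filenames in os.walk(root, topdown=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                deleted.append(path)
                if not dry_run:
                    os.remove(path)
    return deleted
```

Running with `dry_run=True` first, as sketched here, is a common way to vet what a cleanup pass would remove before letting it delete anything.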
11:00 <fdans> testing load of top metric from mediarequests with corrected quotemarks escaping [analytics]
2019-11-04 §
23:28 <milimetric> deployed refinery [analytics]
14:58 <joal> restarting AQS using scap after snapshot bump (2019-10) [analytics]
2019-10-31 §
19:45 <fdans> (actually no, no need) [analytics]
19:43 <fdans> (changing jar version first) [analytics]
19:43 <fdans> restarting mediawiki-history-wikitext [analytics]
19:42 <fdans> refinery deployment complete [analytics]
19:17 <fdans> updating jar symlinks to 0.0.104 [analytics]
17:59 <fdans> deploying refinery [analytics]
17:49 <fdans> deploying refinery-source 0.0.104 [analytics]
16:36 <elukey> restart oozie and hive-server2 on an-coord1001 to pick up new TLS mapreduce settings [analytics]
15:31 <joal> Rerun webrequest jobs for hour 2019-10-31T14:00 after failure [analytics]
14:53 <elukey> enabled encrypted shuffle option in all Hadoop Analytics Yarn Node Managers [analytics]
10:17 <elukey> deploy TLS certificates for MapReduce Shufflers on Hadoop worker nodes (no-op change, no yarn-site config) [analytics]
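The encrypted-shuffle rollout logged above corresponds to Hadoop's MapReduce encrypted shuffle feature, which is toggled per NodeManager; a minimal illustrative fragment (the keystore details live in ssl-server.xml, whose cluster-specific paths are not shown here):

```
<!-- mapred-site.xml on each NodeManager: serve shuffle traffic over TLS -->
<property>
  <name>mapreduce.shuffle.ssl.enabled</name>
  <value>true</value>
</property>
```

Deploying the certificates first as a no-op (as in the 10:17 entry) and flipping the switch afterwards lets the two steps be rolled back independently.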
2019-10-30 §
15:00 <ottomata> disabling eventlogging-consumer mysql on eventlog1002 [analytics]
08:31 <joal> Rerun failed cassandra-daily-coord-local_group_default_T_mediarequest_per_file days: 2019-10-26, 2019-10-23 and 2019-10-22 [analytics]
06:30 <elukey> re-run cassandra-coord-pageview-per-article-daily 29/10/2019 [analytics]
2019-10-29 §
08:51 <fdans> starting backfilling for per file mediarequests for 7 days from Sep 15 2015 [analytics]
07:09 <elukey> roll restart java daemons on analytics1042, druid1003 and aqs1004 to pick up new openjdk upgrades [analytics]
2019-10-28 §
10:10 <fdans> mediarequest per file backfilling suspended [analytics]
09:14 <elukey> manual re-run of cassandra-coord-pageview-per-article-daily - 26/10/2019 - as attempt to see if the error is reproducible or not (timeout while inserting into cassandra) [analytics]
2019-10-24 §
13:54 <fdans> running top mediarequest backfill from 2015-01-02 to 2019-05-01 [analytics]
2019-10-23 §
18:59 <milimetric> refinery deployment re-done to fix my mistake [analytics]
18:37 <mforns> refinery deployment done! [analytics]
18:31 <mforns> deploying refinery with refinery-deploy-to-hdfs up to 1110d59c3983bcff4986bce1baf885f05ee06ba5 [analytics]
18:21 <mforns> deploying refinery with scap up to 1110d59c3983bcff4986bce1baf885f05ee06ba5 [analytics]
2019-10-22 §
15:47 <fdans> start backfilling of mediarequests per file from 2015-01-02 to 2019-05-17 after ok vetting of 2015-01-01 [analytics]
2019-10-18 §
14:45 <fdans> backfilling 2015-01-01 for mediarequests per file, proceeding with all days until 2019-05-17 if successful [analytics]
2019-10-17 §
18:01 <elukey> update librdkafka on eventlog1002 and restart eventlogging [analytics]
10:26 <elukey> rollback eventlogging back to Python 2, some errors (unseen in tests) logged by the processors [analytics]
10:18 <elukey> move eventlogging to python 3 [analytics]
2019-10-16 §
20:27 <ottomata> upgrading to spark 2.4.4 in analytics test cluster [analytics]
20:20 <joal> Kill-restart mediawiki-history-dumps-coord to pick up changes [analytics]
20:16 <joal> Deployed refinery onto HDFS [analytics]
20:08 <joal> Deployed refinery using scap [analytics]
19:45 <joal> Refinery-source v0.0.103 released to refinery [analytics]
19:29 <joal> Ask Jenkins to release refinery-source v0.0.103 to archiva [analytics]
19:19 <joal> AQS deployed with mediarequest-top endpoint [analytics]
18:45 <joal> Manually create mediarequest-top cassandra keyspace and tables, and add fake test data into it [analytics]