2019-10-31
19:42 <fdans> refinery deployment complete [analytics]
19:17 <fdans> updating jar symlinks to 0.0.104 [analytics]
17:59 <fdans> deploying refinery [analytics]
17:49 <fdans> deploying refinery-source 0.0.104 [analytics]
16:36 <elukey> restart oozie and hive-server2 on an-coord1001 to pick up new TLS mapreduce settings [analytics]
15:31 <joal> Rerun webrequest jobs for hour 2019-10-31T14:00 after failure [analytics]
14:53 <elukey> enabled encrypted shuffle option in all Hadoop Analytics Yarn Node Managers [analytics]
10:17 <elukey> deploy TLS certificates for MapReduce Shufflers on Hadoop worker nodes (no-op change, no yarn-site config) [analytics]
2019-10-30
15:00 <ottomata> disabling eventlogging-consumer mysql on eventlog1002 [analytics]
08:31 <joal> Rerun failed cassandra-daily-coord-local_group_default_T_mediarequest_per_file days: 2019-10-26, 2019-10-23 and 2019-10-22 [analytics]
06:30 <elukey> re-run cassandra-coord-pageview-per-article-daily 29/10/2019 [analytics]
2019-10-29
08:51 <fdans> starting backfilling for per file mediarequests for 7 days from Sep 15 2015 [analytics]
07:09 <elukey> roll restart java daemons on analytics1042, druid1003 and aqs1004 to pick up new openjdk upgrades [analytics]
2019-10-28
10:10 <fdans> mediarequest per file backfilling suspended [analytics]
09:14 <elukey> manual re-run of cassandra-coord-pageview-per-article-daily - 26/10/2019 - as attempt to see if the error is reproducible or not (timeout while inserting into cassandra) [analytics]
2019-10-24
13:54 <fdans> running top mediarequest backfill from 2015-01-02 to 2019-05-01 [analytics]
2019-10-23
18:59 <milimetric> refinery deployment re-done to fix my mistake [analytics]
18:37 <mforns> refinery deployment done! [analytics]
18:31 <mforns> deploying refinery with refinery-deploy-to-hdfs up to 1110d59c3983bcff4986bce1baf885f05ee06ba5 [analytics]
18:21 <mforns> deploying refinery with scap up to 1110d59c3983bcff4986bce1baf885f05ee06ba5 [analytics]
2019-10-22
15:47 <fdans> start backfilling of mediarequests per file from 2015-01-02 to 2019-05-17 after ok vetting of 2015-01-01 [analytics]
2019-10-18
14:45 <fdans> backfilling 2015-1-1 for mediarequests per file, proceeding with all days until 2019-05-17 if successful [analytics]
2019-10-17
18:01 <elukey> update librdkafka on eventlog1002 and restart eventlogging [analytics]
10:26 <elukey> rollback eventlogging back to Python 2, some errors (unseen in tests) logged by the processors [analytics]
10:18 <elukey> move eventlogging to python 3 [analytics]
2019-10-16
20:27 <ottomata> upgrading to spark 2.4.4 in analytics test cluster [analytics]
20:20 <joal> Kill-restart mediawiki-history-dumps-coord to pick up changes [analytics]
20:16 <joal> Deployed refinery onto HDFS [analytics]
20:08 <joal> Deployed refinery using scap [analytics]
19:45 <joal> Refinery-source v0.0.103 released to refinery [analytics]
19:29 <joal> Ask jenkins to release refinery-source v0.0.103 to archiva [analytics]
19:19 <joal> AQS deployed with mediarequest-top endpoint [analytics]
18:45 <joal> Manually create mediarequest-top cassandra keyspace and tables, and add fake test data into it [analytics]
2019-10-15
13:15 <elukey> re-enable timers on an-coord1001 [analytics]
12:57 <fdans> resumed backfilling of mediarequests per referer daily [analytics]
12:46 <elukey> moved hadoop cluster to new zookeeper cluster [analytics]
11:25 <elukey> stop all systemd timers on an-coord1001 as prep step for hadoop maintenance [analytics]
10:42 <fdans> backfilling January 1st 2015 for mediarequests per referer daily, proceeding with all days until May 2019 if successful [analytics]
2019-10-14
18:13 <joal> Manually add ban.wikipedia.org to pageview whitelist (T234768) [analytics]
14:28 <elukey> matomo upgraded to 3.11 on matomo1001 [analytics]
2019-10-11
12:51 <elukey> deployed eventlogging python3 version in deployment-prep [analytics]
07:09 <elukey> drop test_wmf_netflow from druid analytics and restart turnilo [analytics]
06:24 <elukey> remove /tmp/hive-staging_hive_(2017|2018)* data from HDFS instead of /tmp/* to avoid causing hive failures (it needs to write temporary data for the current running jobs) [analytics]
06:04 <elukey> delete content of /tmp/* on HDFS [analytics]
2019-10-10
09:13 <joal> rerun failed pageview hour after manual job killing (pageview-hourly-wf-2019-10-9-19) [analytics]
09:13 <joal> Kill stuck oozie launcher in yarn (application_1569878150519_43184) [analytics]
2019-10-09
20:52 <milimetric> deploy of refinery and refinery-source 0.0.102 finally seems to have finished [analytics]
19:55 <milimetric> refinery ... probably? deployed with errors like "No such file or directory (2)\nrsync error" [analytics]
17:11 <elukey> restart druid-broker on druid100[5-6] - not serving data correctly [analytics]
2019-10-08
09:22 <elukey> delete druid old test datasource from the analytics cluster - test_kafka_event_centralnoticeimpression [analytics]