2019-10-02 §
08:31 <elukey> kill/restart mw check denormalize with hive2_jdbc parameter [analytics]
2019-09-30 §
21:05 <ottomata> rolling restart of hdfs namenodes and yarn resourcemanagers to pick up the presto proxy user settings [analytics]
05:26 <elukey> manually re-run pageview-druid-hourly for 2019-09-29T22:00 [analytics]
2019-09-27 §
06:44 <elukey> clean up files older than 30d in /var/log/{oozie,hive} on an-coord1001 [analytics]
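Note: the exact command for the log cleanup above was not recorded; a minimal sketch, assuming a 30-day mtime cutoff and direct deletion with find, would be:
  sudo find /var/log/oozie /var/log/hive -type f -mtime +30 -delete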
2019-09-26 §
18:42 <mforns> finished deploying refinery using scap (together with refinery-source 0.0.101) [analytics]
18:27 <mforns> deploying refinery using scap (together with refinery-source 0.0.101) [analytics]
17:33 <elukey> run apt-get autoremove on stat* and notebook* to clean up old python2 deps [analytics]
15:01 <mforns> deploying analytics/aqs using scap [analytics]
13:04 <elukey> removing python2 packages from the analytics hosts (not from eventlog1002) [analytics]
11:13 <mforns> deployed analytics-refinery-source v0.0.101 using Jenkins [analytics]
05:47 <elukey> upload the new version of the pageview whitelist - https://gerrit.wikimedia.org/r/539225 [analytics]
2019-09-25 §
13:37 <elukey> move the Hadoop test cluster to the Analytics Zookeeper cluster [analytics]
08:37 <elukey> add netflow realtime ingestion alert for Druid [analytics]
06:02 <elukey> set python3 for all report updater jobs on stat1006/7 [analytics]
2019-09-24 §
14:46 <ottomata> temporarily disabled camus-mediawiki_analytics_events systemd timer on an-coord1001 - T233718 [analytics]
13:18 <joal> Manually repairing wmf.mediawiki_wikitext_history [analytics]
06:07 <elukey> update Druid Kafka supervisor for netflow to index new dimensions [analytics]
2019-09-23 §
20:56 <ottomata> created new camus job for high volume mediawiki analytics events: mediawiki_analytics_events [analytics]
16:46 <elukey> deploy refinery again (no hdfs, no source) to deploy the latest python fixes [analytics]
09:25 <elukey> temporarily disable *drop* timers on an-coord1001 to verify refinery python change with the team [analytics]
08:24 <elukey> deploy refinery to apply all the python2 -> python3 fixes [analytics]
07:44 <elukey> manually restart refine_mediawiki_events on an-coord1001 with --since 48 to force refinement after camus backfilled the missing data [analytics]
07:41 <elukey> manually applied https://gerrit.wikimedia.org/r/#/c/analytics/refinery/+/538235/ on an-coord1001 [analytics]
06:21 <elukey_> restart camus mediawiki_events on an-coord1001 with increased mapreduce heap size [analytics]
2019-09-21 §
09:00 <fdans> resumed per-file mediarequests backfilling coordinator [analytics]
2019-09-20 §
17:04 <elukey> restart yarn/hdfs daemons on analytics1045 [analytics]
17:01 <elukey> remove /var/lib/hadoop/j from analytics1045 due to a broken disk [analytics]
2019-09-19 §
13:31 <joal> Kill-restart webrequest-load bundle to fix queue issue [analytics]
10:37 <elukey> manually rolled back the shebang of /srv/deployment/analytics/refinery/bin/refinery-drop-hive-partitions to "#!/usr/bin/env python" on stat1007 [analytics]
09:16 <fdans> starting load to cassandra of mediarequests per file daily [analytics]
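Note: the shebang rollback logged above (10:37) was done by hand; an equivalent one-liner sketch, assuming GNU sed and the path as logged, would be:
  sudo sed -i '1s|^#!.*|#!/usr/bin/env python|' /srv/deployment/analytics/refinery/bin/refinery-drop-hive-partitions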
2019-09-18 §
19:23 <joal> Deploy AQS using scap - Try 3 [analytics]
18:59 <joal> Deploy AQS using scap - Try 2 [analytics]
18:53 <joal> Deploy AQS using scap [analytics]
18:16 <joal> Start mediawiki-history-dumps oozie job, beginning with August 2019 [analytics]
18:10 <joal> Kill-restart webrequest-load oozie job to pick up the new ua-parser [analytics]
18:09 <joal> Restart eventlogging with new ua-parser (ottomata did) [analytics]
16:46 <elukey> manually restarted the refinery-drop-older-than jobs [analytics]
16:45 <elukey> manually set the "#!/usr/bin/env python" shebang for refinery-drop-older-than on an-coord1001 to restore functionality (minor bug encountered) [analytics]
13:41 <joal> Deploy refinery to hdfs [analytics]
13:35 <joal> Deploying refinery using scap [analytics]
12:54 <elukey> re-run webrequest-load upload/text for hour 11 due to transient hive server socket failures [analytics]
12:39 <joal> Release refinery-source v0.0.100 to archiva [analytics]
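Note: the webrequest-load re-run logged above (12:54) corresponds to re-running a failed Oozie coordinator action; the coordinator ID and action number below are placeholders rather than values from this incident, so treat this only as a generic sketch:
  oozie job -oozie $OOZIE_URL -rerun 0012345-190901000000000-oozie-oozi-C -action 42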
2019-09-17 §
08:19 <elukey> manually decommissioned analytics1032 for hdfs/yarn on the Hadoop test cluster - T233080 [analytics]
07:50 <joal> Manually released com.github.ua-parser/uap-java 1.4.4-core0.6.9~1-wmf to archiva [analytics]
2019-09-16 §
12:41 <elukey> rebooting the hadoop test cluster with the new spicerack cookbook as a test [analytics]
10:04 <elukey> disable puppet on an-coord1001 and manually force python3 for camus - T204735 [analytics]
07:25 <joal> Delete matomo error with URL http://Wikipedia/screen/Explore [analytics]
2019-09-13 §
16:57 <joal> Reset ua-parser/uap-java wmf branch to up-to-date master using a force push [analytics]
2019-09-12 §
09:35 <elukey> drop old database 'superset' from analytics-meta (an-coord1001) after a precautionary backup [analytics]
2019-09-11 §
18:42 <nuria> deployment of v0.0.99 to the cluster succeeded, letting it bake for a bit [analytics]