2020-05-04
16:44 <joal> Deploy refinery again using scap (trying to fix sqoop) [analytics]
15:39 <joal> restart refinery-sqoop-whole-mediawiki.service [analytics]
15:37 <joal> restart refinery-sqoop-mediawiki-private.service [analytics]
14:50 <joal> Deploy refinery using scap to fix sqoop [analytics]
13:43 <elukey> restart refinery-sqoop-whole-mediawiki to test failure exit codes [analytics]
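For context on the entry above, a hedged sketch of how a systemd unit's exit code can be inspected after a test restart (the unit name is taken from the log entry; the check itself is generic systemd tooling):

```shell
# Restart the unit under test.
sudo systemctl restart refinery-sqoop-whole-mediawiki.service

# ExecMainStatus holds the exit code of the unit's main process;
# a non-zero value here is what a failure-exit-code test looks for.
systemctl show -p ExecMainStatus refinery-sqoop-whole-mediawiki.service
```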
06:50 <elukey> upgrade druid-exporter on all druid nodes [analytics]
2020-05-03
19:36 <joal> Rerun mobile_apps-session_metrics-wf-7-2020-4-26 [analytics]
2020-05-02
10:54 <joal> Rerun predictions-actor-hourly-wf-2020-5-2-0 [analytics]
2020-05-01
16:59 <elukey> test prometheus-druid-exporter 0.8 on druid1001 (deb packages not yet uploaded, just built and manually installed) [analytics]
2020-04-30
10:36 <elukey> run superset init to add missing perms on an-tool1005 and analytics-tool1004 - T249681 [analytics]
07:14 <elukey> correct X-Forwarded-Proto for superset (http -> https) and restart it [analytics]
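The X-Forwarded-Proto fix above is a common reverse-proxy adjustment; a minimal sketch in nginx syntax (the upstream name is hypothetical, and the actual proxy in front of Superset may be a different server entirely):

```nginx
location / {
    # Tell Superset the original request arrived over HTTPS, so it
    # generates https:// redirects and secure cookies instead of http://.
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://superset_backend;  # hypothetical upstream name
}
```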
2020-04-29
18:55 <joal> Kill-restart cassandra-daily-coord-local_group_default_T_pageviews_per_article_flat [analytics]
18:46 <joal> Kill-restart pageview-hourly job [analytics]
18:45 <joal> No restart needed for pageview-druid jobs [analytics]
18:36 <joal> Kill-restart pageview-druid jobs (hourly, daily, monthly) to add new dimension [analytics]
18:29 <joal> Kill-restart data-quality-stats-hourly bundle [analytics]
17:57 <joal> Deploy refinery on HDFS [analytics]
17:45 <elukey> roll restart Presto workers to pick up the new jvm settings (110G heap size) [analytics]
16:06 <joal> Deploying refinery using scap [analytics]
15:57 <joal> Deploying AQS using scap [analytics]
14:26 <elukey> enable TLS consumer/producers for kafka main -> jumbo mirror maker - T250250 [analytics]
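Enabling TLS for MirrorMaker's consumers and producers boils down to a few client properties; a hedged sketch (the paths and truststore details are placeholders, not the actual WMF configuration):

```properties
# Hypothetical consumer/producer settings for a TLS-enabled MirrorMaker.
security.protocol=SSL
ssl.truststore.location=/etc/kafka/ssl/truststore.jks
ssl.truststore.password=changeit
```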
13:48 <joal> Releasing refinery 0.0.123 onto archiva with Jenkins [analytics]
08:47 <elukey> roll restart zookeeper on an-conf* to pick up new openjdk11 updates (affects hadoop) [analytics]
2020-04-27
13:02 <elukey> superset 0.36.0 deployed to an-tool1005 [analytics]
2020-04-26
18:14 <elukey> restart nodemanager on analytics1054 - failed due to heap pressure [analytics]
18:14 <elukey> re-run webrequest-load-coord-text 26/04/2020T16 via Hue [analytics]
2020-04-23
13:57 <elukey> launch again data quality stats bundle with https://gerrit.wikimedia.org/r/#/c/analytics/refinery/+/592008/ applied locally [analytics]
2020-04-22
06:46 <elukey> kill dataquality hourly bundle again, traffic_by_country keeps failing [analytics]
06:11 <elukey> start data quality bundle hourly with --user=analytics [analytics]
05:45 <elukey> add a separate refinery scap target for the Hadoop test cluster and redeploy to check new settings [analytics]
2020-04-21
23:17 <milimetric> restarted webrequest bundle, babysitting that first before going on [analytics]
23:00 <milimetric> forgot a small jar version update, finished deploying now [analytics]
21:38 <milimetric> deployed twice because analytics1030 failed with "OSError {}" but seems ok after the second deploy [analytics]
14:27 <elukey> add motd to notebook100[3,4] to alert about host deprecation (in favor of stat100x) [analytics]
11:51 <elukey> manually add SUCCESS flags under /wmf/data/wmf/banner_activity/daily/year=2020/month=1 and /wmf/data/wmf/banner_activity/daily/year=2019/month=12 to unblock druid banner monthly indexations [analytics]
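Adding the missing SUCCESS flags amounts to creating empty marker files in HDFS; a sketch using the standard Hadoop CLI (paths copied from the entry above; the exact subdirectories that expect the marker may differ):

```shell
# Downstream jobs commonly wait for a _SUCCESS marker before reading a
# directory; -touchz creates an empty file, which is all the marker needs to be.
hdfs dfs -touchz /wmf/data/wmf/banner_activity/daily/year=2020/month=1/_SUCCESS
hdfs dfs -touchz /wmf/data/wmf/banner_activity/daily/year=2019/month=12/_SUCCESS
```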
2020-04-20
14:38 <ottomata> restarting eventlogging-processor with updated python3-ua-parser for parsing KaiOS user agents [analytics]
10:28 <elukey> drop /srv/log/mw-log/archive/api from stat1007 (freeing 1.3TB of space!) [analytics]
2020-04-18
21:40 <elukey> force hdfs-balancer as an attempt to redistribute hdfs blocks more evenly to worker nodes (hoping to free the busiest ones) [analytics]
21:32 <elukey> drop /user/analytics-privatedata/.Trash/* from hdfs to free some space (~100G used) [analytics]
21:25 <elukey> drop /var/log/hadoop-yarn/apps/analytics-search/* from hdfs to free space (~8T replicated used) [analytics]
21:21 <elukey> drop /user/{analytics|hdfs}/.Trash/* from hdfs to free space (~100T used) [analytics]
21:12 <elukey> drop /var/log/hadoop-yarn/apps/analytics from hdfs to free space (15.1T replicated) [analytics]
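The space-freeing drops above follow a standard pattern; a hedged sketch of the equivalent Hadoop CLI commands (one path shown as an example, not the full set from the entries):

```shell
# -skipTrash deletes immediately instead of moving data into .Trash,
# which matters when the goal is to free space right away.
hdfs dfs -rm -r -skipTrash /user/analytics/.Trash

# Afterwards, the balancer redistributes blocks across datanodes;
# -threshold is the allowed deviation (in percent) from average utilization.
hdfs balancer -threshold 10
```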
2020-04-17
13:45 <elukey> lock down /srv/log/mw-log/archive/ on stat1007 to analytics-privatedata-users access only [analytics]
10:26 <elukey> re-created default venv for notebooks on notebook100[3,4] (forgot to git pull before re-creating it last time) [analytics]
2020-04-16
05:34 <elukey> restart hadoop-yarn-nodemanager on an-worker108[4,5] - failed after GC OOM events (heavy spark jobs) [analytics]
2020-04-15
14:03 <elukey> update Superset Alpha role perms per what is stated in T249923#6058862 [analytics]
09:35 <elukey> restart jupyterhub too as follow up [analytics]
09:35 <elukey> execute "create_virtualenv.sh ../venv" on stat1006, notebook1003, notebook1004 to apply new settings to Spark kernels (re-creating them) [analytics]
09:09 <elukey> restart druid brokers on druid100[4-6] - stuck after datasource deletion [analytics]
2020-04-11
09:19 <elukey> set hive-security: read-only for the Presto hive connector and roll restart the cluster [analytics]