2018-04-16
08:35 <joal> Restarting wikidata-articleplaceholder oozie job after last week's failures [analytics]
08:29 <joal> Deploying refinery onto HDFS [analytics]
08:22 <joal> Deploying refinery from tin [analytics]
08:03 <joal> Correction - Deploying refinery-source v0.0.62 using Jenkins ! [analytics]
08:03 <joal> Deploying refinery source v0.0.62 from tin [analytics]
2018-04-12
20:34 <ottomata> replacing references to dataset1001.wikimedia.org:: with /srv/dumps in stat1005:~ezachte/wikistats/dammit.lt/bash: for f in $(sudo grep -l dataset1001.wikimedia.org *); do sudo sed -i 's@dataset1001.wikimedia.org::@/srv/dumps/@g' $f; done T189283 [analytics]
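The in-place rewrite in the entry above can be reproduced safely on a scratch file; the `/tmp` path and sample line below are hypothetical stand-ins, not content from the actual hosts:

```shell
# Sketch of the sed substitution from the log entry, against a throwaway file
# (path and contents are illustrative only).
printf 'rsync dataset1001.wikimedia.org::data/enwiki\n' > /tmp/dammit_test
sed -i 's@dataset1001.wikimedia.org::@/srv/dumps/@g' /tmp/dammit_test
cat /tmp/dammit_test
```

Using `@` as the sed delimiter avoids having to escape the slashes in the replacement path.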
2018-04-11
16:48 <elukey> restart hadoop namenodes to pick up HDFS trash settings [analytics]
2018-04-10
22:43 <joal> Deploying refinery with scap [analytics]
22:42 <joal> Refinery-source 0.0.61 deployed on archiva [analytics]
20:43 <ottomata> bouncing main -> jumbo mirrormakers to blacklist job topics until we have time to investigate more [analytics]
20:38 <ottomata> restarted event* camus and refine cron jobs, puppet is reenabled on analytics1003 [analytics]
20:14 <ottomata> restart mirrormakers main -> jumbo (AGAIN) [analytics]
19:26 <ottomata> restarted camus-webrequest and camus-mediawiki (avro) camus jobs [analytics]
18:18 <ottomata> restarting all hadoop nodemanagers, 3 at a time to pick up spark2-yarn-shuffle.jar T159962 [analytics]
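Batching a rolling restart "3 at a time" can be sketched with `xargs -n 3`; the hostnames below are hypothetical, and `echo` stands in for the actual nodemanager restart command:

```shell
# Group a host list into batches of three; each `echo` line represents one
# restart batch (hosts are made up for illustration).
printf '%s\n' an1001 an1002 an1003 an1004 an1005 |
  xargs -n 3 echo would-restart:
```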
18:06 <joal> Deploy refinery to HDFS [analytics]
17:46 <joal> Refinery source 0.0.60 deployed to archiva [analytics]
15:42 <ottomata> disable puppet on analytics1003 and stop camus crons in preparation for spark 2 upgrade [analytics]
14:25 <ottomata> bouncing all main -> jumbo mirror makers, they look stuck! [analytics]
09:00 <elukey> restart eventlogging mysql consumers on eventlog1002 to pick up new DNS changes for m4-master - T188991 [analytics]
2018-04-09
07:15 <elukey> upgrade kafka burrow on kafkamon* [analytics]
2018-04-06
17:14 <joal> Launch manual mediawiki-history-reduced job to test memory setting (and index new data) -- mediawiki-history-reduced-wf-2018-03 [analytics]
13:39 <joal> Rerun mediawiki-history-druid-wf-2018-03 [analytics]
2018-04-05
19:24 <ottomata> upgrading spark2 to spark 2.3 [analytics]
13:43 <mforns> created success files in /wmf/data/raw/mediawiki/tables/<table>/snapshot=2018-03 for <table> in revision, logging, pagelinks [analytics]
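The success files above are zero-byte flag markers; `_SUCCESS` is the usual Hadoop convention for the name (an assumption here). On the cluster the analogous command would be `hdfs dfs -touchz`; this sketch uses local `/tmp` paths as a stand-in:

```shell
# Local stand-in for creating zero-byte success-flag files per table snapshot.
# On HDFS the equivalent would be `hdfs dfs -touchz <path>/_SUCCESS`.
# (/tmp layout and the _SUCCESS name are illustrative assumptions.)
for t in revision logging pagelinks; do
  mkdir -p "/tmp/tables/$t/snapshot=2018-03"
  touch "/tmp/tables/$t/snapshot=2018-03/_SUCCESS"
done
```

Downstream jobs typically poll for the flag's existence, not its contents, which is why an empty file suffices.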
13:38 <mforns> copied sqooped data for mediawiki history from /user/mforns over to /wmf/data/raw/mediawiki/tables/ for enwiki, table: revision [analytics]
2018-04-04
21:07 <mforns> copied sqooped data for mediawiki history from /user/mforns over to /wmf/data/raw/mediawiki/tables/ for wikidatawiki and commonswiki, tables: revision, logging and pagelinks [analytics]
16:06 <elukey> killed banner-impression related jvms on an1003 to finish openjdk-8 upgrades (they should be brought back via cron) [analytics]
2018-04-03
20:11 <ottomata> bouncing main -> jumbo mirrormaker to apply batch.size = 65536 [analytics]
19:32 <ottomata> bouncing main -> jumbo MirrorMaker unsetting session.timeout.ms, this has a restriction on the broker in 0.9 :( [analytics]
19:22 <ottomata> bouncing main -> jumbo MirrorMaker setting session.timeout.ms = 125000 [analytics]
18:46 <ottomata> restart main -> jumbo MirrorMaker with request.timeout.ms = 2 minutes [analytics]
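The MirrorMaker tuning in the entries above maps onto its consumer and producer property files; a sketch with only the values mentioned in the log (file layout and property grouping are the standard Kafka client conventions, assumed here):

```properties
# consumer.properties (sketch)
# session.timeout.ms=125000 was tried and then unset again: 0.9 brokers
# enforce a cap on the allowed session timeout.
request.timeout.ms=120000

# producer.properties (sketch)
batch.size=65536
```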
15:26 <elukey> manually run hdfs balancer on an1003 (tmux session) [analytics]
15:25 <elukey> killed a jvm belonging to hdfs-balancer stuck from march 9th [analytics]
13:48 <ottomata> re-enable job queue topic mirroring from main -> eqiad [analytics]
2018-04-02
22:28 <ottomata> bounce mirror maker to pick up client_id config changes [analytics]
20:55 <ottomata> deployed multi-instance mirrormaker for main -> jumbo. 4 per host == 12 total processes [analytics]
11:25 <joal> Repair cu_changes hive table after successful sqoop import and add _PARTITIONED file for oozie jobs to launch [analytics]
08:33 <joal> rerun wikidata-specialentitydata_metrics-wf-2018-4-1 [analytics]
2018-03-30
13:48 <elukey> restart overlord+middlemanager on druid100[23] to avoid consistency issues [analytics]
13:41 <elukey> restart overlord+middlemanager on druid1001 after failures in real time indexing (overlord leader) [analytics]
09:44 <elukey> re-enable camus [analytics]
08:26 <elukey> stopped camus to drain the cluster - prep for easy restart of analytics1003's jvm daemons [analytics]
2018-03-29
20:55 <milimetric> accidentally killed mediawiki-geowiki-monthly-coord, and then restarted it [analytics]
20:12 <ottomata> blacklisted mediawiki.job topics from main -> jumbo MirrorMaker again, don't want to page over the weekend while this still is not stable. T189464 [analytics]
07:30 <joal> Manually repairing hive mediawiki_private_cu_changes table after manual sqooping of 2018-01 data, and add _PARTITIONED file to the folder [analytics]
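The manual repair above corresponds to Hive's partition-metadata resync; a minimal HiveQL sketch (the `wmf_raw` database name is an assumption, only the table name comes from the log):

```sql
-- Sketch: re-register partitions that were written to HDFS outside of Hive,
-- e.g. by a manual sqoop import. Database name is assumed, not from the log.
MSCK REPAIR TABLE wmf_raw.mediawiki_private_cu_changes;
```

After the metastore knows about the new partition, the zero-byte `_PARTITIONED` flag file signals waiting oozie coordinators that the data is ready.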
2018-03-28
19:39 <ottomata> bouncing main -> jumbo mirrormaker to apply increase in consumer num.streams [analytics]
19:21 <milimetric> synced refinery to hdfs (only python changes but just so we have latest) [analytics]
19:20 <joal> Start Geowiki jobs (monthly and druid) starting 2018-01 [analytics]
18:36 <joal> Making hdfs://analytics-hadoop/wmf/data/wmf/mediawiki_private accessible only by analytics-privatedata-users group (and hdfs obviously) [analytics]
18:02 <joal> Kill-Restart mobile_apps-session_metrics (bundle killed, coord started) [analytics]