2020-12-09 §
20:34 <elukey> execute on mysql:an-coord1002 "set GLOBAL replicate_wild_ignore_table='superset_staging.%'" to avoid replication for superset_staging from an-coord1002 [analytics]
07:12 <elukey> re-enable timers after maintenance [analytics]
07:07 <elukey> restart hive-server2 on an-coord1002 for consistency [analytics]
07:05 <elukey> restart hive metastore and server2 on an-coord1001 to pick up settings for DBTokenStore [analytics]
06:50 <elukey> stop timers on an-launcher1002 as prep step to restart hive [analytics]
2020-12-07 §
18:51 <joal> Test mediawiki-wikitext-history new sizing settings [analytics]
18:43 <razzi> kill testing flink job: sudo -u hdfs yarn application -kill application_1605880843685_61049 [analytics]
18:42 <razzi> truncate /var/lib/hadoop/data/h/yarn/logs/application_1605880843685_61049/container_e27_1605880843685_61049_01_000002/taskmanager.log on an-worker1011 [analytics]
2020-12-03 §
22:34 <milimetric> updated mw history snapshot on AQS [analytics]
07:09 <elukey> manual reset-failed refinery-sqoop-whole-mediawiki.service on an-launcher1002 (job launched manually) [analytics]
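A `reset-failed` like the one above clears systemd's failure state for a unit so its timer can trigger it again. A minimal sketch, assuming the unit name from the entry and sudo access on an-launcher1002; printed as a dry run rather than executed:

```shell
# Sketch (not from the log): clear the failed state of the sqoop unit so
# its systemd timer can fire again. Dry run: the command is printed, not run.
unit="refinery-sqoop-whole-mediawiki.service"
cmd="sudo systemctl reset-failed $unit"
echo "$cmd"
```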
2020-12-02 §
21:37 <joal> Manually create _SUCCESS flags for banner history monthly jobs to kick off (they'll be deleted by the purge tomorrow morning) [analytics]
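A `_SUCCESS` flag is a zero-byte marker file that Oozie dataset dependencies poll for before a coordinator run starts. A hedged sketch of creating one by hand; the dataset path below is illustrative, not the real banner-history location:

```shell
# Hypothetical dataset path; the real banner-history paths are defined in
# refinery's oozie datasets and are not given in this log entry.
dataset_dir="/wmf/data/wmf/banner_activity/monthly/year=2020/month=12"
flag="$dataset_dir/_SUCCESS"
# Dry run: hdfs dfs -touchz creates a zero-byte marker file at the path.
echo "hdfs dfs -touchz $flag"
```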
21:16 <joal> Rerun timed out jobs after oozie config got updated (mediawiki-geoeditors-yearly-coord and banner_activity-druid-monthly-coord) [analytics]
20:49 <ottomata> deployed eventgate-analytics-external with refactored stream config, hopefully this will work around the canary events alarm bug - T266573 [analytics]
18:20 <mforns> finished netflow migration wmf->event [analytics]
17:50 <mforns> starting netflow migration wmf->event [analytics]
17:50 <joal> Manually start refinery-sqoop-production on an-launcher1002 to cover for coupled runs failure [analytics]
16:50 <mforns> restarted turnilo to clear deleted datasource [analytics]
16:47 <milimetric> faked _SUCCESS flag for image table to allow daisy-chained mediawiki history load dependent coordinators to keep running [analytics]
07:49 <elukey> restart oozie to pick up new settings for T264358 [analytics]
2020-12-01 §
19:43 <razzi> deploy refinery with refinery-source v0.0.140 [analytics]
10:50 <elukey> restart oozie to pick up new logging settings [analytics]
09:03 <elukey> clean up old hive metastore/server old logs on an-coord1001 to free space [analytics]
2020-11-30 §
17:51 <joal> Deploy refinery onto hdfs [analytics]
17:49 <joal> Kill-restart mediawiki-history-load job after refactor (1 coordinator per table) and addition of new tables [analytics]
17:32 <joal> Kill-restart mediawiki-history-reduced job to update the number of shards for the druid-public datasource [analytics]
17:32 <joal> Deploy refinery using scap for naming hotfix [analytics]
15:29 <ottomata> migrated EventLogging schemas SpecialMuteSubmit and SpecialInvestigate to EventGate - T268517 [analytics]
14:56 <joal> Deploying refinery onto hdfs [analytics]
14:49 <joal> Create new hive tables for newly sqooped data [analytics]
14:45 <joal> Deploy refinery using scap [analytics]
09:08 <elukey> force execution of refinery-drop-pageview-actor-hourly-partitions on an-launcher1002 (after args fixup from Joseph) [analytics]
2020-11-27 §
14:51 <elukey> roll restart zookeeper on druid* nodes for openjdk upgrades [analytics]
10:29 <elukey> restart eventlogging_to_druid_editattemptstep_hourly on an-launcher1002 (failed) to see if the hive metastore works [analytics]
10:27 <elukey> restart oozie and presto-server on an-coord1001 for openjdk upgrades [analytics]
10:27 <elukey> restart hive server and metastore on an-coord1001 - openjdk upgrades + problem with high GC caused by a job [analytics]
08:05 <elukey> roll restart druid public cluster for openjdk upgrades [analytics]
2020-11-26 §
13:52 <elukey> roll restart druid daemons on druid analytics to pick up new openjdk upgrades [analytics]
13:08 <elukey> force umount/mount of all /mnt/hdfs mountpoints to pick up openjdk upgrades [analytics]
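`/mnt/hdfs` is a FUSE mount of HDFS, and the long-running fuse_dfs process keeps the pre-upgrade JVM libraries mapped until the mount is cycled. A sketch of the cycle, assuming the standard mountpoint; dry run only:

```shell
mountpoint="/mnt/hdfs"
# Dry run: lazy-unmount in case a process still holds the mount, then
# remount from fstab so fuse_dfs restarts against the upgraded JDK.
cmd="sudo umount -l $mountpoint && sudo mount $mountpoint"
echo "$cmd"
```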
09:07 <elukey> force purging https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/user/Diego_Maradona/daily/2020110500/2020112500 from caches [analytics]
08:40 <elukey> roll restart cassandra on aqs10* for openjdk upgrades [analytics]
2020-11-25 §
19:04 <joal> Killing job application_1605880843685_18336 as it consumes too many resources [analytics]
18:40 <elukey> restart turnilo to pick up new netflow config changes [analytics]
16:46 <elukey> move analytics1066 to C3 [analytics]
16:11 <elukey> move analytics1065 to C3 [analytics]
15:38 <elukey> move stat1004 to A5 [analytics]
2020-11-24 §
19:33 <elukey> kill and restart webrequest_load bundle to pick up analytics-hive.eqiad.wmnet settings [analytics]
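A kill-and-restart of an Oozie bundle is typically done through the oozie CLI; the sketch below uses placeholder values, since the real bundle id and properties file are not in the log entry:

```shell
# Placeholder values; the real bundle id and properties file are not in
# the log entry.
oozie_url="http://an-coord1001.eqiad.wmnet:11000/oozie"
bundle_id="0000000-000000000000000-oozie-oozi-B"
# Dry run of the usual kill-then-resubmit sequence:
echo "oozie job -oozie $oozie_url -kill $bundle_id"
echo "oozie job -oozie $oozie_url -submit -config webrequest_load.properties"
```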
19:05 <elukey> deploy refinery to hdfs (even if not really needed) [analytics]
18:47 <elukey> deploy analytics refinery as part of the regular weekly train [analytics]
15:38 <elukey> move druid1005 from rack B7 to B6 [analytics]
14:59 <elukey> move analytics1072 from rack B2 to B3 [analytics]