2019-04-18 §
18:53 <fdans> Release of v0.0.86 in maven succeeded [analytics]
15:22 <fdans> restarting release of version 0.0.86 of refinery source to maven [analytics]
14:29 <fdans> releasing version 0.0.86 of refinery source to maven [analytics]
2019-04-17 §
09:06 <elukey> restart eventlogging on eventlog1002 due to errors in processors and consumer lag accumulated after the last Kafka Jumbo roll restart [analytics]
2019-04-13 §
09:21 <elukey> re-run failed webrequest-text 2019-04-13-07 job - temporary failure between Hive and HDFS [analytics]
2019-04-12 §
10:12 <elukey> matomo upgraded to 3.9.1 to fix some security vulns [analytics]
2019-04-10 §
14:48 <elukey> restart turnilo to pick up the new nodejs runtime [analytics]
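(A restart like this is usually just a systemd unit restart on the Turnilo host; the unit name below is an assumption, not something recorded in the log.)

    # assumed unit name - confirm first with: systemctl list-units | grep -i turnilo
    sudo systemctl restart turnilo
    sudo journalctl -u turnilo -f    # watch the logs to confirm it came back up cleanly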
13:58 <joal> Deploying AQS [analytics]
2019-04-09 §
18:40 <ottomata> chowning files in analytics.wm.org/datasets/archive/public-datasets/ as stats:wikidev [analytics]
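(The on-disk location behind analytics.wm.org/datasets/archive/public-datasets/ is not recorded here, so the path below is only illustrative of the kind of recursive chown involved.)

    # illustrative path - the real published-datasets directory may differ
    sudo chown -R stats:wikidev /srv/published-datasets/archive/public-datasets/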
15:00 <fdans> backfilling data between previous backfill end and start of puppetized job for PrefUpdate [analytics]
13:53 <mforns> restarted turnilo to clear deleted datasource [analytics]
2019-04-08 §
14:50 <fdans> backfilling prefupdate schema into druid from Jan 1 2019 until Apr 1 2019 [analytics]
2019-04-04 §
21:20 <mforns> Restarted turnilo to clear deleted datasource [analytics]
2019-04-03 §
19:16 <elukey> failover from namenode on 1002 (currently active after the outage) to 1001 (standby) [analytics]
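(A manual namenode failover of this kind is normally driven with hdfs haadmin; the HA service IDs below are placeholders, since the log only identifies the hosts as 1002 and 1001.)

    # run as the hdfs user; service IDs are placeholders for the 1002 (active) and 1001 (standby) namenodes
    sudo -u hdfs hdfs haadmin -failover <nn-1002-service-id> <nn-1001-service-id>
    sudo -u hdfs hdfs haadmin -getServiceState <nn-1001-service-id>   # verify it is now active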
18:07 <joal> mediawiki-history-checker manual rerun successful [analytics]
15:22 <elukey> execute kafka preferred-replica-election on kafka-jumbo [analytics]
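(With the Kafka versions in use at the time, this maps onto the stock preferred-replica-election tool; the ZooKeeper connect string below is a placeholder, and brokers typically wrap the same script in a site-specific helper.)

    # placeholder ZooKeeper connect string for the jumbo cluster
    kafka-preferred-replica-election.sh --zookeeper <zk-host>:2181/<kafka-jumbo-chroot>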
2019-04-02 §
17:54 <mforns> restarted turnilo to clear deleted datasource [analytics]
17:29 <milimetric> revision/pagelinks failed wikis rerun successfully, now forcing comment/actor rerun [analytics]
15:02 <mforns> Rerunning webrequest-load-coord for 2019-04-01T22 [analytics]
14:59 <elukey> re-run of webrequest upload 2019-04-01-14 with higher data loss threshold [analytics]
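(Reruns with an overridden threshold generally go through the Oozie CLI; the sketch below assumes the loss threshold is an ordinary workflow property, and its real name, which lives in the refinery workflow definition, is left as a placeholder.)

    # rerun only the failed actions, overriding a property; bracketed names are placeholders
    oozie job -oozie $OOZIE_URL -rerun <workflow-id> \
        -Doozie.wf.rerun.failnodes=true \
        -D<data_loss_threshold_property>=<higher-value>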
10:14 <elukey> restart eventlogging's mysql consumers on eventlog1002 - T219842 [analytics]
06:18 <joal> Deleted (in hdfs bin) actor and comment table data because it had been sqooped too early - manual rerun will be started once labs sqoop is done [analytics]
2019-04-01 §
06:02 <elukey> kill + re-run of pageviews hourly 30-03 hour 7 - seems stuck in heartbeat after reduce completed [analytics]
2019-03-29 §
12:29 <mforns> Restarted Turnilo to refresh deleted test datasource [analytics]
12:11 <mforns> Restarted Turnilo to refresh deleted test datasource [analytics]
11:52 <mforns> Restarted Turnilo to refresh deleted test datasource [analytics]
11:10 <mforns> Restarted Turnilo to refresh deleted test datasource [analytics]
2019-03-28 §
19:04 <joal> Manually rerun webrequest-load-wf-upload-2019-3-28-8 with higher error threshold (a lot of false positives!) [analytics]
2019-03-27 §
21:13 <milimetric> done deploying refinery, will now restart monthly geoeditors coordinator [analytics]
2019-03-18 §
11:08 <elukey> restart hue on analytics-tool1001 to pick up some new changes (should be a no-op) [analytics]
2019-03-14 §
17:43 <mforns> Deploying AQS using scap (node10 upgrade) [analytics]
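(AQS deploys go through scap from the deployment host; the checkout path below is an assumption.)

    # assumed deploy checkout path on the deployment host
    cd /srv/deployment/analytics/aqs/deploy
    scap deploy 'AQS: node 10 upgrade'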
2019-03-13 §
22:58 <nuria> mediawiki-check denormalized restarted 0147256-181112144035577-oozie-oozi-C [analytics]
22:48 <nuria> killed oozie job 0131427-181112144035577-oozie-oozi-C to correct e-mail address [analytics]
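(The kill/restart pair above corresponds to standard Oozie CLI calls; the job IDs are the ones in the log, the rest is a sketch.)

    oozie job -oozie $OOZIE_URL -kill 0131427-181112144035577-oozie-oozi-C
    # the replacement coordinator (0147256-...) is then submitted with the corrected properties
    oozie job -oozie $OOZIE_URL -config coordinator.properties -run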
2019-03-12 §
16:06 <joal> Rerun webrequest-load-wf-text-2019-3-12-11 after error [analytics]
2019-03-08 §
20:48 <joal> Rerun webrequest-load-wf-upload-2019-3-8-19 after hive outage [analytics]
14:52 <joal> deployed wikistats2 2.5.5 [analytics]
2019-03-07 §
14:50 <joal> Restart mediawiki-history after having corrected data [analytics]
13:52 <joal> manually killing mediawiki-history-denormalize-wf-2019-02 instead of letting it fail another 3 attempts [analytics]
10:40 <joal> Manually fixed sqoop issues [analytics]
2019-03-06 §
18:13 <joal> Refinery deployed onto hadoop [analytics]
18:08 <joal> Refinery deployed using scap [analytics]
2019-03-04 §
16:17 <elukey> disable all report updater jobs via puppet (ensure => absent) due to dbstore1002 decom [analytics]
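(The ensure => absent change itself lives in the Puppet manifests; on the affected host the usual follow-up is a manual agent run so the jobs disappear immediately rather than on the next scheduled run.)

    # apply the merged puppet change right away
    sudo puppet agent --test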
2019-02-28 §
17:16 <milimetric> restarted mediawiki/history/load job: https://hue.wikimedia.org/oozie/list_oozie_coordinator/0131840-181112144035577-oozie-oozi-C/ [analytics]
14:40 <milimetric> refinery deployed with new sqoop logic and updated history/load job [analytics]
09:57 <fdans> restarting mediawiki-history-wikitext coordinator [analytics]
09:56 <fdans> restarting mediawiki-history-check_denormalize [analytics]
09:48 <fdans> restarting mediawiki-history-denormalize coordinator [analytics]
2019-02-27 §
17:42 <elukey> re-run webrequest-load-wf-upload-2019-2-27-16 (failed due to a shutdown of analytics1071 for hw maintenance) [analytics]
2019-02-24 §
10:24 <elukey> restart check webrequest service on an-coord1001 (failed due to /mnt/hdfs being unavail) [analytics]
2019-02-20 §
18:17 <fdans> deploying refinery [analytics]