2020-07-15
08:18 <elukey> move piwik to CAS (idp.wikimedia.org) [analytics]
2020-07-14
15:50 <elukey> upgrade spark2 on all stat100x hosts [analytics]
15:07 <elukey> upgrade spark2 to 2.4.4-bin-hadoop2.6-3 on stat1004 [analytics]
14:55 <elukey> re-create jupyterhub's venv on stat1005/8 after https://gerrit.wikimedia.org/r/612484 [analytics]
14:45 <elukey> re-create jupyterhub's base kernel directory on stat1005 (trying to debug some problems) [analytics]
07:27 <joal> Restart forgotten unique-devices per-project-family jobs after yesterday's deploy [analytics]
2020-07-13
20:17 <milimetric> deployed weekly train with two oozie job bugfixes and rename to pageview_actor table [analytics]
19:42 <joal> Deploy refinery with scap [analytics]
19:24 <joal> Drop pageview_actor_hourly and replace it with pageview_actor [analytics]
18:26 <joal> Kill pageview_actor_hourly and unique_devices_per_project_family jobs to copy backfilled data [analytics]
12:35 <joal> Start backfilling of wdqs_internal (external had been done, not internal :S) [analytics]
2020-07-10
17:10 <nuria> updating the EL whitelist, refinery redeploy (but not source) [analytics]
16:01 <milimetric> deployed, EL whitelist is updated [analytics]
2020-07-09
18:52 <elukey> upgrade spark2 to 2.4.4-bin-hadoop2.6-3 on stat1008 [analytics]
2020-07-07
10:12 <elukey> decom archiva1001 [analytics]
2020-07-06
08:09 <elukey> roll restart aqs on aqs100[4-9] to pick up new druid settings [analytics]
07:51 <elukey> enable binlog on matomo's database on matomo1002 [analytics]
2020-07-04
10:52 <joal> Rerun mediawiki-geoeditors-monthly-wf-2020-06 after heisenbug (patch provided for long-term fix) [analytics]
2020-07-03
19:20 <joal> restart failed webrequest-load job webrequest-load-wf-text-2020-7-3-17 with higher thresholds - error due to burst of requests in ulsfo [analytics]
19:13 <joal> restart mediawiki-history-denormalize oozie job using 0.0.115 refinery-job jar [analytics]
19:05 <joal> kill manual execution of mediawiki-history to save an-coord1001 (too big of a spark-driver) [analytics]
18:53 <joal> restart webrequest-load-wf-text-2020-7-3-17 after hive server failure [analytics]
18:52 <joal> restart data_quality_stats-wf-event.navigationtiming-useragent_entropy-hourly-2020-7-3-15 after hive server failure [analytics]
18:51 <joal> restart virtualpageview-hourly-wf-2020-7-3-15 after hive-server failure [analytics]
16:41 <joal> Rerun mediawiki-history-check_denormalize-wf-2020-06 after having cleaned up wrong files and restarted a job without deterministic skewed join [analytics]
2020-07-02
18:16 <joal> Launch a manual instance of mediawiki-history-denormalize to release data despite oozie failing [analytics]
16:17 <joal> rerun mediawiki-history-denormalize-wf-2020-06 after oozie sharelib bump through manual restart [analytics]
12:41 <joal> retry mediawiki-history-denormalize-wf-2020-06 [analytics]
07:26 <elukey> start a tmux on an-launcher1002 with 'sudo -u analytics /usr/local/bin/kerberos-run-command analytics /usr/local/bin/refinery-sqoop-mediawiki-production' [analytics]
07:20 <elukey> execute systemctl reset-failed refinery-sqoop-whole-mediawiki.service to clear out alarms on launcher1002 [analytics]
2020-07-01
19:04 <joal> Kill/restart webrequest-load-bundle for mobile-pageview update [analytics]
18:59 <joal> kill/restart pageview-druid jobs (hourly, daily, monthly) for in_content_namespace field update [analytics]
18:57 <joal> kill/restart mediawiki-wikitext-history-coord and mediawiki-wikitext-current-coord for bz2 codec update [analytics]
18:55 <joal> kill/restart mediawiki-history-denormalize-coord after skewed-join strategy update [analytics]
18:52 <joal> Kill/Restart unique_devices-per_project_family-monthly-coord after fix [analytics]
18:41 <joal> deploy refinery to HDFS [analytics]
18:28 <joal> Deploy refinery using scap after hotfix [analytics]
18:20 <joal> Deploy refinery using scap [analytics]
16:58 <joal> trying to release refinery-source 0.0.129 to archiva, version 3 [analytics]
16:51 <elukey> remove /etc/maven/settings.xml from all analytics nodes that have it [analytics]
2020-06-30
18:28 <joal> trying to release refinery-source to archiva from jenkins (second time) [analytics]
16:30 <joal> Release refinery-source v0.0.129 using jenkins [analytics]
16:30 <joal> Deploy refinery [analytics]
16:05 <elukey> re-enable timers on an-launcher1002 after archiva maintenance [analytics]
15:23 <elukey> stop timers on an-launcher1002 to ease debugging for refinery deploy [analytics]
13:12 <elukey> restart nodemanager on analytics1068 after GC overhead and OOMs [analytics]
09:32 <joal> Kill/Restart mediawiki-wikitext-history job now that the current month one is done (bz2 fix) [analytics]
2020-06-29
13:09 <elukey> archiva.wikimedia.org migrated to archiva1002 [analytics]
2020-06-25
17:20 <elukey> move RU jobs/timers from an-launcher1001 to an-launcher1002 [analytics]
16:07 <elukey> move all timers but RU from an-launcher1001 to 1002 (puppet disabled on 1001, all timers completed) [analytics]