2020-06-03
17:10 <elukey> restart RU jobs after adding memory to an-launcher1001 [analytics]
16:57 <elukey> reboot an-launcher1001 to get new memory [analytics]
16:01 <elukey> stop timers on an-launcher, prep for reboot [analytics]
09:35 <elukey> re-run webrequest-druid-hourly-coord 03/06T7 (failed due to druid1002 being reimaged to Buster) [analytics]
08:50 <elukey> reimage druid1002 to Buster [analytics]
2020-06-01
14:54 <elukey> stop all timers on an-launcher1001, prep step for reboot [analytics]
12:54 <elukey> /user/dedcode/.Trash/* -skipTrash [analytics]
06:53 <elukey> re-run virtualpageview-hourly-wf-2020-5-31-19 [analytics]
06:28 <elukey> temporarily stop all RU jobs on an-launcher1001 to privilege camus and others [analytics]
06:03 <elukey> kill all airflow-related processes on an-launcher1001 - the host was killing tasks due to OOM [analytics]
2020-05-30
08:15 <elukey> manual reset-failed of monitor_refine_mediawiki_job_events_failure_flags [analytics]
2020-05-29
13:19 <elukey> re-run druid webrequest hourly 29/05T11 (failed due to a host reimage in progress) [analytics]
12:19 <elukey> reimage druid1001 to Debian Buster [analytics]
10:05 <elukey> move el2druid config from druid1001 to an-druid1001 [analytics]
2020-05-28
18:31 <milimetric> after deployment, restarted four oozie jobs with new SLAs and fixed dataset definitions [analytics]
06:40 <elukey> slowly restarting all RU units on an-launcher1001 [analytics]
06:32 <elukey> delete old RU pid files with timestamp May 27 19:00 (scap deployment to an-launcher failed due to disk issues), except ./jobs/reportupdater-queries/pingback/.reportupdater.pid which was working fine [analytics]
2020-05-27
19:53 <joal> Start pageview-complete dump oozie job after deploy [analytics]
19:24 <joal> Deploy refinery onto hdfs [analytics]
19:22 <joal> restart failed services on an-launcher1001 [analytics]
19:06 <joal> Deploy refinery using scap to an-launcher1001 only [analytics]
18:41 <joal> Deploying refinery with scap [analytics]
13:42 <ottomata> increased Kafka topic retention in jumbo-eqiad to 31 days for (eqiad|codfw).mediawiki.revision-create - T253753 [analytics]
07:09 <joal> Rerun webrequest-druid-hourly-wf-2020-5-26-17 [analytics]
07:04 <elukey> matomo upgraded to 3.13.5 on matomo1001 [analytics]
06:17 <elukey> superset upgraded to 0.36 [analytics]
05:52 <elukey> attempt to upgrade Superset to 0.36 - downtime expected [analytics]
2020-05-24
10:04 <elukey> re-run virtualpageview-hourly 23/05T15 - failed due to a sporadic Kerberos/Hive issue [analytics]
2020-05-22
09:11 <elukey> superset upgrade attempt to 0.36 failed due to a db upgrade error (not seen in staging), rollback to 0.35.2 [analytics]
08:15 <elukey> superset down for maintenance [analytics]
07:09 <elukey> add druid100[7,8] to the LVS druid-public-brokers service (serving AQS's traffic) [analytics]
2020-05-21
17:24 <elukey> add druid100[7,8] to the druid public cluster (not serving load balancer traffic for the moment, only joining the cluster) - T252771 [analytics]
16:44 <elukey> roll restart druid historical nodes on druid100[4-6] (public cluster) to pick up new settings - T252771 [analytics]
14:02 <elukey> restart druid kafka supervisor for wmf_netflow after maintenance [analytics]
13:53 <elukey> restart druid-historical on an-druid100[1,2] to pick up new settings [analytics]
13:17 <elukey> kill wmf_netflow druid supervisor for maintenance [analytics]
13:13 <elukey> stop druid daemons on druid100[1-3] (one at a time) to move the druid partition from /srv/druid to /srv (didn't think about it before) - T252771 [analytics]
09:16 <elukey> move Druid Analytics SQL in Superset to druid://an-druid1001.eqiad.wmnet:8082/druid/v2/sql/ [analytics]
09:05 <elukey> move turnilo to an-druid1001 (beefier host) [analytics]
08:15 <elukey> roll restart of all druid historicals in the analytics cluster to pick up new settings [analytics]
2020-05-20
13:55 <milimetric> deployed refinery with refinery-source v0.0.125 [analytics]
2020-05-19
15:28 <elukey> restart hadoop master daemons on an-master100[1,2] for openjdk upgrades [analytics]
06:29 <elukey> roll restart zookeeper on druid100[4-6] for openjdk upgrades [analytics]
06:18 <elukey> roll restart zookeeper on druid100[1-3] for openjdk upgrades [analytics]
2020-05-18
14:02 <elukey> roll restart of hadoop daemons on the prod cluster for openjdk upgrades [analytics]
13:30 <elukey> roll restart hadoop daemons on the test cluster for openjdk upgrades [analytics]
10:33 <elukey> add an-druid100[1,2] to the Druid Analytics cluster [analytics]
2020-05-15
13:23 <elukey> roll restart of the Druid analytics cluster to pick up new openjdk + /srv completed [analytics]
13:15 <elukey> turnilo back to druid1001 [analytics]
13:03 <elukey> move turnilo config to druid1002 to ease druid maintenance [analytics]