2021-08-20
08:46 <btullis> btullis@druid1001:~$ sudo systemctl stop druid-broker druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
2021-08-19
19:05 <razzi> razzi@deploy1002:/srv/deployment/analytics/aqs/deploy$ scap deploy "Deploy aqs 9c062f2" [analytics]
19:02 <razzi> note that the aqs-deploy repo's commit message DOES NOT include the aqs changes in its change list (though it has the correct SHA in the first line) [analytics]
18:26 <razzi> Beginning aqs deploy process [analytics]
17:55 <razzi> razzi@labstore1007:~$ sudo systemctl start analytics-dumps-fetch-geoeditors_dumps.service [analytics]
17:53 <razzi> sudo systemctl start analytics-dumps-fetch-geoeditors_dumps.service on labstore1006 [analytics]
2021-08-18
17:37 <btullis> on an-coord1001: MariaDB [superset_production]> update clusters set broker_host='an-druid1001.eqiad.wmnet' where cluster_name='analytics-eqiad'; [analytics]
15:08 <joal> Restart oozie jobs loading druid to use the new druid host [analytics]
08:55 <joal> Deploying refinery with scap [analytics]
2021-08-13
16:46 <elukey> cleanup /srv/discovery on stat1007 after https://gerrit.wikimedia.org/r/c/operations/puppet/+/712422 [analytics]
15:16 <milimetric> reran the other three failed jobs successfully [analytics]
14:52 <milimetric> rerunning webrequest-druid-hourly-wf-2021-8-13-13 because of failure to connect to Hive metastore [analytics]
2021-08-12
14:46 <btullis> btullis@druid1002:/etc/zookeeper/conf$ sudo systemctl disable druid-broker druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
14:45 <btullis> btullis@druid1002:/etc/zookeeper/conf$ sudo systemctl stop druid-broker druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
2021-08-11
19:43 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-overlord && sudo systemctl disable druid-overlord [analytics]
19:41 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-historical && sudo systemctl disable druid-historical [analytics]
19:40 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-coordinator && sudo systemctl disable druid-coordinator [analytics]
19:37 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-broker && sudo systemctl disable druid-broker [analytics]
19:30 <btullis> btullis@druid1003:~$ curl -X POST http://druid1003.eqiad.wmnet:8091/druid/worker/v1/disable [analytics]
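The 19:30-19:43 entries above follow a drain pattern: first disable the middlemanager worker over Druid's HTTP API so it accepts no new tasks, then stop and disable each remaining service. A minimal sketch of that sequence (the `drain_commands` helper is hypothetical; it only prints the commands rather than running them, and the service order is taken from the log):

```shell
# Hypothetical helper mirroring the stop/disable sequence from the log.
# Step 1 (not executed here) would be the worker-disable API call:
#   curl -X POST http://druid1003.eqiad.wmnet:8091/druid/worker/v1/disable
drain_commands() {
    # Emit the per-service commands in the order the log shows
    # (broker, coordinator, historical, overlord).
    for svc in druid-broker druid-coordinator druid-historical druid-overlord; do
        printf 'sudo systemctl stop %s && sudo systemctl disable %s\n' "$svc" "$svc"
    done
}
drain_commands
```

Printing the commands first makes it easy to review the drain order before pasting them on the host.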
12:13 <btullis> migration of zookeeper from druid1002 to an-druid1002 complete, with quorum and two synced followers. Re-enabling puppet on all druid nodes. [analytics]
09:48 <btullis> suspended the following oozie jobs in hue: webrequest-druid-hourly-coord, pageview-druid-hourly-coord, edit-hourly-druid-coord [analytics]
09:45 <btullis> btullis@an-launcher1002:~$ sudo systemctl disable eventlogging_to_druid_editattemptstep_hourly.timer eventlogging_to_druid_navigationtiming_hourly.timer eventlogging_to_druid_netflow_hourly.timer eventlogging_to_druid_prefupdate_hourly.timer [analytics]
09:21 <elukey> run "sudo find /var/log/airflow -type f -mtime +15 -delete" on an-airflow1001 to free space (root partition almost full) [analytics]
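The cleanup in the entry above deletes files older than 15 days. When running this kind of command it is worth doing a `-print` dry run before the `-delete`; a sketch (the `list_stale_logs` wrapper is hypothetical, and the demo uses a throwaway directory rather than /var/log/airflow):

```shell
# Dry-run wrapper around the cleanup command from the log entry above:
# lists what "find ... -mtime +N -delete" would remove, without removing it.
list_stale_logs() {
    # $1: directory, $2: age threshold in days
    find "$1" -type f -mtime "+$2" -print
}

# Demo on a temporary directory with one artificially old file
# (GNU touch -d is assumed, as on the Debian hosts in this log).
demo="$(mktemp -d)"
touch -d '20 days ago' "$demo/old.log"
touch "$demo/new.log"
list_stale_logs "$demo" 15   # prints only old.log
```

Once the dry run shows only the expected files, swapping `-print` for `-delete` gives the command from the log.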
2021-08-10
17:27 <razzi> resume the following schedules in hue: edit-hourly-druid-coord, pageview-druid-hourly-coord, webrequest-druid-hourly-coord [analytics]
17:10 <razzi> sudo cookbook sre.druid.roll-restart-workers analytics (errored out) [analytics]
09:04 <btullis> btullis@an-launcher1002:~$ sudo systemctl restart eventlogging_to_druid_prefupdate_hourly.service [analytics]
09:04 <btullis> btullis@an-launcher1002:~$ sudo systemctl restart eventlogging_to_druid_netflow_daily.service [analytics]
2021-08-09
10:45 <btullis_> btullis@an-druid1003:/var/log/druid$ sudo chown -R druid:druid /srv/druid /var/log/druid [analytics]
10:25 <btullis_> btullis@an-druid1003:~$ sudo puppet agent -tv [analytics]
2021-08-04
09:12 <btullis> btullis@an-coord1001:~$ sudo systemctl start hive-metastore.service hive-server2.service [analytics]
09:12 <btullis> btullis@an-coord1001:~$ sudo systemctl stop hive-server2.service hive-metastore.service [analytics]
09:00 <btullis> sudo systemctl start hive-metastore && sudo systemctl start hive-server2 [analytics]
09:00 <btullis> btullis@an-coord1002:~$ sudo systemctl stop hive-server2 && sudo systemctl stop hive-metastore [analytics]
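The four entries above use a strict ordering: hive-server2 is stopped before hive-metastore, and the metastore is started before hive-server2, since hive-server2 depends on a running metastore. A sketch of that ordering (the helper functions are hypothetical; they print the commands rather than running them):

```shell
# Hypothetical helpers encoding the restart ordering from the log:
# stop the dependent service first, start the dependency first.
hive_stop_order()  { printf '%s\n' hive-server2 hive-metastore; }
hive_start_order() { printf '%s\n' hive-metastore hive-server2; }

hive_stop_order  | while read -r svc; do echo "sudo systemctl stop $svc";  done
hive_start_order | while read -r svc; do echo "sudo systemctl start $svc"; done
```

Keeping the two orders as separate lists makes it harder to accidentally restart the services in dependency-violating order.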
2021-08-03
19:23 <ottomata> bump Refine to refinery version 0.1.16 to pick up normalized_host transform - now all event tables will have a new normalized_host field - T251320 [analytics]
19:02 <ottomata> Deployed refinery using scap, then deployed onto hdfs [analytics]
14:57 <ottomata> rerunning webrequest refine for upload 08-03T01:00 - 0042643-210701181527401-oozie-oozi-W [analytics]
2021-08-02
18:49 <razzi> sudo cookbook sre.druid.roll-restart-workers analytics [analytics]
17:57 <razzi> sudo cookbook sre.druid.roll-restart-workers public [analytics]
2021-07-30
22:22 <razzi> razzi@cumin1001:~$ sudo cookbook sre.druid.roll-restart-workers test [analytics]
2021-07-29
18:12 <razzi> sudo cookbook sre.aqs.roll-restart aqs [analytics]
2021-07-28
10:46 <btullis> btullis@an-test-coord1001:/etc/hive/conf$ sudo systemctl start hive-metastore.service hive-server2.service [analytics]
10:46 <btullis> btullis@an-test-coord1001:/etc/hive/conf$ sudo systemctl stop hive-server2.service hive-metastore.service [analytics]
2021-07-26
20:54 <razzi> reran the failed workflow of cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2021-7-25 [analytics]
2021-07-22
18:38 <ottomata> deploy refinery to an-launcher1002 for bin/gobblin job lock change [analytics]
2021-07-20
20:30 <joal> rerun webrequest timed-out instances [analytics]
18:58 <mforns> starting refinery deployment [analytics]
18:40 <razzi> razzi@an-launcher1002:~$ sudo puppet agent --enable [analytics]
18:39 <razzi> razzi@an-master1001:/var/log/hadoop-hdfs$ sudo -u yarn kerberos-run-command yarn yarn rmadmin -refreshQueues [analytics]
18:37 <razzi> razzi@an-master1002:~$ sudo -i puppet agent --enable [analytics]
18:34 <razzi> razzi@an-master1002:~$ sudo -u yarn kerberos-run-command yarn yarn rmadmin -refreshQueues [analytics]
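The 18:34-18:39 entries show the same queue-refresh step run on both Hadoop masters: `yarn rmadmin -refreshQueues` executed as the yarn user, wrapped in `kerberos-run-command` to pick up the service keytab. A sketch that generates the per-host commands (the `refresh_queue_cmds` helper is hypothetical; host names are the ones in the log, and the output is printed for review rather than executed):

```shell
# Hypothetical generator for the kerberized queue-refresh commands
# from the log entries above, one per ResourceManager host.
refresh_queue_cmds() {
    for host in an-master1001 an-master1002; do
        echo "ssh $host -- sudo -u yarn kerberos-run-command yarn yarn rmadmin -refreshQueues"
    done
}
refresh_queue_cmds
```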