2021-08-12
14:46 <btullis> btullis@druid1002:/etc/zookeeper/conf$ sudo systemctl disable druid-broker druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
14:45 <btullis> btullis@druid1002:/etc/zookeeper/conf$ sudo systemctl stop druid-broker druid-coordinator druid-historical druid-middlemanager druid-overlord [analytics]
2021-08-11
19:43 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-overlord && sudo systemctl disable druid-overlord [analytics]
19:41 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-historical && sudo systemctl disable druid-historical [analytics]
19:40 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-coordinator && sudo systemctl disable druid-coordinator [analytics]
19:37 <btullis> btullis@druid1003:~$ sudo systemctl stop druid-broker && sudo systemctl disable druid-broker [analytics]
19:30 <btullis> btullis@druid1003:~$ curl -X POST http://druid1003.eqiad.wmnet:8091/druid/worker/v1/disable [analytics]
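The five entries above (read bottom-up, since entries are newest-first) drain a Druid node before shutting it down: first disable the middlemanager's worker API so no new ingestion tasks land on it, then stop and disable each service. A dry-run sketch of that sequence, with host and service names taken from the log entries (each command is echoed rather than executed; drop the `echo` to run for real):

```shell
# Drain-then-stop sequence for a Druid node (dry run).
HOST="druid1003.eqiad.wmnet"

# 1. Ask the overlord to stop assigning new tasks to this middlemanager:
echo curl -X POST "http://${HOST}:8091/druid/worker/v1/disable"

# 2. Once running tasks drain, stop and disable each Druid service:
for svc in druid-broker druid-coordinator druid-historical druid-overlord; do
    echo "sudo systemctl stop ${svc} && sudo systemctl disable ${svc}"
done
```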
12:13 <btullis> migration of zookeeper from druid1002 to an-druid1002 complete, with quorum and two synced followers. Re-enabling puppet on all druid nodes. [analytics]
09:48 <btullis> suspended the following oozie jobs in hue: webrequest-druid-hourly-coord, pageview-druid-hourly-coord, edit-hourly-druid-coord [analytics]
09:45 <btullis> btullis@an-launcher1002:~$ sudo systemctl disable eventlogging_to_druid_editattemptstep_hourly.timer eventlogging_to_druid_navigationtiming_hourly.timer eventlogging_to_druid_netflow_hourly.timer eventlogging_to_druid_prefupdate_hourly.timer [analytics]
09:21 <elukey> run "sudo find /var/log/airflow -type f -mtime +15 -delete" on an-airflow1001 to free space (root partition almost full) [analytics]
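The cleanup command above deletes immediately with no record of what went. A slightly safer variant is to list matches first, then delete. Sketch below; `LOG_DIR` is a hypothetical variable defaulting to the path from the log entry, so the same pattern can be rehearsed elsewhere:

```shell
# Safer variant of the log-cleanup entry above: dry run before delete.
LOG_DIR="${LOG_DIR:-/var/log/airflow}"

# Dry run: show files older than 15 days that would be removed
find "$LOG_DIR" -type f -mtime +15 -print 2>/dev/null || true

# Real run (as in the log entry):
# sudo find "$LOG_DIR" -type f -mtime +15 -delete
```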
2021-08-10
17:27 <razzi> resume the following schedules in hue: edit-hourly-druid-coord, pageview-druid-hourly-coord, webrequest-druid-hourly-coord [analytics]
17:10 <razzi> sudo cookbook sre.druid.roll-restart-workers analytics (errored out) [analytics]
09:04 <btullis> btullis@an-launcher1002:~$ sudo systemctl restart eventlogging_to_druid_prefupdate_hourly.service [analytics]
09:04 <btullis> btullis@an-launcher1002:~$ sudo systemctl restart eventlogging_to_druid_netflow_daily.service [analytics]
2021-08-09
10:45 <btullis_> btullis@an-druid1003:/var/log/druid$ sudo chown -R druid:druid /srv/druid /var/log/druid [analytics]
10:25 <btullis_> btullis@an-druid1003:~$ sudo puppet agent -tv [analytics]
2021-08-04
09:12 <btullis> btullis@an-coord1001:~$ sudo systemctl start hive-metastore.service hive-server2.service [analytics]
09:12 <btullis> btullis@an-coord1001:~$ sudo systemctl stop hive-server2.service hive-metastore.service [analytics]
09:00 <btullis> sudo systemctl start hive-metastore && sudo systemctl start hive-server2 [analytics]
09:00 <btullis> btullis@an-coord1002:~$ sudo systemctl stop hive-server2 && sudo systemctl stop hive-metastore [analytics]
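The two stop/start pairs above follow a deliberate ordering: hive-server2 depends on the metastore, so the dependent service is stopped first and started last. A minimal dry-run sketch of that ordering (commands are echoed, not executed):

```shell
# Hive restart ordering implied by the entries above:
# stop the dependent (hive-server2) first, start it last.
stop_order="hive-server2 hive-metastore"
start_order="hive-metastore hive-server2"

for svc in $stop_order; do
    echo sudo systemctl stop "$svc"    # dry run; drop echo to execute
done
for svc in $start_order; do
    echo sudo systemctl start "$svc"
done
```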
2021-08-03
19:23 <ottomata> bump Refine to refinery version 0.1.16 to pick up normalized_host transform - now all event tables will have a new normalized_host field - T251320 [analytics]
19:02 <ottomata> Deployed refinery using scap, then deployed onto hdfs [analytics]
14:57 <ottomata> rerunning webrequest refine for upload 08-03T01:00 - 0042643-210701181527401-oozie-oozi-W [analytics]
2021-08-02
18:49 <razzi> sudo cookbook sre.druid.roll-restart-workers analytics [analytics]
17:57 <razzi> sudo cookbook sre.druid.roll-restart-workers public [analytics]
2021-07-30
22:22 <razzi> razzi@cumin1001:~$ sudo cookbook sre.druid.roll-restart-workers test [analytics]
2021-07-29
18:12 <razzi> sudo cookbook sre.aqs.roll-restart aqs [analytics]
2021-07-28
10:46 <btullis> btullis@an-test-coord1001:/etc/hive/conf$ sudo systemctl start hive-metastore.service hive-server2.service [analytics]
10:46 <btullis> btullis@an-test-coord1001:/etc/hive/conf$ sudo systemctl stop hive-server2.service hive-metastore.service [analytics]
2021-07-26
20:54 <razzi> reran the failed workflow of cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2021-7-25 [analytics]
2021-07-22
18:38 <ottomata> deploy refinery to an-launcher1002 for bin/gobblin job lock change [analytics]
2021-07-20
20:30 <joal> rerun webrequest timed-out instances [analytics]
18:58 <mforns> starting refinery deployment [analytics]
18:40 <razzi> razzi@an-launcher1002:~$ sudo puppet agent --enable [analytics]
18:39 <razzi> razzi@an-master1001:/var/log/hadoop-hdfs$ sudo -u yarn kerberos-run-command yarn yarn rmadmin -refreshQueues [analytics]
18:37 <razzi> razzi@an-master1002:~$ sudo -i puppet agent --enable [analytics]
18:34 <razzi> razzi@an-master1002:~$ sudo -u yarn kerberos-run-command yarn yarn rmadmin -refreshQueues [analytics]
18:32 <razzi> razzi@an-master1002:~$ sudo systemctl start hadoop-yarn-resourcemanager.service [analytics]
18:31 <razzi> razzi@an-master1002:~$ sudo systemctl stop hadoop-yarn-resourcemanager.service [analytics]
18:22 <razzi> sudo -u hdfs /usr/bin/hdfs haadmin -failover an-master1002-eqiad-wmnet an-master1001-eqiad-wmnet [analytics]
18:21 <razzi> re-enable yarn queues by merging puppet patch https://gerrit.wikimedia.org/r/c/operations/puppet/+/705732 [analytics]
17:27 <razzi> razzi@cumin1001:~$ sudo -i wmf-auto-reimage-host -p T278423 an-master1001.eqiad.wmnet [analytics]
17:17 <razzi> stop all hadoop processes on an-master1001 [analytics]
16:52 <razzi> starting hadoop processes on an-master1001 since they didn't fail over cleanly [analytics]
16:31 <razzi> sudo bash gid_script.bash on an-master1001 [analytics]
16:29 <razzi> razzi@alert1001:~$ sudo icinga-downtime -h an-master1001 -d 7200 -r "an-master1001 debian upgrade" [analytics]
16:25 <razzi> razzi@an-master1001:~$ sudo systemctl stop hadoop-mapreduce-historyserver [analytics]
16:25 <razzi> sudo systemctl stop hadoop-hdfs-zkfc.service on an-master1001 again [analytics]
16:25 <razzi> sudo systemctl stop hadoop-yarn-resourcemanager on an-master1001 again [analytics]
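The 2021-07-20 entries above record a full reimage of an-master1001 (read bottom-up: downtime and stop daemons, reimage, fail HDFS back, refresh YARN queues). A condensed dry-run sketch of that procedure, with commands taken from the log entries themselves (echoed, not executed):

```shell
# Condensed an-master1001 reimage procedure from the log above (dry run).
PRIMARY="an-master1001"
STANDBY="an-master1002"

# 1. Downtime alerts and stop Hadoop daemons on the primary:
echo sudo icinga-downtime -h "$PRIMARY" -d 7200 -r "debian upgrade"
for svc in hadoop-yarn-resourcemanager hadoop-hdfs-zkfc hadoop-mapreduce-historyserver; do
    echo sudo systemctl stop "$svc"
done

# 2. Reimage, then fail HDFS back from the standby to the primary:
echo sudo -i wmf-auto-reimage-host -p T278423 "${PRIMARY}.eqiad.wmnet"
echo sudo -u hdfs /usr/bin/hdfs haadmin -failover "${STANDBY}-eqiad-wmnet" "${PRIMARY}-eqiad-wmnet"

# 3. Restart YARN on the standby and refresh queues on both masters:
echo sudo systemctl restart hadoop-yarn-resourcemanager.service
echo sudo -u yarn kerberos-run-command yarn yarn rmadmin -refreshQueues
```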