2020-10-18
07:01 <elukey> decom analytics1054 from hadoop [analytics]
2020-10-17
06:08 <elukey> decom analytics1053 from the hadoop cluster [analytics]
2020-10-15
17:57 <razzi> taking yarn.wikimedia.org offline momentarily to test new tls configuration: T240439 [analytics]
14:51 <elukey> roll restart druid-historical daemons on druid1004-1008 to pick up new conn pooling changes [analytics]
07:03 <elukey> restart oozie to pick up the analytics team's admin list [analytics]
06:09 <elukey> decommission analytics1050 from the hadoop cluster [analytics]
2020-10-14
17:39 <joal> Rerun refine for mediawiki_api_request failed hour [analytics]
15:59 <elukey> drain + reboot an-worker1100 to pick up GPU settings [analytics]
15:29 <elukey> drain + reboot an-worker110[1,2] to pick up GPU settings [analytics]
14:56 <elukey> drain + reboot an-worker109[8,9] to pick up GPU settings [analytics]
05:48 <elukey> decom analytics1049 from the Hadoop cluster [analytics]
2020-10-13
12:38 <elukey> drop /srv/backup/mysql from an-master1002 (not used anymore) [analytics]
08:59 <klausman> Regenned the jupyterhub venvs on stat1004 [analytics]
07:56 <klausman> re-imaging stat1004 to Buster [analytics]
06:20 <elukey> decom analytics1048 from the Hadoop cluster [analytics]
2020-10-12
11:36 <joal> Clean druid test-datasources [analytics]
11:32 <elukey> remove analytics-meta lvm backup settings from an-coord1001 [analytics]
11:23 <elukey> remove analytics-meta lvm backup settings from an-master1002 [analytics]
07:02 <elukey> reduce hdfs block replication factor on Hadoop test to 2 [analytics]
05:37 <elukey> decom analytics1047 from the Hadoop cluster [analytics]
2020-10-11
08:33 <elukey> drop some old namenode backups under /srv on an-master1002 to free some space [analytics]
08:24 <elukey> decommission analytics1046 from the hadoop cluster [analytics]
08:12 <elukey> clean up logs on an-launcher1002 (disk space full) [analytics]
2020-10-10
12:01 <elukey> decommission analytics1045 from the Hadoop cluster [analytics]
2020-10-09
13:17 <elukey> execute "cumin 'stat100[5,8]* or an-worker109[6-9]* or an-worker110[0,1]*' 'apt-get install -y linux-headers-amd64'" [analytics]
11:15 <elukey> bootstrap the Analytics Hadoop test cluster [analytics]
09:47 <elukey> roll restart of hadoop-yarn-nodemanager on all hadoop workers to pick up new settings [analytics]
07:58 <elukey> decom analytics1044 from Hadoop [analytics]
07:04 <elukey> failover from an-master1002 to 1001 for HDFS namenode (the namenode failed over hours ago, no logs to check) [analytics]
2020-10-08
18:08 <razzi> restart oozie server on an-coord1001 to revert T262660 [analytics]
17:42 <razzi> restart oozie server on an-coord1001 for T262660 [analytics]
17:19 <elukey> removed /var/lib/puppet/clientbucket/6/f/a/c/d/9/8/d/6facd98d16886787ab9656eef07d631e/content on an-launcher1002 (29G, last modified Aug 4th) [analytics]
15:45 <elukey> executed git pull on /srv/jupyterhub/deploy and re-ran create_virtualenv.sh on stat1007 (pyspark kernels may not run correctly due to a missing feature) [analytics]
15:43 <elukey> executed git pull on /srv/jupyterhub/deploy and re-ran create_virtualenv.sh on stat1006 (pyspark kernels not running due to a missing feature) [analytics]
13:13 <elukey> roll restart of druid overlords and coordinators on druid public to pick up new TLS settings [analytics]
12:51 <elukey> roll restart of druid overlords and coordinators on druid analytics to pick up new TLS settings [analytics]
10:35 <elukey> force the re-creation of default jupyterhub venvs on stat1006 after reimage [analytics]
08:47 <klausman> Starting re-image of stat1006 to Buster [analytics]
07:14 <elukey> decom analytics1043 from the Hadoop cluster [analytics]
06:46 <elukey> move the hdfs balancer from an-coord1001 to an-launcher1002 [analytics]
2020-10-07
08:45 <elukey> decom analytics1042 from hadoop [analytics]
2020-10-06
13:14 <elukey> cleaned up /srv/jupyter/venv and re-created it to allow jupyterhub to start cleanly on stat1007 [analytics]
12:56 <joal> Restart oozie to pick up new spark settings [analytics]
12:47 <elukey> force re-creation of the base virtualenv for jupyter on stat1007 after the reimage [analytics]
12:20 <elukey> update HDFS Namenode GC/Heap settings on an-master100[1,2] [analytics]
12:19 <elukey> increase spark shuffle io retry logic (10 tries every 10s) [analytics]
09:08 <elukey> add an-worker1114 to the hadoop cluster [analytics]
09:04 <klausman> Starting reimaging of stat1007 [analytics]
07:32 <elukey> bootstrap an-worker111[13] as hadoop workers [analytics]
2020-10-05
19:14 <mforns> restarted oozie coord unique_devices-per_domain-monthly after deployment [analytics]