Configure for Hortonworks
This section provides additional configuration requirements for integrating the Designer Cloud Powered by Trifacta platform with the Hortonworks Data Platform.
This section applies only to the versions of HDP that are supported by the Designer Cloud Powered by Trifacta platform. For more information, see Supported Deployment Scenarios for Hortonworks.
Note
Except as noted, the following configuration items apply to the latest supported version of Hortonworks Data Platform.
Prerequisites
Before you begin, verify that you have completed the following tasks:
Successfully installed a supported version of Hortonworks Data Platform into your enterprise infrastructure.
Installed the Alteryx software in your environment. For more information, see Install Software.
Reviewed the mechanics of platform configuration. See Required Platform Configuration.
Configured access to the Alteryx database. See Configure the Databases.
Performed the basic Hadoop integration configuration. See Configure for Hadoop.
Verified that you have access to platform configuration, either via the Trifacta node or through the Admin Settings page.
Hortonworks Cluster Configuration
The following changes need to be applied to Hortonworks cluster configuration files or to configuration areas inside Ambari.
Tip
Ambari is the recommended method for configuring your Hortonworks cluster.
Configure for Ranger
If you have deployed Ranger in a Kerberized environment, you must verify and complete the following changes in Ambari.
Steps:
If you have enabled Ranger, navigate to Hive > Configs > Settings.
Choose Authorization: Ranger.
Choose HiveServer2 Authentication: Kerberos.
If you have enabled Ranger and Hive, navigate to Hive > Configs > Advanced > General.
hive.security.authorization.manager: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory
Navigate to Hive > Configs > Advanced > Advanced hive-site.
hive.security.authentication.manager: org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator
hive.conf.restricted.list: hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role,hive.security.authorization.enabled
Navigate to Hive > Configs > Advanced > Custom hive-site. Changes in this area update hive-site.xml.
hadoop.proxyuser.trifacta.groups: [hadoop.group (default=trifactausers)]
hadoop.proxyuser.trifacta.hosts: *
hive2.jdbc.url: <your_jdbc_url>
hive.metastore.sasl.enabled: true
Save your configuration changes.
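After you save, Ambari writes the Custom hive-site entries to hive-site.xml. As a sketch, the resulting properties look roughly like the following (the trifacta user and trifactausers group are the defaults and may differ in your deployment; the hive2.jdbc.url value is environment-specific and omitted here):

```xml
<!-- Sketch of the Custom hive-site entries above as hive-site.xml properties. -->
<!-- "trifacta" and "trifactausers" are default names; adjust to your deployment. -->
<property>
  <name>hadoop.proxyuser.trifacta.groups</name>
  <value>trifactausers</value>
</property>
<property>
  <name>hadoop.proxyuser.trifacta.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
```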
Configure for Spark Profiling
For Hortonworks 3.0 and later, the intermediate dataset files that are generated as part of Spark profiling of your job can cause the job to hang when the source is a Hive table. As a precaution, if you are profiling jobs from Hive sources, you should disable the following property on Hortonworks 3.0 and later.
Steps:
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Locate the spark.props setting.
Insert the following setting:
"transformer.dataframe.cache.reused": "false"
Save your changes and restart the platform.
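In trifacta-conf.json, the spark.props block with this override would look roughly like the following (a sketch; any other properties already present in spark.props in your deployment should be preserved alongside the new entry):

```json
"spark.props": {
  "transformer.dataframe.cache.reused": "false"
}
```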
If you are using S3 as your datastore and have enabled Spark profiling, you must apply the following configuration, which adds the hadoop-aws JAR and the aws-java-sdk JAR to the extra class path for Spark.
Steps:
In Ambari, navigate to Spark2 > Configs.
Add a new parameter to Custom Spark2-defaults.
Set the parameter as follows. The following value is specific to HDP 2.5.3.0, build 37; adjust the paths and JAR versions for your build:
spark.driver.extraClassPath=/usr/hdp/2.5.3.0-37/hadoop/hadoop-aws-2.7.3.2.5.3.0-37.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/aws-java-sdk-s3-1.10.6.jar
Restart Spark from Ambari.
Restart the Designer Cloud Powered by Trifacta platform.
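Because the class path value is version-specific, it can help to assemble it from your HDP build identifier. The following is a sketch for the HDP 2.5.3.0, build 37 example above; the HDP_VERSION value and JAR file names are assumptions, so verify the actual file names under /usr/hdp on your cluster before using the result:

```shell
# Assemble the spark.driver.extraClassPath value for HDP 2.5.3.0 build 37.
# Verify the JAR file names under /usr/hdp/<version>/ on your own cluster.
HDP_VERSION="2.5.3.0-37"
HADOOP_AWS_JAR="/usr/hdp/${HDP_VERSION}/hadoop/hadoop-aws-2.7.3.${HDP_VERSION}.jar"
AWS_SDK_JAR="/usr/hdp/${HDP_VERSION}/hadoop/lib/aws-java-sdk-s3-1.10.6.jar"
EXTRA_CLASSPATH="spark.driver.extraClassPath=${HADOOP_AWS_JAR}:${AWS_SDK_JAR}"
echo "${EXTRA_CLASSPATH}"
```

Paste the echoed line into Custom Spark2-defaults in Ambari, as described in the steps above.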
If you are using Spark for profiling, you must add environment properties to your cluster configuration. See Configure for Spark.
Set up directory permissions
On all Hortonworks cluster nodes, verify that the YARN user has access to the YARN working directories:
chown yarn:hadoop /mnt/hadoop/yarn
If you are upgrading from a previous version of Hortonworks, you may need to clear the YARN user cache for the [hadoop.user (default=trifacta)] user:
rm -rf /mnt/hadoop/yarn/local/usercache/trifacta
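If your [hadoop.user] is not the default, the usercache path to clear changes accordingly. A small sketch of how the path is constructed (the directory layout follows the command above; substitute your own hadoop.user value):

```shell
# Construct the YARN usercache path for the configured hadoop.user.
# "trifacta" is the default user name; substitute your [hadoop.user] value.
YARN_LOCAL_DIR="/mnt/hadoop/yarn/local"
HADOOP_USER="trifacta"
USERCACHE_DIR="${YARN_LOCAL_DIR}/usercache/${HADOOP_USER}"
echo "${USERCACHE_DIR}"
# To clear the cache, run as root on each node:
#   rm -rf "${USERCACHE_DIR}"
```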
Configure Designer Cloud Powered by Trifacta platform
The following changes need to be applied to the Trifacta node.
Except as noted, these changes are applied to the following file in the Alteryx deployment:
trifacta-conf.json
Configure WebHDFS port
You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Verify that the port number for WebHDFS is correct:
"webhdfs.port": <webhdfs_port_num>,
Save your changes.
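As an illustration with a concrete value, the entry might look like the following. 50070 is the usual WebHDFS (NameNode HTTP) port on Hadoop 2-based HDP releases, while Hadoop 3-based HDP 3.x clusters commonly use 9870; confirm the port for your cluster in Ambari before saving:

```json
"webhdfs.port": 50070,
```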
Configure Resource Manager port
Hortonworks uses a custom port number for Resource Manager. You must update the setting for the port number used by Resource Manager. You can apply this change through the Admin Settings Page (recommended) or trifacta-conf.json. For more information, see Platform Configuration Methods.
Note
By default, Hortonworks uses 8050 for Resource Manager. Please verify that you have the correct port number.
"yarn.resourcemanager.port": 8032,
Save your changes.
Configure location of Hadoop bundle JAR
Set the Hadoop bundle JAR to the value appropriate for your distribution. The following is for Hortonworks 3.1:
"hadoopBundleJar": "hadoop-deps/hdp-3.1/build/libs/hdp-3.1-bundle.jar"
Save your changes.
Configure Hive Locations
If you are enabling an integration with Hive on the Hadoop cluster, there are some distribution-specific parameters that must be set. For more information, see Configure for Hive.
Restart
To apply your changes, restart the platform. See Start and Stop the Platform.
After restart, you should verify operations. For more information, see Verify Operations.