
LiveQuery

With LiveQuery, you can process data directly in your cloud data warehouse while building your Cloud Native workflow. LiveQuery provides access to your cloud data warehouse, enabling you to work seamlessly in Designer Cloud without moving or replicating data into your Alteryx Analytics Cloud (AAC) environment. Use LiveQuery to work directly with your entire dataset in real time, rather than relying on a sample.

Use LiveQuery to connect directly to your cloud data warehouses on Snowflake or Databricks.

To enable LiveQuery as a Workspace Admin, go to Profile menu > Workspace Admin > Settings.

To use LiveQuery, open a Cloud Native workflow in Designer Cloud and enable LiveQuery from the Options menu.

Limitations

  • For Input, only cloud data warehouse (CDW) tables or CSV files are supported as sources.

  • The DatetimeNow and Text Input tools aren't supported.

  • These Transform tools aren't currently supported:

    • Regex

    • Find Replace

    • Unique

    • Cross Tab

    • Arrange

    • Transpose

    • Dynamic Rename

  • Complex data types like array, map, struct, variant, and binary aren't supported.

Databricks LiveQuery

With LiveQuery, you can connect Databricks to Databricks, files to Databricks, or any source to Databricks.

  • Input Types: Databricks tables, Custom SQL tables, and views.

  • Supported Ecosystems: AWS and Azure.

  • Features Supported:

    • PDP (Policy-Driven Pipelines)

    • File Input

    • Connections from any source

  • Not Supported: LiveQuery is not functional in AWS workspaces configured with key or secret credentials.

Data Handling Notes

Text Input Tool

Column names that contain special characters (e.g., []) aren't renamed properly by the internal parser, which can cause downstream processing issues with the SQL Transform tool.
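
If a column with special characters in its name does reach a SQL step, one hedged workaround is to rename it explicitly before further processing. This is a minimal sketch, assuming Databricks SQL, where such identifiers must be backtick-quoted; the table and column names are hypothetical:

```sql
-- Minimal sketch, assuming Databricks SQL and a hypothetical source
-- column literally named "amount[usd]". Identifiers with special
-- characters must be backtick-quoted; renaming to a plain identifier
-- avoids downstream issues in SQL-based transforms.
SELECT
  `amount[usd]` AS amount_usd
FROM sales_input;  -- sales_input is an assumed table name
```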

Union Tool

The order of elements returned by the UNION operation in SQL is not guaranteed.
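
If a workflow depends on row order after a union, one way to make the order deterministic is an explicit ORDER BY, as in this sketch (table and column names are assumptions):

```sql
-- UNION output order is not guaranteed by SQL; sort explicitly when
-- downstream steps depend on row order.
SELECT id, region FROM orders_2023
UNION
SELECT id, region FROM orders_2024
ORDER BY id;  -- deterministic order across runs
```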

Select Rows Tool

Databricks applies a "nulls first" sort order on ascending sorts, which can produce a different row order than Cloud Native mode.
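
To approximate Cloud Native ordering, you can state the null placement explicitly. The sketch below assumes Databricks SQL, which supports NULLS FIRST/NULLS LAST modifiers; the table and column names are hypothetical:

```sql
-- Databricks places NULLs first on ascending sorts by default.
-- Request NULLS LAST explicitly when the workflow expects NULLs at
-- the end of the result.
SELECT customer_id, last_login   -- assumed columns
FROM customers                   -- assumed table
ORDER BY last_login ASC NULLS LAST;
```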

Float Data Representation

Columns of type float that contain integer-like values (e.g., 12, 202) are returned in decimal format (e.g., 12.0, 202.0) with the following tools (see the sketch after this list):

  • Weighted Average tool: for Databricks (DBX) tables and Text Input.

  • Running Total tool: for DBX tables and Text Input, the column type is shown as Integer.

  • Text to Columns tool: for Text Input.
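
As a minimal sketch of this behavior, assuming Databricks SQL: an integer-like value stored as FLOAT comes back in decimal form, and a cast restores an integer rendering when the values are known to be whole numbers.

```sql
-- Integer-like values in a FLOAT column are returned in decimal form.
SELECT
  CAST(12 AS FLOAT)                 AS as_float,  -- returns 12.0
  CAST(CAST(12 AS FLOAT) AS BIGINT) AS as_int;    -- returns 12 (safe only for whole numbers)
```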

Parquet File Input

Date, Datetime, and Timestamp column types will be interpreted as string values.
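
When date semantics matter downstream, one hedged workaround is to cast the string columns back to temporal types after input. The sketch assumes Databricks SQL and ISO-formatted values; table and column names are hypothetical:

```sql
-- Parquet Date/Datetime/Timestamp columns arrive as strings; cast
-- them back before doing date arithmetic. Assumes ISO-8601 formats.
SELECT
  CAST(order_date AS DATE)      AS order_date,
  CAST(updated_at AS TIMESTAMP) AS updated_at
FROM parquet_input;  -- assumed table name
```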

Databricks Limitations

Unsupported Transforms and Data Types

  • Transforms that use UDFs are not supported and are disabled. These include DatetimeNow, Unique, Regex, Arrange, CrossTab, and Transpose.

  • Complex data types such as array, map, struct, variant, binary, object, and interval are not supported.

  • The Postgres TIME data type is not supported.

  • Date types (e.g., 2024-08-08) are published to Databricks as timestamp values (2024-08-08T00:00:00.000).
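
If downstream consumers need the original DATE values, a cast on the Databricks side recovers them, as in this sketch (table and column names are assumptions):

```sql
-- A DATE such as 2024-08-08 is published as the timestamp
-- 2024-08-08T00:00:00.000; casting back to DATE recovers the value.
SELECT CAST(event_ts AS DATE) AS event_date
FROM published_events;  -- assumed table and column names
```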

Memo API and Workflow Issues

  • Data does not load for memo API calls when using duplicated or exported LiveQuery workflows.

  • If the SQL warehouse is updated in the Admin Settings for Databricks, the new configuration will take effect after 15 minutes due to caching.

Performance Considerations

When using Classic and Pro SQL Warehouses in Databricks, performance may be slow, depending on the size of the warehouse.

If the SQL warehouse takes a long time to start, memo API calls may fail due to timeouts. Ensure the SQL warehouse is up and running before executing workflows.
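
One lightweight way to do this is to run a trivial query against the warehouse before starting the workflow; on warehouses with auto-start enabled, this triggers startup and confirms responsiveness. A minimal sketch:

```sql
-- Throwaway warm-up query: triggers auto-start (if enabled) and
-- verifies the warehouse responds before the workflow runs.
SELECT 1 AS warmup_check;
```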