
Prompt Tool

Use the Prompt tool to send prompts to a Large Language Model (LLM) and then receive the model’s response as an output. You can also configure LLM parameters to tune the model’s output.

Note

The GenAI Tools are currently in Public Preview. Learn how to join the Public Preview and get started with AI-powered workflows!

Tool Components

The Prompt tool has 3 anchors (2 input and 1 output):

  • M input anchor: (Optional) Use the M input anchor to connect the model connection settings from the LLM Override tool. Alternatively, set up a connection within this tool.

  • D input anchor: (Optional) Use the D input anchor to connect text data you want to add to your prompt.

  • Output anchor: Use the output anchor to pass the model’s response downstream.

Configure the Tool

Connect to the AI Model Service in Alteryx One

If you aren’t using the LLM Override tool to provide model connection settings to the Prompt tool, you must set up a connection to the AI Model Service in Alteryx One.

For first-time setup of the Prompt tool, you must add your Alteryx One workspace as a Data Source…

LLM Provider and Model Selection and Configuration

  1. Use the LLM Provider dropdown to select the provider you want to use in your workflow. If you connected the LLM Override tool, this option is disabled.

  2. Next, use the Select Model dropdown to select an available model from your LLM Provider. If you connected the LLM Override tool and selected a specific model, this option is disabled.

  3. Use the Model Configuration section to configure the model’s parameters:

    • Temperature: Controls the randomness of the model's output as a number between 0 and 2. The default value is 1.

      • Lower values provide more reliable and consistent responses.

      • Higher values provide more creative and random responses, but can also become illogical.

    • TopP: Controls which output tokens the model samples from and ranges from 0 to 1. The model considers tokens from most to least probable and keeps them until the sum of their probabilities reaches the TopP value. For example, if the TopP value is 0.8 and you have 3 tokens with probabilities of 0.5, 0.3, and 0.2, the model only samples from the tokens with 0.5 and 0.3 probability (sum of 0.8). Lower values result in more consistent responses, while higher values result in more random responses. For a conceptual illustration, see the sketch after this list.

    • Max Output Tokens: The maximum number of tokens that the LLM can include in a response. Tokens are the basic input and output units of an LLM: chunks of text that can be words, character sets, or combinations of words and punctuation. Each token is roughly 3/4 of a word. Refer to your LLM provider and specific model for the maximum number of output tokens available.
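
How Temperature and TopP shape sampling can be easier to see in code. The Python sketch below is a conceptual illustration only: the toy token distribution, function names, and sampling logic are assumptions made for this example and are not how the Prompt tool or any specific LLM provider implements sampling. It shows a temperature adjustment on a small distribution and a TopP (nucleus) cutoff of 0.8 that matches the worked example above.

    import math
    import random

    # Toy next-token distribution (token -> probability), as in the TopP example above.
    probs = {"cat": 0.5, "dog": 0.3, "fox": 0.2}

    def apply_temperature(probs, temperature):
        """Rescale probabilities; temperatures below 1 sharpen the distribution, above 1 flatten it."""
        weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
        total = sum(weights.values())
        return {t: w / total for t, w in weights.items()}

    def top_p_filter(probs, top_p):
        """Keep the most probable tokens until their cumulative probability reaches top_p."""
        kept, cumulative = {}, 0.0
        for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
            kept[token] = p
            cumulative += p
            if cumulative >= top_p:
                break
        total = sum(kept.values())
        return {t: p / total for t, p in kept.items()}

    sharper = apply_temperature(probs, temperature=0.5)  # temperature below 1 shifts probability toward "cat"
    nucleus = top_p_filter(probs, top_p=0.8)             # keeps "cat" and "dog" (0.5 + 0.3 = 0.8)
    token = random.choices(list(nucleus), weights=list(nucleus.values()))[0]
    print(sharper, nucleus, token)

With a TopP of 0.8, the nucleus keeps only "cat" and "dog" and renormalizes them to 0.625 and 0.375, which is exactly the 3-token example in the TopP description above.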

Prompt and Response Configurations

Use the Prompt and Response Configurations section to compose your prompt and configure the data columns associated with the prompt and response. 

  1. Enter the name for the Response Column. This column contains the LLM response to your prompt.

  2. Enter the name for the Prompt Column that contains your prompt data and makes it available to downstream tools. You have 2 options:

    • Choose Select Existing and select a column from an input data stream (D input anchor).

    • Choose Create New to enter a new column name.

  3. Enter your prompt in the text field. To reference an input data column (see the sketch after these steps), you can either…

    • Enter an opening bracket ([) in the text field to bring up the column selection menu. You can also type the column name within brackets ([]).

    • Select a column from the Prompt dropdown.

  4. Run the workflow.
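
Conceptually, the tool builds one prompt per row by replacing bracketed column references with that row's values, then stores the result in the Prompt Column and the model's reply in the Response Column. The Python sketch below illustrates only that per-row substitution; the "Review" column, the rows, and the template text are hypothetical, and this is not the Prompt tool's actual implementation.

    import re

    # Hypothetical input rows from the D anchor; the column name "Review" is an assumption.
    rows = [
        {"Review": "Great product, fast shipping."},
        {"Review": "Arrived broken and support never replied."},
    ]

    # A prompt that references an input column in brackets, as in step 3 above.
    prompt_template = "Classify the sentiment of this review as Positive or Negative: [Review]"

    def build_prompt(template, row):
        """Replace each [Column] reference with that row's value for the column."""
        return re.sub(r"\[([^\]]+)\]", lambda m: str(row.get(m.group(1), "")), template)

    for row in rows:
        prompt = build_prompt(prompt_template, row)
        # The Prompt tool would send this prompt to the selected model and write the reply
        # to the Response Column; here we only show the per-row Prompt Column value.
        print(prompt)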

Prompt Builder

Use Prompt Builder to quickly test different prompts, models, and model configurations, then compare the results in your prompt history. Select Prompt Builder to open the Prompt Builder window.

Configuration Tab

Use the Configuration tab to enter your prompts and update model configurations:

  1. Use the Model dropdown to select an available model from your LLM Provider.

  2. Use the Model Configuration section to configure the model’s parameters. Refer to the previous section for parameter descriptions.

  3. Enter or update your prompt in the Prompt Template text field.

  4. Enter the Number of Records to Test. If you have a large dataset, use this setting to limit the number of records tested with your prompt. 

  5. Select Run Test Prompt to view the Sample Responses for each row.

  6. If you like the responses of the new prompt, select Save Prompt to Canvas to update the Prompt tool with the new prompt and model configuration.

Prompt History Tab

Use the Prompt History tab to view your past prompts, model parameters, and a sample response.

For each past prompt, you can…

  • Save Prompt to Canvas: Update the Prompt tool with the selected prompt and associated model parameters.

  • Edit & Rerun Prompt: Return to the Configuration tab with the selected prompt and associated model parameters.

Warning

Make sure to save your prompt before you leave the Prompt Builder window. You will lose your prompt history when you select Cancel.

Output

The tool outputs two string data columns.

  • Prompt Column: Contains your prompt.

  • Response Column: Contains the response from your LLM Provider and Model.