You’ve created and tested your prompt, and now you’re ready to deploy it to your production environment.
1
Deploy
Click the “Deploy” button in the prompt editor to deploy your prompt to the production environment.
2
Create API Key
Navigate to the “API Keys” section in your organization settings and create a new API key. Be sure to save it in a secure environment variable or a secret manager before closing the tab, as you will not be able to view it again.
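As a minimal sketch, you can export the key as an environment variable so the SDK can read it at runtime; the key value below is a placeholder, not a real key:

```shell
# Export the key so your application can read it from the environment.
# Replace the placeholder value with the key copied from the dashboard.
export PROMPT_FOUNDRY_API_KEY="pf_live_xxxxxxxx"

# Confirm the variable is set before starting your application.
echo "${PROMPT_FOUNDRY_API_KEY:?PROMPT_FOUNDRY_API_KEY is not set}"
```

In production, prefer injecting the key through your platform’s secret manager rather than a plain shell export committed to a script.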
3
Install SDK
Now, install the Prompt Foundry SDK in your application.
npm install @prompt-foundry/typescript-sdk
4
Integrate into your provider call
Integrate the SDK into your provider call to start using your deployed prompt.
Option 1 - Completion Proxy
Option 2 - Direct Provider Integration
Initiates a completion request to the configured LLM provider using the prompt’s configured parameters and the variables you provide.
This endpoint abstracts the integration with different model providers, enabling seamless switching between models while maintaining a consistent data model for your application.
```typescript
import PromptFoundry from '@prompt-foundry/typescript-sdk';

// Initialize Prompt Foundry SDK with your API key
const promptFoundry = new PromptFoundry({
  apiKey: process.env['PROMPT_FOUNDRY_API_KEY'],
});

async function main() {
  // Create a completion request for the deployed prompt
  const completionCreateResponse = await promptFoundry.completion.create('637ae1aa8f4aa6fad144ccbd', {
    // Optionally append additional messages to the conversation thread on top of your configured prompt messages
    appendMessages: [
      { role: 'user', content: [{ type: 'TEXT', text: 'What is the weather in Seattle, WA?' }] },
    ],
    // Supports prompt template variables
    variables: {},
  });

  // Completion response
  console.log(completionCreateResponse.message);
}

main().catch(console.error);
```
Fetches the configured model parameters and messages, rendered with the provided variables and mapped to the configured LLM provider’s request format.
This endpoint handles the mapping between different providers for you, while still allowing you to call the provider directly.
```typescript
import PromptFoundry from "@prompt-foundry/typescript-sdk";
import OpenAI from "openai";

// Initialize Prompt Foundry SDK with your API key
const promptFoundry = new PromptFoundry({
  apiKey: process.env["PROMPT_FOUNDRY_API_KEY"],
});

// Initialize OpenAI SDK with your API key
const openai = new OpenAI({
  apiKey: process.env["OPENAI_API_KEY"],
});

async function main() {
  // Retrieve model parameters for the prompt
  const modelParameters = await promptFoundry.prompts.getParameters("66abc31c93546b6b73414840", {
    variables: { hello: "world" },
  });

  // Check if the provider is OpenAI
  if (modelParameters.provider === "openai") {
    // Use the retrieved parameters to create a chat completion request
    const modelResponse = await openai.chat.completions.create(modelParameters.parameters);

    // Print the response from OpenAI
    console.log(modelResponse);
  }
}

main().catch(console.error);
```
For more information on each SDK, visit the “Libraries” section of the documentation.