You’ve created and tested your prompt, and now you’re ready to deploy it to your production environment.
1. Deploy
Click the “Deploy” button in the prompt editor to deploy your prompt to the production environment.
2. Create API Key
Navigate to the “API Keys” section in your organization settings and create a new API key. Be sure to save it as a secure environment variable or in a secret manager before closing the tab, as you will not have access to it again.
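For local development, a common pattern is to keep the key in a `.env` file and load it into the environment before initializing the SDK. A minimal sketch, assuming the `dotenv` package; the file layout and fail-fast check are illustrative, not part of the Prompt Foundry SDK:

```typescript
// Minimal sketch, assuming the `dotenv` package is installed (npm install dotenv)
import 'dotenv/config'; // loads variables from a local .env file into process.env

// .env (keep this file out of version control):
// PROMPT_FOUNDRY_API_KEY=<your key>

// Fail fast if the key is missing rather than erroring later inside a request
if (!process.env['PROMPT_FOUNDRY_API_KEY']) {
  throw new Error('PROMPT_FOUNDRY_API_KEY is not set');
}
```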
3. Install SDK
Now, install the Prompt Foundry SDK in your application.

```bash
npm install @prompt-foundry/typescript-sdk
```
4. Integrate into your provider call
Integrate the SDK into your provider call to start using your deployed prompt. You can either have Prompt Foundry run the completion for you, or fetch the rendered model parameters and call the provider directly; both options are shown below.
Initiates a completion request to the configured LLM provider using specified parameters and provided variables.
This endpoint abstracts the integration with different model providers, enabling seamless switching between models while maintaining a consistent data model for your application.
```typescript
import PromptFoundry from '@prompt-foundry/typescript-sdk';

// Initialize Prompt Foundry SDK with your API key
const promptFoundry = new PromptFoundry({
  apiKey: process.env['PROMPT_FOUNDRY_API_KEY'],
});

async function main() {
  // Create a completion for the deployed prompt
  const completionCreateResponse = await promptFoundry.completion.create('637ae1aa8f4aa6fad144ccbd', {
    // Optionally append additional messages to the conversation thread on top of your configured prompt messages
    appendMessages: [
      {
        role: 'user',
        content: [{ type: 'TEXT', text: 'What is the weather in Seattle, WA?' }],
      },
    ],
    // Supports prompt template variables
    variables: {},
  });

  // Completion response
  console.log(completionCreateResponse.message);
}

main().catch(console.error);
```
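If your configured prompt uses template variables, pass their values through `variables`. A minimal sketch, reusing the `promptFoundry` client from the snippet above and assuming a hypothetical prompt message such as “What is the weather in {{city}}?” defined in the prompt editor; the `city` variable name is illustrative, not part of the SDK:

```typescript
// `city` is a hypothetical template variable defined in the prompt editor,
// e.g. a configured user message of "What is the weather in {{city}}?"
// (top-level await assumes an ES module context)
const response = await promptFoundry.completion.create('637ae1aa8f4aa6fad144ccbd', {
  // Variables are rendered server-side into the configured prompt messages
  variables: { city: 'Seattle, WA' },
});

console.log(response.message);
```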
Alternatively, fetch the configured model parameters and messages, rendered with the provided variables and mapped to the configured LLM provider’s format.
This endpoint abstracts away the mapping between different providers while still allowing you to call the provider directly.
import PromptFoundry from"@prompt-foundry/typescript-sdk";import{ Configuration, OpenAIApi }from"openai";// Initialize Prompt Foundry SDK with your API keyconst promptFoundry =newPromptFoundry({ apiKey: process.env["PROMPT_FOUNDRY_API_KEY"],});// Initialize OpenAI SDK with your API keyconst configuration =newConfiguration({ apiKey: process.env["OPENAI_API_KEY"],});const openai =newOpenAIApi(configuration);asyncfunctionmain(){// Retrieve model parameters for the promptconst modelParameters =await promptFoundry.prompts.getParameters("66abc31c93546b6b73414840",{ variables:{ hello:"world"},});// check if provider is Open AIif(modelParameters.provider ==="openai"){// Use the retrieved parameters to create a chat completion requestconst modelResponse =await openai.chat.completions.create( modelParameters.parameters);// Print the response from OpenAIconsole.log(modelResponse.data);}}
For more information on each SDK, visit the “Libraries” section of the documentation.