You’ve created and tested your prompt, and now you’re ready to deploy it to your production environment.
1
Deploy
Click the “Deploy” button in the prompt editor to deploy your prompt to the production environment.
2
Create API Key
Navigate to the “API Keys” section in your organization settings and create a new API key. Be sure to save it as a secure environment variable or in a secret manager before closing the tab, as you will not have access to it again.
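As a minimal sketch of that step, you can read and validate the key at application startup. This assumes you stored it under the environment variable name PROMPT_FOUNDRY_API_KEY (the same name used in the integration step below); substitute whatever name your environment or secret manager uses.

// Minimal sketch: read the API key from the environment and fail fast if it is missing.
// Assumes the key was saved as PROMPT_FOUNDRY_API_KEY (an assumption; adjust to your setup).
const apiKey = process.env['PROMPT_FOUNDRY_API_KEY'];

if (!apiKey) {
  throw new Error('PROMPT_FOUNDRY_API_KEY is not set. Add it to your environment or secret manager.');
}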
3
Install SDK
Now, install the PromptFoundry SDK in your application.
npm install @prompt-foundry/typescript-sdk
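If your project uses yarn or pnpm instead of npm, the equivalent install commands are:

yarn add @prompt-foundry/typescript-sdk
pnpm add @prompt-foundry/typescript-sdk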
4
Integrate into your provider call
Integrate the SDK into your provider call to start using your deployed prompt.
The completion endpoint initiates a request to the configured LLM provider using the specified parameters and your provided variables. It abstracts the integration with different model providers, enabling seamless switching between models while maintaining a consistent data model for your application.
import PromptFoundry from '@prompt-foundry/typescript-sdk';

// Initialize the Prompt Foundry SDK with your API key
const promptFoundry = new PromptFoundry({
  apiKey: process.env['PROMPT_FOUNDRY_API_KEY'],
});

async function main() {
  // Create a completion using your deployed prompt
  const completionCreateResponse = await promptFoundry.completion.create('637ae1aa8f4aa6fad144ccbd', {
    // Optionally append additional messages to the conversation thread on top of your configured prompt messages
    appendMessages: [
      { role: 'user', content: [{ type: 'TEXT', text: 'What is the weather in Seattle, WA?' }] },
    ],
    // Supports prompt template variables
    variables: {},
  });

  // Completion response
  console.log(completionCreateResponse.message);
}

main().catch(console.error);
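If your deployed prompt defines template variables, pass their values through the variables field shown above. The sketch below is illustrative only; the city variable is hypothetical, so use the variable names configured in your own prompt.

// Hypothetical example: fill a `city` template variable defined on the deployed prompt.
const response = await promptFoundry.completion.create('637ae1aa8f4aa6fad144ccbd', {
  variables: { city: 'Seattle, WA' },
});

console.log(response.message);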
For more information on each SDK, visit the “Libraries” section of the documentation.