Which LLM providers does Prompt Foundry support?

Prompt Foundry supports a variety of leading LLM providers, including OpenAI and Anthropic. The platform is designed to be flexible, allowing for easy integration with other providers as well. If your preferred provider isn’t listed, reach out to us to discuss potential support.

Can non-technical team members use Prompt Foundry?

Yes. Once the SDK is integrated into your application, technical and non-technical team members alike can collaborate on prompt development without writing or deploying any additional code.
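As a rough illustration, the sketch below shows what that integration might look like in TypeScript. The package name, client shape, and `getParameters` call are assumptions for the example, not a verbatim API reference; check the SDK documentation for the exact calls.

```ts
// Minimal sketch, not a verbatim API reference: the client shape and the
// getParameters call are assumptions for illustration.
import PromptFoundry from '@prompt-foundry/typescript-sdk';

const client = new PromptFoundry({
  apiKey: process.env['PROMPT_FOUNDRY_API_KEY'],
});

// Because the prompt is fetched by ID at request time, a teammate can edit
// its text, variables, or model in the dashboard and the change takes effect
// on the next call, with no redeploy.
const parameters = await client.prompts.getParameters('my-prompt-id');
```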

What evaluation assertion methods are available in Prompt Foundry?

Prompt Foundry offers a range of evaluation assertion methods; see the assertions section of the evaluations documentation for the full list.

Does Prompt Foundry support multi-modal models?

Yes. Prompt Foundry supports multi-modal models, so you can create, test, and evaluate prompts that combine text and image inputs, making your AI applications more versatile.
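As a concrete illustration, a multi-modal prompt mixes text and image parts within a single message. The sketch below uses the OpenAI-style content-part shape as an assumed format; Prompt Foundry's own representation may differ.

```ts
// Illustrative only: an OpenAI-style chat message mixing text and an image.
// Treating this as the exact shape Prompt Foundry uses is an assumption.
const message = {
  role: 'user',
  content: [
    { type: 'text', text: 'Describe any defects visible in this product photo.' },
    { type: 'image_url', image_url: { url: 'https://example.com/product.jpg' } },
  ],
};
```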

Do you support private cloud or on-premises deployments?

Yes, Prompt Foundry can be deployed in private cloud environments or on-premises to meet your organization’s specific security and compliance requirements. Contact us for more details on custom deployment options.

I don’t see my model provider listed. Can I still use Prompt Foundry?

Even if your model provider isn’t listed, it is likely on our roadmap. Feel free to contact us, and we will prioritize adding it.

See the Supported Models page for a list of the models we currently support.

Can I use Prompt Foundry with my own models?

Absolutely. Prompt Foundry is designed to integrate with your custom models, allowing you to manage, test, and optimize your prompts regardless of the model or infrastructure you’re using.

Contact us to discuss how we can support your specific model requirements.

Why should I trust Prompt Foundry with my proprietary LLM configurations?

Prompt Foundry implements industry-standard security measures, including encryption, access controls, and audit logging, to protect your proprietary LLM configurations. We prioritize the confidentiality and integrity of your data to ensure it’s safe from unauthorized access.

Shouldn’t my LLM configurations live in my code?

While keeping your LLM configurations in code is an option, Prompt Foundry lets you decouple prompt management from your codebase. This brings faster iteration cycles, easier collaboration, more flexible version control, and seamless switching between models, all without deploying new code every time you update a prompt.
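To make the decoupling concrete, here is a minimal sketch in which neither the prompt text nor the model name appears in application code; both travel with the managed configuration. The `parameters.model` and `parameters.messages` fields and the `callProvider` helper are hypothetical names for illustration only.

```ts
import PromptFoundry from '@prompt-foundry/typescript-sdk';

const client = new PromptFoundry({
  apiKey: process.env['PROMPT_FOUNDRY_API_KEY'],
});

// Assumed shape: the returned parameters carry the rendered messages and
// the model chosen in the dashboard.
const parameters = await client.prompts.getParameters('support-triage');

// `callProvider` is a hypothetical stand-in for whichever provider SDK you
// use (OpenAI, Anthropic, ...). Since the model name travels with the prompt
// configuration, switching models in the dashboard needs no code change here.
declare function callProvider(model: string, messages: unknown[]): Promise<string>;

const reply = await callProvider(parameters.model, parameters.messages);
```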

Read more about the benefits of using Prompt Foundry.