Deliver LLM apps faster with Vellum AI development platform
Read Time 3 mins | Written by: Cole
Many of our partners need to build AI apps based on large language models (LLMs). There are plenty of capable LLMs available – GPT-4, Llama 2, Falcon 180B – but they keep changing, and new ones are released every week. After OpenAI almost imploded, it became very clear we needed a development platform for AI apps that lets us switch between LLMs.
We also wanted a development platform that would let us onboard software engineers faster and build production-ready AI apps at scale. We decided to use Vellum and have built a growing list of AI apps on their platform.
Vellum helped us build everything from a custom chatbot for a psychedelic-assisted therapy company to an internal tool for speeding up the government request-for-proposal (RFP) process.
Here’s an overview of Vellum, along with their case study on how Codingscape uses Vellum to speed up time-to-market for AI product delivery.
What is Vellum?
Vellum is a development platform for building LLM apps with tools for prompt engineering, semantic search, version control, testing, and monitoring.
Want to build a prototype on GPT-4 and test it against Llama 2? Software engineers can use Vellum to test the models against each other, design the workflows you need, and rapidly build a production-ready backend for your LLM app.
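That head-to-head comparison is exactly the chore a platform like Vellum takes off your plate. For a sense of what gets abstracted away, here's a minimal sketch of comparing the same prompt across two providers by hand – assuming the `openai` Python SDK (v1+) and a Llama 2 host that exposes an OpenAI-compatible endpoint. The prompt, base URL, and API key are placeholders, and this is not Vellum's API.

```python
# Minimal side-by-side prompt comparison across two providers.
# Assumes the `openai` Python SDK (v1+) and a Llama 2 host with an
# OpenAI-compatible endpoint; LLAMA_BASE_URL and the key are placeholders.
from openai import OpenAI

PROMPT = "Summarize this RFP section in three bullet points: ..."

providers = {
    "gpt-4": OpenAI(),  # reads OPENAI_API_KEY from the environment
    "llama-2-70b-chat": OpenAI(
        base_url="https://LLAMA_BASE_URL/v1",  # hypothetical hosted endpoint
        api_key="YOUR_LLAMA_KEY",
    ),
}

for model, client in providers.items():
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.2,
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```

Multiply that loop by every prompt variant, test case, and provider you want to track, and the spreadsheet sprawl becomes obvious – which is the problem Vellum's tooling is built around.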
Development best practices with Vellum
Building an LLM app from scratch requires a complex backend – and it doesn't make sense to build a new one for every app. Vellum gives us an easy way to build repeatable AI processes so that we don't have to start at square one every time.
On top of that, it covers everything we need to deliver production-ready apps to our partners with full confidence.
- Rapid experimentation: No more juggling browser tabs and tracking results in spreadsheets.
- Regression testing: Test changes to your prompts and models against a bank of test cases and recently made requests before they go into production (see the sketch after this list).
- Your data as context: Dynamically include company-specific context in your prompts without managing your own semantic search infra.
- Version control: Track what's worked and what hasn't. Upgrade to new prompts/models or revert when needed – no code changes required.
- Observability & monitoring: See exactly what you're sending to models and what they're giving back. View metrics like quality, latency, and cost over time.
- Provider agnostic: Use the best provider and model for the job, swap when needed. Avoid tightly coupling your business to just one LLM provider.
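To make the regression-testing idea concrete, here's a minimal sketch of checking a new prompt against a small bank of test cases before promoting it. It again assumes the `openai` SDK; `run_prompt`, the test cases, and the `must_contain` assertions are hypothetical illustrations of the pattern, not Vellum's actual test runner.

```python
# Sketch of regression-testing a prompt change against a test-case bank.
# Not Vellum's API: run_prompt and the assertions are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_prompt(prompt_template: str, rfp_text: str) -> str:
    """Fill the template and call the model once."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt_template.format(rfp=rfp_text)}],
        temperature=0,  # keep output as repeatable as possible for testing
    )
    return resp.choices[0].message.content

# Bank of test cases: an input plus a cheap assertion on the output.
TEST_CASES = [
    {"rfp": "Deadline: Jan 5. Budget: $2M. ...", "must_contain": "$2M"},
    {"rfp": "Agency seeks a mobile app for permit filings ...", "must_contain": "mobile"},
]

NEW_PROMPT = "Extract the key requirements from this RFP:\n{rfp}"

failures = [
    case for case in TEST_CASES
    if case["must_contain"].lower() not in run_prompt(NEW_PROMPT, case["rfp"]).lower()
]
print(f"{len(TEST_CASES) - len(failures)}/{len(TEST_CASES)} cases passed")
```

A platform handles the parts this sketch glosses over: versioning the prompt, storing the case bank, replaying recent production requests, and surfacing pass/fail history over time.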
Codingscape use cases for Vellum
The main AI apps you'll see built on LLMs are chatbots. Medical chatbots, shopping assistant chatbots, and chatbots that let employees access internal company resources are common requests. We've used Vellum to quickly prototype and build some of these custom chatbots, along with a few tools that speed up internal processes unique to our business.
- Resume parser chatbot that stores employee resumes and matches senior software engineers to new projects based on their skills.
- Healthcare chatbot for women that references PubMed articles to answer questions specific to women's health.
- Medical assistance chatbot that helps you find ketamine clinics near you and psychedelic therapies around the globe.
- TeleTexter Google Chrome extension that summarizes all your open browser tabs into newsletter-ready text with links.
- Gov RFP tool that quickly analyzes and summarizes project requirements for gov contracts and helps us generate proposals.
Vellum just released a case study about the ways Codingscape uses their AI developer platform. Give it a read to learn more about Vellum and how it speeds up the development of production-ready LLM apps.
Cole is Codingscape's Content Marketing Strategist & Copywriter.