Build complex workflows, measure output quality, keep your data secure and iterate faster with our model orchestration and observability platform.
Caden's visual builder cuts compute and dev costs by 50%. Swiftly switch models and vendors, and estimate costs upfront.
Leverage best practices through Caden's model optimization, prompts, and pre-built Chains to achieve your desired outcome.
Ship your feature up to 10x faster, and easily iterate on your build without going through a dev cycle.
Caden abstracts and simplifies the tasks that typically slow down or stall development, while helping you improve and maintain quality.
Build your own workflow from scratch or use our pre-built “chains” for common Gen AI tasks like summarization, recommendation, and writing.
Our model cost and output summary provides you with the data-driven intelligence needed to optimize your AI models, ensuring you achieve the best quality results with minimal resources.
Caden AI's integration feature empowers you to continuously adapt your AI solutions, seamlessly updating your AI features with the latest models and improved prompts.
We want to use AI, but it's new and developing so fast. Beyond the obvious like ChatGPT, it's hard to know what we can implement now with immediate or near term ROI. As soon as we heard what Caden had to offer, we knew we could finally capitalize on GenAI.
LLMs are spreading to more and more consumers, but many LLMs are not yet suited for B2B functions. Caden is breaking this barrier for business by not only estimating costs, but serving both PMs and developers. It will be a total game changer.
Learn how to leverage the power of Caden in your AI features
Every so often a new technology emerges that allows people to do things they couldn’t do before. The discovery of penicillin led to antibiotics, a new way man could fight invisible bacterial attackers. The jet engine yielded a 10,000x improvement on foot travel, meaning people and things could move further and faster than ever before. Transistors, tires, lasers, levers and light bulbs have all provided such a benefit. However, it took years, in some cases decades, for each technology listed to have the impact we know today.
That’s because at the time of their discovery or first use, the available tools, complementary tech and requirements needed to harness the full value of the new technology were limited. Think of the high-octane fuels and high-strength alloy materials required for a fighter jet, not just the jet engine itself. As demand for a new technology begins to explode, there also exists a need to standardize the production processes so that a broader audience can benefit from it. Think of the petroleum industry in its early days. The vast majority of applications like automobiles could not be realized until companies like Standard Oil emerged to make it economically feasible. At the time, gasoline was not readily available. There were few oil wells and refineries, so a constant supply of refined fuel was not available to just anybody.
By bringing together the fragmented landscape of supplies and technologies to make gasoline readily available, people could simply go to the pump, not worrying about where the gas came from or whether it would flow, and get what they needed.
Trying to get value from AI today is like trying to get gasoline before Standard Oil. The fragmented tools and standards available to businesses and consumers are vastly limiting the way in which AI can benefit people. Getting value from AI is not as simple as turning on the tap.
The trillion-dollar question is: how do we unleash the full potential of AI? What new tools, standards and infrastructure are required to make this possible?
At Caden, we translated years of experience to bring together everything anyone needs to build with AI. We wanted to create a new type of platform focused on helping people build generative AI applications, without having to worry about the underlying complexities or changes in the tech stack. My co-founder and I have worked on everything from semantic search and large-scale knowledge graphs to computer vision AI for self-driving cars, smartphones – even coffee makers that can see. We’ve observed firsthand and fundamentally believe in the transformative advantage successful AI/ML adoption can bring. We’ve also seen how hard it is to go from a prototype to production for AI.
There has never been a more uneven playing field for AI than there is today.
Organizations that can access and harness tens of thousands of supercomputers, internet-scale datasets, and the brilliant minds required to train today’s foundation models have unlocked tremendous potential for consumers and enterprises. The beauty of interfaces like ChatGPT, Bard, and Claude is that anyone can quickly see just how capable (and error-prone) the latest large language models are. But the reality is that just a handful of companies have a real edge thanks to generative AI.
As an example, take the development of early PC and desktop computers. For a time, if you couldn’t work with the computer through a command line interface, building digital applications into your life was a non-starter. Early GUIs and browsers like Mosaic truly changed the playing field between the haves and have-nots of digitally driven, and eventually software-driven, business. The same goes for database software. The emergence of companies like AWS, Databricks and Snowflake changed the game for who could and couldn’t build a data-driven business. Database software is low-level, like an LLM itself; cloud hosting and tooling provide new capabilities that democratize the use of the underlying technology.
Now is the era in which organizations that become AI-driven could potentially eat the world. How we get there, in large part, is dependent on the tools we have at hand. That’s why we built Caden – to allow everyone, not just the large companies, to harness the power of AI.
Caden’s vision is to make AI easy. We spent months speaking with hundreds of companies that are building with large language models, or thinking about how adopting them will make or break their industry. Making it easier to use gen AI models will allow individuals and organizations to build better intelligence into their products. By iterating faster and enabling non-experts to build, deploy and monitor with gen AI, we believe companies will achieve better quality, better cost-effectiveness and, ultimately, better long-run adoption of the latest AI technologies.
There are 3 tenets we focus on:
1) The AI tech stack is always changing
The Lindy Effect, popularized by Nassim Taleb’s book, describes the essence of this point in a probabilistic framework. The longer something has been around, the higher the probability it will stay around, and vice versa. It’s likely bicycles will outlive combustion engines, statistically. The same can be said for LLMs versus smaller ML models, as people have been using linear regression longer than GPT-4. There is also a particularly high probability today’s neural networks will be replaced based on how quickly the current ones replaced the old!
Abstracting the dependence on cloud and model vendors is key. Even more important is the ability to leverage open source models and on-premise deployments with Gen AI. As the cost of compute comes down and the availability of various AI chips increases, the ability to orchestrate and adapt the AI models used in your applications will be essential.
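One way to picture this kind of vendor abstraction is to hide each provider behind a common interface, so the model powering a feature can be swapped without touching application code. This is a minimal sketch; the class names and stand-in responses are purely illustrative, not Caden's API.

```python
# Vendor-agnostic model interface: application code depends only on
# ModelProvider, never on a specific vendor or deployment target.

class ModelProvider:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError

class HostedLLM(ModelProvider):
    """Stand-in for a commercial, API-hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[hosted completion for: {prompt}]"

class OnPremLLM(ModelProvider):
    """Stand-in for an open source model served on-premise."""
    def complete(self, prompt: str) -> str:
        return f"[on-prem completion for: {prompt}]"

def summarize(text: str, provider: ModelProvider) -> str:
    # Swapping vendors is a one-argument change, not a rewrite.
    return provider.complete(f"Summarize: {text}")

hosted_result = summarize("Q3 revenue grew 12%.", HostedLLM())
onprem_result = summarize("Q3 revenue grew 12%.", OnPremLLM())
print(hosted_result)
print(onprem_result)
```

Because the orchestration layer owns the interface, moving a feature from a hosted API to an on-premise deployment becomes a configuration change rather than a dev cycle.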
2) Training Data is no longer the only moat
In the good old days of 2017 or 2018, it seemed like having a well-labeled, curated dataset for a good problem was a sufficient moat for an AI company. It meant you could train a model to good-enough accuracy to solve a customer problem, and ideally keep collecting data on that problem so your product stayed better than others and your models kept improving.
Fast forward to a world where some AI companies have trained models bigger than anyone else can train, on pretty much all the data. A company with a highly specialized computer vision model for a niche medical device application might seem safe, but these same foundation models are also sequencing proteins better than specialized models.
Even more interesting, foundation models are incredible at few-shot learning, meaning you often don’t need many examples to get an “off-the-shelf” model pretty good at your problem in just a few tries. Despite the hype around RLHF and fine-tuning, the reality is that the same AI bottleneck exists: you need human experts to label data. And unlike with a small computer vision or ML model, updating model weights is non-trivial and not cheap.
We believe where the rubber meets the road is the powerful combination of generic foundation models (open source or commercial) and private knowledge sources. These solutions let a human easily share unstructured or unlabeled data with a model that, without that specific piece of data or knowledge, would be useless for the task. There are many examples of this, with the latest crop of solutions centered around Retrieval Augmented Generation (RAG). Companies need better ways to work with these private data and knowledge sources, with an interface that lets them securely integrate, coordinate, and measure model performance over time.
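The RAG pattern itself is simple to sketch: retrieve the most relevant private documents for a query, then prepend them to the prompt as context. The scoring below is naive word overlap purely for illustration; a real system would use embeddings and a vector store, and all names here are hypothetical.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch: rank private
# documents against the query, stuff the top results into the prompt.

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, documents):
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests must include the original order number.",
]
rag_prompt = build_rag_prompt("How long do refunds take to process?", docs)
print(rag_prompt)
```

The model only ever sees the retrieved snippets, which is what makes the private knowledge source the differentiator rather than the model weights.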
3) Quality and cost are paramount
For many AI tasks, a model's quality on your task could be measured objectively and quantitatively. For example, a person-presence classifier (is there a person in the image or not) could be judged on accuracy using an eval or test dataset: how many images were false positives (predicting a person when there was none), and how many were false negatives (missing a person that was there)? Then you could use the inference cost (throughput needed per dollar of compute) to pick the right hardware for your device.
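That classical evaluation loop fits in a few lines. Here is a small sketch of counting false positives and false negatives for a person-presence classifier against a labeled eval set; the data is made up for illustration.

```python
# Quantitative eval for a binary classifier: True means "person present".

def evaluate(predictions, labels):
    """Count error types and overall accuracy on an eval set."""
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return {
        "false_positives": fp,
        "false_negatives": fn,
        "accuracy": correct / len(labels),
    }

preds = [True, True, False, False, True]   # model outputs
truth = [True, False, False, True, True]   # ground-truth labels
result = evaluate(preds, truth)
print(result)  # → {'false_positives': 1, 'false_negatives': 1, 'accuracy': 0.6}
```

For generative models there is no equally crisp analogue of this dictionary: output quality is open-ended, which is the contrast drawn below.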
The reality for generative AI is very different. Models can produce erroneous outputs, while still incurring high costs for inference. That’s why we built Caden.
Caden AI is enabling anyone to build AI-first products. Our platform abstracts the complexity of integrating AI. With Caden, you can estimate costs, measure quality, deploy and monitor Gen AI in minutes.
Companies and individuals use Caden to iterate faster and monitor performance in production for quality and cost effectiveness. Furthermore, the platform allows you to define your own variables, data sources and prompts. Caden makes it easy to integrate all these components in one place so people can leverage the full potential of AI.
We’d love to share what we’ve built with you, get your feedback and help you quickly build and integrate generative AI features into your products and workflows. You can do that by registering here: app.cadenai.com/register, and seeing for yourself how easy it is to build with Caden. We’ll be following up with more posts on what you can do with Caden and how to use our latest features.
We’re excited to announce Caden AI from stealth to help enterprises harness the power of Generative AI! Created by Davis Sawyer, former CPO and Co-Founder of Deeplite, and Paul Dlug, former Director of Engineering at Golden, Caden aims to make the power of Generative AI accessible to all. This pioneering venture is the first successful launch from the Forum Venture Studio, which co-builds with founders.
From sales and marketing automation to digital advertising, edtech and legal, applications abound: Generative AI has taken the world by storm because of its vast potential to disrupt the innovation economy. However, accessing and utilizing the latest tools like large language models (LLMs) can be time-consuming, costly and challenging.
With Caden AI, enterprises can unlock the full potential of Gen AI with ease and efficiency:
Build: Visualize and build quality AI products without the need for technical expertise. Engineering resources are in short supply and over-deployed, spending valuable hours building and iterating on Gen AI tools. By integrating with the latest models and vendors like GPT-4, Alpaca, and Cohere, Caden AI enables non-technical product teams and developers alike to build more efficiently, saving businesses days or weeks of effort in getting generative applications to the right quality. The Caden platform provides a new set of abilities to the AI/ML arsenal:
Test: Accurately estimate costs and evaluate output quality to ensure reliable and effective AI products. This feature helps businesses optimize expenses, reduce unnecessary API calls, and maintain high-quality results.
Run and Optimize: Quickly deploy AI models and generate code to eliminate the need for lengthy development cycles. Caden’s integration environment empowers enterprises to iterate rapidly, make necessary adjustments, and observe how LLMs perform in production.
Having spent years on the frontlines of deep learning innovation, we believe that calling this new era of Generative AI applications a massive opportunity is an understatement. However, building reliable, production-grade apps and iterating efficiently is not accessible to many businesses that can really benefit from this technology. Caden looks to level the playing field by enabling organizations of any size to make generative AI features a core part of their software products. The investment and partnership from Forum Ventures has helped us get the platform into the hands of initial users in sales and marketing SaaS to start creating new AI features.
Caden AI benefits from the expertise and support of Forum Venture Studio, a team of ten dedicated professionals providing assistance in validation, product design, development, growth, hiring, and fundraising. Additionally, the studio takes the initiative to invest the first $250,000 USD into the venture.
We asked Jonah Midanik, General Partner and COO, why he decided to back Caden and what makes the company special: “The founding team has over 20 years of experience and are leading experts in building cutting edge AI systems. Generative AI is a dynamic space, and the team’s previous experience as a venture backed founder and long-time executives building deep tech AI products has allowed them to see the field, and understand what companies and consumers need as they navigate the platform shift to AI. We look forward to seeing the Caden team grow and bring their innovative platform to market.”
We can’t wait for you to try the cutting-edge capabilities of Caden AI firsthand. We will be inviting enterprises to participate in the beta testing phase. Interested parties can book a meeting with the team here.
About Caden AI:
Caden AI is a pioneering development platform that empowers enterprises to leverage the power of Generative AI. Create complex workflows, estimate costs, and measure the quality of outputs for rapid iteration and full LLM lifecycle management. Caden AI makes AI technology accessible to all.
In this video we introduce Caden AI. We show you our new development platform that allows you to easily integrate large language models into your product in just minutes. The platform is based on our founders' years of experience in AI development. With the Caden platform, you'll have access to the latest and greatest models from multiple vendors, enabling you to experiment, estimate costs, and measure quality before going into production. We built Caden to make Generative AI more accessible. The platform will significantly reduce the time spent on prompt engineering and allow you to iterate faster to ensure the quality is right before releasing anything. Lastly, Caden's seamless integration means you can build it into any app within minutes, whether that's releasing it as an API or using code snippets to help your team build faster on top of the LLMs!