Wesley de Nooijer

The case for building AI without prompts

When OpenAI put Large Language Models (LLMs) into the hands of the public with the launch of ChatGPT, it kickstarted the current AI boom. Ever since, many people have pointed to prompt engineering as a key skill for the future.


After all, LLMs are the tools of the future and prompts are how you use these tools. Right?


Well, not quite. While it's clear that LLMs will play a major role in our products and societies over the coming years and decades, prompt engineering may not be a key factor. In fact, at pretrain.com we think most LLM applications will be built without users doing any prompt engineering at all.


We expect the future to look much more like the following:
First, you tell an AI assistant what you need. The AI then does the prompt engineering itself. It generates a few examples and you provide feedback. Then, the AI uses the feedback to improve its prompt. After a few cycles of this, you get outputs that meet your exact needs. No more prompt engineering by you. AI takes care of that itself.
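As a toy sketch of that loop (every function name here is our own invention, and the "LLM" calls are stubbed out rather than real API calls), the cycle of drafting a prompt, showing examples, and refining from feedback could look like:

```python
def run_llm(prompt: str) -> str:
    """Toy stand-in for an LLM generating an example output from a prompt."""
    return f"example output for: {prompt!r}"

def refine(prompt: str, feedback: str) -> str:
    """Toy stand-in for the AI rewriting its own prompt from feedback."""
    return f"{prompt} Also: {feedback}."

def optimize_prompt(task: str, give_feedback, max_rounds: int = 5) -> str:
    """Refine a prompt over a few feedback cycles.

    `give_feedback` maps an example output to a feedback string,
    or None once the user is satisfied.
    """
    prompt = f"Please {task}."  # the AI drafts the first prompt itself
    for _ in range(max_rounds):
        example = run_llm(prompt)         # show the user an example
        feedback = give_feedback(example)
        if feedback is None:              # user is happy: done
            break
        prompt = refine(prompt, feedback)
    return prompt
```

The user only ever supplies the task and the feedback; the prompt itself is never written by hand.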


There are two key reasons that we think most LLM applications will be built using automatic prompt engineering.


1. New techniques now make it possible for us to do this.
2. Not having to prompt has immense benefits and opens up new possibilities.


Automatic Prompt Engineering

What if, instead of crafting prompts ourselves, AI could write its own prompts? Many people have played with this idea, but recently a big leap forward was made:


DSPy

DSPy is a Python library built around the idea of programming, rather than prompting, LLMs. DSPy automatically creates prompts and tests them against metrics of your choice, so you can keep optimizing until you have the prompt you want. While it's still early days, DSPy will increasingly remove the need for prompt engineering. By abstracting away prompt engineering, we unlock a ton of benefits and possibilities. Let's go through our top 7.
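DSPy's real API is richer than we can show here, so the snippet below is only a toy analogue in plain Python (every name in it is ours, not DSPy's). It illustrates the core idea: generate candidate prompts, score each against a metric on a small dev set, and keep the winner.

```python
def compile_prompt(task, candidates, metric, devset):
    """Pick the candidate prompt template that scores best on `metric`.

    `candidates` are templates with a `{task}` slot; `metric(prompt, example)`
    returns a score in [0, 1], averaged over `devset`.
    """
    best_prompt, best_score = None, -1.0
    for template in candidates:
        prompt = template.format(task=task)
        score = sum(metric(prompt, ex) for ex in devset) / len(devset)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score


# Candidate phrasings an optimizer might try for a math task.
candidates = [
    "{task}",
    "Think step by step. {task}",
    "You love Star Trek. {task}",
]

def toy_metric(prompt, example):
    """Stand-in metric: pretend step-by-step prompts score best."""
    return 1.0 if "step by step" in prompt else 0.5
```

You define the metric once; the optimizer does the word-tweaking for you.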



The 7 Benefits

1. Skip the learning curve

One of the most obvious benefits is that we can, of course, skip the entire prompt engineering step. This means no more endless tweaks and tests to find out which exact words we need to get ChatGPT to do what we want.


It turns out that finding the right words for a prompt can be quite unintuitive. Take, for instance, the researchers who found that telling an LLM it loves "Star Trek" led to better mathematical reasoning.


With automatic prompt engineering, founders and product teams can build AI applications without learning all the intricate details of how to use LLMs.


2. Keep everything up-to-date

The AI space moves incredibly quickly, which makes it hard to keep up. For businesses, though, keeping up is essential.


Many companies have appointed specialists simply to research what they can do with AI. These companies don't have a specific use case in mind; they just know they can't afford to miss out on this technological wave.


Every few months, new and more powerful generations of models are released. And each time, many businesses scramble to integrate them as soon as they can.


That's because often the simplest way to improve your AI applications is upgrading the model that you use.


The problem is that prompts don't generalize across models.


This means that to make the most of new models, you need new prompts every time you switch. And if you have complex AI workflows, that means spending a lot of time tweaking your prompts every time you want to adopt a new model. Even for models from the same provider, a wholly different prompt may be required.


With automatic prompt engineering, you can automate this entirely. Staying up-to-date with the latest developments in AI then becomes incredibly easy.


3. Short development cycles

Automatic prompt optimization isn't just useful to keep your workflows up to date. It's useful for making any change in general.


Did you find a new LLM error that you want to correct? Or do you have a new criterion for the LLM outputs? Simply plug in your new rule and let your AI rebuild the workflow automatically.


Almost every change you want to make to your LLM application means that you have to change your prompts. This becomes especially problematic when your pipeline consists of multiple steps that depend on each other.


Automatic prompting is how you keep your AI workflows flexible, extensible and maintainable.


4. Find the best model for the task

There are a dozen or so models to choose from for your LLM applications. How do you know which one is best for your specific use case?


Since prompts are not portable across models, it is hard to test each model for each of your AI workflows.


With automatic prompt engineering, it actually becomes feasible to try different models for your use case.


5. Automatic cost optimization

We can also use automatic prompt engineering to find the cheapest model that still meets all of our requirements. This is our so-called Minimum Viable Model (MVM).


The cost savings can be substantial, as some models are 100x cheaper than others. This means that in some cases we can save up to 99% on our AI bills.
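One way to search for such a Minimum Viable Model can be sketched as follows (the model names, prices, and quality scores below are purely illustrative, not real pricing): walk the models from cheapest to most expensive and return the first one whose evaluation score clears your quality bar.

```python
def minimum_viable_model(models, evaluate, threshold):
    """Return the cheapest model whose quality clears `threshold`.

    `models` maps model name -> price per 1M tokens (illustrative numbers);
    `evaluate(name)` scores that model on your own eval set.
    """
    for name, price in sorted(models.items(), key=lambda kv: kv[1]):
        if evaluate(name) >= threshold:
            return name, price
    return None, None  # no model is good enough


# Illustrative catalogue: a 100x price spread between smallest and largest.
models = {"small": 0.15, "medium": 3.00, "large": 15.00}
quality = {"small": 0.72, "medium": 0.91, "large": 0.95}
```

Because prompts are re-optimized per model automatically, evaluating each candidate this way becomes cheap enough to run on every workflow.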


6. Redundancy to prevent downtime

When OpenAI went down on June 4th, 2024, many applications went down with it. For certain applications this is very problematic, so it is critical to always have back-up options available.
By testing different models in your workflow, you can ensure that second- and third-best options are always on stand-by.


Preventing downtime is another added benefit of automatic prompt engineering.
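Such a fallback chain might look like the following sketch (the provider names and the `call` interface are placeholders of our own, not any real SDK): try the best-ranked provider first and fall through to the next whenever a call fails.

```python
def call_with_fallback(providers, prompt):
    """Try each provider in order; return the first successful response.

    `providers` is a list of (name, call) pairs ranked best-first, where
    `call(prompt)` returns a response string or raises when the provider is down.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # provider outage or error
            errors.append((name, str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")


def primary(prompt):
    raise ConnectionError("provider outage")  # simulate a provider going down

def backup(prompt):
    return "ok: " + prompt
```

Each entry in the chain uses a prompt optimized for that specific model, so the backup is a real substitute rather than a degraded copy.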


7. Personalization at scale

The ultimate goal of business is to give your customer exactly what they want. In a dream world, this means providing everyone with a product made exactly for them.


In the real world, personalization at scale has always been problematic due to its high costs. LLMs have now changed this.


We can now create personalized AI workflows for each of our users, tweaked and optimized based on their personal feedback.


Over the next few years, we expect AI and software applications to get a lot more personalized than they are today.


Without automatic prompt engineering, this simply wouldn't be possible.


With quicker and easier development, higher quality, lower costs, increased resilience, and more personalization, the benefits of automatic prompt engineering are immense. To get started with building your own AI workflows, without prompt engineering, you can sign up at pretrain.com and try us out for free.