“First it came for copywriters & logo designers
I didn’t say anything
Then it came for customer service agents
I didn’t say anything
Then it came for agronomists
I didn’t say much
Now it is coming for….”
It is quite clear “it” here is AI.
Pricing of Enterprise Software
In a recent CropLife article titled “When It Comes to AI, Who’s Flying the Airplane?”, the author contends that humans should be in control of AI and should be the fallback for AI. The article ends with this belief:
I don’t believe AI will replace agronomists, and I don’t believe it will replace journalists.
The answer above responds to the wrong question. People keep asking whether AI will replace agronomists, or will replace journalists. That question is reductive, and it does not foster a discussion.
The right question to ask is “How will AI change the job of agronomists and journalists?”
There is a spectrum of possibilities when you ask this question, and one of the answers could be “AI will replace agronomists or journalists.” That outcome occupies some area under the probability distribution of answers to the question I posed.
Contrast this with how enterprise software has been marketed and sold over the last three to four decades: it was, and is, about increasing employee and company productivity.
Use our software, and it will increase your employee productivity by 20%.
Use our software, and it will increase collaboration within your company, and increase overall productivity by 10%.
Use our software to track your inbound leads, and it will increase your sales reps’ efficiency by 8%.
Because of this, most enterprise software is priced per seat. It is indexed against your headcount and the value of increasing the productivity of that headcount. There was very little discussion about the software replacing your staff.
We see this trend in food and agriculture as well. Climate FieldView is priced on a named-account basis, and much of the accounting software is priced per seat.
Even within the context of pricing for improved productivity, startups often leave money on the table in how they calculate the value created.

For example, I was talking with an early stage startup during the last AgTech Alchemy event. The startup is building a model that makes it very easy to capture data in the field using a smartphone.
One way to price this offering is to think about the productivity improvements for the person who will be out in the field to measure different “things” using the smartphone. A simple calculation could be as follows:
This application will increase the productivity of your field staff by 30%. We should be able to capture a tenth to a third of that value, so our pricing is X per user.
This does leave some value on the table, as it does not take into account the accuracy of measuring with a smartphone compared to the mistakes a human might make.
The software is still acting as a productivity improvement tool for field staff and so can be priced accordingly.
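As a rough sketch, the productivity-based arithmetic above can be written out with hypothetical numbers. The labor cost, productivity gain, and capture rates below are all assumptions for illustration, not figures from the startup discussed:

```python
# Hypothetical productivity-based pricing sketch. All figures are
# illustrative assumptions, not numbers from the startup discussed above.

def seat_price(annual_labor_cost, productivity_gain, capture_rate):
    """Annual price per seat as a share of the productivity value created."""
    value_created = annual_labor_cost * productivity_gain
    return value_created * capture_rate

# A field technician costing $60,000/year, made 30% more productive,
# creates $18,000 of value per seat per year.
low = seat_price(60_000, 0.30, 0.10)    # capture a tenth -> $1,800/seat/year
high = seat_price(60_000, 0.30, 1 / 3)  # capture a third -> ~$6,000/seat/year
```

The capture rate is the real negotiation: the tenth-to-a-third range quoted above is a common rule of thumb, not a law.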
LLM pricing
The initial set of LLM-based products is following the same playbook, or is priced on usage due to the costs of running LLM queries. Within food and agriculture, we are seeing people do enterprise deals or use LLMs for internal purposes.
It is not exactly clear how LLM-based products should be priced, and companies are experimenting with different options. I asked the pricing question of someone who has been an early leader in building LLM agents for agronomists through a partnership with a large technology company. This was their response via email (emphasis mine):
Selling new AI solutions requires extra due diligence to validate the costs of implementation and validation that the outcomes and promises of value are real. A smart AI solution provider will either put in the extra effort to get the potential customer comfortable with the implementation and value proposition, or they can accelerate the sales cycle and do a value based pricing model to share in the risk/reward of the outcomes. We are validating the business case with our customers and also the costs of integration to make sure we are all aligned before proceeding. Takeaway is that given the lack of awareness around use cases, newness of the AI technology, and heavy integration requirements I would recommend having several commercial options available to accommodate the variety of variables.
The respondent makes some really pertinent points. At a basic level, you need to make sure you are solving a real problem, and that your LLM-based product delivers the value you promised and the value the customer is looking for.
Their point about accelerating the sales cycle with a value-based pricing model that shares the risk/reward of the outcomes is very astute. But in the case of LLMs, how do you get a sense of the value you are creating?
Let us take the example of an LLM that acts as a digital agronomist for your team. In the current scheme of things, the LLM is an assistant to the agronomist, but in many scenarios it is doing the work of the agronomist itself by answering certain agronomy questions.
This creates an opportunity for AI agent startups to sell the “work”, instead of just the productivity increase, and it could open up many opportunities that were not available before.
When you start selling the work, you are not selling the utility of the software or the productivity improvement for a human worker; you are selling the work of an agronomist or a journalist. We are not there yet, and it might be a while before we get there (if we get there at all).
This opens up new and interesting pricing models.
Let us take the example of an LLM agent that acts as a co-pilot for an agronomist (think of it as a junior agronomist on staff) or as a customer service agent for an equipment dealership (a junior customer service agent).
In this example, you could price the agent per seat, using some productivity improvement assumptions for the agronomist or the customer service agent. But the LLM agent works a bit differently, because it is also doing the work of an agronomist or a customer service agent.
If you hire the LLM agent, you do not have to spend time training new agronomists. LLM agents also don’t churn.

Training and churn among employees are a big challenge, and a huge cost for enterprises, not only in time and money but also in the quality of service delivered to their customers.
As an organization selling these LLM agents, you should consider the cost savings on productivity and include the imputed savings on training and churn when pricing your agents.

This is the true value the agent will create for the enterprise. If you want to use value-based pricing, consider the total imputed costs, even if you are doing seat-based pricing.
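A hedged sketch of what folding imputed training and churn savings into a value-based price might look like. All the inputs here are hypothetical:

```python
# Hypothetical value-based pricing that adds imputed training and churn
# savings on top of the productivity value. All inputs are illustrative.

def agent_value(productivity_savings, annual_training_cost,
                churn_rate, replacement_cost):
    """Total annual value per role the LLM agent augments or replaces."""
    churn_savings = churn_rate * replacement_cost  # expected annual churn cost avoided
    return productivity_savings + annual_training_cost + churn_savings

# e.g. $18,000 of productivity value, $5,000/year of training avoided,
# 25% annual churn, $20,000 to recruit and ramp a replacement.
total = agent_value(18_000, 5_000, 0.25, 20_000)  # -> 28000.0
price = total * 0.20                              # capture a fifth -> $5,600/year
```

Note how the same capture rate now applies to a much larger value base than productivity alone.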
Another option is outcome-based pricing. For example, if you are selling an LLM agent for customer service, the pricing model could include payments based on case resolutions. The best outcome when a customer contacts you with a service issue is to resolve it quickly and satisfactorily.

If the AI agent can resolve the issue without involving a human agent, you can charge for the outcome of having closed the issue. If the AI agent cannot close the issue but can triage it to the point where a second- or third-level support person can take over and resolve it much more efficiently, that is a productivity improvement. (I believe we will get to an outcome-based price for LLM customer service agents before any of the seed and chemical companies figure out outcome-based pricing for yield guarantees!)
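One possible shape for such outcome-based billing. The fee levels and the case mix below are illustrative assumptions, not rates from any real product:

```python
# Hypothetical outcome-based billing for an LLM customer service agent.
# Fee levels and the case mix are illustrative assumptions.

RESOLVED_FEE = 4.00  # charged when the agent closes the issue on its own
TRIAGE_FEE = 1.00    # charged when the agent only triages for a human
FEES = {"resolved": RESOLVED_FEE, "triaged": TRIAGE_FEE, "escalated": 0.0}

def monthly_bill(case_outcomes):
    """case_outcomes: list of 'resolved', 'triaged', or 'escalated'."""
    return sum(FEES[outcome] for outcome in case_outcomes)

cases = ["resolved"] * 700 + ["triaged"] * 200 + ["escalated"] * 100
bill = monthly_bill(cases)  # 700 * 4.00 + 200 * 1.00 = 3000.0
```

The vendor only gets paid when the agent actually moves a case forward, which is what makes this a risk/reward-sharing model.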
The enterprise using these agents does get the benefit of closed issues, or the benefit of more productive triaging, while significantly reducing the cost of training and churn among customer service employees, which remains one of the highest across job categories.
Efficiency in building software
So far we have talked about how companies building LLM-based AI agents can capture more value (assuming they can create it). Enterprises, on the other side of the table, are not going to sit idle; they are going to respond to keep their costs low.
The startup I talked with during the AgTech Alchemy event mentioned that their engineering costs are much lower than a few years ago. The founder said he would have hired a team of three to four engineers then; now he is able to do most of the work on his own.

Most of the decrease in costs has been driven by coding co-pilots, so one person can do the work of a few people. Product development costs can be slashed significantly compared to a few years ago.
Enterprises are going to push back and argue that the cost of building AI agents is much lower. They will want the organizations building AI agents to pass some of these efficiencies on to enterprise customers.
This goes against the principles of value based pricing.
But if you are building LLM agents, there are other teams out there building LLM agents as well, and they might be willing to pass some of their product development efficiencies on to the enterprise customer in order to win the deal.

This is especially true if the proprietary data needed to train the models is owned and controlled by the enterprise. In that case, the LLM agent can get commoditized very quickly.
It might be relatively easy to switch out one LLM agent with another LLM agent. The switching costs for an LLM agent might not be huge.
Data has gravity
LLMs for use cases like customer support or agronomy require access to domain-specific data: data specific to the company’s products, their usage, and their behavior.
During the early part of the LLM (and ML) craze, many of us felt that data would move to the model. That assumption has not held up: as models have become easier and easier to access and use, we have found it easier to move the model to the data.

Basically, data has gravity. Data is not going to move to the models easily, but models can move to the data more easily. If you are a model-only provider, switching costs are low, and you will be in a deflationary fight with other model providers.
Image generated using ChatGPT: Data has gravity
I pay $20/month for ChatGPT to do research for the newsletter, and I can get other open-source models practically for free.
So what should you do if you are a model-only provider? You can try to capture additional relevant and valuable data for the model, data that is not easily available to other players. This will give you some amount of data gravity and increase switching costs.
Or you could provide deep enterprise-level workflow and process integration. Can you provide additional tools that work better with the data, ensuring higher quality?
This could be combined with any domain specific technical innovations. This approach could create more integrated and seamless business process workflows for your customers, and provide more configurable tailored solutions for their unique business challenges.
It will increase switching costs, create durable revenue streams, create a defensible value proposition for your products and services, and help you become part of a collaborative ecosystem.
I would love to know how you are thinking about pricing your enterprise level LLM agents. If you are using them for internal users only, how are you making a business case to justify investments by your organization? I would welcome your insights and feedback.