Is Your AI ‘Agentic’, Or Merely ‘Agent-ish’?

Jensen Huang says AI agents are a ‘multi-trillion-dollar opportunity’. Marc Benioff thinks agents represent ‘what AI was meant to be’. And Satya Nadella thinks SaaS is dead. It’s 2025, and agents are the only game in town (or so it would seem). The tech industry adores its buzzwords, and ‘AI agents’ might be the buzziest of them all! While a few vendor platforms are genuinely building agentic features into their roadmaps, others are merely ‘agent washing’. I see lots of confusion among Forrester clients – buyers of these technologies – who are trying to sift through the frenzy to make sense of what agents really are, what they mean for the business, and what their choices are.

I have thoughts.

If it doesn’t have agency, it isn’t an agent

We are still early enough along the technology maturity cycle that definitions and characteristics can be a bit fluid, but it is generally accepted that AI agents are LLM-based constructs that demonstrate specific design patterns: planning, reflection, collaboration with other agents, and tool use. Underlying these patterns are two foundational building blocks of true ‘agentic’ capability:

  • Agency: A defining characteristic of an agentic AI system is the ‘agency’ to control and direct its own program flow, making independent decisions about the specific pathways, sequence and nature of actions it must execute to attain its goals. Of course, agency can be narrow or broad, but AI agents are expected to have broad agency across a variety of goals within a context-space.
  • Autonomy: This is a product of an agent’s ‘agency’ as well as the generalized intelligence of today’s foundation models. Autonomy refers to the breadth of contexts (exceptions, externalities and edge cases) within which the AI can operate effectively and deliver desired outcomes, without requiring explicit instructions or intervention from a human.

You can immediately see that agency and autonomy feed off each other. Together, these traits distinguish true AI agents from their lesser counterparts.

If you look carefully at many of the ‘agentic’ offerings from SaaS vendors, they come across as a mixed bag. You will quickly realize that these ‘agents’ have limited autonomy, or limited agency, or are confined to such a narrow context-space that you might as well have used a deterministic workflow or a regular LLM prompt to produce the same outcome. Unfortunately, several of the purported agentic demos that I have seen from SaaS vendors are merely LLM prompts embedded into a flowchart-y, deterministic process flow, within which they perform narrow tasks. Basically, these are LLM wrappers around deterministic process workflows.

These are not ‘agentic’. More often than not, they are merely ‘agent-ish’.
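The distinction is easiest to see in code. Below is a minimal Python sketch – not any vendor’s actual implementation – contrasting an ‘agent-ish’ pipeline (an LLM call embedded in a fixed flowchart) with an agentic loop in which the model directs its own program flow. The `call_llm` function, the ticket-routing scenario, and the tool names are all hypothetical, and the LLM is stubbed so the sketch runs offline.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call, so the sketch is runnable.
    if "classify" in prompt:
        return "refund_request"
    return "done: issue_refund"

# 'Agent-ish': an LLM call embedded in a fixed, deterministic workflow.
# The *program*, not the model, decides what happens next.
def agent_ish_pipeline(ticket: str) -> str:
    category = call_llm(f"classify this ticket: {ticket}")
    if category == "refund_request":   # hard-coded branch
        return "routed_to_refund_queue"
    return "routed_to_general_queue"

# 'Agentic': the model controls its own program flow, choosing which
# tool to invoke next until it decides the goal has been met.
TOOLS = {
    "issue_refund": lambda t: "refund_issued",
    "escalate": lambda t: "escalated",
}

def agent_loop(ticket: str, max_steps: int = 5) -> str:
    progress = ""
    for _ in range(max_steps):
        decision = call_llm(f"goal: resolve {ticket}; progress: {progress}")
        if decision.startswith("done: "):
            tool = decision.removeprefix("done: ")
            return TOOLS[tool](ticket)   # model-chosen action
        progress = decision
    return "needs_human"                 # guardrail: cap autonomous steps
```

The first function will only ever do what its branches allow; the second hands sequencing and action selection to the model, which is the ‘agency’ the post describes – bounded here only by a step limit and a tool whitelist.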

The autonomy spectrum

This is not to say that there is little to no value in ‘agent-ish’ workflows. They have their place in an autonomous ecosystem, and their capability footprint will keep improving over the next few months. But it’s still a stretch to call them AI agents.

In this context, it’s helpful to think of autonomy at different levels. At Forrester we tend to map AI systems along a spectrum of varying agency and autonomy, across the distinct dimensions of control, execution and monitoring. This is analogous to the concept of levels of autonomy in self-driving cars, but instead, as applied to enterprise processes. Let’s outline the key levels:

  • Level 0: Manual. Humans, ironically, embody the highest levels of agency and generalized capability (or ‘common sense’). A human employee can usually be tasked into a role without needing detailed instructions or step-by-step flowcharts to navigate their job. But the point of autonomy is to reduce this reliance on human labor, and so this level forms a baseline from which to measure higher-level autonomy.
  • Level 1: Software-driven, or rules-based automation. This encompasses traditional software-driven automation, as well as task-specific assistants that can be built using traditional automation tech such as Robotic Process Automation (RPA) or workflow automation. These systems execute predefined tasks along preconfigured pathways efficiently but lack any meaningful decision-making ability beyond simple deterministic logical operations.
  • Level 2: Probabilistic automation. This includes systems that integrate machine learning or large language models (LLMs) to enhance automation, yet they remain tethered to static workflows. For example, an RPA-like customer outreach workflow may dip into a machine learning (ML) model to generate a list of customers who are likely to churn. We often hear vendors assert that their software is ‘agentic’ because it can make non-deterministic decisions… well, most machine learning models work with probabilities and are, therefore, non-deterministic. That does not make them agentic, as they have no agency and are only focused on a specific task.
  • Level 3: AI operators, or agentic process orchestration. These quasi-agents mimic agency but operate within tightly defined guardrails. Think of ‘LLM wrappers’ around deterministic workflows. A vast majority of the current wave of so-called ‘agents’ from SaaS vendors fall at this level, as do tools that Forrester terms ‘agentic process automation’. These are ‘agent-ish’ because they deliver autonomy only within a narrowly defined context-space and have very limited agency within it. It is important to note that ‘agent-ish’ workflows and hybrid orchestration across Level 2 and Level 3 – where done right – will prove extremely useful in the near term for organizations dipping their toes into the space, but the choice of use cases and finesse in technical execution will be crucial to success.
  • Level 4: AI agents, or ‘agentic systems’. Systems at this level exhibit both agency and autonomy within broad contexts. Like a highly skilled human colleague or manager, they don’t need a step-by-step flowchart; they are goal-oriented, using their knowledge and contextual awareness to determine the best course of action. AI agents rate high on the control and execution dimensions, with limited monitoring capabilities. True AI agents are beginning to emerge – examples include Devin, a programming agent, and AI Scientist for research and scientific discovery. We have seen several enterprise use cases for true AI agents in areas such as drug discovery, complex know-your-customer processes and advanced insights generation (to name a few). That said, truly agentic systems operate at a level of capability – and business value created – that is a step-function higher than ‘agent-ish’ systems.
  • Level 5: AGI (Artificial General Intelligence), or whatever comes next. We don’t know where AI might evolve in the next five years. While AGI is aspirational and poorly defined today, it does describe a future where AI systems self-govern and manage not only goals but also their evolving purpose.
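For illustration only, the middle of the spectrum above (Levels 1 through 4) can be caricatured as a tiny classification heuristic. The field names, thresholds and example profiles below are my assumptions for the sketch, not a Forrester assessment rubric.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    uses_ml: bool            # any probabilistic model in the loop?
    controls_own_flow: bool  # can it choose its own action sequence?
    context_breadth: str     # "narrow" or "broad"

def autonomy_level(p: SystemProfile) -> int:
    if p.controls_own_flow and p.context_breadth == "broad":
        return 4  # true AI agent: broad agency and autonomy
    if p.controls_own_flow:
        return 3  # agent-ish: agency only within a narrow context-space
    if p.uses_ml:
        return 2  # probabilistic automation tethered to a static workflow
    return 1      # rules-based automation (RPA, workflow tools)

# Hypothetical example profiles:
rpa_bot = SystemProfile(uses_ml=False, controls_own_flow=False,
                        context_breadth="narrow")
quasi_agent = SystemProfile(uses_ml=True, controls_own_flow=True,
                            context_breadth="narrow")
true_agent = SystemProfile(uses_ml=True, controls_own_flow=True,
                           context_breadth="broad")
```

The point the heuristic makes explicit: adding an ML or LLM call (Level 2) does not move a system up the spectrum – only granting it control over its own flow does, and only broad context-breadth makes it a true agent.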

What it means

It is not unrealistic to imagine organizations designed as hierarchies wherein agentic systems manage other forms of autonomy across Levels 1, 2 and 3 (including ‘agent-ish’ systems), either replacing or augmenting human labor in these roles.

However, most organizations are at very early stages of this journey. So it is important that technology buyers and decision makers take a clear-eyed view of the hype and understand that ‘agent-ish’ systems are not the Promised Land of enterprise autonomy, but an intermediate (and nevertheless important) step along the journey.


