Are Your Oracle AI Agents Actually Being Used? Here’s How to Find Out.

By Sudarshan Mondal | Oracle HCM Cloud | AI & Workforce Innovation


There is a question I hear from almost every HCM leader after an Oracle AI agent rollout — and it is not about functionality. It is not about configuration. It is this:

“Is anyone actually using it?”

That question matters more than most teams realize. You can build a beautifully architected AI agent, publish it, hand it off to end users — and still have no idea whether it is driving real value or quietly collecting dust. Oracle’s 26A release brings a direct answer to that question through a new BIP report: Fusion AI Agents Adoption and Usage.

Having worked through the report structure and what it reveals, I want to share what I think every Oracle HCM practitioner needs to understand about it — and more importantly, what questions it now allows you to answer that you simply could not before.


What This Report Actually Does

At its core, this report gives you a consolidated view of two things: who is using your AI agents and how hard those agents are working.

It pulls from Fusion AI Agent Studio and classifies every workflow by how it came to exist — whether someone used an Oracle seeded template directly, derived a custom version from a template, or built something entirely from scratch. That classification alone is valuable. It tells you whether your organization is a fast follower, a customizer, or a true builder — and that has real implications for support, governance, and future investment decisions.


The Three Questions This Report Answers

1. Are people actually using my agents?

The INTERACTION_COUNT metric is your primary signal here. It counts distinct conversational sessions — each one representing a real end-to-end agent invocation. A high interaction count means the agent is embedded in daily work. A low one means it is either undiscovered, misaligned with user needs, or simply not trusted yet.

One nuance worth flagging: interaction count includes both developer testing and end-user activity. Read it alongside COMPLETED_EXECUTION_COUNT to separate genuine user adoption from background system noise. If your interaction count is high but completed executions are low — that is a problem worth investigating.
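To make that comparison concrete, here is a minimal sketch of how you might check completion ratios against a CSV export of the report. The column names INTERACTION_COUNT and COMPLETED_EXECUTION_COUNT come from the report itself; the AGENT_NAME column, the export format, and the function names are my own illustrative assumptions, and the 50% threshold is arbitrary.

```python
def completion_ratio(row: dict) -> float:
    """Share of interactions that ended in a completed execution."""
    interactions = int(row["INTERACTION_COUNT"])
    if interactions == 0:
        return 0.0
    return int(row["COMPLETED_EXECUTION_COUNT"]) / interactions

def flag_low_completion(rows, threshold=0.5):
    """Return agent names whose completion ratio falls below the threshold.
    AGENT_NAME is an assumed column for illustration."""
    return [r["AGENT_NAME"] for r in rows if completion_ratio(r) < threshold]

# Illustrative rows, as they might look in a report export
rows = [
    {"AGENT_NAME": "Absence Helper", "INTERACTION_COUNT": "120",
     "COMPLETED_EXECUTION_COUNT": "110"},
    {"AGENT_NAME": "Comp Advisor", "INTERACTION_COUNT": "200",
     "COMPLETED_EXECUTION_COUNT": "40"},
]
print(flag_low_completion(rows))  # ['Comp Advisor']
```

An agent flagged this way is exactly the "high interactions, low completions" pattern described above: people are opening sessions, but something stops them from finishing.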

2. How sophisticated are my agents?

This is where the report gets interesting for architects. Metrics like REASONING_STEP_COUNT, FUNCTION_CALL_COUNT, and TOOL_CALLS_EVENT_COUNT tell you how complex the agent’s behavior actually is at runtime. A high reasoning step count means the agent is doing real cognitive work — evaluating conditions, branching logic, making decisions. A low count might mean the agent is essentially a glorified FAQ bot.

For organizations making the case internally for AI investment, these metrics are your evidence. They translate technical sophistication into something a CHRO or CIO can understand: this agent does not just retrieve information, it reasons.

3. What is this costing me to run?

Token consumption metrics — INPUT_TOKEN_COUNT, OUTPUT_TOKEN_COUNT, and AVG_TOKENS_PER_INTERACTION — are your cost and efficiency signals. Two agents serving similar use cases can have dramatically different token profiles depending on how their prompts are designed and how much context they carry into each interaction.

This is where smart HCM architects can add real value post-go-live. Identifying high-token, low-completion agents and optimizing their prompt design is not a technical nicety — it is a cost management lever.
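A useful sanity check is recomputing the average from the raw token counts and comparing it against the report's AVG_TOKENS_PER_INTERACTION column. The sketch below assumes a row-per-agent export with the report's column names; nothing beyond those columns is confirmed by the source.

```python
def avg_tokens_per_interaction(row: dict) -> float:
    """Recompute average tokens per interaction from the raw counts,
    as a cross-check against the report's AVG_TOKENS_PER_INTERACTION column."""
    interactions = int(row["INTERACTION_COUNT"])
    if interactions == 0:
        return 0.0
    total = int(row["INPUT_TOKEN_COUNT"]) + int(row["OUTPUT_TOKEN_COUNT"])
    return total / interactions

# Illustrative row: 9,000 input + 3,000 output tokens over 100 sessions
row = {"INTERACTION_COUNT": "100",
       "INPUT_TOKEN_COUNT": "9000",
       "OUTPUT_TOKEN_COUNT": "3000"}
print(avg_tokens_per_interaction(row))  # 120.0
```

If your recomputed figure diverges sharply from the report's own column, that is usually a sign the aggregation window or filters differ from what you assumed — worth resolving before you take cost numbers to leadership.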


What I Look for First When Running This Report

When I approach this report for the first time on a new environment, I run through a mental checklist:

Adoption type distribution. What percentage of agents are seeded templates used directly versus custom-built? A heavy lean toward direct template usage is not inherently bad — but it may indicate that the organization has not yet developed internal confidence to build. A heavy lean toward custom-built agents signals maturity, but also governance risk if those agents lack documentation and ownership metadata.

Error rate by agent. The ERROR_RATE_PERCENT column surfaces reliability problems before they become user trust problems. I treat any agent above a 5% error rate as an immediate investigation priority.

Token outliers. Agents with disproportionately high AVG_TOKENS_PER_INTERACTION relative to their interaction count often have prompt design issues — either they are carrying too much static context or they are not scoping user input effectively.

Inactive agents. The LAST_TELEMETRY_TS field is underappreciated. Agents that were active three months ago and show no recent telemetry are candidates for retirement or re-promotion. Dead agents in your catalog create confusion and erode trust in the platform overall.
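The checklist above lends itself to automation. Here is a minimal sketch applying two of the rules — the 5% error-rate threshold and the stale-telemetry check — to a report export. ERROR_RATE_PERCENT and LAST_TELEMETRY_TS are report columns; AGENT_NAME, the ISO timestamp format, and the 90-day staleness window are assumptions for illustration.

```python
from datetime import datetime, timedelta

def triage(rows, now, error_threshold=5.0, stale_days=90):
    """Apply two checklist rules to report rows:
    - error rate above error_threshold percent
    - no telemetry within stale_days of `now`
    Returns (high_error, inactive) lists of agent names."""
    high_error, inactive = [], []
    for r in rows:
        if float(r["ERROR_RATE_PERCENT"]) > error_threshold:
            high_error.append(r["AGENT_NAME"])
        last_seen = datetime.fromisoformat(r["LAST_TELEMETRY_TS"])
        if now - last_seen > timedelta(days=stale_days):
            inactive.append(r["AGENT_NAME"])
    return high_error, inactive

# Illustrative rows
rows = [
    {"AGENT_NAME": "Onboarding Guide", "ERROR_RATE_PERCENT": "7.5",
     "LAST_TELEMETRY_TS": "2025-01-10T00:00:00"},
    {"AGENT_NAME": "Leave Advisor", "ERROR_RATE_PERCENT": "1.0",
     "LAST_TELEMETRY_TS": "2025-06-01T00:00:00"},
]
print(triage(rows, now=datetime(2025, 6, 15)))
```

Running something like this on a schedule turns a one-off review into the standing operational check the next section argues for.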


Why This Matters Beyond the Technical

Oracle AI Agent Studio is not just a feature — it is Oracle’s strategic bet on where HCM automation is going. The organizations that will get the most from it are not necessarily the ones that build the most agents. They are the ones that govern, measure, and iterate.

This report gives you the measurement layer. What you do with it is the governance question.

In my experience, the best-performing Oracle HCM environments treat AI agent telemetry the same way they treat payroll run statistics or benefits enrollment data — as operational metrics that require regular review, clear ownership, and a response protocol when something looks wrong.

That mindset shift — from “we deployed it” to “we operate it” — is what separates organizations that realize value from AI from those that simply report that they have implemented it.


Getting Started

The setup is straightforward. You download the catalog zip from the Oracle post, extract it into your HCM Shared Folders under Custom > Human Capital Management, relink the data model, and run the report. Oracle's documentation walks through the exact steps clearly.

The harder work — and the more valuable work — is deciding what you will do with what you find.


Sudarshan Mondal is an Oracle HCM Cloud architect and thought leader with 24+ years of experience helping global organizations reimagine how they manage their people. A seasoned Oracle practitioner, he has designed and delivered complex HCM Cloud implementations across Healthcare, Higher Education, Energy, and Financial Services — spanning Core HR, Payroll, Compensation, and Benefits. He writes at the intersection of enterprise technology, human capital strategy, and the future of work.

All content on this site represents his personal opinions and does not reflect the positions of his employer or any affiliated organization.


Tags: #OracleHCMCloud #AIAgents #FusionAIAgentStudio #Oracle26A #WorkforceInnovation #HCMAnalytics
