
Dissecting a ServiceNow AI Agent: The 'Classify tasks' Use Case
ServiceNow’s AI Agent framework represents a significant leap forward in platform automation. Instead of just simple scripts, we can now build intelligent agents that understand requests, use tools, follow multi-step plans, and even collaborate within a team to achieve complex objectives.
But how are these powerful agents actually constructed? Let’s pull back the curtain and explore the inner workings of a practical example provided by ServiceNow: the “Classify tasks” use case. By examining its configuration details, we can understand the design patterns and components that bring it to life.
The Objective: What Does “Classify tasks” Do?
Every AI Agent starts with a purpose. The definition for the “Classify tasks” use case (stored in the `sn_aia_usecase` table) clearly outlines its mission:
- Name: Classify tasks
- Goal: “Efficiently Classify tasks (incidents, cases, or problems) by fetching relevant record information, predicting and updating fields like Priority and Assignment Group, analyzing sentiment, and retrieving necessary context, regardless of the amount of detail provided.”
Essentially, this agent automates the critical but often manual process of triaging and categorizing new tasks, aiming to improve efficiency and consistency.
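If you want to poke at this definition yourself, it is just a record in `sn_aia_usecase` and can be read like any other record, for example via the Table REST API. Below is a minimal Python sketch; the instance URL, credentials, and the column used for the goal text are placeholders and assumptions to verify against your own instance.

```python
import requests

# Placeholders: point these at your own instance and credentials.
INSTANCE = "https://your-instance.service-now.com"
AUTH = ("admin", "password")

# Read the "Classify tasks" use case from sn_aia_usecase.
# The "description" column for the goal text is a guess; check the table's
# dictionary on your instance for the real column names.
resp = requests.get(
    f"{INSTANCE}/api/now/table/sn_aia_usecase",
    params={
        "sysparm_query": "name=Classify tasks",
        "sysparm_fields": "sys_id,name,description,base_plan",
        "sysparm_limit": 1,
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for uc in resp.json()["result"]:
    print("Name:", uc["name"])
    print("Goal:", uc.get("description", ""))
    print("Plan:", uc.get("base_plan", "")[:200], "...")
```

Opening the record in the platform UI works just as well; the point is that the goal and plan are plain, inspectable configuration data.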
The Team Approach: Specialized Roles
Complex tasks often benefit from teamwork. This use case employs an AI Agent Team (`sn_aia_team`) consisting of two specialized agents (`sn_aia_team_member`, `sn_aia_agent`), each with a distinct role:
- Record handler: This agent is the hands-on expert for direct record interaction – its job is to “Perform actions of fetch, create, update the record with the provided record details.”
- Field predictor: This agent focuses on analysis and intelligence, designed to “predict fields of an incoming task/record… [and] analyse the sentiment.”
This division of labor is a key design pattern, allowing each agent to excel at its specific function, making the overall system more robust and maintainable.
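Because the team and its members are also just records, you can list the roster directly. A quick sketch follows, again via the Table API; the team name and the `team`/`agent` reference column names on `sn_aia_team_member` are assumptions to check against your instance.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("admin", "password")                        # placeholder

# List the agents on the team by dot-walking assumed "team" and "agent"
# reference columns on sn_aia_team_member. The team name is also assumed.
resp = requests.get(
    f"{INSTANCE}/api/now/table/sn_aia_team_member",
    params={
        "sysparm_query": "team.name=Classify tasks",
        "sysparm_fields": "agent.name,agent.description",
        "sysparm_display_value": "true",
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for member in resp.json()["result"]:
    print(member.get("agent.name"), "-", member.get("agent.description"))
```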
The Master Plan: Guiding the Use Case
The heart of the use case is its `base_plan` field. This provides the high-level, ordered strategy that the AI system follows to achieve the overall objective. Let’s look at the plan for “Classify tasks”:
Efficiently Classify tasks (incidents, cases, or problems) by fetching relevant record information, predicting and updating fields like Priority and Assignment Group, analyzing sentiment, and retrieving necessary context, regardless of the amount of detail provided.
To Classify a task, ALWAYS follow this progression of steps:
1. Fetch task information of the provided task number. If the task details are not found, then check the correctness of the task number from user.
2. Next, predict assignment group, impact, urgency, priority and category fields, and evaluate sentiment value for the incoming task. Take the sys_id field as record_sys_id and the table field from above step’s output.
3. Next, update the record with the above predicted fields and sentiment value if present. Display these values including sentiment in a bulleted list format. Display the sentiment reasoning on a separate line below the fields. Take user consent before updating the record.
4. Next, if the record update was successful, summarize and justify the updated fields and display them to the user again in a bulleted list format, to confirm the changes.
MANDATORY: While following the steps above, never ask the user for additional instruction. Always follow the instructions above. All requests or format display values are essential and should not be skipped.
Analysis:
- Structured Flow: The plan uses clear, numbered steps, establishing a logical sequence. Dependencies are handled implicitly by the order (fetch before predict, predict before update).
- Action-Oriented: Each step clearly defines a core action (Fetch, Predict/Analyze, Update, Summarize).
- Implicit Delegation: While not explicitly naming agents, the actions strongly align with the defined agent specializations (Fetch/Update -> Record Handler; Predict/Analyze/Summarize -> Field Predictor). The underlying orchestrator uses these cues.
- User Interaction & Safeguards: Steps explicitly define interactions like checking the task number (Step 1), displaying results (Steps 3 & 4), and, critically, mandating user consent before any record update (Step 3).
- Clarity and Rigidity: The instructions are precise, even specifying output formats (bullet points). Keywords like ALWAYS and MANDATORY signal that this plan must be followed strictly, reducing ambiguity for the AI.
Meet the Agents: Instructions and Tools
Now, let’s dive deeper into the individual agents, their specific instructions, and the tools they wield (`sn_aia_agent_tool_m2m`, `sn_aia_tool`). Both agents utilize the ReAct (Reason+Act) strategy, meaning they think about why they’re doing something before taking an action.
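Since tool assignments are stored as mapping records, you can dump each agent’s toolbox the same way as the team roster. In the sketch below, the `agent` and `tool` reference column names on `sn_aia_agent_tool_m2m` are assumptions; adjust them to whatever the dictionary on your instance shows.

```python
import requests
from collections import defaultdict

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("admin", "password")                        # placeholder

# Group tools by agent using the sn_aia_agent_tool_m2m mapping table.
# The "agent"/"tool" reference column names are assumptions.
resp = requests.get(
    f"{INSTANCE}/api/now/table/sn_aia_agent_tool_m2m",
    params={
        # Encoded query: agent name IN the two agents of this team.
        "sysparm_query": "agent.nameINRecord handler,Field predictor",
        "sysparm_fields": "agent.name,tool.name",
        "sysparm_display_value": "true",
    },
    auth=AUTH,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

tools_by_agent = defaultdict(list)
for row in resp.json()["result"]:
    tools_by_agent[row.get("agent.name", "?")].append(row.get("tool.name", "?"))

for agent, tools in tools_by_agent.items():
    print(f"{agent}:")
    for tool in tools:
        print(f"  - {tool}")
```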
1. Record Handler Agent
- Role: Expert in fetching, updating, and creating records.
- Instructions: Provides specific procedures for each core action (fetch, update, create). Key directives include: using raw field values (not display values), consolidating sentiment data for updates, always getting user consent before updating, handling errors gracefully, and communicating informally.
- Tools:
  - Get the record meta data: Used to fetch initial task details.
  - Update the record details: Used to apply changes to the record after prediction and consent (sketched after this list).
  - Create the record: Available to the agent, though not required by this specific use case plan.
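To make the “raw field values, not display values” directive concrete, here is a hedged illustration of the kind of update the Update the record details tool ultimately results in: a PATCH against the task’s table where reference fields carry sys_ids and choice fields carry their stored values. This is not the tool’s actual implementation (that runs inside the agent framework), only the effect it has on the record; the sys_ids and field values are hypothetical.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("admin", "password")                        # placeholder

record_sys_id = "a1b2c3d4e5f60123456789abcdef0123"  # hypothetical incident sys_id

# Raw values only: reference fields carry sys_ids, choice fields carry their
# stored values (e.g. urgency "1"), never display labels like "High".
predicted_fields = {
    "assignment_group": "287ebd7da9fe198100f92cc8d1d2154e",  # a sys_id, not a group name
    "impact": "2",
    "urgency": "1",
    "category": "hardware",
}

resp = requests.patch(
    f"{INSTANCE}/api/now/table/incident/{record_sys_id}",
    json=predicted_fields,
    auth=AUTH,
    headers={"Accept": "application/json", "Content-Type": "application/json"},
)
resp.raise_for_status()
print("Updated:", resp.json()["result"]["number"])
```

In the real use case this update only happens after the user has agreed; the consent check is enforced by the agent’s instructions and the base plan, not by the update call itself.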
2. Field Predictor Agent
- Role: Expert in prediction, sentiment analysis, and summarization/justification.
- Instructions: Outlines procedures for: predicting fields (using similar records), analyzing sentiment, and summarizing findings only when asked by the higher-level plan. It’s instructed to return raw values and handle situations where predictions aren’t possible.
- Tools:
  - Get Similar Records: Used as input for the prediction tool (a rough stand-in is sketched after this list).
  - Predict Fields for a record: Performs the core field prediction.
  - Get sentiment data: Analyzes the task’s sentiment.
  - Summarization likely relies on the LLM’s generative capability, guided by the agent’s instructions; no dedicated ‘Summarize’ tool is listed in the provided details, even though Step 4 of the base plan requires a summary.
Analysis of Agents & Tools:
- Clear Roles: Agents operate within well-defined boundaries.
- Purposeful Tools: Tools represent specific, often complex actions (like running a prediction model or updating a record via a Flow).
- Aligned Instructions: Agent instructions directly support the steps outlined in the use case’s `base_plan`, including crucial details like the consent check.
- Data Precision: Instructions specify how to handle data (e.g., “use ONLY values,” “consolidate sentiment”), ensuring consistency.
Putting it All Together: The Execution Flow
How does this orchestrated process unfold when a user triggers the use case?
- Initiation: User requests classification for a task (e.g., “Classify INC0010001”).
- Fetch (Step 1): The orchestrator delegates to the Record Handler, which uses the Get the record meta data tool to retrieve the task’s basic information (sys_id, table, etc.).
- Predict & Analyze (Step 2): The task details (sys_id, table) are passed to the Field Predictor. It uses the Get Similar Records, Predict Fields for a record, and Get sentiment data tools to generate predictions (Assignment Group, Priority, etc.) and sentiment analysis.
- Consent & Update (Step 3): The predictions are sent back to the Record Handler. Following its instructions (and the base plan), it first presents the proposed changes and sentiment to the user, explicitly requesting consent.
  - If Approved: The Record Handler uses the Update the record details tool to apply the changes.
  - If Denied: The process likely halts or reports the inability to proceed.
- Summarize (Step 4): Assuming the update was successful, the orchestrator directs the Field Predictor (or uses its capabilities) to generate a summary of the changes applied, which is then displayed to the user for confirmation.
- Completion: The use case concludes its run. (A simplified sketch of this whole flow follows below.)
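Stripped of the platform machinery, that flow reads roughly like the sketch below. Every function here is a hypothetical stub standing in for an agent tool or the orchestrator’s routing; it is meant only to show the sequencing and the consent gate, not how the AI Agent framework is actually implemented.

```python
# Hypothetical sketch of the orchestration flow - not framework code.
# Every helper is a stub standing in for an agent tool or LLM call.

def fetch_record(task_number):
    """Stand-in for the Record handler's 'Get the record meta data' tool."""
    return {"sys_id": "abc123", "table": "incident", "number": task_number}

def predict_and_analyse(record):
    """Stand-in for the Field predictor's prediction and sentiment tools."""
    fields = {"assignment_group": "Network", "priority": "2 - High"}
    sentiment = "neutral"
    return fields, sentiment

def update_record(record, fields, sentiment):
    """Stand-in for the Record handler's 'Update the record details' tool."""
    print(f"Updating {record['number']} with {fields} (sentiment: {sentiment})")
    return True

def classify_task(task_number):
    # Step 1: fetch the task; ask the user to re-check the number if missing.
    record = fetch_record(task_number)
    if record is None:
        print("Task not found - please check the task number.")
        return

    # Step 2: predict fields and evaluate sentiment for the task.
    fields, sentiment = predict_and_analyse(record)

    # Step 3: display the proposal and require consent before updating.
    print("Proposed values:", fields, "| sentiment:", sentiment)
    if input("Apply these updates? (y/n) ").strip().lower() != "y":
        print("No changes were made.")
        return

    # Step 4: summarize and justify the applied changes back to the user.
    if update_record(record, fields, sentiment):
        print("Updated and confirmed:", fields)

classify_task("INC0010001")
```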
Takeaways: Building Intelligent Automation
Dissecting the “Classify tasks” AI Agent reveals a thoughtfully designed system built on ServiceNow’s framework and best practices. It effectively combines:
- A clear, goal-oriented Use Case Plan.
- A collaborative Team of specialized Agents.
- Detailed Instructions defining agent behavior and data handling.
- Specific Tools for executing platform actions.
- Essential Safeguards like user consent.
This layered approach allows ServiceNow developers to build sophisticated, reliable automations that go far beyond traditional scripting, tackling complex processes like task classification with intelligence and structure. It’s a powerful demonstration of how AI Agents can streamline workflows and enhance productivity on the platform.