Meet Your Digital Colleague: An Agent
🎯 Difficulty Level: Easy
⏱️ Reading Time: 5 minutes
👤 Author: Rob Vettor
📅 Last updated on: December 5, 2025

To Research and Add:
- Intent-based prompting
- Prompt as declarative intent vs. a question: it doesn't say how to do something, it states what needs to be done.
- Agent vs. Assistant
Beyond Code: Intelligence at Work
AI agents are more than just lines of code—they’re the next leap in how we build, operate, and experience software. Imagine intelligent digital colleagues: goal-driven, adaptable, and capable of managing tasks, making decisions, and collaborating in real time. These aren’t simple bots or static programs. They are autonomous services designed to solve real-world problems with human-like judgment and flexibility.
What sets AI agents apart from traditional software is their ability to reason, plan, and act—often with far less coding and complexity. Instead of rigid workflows, agents adapt to changing goals, coordinate with other agents, and lighten the load for employees, allowing people to focus on higher-value work. When teams of specialized agents work together, they can tackle challenges no single app could handle alone, driving better outcomes across services, operations, and entire communities.
Perception, Reasoning, and Action: The Agent’s Superpowers
In the language of modern AI, an agent is a software entity that perceives its environment, reasons about what it finds, and takes action—much like a person would. The heart of every agent is a large language model (LLM) that acts as its cognitive engine. These models are trained on vast amounts of data and can simulate human-like reasoning, problem-solving, and conversation. But agents are more than just smart chatbots—they can interact with APIs, run code, retrieve information, and take autonomous action in the world around them.
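To make that concrete, here is a minimal sketch of the perceive-reason-act loop. It assumes a generic `llm_complete(messages)` chat-completion callable and two toy tools; none of the names map to a specific SDK.

```python
# Minimal perceive-reason-act loop, written as an illustrative sketch (not a specific SDK).
# `llm_complete` is whatever chat-completion callable you have; the tools are toy placeholders.
import json

TOOLS = {
    "search_docs": lambda query: f"Top result for '{query}'",           # retrieve information
    "add_numbers": lambda expr: str(sum(map(float, expr.split("+")))),  # stand-in for running code
}

def run_agent(goal: str, llm_complete, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content": "You are an agent. Reply with JSON "
         '{"tool": name, "input": value} to act, or {"answer": text} when done.'},
        {"role": "user", "content": goal},                      # perceive: the user's goal
    ]
    for _ in range(max_steps):
        reply = json.loads(llm_complete(messages))              # reason: the LLM picks the next move
        if "answer" in reply:
            return reply["answer"]                              # goal reached
        result = TOOLS[reply["tool"]](reply["input"])           # act: call the chosen tool
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Stopped after max_steps without a final answer."
```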
When Software Starts to Think for Itself
At their core, AI agents combine the reasoning power of advanced AI models with real-time context and automation. Their true promise? To supplement human capabilities today—and perhaps, one day, to act as independent digital colleagues. The age of agents isn’t about software, APIs, or just language models. It’s about a new kind of machine—one that mirrors the way we think, decide, and get things done.
Agent Core Components
Agents consist of the following core components:
- Knowledge: Tailor the agent’s responses by providing it with specialized instructions and data sources.
- Actions: Develop actions, triggers, and workflows that automate business processes.
Beyond these core components, additional layers enhance an agent’s functionality:
- The orchestrator acts as the central engine that manages how the agent interacts with knowledge, skills, and autonomy.
- The foundation models power the agent’s reasoning, language understanding, and response generation, forming the intelligence layer behind every interaction.
- The user experience layer ensures seamless interaction between users and agents by integrating agents into Microsoft 365 applications or external platforms for an intuitive and efficient workflow.
By combining these elements, agents provide a powerful way to extend Copilot to automate tasks, integrate data, and deliver intelligent, context-aware assistance.
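One way to picture how these layers compose is as a simple configuration object. The sketch below is illustrative only; the field names (`instructions`, `foundation_model`, `surface`, and so on) are invented for clarity and do not correspond to any product schema.

```python
# Illustrative composition of an agent from the layers described above.
# Field names are invented for clarity; they do not map to a specific product schema.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentConfig:
    instructions: str                                            # knowledge: specialized instructions
    data_sources: list[str] = field(default_factory=list)        # knowledge: grounding data
    actions: dict[str, Callable] = field(default_factory=dict)   # actions, triggers, workflows
    foundation_model: str = "gpt-4o"                             # intelligence layer (assumed model name)
    orchestrator: str = "default"                                # central engine coordinating the layers
    surface: str = "Microsoft 365"                               # user experience layer

support_agent = AgentConfig(
    instructions="Answer billing questions using the billing FAQ.",
    data_sources=["sharepoint://billing-faq"],
    actions={"create_ticket": lambda summary: f"Ticket created: {summary}"},
)
```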
Agent vs. Assistant
Assistant => Copilot
AI Agent vs. AI Structured Workflow
Predefined Workflow
Open-ended, dynamic, or opportunistic agentic workflow
Dynamic agent routing • Emergent agent workflows
Dynamic Routing Workflow
A true multi-agent system involves agents that dynamically reason, plan, negotiate, and act in real time without human intervention or manual coordination.
Agents operate within a defined scope. (A sketch contrasting predefined and dynamic workflows follows below.)
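A rough sketch of that contrast, with stub helpers throughout and a stand-in `route_with_llm` function in place of a real LLM router:

```python
# Contrast between a predefined workflow and dynamic agent routing.
# Every helper below is an illustrative stub, not a specific framework's API.

def extract_text(claim): return claim.get("text", "")
def lookup_policy(claim): return {"covered": True}
def draft_decision(text, policy): return "approve" if policy["covered"] else "deny"

# Predefined workflow: the sequence of steps is fixed in code.
def handle_claim_predefined(claim: dict) -> str:
    text = extract_text(claim)             # step 1, always
    policy = lookup_policy(claim)          # step 2, always
    return draft_decision(text, policy)    # step 3, always

# Dynamic routing: a router (a stub standing in for an LLM call) picks the next agent at run time.
def route_with_llm(state: dict, options: list[str]) -> str:
    return "done" if state["notes"] else options[0]

def handle_claim_dynamic(claim: dict, agents: dict, max_steps: int = 8) -> str:
    state = {"claim": claim, "notes": []}
    for _ in range(max_steps):                               # always bound the loop
        choice = route_with_llm(state, options=list(agents))
        if choice == "done":
            break
        state["notes"].append(agents[choice](state))         # the chosen agent acts
    return "; ".join(state["notes"]) or "no action taken"

agents = {"policy_checker": lambda s: "policy verified", "drafter": lambda s: "decision drafted"}
print(handle_claim_predefined({"text": "car damage"}))
print(handle_claim_dynamic({"text": "car damage"}, agents))
```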
The Agent Landscape Across Azure
Native Agents vs. Copilot Studio Agents vs. Foundry Agents vs. M365 Agents

The Azure environment offers a rich ecosystem for building enterprise agents and agentic solutions.
AI agents are intelligent, goal-driven services/applications built to manage and execute tasks, services, and operations. Compared to traditional software, AI-powered agents unlock a modern way to build applications — delivering stronger, more flexible solutions with far less coding and complexity. Their real power emerges when multiple specialized agents work together as a team — tackling larger challenges, adapting in real time, lightening the load for employees, freeing them up to focus on higher-value work, and driving better outcomes across services, operations, and communities.
Modern literature defines an agent as a software entity that can perceive its environment through sensors and act upon that environment through actuators. In the context of AI, agents are often equipped with reasoning capabilities, allowing them to make decisions based on their perceptions and goals.
Agents are autonomous entities designed to solve complex tasks for humans.
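In code, that textbook definition is just an interface: sense, decide, act. The class below is a generic illustration and is not tied to any particular agent framework.

```python
# The textbook agent contract: perceive through sensors, act through actuators.
from abc import ABC, abstractmethod
from typing import Any

class Agent(ABC):
    @abstractmethod
    def perceive(self, observation: Any) -> None:
        """Update internal state from what the sensors report."""

    @abstractmethod
    def decide(self) -> Any:
        """Reason over state and goals to choose the next action."""

    @abstractmethod
    def act(self, action: Any) -> None:
        """Carry out the chosen action through the available actuators."""

def run(agent: Agent, observations: list[Any]) -> None:
    # A simple sense-think-act cycle over a stream of observations.
    for observation in observations:
        agent.perceive(observation)
        agent.act(agent.decide())
```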
An **intelligent software service** that reasons like a **human**.
It’s not about software.
It's not about APIs.
It’s not about Language Models.
It's about something far more human.
- Human-like judgement
- Human-like decision-making
- Human-like conversations and actions
A machine that mirrors the way we think, but at scale and with precision.
Supplement human capabilities today -- perhaps replace them in the future.
At their core, AI agents are intelligent software services that combine the reasoning abilities of AI models with contextual data and the ability to automate tasks based on user input and the environmental factors they perceive.
Each agent is powered by an LLM that serves as its cognitive engine.
These state-of-the-art LLMs are pre-trained with extensive breadth of knowledge (training data) and demonstrate human-like reasoning, making them versatile problem-solvers. Additionally, agents can enhance their capabilities by calling external tools (via APIs), running code, and retrieving documents, enabling them to make autonomous decisions.
The ability to make decisions on their own is the cornerstone of a multi-agent system.
Agents == software components that can reason, plan, and interact with their environment.
The brain, or core computational engine, is the LLM.
### Agent
### Agency
### Multi-Agent
### AI Workflow
### Agentic
A proactive system that initiates, plans, and adapts, rather than merely reacting to fixed prompts.
### Emergent behaviors:
emergent behavior refers to complex, coordinated, or intelligent patterns of behavior that arise from the interactions between multiple agents in a system—without being explicitly programmed into any individual agent.
For multi-agent systems, this means that while each agent follows relatively simple rules or logic on its own, the overall system can produce surprisingly sophisticated or adaptive outcomes when agents interact. These behaviors emerge from the group dynamics, not from a central controller or master plan.
These emergent properties are often seen as the source of multi-agent systems’ greatest strength: sophisticated problem-solving beyond what any individual agent could achieve.
# Chapter 1: Welcome to the Future of AI
### From Conversation to Autonomy
### 1. Introduction
- Hook: A compelling vision of AI agents working autonomously to solve complex problems
- Brief overview of the evolution from conversational AI to agentic systems
- Chapter roadmap and key takeaways
### 2. The Evolution of AI Interfaces
- Conversational beginnings: ChatGPT and early LLM interfaces
- Limitations of the conversational paradigm
- The emergence of autonomous capabilities
### 3. Defining AI Agents
- What makes an AI an “agent”
- Key characteristics: autonomy, goal-directedness, persistence, environment interaction
- The spectrum from assistants to fully autonomous agents
- Single agent capabilities and limitations
### 4. The Rise of Multi-Agent Systems
- Defining multi-agent architectures
- Emergent capabilities through collaboration
- Communication and coordination frameworks
- Advantages over single agent approaches
### 5. The Current Landscape
- Examples of early agentic systems in production
- Key players and developments
- Real-world use cases that demonstrate the transition
### 6. Business and Technical Implications
- How agentic systems change the AI value proposition
- Technical requirements for building and deploying agents
- New considerations for business leaders
### 7. Looking Ahead
- Near-term developments on the horizon
- Potential paradigm shifts in how we work with AI
- Critical capabilities to watch for
### 8. Chapter Summary
- Recap of the assistant-to-agent evolution
- Preview of upcoming chapters
- Call to action for readers to envision applications in their domain
### Key Terms
- Large Language Models (LLMs)
- Agents
- Agentic systems
- Autonomy
- Agency
Declarative Agent:
Define what you want—goals, constraints, preferences—and let the agent plan and execute.
Imperative Agent:
The agent is given a predefined workflow and executes it as specified. (A short sketch contrasting the two follows the discussion below.)
The moment an “agent” becomes fully pre-scripted or hardcoded, with no autonomy or reasoning, many would argue: it’s not really an agent anymore. It’s just workflow automation or a task runner.
Here’s the nuance:
• Imperative agents do have some agent-like structure—they may:
• Monitor context or state
• Have a loop of perception, decision, action
• Execute commands using APIs or tools
But if the sequence is rigid and there’s no decision-making or goal inference, then they’re “agents” in architecture, but not in spirit.
It depends on your definition:
• Broad view (practical engineering): Yes, it’s still an agent if it wraps tools, monitors, and acts on triggers—even if steps are fixed.
• Purist view (AI & autonomy): No, unless it can reason, adapt, or choose among alternatives, it’s just an orchestrator.
Some call this kind of imperative setup an “agent shell” or agent scaffold, waiting to be made truly intelligent by adding reasoning or declarative behaviors.
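To make the contrast concrete, here is a deliberately small sketch. Every helper is a stub, and `plan_with_llm` stands in for an LLM planning call; the point is only that the imperative version hard-codes the steps while the declarative version hands the agent a goal plus constraints and lets it plan.

```python
# Declarative vs. imperative, sketched side by side. All helpers are illustrative stubs.

def create_account(e): return f"account for {e['name']}"
def assign_laptop(e): return f"laptop for {e['name']}"
def schedule_orientation(e): return f"orientation for {e['name']}"

def plan_with_llm(goal, constraints, context):
    # Stand-in for an LLM planning call; a real agent would generate this plan itself.
    return [create_account, assign_laptop, schedule_orientation]

def onboard_imperative(employee: dict) -> list[str]:
    # Imperative: the workflow is fully pre-scripted; the "agent" just runs it.
    return [create_account(employee), assign_laptop(employee), schedule_orientation(employee)]

def onboard_declarative(employee: dict) -> list[str]:
    # Declarative: state the goal and constraints; the agent plans, then executes.
    goal = "Onboard the new employee so they can work on day one."
    constraints = ["stay under the hardware budget", "orientation within the first week"]
    return [step(employee) for step in plan_with_llm(goal, constraints, employee)]

print(onboard_declarative({"name": "Ada"}))
```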
### AI-Time
We’re living in AI-time (describe time frames in the world of AI).
Not so long ago (months in GenAI time), the focus was on intelligent conversations and content generation: ask an LLM a question and it answers. But in just a short time, agents have taken the spotlight. Indisputably, 2025 has become the year of the agent.
As LLMs introduce a level of reasoning into previously workflow-only systems, we see the rise of a “Reasoning Architecture” that sits on top of the traditional underlying “Technical Architecture.”
AI agents are transforming AI-based apps from isolated chatbots into fully functional, autonomous, and deeply integrated AI workers.
The next big thing? Gartner believes AI agents are the future. OpenAI, Nvidia and Microsoft are betting on it — as are companies such as Salesforce, which have so far been rather inconspicuous in the field of AI.
So, what is really behind the trend? The key to understanding agents is agency.
Unlike traditional generative AI systems, agents don’t just respond to user input. Instead, they can process a complex problem such as an insurance claim from start to finish. This includes understanding the text, images and PDFs of the claim, retrieving information from the customer database, comparing the case with the insurance terms and conditions, asking the customer questions and waiting for their response — even if it takes days — without losing context.
In contrast to existing AI systems and all the copilots out there that help employees to do their job, AI agents are, in fact, fully-fledged employees themselves, offering immense potential for process automation.
Given time, agentic AI has the potential to disrupt almost every business process prevalent in an enterprise today.
The key characteristics of agentic AI systems are their autonomy and reasoning prowess, which allow them to decompose complex tasks into smaller executable tasks and then orchestrate their execution in a way that can monitor, reflect, and adapt or self-correct as needed.
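A skeletal version of that decompose, orchestrate, monitor, reflect, and self-correct loop, with stub functions standing in for the LLM calls:

```python
# Decompose-orchestrate-reflect loop, as an illustrative sketch.
# `decompose`, `execute`, and `critique` stand in for LLM or tool calls; the control flow is the point.

def decompose(task: str) -> list[str]:
    return [f"{task}: step {i}" for i in range(1, 4)]    # stand-in for LLM task decomposition

def execute(subtask: str) -> str:
    return f"result of {subtask}"                        # stand-in for tool or agent execution

def critique(subtask: str, result: str) -> bool:
    return True                                          # stand-in for LLM self-evaluation

def run_agentic_task(task: str, max_retries: int = 2) -> list[str]:
    results = []
    for subtask in decompose(task):                      # decompose into executable steps
        for _ in range(max_retries + 1):
            result = execute(subtask)                    # orchestrate execution
            if critique(subtask, result):                # monitor and reflect
                results.append(result)
                break                                    # accepted; move to the next subtask
            # otherwise self-correct: retry (a real agent would also revise its approach)
    return results

print(run_agentic_task("process insurance claim"))
```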
### The Reasoning Architecture
Unlike traditional product design, AI-driven systems must explicitly manage a new conceptual layer: the Reasoning Architecture. This layer captures expert judgment, domain-specific logic, and nuanced decision-making criteria, especially critical in subjective scenarios and edge cases where ambiguity reigns.
Reasoning Architecture is distinctly different from Technical Architecture, which remains responsible for data storage, retrieval systems (like vector databases), infrastructure, APIs, and overall system performance. While engineers maintain the Technical Architecture, domain experts — often non-technical — directly shape the Reasoning Architecture. This is because we now have an element in AI systems that directly reads and reasons across natural language. These LLM systems are designed to mimic how human experts in specific workflows think, automating the way the system reasons about unstructured information.
The reasoning and planning infrastructure is essentially advanced chain-of-thought reasoning that lets specific human experts, in a no-code way, easily load and edit how the system reasons about information in different situations.
Reasoning Architecture is the mapping of how the system reasons through specific information and arrives at specific types of conclusions, categorizations, or decisions based on the information at hand. It acts almost like a checklist or an SOP for how to think about specific information.
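One lightweight way to picture this: the SOP lives as natural-language text that a domain expert can edit without code, and the system passes it to the model with each request. The claim-triage checklist below is invented for illustration, and `llm_complete` is any chat-completion callable.

```python
# A reasoning architecture expressed as an editable, natural-language SOP.
# The checklist content is invented; experts edit the text, engineers own the plumbing around it.

TRIAGE_SOP = """
When reviewing an insurance claim:
1. If the claim text mentions injury, classify as 'urgent'.
2. If the claimed amount exceeds the policy limit, classify as 'escalate'.
3. If documents are missing, classify as 'incomplete' and list what is missing.
4. Otherwise classify as 'standard'.
Explain which rule you applied.
"""

def triage_claim(claim_text: str, llm_complete) -> str:
    # Technical architecture: plumbing the prompt to the model.
    # Reasoning architecture: the SOP above, owned and edited by domain experts.
    messages = [
        {"role": "system", "content": TRIAGE_SOP},
        {"role": "user", "content": claim_text},
    ]
    return llm_complete(messages)   # llm_complete is whatever chat-completion callable you use
```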
### Tools
A critical component of agent architecture is the set of external tools the LLM can access in order to complete tasks. Tool calling is decent enough to get started, but for real-world development, I think MCP (Model Context Protocol) is the concept everyone will be adopting soon.
Tools: wrappers for functions (a minimal sketch follows these notes)
Multi-step reasoning
LC AI Orchestrator
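Pulling the tool notes above together: in most frameworks a tool is little more than a function plus a description the model can read. The sketch below is framework-agnostic; the `Tool` dataclass and schema shape are illustrative, loosely modeled on common tool-calling conventions rather than any particular library.

```python
# Tools as thin wrappers around functions: a callable plus a schema the LLM can read.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str      # what the model reads when deciding whether to call the tool
    parameters: dict      # JSON-schema-style description of the inputs
    func: Callable        # the actual function that runs

def get_invoice_total(customer_id: str) -> float:
    return 142.50         # placeholder; a real tool would call a billing API

invoice_tool = Tool(
    name="get_invoice_total",
    description="Return the current outstanding invoice total for a customer.",
    parameters={"customer_id": {"type": "string"}},
    func=get_invoice_total,
)

# At run time the orchestrator matches the model's tool request to the wrapper and invokes it.
def call_tool(tools: dict[str, Tool], name: str, args: dict):
    return tools[name].func(**args)

print(call_tool({invoice_tool.name: invoice_tool}, "get_invoice_total", {"customer_id": "42"}))
```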
When do we move from a single agent to multi-agent? (A minimal supervisor/worker/reviewer sketch follows these notes.)
Worker role
Reviewer role
Supervisor agent
Subordinate agents
Termination strategy — word ‘approved’
Router agent - extracts intent
Use the log streaming in ACA to watch multi-agent running
Add ability to upload image (bill) to the ‘pay your bill’ agent
Use only for flexible workflows: trading latency, cost, and imprecision
LangGraph: create agents with nodes/edges
Framework vs. low-level API
Always add a max-iteration condition
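A minimal sketch that pulls several of these notes together: a supervisor loops over a worker and a reviewer, terminates on the word 'approved', and always enforces a max-iteration condition. Both roles are stubs standing in for LLM-backed agents; only the control flow matters here.

```python
# Supervisor loop over a worker and a reviewer, terminating on the word 'approved'.
# Both roles are stubs standing in for LLM-backed agents.

def worker(task: str, feedback: str | None) -> str:
    draft = f"draft for '{task}'"
    return draft + (" (revised)" if feedback else "")       # stand-in for an LLM worker agent

def reviewer(draft: str) -> str:
    return "approved" if "revised" in draft else "needs more detail"   # stand-in reviewer agent

def supervisor(task: str, max_iterations: int = 5) -> str:
    draft, feedback = "", None
    for _ in range(max_iterations):          # always add a max-iteration condition
        draft = worker(task, feedback)
        verdict = reviewer(draft)
        if "approved" in verdict.lower():    # termination strategy: the word 'approved'
            return draft
        feedback = verdict                   # route the reviewer's feedback back to the worker
    return draft                             # give up after max_iterations

print(supervisor("write the claims summary"))
```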
### Tools
Language models provide