Agents
Explore how agents operate, how they use tools and memory, and how to structure them effectively in Peargent.
In simple terms, an Agent calls the Model to generate responses according to its defined behavior.
Agents are the core units of work in Peargent, while Tools provide the capabilities that help agents perform actions and tackle complex tasks.
Agents can operate individually for simple tasks, or they can be combined into a Pool of agents to handle more complex, multi-step workflows.
Creating an Agent
To create an agent, use the create_agent function from the peargent module. At minimum, you must define the agent's name, persona, and the model to use; a description is optional for standalone agents but required when the agent is part of a Pool.
Here is a simple example:
```python
from peargent import create_agent
from peargent.models import openai

code_reviewer = create_agent(
    name="Code Reviewer",
    description="Reviews code for issues and improvements",
    persona=(
        "You are a highly skilled senior software engineer and code reviewer. "
        "Your job is to analyze code for correctness, readability, maintainability, and performance. "
        "Identify bugs, edge cases, and bad practices. Suggest improvements that follow modern Python "
        "standards and best engineering principles. Provide clear explanations and, when appropriate, "
        "offer improved code snippets. Always be concise, accurate, and constructive."
    ),
    model=openai("gpt-4")
)
```
Call agent.run(prompt) to perform an inference using the agent's persona as the system prompt and your input as the user message.
```python
response = code_reviewer.run("Review this Python function for improvements:\n\ndef add(a, b): return a+b")
print(response)
# The function is correct but could be optimized, here is the optimized version...
```
When running an agent individually, the description field is optional. However, it becomes mandatory when the agent is part of a Pool.
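Since description is optional for a standalone agent, a minimal configuration needs only a name, persona, and model:
```python
from peargent import create_agent
from peargent.models import openai

# Minimal standalone agent: only name, persona, and model are required.
summarizer = create_agent(
    name="Summarizer",
    persona="You are a concise assistant that summarizes text in two sentences.",
    model=openai("gpt-4")
)

print(summarizer.run("Summarize: Peargent lets you build agents that call models and tools."))
```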
- Refer to Tools to learn how to use tools with agents.
- Refer to History to learn how to set up conversation memory for agents.
- Refer to Pool to learn how to create a pool of agents.
How does an agent work?
Start Execution (agent.run())
When you call agent.run(...), the agent prepares for a new interaction: it loads any previous conversation History (if enabled), begins tracing (if enabled), and registers the user's new input.
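For example, tracing can be switched on globally before the run; agents left at the default tracing=None then inherit it. The import path of enable_tracing shown below is an assumption:
```python
# Assumption: enable_tracing is importable from the top-level peargent package.
from peargent import enable_tracing

enable_tracing()  # agents created with tracing=None now inherit the global tracer

# The run itself is unchanged; tracing is picked up automatically.
response = code_reviewer.run("Review this snippet:\n\ndef div(a, b): return a / b")
```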
Build the Prompt
The agent constructs the full prompt by combining its persona, Tools, prior conversation context, and optional output schema. This prompt is then sent to the configured Model.
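Conceptually, the assembled prompt resembles a chat-style message list like the sketch below; this is only an illustration of how the pieces fit together, not Peargent's internal representation:
```python
# Illustrative only: how the pieces of the prompt combine conceptually.
persona = "You are a highly skilled senior software engineer and code reviewer."
history_messages = [
    {"role": "user", "content": "Earlier question..."},
    {"role": "assistant", "content": "Earlier answer..."},
]
user_input = "Review this Python function:\n\ndef add(a, b): return a+b"

messages = [
    {"role": "system", "content": persona},   # persona becomes the system prompt
    *history_messages,                        # prior turns, if History is enabled
    {"role": "user", "content": user_input},  # the new input passed to run()
]
# Tool definitions and the optional output schema are supplied to the model
# alongside these messages.
```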
Model Generates a Response
The model returns a response based on the prompt. The agent records this output and checks whether the model is requesting tool calls.
Execute Tools (If Requested)
If the response includes tool calls, the agent runs those tools (in parallel if multiple), collects their outputs, and then asks the model again using an updated prompt. This cycle continues until no more tool actions are required.
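The cycle can be pictured as a simplified loop like the sketch below; the helper functions are hypothetical stand-ins for Peargent's internals, and multiple tool calls are shown sequentially even though Peargent runs them in parallel:
```python
# Simplified, hypothetical sketch of the tool-calling cycle; call_model and
# execute_tool are illustrative stand-ins, not Peargent APIs.
def run_agent_loop(call_model, execute_tool, messages, max_steps=5):
    for _ in range(max_steps):                      # stop condition, e.g. limit_steps(5)
        response = call_model(messages)             # model generates a response
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:                          # no more tool actions requested
            return response["content"]
        for call in tool_calls:                     # run requested tools (Peargent: in parallel)
            result = execute_tool(call)
            messages.append({"role": "tool", "content": result})
    return None  # stop condition reached without a final answer
```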
Finalize the Result
The agent checks whether it should stop (stop conditions met or max iterations reached). If an output schema was provided, the response is validated against it. Finally, the conversation is synced to History (if enabled), tracing is ended, and the final response is returned.
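When an output_schema is set, the validation-and-retry part of this step behaves roughly like the sketch below; the function and its retry mechanics are illustrative assumptions built around Pydantic validation, not Peargent's actual code:
```python
from pydantic import BaseModel, ValidationError

def finalize(generate, schema: type[BaseModel] | None, max_retries: int = 3):
    """Illustrative stand-in for the finalization step, not Peargent internals."""
    raw = generate()                                # the model's latest response
    if schema is None:
        return raw                                  # no schema: return the text as-is
    for _ in range(max_retries):
        try:
            return schema.model_validate_json(raw)  # validated structured output
        except ValidationError:
            raw = generate()                        # ask the model again on failure
    raise ValueError("Output did not match the schema within max_retries attempts")
```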
Parameters
| Parameter | Type | Description | Required |
|---|---|---|---|
| name | str | The name of the agent. | Yes |
| description | str | A brief description of the agent's purpose. Required when using in a Pool. | No* |
| persona | str | The system prompt defining the agent's personality and instructions. | Yes |
| model | Model | The LLM model instance (e.g., openai("gpt-4")). | Yes |
| tools | list[Tool] | A list of Tools the agent can access. | No |
| stop | StopCondition | Condition that determines when the agent should stop iterating (default: limit_steps(5)). | No |
| history | HistoryConfig | Configuration for conversation History. | No |
| tracing | bool \| None | Enable/disable tracing. None (default) inherits from the global tracer if enable_tracing() was called; True explicitly enables; False opts out. | No |
| output_schema | Type[BaseModel] | Pydantic model for structured output validation. | No |
| max_retries | int | Maximum retries for output_schema validation (default: 3). Only used when output_schema is provided. | No |
* Required when using the agent in a Pool.
To learn more about stop, tracing, output_schema, and max_retries, refer to Advanced Features.
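As a brief illustration of structured output, the sketch below pairs output_schema with a Pydantic model; whether run() returns the validated object directly is an assumption, so treat this as a sketch rather than a reference:
```python
from pydantic import BaseModel

from peargent import create_agent
from peargent.models import openai

class ReviewResult(BaseModel):
    summary: str
    issues: list[str]

structured_reviewer = create_agent(
    name="Structured Reviewer",
    persona="You review Python code and report a short summary plus a list of issues.",
    model=openai("gpt-4"),
    output_schema=ReviewResult,  # validate the final response against this schema
    max_retries=3,               # re-ask the model up to three times if validation fails
)

result = structured_reviewer.run("Review:\n\ndef add(a, b): return a+b")
print(result)  # assumed to conform to ReviewResult
```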