Agent Definitions¶
Agent¶
Bases: AgentMeta
The Agent class is a concrete implementation of an AI agent with tool-calling capabilities, inheriting from AgentMeta. It integrates a language model, tools, memory, and flow management to process queries, execute tools, and maintain conversational context.
Methods:

| Name | Description |
|---|---|
| ainvoke | Asynchronous counterpart of invoke. |
| invoke | Invoke the agent with continuous tool-calling and return the final answer. |
| stream | Stream the agent response token-by-token with continuous tool-calling. |
| save_memory | Save the tool message to the memory. |
| register_tools | Register a list of tools. |
ainvoke async¶
ainvoke(query: str, is_save_memory: bool = False, user_id: str = 'unknown_user', max_iterations: int = 10, is_tool_formatted: bool = True, max_history: int = None, **kwargs) -> Awaitable[Any]
invoke¶
invoke(query: str, is_save_memory: bool = False, user_id: str = 'unknown_user', max_iterations: int = 10, is_tool_formatted: bool = True, max_history: int = None, **kwargs) -> Any
stream¶
stream(query: str, is_save_memory: bool = False, user_id: str = 'unknown_user', max_iterations: int = 10, is_tool_formatted: bool = True, max_history: int = None, **kwargs) -> Generator[Any, None, None]
Stream the agent response token-by-token with continuous tool-calling capability. Follows the same 3-step loop as `invoke` but yields `AIMessageChunk` objects so the caller can push tokens to a live connection (WebSocket, SSE, etc.) as soon as they arrive.
Workflow¶

- Step 1 – Call the LLM with structured output (`AgentResponse`) to determine whether a tool is needed. If no tool is needed: stream the direct answer token-by-token and return.
- Step 2 – Execute the tool synchronously (same as `invoke`).
- Step 3 – After all tool iterations, stream the final LLM summary token-by-token.
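The 3-step loop above can be sketched as plain Python. Here `call_llm`, `needs_tool`, and `run_tool` are hypothetical stand-ins for the model call, the structured `AgentResponse` check, and tool execution; none of them are part of the vinagent API:

```python
def agent_loop(query, call_llm, needs_tool, run_tool, max_iterations=10):
    """Sketch of the 3-step tool-calling loop (stand-in callables, not vinagent code)."""
    messages = [query]
    for _ in range(max_iterations):
        # Step 1: ask the LLM whether a tool is needed.
        response = call_llm(messages)
        if not needs_tool(response):
            return response  # direct answer, no tool required
        # Step 2: execute the requested tool and feed the result back.
        messages.append(run_tool(response))
    # Step 3: after all tool iterations, produce a final summary.
    return call_llm(messages + ["summarize"])
```

The `max_iterations` bound is what keeps a misbehaving tool chain from looping forever; once it is exhausted the loop falls through to the summary step.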
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| query | str | User query. | required |
| is_save_memory | bool | Save conversation to long-term memory. | False |
| user_id | str | Identifier for the current user. | 'unknown_user' |
| max_iterations | int | Maximum tool-calling iterations. | 10 |
| is_tool_formatted | bool | If True, streams a final LLM summary after tool execution; if False, yields the raw ToolMessage. | True |
| max_history | int | Number of history messages to include. | None |
| **kwargs | | Forwarded to the compiled-graph path when applicable. | {} |
Yields:

| Type | Description |
|---|---|
| Any | AIMessageChunk \| AIMessage \| ToolMessage: streamed LLM tokens or the final tool/LLM message. |
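Consuming the generator looks the same regardless of transport: read each chunk's text as it arrives, forward it, and optionally accumulate the full reply. Since vinagent may not be installed here, the sketch below uses a stand-in `Chunk` class and a fake stream; the only assumption carried over from the docs is that streamed chunks expose their text via a `.content` attribute (the LangChain `AIMessageChunk` convention):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """Stand-in for AIMessageChunk: only the .content field is assumed."""
    content: str

def fake_stream(query):
    """Stand-in for Agent.stream(); yields tokens as they 'arrive'."""
    for token in ["The", " answer", " is", " 42."]:
        yield Chunk(token)

def collect(chunks):
    """Accumulate the full reply; in a real app you would also push
    each chunk.content over a WebSocket/SSE connection here."""
    return "".join(c.content for c in chunks)

print(collect(fake_stream("What is the answer?")))
```

With `is_tool_formatted=False` the last item yielded would be the raw `ToolMessage` rather than a streamed summary, so a real consumer should be prepared for non-chunk items as well.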
save_memory¶
Save the tool message to the memory.
register_tools¶
Register a list of tools.
Automatically detects whether each entry is an AgentSkill directory (one containing a SKILL.md file) and routes it to `vinagent.register.tool.ToolManager.register_agentskill_tool`. All other paths are treated as regular Python modules and forwarded to `vinagent.register.tool.ToolManager.register_module_tool`.
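The detection rule described above amounts to a simple path check. The following is an illustrative re-implementation of the routing decision only, not the actual ToolManager code:

```python
from pathlib import Path
import tempfile

def classify_tool_path(path):
    """Mirror of the routing rule: a directory containing SKILL.md is an
    AgentSkill; anything else is treated as a regular Python module."""
    p = Path(path)
    if p.is_dir() and (p / "SKILL.md").is_file():
        return "register_agentskill_tool"
    return "register_module_tool"

# Demo: build a skill directory on the fly, then classify both kinds of entry.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "SKILL.md").write_text("# my skill")
    print(classify_tool_path(d))                   # AgentSkill route
    print(classify_tool_path("my_tools/math.py"))  # module route
```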