# Python Actions Reference

Auto-generated summary | Last updated: 2026-01-25 | Git: 8eb000e

Total modules: 51 | Total actions: 276
This document lists all built-in actions available in the Python implementation of The Edge Agent.
For complete action documentation including parameters and examples, see the YAML Reference.
## Quick Reference

| Module | Actions | Description |
|---|---|---|
| `cache.*` | 4 | Caching/memoization with LTM backend |
| `core.*` | 7 | HTTP, file operations, notifications, checkpoints |
| `data.*` | 16 | JSON/CSV parsing, transformation, validation |
| `llm.*` | 4 | LLM calls, streaming, retry, and tool calling |
| `ltm.*` | 4 | Long-term persistent key-value storage with FTS5 |
| `memory.*` | 3 | Key-value storage with TTL |
| `a2a.*` | 10 | Inter-agent communication (send, receive, broadcast, delegate) |
| `agent.*` | 5 | Multi-agent collaboration (dispatch, parallel, sequential) |
| `firestore.*` | 5 | Firestore CRUD operations |
| `graph.*` | 25 | Graph database with Datalog and HNSW vectors |
| `neo4j_gds.*` | 18 | Neo4j GDS graph analytics algorithms |
| `neo4j_trigger.*` | 11 | Neo4j APOC trigger management |
| `error.*` | 7 | Error handling actions (is_retryable, clear, retry) |
| `planning.*` | 4 | Planning/decomposition primitives |
| `ratelimit.*` | 1 | Rate limiting with shared named limiters |
| `reasoning.*` | 7 | Reasoning techniques (CoT, ReAct, self-correct, decompose) |
| `reflection.*` | 3 | Self-reflection loop primitive |
| `retry.*` | 1 | General-purpose retry loop with correction |
| `validation.*` | 2 | Generic extraction validation with Prolog/probes |
| `academic.*` | 3 | Academic research via PubMed, ArXiv, CrossRef APIs |
| `auth.*` | 2 | Authentication verification (verify, get_user) |
| `bmad.*` | 2 | BMad story task extraction |
| `catalog.*` | 10 | Data catalog for tables, files, and snapshots |
| `cloud_memory.*` | 5 | Cloud storage with metadata management |
| `code.*` | 2 | Sandboxed Python code execution |
| `context.*` | 1 | Context assembly with relevance ranking |
| `data_tabular.*` | 6 | Tabular data operations |
| `dspy.*` | 7 | DSPy prompt optimization (cot, react, compile) |
| `git.*` | 6 | Git worktree actions (execution modes) |
| `github.*` | 4 | GitHub Issues integration |
| `http_response.*` | 1 | HTTP response for early termination |
| `input_validation.*` | 2 | Input schema validation |
| `llamaextract.*` | 8 | Document extraction via LlamaExtract |
| `llamaindex.*` | 6 | LlamaIndex RAG bridge (query, router, subquestion) |
| `llm_local.*` | 6 | Local LLM inference via llama-cpp-python |
| `markdown.*` | 2 | Markdown parsing with sections, variables, checklists |
| `mem0.*` | 7 | Mem0 universal memory integration |
| `observability.*` | 7 | Tracing spans and event logging |
| `rag.*` | 4 | Embedding creation, vector storage, semantic search |
| `schema.*` | 1 | Schema merge and manipulation |
| `search.*` | 3 | SQL and full-text search via QueryEngine |
| `secrets.*` | 2 | Secrets access via secrets.get and secrets.has |
| `semtools.*` | 1 | Semantic search using SemTools CLI |
| `session.*` | 7 | Session lifecycle with archive-based expiration |
| `session_persistence.*` | 4 | Session persistence (load, save, delete, exists) |
| `storage.*` | 7 | Cloud storage operations via fsspec (S3, GCS, Azure) |
| `text.*` | 1 | Text processing including citation insertion |
| `textgrad.*` | 8 | TextGrad learning actions |
| `tools.*` | 5 | Bridges to CrewAI, MCP, and LangChain tools |
| `vector.*` | 8 | Vector similarity search via VectorIndex |
| `web.*` | 4 | Web scraping, crawling, search via Firecrawl/Perplexity |
## Core Actions (P0)
Core actions provide essential functionality for most workflows.
### cache.*

Module: `cache_actions.py`

| Action | Description |
|---|---|
| | Wrap any action with automatic caching. |
| | Retrieve cached value by key without executing any action. |
| | Invalidate (delete) cached entries by exact key or pattern. |
| | Compute hash of file content from any URI. |
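A minimal sketch of wrapping another action with the cache. The action and parameter names below (`cache.call`, `action`, `key`, `ttl`) are illustrative assumptions, not confirmed identifiers; only the `name`/`uses`/`with`/`output` step shape is taken from this document's other examples:

```yaml
# Hypothetical example — action and parameter names are assumptions.
- name: cached_fetch
  uses: cache.call          # assumed name for the caching wrapper
  with:
    action: core.http_get   # wrapped action (assumed identifier)
    key: "fetch:{{ state.url }}"
    ttl: 3600               # assumed: seconds to keep the cached result
  output: page
```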
### core.*

Module: `core_actions.py`

| Action | Description |
|---|---|
| | Make HTTP GET request. |
| | Make HTTP POST request. |
| | Write content to a file (local or remote via fsspec). |
| | Read content from a file (local or remote via fsspec). |
| | Send a notification. |
| | Save checkpoint to specified path. |
| | Load checkpoint from specified path. |
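A hedged sketch of chaining an HTTP fetch into a file write. The action names (`core.http_get`, `core.file_write`) and parameter keys are assumptions inferred from the descriptions above, not documented identifiers:

```yaml
# Hypothetical example — core action and parameter names are assumptions.
- name: fetch_data
  uses: core.http_get
  with:
    url: "https://example.com/data.json"
  output: response

- name: save_data
  uses: core.file_write
  with:
    path: ./data.json          # local path; fsspec URLs may also work
    content: "{{ state.response }}"
```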
### data.*

Module: `data_actions.py`

| Action | Description |
|---|---|
| | Parse a JSON string into a Python object. |
| | Parse a JSON string into a Python object. |
| | Transform data using JMESPath or JSONPath expressions. |
| | Transform data using JMESPath or JSONPath expressions. |
| | Convert a Python object to a JSON string. |
| | Convert a Python object to a JSON string. |
| | Parse CSV data from text or file. |
| | Parse CSV data from text or file. |
| | Convert a list of dicts or list of lists to a CSV string. |
| | Convert a list of dicts or list of lists to a CSV string. |
| | Validate data against a JSON Schema. |
| | Validate data against a JSON Schema. |
| | Merge multiple dictionaries/objects. |
| | Merge multiple dictionaries/objects. |
| | Filter list items using predicate expressions. |
| | Filter list items using predicate expressions. |
### llm.*

Module: `llm_actions.py`

| Action | Description |
|---|---|
| `llm.call` | Call a language model (supports OpenAI, Azure OpenAI, Ollama, LiteLLM, and Shell CLI). |
| | Stream LLM responses token-by-token. |
| | DEPRECATED: Use llm.call with max_retries parameter instead. |
| | LLM call with tool/function calling support. |
#### LLM Provider Configuration

The LLM actions support multiple providers: OpenAI, Azure OpenAI, Ollama, LiteLLM, and Shell CLI.

Provider Detection Priority:

1. Explicit `provider` parameter (highest priority)
2. Environment variable detection:
   - `OLLAMA_API_BASE` → Ollama
   - `AZURE_OPENAI_API_KEY` + `AZURE_OPENAI_ENDPOINT` → Azure OpenAI
3. Default → OpenAI
Environment Variables:

| Variable | Provider | Description |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI | OpenAI API key |
| `AZURE_OPENAI_API_KEY` | Azure | Azure OpenAI API key |
| `AZURE_OPENAI_ENDPOINT` | Azure | Azure endpoint URL |
| | Azure | Deployment name (optional) |
| `OLLAMA_API_BASE` | Ollama | Ollama API URL (default: `http://localhost:11434`) |
Ollama Example:

```yaml
- name: ask_local_llm
  uses: llm.call
  with:
    provider: ollama
    model: llama3.2
    api_base: http://localhost:11434/v1
    messages:
      - role: user
        content: "{{ state.question }}"
```
LiteLLM Example:

```yaml
- name: ask_claude
  uses: llm.call
  with:
    provider: litellm
    model: anthropic/claude-3-opus-20240229
    messages:
      - role: user
        content: "{{ state.question }}"
```
See YAML Reference for complete provider documentation.
### ltm.*

Module: `ltm_actions.py`

| Action | Description |
|---|---|
| | Store a key-value pair persistently with optional metadata. |
| | Retrieve a value from long-term memory by key. |
| | Delete a value from long-term memory by key. |
| | Search across long-term memory using FTS5 and/or metadata filtering. |
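A hedged sketch of a store-then-search flow. `ltm.store` and `ltm.search` and their parameter keys are assumed names based on the descriptions above, not confirmed identifiers:

```yaml
# Hypothetical example — ltm action and parameter names are assumptions.
- name: remember_fact
  uses: ltm.store
  with:
    key: "user:42:preference"
    value: "{{ state.preference }}"
    metadata:
      topic: preferences      # assumed: optional metadata for filtering

- name: recall_facts
  uses: ltm.search
  with:
    query: "preference"       # FTS5 full-text query
  output: matches
```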
### memory.*

Module: `memory_actions.py`

| Action | Description |
|---|---|
| | Store a key-value pair in memory with optional TTL. |
| | Retrieve a value from memory by key. |
| | Summarize conversation history using LLM to fit token windows. |
## Integration Actions (P1)
Integration actions connect TEA with external systems and enable multi-agent workflows.
### a2a.*

Module: `a2a_actions.py`

| Action | Description |
|---|---|
| | Send a message to a specific agent. |
| | Receive messages from agents. |
| | Broadcast message to all agents in namespace. |
| | Delegate a task to another agent and wait for response. |
| | Get a value from shared state. |
| | Set a value in shared state. |
| | Discover available agents in namespace. |
| | Register current agent for discovery and broadcasts. |
| | Unregister current agent. |
| | Send heartbeat to update last_seen timestamp. |
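A hedged sketch of delegating work to another agent. The action name `a2a.delegate` is constructed from the verb list in the Quick Reference (send, receive, broadcast, delegate); it and all parameter keys are assumptions:

```yaml
# Hypothetical example — action and parameter names are assumptions.
- name: hand_off_research
  uses: a2a.delegate
  with:
    agent: researcher                     # target agent (illustrative)
    task: "Summarize {{ state.topic }}"
    timeout: 120                          # assumed: seconds to wait
  output: research_result
```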
### agent.*

Module: `agent_actions.py`

| Action | Description |
|---|---|
| | Dispatch a task to a single named agent. |
| | Dispatch same task to multiple agents in parallel. |
| | Chain multiple agents where output feeds into next agent’s input. |
| | Coordinator pattern with leader agent dispatching to workers. |
| | Delegate to CrewAI for complex multi-agent workflows. |
### firestore.*

Module: `firestore_actions.py`
### graph.*

Module: `graph_actions.py`

| Action | Description |
|---|---|
| | Store an entity (node) in the graph database. |
| | Store a relation (edge) between two entities. |
| | Execute a Cypher/Datalog/SQL-PGQ query or pattern match. |
| | Retrieve relevant subgraph context. |
| | Delete an entity (node) from the graph database. |
| | Delete a relation (edge) from the graph database. |
| | Update properties of an entity (node) in the graph database. |
| | Update properties of a relation (edge) in the graph database. |
| | Add labels to an entity (node) in the graph database. |
| | Remove labels from an entity (node) in the graph database. |
| | Bulk insert/update multiple entities in a single transaction. |
| | Bulk create/update multiple relations in a single transaction. |
| | Delete multiple entities in a single transaction. |
| | Conditional upsert with ON CREATE / ON MATCH semantics. |
| | Conditional upsert of a relation with ON CREATE / ON MATCH semantics. |
| | Create a property graph from vertex and edge tables (DuckPGQ). |
| | Drop a property graph (DuckPGQ). |
| | Run a graph algorithm (DuckPGQ). |
| | Find shortest path between two entities (DuckPGQ). |
| | List all created property graphs (DuckPGQ). |
| | Perform vector similarity search using Neo4j Vector Index. |
| | Create a vector index in Neo4j for similarity search. |
| | Drop a vector index from Neo4j. |
| | List all vector indexes in Neo4j. |
| | Check if the Neo4j instance supports vector indexes. |
### neo4j_gds.*

Module: `neo4j_gds_actions.py`

| Action | Description |
|---|---|
| | Check if Neo4j GDS library is available. |
| | Get the installed Neo4j GDS library version. |
| | Create an in-memory graph projection for GDS algorithms. |
| | Drop (remove) an in-memory graph projection. |
| | List all active in-memory graph projections. |
| | Estimate memory requirements for a GDS algorithm. |
| | Run PageRank algorithm on a projected graph. |
| | Run Betweenness Centrality algorithm. |
| | Run Degree Centrality algorithm. |
| | Run Closeness Centrality algorithm. |
| | Run Louvain community detection algorithm. |
| | Run Label Propagation community detection algorithm. |
| | Run Weakly Connected Components algorithm. |
| | Find shortest weighted path using Dijkstra’s algorithm. |
| | Find shortest path using A* algorithm with heuristic. |
| | Find shortest paths from source to all other nodes. |
| | Compute Jaccard similarity between nodes based on shared neighbors. |
| | Run K-Nearest Neighbors algorithm on node properties. |
### neo4j_trigger.*

Module: `neo4j_trigger_actions.py`

| Action | Description |
|---|---|
| | Check if APOC library is installed and available. |
| | Get the installed APOC library version. |
| | Check if APOC triggers are enabled in Neo4j configuration. |
| | Register a database trigger using APOC. |
| | Remove a registered trigger. |
| | List all registered triggers. |
| | Temporarily disable a trigger without removing it. |
| | Re-enable a paused trigger. |
| | Register a trigger that fires an HTTP webhook on graph changes. |
| | Register a trigger that writes to a state node for agent consumption. |
| | Remove triggers by prefix, used for session/agent cleanup. |
## Reasoning Actions (P2)
Reasoning actions provide advanced AI capabilities for planning, reflection, and error handling.
### error.*

Module: `error_actions.py`
### planning.*

Module: `planning_actions.py`

| Action | Description |
|---|---|
| | Decompose a goal into subtasks using LLM. |
| | Execute plan subtasks respecting dependency order. |
| | Re-plan from current state, preserving completed subtasks. |
| | Get current plan execution status. |
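A hedged sketch of a decompose-then-execute flow. `planning.decompose` and `planning.execute` and their parameter keys are assumed names based on the descriptions above:

```yaml
# Hypothetical example — planning action and parameter names are assumptions.
- name: make_plan
  uses: planning.decompose
  with:
    goal: "{{ state.goal }}"
  output: plan

- name: run_plan
  uses: planning.execute
  with:
    plan: "{{ state.plan }}"   # subtasks run in dependency order
  output: plan_results
```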
### ratelimit.*

Module: `ratelimit_actions.py`

| Action | Description |
|---|---|
| | Wrap any action with rate limiting. |
### reasoning.*

Module: `reasoning_actions.py`

| Action | Description |
|---|---|
| | Chain-of-Thought reasoning action. |
| | ReAct (Reason-Act) reasoning action. |
| | Self-correction reasoning action. |
| | Problem decomposition reasoning action. |
| | Chain-of-Thought using DSPy ChainOfThought module. |
| | ReAct using DSPy ReAct module with tool bridge. |
| | Compile DSPy module with teleprompter for optimized prompts. |
### reflection.*

Module: `reflection_actions.py`

| Action | Description |
|---|---|
| | Execute a generate→evaluate→correct loop (AC: 1, 5, 6). |
| | Standalone evaluation action (AC: 7). |
| | Standalone correction action (AC: 8). |
### retry.*

Module: `retry_actions.py`

| Action | Description |
|---|---|
| | Execute validation with retry loop (TEA-YAML-005). |
### validation.*

Module: `validation_actions.py`

| Action | Description |
|---|---|
| | Validate extracted entities and relationships (AC: 16-18). |
| | Generate a schema-guided extraction prompt (AC: 23-27). |
## Utility Actions (P3)
Utility actions provide specialized functionality for specific use cases.
### academic.*

Module: `academic_actions.py`

| Action | Description |
|---|---|
| `academic.pubmed` | Search PubMed database for scientific articles via NCBI E-utilities. |
| `academic.arxiv` | Search ArXiv preprint server for papers. |
| `academic.crossref` | Query CrossRef API for DOI metadata or search by query string. |
#### academic.pubmed

Search the PubMed database for scientific articles using NCBI E-utilities API.

Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| | string | required | Search query (PubMed query syntax) |
| | int | 5 | Maximum results to return |
| | string | "relevance" | Sort order: "relevance" or "date" |
| | int | 30 | Request timeout in seconds |

Rate Limiting: 3 requests/second (10 req/s with NCBI_API_KEY)
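A hedged sketch of a call. `academic.pubmed` is the documented action name; the parameter keys (`query`, `max_results`, `sort`) are assumptions matched to the parameter table above:

```yaml
# Parameter keys below are assumptions, not confirmed identifiers.
- name: find_articles
  uses: academic.pubmed
  with:
    query: "crispr AND gene therapy"   # PubMed query syntax
    max_results: 5
    sort: relevance
  output: pubmed_hits
```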
#### academic.arxiv

Search the ArXiv preprint server for research papers.

Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| | string | optional | Search query string |
| | string | optional | Direct paper lookup by ID |
| | int | 5 | Maximum results to return |
| | string | "relevance" | Sort order: "relevance" or "date" |

Rate Limiting: 1 request per 3 seconds (per ArXiv terms of service)
#### academic.crossref

Query the CrossRef API for DOI metadata or search by query string.

Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| | string | optional | DOI for direct lookup |
| | string | optional | Search query string |
| | int | 5 | Maximum results to return |
| | string | optional | Email for polite pool access (50 req/s) |
### auth.*

Module: `auth_actions.py`

| Action | Description |
|---|---|
| `auth.verify` | Verify an authentication token. |
| `auth.get_user` | Get full user profile by UID. |
#### auth.verify

Verify an authentication token. Extracts token from headers if not provided directly.

```yaml
- name: verify_token
  uses: auth.verify
  with:
    token: "{{ state.custom_token }}"       # Optional
    headers: "{{ state.request_headers }}"  # Optional
  output: auth_result
```
#### auth.get_user

Get full user profile by UID from Firebase Authentication.

```yaml
- name: get_profile
  uses: auth.get_user
  with:
    uid: "{{ state.__user__.uid }}"
  output: full_profile
```
### bmad.*

Module: `bmad_actions.py`

| Action | Description |
|---|---|
| | Parse a BMad story file into structured data. |
| | Parse a BMad story file into structured data. |
### catalog.*

Module: `catalog_actions.py`

| Action | Description |
|---|---|
| | Register a new table in the DuckLake catalog. |
| | Get table metadata from the catalog. |
| | List tables in the catalog with optional filtering. |
| | Track a Parquet or delta file in the catalog. |
| | Get file metadata from the catalog. |
| | List files for a table with optional filtering. |
| | Create a point-in-time snapshot for a table. |
| | Get the most recent snapshot for a table. |
| | List snapshots for a table. |
| | Get files that changed since a snapshot. |
### cloud_memory.*

Module: `cloud_memory_actions.py`

| Action | Description |
|---|---|
| | Store an artifact in cloud storage with metadata and embedding. |
| | Retrieve an artifact from cloud storage. |
| | List artifacts with filtering. |
| | Update metadata only (not file content). |
| | Search documents by anchors. |
### code.*

Module: `code_actions.py`

| Action | Description |
|---|---|
| | Execute Python code in a RestrictedPython sandbox. |
| | Manage persistent sandbox sessions for multi-step code execution. |
### context.*

Module: `context_actions.py`

| Action | Description |
|---|---|
| | Assemble context from configured layers with relevance ranking. |
### data_tabular.*

Module: `data_tabular_actions.py`

| Action | Description |
|---|---|
| | Register a new tabular table in the catalog. |
| | Insert rows into a tabular table. |
| | Update rows matching WHERE clause. |
| | Delete rows matching WHERE clause. |
| | Query tabular data with SQL. |
| | Full compaction: merge N Parquet files + inlined rows into one Parquet file. |
### dspy.*

Module: `dspy_actions.py`

| Action | Description |
|---|---|
| `dspy.cot` | Chain-of-Thought reasoning using DSPy ChainOfThought module. |
| `dspy.react` | ReAct reasoning using DSPy ReAct module. |
| `dspy.compile` | Compile a DSPy module with teleprompter for optimized prompts. |
| | Run optimization against a validation set. |
| | List all compiled DSPy modules. |
| | Export all compiled DSPy prompts for checkpoint persistence. |
| | Import compiled DSPy prompts from checkpoint persistence. |
### git.*

Module: `git_actions.py`
### github.*

Module: `github_actions.py`

| Action | Description |
|---|---|
| | List issues from a GitHub repository. |
| | Create a new GitHub issue. |
| | Update an existing GitHub issue. |
| | Search GitHub issues using GitHub search syntax. |
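A hedged sketch of listing issues. The action name `github.list_issues` and all parameter keys are assumptions for illustration, not documented identifiers:

```yaml
# Hypothetical example — action and parameter names are assumptions.
- name: open_bugs
  uses: github.list_issues
  with:
    repo: owner/repo       # assumed "owner/name" form
    state: open
    labels: [bug]
  output: issues
```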
### http_response.*

Module: `http_response_actions.py`

| Action | Description |
|---|---|
| | Synchronous version of http.respond. |
### input_validation.*

Module: `input_validation_actions.py`

| Action | Description |
|---|---|
| | Validate input data against a schema (AC10). |
| | Create a reusable schema validator. |
### llamaextract.*

Module: `llamaextract_actions.py`

| Action | Description |
|---|---|
| | Extract structured data from a document using LlamaExtract. |
| | Create or update an extraction agent. |
| | List available extraction agents. |
| | Get extraction agent details. |
| | Delete an extraction agent. |
| | Submit async extraction job to LlamaExtract. |
| | Poll job status from LlamaExtract. |
| | Get extraction result for a completed job. |
### llamaindex.*

Module: `llamaindex_actions.py`

| Action | Description |
|---|---|
| | Execute a simple vector query against a LlamaIndex index. |
| | Execute a router query that selects the best engine for the query. |
| | Execute a sub-question query that decomposes complex queries. |
| | Create a new LlamaIndex index from documents or a directory. |
| | Load a persisted LlamaIndex index. |
| | Add documents to an existing LlamaIndex index. |
### llm_local.*

Module: `llm_local_actions.py`

| Action | Description |
|---|---|
| | LLM completion using local or API backend. |
| | Chat completion using OpenAI-compatible format. |
| | Streaming LLM generation with token-by-token output. |
| | Generate text embeddings using local or API backend. |
| | Chat completion using OpenAI-compatible format. |
| | Generate text embeddings using local or API backend. |
### markdown.*

Module: `markdown_actions.py`

| Action | Description |
|---|---|
| | Parse Markdown content into a structured document. |
| | Parse Markdown content into a structured document. |
### mem0.*

Module: `mem0_actions.py`

| Action | Description |
|---|---|
| | Store messages with automatic fact extraction using Mem0. |
| | Search memories by semantic similarity using Mem0. |
| | Get all memories for a specified scope. |
| | Get a specific memory by its ID. |
| | Update an existing memory by ID. |
| | Delete memories by ID or scope. |
| | Test Mem0 connection and configuration. |
### observability.*

Module: `observability_actions.py`

| Action | Description |
|---|---|
| | Start a new trace span. |
| | Log an event, metrics, or state snapshot to the current span. |
| | End the current trace span. |
| | Validate Opik connectivity and authentication (TEA-BUILTIN-005.3). |
| | Get the complete flow log from ObservabilityContext (TEA-OBS-001.1). |
| | Log a custom event to the observability stream (TEA-OBS-001.1). |
| | Query events from the observability stream (TEA-OBS-001.1). |
### rag.*

Module: `rag_actions.py`

| Action | Description |
|---|---|
| | Create embeddings from text. |
| | Store documents with embeddings in vector store. |
| | Query vector store for similar documents. |
| | Index files/directories into vector store (AC: 1-13). |
### schema.*

Module: `schema_actions.py`

| Action | Description |
|---|---|
| | Deep merge multiple JSON Schemas with kubectl-style semantics. |
### search.*

Module: `search_actions.py`

| Action | Description |
|---|---|
| | Execute grep-like search across agent memory. |
| | Execute SQL query against agent_memory table with safety controls. |
| | Search for files by structured content field values. |
### secrets.*

Module: `secrets_actions.py`

| Action | Description |
|---|---|
| `secrets.get` | Get a secret value by key. |
| `secrets.has` | Check if a secret exists. |
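A hedged sketch of checking for and then loading a secret. `secrets.get` and `secrets.has` are documented names; the parameter key (`key`) is an assumption:

```yaml
# The `key` parameter name is an assumption.
- name: check_api_key
  uses: secrets.has
  with:
    key: OPENAI_API_KEY
  output: has_key

- name: load_api_key
  uses: secrets.get
  with:
    key: OPENAI_API_KEY
  output: api_key
```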
### semtools.*

Module: `semtools_actions.py`

| Action | Description |
|---|---|
| | Semantic search using SemTools CLI. |
### session.*

Module: `session_actions.py`

| Action | Description |
|---|---|
| | Create a new session with expiration. |
| | End session and archive its memory. |
| | Archive session with custom reason. |
| | Restore archived session. |
| | Get session metadata. |
| | List sessions with optional filtering. |
| | Archive sessions that have exceeded their TTL. |
### session_persistence.*

Module: `session_persistence_actions.py`

| Action | Description |
|---|---|
| | Load session data from the configured session backend. |
| | Save current state to the session backend. |
| | Delete a session from the backend. |
| | Check if a session exists. |
### storage.*

Module: `storage_actions.py`

| Action | Description |
|---|---|
| | List files/objects at the given path. |
| | Check if a file/object exists. |
| | Delete a file/object or directory. |
| | Copy a file/object to another location. |
| | Get metadata/info about a file/object. |
| | Create a directory/prefix. |
| | Execute a native filesystem operation not exposed by standard fsspec API. |
### text.*

Module: `text_actions.py`

| Action | Description |
|---|---|
| `text.insert_citations` | Insert citation markers using semantic embedding matching. |
#### text.insert_citations

Insert citation markers into text using semantic embedding matching. Uses OpenAI embeddings to compute similarity between sentences and references, placing citations at the most semantically relevant positions.
Parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| | string | required | Markdown text to process |
| | list[str] | required | List of reference strings |
| | string | "text-embedding-3-large" | OpenAI embedding model |
| | string | None | OpenAI API key (uses env var if not provided) |
Returns:

```json
{
  "cited_text": "Text with [1] citation markers inserted.",
  "references_section": "## References\n\n1. Author. Title. 2020.",
  "citation_map": {"1": "Author. Title. 2020."},
  "text": "Full text with citations and References section"
}
```
Features:

- Semantic matching via embeddings (not just keyword matching)
- Citations placed at most relevant sentences
- Conclusions and Abstract sections excluded from citation
- References reordered by first occurrence
- Markdown formatting preserved
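A hedged sketch of a call. `text.insert_citations` is the documented action name; the parameter keys (`text`, `references`) are assumptions matched to the parameter table above:

```yaml
# Parameter keys below are assumptions, not confirmed identifiers.
- name: cite_report
  uses: text.insert_citations
  with:
    text: "{{ state.draft_markdown }}"
    references:
      - "Smith J. Example Study. 2020."
      - "Doe A. Another Study. 2021."
  output: cited
```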
### textgrad.*

Module: `textgrad_actions.py`

| Action | Description |
|---|---|
| | Define an optimizable prompt variable (learn.textgrad.variable action). |
| | Compute textual gradients from output evaluation (learn.textgrad.feedback action). |
| | Optimize a prompt variable using TextGrad (learn.textgrad.optimize_prompt action). |
| | Corrector for reflection.loop that uses TextGrad for prompt optimization (AC: 4). |
| | Define an optimizable prompt variable (learn.textgrad.variable action). |
| | Compute textual gradients from output evaluation (learn.textgrad.feedback action). |
| | Optimize a prompt variable using TextGrad (learn.textgrad.optimize_prompt action). |
| | Corrector for reflection.loop that uses TextGrad for prompt optimization (AC: 4). |
### tools.*

Module: `tools_actions.py`

| Action | Description |
|---|---|
| | Execute a CrewAI tool. |
| | Execute a tool from an MCP server. |
| | Execute a LangChain tool. |
| | Discover available tools from specified sources. |
| | Clear the tool discovery cache. |
### vector.*

Module: `vector_actions.py`

| Action | Description |
|---|---|
| | Semantic search over agent memory using vector similarity. |
| | Search using a pre-computed embedding vector. |
| | Load vector data from a Parquet file or URL. |
| | Build or rebuild the vector search index. |
| | Get statistics about the vector index. |
| | Generate embedding for content. |
| | Generate embeddings for multiple content strings. |
| | Backfill embeddings for documents missing them. |
### web.*

Module: `web_actions.py`

| Action | Description |
|---|---|
| | Scrape a URL and extract LLM-ready content via Firecrawl API. |
| | Crawl a website recursively via Firecrawl API. |
| | Perform web search via Perplexity API. |
| | Extract structured data from a URL using ScrapeGraphAI. |
## Deprecated Actions

The following actions have preferred alternatives:

| Deprecated | Use Instead | Notes |
|---|---|---|
| | `llm.call` | Built-in retry support |
## Custom Actions

Register custom actions via the `imports:` section in YAML:

```yaml
imports:
  - path: ./my_actions.py
    actions:
      - my_custom_action
```

Your module must implement `register_actions()`:

```python
def register_actions(registry, engine):
    """Register custom actions with the engine's action registry."""
    registry['custom.my_action'] = my_action_function
```
## Source Location

All action modules are in `python/src/the_edge_agent/actions/`.

This document was auto-generated from the codebase. Run `python scripts/extract_action_signatures.py` to update.