Build the Agent or Workflow#
Warning
Before running any code, ensure you are logged in to the Afnio backend (afnio login). See Logging in to Afnio Backend for details.
Afnio agents and workflows are built by composing modules that perform operations on Variables. The afnio.cognitive namespace provides all the building blocks you need to create your own agent or workflow powered by language models (LMs). Every module in Afnio subclasses afnio.cognitive.modules.Module. An agent or workflow is itself a module that can contain other modules (layers), parameters, buffers, and LM models (such as OpenAI or Anthropic clients). This modular, nested design makes it easy to build, extend, and manage complex agent architectures.
In the following sections, we’ll build an agent to classify the sentiment (positive, neutral, or negative) and urgency (low, medium, or high) of enterprise support emails in the Meta Facility Support Analyzer dataset.
import afnio
import afnio.cognitive as cog
from afnio.models.openai import AsyncOpenAI
Define the Module#
To define an AI agent or workflow, subclass cog.Module and initialize your parameters, buffers, submodules, and other components in __init__. Implement the agent’s logic in the forward method, which processes inputs and returns outputs.
Example: Sentiment Classification Agent
# Define the response format schema
SENTIMENT_RESPONSE_FORMAT = {
    "type": "json_schema",
    "json_schema": {
        "strict": True,
        "name": "sentiment_response_schema",
        "schema": {
            "type": "object",
            "properties": {
                "sentiment": {
                    "type": "string",
                    "enum": ["positive", "neutral", "negative"],
                },
            },
            "additionalProperties": False,
            "required": ["sentiment"],
        },
    },
}
class SentimentAgent(cog.Module):
    def __init__(self):
        super().__init__()
        self.sentiment_task = cog.Parameter(
            data="Read the provided message and determine the sentiment.",
            role="system prompt for sentiment classification",
            requires_grad=True,
        )
        self.sentiment_user = afnio.Variable(
            data="**Message:**\n\n{message}\n\n",
            role="input template to sentiment classifier",
        )
        self.sentiment_classifier = cog.ChatCompletion()

    def forward(self, fwd_model, inputs, **completion_args):
        sentiment_messages = [
            {"role": "system", "content": [self.sentiment_task]},
            {"role": "user", "content": [self.sentiment_user]},
        ]
        return self.sentiment_classifier(
            fwd_model,
            sentiment_messages,
            inputs=inputs,
            response_format=SENTIMENT_RESPONSE_FORMAT,
            **completion_args,
        )
The agent constructs a multi-turn chat message and uses cog.ChatCompletion to interact with an LM for sentiment classification.
First, create an instance of SentimentAgent:
# Instantiate the agent
sentiment_agent = SentimentAgent()
print(sentiment_agent)
Output:
SentimentAgent(
  (sentiment_classifier): ChatCompletion()
)
Before running the agent, set your OpenAI API key and initialize the LM client that will be used by cog.ChatCompletion:
import os
os.environ["OPENAI_API_KEY"] = "sk-..." # Replace with your actual key
# Set up the LM client for forward passes
fwd_model_client = AsyncOpenAI()
To run the agent, call it as a function with the LM client, input data, and any arguments required by the LM (such as model name, temperature, etc.). This executes the agent’s forward method and automatically builds the computation graph for backpropagation on the server.
Note
Do not call agent.forward() directly! Always call the agent instance as a function (e.g., agent(...)) to ensure hooks and background operations are handled correctly.
Inputs are typically passed as a dictionary mapping variable names to their values; these can be individual items or batches. For example, we use the "message" key to match the {message} placeholder in the self.sentiment_user Variable buffer; the placeholder is replaced with the corresponding input using Python's str.format-style string substitution. This approach lets you flexibly provide data for prompt templates or multiple fields, and agents can accept more complex input structures as needed.
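Conceptually, this placeholder substitution behaves like Python's built-in str.format applied element-wise over a batch. The sketch below is plain Python for illustration only; Afnio performs the equivalent expansion internally when it renders prompt templates:

```python
# Plain-Python sketch of how a `{message}` placeholder template expands
# over a batch of inputs (illustrative only; Afnio handles this for you).
template = "**Message:**\n\n{message}\n\n"
batch = ["First email body", "Second email body"]

# One rendered prompt per batch item
rendered = [template.format(message=m) for m in batch]
print(rendered[0])  # "**Message:**\n\nFirst email body\n\n"
```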
# Use the agent
emails = afnio.Variable(
    [
        "Subject: Cleaning Schedule for Upcoming Semester\n\nDear ProCare Team,\n\nI'm Dr. [Sender], and I am a professor at [University/Institution]. I have been consistently impressed with the quality of your services and, as we prepare for the new semester, I'd like to review and update the cleaning schedule for our engineering department. We're considering more frequent deep cleans and eco-friendly products for high-traffic areas.\n\nThis isn't urgent, but I'd appreciate your help coordinating these updates soon.\n\nBest,\nDr. [Sender]\n[University/Institution]",
        "Subject: Immediate Attention Required: Emergency Repair Needed\n\nHi ProCare Team,\n\nThis is [Sender], and I have to say, I'm not impressed. I've been using your services for a while now, and I expected better. Our HVAC system has completely failed, and the temperature is unbearable. I’ve tried resetting it and checking breakers, but no luck. Please send a repair team immediately—this needs urgent attention.\n\nThanks,\n[Sender]",
    ],
    role="input email or message",
)
response = sentiment_agent(
    fwd_model_client,
    inputs={"message": emails},
    model="gpt-4.1-nano",
    temperature=0.0,
)
print(response.data)
Output:
['{"sentiment":"positive"}', '{"sentiment":"negative"}']
Note
When you instantiate an LM client with an API key, Afnio will ask for your consent to share the key with the backend for remote model execution and backpropagation.
The key is only used to process your requests during your session.
It is never stored and is removed from memory when your session ends.
You can type 'always' to remember your choice for future sessions and suppress this prompt.
Consent prompt:
========== [Consent Required] ==========
Your LM model API key will be sent to the Tellurio server for remote model execution and backpropagation.
Please review the following:
• Tellurio will never use your key except to execute your requests.
• The key is only used during your session.
• The key is never stored and is removed from memory when your session ends.
Do you consent to share your API key with the server?
Type 'yes' to allow just this time, or 'always' to allow and remember your choice for all future sessions.
Consent [yes/always/no]: always
Module Structure#
Afnio modules are highly flexible: you can define and use parameters, buffers, built-in modules (like cog.Split or cog.ChatCompletion), custom submodules, external models (such as OpenAI or Anthropic clients), and even your own Python functions—all as attributes of your module. This nested and composable structure makes it easy to build, extend, and manage complex workflows.
All attributes of a module—such as parameters, buffers, submodules, models, and functions—are automatically registered and accessible for optimization, inspection, or serialization. This makes it straightforward to train, audit, and deploy your agents in any environment.
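As a rough mental model of how this automatic registration works, picture a __setattr__ hook that files each assigned attribute into a registry so it can later be enumerated. The sketch below is plain Python for intuition only; it is not Afnio's actual implementation, and TinyModule, Parent, and named_submodules are hypothetical names:

```python
# Conceptual sketch of automatic attribute registration (NOT Afnio's
# actual implementation): a __setattr__ hook records nested modules in
# a registry so they can be enumerated for inspection or optimization.
class TinyModule:
    def __init__(self):
        # Bypass the hook while creating the registry itself
        object.__setattr__(self, "_submodules", {})

    def __setattr__(self, name, value):
        if isinstance(value, TinyModule):
            self._submodules[name] = value  # nested modules are tracked
        object.__setattr__(self, name, value)

    def named_submodules(self):
        return dict(self._submodules)


class Parent(TinyModule):
    def __init__(self):
        super().__init__()
        self.child = TinyModule()  # registered automatically on assignment


parent = Parent()
print(list(parent.named_submodules()))  # ['child']
```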
Example: Urgency Classification Agent
# Define the response format schema
URGENCY_RESPONSE_FORMAT = {
    "type": "json_schema",
    "json_schema": {
        "strict": True,
        "name": "urgency_response_schema",
        "schema": {
            "type": "object",
            "properties": {
                "urgency": {"type": "string", "enum": ["low", "medium", "high"]},
            },
            "additionalProperties": False,
            "required": ["urgency"],
        },
    },
}
class UrgencyAgent(cog.Module):
    def __init__(self):
        super().__init__()
        self.split = cog.Split()
        self.urgency_task = cog.Parameter(
            data="Read the provided message and determine the urgency.",
            role="system prompt for urgency classification",
            requires_grad=True,
        )
        self.register_buffer(
            "urgency_user",
            afnio.Variable(
                data="**Message:**\n\n{message}\n\n",
                role="input template to urgency classifier",
            ),
        )
        self.urgency_classifier = cog.ChatCompletion()
        self.lm_model = AsyncOpenAI()

    def forward(self, fwd_model, inputs, **completion_args):
        # Split email into subject and body
        subject, body = self.split(inputs["message"], "\n", 1)
        clean_body = self.preprocess_text(body)
        urgency_messages = [
            {"role": "system", "content": [self.urgency_task]},
            {"role": "user", "content": [self.urgency_user]},
        ]
        # Choose which model to use
        chosen_model = fwd_model if fwd_model else self.lm_model
        # Classify urgency of the email body
        urgency = self.urgency_classifier(
            chosen_model,
            urgency_messages,
            inputs={"message": clean_body},
            response_format=URGENCY_RESPONSE_FORMAT,
            **completion_args,
        )
        return urgency, subject

    def preprocess_text(self, var):
        return afnio.Variable([t.strip() for t in var.data])
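The subject/body split above mirrors Python's str.split with a maxsplit of 1, which we assume cog.Split applies to each item in a batched Variable while participating in the computation graph. A plain-Python sketch of that behavior:

```python
# Plain-Python sketch of splitting an email into subject and body,
# mirroring str.split(separator, maxsplit=1); we assume cog.Split
# applies the same logic per item in a batched Variable.
email = "Subject: Emergency Repair Needed\nHi ProCare Team,\nour HVAC system has failed."
subject, body = email.split("\n", 1)
print(subject)  # "Subject: Emergency Repair Needed"
print(body)     # "Hi ProCare Team,\nour HVAC system has failed."
```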
In this example, the module contains:

| Type | Example(s) | Description |
|---|---|---|
| Parameters | `urgency_task` | Learnable Variables (e.g., prompts) that can be optimized. |
| Buffers | `urgency_user` | Variables that store state but are not optimized. |
| Built-in modules | `split`, `urgency_classifier` | Predefined Afnio modules for operations like `cog.Split` and `cog.ChatCompletion`. |
| Custom submodules | None in this example | User-defined modules for hierarchical workflows. |
| External models | `lm_model` | LM client instances stored as module attributes. |
| Custom functions | `preprocess_text` | Python functions for custom logic, preprocessing, or postprocessing. |
We create an instance of UrgencyAgent:
# Instantiate the agent
urgency_agent = UrgencyAgent()
print(urgency_agent)
Output:
UrgencyAgent(
  (split): Split()
  (urgency_classifier): ChatCompletion()
)
Parameters and Buffers#
Parameters are learnable Variables (such as prompts, or part of a prompt) that can be optimized during training. Buffers are Variables that store state or intermediate values but are not optimized.
Accessing parameters and buffers:
You can access parameters and buffers for inspection, auditing, or serialization:
for name, param in urgency_agent.named_parameters():
    print(f"Parameter: {name}\n Data: {param.data}\n Role: {param.role}")

for name, buf in urgency_agent.named_buffers():
    print(f"Buffer: {name}\n Data: {buf.data!r}\n Role: {buf.role}")
Parameter: urgency_task
Data: Read the provided message and determine the urgency.
Role: system prompt for urgency classification
Buffer: urgency_user
Data: '**Message:**\n\n{message}\n\n'
Role: input template to urgency classifier
Composing with Modules#
Composing modules enables you to build complex, multi-step workflows by combining reusable components. This approach is especially useful for agents that perform several tasks in sequence or parallel, apply branching logic, or need to process intermediate results.
You can nest modules to create hierarchical workflows—simply assign submodules as attributes in your module’s __init__ method to register them automatically. This makes it easy to organize, extend, and maintain sophisticated agent architectures.
Example: Facility Support Agent
The FacilitySupportAgent combines sentiment and urgency classification to generate a tailored response for each email.
import json
class FacilitySupportAgent(cog.Module):
    def __init__(self):
        super().__init__()
        self.sentiment_classifier = SentimentAgent()
        self.urgency_classifier = UrgencyAgent()

    def forward(self, fwd_model, inputs, **completion_args):
        sentiment = self.sentiment_classifier(
            fwd_model, inputs=inputs, **completion_args
        )
        urgency, subject = self.urgency_classifier(
            fwd_model, inputs=inputs, **completion_args
        )

        sentiments = [json.loads(s)["sentiment"] for s in sentiment.data]
        urgencies = [json.loads(u)["urgency"] for u in urgency.data]
        subjects = subject.data

        responses = []
        for s, u, subj in zip(sentiments, urgencies, subjects):
            if s == "negative" and u == "high":
                resp = f"{subj}\n\nThank you for your message. One of our operators will reach out to you in the next 15 minutes to address your urgent concerns."
            else:
                resp = f"{subj}\n\nThank you for your message. We have received your request and will get back to you in the next 48 hours."
            responses.append(resp)
        return sentiments, urgencies, responses
You can instantiate the composed agent as follows:
# Instantiate the agent
facility_agent = FacilitySupportAgent()
print(facility_agent)
Output:
FacilitySupportAgent(
  (sentiment_classifier): SentimentAgent(
    (sentiment_classifier): ChatCompletion()
  )
  (urgency_classifier): UrgencyAgent(
    (split): Split()
    (urgency_classifier): ChatCompletion()
  )
)
You can use the composed agent with the same inputs defined earlier:
# Use the agent
sentiment, urgency, response = facility_agent(
    fwd_model_client,
    inputs={"message": emails},
    model="gpt-4.1-nano",
    temperature=0.0,
)
print(f"Sentiment: {sentiment[0]}\nUrgency: {urgency[0]}\nResponse: {response[0]}")
print("=" * 126)
print(f"Sentiment: {sentiment[1]}\nUrgency: {urgency[1]}\nResponse: {response[1]}")
Output:
Sentiment: positive
Urgency: low
Response: Subject: Cleaning Schedule for Upcoming Semester
Thank you for your message. We have received your request and will get back to you in the next 48 hours.
==============================================================================================================================
Sentiment: negative
Urgency: high
Response: Subject: Immediate Attention Required: Emergency Repair Needed
Thank you for your message. One of our operators will reach out to you in the next 15 minutes to address your urgent concerns.