afnio#
- class afnio.GradientEdge(node: Node, output_nr: int)[source]#
Bases: NamedTuple
Object representing a given gradient edge within the autodiff graph.
To get the gradient edge where a given Variable’s gradient will be computed, you can do edge = autodiff.graph.get_gradient_edge(variable).
- count(value, /)#
Return number of occurrences of value.
- index(value, start=0, stop=9223372036854775807, /)#
Return first index of value.
Raises ValueError if the value is not present.
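Example (an illustrative sketch, assuming the import afnio as hf alias used throughout these docs and that the + operation on Variables is tracked by the autodiff engine):
>>> import afnio as hf
>>> from afnio.autodiff import graph
>>> x = hf.Variable("abc", requires_grad=True)
>>> y = x + "def"                      # non-leaf Variable with a grad_fn
>>> edge = graph.get_gradient_edge(y)  # a GradientEdge named tuple
>>> edge.node                          # the Node where the gradient is computed
>>> edge.output_nr                     # index of y among that Node's outputs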
- class afnio.Node(next_functions=None)[source]#
Bases: object
- name()[source]#
Return the name.
Example:
>>> import afnio as hf
>>> import afnio.cognitive.functional as F
>>> a = hf.Variable("Hello,", requires_grad=True)
>>> b = hf.Variable("world!", requires_grad=True)
>>> c = F.sum([a, b])
>>> assert isinstance(c.grad_fn, hf.autodiff.graph.Node)
>>> print(c.grad_fn.name())
SumBackward0
- property next_functions: Tuple[GradientEdge]#
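Continuing the name() example above, the edges leading into a Node can be inspected through this property (an illustrative sketch; the exact tuple contents depend on the graph):
>>> edges = c.grad_fn.next_functions   # one GradientEdge per input of the Node
>>> for edge in edges:
...     print(edge.node, edge.output_nr)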
- class afnio.Variable(data='', role='', requires_grad=False)[source]#
Bases: object
A class to represent generic data, such as textual inputs, outputs, or numeric data.
- data#
The raw data, which can be a single string or numeric value, or a list of strings or numeric values.
- grad#
Stores the gradient of the variable, if requires_grad is set to True and backpropagation has been performed.
- Type:
Optional[float]
- backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)[source]#
Computes the gradient of current variable wrt graph leaves.
The graph is differentiated using the chain rule. If the variable is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying a gradient. It should be a variable with data of matching type and shape, that represents the gradient of the differentiated function w.r.t. self.
This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it.
Note
When inputs are provided, each input must be a leaf variable. If any input is not a leaf, a RuntimeError is raised.
- Parameters:
gradient (Variable, optional) – The gradient of the function being differentiated w.r.t. self. This argument can be omitted if self is a scalar.
retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Setting this to True retains the graph, allowing for additional backward calls on the same graph, useful for example for multi-task learning where you have multiple losses. However, retaining the graph is not needed in nearly all cases and can be worked around in a much more efficient way. Defaults to the value of create_graph.
create_graph (bool, optional) – If True, a graph of the derivative will be constructed, allowing higher-order derivative products to be computed. Defaults to False.
inputs (sequence of Variable, optional) – Inputs w.r.t. which the gradient will be accumulated into .grad. All other variables will be ignored. If not provided, the gradient is accumulated into all the leaf Variables that were used to compute the current variable.
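Example (a minimal sketch reusing the F.sum graph from the Node example above; it assumes a backward model client has been configured, e.g. via set_backward_model_client(), and the textual gradients produced depend on that client):
>>> import afnio as hf
>>> import afnio.cognitive.functional as F
>>> hf.set_backward_model_client("openai/gpt-4o")  # assumed prerequisite
>>> a = hf.Variable("Hello,", requires_grad=True)
>>> b = hf.Variable("world!", requires_grad=True)
>>> loss = F.sum([a, b])
>>> loss.backward()       # accumulates gradients into the leaves `a` and `b`
>>> a.grad is not None
True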
- copy_(src)[source]#
Copies the data from the source Variable into this Variable.
- Parameters:
src (Variable) – The source Variable to copy from.
- Returns:
The current Variable with updated data, role and requires_grad.
- Return type:
self
- Raises:
TypeError – If the source is not a Variable.
ValueError – If the source data type does not match the target data type.
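Example (an illustrative sketch, assuming the import afnio as hf alias):
>>> src = hf.Variable(data="abc", role="source", requires_grad=True)
>>> dst = hf.Variable(data="", role="target")
>>> dst.copy_(src)        # dst now mirrors src's data, role and requires_grad
>>> dst.data
'abc'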
- property data#
- detach()[source]#
Returns a new Variable, detached from the computation graph. This new Variable will not have a grad_fn and will not track gradients.
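Example (an illustrative sketch):
>>> x = hf.Variable("abc", requires_grad=True)
>>> y = (x + "def").detach()  # same data, but cut from the computation graph
>>> y.grad_fn is None
True
>>> y.requires_grad
False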
- is_floating_point()[source]#
Checks if the Variable’s data contains floating-point values.
- Returns:
True if the data is a floating-point type (either a scalar or all elements in a list/tuple are floating-point).
- Return type:
bool
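Example (an illustrative sketch of the scalar and list cases):
>>> hf.Variable(data=1.5).is_floating_point()
True
>>> hf.Variable(data="abc").is_floating_point()
False
>>> hf.Variable(data=[0.1, 0.2]).is_floating_point()
True
>>> hf.Variable(data=[0.1, 2]).is_floating_point()
False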
- is_leaf: bool#
All Variables that have requires_grad which is False will be leaf Variables by convention.
For Variables that have requires_grad which is True, they will be leaf Variables if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
Only leaf Variables will have their grad populated during a call to backward(). To get grad populated for non-leaf Variables, you can use retain_grad().
Example:
>>> a = hf.Variable("abc", requires_grad=True)
>>> a.is_leaf
True
>>> b = hf.Variable("abc", requires_grad=True).upper()
>>> b.is_leaf
False  # b was created by the operation that converts all string characters to uppercase
>>> c = hf.Variable("abc", requires_grad=True) + "def"
>>> c.is_leaf
False  # c was created by the addition operation
>>> d = hf.Variable("abc").upper()
>>> d.is_leaf
True  # d does not require gradients and so has no operation creating it (that is tracked by the autodiff engine)
>>> e = hf.Variable("abc").upper().requires_grad_()
>>> e.is_leaf
True  # e requires gradients and has no operations creating it
- requires_grad_(mode=True)[source]#
Change if autodiff should record operations on this variable: sets this variable’s requires_grad attribute in-place. Returns this variable.
requires_grad_()’s main use case is to tell autodiff to begin recording operations on a Variable variable. If variable has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), variable.requires_grad_() makes it so that autodiff will begin to record operations on variable.
- Parameters:
requires_grad (bool) – If autodiff should record operations on this variable. Default: True.
Example
>>> # Initialize with requires_grad=False for data preprocessing
>>> x = hf.Variable(data="abc", role="input")
>>> x = preprocess(x)  # Preprocess without gradient tracking
>>> x
variable(abc, role=input, requires_grad=False)
>>> # Now enable requires_grad for backpropagation
>>> x.requires_grad_()
>>> output = model(x)
>>> output.backward()  # Backpropagation through `x`
>>> x.grad
variable(ABC, role=input, requires_grad=True)
- afnio.get_backward_model_client()[source]#
Retrieve the global model client singleton.
- Raises:
RuntimeError – If no model client is set globally.
- Returns:
The global model client.
- Return type:
ModelClientSingleton
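Example (an illustrative sketch; see set_backward_model_client() below for the accepted arguments):
>>> hf.set_backward_model_client("openai/gpt-4o")
>>> client = hf.get_backward_model_client()  # raises RuntimeError if no client was set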
- afnio.load(f)[source]#
Loads an object from a disk file using zip compression and pickle serialization.
- Parameters:
f (Union[str, PathLike, BinaryIO, IO[bytes]]) – A file-like object (must implement read) or a string or os.PathLike object containing a file name.
- Returns:
The deserialized object.
Example
>>> # Load from file
>>> obj = hf.load('model.hf')
>>> # Load from an io.BytesIO buffer containing previously saved data
>>> with open('model.hf', 'rb') as f:
...     buffer = io.BytesIO(f.read())
>>> obj = hf.load(buffer)
- afnio.no_grad()[source]#
Context manager that disables gradient calculation. All operations within this block will not track gradients, making them more memory-efficient.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Variable.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.
In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is an exception! All factory functions, or functions that create a new Variable and take a requires_grad kwarg, will NOT be affected by this mode.
This context manager is thread local; it will not affect computation in other threads.
Also functions as a decorator.
Example:
>>> x = hf.Variable("abc", role="variable", requires_grad=True)
>>> with hf.no_grad():
...     y = x + x
>>> y.requires_grad
False
>>> @hf.no_grad()
... def doubler(x):
...     return x + x
>>> z = doubler(x)
>>> z.requires_grad
False
>>> @hf.no_grad
... def tripler(x):
...     return x + x + x
>>> z = tripler(x)
>>> z.requires_grad
False
>>> # factory function exception
>>> with hf.no_grad():
...     a = hf.cognitive.Parameter("xyz")
>>> a.requires_grad
True
- afnio.save(obj, f, pickle_protocol=2)[source]#
Saves an object to a disk file using zip compression and pickle serialization.
- Parameters:
obj – The object to save.
f (Union[str, PathLike, BinaryIO, IO[bytes]]) – A file-like object (must implement write and flush) or a string or os.PathLike object containing a file name.
pickle_protocol (int) – The pickle protocol to use for serialization. Default: 2.
Note
A common Afnio convention is to save variables using the .hf file extension.
Example
>>> # Save to file
>>> x = hf.Variable(data="You are a doctor.", role="system prompt")
>>> hf.save(x, 'variable.hf')
>>> # Save to io.BytesIO buffer
>>> buffer = io.BytesIO()
>>> hf.save(x, buffer)
- afnio.set_backward_model_client(model_path='openai/gpt-4o', client_args=None, completion_args=None)[source]#
Set the global model client for backward operations.
- Parameters:
model_path (str) – Path in the format provider/model_name (e.g., "openai/gpt-4o"). Default: "openai/gpt-4o".
client_args (Dict) – Arguments to initialize the model client, such as:
- api_key (str): The client API key.
- organization (str): The organization to bill.
- base_url (str): The model base endpoint URL (useful when models are behind a proxy).
- etc.
completion_args (Dict) – Arguments to pass to achat() during usage, such as:
- model (str): The model to use (e.g., gpt-4o).
- temperature (float): Amount of randomness injected into the response.
- max_completion_tokens (int): Maximum number of tokens to generate.
- etc.
Note
For a complete list of supported client_args and completion_args for each model, refer to the respective API documentation.
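Example (an illustrative sketch; the API key is read from an environment variable as a placeholder you must supply):
>>> import os
>>> hf.set_backward_model_client(
...     "openai/gpt-4o",
...     client_args={"api_key": os.environ["OPENAI_API_KEY"]},
...     completion_args={"temperature": 0.0, "max_completion_tokens": 512},
... )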
Modules