afnio.autodiff.basic_ops#

Classes

Add(*args, **kwargs)

Implements an addition operation for Variable instances within the afnio framework, supporting automatic differentiation.

Split(*args, **kwargs)

Implements a split operation for Variable instances within the afnio framework, supporting automatic differentiation.

Sum(*args, **kwargs)

Implements a summation operation for a list of Variable instances within the afnio framework, supporting automatic differentiation.

class afnio.autodiff.basic_ops.Add(*args, **kwargs)[source]#

Bases: Function

Implements an addition operation for Variable instances within the afnio framework, supporting automatic differentiation.

This class inherits from autodiff.Function and requires both the forward and backward methods to be defined.

The Add function supports both scalar and list .data fields:

  • Scalars: Adds numerical values (int, float) or concatenates strings.

  • Lists: Performs element-wise addition of corresponding elements from the lists. Lists must be of the same length.

It automatically handles type-based operations:

  • For numerical data (int, float), it performs arithmetic addition.

  • For strings, it concatenates the values.

  • Mixed types (e.g., string and number) are converted appropriately before performing the addition.

This operation also tracks Variable dependencies, enabling automatic gradient computation through backpropagation.
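
The type-dispatch rule above can be sketched in plain Python. This is illustrative only, not the library's implementation; in particular, coercing mixed types to strings before concatenation is an assumption.

def add_scalars(a, b):
    # Numerical data: arithmetic addition.
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return a + b
    # Strings (and, by assumption, mixed types coerced to strings): concatenation.
    return str(a) + str(b)

def add_data(x_data, y_data):
    # Lists: element-wise addition; the lists must have the same length.
    if isinstance(x_data, list) and isinstance(y_data, list):
        if len(x_data) != len(y_data):
            raise ValueError("Lists must be of the same length.")
        return [add_scalars(a, b) for a, b in zip(x_data, y_data)]
    # Scalars: add or concatenate depending on type.
    return add_scalars(x_data, y_data)

add_data([1, 2, 3], [4, 5, 6])  # -> [5, 7, 9]
add_data("abc", "def")          # -> 'abcdef'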

Example with scalar inputs:

>>> x = Variable(data="abc", role="first input", requires_grad=True)
>>> y = Variable(data="def", role="second input", requires_grad=False)
>>> result = Add.apply(x, y)
>>> result.data
'abcdef'
>>> result.role
'first input and second input'
>>> result.requires_grad
True
>>> g = Variable(data="MY_FEEDBACK", role="add gradient")
>>> result.backward(g)
>>> x.grad[0].data
'Here is the combined feedback we got for this specific first input and other variables: MY_FEEDBACK'
>>> x.grad[0].role
'feedback to first input'

Example with batched inputs:

>>> x = Variable(data=[1, 2, 3], role="first input", requires_grad=True)
>>> y = Variable(data=[4, 5, 6], role="second input", requires_grad=False)
>>> result = Add.apply(x, y)
>>> result.data
[5, 7, 9]
>>> result.role
'first input and second input'
>>> result.requires_grad
True
classmethod apply(*args, **kwargs)#

Applies the forward function of the custom Function class.

This method handles both cases: when setup_context is defined and sets up the ctx (context) object separately, and when the ctx object is set up within the forward method itself.

static backward(ctx, grad_output)[source]#

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-Variable outputs of the forward function), and it should return as many variables as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Variable, or is a Variable not requiring gradients, you can just pass None as the gradient for that input.

The context can be used to retrieve variables saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, y)[source]#

Define the forward of the custom autodiff Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (variables or other types).

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the afnio.autodiff.Function.setup_context() staticmethod to handle setting up the ctx object. Here, output is the output of forward, and inputs is a tuple of the inputs to forward.

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Variables should not be stored directly on ctx. Instead, variables should be saved with ctx.save_for_backward() if they are intended to be used in backward.

static setup_context(ctx, inputs, output)#

There are two ways to define the forward pass of an autodiff.Function.

Either:

  1. Override forward with the signature forward(ctx, *args, **kwargs). setup_context is not overridden. Setting up the ctx for backward happens inside the forward.

  2. Override forward with the signature forward(*args, **kwargs) and override setup_context. Setting up the ctx for backward happens inside setup_context (as opposed to inside the forward).
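
Putting the pieces above together, the following is a minimal sketch of a hypothetical custom Function written in the separate forward / setup_context style (Usage 2). The Reverse operation, its gradient text, and the import paths are illustrative assumptions rather than part of afnio's API; only the contract documented above (apply(), setup_context(), ctx.needs_input_grad, and returning None for inputs that do not need gradients) is taken from this page.

# Import paths are assumed for illustration; this page references
# afnio.autodiff.Function and constructs Variable directly.
from afnio.autodiff import Function
from afnio import Variable  # assumed import location

class Reverse(Function):
    """Hypothetical operation that reverses a string Variable's data."""

    @staticmethod
    def forward(x):
        # Usage 2: forward receives only the inputs, no ctx argument.
        return Variable(
            data=x.data[::-1],
            role=f"reversed {x.role}",
            requires_grad=x.requires_grad,
        )

    @staticmethod
    def setup_context(ctx, inputs, output):
        # inputs is the tuple of arguments passed to forward; output is its result.
        (x,) = inputs
        # Store only plain data on ctx. Variables needed in backward should be
        # saved with ctx.save_for_backward() instead of being stored directly.
        ctx.input_role = x.role

    @staticmethod
    def backward(ctx, grad_output):
        # Return one gradient per input to forward; None for inputs that do not
        # require gradients, as reported by ctx.needs_input_grad.
        if not ctx.needs_input_grad[0]:
            return None
        return Variable(data=grad_output.data, role=f"feedback to {ctx.input_role}")

out = Reverse.apply(Variable(data="abc", role="text", requires_grad=True))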

class afnio.autodiff.basic_ops.Split(*args, **kwargs)[source]#

Bases: Function

Implements a split operation for Variable instances within the afnio framework, supporting automatic differentiation.

This class inherits from Function and requires both the forward and backward methods to be defined.

The Split function divides the .data of the input Variable into multiple parts using a specified delimiter sep. If maxsplit is specified, the split operation is limited to a maximum number of splits. It handles both scalar and list .data fields:

  • Scalars: The scalar .data (a single string) is split into substrings based on the specified sep and maxsplit parameters.

  • Lists: Each element of the list .data (strings) is split individually. If splits of varying lengths occur, shorter splits are automatically padded with empty strings to ensure consistent dimensions.

During backpropagation, feedback is collected and aggregated across all split parts. The combined feedback is propagated back to the original input Variable, allowing for the proper computation of gradients.

Example with scalar inputs:

>>> x = Variable(data="afnio is great!", role="sentence", requires_grad=True)
>>> result = Split.apply(x, sep=" ", maxsplit=1)
>>> [var.data for var in result]
['afnio', 'is great!']
>>> result[0].role
'split part 0 of sentence'
>>> g_1 = Variable(data="MY_FIRST_FEEDBACK", role="gradient")
>>> g_2 = Variable(data="MY_SECOND_FEEDBACK", role="gradient")
>>> result[0].backward(g_1, retain_graph=True)
>>> result[1].backward(g_2)
>>> x.grad[0].data
'Here is the combined feedback we got for this specific sentence and other variables: <ITEM>MY_FIRST_FEEDBACK</ITEM><ITEM></ITEM>'
>>> x.grad[0].role
'feedback to sentence'
>>> x.grad[1].data
'Here is the combined feedback we got for this specific sentence and other variables: <ITEM></ITEM><ITEM>MY_SECOND_FEEDBACK</ITEM>'
>>> x.grad[1].role
'feedback to sentence'

Example with batched inputs:

>>> x = Variable(
...     data=["afnio is great!", "Deep learning"],
...     role="sentences",
...     requires_grad=True
... )
>>> result = Split.apply(x, sep=" ", maxsplit=2)
>>> [var.data for var in result]
[['afnio', 'Deep'], ['is', 'learning'], ['great!', '']]
>>> g = Variable(data="MY_FEEDBACK", role="gradient")
>>> result[1].backward(g)
>>> x.grad[0].data
'Here is the combined feedback we got for this specific sentences and other variables: <ITEM></ITEM><ITEM>MY_FEEDBACK</ITEM><ITEM></ITEM>'
>>> x.grad[0].role
'feedback to sentences'
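
The per-element splitting and padding behavior described above can be sketched in plain Python; this is illustrative only, not the library's implementation.

def split_batch(data, sep=None, maxsplit=-1):
    # Split each string in the batch individually.
    splits = [s.split(sep, maxsplit) for s in data]
    # Pad shorter splits with empty strings so every row has the same length.
    width = max(len(parts) for parts in splits)
    padded = [parts + [""] * (width - len(parts)) for parts in splits]
    # Regroup so the i-th output collects the i-th part of every batch element,
    # matching the batched example above.
    return [list(group) for group in zip(*padded)]

split_batch(["afnio is great!", "Deep learning"], sep=" ", maxsplit=2)
# -> [['afnio', 'Deep'], ['is', 'learning'], ['great!', '']]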
classmethod apply(*args, **kwargs)#

Applies the forward function of the custom Function class.

This method handles both cases: when setup_context is defined and sets up the ctx (context) object separately, and when the ctx object is set up within the forward method itself.

static backward(ctx, *grad_outputs)[source]#

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-Variable outputs of the forward function), and it should return as many variables as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Variable, or is a Variable not requiring gradients, you can just pass None as the gradient for that input.

The context can be used to retrieve variables saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x, sep=None, maxsplit=-1)[source]#

Define the forward of the custom autodiff Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (variables or other types).

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the afnio.autodiff.Function.setup_context() staticmethod to handle setting up the ctx object. Here, output is the output of forward, and inputs is a tuple of the inputs to forward.

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Variables should not be stored directly on ctx. Instead, variables should be saved with ctx.save_for_backward() if they are intended to be used in backward.

static setup_context(ctx, inputs, output)#

There are two ways to define the forward pass of an autodiff.Function.

Either:

  1. Override forward with the signature forward(ctx, *args, **kwargs). setup_context is not overridden. Setting up the ctx for backward happens inside the forward.

  2. Override forward with the signature forward(*args, **kwargs) and override setup_context. Setting up the ctx for backward happens inside setup_context (as opposed to inside the forward).

class afnio.autodiff.basic_ops.Sum(*args, **kwargs)[source]#

Bases: Function

Implements a summation operation for a list of Variable instances within the afnio framework, supporting automatic differentiation.

This class inherits from Function and requires both the forward and backward methods to be defined.

The Sum function aggregates the .data, .role, and .requires_grad attributes of all input Variable instances into a single Variable. It supports both scalar and list .data fields:

  • Scalars: Computes the arithmetic sum for numerical data (int, float) or concatenates all string values, wrapping each in <ITEM></ITEM> tags.

  • Lists: Aggregates the corresponding elements of the lists. For numerical data, it sums the corresponding elements. For string data, it concatenates them, wrapping each element in <ITEM></ITEM> tags.

During backpropagation, the function distributes the gradient to all input Variable instances that require gradients.

Example with scalar inputs:

>>> x = Variable(data="abc", role="first input", requires_grad=True)
>>> y = Variable(data="def", role="second input", requires_grad=False)
>>> result = Sum.apply([x, y])
>>> result.data
'<ITEM>abc</ITEM><ITEM>def</ITEM>'
>>> result.role
'first input and second input'
>>> result.requires_grad
True
>>> g = Variable(data="MY_FEEDBACK", role="add gradient")
>>> result.backward(g)
>>> x.grad[0].data
'Here is the combined feedback we got for this specific first input and other variables: MY_FEEDBACK'
>>> x.grad[0].role
'feedback to first input'

Example with batched inputs:

>>> x = Variable(data=[1, 2, 3.5], role="first input", requires_grad=True)
>>> y = Variable(data=[4, 5, 6], role="second input", requires_grad=False)
>>> result = Sum.apply([x, y])
>>> result.data
[5, 7, 9.5]
>>> result.role
'first input and second input'
>>> result.requires_grad
True
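
The aggregation rule described above can be sketched in plain Python; this is illustrative only, not the library's implementation.

def sum_scalars(values):
    # Numerical data: arithmetic sum.
    if all(isinstance(v, (int, float)) for v in values):
        return sum(values)
    # String data: concatenate, wrapping each value in <ITEM></ITEM> tags.
    return "".join(f"<ITEM>{v}</ITEM>" for v in values)

def sum_data(batch):
    # batch holds the .data fields of the input Variables.
    if all(isinstance(d, list) for d in batch):
        # Lists: aggregate corresponding elements across inputs.
        return [sum_scalars(elems) for elems in zip(*batch)]
    # Scalars: aggregate directly.
    return sum_scalars(batch)

sum_data(["abc", "def"])            # -> '<ITEM>abc</ITEM><ITEM>def</ITEM>'
sum_data([[1, 2, 3.5], [4, 5, 6]])  # -> [5, 7, 9.5]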
classmethod apply(*args, **kwargs)#

Applies the forward function of the custom Function class.

This method handles both cases: when setup_context is defined and sets up the ctx (context) object separately, and when the ctx object is set up within the forward method itself.

static backward(ctx, grad_output)[source]#

Define a formula for differentiating the operation with backward mode automatic differentiation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as forward() returned (None will be passed in for non-Variable outputs of the forward function), and it should return as many variables as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Variable, or is a Variable not requiring gradients, you can just pass None as the gradient for that input.

The context can be used to retrieve variables saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, x)[source]#

Define the forward of the custom autodiff Function.

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (variables or other types).

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the afnio.autodiff.Function.setup_context() staticmethod to handle setting up the ctx object. Here, output is the output of forward, and inputs is a tuple of the inputs to forward.

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Variables should not be stored directly on ctx. Instead, variables should be saved with ctx.save_for_backward() if they are intended to be used in backward.

static setup_context(ctx, inputs, output)#

There are two ways to define the forward pass of an autodiff.Function.

Either:

  1. Override forward with the signature forward(ctx, *args, **kwargs). setup_context is not overridden. Setting up the ctx for backward happens inside the forward.

  2. Override forward with the signature forward(*args, **kwargs) and override setup_context. Setting up the ctx for backward happens inside setup_context (as opposed to inside the forward).