experimental_experiment.torch_interpreter._aten_functions

See https://pytorch.org/docs/stable/torch.compiler_ir.html for the full list of aten functions.

class experimental_experiment.torch_interpreter._aten_functions.Reduction(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]
experimental_experiment.torch_interpreter._aten_functions.aten_FunctionCtx(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], *args, **kwargs)[source]

not implemented

experimental_experiment.torch_interpreter._aten_functions.aten___and___Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = '__and___Tensor') str[source]

and

experimental_experiment.torch_interpreter._aten_functions.aten__assert_scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: Any, name: str = '_assert_scalar')[source]

_assert_scalar

experimental_experiment.torch_interpreter._aten_functions.aten__embedding_bag(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], weight: str, indices: str, offsets: str, scale_grad_by_freq: bool = False, mode: int = 0, sparse: bool = False, per_sample_weights: str | None = None, include_last_offset: bool = False, padding_idx: int | None = None, name: str = '_embedding_bag') Tuple[str, str, str, str][source]

_embedding_bag

experimental_experiment.torch_interpreter._aten_functions.aten__enter_autocast(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], *args: List[Any]) str[source]

Returns a dummy result which will be removed after the graph is created.

experimental_experiment.torch_interpreter._aten_functions.aten__exit_autocast(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], output_of_enter_auto_cast: str) str[source]

Returns a dummy result which will be removed after the graph is created.

experimental_experiment.torch_interpreter._aten_functions.aten__log_api_usage_once(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], module_name: str) str[source]

_log_api_usage_once: creates a dummy result.

experimental_experiment.torch_interpreter._aten_functions.aten__log_softmax(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = -1, unnamed: bool = False, dtype: torch.dtype | None = None, name: str = '_log_softmax') str[source]

logsoftmax

experimental_experiment.torch_interpreter._aten_functions.aten__log_softmax_backward_data(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, output: str, dim: int, input_dtype: torch.dtype | None = None, name: str = '_log_softmax_backward_data')[source]

logsoftmax backward

experimental_experiment.torch_interpreter._aten_functions.aten__native_batch_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str | None = None, bias: str | None = None, running_mean: str | None = None, running_var: str | None = None, training: bool = False, momentum: float = 0.9, eps: float = 1e-05, name: str = '_native_batch_norm', empty_mean_std: bool = False) Tuple[str, str, str][source]

batch normalization
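
A minimal numpy sketch of the inference-mode computation this converter targets (ONNX BatchNormalization semantics; the names below are illustrative, not this module's API)::

    import numpy as np

    # y = (x - running_mean) / sqrt(running_var + eps) * weight + bias,
    # with per-channel statistics broadcast over axis 1.
    x = np.random.randn(2, 3, 4, 4).astype(np.float32)
    mean, var = np.zeros(3, np.float32), np.ones(3, np.float32)
    weight, bias = np.ones(3, np.float32), np.zeros(3, np.float32)
    eps = 1e-5
    c = (1, 3, 1, 1)  # broadcast shape for the channel axis
    y = (x - mean.reshape(c)) / np.sqrt(var.reshape(c) + eps)
    y = y * weight.reshape(c) + bias.reshape(c)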

experimental_experiment.torch_interpreter._aten_functions.aten__native_batch_norm_legit_no_stats(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str | None = None, bias: str | None = None, training: bool = False, momentum: float = 0.9, eps: float = 1e-05, name: str = '_native_batch_norm_legit_no_stats') Tuple[str, str, str][source]

batch normalization = aten__native_batch_norm

experimental_experiment.torch_interpreter._aten_functions.aten__native_batch_norm_legit_no_training(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str | None = None, bias: str | None = None, running_mean: str | None = None, running_var: str | None = None, momentum: float = 0.9, eps: float = 1e-05, name: str = '_native_batch_norm_legit_no_training') Tuple[str, str, str][source]

batch normalization = aten__native_batch_norm with training=False

experimental_experiment.torch_interpreter._aten_functions.aten__prelu_kernel(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str) str[source]

prelu

experimental_experiment.torch_interpreter._aten_functions.aten__prelu_kernel_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, x: str, weight: str) Tuple[str, str][source]

prelu backward

experimental_experiment.torch_interpreter._aten_functions.aten__set_grad_enabled(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], enable: bool) str[source]

Returns a dummy result which will be removed after the graph is created.

experimental_experiment.torch_interpreter._aten_functions.aten__softmax(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = -1, half_to_float: bool = False) str[source]

softmax

experimental_experiment.torch_interpreter._aten_functions.aten__softmax_backward_data(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, y: str, dim: int, input_dtype: torch.dtype | None = None, name: str = '_softmax_backward_data') str[source]

softmax backward

experimental_experiment.torch_interpreter._aten_functions.aten__sym_sqrt(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = '_sym_sqrt') str[source]

symbolic sqrt

experimental_experiment.torch_interpreter._aten_functions.aten__to_copy(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, non_blocking=False, memory_format=None) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten__unsafe_index_put(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], self: str, indices: List[str], values: str, accumulate: bool = False) str[source]

M[…, :, …] = …

experimental_experiment.torch_interpreter._aten_functions.aten__unsafe_view(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, size: str) str[source]

view (reshape)

experimental_experiment.torch_interpreter._aten_functions.aten_abs(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

abs

experimental_experiment.torch_interpreter._aten_functions.aten_acos(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

acos

experimental_experiment.torch_interpreter._aten_functions.aten_acosh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

acosh

experimental_experiment.torch_interpreter._aten_functions.aten_adaptive_avg_pool1d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: Tuple[int, ...], name='aten.adaptive_avg_pool1d')[source]

adaptive AvgPool
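
A small torch check of the special case where adaptive pooling degenerates into a plain AveragePool (illustration only, not this module's API)::

    import torch
    import torch.nn.functional as F

    # When the input length divides evenly by output_size, adaptive
    # average pooling equals AvgPool with kernel = stride = in / out.
    x = torch.randn(1, 3, 8)
    a = F.adaptive_avg_pool1d(x, output_size=4)
    b = F.avg_pool1d(x, kernel_size=2, stride=2)
    assert torch.allclose(a, b)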

experimental_experiment.torch_interpreter._aten_functions.aten_adaptive_avg_pool2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: Tuple[int, ...], name='aten.adaptive_avg_pool2d')[source]

adaptive AvgPool

experimental_experiment.torch_interpreter._aten_functions.aten_adaptive_avg_pool3d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: Tuple[int, ...], name='aten.adaptive_avg_pool3d')[source]

adaptive AvgPool

experimental_experiment.torch_interpreter._aten_functions.aten_add(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'add') str[source]

add

experimental_experiment.torch_interpreter._aten_functions.aten_add_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: Any | None = None, name: str = 'add_Scalar') str[source]

add

experimental_experiment.torch_interpreter._aten_functions.aten_add_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: Any | None = None, name: str = 'add_Tensor') str[source]

add

experimental_experiment.torch_interpreter._aten_functions.aten_add__Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: Any | None = None, name: str = 'add__Tensor') str[source]

add

experimental_experiment.torch_interpreter._aten_functions.aten_addcmul(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, t1: str, t2: str, value: float = 1.0, name: str = 'addcmul') str[source]

addcmul

experimental_experiment.torch_interpreter._aten_functions.aten_addmm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], a: str, b: str, c: str, beta: float = 1.0, alpha: float = 1.0) str[source]

gemm
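
A sketch of the semantics behind the Gemm mapping (torch used for illustration; how a, b, c line up with torch.addmm's argument order is an assumption)::

    import torch

    # aten::addmm computes beta * input + alpha * (mat1 @ mat2),
    # which is exactly ONNX Gemm.
    c = torch.randn(3, 5)
    a = torch.randn(3, 4)
    b = torch.randn(4, 5)
    y = torch.addmm(c, a, b, beta=0.5, alpha=2.0)
    assert torch.allclose(y, 0.5 * c + 2.0 * (a @ b))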

experimental_experiment.torch_interpreter._aten_functions.aten_alias(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_all(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

all

experimental_experiment.torch_interpreter._aten_functions.aten_all_dim(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, keepdim: bool = False, name: str = 'all_dim') str[source]

all_dim

experimental_experiment.torch_interpreter._aten_functions.aten_amax(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int | None = None, keepdim: bool = False, output_dtype: torch.dtype | None = None, name: str = 'aten_amax') str[source]

reducemax

experimental_experiment.torch_interpreter._aten_functions.aten_and(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'and') str[source]

and

experimental_experiment.torch_interpreter._aten_functions.aten_and_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='and') str[source]

and

experimental_experiment.torch_interpreter._aten_functions.aten_any(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'any') str[source]

any

experimental_experiment.torch_interpreter._aten_functions.aten_any_dim(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, keepdim: bool = False, name: str = 'all_dim') str[source]

any_dim

experimental_experiment.torch_interpreter._aten_functions.aten_arange(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], start: int | None = None, end: int | None = None, step: int = 1, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, name: str = 'arange', requires_grad: bool = False) str[source]

arange

experimental_experiment.torch_interpreter._aten_functions.aten_arange_start(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], start: int | None = None, end: int | None = None, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None) str[source]

arange

experimental_experiment.torch_interpreter._aten_functions.aten_arange_start_step(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], start: int | None = None, end: int | None = None, step: int = 1, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None) str[source]

arange

experimental_experiment.torch_interpreter._aten_functions.aten_argmax(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int | None = None, keepdim: bool = False) str[source]

argmax

experimental_experiment.torch_interpreter._aten_functions.aten_as_strided(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, size: List[int], stride: List[int], storage_offset: int | None = None, name: str = 'as_strided') str[source]

as_strided

experimental_experiment.torch_interpreter._aten_functions.aten_asin(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

asin

experimental_experiment.torch_interpreter._aten_functions.aten_asinh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

asinh

experimental_experiment.torch_interpreter._aten_functions.aten_atan(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

atan

experimental_experiment.torch_interpreter._aten_functions.aten_atanh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

atanh

experimental_experiment.torch_interpreter._aten_functions.aten_auto_functionalized(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], wrapped_func, *args: Sequence[str], **kwargs) str[source]

identity, calling a local function

experimental_experiment.torch_interpreter._aten_functions.aten_avg_pool2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, kernel_size: Sequence[int] = (), stride: Sequence[int] = (), padding: Sequence[int] = (0, 0), ceil_mode: bool = False, count_include_pad: bool = True, divisor_override: int | None = None, name: str = 'aten_avg_pool2d') str[source]

AveragePool

experimental_experiment.torch_interpreter._aten_functions.aten_avg_pool2d_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, x: str, kernel_size: Sequence[int] = (), stride: Sequence[int] = (), padding: Sequence[int] = (0, 0), ceil_mode: bool = False, count_include_pad: bool = True, divisor_override: int | None = None, **kwargs) str[source]

AveragePoolGrad (not a standard onnx operator)

experimental_experiment.torch_interpreter._aten_functions.aten_baddbmm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, batch1: str, batch2: str, beta: str | None = None, alpha: str | None = None, name: str = 'baddbmm') str[source]

baddbmm
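
The batched counterpart of addmm; a quick torch illustration of the formula the converter has to reproduce::

    import torch

    # aten::baddbmm computes beta * x + alpha * (batch1 @ batch2).
    x = torch.randn(2, 3, 5)
    batch1 = torch.randn(2, 3, 4)
    batch2 = torch.randn(2, 4, 5)
    y = torch.baddbmm(x, batch1, batch2, beta=0.5, alpha=2.0)
    assert torch.allclose(y, 0.5 * x + 2.0 * torch.bmm(batch1, batch2))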

experimental_experiment.torch_interpreter._aten_functions.aten_batch_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str | None = None, bias: str | None = None, running_mean: str | None = None, running_var: str | None = None, training: bool = False, momentum: float = 0.9, eps: float = 1e-05, cudnn_enabled: bool = False, name: str = 'batch_norm') Tuple[str, str, str][source]

batch normalization

experimental_experiment.torch_interpreter._aten_functions.aten_bitwise_not(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'bitwise_not') str[source]

bitwise not

experimental_experiment.torch_interpreter._aten_functions.aten_bitwise_or(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'bitwise_or') str[source]

bitwise or

experimental_experiment.torch_interpreter._aten_functions.aten_bitwise_or_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'bitwise_or_Tensor') str[source]

bitwise or

experimental_experiment.torch_interpreter._aten_functions.aten_bitwise_or__Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'bitwise_or__Tensor') str[source]

bitwise or

experimental_experiment.torch_interpreter._aten_functions.aten_bmm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

bmm

experimental_experiment.torch_interpreter._aten_functions.aten_broadcast_tensors(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], tensors: List[str], name: str = 'broadcast_tensors') List[str][source]

expands the tensors to a common broadcast shape

experimental_experiment.torch_interpreter._aten_functions.aten_cat(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], tensors: Tuple[str, ...], dim: int = 0, name='cat') str[source]

concat

experimental_experiment.torch_interpreter._aten_functions.aten_chunk(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, chunks: int, dim: int = 0, use_sequence: bool = False, name: str = 'chunk') List[str][source]

chunk

experimental_experiment.torch_interpreter._aten_functions.aten_clamp(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, min: float | None = None, max: float | None = None, name: str = 'clamp') str[source]

clip

experimental_experiment.torch_interpreter._aten_functions.aten_clamp_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, min_t: str | None, max_t: str | None, name: str = 'clamp_Tensor') str[source]

clip

experimental_experiment.torch_interpreter._aten_functions.aten_clamp_max(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, max_: str, name: str = 'clamp_min') str[source]

clamp_max

experimental_experiment.torch_interpreter._aten_functions.aten_clamp_min(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, min_: str, name: str = 'clamp_min') str[source]

clamp_min

experimental_experiment.torch_interpreter._aten_functions.aten_clip(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, min: float | None = None, max: float | None = None, name: str = 'clip') str[source]

clip

experimental_experiment.torch_interpreter._aten_functions.aten_clone(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, memory_format: str | None = None, name='clone') str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_col2im(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: List[int], kernel_size: List[int], dilation: Sequence[int] = (1, 1), padding: Sequence[int] = (0, 0), stride: Sequence[int] = (1, 1), name: str = 'col2im') str[source]

col2im

experimental_experiment.torch_interpreter._aten_functions.aten_cond(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], cond: str, true_graph: str, false_graph: str, inputs: List[str], name='aten_cond') str[source]

cond

experimental_experiment.torch_interpreter._aten_functions.aten_constant_pad_nd(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, pad: Any, value: float = 0.0, name: str = 'constant_pad_nd') str[source]

pad

experimental_experiment.torch_interpreter._aten_functions.aten_contiguous(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, memory_format=None, name: str = 'contiguous') str[source]

contiguous -> Identity

experimental_experiment.torch_interpreter._aten_functions.aten_conv1d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str = None, stride: Sequence[int] = (1,), padding: str | Sequence[int] = (0,), dilation: Sequence[int] = (1,), groups: int = 1, auto_pad: str = 'NOTSET', name: str = 'conv1d') str[source]

conv1d

experimental_experiment.torch_interpreter._aten_functions.aten_conv2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str = None, stride: Sequence[int] = (1, 1), padding: str | Sequence[int] = (0, 0), dilation: Sequence[int] = (1, 1), groups: int = 1, auto_pad: str = 'NOTSET', name: str = 'conv2d') str[source]

conv2d

experimental_experiment.torch_interpreter._aten_functions.aten_conv2d_padding(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str = None, stride: Sequence[int] = (1, 1), padding: str | Sequence[int] = (0, 0), dilation: Sequence[int] = (1, 1), groups: int = 1, name: str = 'conv2d_padding') str[source]

conv

experimental_experiment.torch_interpreter._aten_functions.aten_conv3d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str = None, stride: Sequence[int] = (1, 1), padding: str | Sequence[int] = (0, 0), dilation: Sequence[int] = (1, 1), groups: int = 1, auto_pad: str = 'NOTSET', name: str = 'conv3d') str[source]

conv3d

experimental_experiment.torch_interpreter._aten_functions.aten_conv_transpose2d_input(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str, stride: List[int], padding: List[int], output_padding: List[int], groups: List[int], dilation: List[int], name: str = 'conv_transpose2d_input') str[source]

conv_transpose2d

experimental_experiment.torch_interpreter._aten_functions.aten_conv_transpose3d_input(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str, stride: List[int], padding: List[int], output_padding: List[int], groups: List[int], dilation: List[int], name: str = 'conv_transpose3d_input') str[source]

conv_transpose3d

experimental_experiment.torch_interpreter._aten_functions.aten_convolution(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input: str, weight: str, bias: str = None, stride: Sequence[int] = (1,), padding: str | Sequence[int] = (0, 0), dilation: Sequence[int] = (1,), transposed: bool = False, output_padding: Sequence[int] = (0,), groups: int = 1, auto_pad: str = 'NOTSET', d: int = 0, name: str = 'convolution') str[source]

conv

experimental_experiment.torch_interpreter._aten_functions.aten_copy(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, src: str, non_blocking: bool = False, name: str = 'copy') str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_copy_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, src: str, non_blocking: bool = False) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_cos(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'cos') str[source]

cos

experimental_experiment.torch_interpreter._aten_functions.aten_cosh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

cosh

experimental_experiment.torch_interpreter._aten_functions.aten_cross_entropy_loss(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, target: str, weight: str | None = None, reduction: int = 1, ignore_index: int = -100, label_smoothing: float = 0.0) str[source]

cross_entropy_loss
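
cross_entropy_loss is the composition of log_softmax and nll_loss; a torch check of that identity (illustration only)::

    import torch
    import torch.nn.functional as F

    x = torch.randn(4, 10)
    target = torch.randint(0, 10, (4,))
    a = F.cross_entropy(x, target)
    b = F.nll_loss(F.log_softmax(x, dim=-1), target)
    assert torch.allclose(a, b)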

experimental_experiment.torch_interpreter._aten_functions.aten_cudnn_batch_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str | None, running_mean: str | None, running_var: str | None, training: bool = False, momentum: float = 0.9, eps: float = 1e-05, name: str = 'cudnn_batch_norm') Tuple[str, str, str][source]

cudnn_batch_norm

experimental_experiment.torch_interpreter._aten_functions.aten_cumsum(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: str, dtype: torch.dtype | None = None, name: str = 'cumsum') str[source]

cumsum

experimental_experiment.torch_interpreter._aten_functions.aten_detach(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_detach_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_div(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='div') str[source]

div

experimental_experiment.torch_interpreter._aten_functions.aten_div_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

div

experimental_experiment.torch_interpreter._aten_functions.aten_div_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: Any | None = None, name: str = 'div_Tensor') str[source]

div

experimental_experiment.torch_interpreter._aten_functions.aten_div_Tensor_mode(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, rounding_mode: str | None = None, name: str = 'div_Tensor_mode') str[source]

div_Tensor_mode

experimental_experiment.torch_interpreter._aten_functions.aten_div__Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: Any | None = None, name: str = 'div__Tensor') str[source]

div

experimental_experiment.torch_interpreter._aten_functions.aten_dropout(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, p: str = 0.5, training: str = True, name: str = 'dropout') str[source]

dropout

experimental_experiment.torch_interpreter._aten_functions.aten_dropout_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, p: str = 0.5, training: str = True, name: str = 'dropout_') str[source]

inplace dropout; we assume inplace modifications were removed

experimental_experiment.torch_interpreter._aten_functions.aten_einsum(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], equation: str, tensors: Sequence[str], path: int | None = None, name: str = 'einsum') str[source]

einsum

experimental_experiment.torch_interpreter._aten_functions.aten_elu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, alpha: float = 1.0, scale: float = 1.0, input_scale: int = 1, inplace: bool = False, name='elu') str[source]

elu

experimental_experiment.torch_interpreter._aten_functions.aten_elu_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, alpha: float = 1.0, scale: float = 1.0, input_scale: int = 1, inplace: bool = False, name='elu_') str[source]

elu_; inplace modifications are not allowed, we assume they were removed

experimental_experiment.torch_interpreter._aten_functions.aten_embedding(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], weight: str, indices: str, padding_idx: int | None = None, max_norm: int | None = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, name: str = 'embedding') str[source]

embedding

padding_idx is only used during training (see torch.nn.functional.embedding()); it is not taken into account here.
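
An embedding lookup is a row gather on the weight matrix (ONNX Gather on axis 0); a torch illustration::

    import torch

    weight = torch.randn(10, 4)
    indices = torch.tensor([[1, 3], [0, 2]])
    # F.embedding(indices, weight) == weight[indices]
    a = torch.nn.functional.embedding(indices, weight)
    assert torch.equal(a, weight[indices])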

experimental_experiment.torch_interpreter._aten_functions.aten_embedding_bag_padding_idx(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], weight: str, indices: str, offsets: str, scale_grad_by_freq: bool = False, mode: int = 0, sparse: bool = False, per_sample_weights: str | None = None, include_last_offset: bool = False, padding_idx: int | None = None, name: str = 'embedding_bag.padding_idx') Tuple[str, str, str, str][source]

embedding_bag.padding_idx

experimental_experiment.torch_interpreter._aten_functions.aten_empty_like(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, memory_format=None) str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_empty_permuted(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], size: str, physical_layout: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, requires_grad: bool = False, pin_memory: bool = False, name: str = 'empty_permuted') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_empty_strided(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], size: str, stride: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, requires_grad: bool = False, pin_memory: bool = False, name: str = 'empty_strided') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_eq(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='eq') str[source]

equal

experimental_experiment.torch_interpreter._aten_functions.aten_eq_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

equal

experimental_experiment.torch_interpreter._aten_functions.aten_eq_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='eq_Tensor') str[source]

equal

experimental_experiment.torch_interpreter._aten_functions.aten_erf(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'erf') str[source]

erf

experimental_experiment.torch_interpreter._aten_functions.aten_exp(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'exp') str[source]

exp

experimental_experiment.torch_interpreter._aten_functions.aten_expand(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, sizes: str | List[int | str], implicit: bool = False, name: str = 'expand') str[source]

expand

experimental_experiment.torch_interpreter._aten_functions.aten_expand_as(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, like: str, name: str = 'expand_as') str[source]

expand_as

experimental_experiment.torch_interpreter._aten_functions.aten_feature_dropout(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, p: float, train: bool, name: str = 'feature_dropout') str[source]

feature_dropout

experimental_experiment.torch_interpreter._aten_functions.aten_fill_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, v: str, name: str = 'fill_Scalar') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_fill_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, v: str, name: str = 'fill_Tensor') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_flatten(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, start_dim: int = 1, end_dim: int = -1, name: str = 'flatten') str[source]

flatten

experimental_experiment.torch_interpreter._aten_functions.aten_flatten_using_ints(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, start_dim: int = 1, end_dim: int = -1, name: str = 'flatten_using_ints') str[source]

flatten

experimental_experiment.torch_interpreter._aten_functions.aten_floor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

floor

experimental_experiment.torch_interpreter._aten_functions.aten_floor_divide(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='floor_divide') str[source]

floor + div

experimental_experiment.torch_interpreter._aten_functions.aten_floordiv(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

floor + div
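
floor division rounds toward negative infinity, hence the Floor after the Div; a torch check::

    import torch

    x = torch.tensor([7.0, -7.0])
    y = torch.tensor([2.0, 2.0])
    # tensor([ 3., -4.]) -- not truncation toward zero
    assert torch.equal(torch.floor_divide(x, y), torch.floor(x / y))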

experimental_experiment.torch_interpreter._aten_functions.aten_full(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], size: str, fill_value: float, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, requires_grad: bool = False, name: str = 'full') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_full_like(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, fill_value: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, memory_format=None, name: str = 'full_like') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_gather(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, index: str, sparse_grad: bool = False, name: str = 'gather') str[source]

gather

experimental_experiment.torch_interpreter._aten_functions.aten_ge(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'ge') str[source]

greater or equal

experimental_experiment.torch_interpreter._aten_functions.aten_ge_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

greater or equal

experimental_experiment.torch_interpreter._aten_functions.aten_ge_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

greater or equal

experimental_experiment.torch_interpreter._aten_functions.aten_gelu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, approximate: str = 'none', name: str = 'gelu') str[source]

gelu

experimental_experiment.torch_interpreter._aten_functions.aten_getattr(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, attr_name: str, name: str = 'getattr') str[source]

getattr

experimental_experiment.torch_interpreter._aten_functions.aten_grid_sampler(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, grid: str, interpolation_mode: int, padding_mode: int, align_corners: bool, name: str = 'grid_sampler') str[source]

grid_sampler

experimental_experiment.torch_interpreter._aten_functions.aten_group_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, num_groups: int, weight: str | None = None, bias: str | None = None, eps: float = 1e-05, cudnn_enabled: bool = True, name: str = 'group_norm') str[source]

instance_normalization

experimental_experiment.torch_interpreter._aten_functions.aten_gt(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'gt') str[source]

greater

experimental_experiment.torch_interpreter._aten_functions.aten_gt_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

greater

experimental_experiment.torch_interpreter._aten_functions.aten_gt_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

greater

experimental_experiment.torch_interpreter._aten_functions.aten_hardsigmoid(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'hardsigmoid') str[source]

hardsigmoid

experimental_experiment.torch_interpreter._aten_functions.aten_hardswish(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False, name: str = 'hardswish') str[source]

hardswish

experimental_experiment.torch_interpreter._aten_functions.aten_hardswish_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False, name: str = 'hardswish_') str[source]

hardswish_; inplace modifications are not allowed, we assume they were removed

experimental_experiment.torch_interpreter._aten_functions.aten_hardtanh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, min_val: float = -1.0, max_val: float = 1.0, inplace: bool = False, name: str = 'hardtanh') str[source]

hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> Tensor
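
hardtanh is a clamp to [min_val, max_val] and therefore maps onto ONNX Clip; a torch check::

    import torch
    import torch.nn.functional as F

    x = torch.linspace(-3, 3, 7)
    assert torch.equal(F.hardtanh(x, min_val=-1.0, max_val=1.0),
                       torch.clamp(x, -1.0, 1.0))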

experimental_experiment.torch_interpreter._aten_functions.aten_hardtanh_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, min_val: float = -1.0, max_val: float = 1.0, inplace: bool = False, name: str = 'hardtanh_') str[source]

hardtanh_; inplace modifications are not allowed, we assume they were removed

experimental_experiment.torch_interpreter._aten_functions.aten_hardtanh_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, x: str, min_val: float, max_val: float, name: str = 'hardtanh_backward') str[source]

hardtanh_backward

experimental_experiment.torch_interpreter._aten_functions.aten_im2col(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, kernel_size: Sequence[int], dilation: Sequence[int] = (1, 1), padding: Sequence[int] = (0, 0), stride: Sequence[int] = (1, 1), name: str = 'im2col') str[source]

im2col

experimental_experiment.torch_interpreter._aten_functions.aten_imul(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='imul') str[source]

imul

experimental_experiment.torch_interpreter._aten_functions.aten_index_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, indices: List[int], name: str = 'index_Tensor') str[source]

[…, :, …]

experimental_experiment.torch_interpreter._aten_functions.aten_index_put(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, indices: List[str], values: str, accumulate: bool = False, name='aten_index_put') str[source]

M[…, :, …] = …

experimental_experiment.torch_interpreter._aten_functions.aten_index_put_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, indices: List[str], values: str, accumulate: bool = False, name='aten_index_put_') str[source]

M[…, :, …] = …
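
A torch sketch of the write semantics covered by both index_put variants (with accumulate=True the values are added instead; a natural ONNX counterpart is ScatterND, though that mapping is an assumption here)::

    import torch

    x = torch.zeros(3, 4)
    rows = torch.tensor([0, 2])
    values = torch.tensor([[1.0, 1.0, 1.0, 1.0],
                           [2.0, 2.0, 2.0, 2.0]])
    y = torch.index_put(x, (rows,), values)  # y[rows] = values
    assert torch.equal(y[0], values[0]) and torch.equal(y[2], values[1])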

experimental_experiment.torch_interpreter._aten_functions.aten_index_select(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, index: str, name: str = 'index_select') str[source]

[…, :, …]

experimental_experiment.torch_interpreter._aten_functions.aten_instance_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str | None = None, bias: str | None = None, running_mean: str | None = None, running_var: str | None = None, use_input_stats: bool = True, momentum: float = 0.1, eps: float = 1e-05, cudnn_enabled: bool = False, name: str = 'instance_norm') str[source]

instance_norm

experimental_experiment.torch_interpreter._aten_functions.aten_interpolate(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, size: str | None = None, mode: str = 'bilinear', align_corners: bool | None = None, scale_factor: float | None = None, recompute_scale_factor: bool = False, antialias: bool = False, name: str = 'interpolate') str[source]

interpolate

experimental_experiment.torch_interpreter._aten_functions.aten_isinf(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'isinf') str[source]

isinf

experimental_experiment.torch_interpreter._aten_functions.aten_isnan(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'isnan') str[source]

isnan

experimental_experiment.torch_interpreter._aten_functions.aten_l1_loss(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, target: str, reduction: str = 'mean', name: str = 'l1_loss') str[source]

l1_loss

experimental_experiment.torch_interpreter._aten_functions.aten_layer_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, normalized_shape: Sequence[int], weight: str | None = None, bias: str | None = None, eps: float = 1e-05, cudnn_enable: bool = False, name='layer_norm') str[source]

layer_norm

experimental_experiment.torch_interpreter._aten_functions.aten_le(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'le') str[source]

less or equal

experimental_experiment.torch_interpreter._aten_functions.aten_le_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

less or equal

experimental_experiment.torch_interpreter._aten_functions.aten_le_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

less or equal

experimental_experiment.torch_interpreter._aten_functions.aten_leaky_relu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], a: str, negative_slope: float = 0.01, inplace: bool = False, name: str = 'leaky_relu') str[source]

leaky relu

experimental_experiment.torch_interpreter._aten_functions.aten_leaky_relu_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], a: str, negative_slope: float = 0.01, inplace: bool = False, name: str = 'leaky_relu_') str[source]

leaky_relu_; inplace modifications are not allowed, we assume they were removed

experimental_experiment.torch_interpreter._aten_functions.aten_leaky_relu_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, x: str, negative_slope: float, self_is_result: bool, name='leaky_relu_backward') str[source]

leaky relu

experimental_experiment.torch_interpreter._aten_functions.aten_lift_fresh_copy(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_linalg_vector_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, ord: float = 2.0, dim: int | None = None, keepdim: bool = False, dtype: int | None = None, name: str = 'linalg_vector_norm') str[source]

Reduce* (the actual reduce operator depends on ord)

experimental_experiment.torch_interpreter._aten_functions.aten_linear(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, weight: str, bias: str = None) str[source]

linear

experimental_experiment.torch_interpreter._aten_functions.aten_linspace(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], start: str, end: str, steps: int, dtype: torch.dtype | None = None, layout: str = '', device: torch.device | None = None, pin_memory=None, name: str = 'linspace') str[source]

linspace

experimental_experiment.torch_interpreter._aten_functions.aten_log(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

log

experimental_experiment.torch_interpreter._aten_functions.aten_log_softmax_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = -1, unnamed: bool = False, dtype: torch.dtype | None = None, name: str = 'log_softmax_int') str[source]

logsoftmax

experimental_experiment.torch_interpreter._aten_functions.aten_logical_and(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='and') str[source]

and

experimental_experiment.torch_interpreter._aten_functions.aten_logical_not(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name='logical_not') str[source]

logical not

experimental_experiment.torch_interpreter._aten_functions.aten_logical_or(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='or') str[source]

or

experimental_experiment.torch_interpreter._aten_functions.aten_lt(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='lt') str[source]

less

experimental_experiment.torch_interpreter._aten_functions.aten_lt_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

less

experimental_experiment.torch_interpreter._aten_functions.aten_lt_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

less

experimental_experiment.torch_interpreter._aten_functions.aten_masked_fill_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, mask: str, value: str, name='masked_fill_Scalar') str[source]

masked

experimental_experiment.torch_interpreter._aten_functions.aten_masked_fill_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, mask: str, value, name='masked_fill_Tensor') str[source]

masked

experimental_experiment.torch_interpreter._aten_functions.aten_masked_fill__Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, mask: str, value: str, name='masked_fill__Scalar') str[source]

masked, inplace; we assume inplace modifications were removed.
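
masked_fill is a Where over the mask; a torch check of the equivalence::

    import torch

    x = torch.zeros(2, 3)
    mask = torch.tensor([[True, False, True], [False, True, False]])
    a = x.masked_fill(mask, 5.0)
    b = torch.where(mask, torch.tensor(5.0), x)
    assert torch.equal(a, b)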

experimental_experiment.torch_interpreter._aten_functions.aten_matmul(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

matmul

experimental_experiment.torch_interpreter._aten_functions.aten_max(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'max') str[source]

max

experimental_experiment.torch_interpreter._aten_functions.aten_max_dim(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, keepdim: bool = False, name: str = 'max_dim') str[source]

maximum

experimental_experiment.torch_interpreter._aten_functions.aten_max_other(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'max_other') str[source]

maximum

experimental_experiment.torch_interpreter._aten_functions.aten_max_pool1d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, kernel_size: Sequence[int], stride: Sequence[int] = (), padding: Sequence[int] = (0, 0), dilation: Sequence[int] = (1, 1), ceil_mode: bool = False, name: str = 'max_pool1d') str[source]

max_pool1d

experimental_experiment.torch_interpreter._aten_functions.aten_max_pool2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, kernel_size: Sequence[int], stride: Sequence[int] = (), padding: Sequence[int] = (0, 0), dilation: Sequence[int] = (1, 1), ceil_mode: bool = False, name: str = 'max_pool2d') str[source]

max_pool2d

experimental_experiment.torch_interpreter._aten_functions.aten_max_pool2d_with_indices(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, kernel_size: Sequence[int], stride: Sequence[int] = (), padding: Sequence[int] = (0, 0), dilation: Sequence[int] = (1, 1), ceil_mode: bool = False) Tuple[str, str][source]

maxpool

experimental_experiment.torch_interpreter._aten_functions.aten_max_pool3d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, kernel_size: Sequence[int], stride: Sequence[int] = (), padding: Sequence[int] = (0, 0, 0), dilation: Sequence[int] = (1, 1, 1), ceil_mode: bool = False, name: str = 'max_pool3d') str[source]

max_pool3d

experimental_experiment.torch_interpreter._aten_functions.aten_maximum(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'max') str[source]

maximum

experimental_experiment.torch_interpreter._aten_functions.aten_mean(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, name: str = 'mean') str[source]

mean

experimental_experiment.torch_interpreter._aten_functions.aten_mean_dim(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int | List[int] | None = None, keepdim: bool = False, dtype: torch.dtype | None = None) str[source]

reducemean

experimental_experiment.torch_interpreter._aten_functions.aten_min(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'min') str[source]

min

experimental_experiment.torch_interpreter._aten_functions.aten_min_other(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'min_other') str[source]

minimum

experimental_experiment.torch_interpreter._aten_functions.aten_minimum(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'min') str[source]

minimum

experimental_experiment.torch_interpreter._aten_functions.aten_mm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

matmul

experimental_experiment.torch_interpreter._aten_functions.aten_mod(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='mod') str[source]

mod

experimental_experiment.torch_interpreter._aten_functions.aten_mse_loss(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, target: str, reduction: str = 'mean', name: str = 'mse_loss') str[source]

mse_loss

experimental_experiment.torch_interpreter._aten_functions.aten_mul(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='mul') str[source]

mul

experimental_experiment.torch_interpreter._aten_functions.aten_mul_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

mul

experimental_experiment.torch_interpreter._aten_functions.aten_mul_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'mul_Tensor') str[source]

mul

experimental_experiment.torch_interpreter._aten_functions.aten_mul__Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name: str = 'mul__Tensor') str[source]

mul

experimental_experiment.torch_interpreter._aten_functions.aten_multiply_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='multiply_Tensor') str[source]

mul

experimental_experiment.torch_interpreter._aten_functions.aten_native_dropout(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, p: float, train: bool = False, name: str = 'native_dropout')[source]

dropout

experimental_experiment.torch_interpreter._aten_functions.aten_native_layer_norm(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, normalized_shape: Tuple[int, ...], weight: str | None = None, bias: str | None = None, eps: float = 1e-05, name: str = 'aten_native_layer_norm') Tuple[str, str, str][source]

native_layer_norm

experimental_experiment.torch_interpreter._aten_functions.aten_ne(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='ne') str[source]

not equal

experimental_experiment.torch_interpreter._aten_functions.aten_ne_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='ne_Scalar') str[source]

not equal

experimental_experiment.torch_interpreter._aten_functions.aten_ne_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='ne_Tensor') str[source]

not equal

experimental_experiment.torch_interpreter._aten_functions.aten_neg(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name='neg') str[source]

neg

experimental_experiment.torch_interpreter._aten_functions.aten_new_ones(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, size: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, name: str = 'new_ones') str[source]

new_ones

experimental_experiment.torch_interpreter._aten_functions.aten_new_zeros(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, size: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, name: str = 'zeros') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_nll_loss_forward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], self: str, target: str, weight: str | None = None, reduction: int = 0, ignore_index: int = -1, name: str = 'nll_loss_forward') Tuple[str, str][source]

nll_loss_forward

experimental_experiment.torch_interpreter._aten_functions.aten_nonzero(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'nonzero', as_tuple: bool = False) str[source]

nonzero

experimental_experiment.torch_interpreter._aten_functions.aten_nonzero_numpy(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'nonzero_numpy') str[source]

nonzero numpy

experimental_experiment.torch_interpreter._aten_functions.aten_not(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'not') str[source]

not

experimental_experiment.torch_interpreter._aten_functions.aten_not_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'not') str[source]

not

experimental_experiment.torch_interpreter._aten_functions.aten_numpy_T(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, name: str = 'numpy_T') str[source]

transpose

experimental_experiment.torch_interpreter._aten_functions.aten_ones(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], size: str, dtype: int | None = None, layout=None, device: torch.device | None = None, pin_memory=None, name: str = 'ones') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_ones_like(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, memory_format=None) str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_or(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='or') str[source]

or

experimental_experiment.torch_interpreter._aten_functions.aten_pad(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, pad: str | Tuple[int, ...], mode: str = 'constant', value: float | None = None, name: str = 'pad', pad_is_right: bool = False) str[source]

pad

experimental_experiment.torch_interpreter._aten_functions.aten_permute(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dims: Sequence[int]) str[source]

transpose
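
permute reorders axes; ONNX Transpose takes the same permutation as its perm attribute. A torch illustration::

    import torch

    x = torch.randn(2, 3, 4)
    y = x.permute(2, 0, 1)  # Transpose with perm=[2, 0, 1]
    assert y.shape == (4, 2, 3)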

experimental_experiment.torch_interpreter._aten_functions.aten_polar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, angle: str, name: str = 'polar') str[source]

polar

experimental_experiment.torch_interpreter._aten_functions.aten_pow(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, exponent: str, name: str = 'pow') str[source]

pow

experimental_experiment.torch_interpreter._aten_functions.aten_pow_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, exponent: str, name: str = 'pow_Scalar') str[source]

pow

experimental_experiment.torch_interpreter._aten_functions.aten_pow_Tensor_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, exponent: str, name: str = 'pow_Tensor_Scalar') str[source]

pow

experimental_experiment.torch_interpreter._aten_functions.aten_pow_Tensor_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, exponent: str, name: str = 'pow_Tensor_Tensor') str[source]

pow

experimental_experiment.torch_interpreter._aten_functions.aten_prelu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], a: str, slope: str, name: str = 'prelu') str[source]

prelu

experimental_experiment.torch_interpreter._aten_functions.aten_prod(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, name: str = 'prod') str[source]

prod

experimental_experiment.torch_interpreter._aten_functions.aten_prod_dim_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, keepdim: bool = False, dtype: torch.dtype | None = None, name='prod_dim_int') str[source]

reduce_prod

experimental_experiment.torch_interpreter._aten_functions.aten_reciprocal(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'reciprocal') str[source]

reciprocal

experimental_experiment.torch_interpreter._aten_functions.aten_reflection_pad2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, pad: Any, value: float = 0.0, name: str = 'reflection_pad2d') str[source]

pad

experimental_experiment.torch_interpreter._aten_functions.aten_relu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False, name: str = 'relu') str[source]

relu

experimental_experiment.torch_interpreter._aten_functions.aten_relu_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False, name: str = 'relu_') str[source]

relu_; inplace modifications are not allowed, we assume they were removed

experimental_experiment.torch_interpreter._aten_functions.aten_remainder(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, other: str, name='remainder') str[source]

mod

experimental_experiment.torch_interpreter._aten_functions.aten_remainder_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, other: str) str[source]

mod

experimental_experiment.torch_interpreter._aten_functions.aten_remainder_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, other: str) str[source]

mod

experimental_experiment.torch_interpreter._aten_functions.aten_repeat(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, repeats: str, name: str = 'repeat') str[source]

repeat
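
Tensor.repeat tiles along each dimension, matching numpy.tile and ONNX Tile; a quick check::

    import numpy as np
    import torch

    x = torch.arange(6).reshape(2, 3)
    a = x.repeat(2, 1)  # shape (4, 3)
    b = torch.from_numpy(np.tile(x.numpy(), (2, 1)))
    assert torch.equal(a, b)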

experimental_experiment.torch_interpreter._aten_functions.aten_repeat_interleave(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, repeats: List[int], dim: int | None = None, output_size: Tuple[int, ...] | None = None, name: str = 'repeat_interleave') str[source]

repeat_interleave

experimental_experiment.torch_interpreter._aten_functions.aten_repeat_interleave_self_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, repeats: List[int], dim: int | None = None, output_size: Tuple[int, ...] | None = None, name: str = 'repeat_interleave_self_int') str[source]

repeat_interleave_self_int

experimental_experiment.torch_interpreter._aten_functions.aten_reshape(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, shape: List[int], name: str = 'reshape') str[source]

reshape

experimental_experiment.torch_interpreter._aten_functions.aten_roll(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, shifts: List[int], dims: List[int], name: str = 'roll') str[source]

roll

experimental_experiment.torch_interpreter._aten_functions.aten_round(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

round

experimental_experiment.torch_interpreter._aten_functions.aten_rrelu_with_noise_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, x: str, noise: str, lower: float, upper: float, training: bool, self_is_result: bool, name: str = 'rrelu_with_noise_backward') str[source]

rrelu backward

experimental_experiment.torch_interpreter._aten_functions.aten_rsqrt(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

rsqrt

experimental_experiment.torch_interpreter._aten_functions.aten_rsub(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: float = 1) str[source]

rsub

experimental_experiment.torch_interpreter._aten_functions.aten_rsub_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: float = 1) str[source]

rsub

experimental_experiment.torch_interpreter._aten_functions.aten_scalar_tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], s: float, dtype: int | None = None, layout: str = '', device: torch.device | None = None, pin_memory=None) str[source]

constant

experimental_experiment.torch_interpreter._aten_functions.aten_scan(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], scan_graph: str, scan_inits: List[str], scan_inputs: List[str], dim: int, reverse: bool, additional_inputs: List[str], name='aten_scan') str[source]

scan

experimental_experiment.torch_interpreter._aten_functions.aten_scatter_add(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, index: str, src: str, name: str = 'scatter_add') str[source]

scatter_add

experimental_experiment.torch_interpreter._aten_functions.aten_scatter_reduce_two(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, index: str, src: str, reduce: str, include_self: bool = True, name: str = 'scatter_reduce_two')[source]

scatter_reduce.two

experimental_experiment.torch_interpreter._aten_functions.aten_select_copy_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, index: int, name: str = 'select_copy_int') str[source]

gather

experimental_experiment.torch_interpreter._aten_functions.aten_select_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, index: int, name: str = 'select_int') str[source]

gather

experimental_experiment.torch_interpreter._aten_functions.aten_select_scatter(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, src: str, dim: int, index: int, name: str = 'select_scatter') str[source]

scatter elements

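A small illustration of the semantics being converted (plain PyTorch, not this module's API): select_scatter writes src into one index along a dimension, which maps naturally to a scatter of elements:

    import torch

    x = torch.zeros(3, 4)
    src = torch.arange(4.0)

    # Equivalent to x[1] = src, but without mutating x.
    y = torch.select_scatter(x, src, dim=0, index=1)

    expected = x.clone()
    expected[1] = src
    assert torch.equal(y, expected)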
experimental_experiment.torch_interpreter._aten_functions.aten_selu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False) str[source]

selu

experimental_experiment.torch_interpreter._aten_functions.aten_setitem(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, indices: Tuple[Any, ...], values: str) str[source]

scatter

experimental_experiment.torch_interpreter._aten_functions.aten_sigmoid(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

sigmoid

experimental_experiment.torch_interpreter._aten_functions.aten_sigmoid_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], out_grad: str, y: str) str[source]

sigmoid backward

See https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py#L108; conj_physical is the identity for real numbers.

return out_grad * (y * (1 - y)).conj_physical()
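
A quick check of this decomposition against autograd (a sketch; real-valued tensors only, so conj_physical is dropped):

    import torch

    x = torch.randn(5, requires_grad=True)
    y = torch.sigmoid(x)
    out_grad = torch.ones_like(y)

    (expected,) = torch.autograd.grad(y, x, grad_outputs=out_grad)

    # Decomposition from the docstring, without conj_physical (identity here).
    manual = out_grad * (y * (1 - y))
    assert torch.allclose(expected, manual)
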
experimental_experiment.torch_interpreter._aten_functions.aten_sign(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 'sign') str[source]

sign

experimental_experiment.torch_interpreter._aten_functions.aten_silu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False, name: str = 'silu') str[source]

silu

experimental_experiment.torch_interpreter._aten_functions.aten_silu_(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, inplace: bool = False, name: str = 'silu_') str[source]

silu_; in-place modifications are not allowed, so we assume they were already removed

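For reference, silu decomposes as x * sigmoid(x); a sketch of the functional form assumed after in-place removal:

    import torch

    x = torch.randn(4)

    # silu(x) = x * sigmoid(x)
    manual = x * torch.sigmoid(x)
    assert torch.allclose(torch.nn.functional.silu(x), manual)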
experimental_experiment.torch_interpreter._aten_functions.aten_sin(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name='sin') str[source]

sin

experimental_experiment.torch_interpreter._aten_functions.aten_sinh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

sinh

experimental_experiment.torch_interpreter._aten_functions.aten_slice_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = 0, start: int = 0, end: int | None = None, step: int | None = None) str[source]

slice

experimental_experiment.torch_interpreter._aten_functions.aten_slice_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, input_sizes: List[int], dim: int, start: int, end: int, step: int, name: str = 'slice_backward') str[source]

slice backward

experimental_experiment.torch_interpreter._aten_functions.aten_slice_scatter(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, src: str, dim: int = 0, start: int | None = None, end: int | None = None, step: int | None = None, name: str | None = None) str[source]

slice scatter

experimental_experiment.torch_interpreter._aten_functions.aten_softmax(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = -1, dtype: torch.dtype | None = None, name: str = 'softmax', _stacklevel: int | None = None) str[source]

softmax

experimental_experiment.torch_interpreter._aten_functions.aten_softmax_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = -1, dtype: torch.dtype | None = None) str[source]

softmax

experimental_experiment.torch_interpreter._aten_functions.aten_softplus(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, beta: float = 1.0, threshold: float = 20.0, name: str = 'softplus')[source]

softplus

experimental_experiment.torch_interpreter._aten_functions.aten_split_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, split_sizes: str, dim: int = 0, name: str = 'split_Tensor') Tuple[str, ...][source]

split_to_sequence or split

experimental_experiment.torch_interpreter._aten_functions.aten_split_with_sizes(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, split_sizes: str, dim: int = 0, name: str = 'split_with_sizes', use_sequence: bool = False) Tuple[str, ...][source]

split_to_sequence or split

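The PyTorch semantics being mapped, shown in plain torch (the use_sequence flag presumably selects SplitToSequence instead of Split):

    import torch

    x = torch.arange(10.0)

    # Fixed split sizes map to ONNX Split with explicit sizes.
    parts = torch.split(x, [3, 3, 4], dim=0)
    assert [int(p.shape[0]) for p in parts] == [3, 3, 4]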
experimental_experiment.torch_interpreter._aten_functions.aten_sqrt(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

sqrt

experimental_experiment.torch_interpreter._aten_functions.aten_squeeze(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name='squeeze') str[source]

squeeze

experimental_experiment.torch_interpreter._aten_functions.aten_squeeze_dim(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, name='squeeze') str[source]

squeeze_dim

experimental_experiment.torch_interpreter._aten_functions.aten_stack(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], tensors: Sequence[str], dim: int = 0, name: str = 'stack') str[source]

concat

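The "concat" docstring makes sense because stack can be rewritten as Unsqueeze on each input followed by Concat; a sketch of the equivalence:

    import torch

    a, b = torch.randn(2, 3), torch.randn(2, 3)

    # stack(dim=0) == concat of unsqueezed inputs along dim 0.
    manual = torch.cat([a.unsqueeze(0), b.unsqueeze(0)], dim=0)
    assert torch.equal(torch.stack([a, b], dim=0), manual)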
experimental_experiment.torch_interpreter._aten_functions.aten_std_dim(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dims: Sequence[int], correction: float, keepdim: bool = False, name: str = 'std_dim') str[source]

std_dim

experimental_experiment.torch_interpreter._aten_functions.aten_sub(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, name='sub') str[source]

sub

experimental_experiment.torch_interpreter._aten_functions.aten_sub_Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: float, name: str = 'sub_Tensor') str[source]

sub

experimental_experiment.torch_interpreter._aten_functions.aten_sub__Tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str, alpha: float, name: str = 'sub__Tensor') str[source]

sub

experimental_experiment.torch_interpreter._aten_functions.aten_sum(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int | List[int] | None = None, keepdim: bool = False, dtype: torch.dtype | None = None, name='sum') str[source]

reducesum

experimental_experiment.torch_interpreter._aten_functions.aten_sum_dim_IntList(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int | List[int] | None, keepdim: bool, dtype: torch.dtype | None = None) str[source]

reducesum

experimental_experiment.torch_interpreter._aten_functions.aten_sym_constrain_range_for_size(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], dim: Any, min: int | None = None, max: int | None = None, name: str = 'sym_constrain_range_for_size')[source]

assert sym_constrain_range_for_size

experimental_experiment.torch_interpreter._aten_functions.aten_sym_size_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, name: str = 'sym_size_int') str[source]

Shape + Gather

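A sketch of the Shape + Gather pattern in raw ONNX (hypothetical node and tensor names; the GraphBuilder emits an equivalent pair internally):

    import onnx
    from onnx import TensorProto, helper

    # Shape produces the full shape; Gather with a scalar index extracts one dim.
    shape_node = helper.make_node("Shape", ["x"], ["x_shape"])
    dim_index = helper.make_tensor("dim_index", TensorProto.INT64, [], [1])
    gather_node = helper.make_node("Gather", ["x_shape", "dim_index"], ["dim_1"])

    graph = helper.make_graph(
        [shape_node, gather_node],
        "sym_size_int_sketch",
        [helper.make_tensor_value_info("x", TensorProto.FLOAT, [None, None])],
        [helper.make_tensor_value_info("dim_1", TensorProto.INT64, [])],
        initializer=[dim_index],
    )
    onnx.checker.check_model(helper.make_model(graph))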
experimental_experiment.torch_interpreter._aten_functions.aten_t(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, name: str = 't') str[source]

transpose

experimental_experiment.torch_interpreter._aten_functions.aten_tan(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

tan

experimental_experiment.torch_interpreter._aten_functions.aten_tanh(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str) str[source]

tanh

experimental_experiment.torch_interpreter._aten_functions.aten_tanh_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], out_grad: str, y: str) str[source]

tanh backward

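The usual decomposition is out_grad * (1 - y * y) with y = tanh(x); a quick autograd check:

    import torch

    x = torch.randn(5, requires_grad=True)
    y = torch.tanh(x)
    out_grad = torch.ones_like(y)

    (expected,) = torch.autograd.grad(y, x, grad_outputs=out_grad)

    # d tanh(x) / dx = 1 - tanh(x) ** 2
    manual = out_grad * (1 - y * y)
    assert torch.allclose(expected, manual)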
experimental_experiment.torch_interpreter._aten_functions.aten_tensor(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, indices: Tuple[Any, ...] | None = None) str[source]

indexing with a pattern such as […, :, …]

experimental_experiment.torch_interpreter._aten_functions.aten_threshold_backward(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], grad_output: str, x: str, threshold: float, name: str = 'threshold_backward') str[source]

lessorequal

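The "lessorequal" docstring hints at the implementation: gradient flows only where x > threshold, so a LessOrEqual mask zeroes the rest. A sketch, assuming the aten op is callable from Python:

    import torch

    x = torch.tensor([-2.0, 0.0, 3.0])
    grad_output = torch.ones_like(x)
    threshold = 0.0

    # Zero the gradient wherever x <= threshold.
    manual = torch.where(x <= threshold, torch.zeros_like(x), grad_output)
    expected = torch.ops.aten.threshold_backward(grad_output, x, threshold)
    assert torch.equal(manual, expected)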
experimental_experiment.torch_interpreter._aten_functions.aten_to(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, *args: List[Any], name: str = 'to', **kwargs: Dict[str, Any]) str[source]

cast

experimental_experiment.torch_interpreter._aten_functions.aten_to_device(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, *args: List[Any], name: str = 'to_device', **kwargs: Dict[str, Any]) str[source]

to_device -> Identity

experimental_experiment.torch_interpreter._aten_functions.aten_to_dtype(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, *args: List[Any], name: str = 'to_dtype', **kwargs: Dict[str, Any]) str[source]

cast

experimental_experiment.torch_interpreter._aten_functions.aten_to_dtype_layout(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, *args: List[Any], name: str = 'to_dtype_layout', **kwargs: Dict[str, Any]) str[source]

cast

experimental_experiment.torch_interpreter._aten_functions.aten_transpose(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, dim0: int, dim1: int, name: str = 'transpose') str[source]

transpose

experimental_experiment.torch_interpreter._aten_functions.aten_transpose_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], input_name: str, dim0: int, dim1: int, name: str = 'transpose_int') str[source]

transpose

experimental_experiment.torch_interpreter._aten_functions.aten_tril(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, diagonal: int = 0) str[source]

tril

experimental_experiment.torch_interpreter._aten_functions.aten_triu(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, diagonal: int | str = 0) str[source]

trilu

experimental_experiment.torch_interpreter._aten_functions.aten_truediv(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, y: str) str[source]

truediv

experimental_experiment.torch_interpreter._aten_functions.aten_type_as(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, other: int, name: str = 'type_as') str[source]

castlike

experimental_experiment.torch_interpreter._aten_functions.aten_unbind_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int = 0, use_sequence: bool = False, name: str = 'unbind') Tuple[str, ...][source]

split

experimental_experiment.torch_interpreter._aten_functions.aten_unflatten_int(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int, sizes: List[int], name: str = 'unflatten_int') str[source]

unflatten -> Reshape

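unflatten only reinterprets one dimension, hence the Reshape mapping; a sketch:

    import torch

    x = torch.randn(2, 6)

    # unflatten(1, (2, 3)) splits dim 1 into (2, 3): a pure Reshape.
    assert torch.equal(x.unflatten(1, (2, 3)), x.reshape(2, 2, 3))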
experimental_experiment.torch_interpreter._aten_functions.aten_unfold(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dimension: int, size: int, step: int, name: str = 'unfold') str[source]

unfold

experimental_experiment.torch_interpreter._aten_functions.aten_unsqueeze(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dim: int) str[source]

unsqueeze

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_bicubic2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, align_corners: bool, scales_d: float | None = None, scales_h: float | None = None, name: str = 'upsample_bicubic2d') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_bicubic2d_vec(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, align_corners: bool, scale_factors: Sequence[float] | None = None, name: str = 'upsample_bicubic2d_vec') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_bilinear2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, align_corners: bool, scales_d: float | None = None, scales_h: float | None = None, name: str = 'upsample_bilinear2d') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_bilinear2d_vec(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, align_corners: bool, scale_factors: Sequence[float] | None = None, name: str = 'upsample_bilinear2d_vec') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_nearest2d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, scales_h: float | None = None, scales_w: float | None = None, name: str = 'upsample_nearest2d') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_nearest2d_vec(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str | None = None, scale_factors: List[int] | None = None, name: str = 'upsample_nearest2d_vec') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_nearest3d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, scales_d: float | None = None, scales_h: float | None = None, scales_w: float | None = None, name: str = 'upsample_nearest3d') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_nearest3d_vec(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str | None = None, scale_factors: List[int] | None = None, name: str = 'upsample_nearest3d_vec') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_trilinear3d(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, align_corners: bool, scales_d: float | None = None, scales_h: float | None = None, scales_w: float | None = None, name: str = 'upsample_trilinear3d') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_upsample_trilinear3d_vec(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, output_size: str, align_corners: bool, scale_factors: Sequence[float] | None = None, name: str = 'upsample_trilinear3d_vec') str[source]

resize

experimental_experiment.torch_interpreter._aten_functions.aten_view(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, size: str, node_name: str = 'view') str[source]

reshape

experimental_experiment.torch_interpreter._aten_functions.aten_where(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], condition: str, x: str, other: str, name: str = 'where') str[source]

where

experimental_experiment.torch_interpreter._aten_functions.aten_where_Scalar(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], condition: str, x: str, other: str, name: str = 'where_Scalar') str[source]

where

This function may introduce type issues when 'x' and 'other' are both floats; torch may insert an implicit cast. Check what happens after this node.

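A small illustration of the promotion issue mentioned above (plain torch; the exported graph must pick an explicit dtype for the scalar):

    import torch

    cond = torch.tensor([True, False])
    x = torch.tensor([1.0, 2.0], dtype=torch.float32)

    # torch promotes the Python float silently; in ONNX the scalar constant
    # needs a concrete dtype, which is where mismatches can surface.
    y = torch.where(cond, x, 0.5)
    assert y.dtype == torch.float32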
experimental_experiment.torch_interpreter._aten_functions.aten_where_self(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], condition: str, x: str, other: str) str[source]

where

experimental_experiment.torch_interpreter._aten_functions.aten_wrap_with_autocast(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], device_type: str, dtype: torch.dtype | None, enabled: bool, cache_enabled: bool | None, wrapped_func, *args: Sequence[str], **kwargs) str[source]

identity, calling a local function

experimental_experiment.torch_interpreter._aten_functions.aten_wrap_with_set_grad_enabled(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], enable_grad: bool, wrapped_func, *args: Sequence[str], **kwargs) str[source]

identity

experimental_experiment.torch_interpreter._aten_functions.aten_zero(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, memory_format: str | None = None, name: str = 'zero') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_zeros(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], size: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, requires_grad: bool = False, name: str = 'zeros') str[source]

constantofshape

experimental_experiment.torch_interpreter._aten_functions.aten_zeros_like(g: GraphBuilder, sts: Dict[str, Any] | None, outputs: List[str], x: str, dtype: torch.dtype | None = None, layout=None, device: torch.device | None = None, pin_memory=None, memory_format: str | None = None, name: str = 'zeros_like') str[source]

constantofshape