npx_jit_eager#
eager_onnx#
EagerOnnx#
- class onnx_array_api.npx.npx_jit_eager.EagerOnnx(f: Callable, tensor_class: type | None = None, target_opsets: Dict[str, int] | None = None, output_types: Dict[Any, TensorType] | None = None, ir_version: int | None = None, bypass_eager: bool = False)[source]#
Converts a function into an executable function based on a backend. The function is converted to ONNX on the first call. A usage sketch follows the parameter list below.
- Parameters:
f – function to convert
tensor_class – wrapper around a class defining the backend; if None, it defaults to onnx.reference.ReferenceEvaluator
target_opsets – dictionary {opset: version}
output_types – shape and type inference cannot be run before the ONNX graph is created, yet the output types are needed to build it; if not specified, the class assumes there is only one output with the same type as the input
bypass_eager – this parameter must be True if the function has no annotations and is not decorated by xapi_inline or xapi_function
ir_version – defines the IR version to use
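A minimal usage sketch, not taken from the library's documentation: it assumes the npx primitive absolute is importable from onnx_array_api.npx and that plain numpy arrays are accepted (they are wrapped through cast_to_tensor_class). bypass_eager=True is passed because the function below has neither annotations nor an xapi decorator.

```python
import numpy as np
from onnx_array_api.npx import absolute  # npx flavour of numpy.abs (assumed available)
from onnx_array_api.npx.npx_jit_eager import EagerOnnx


def l1_loss(x, y):
    # written with npx primitives so it can be converted to ONNX
    return absolute(x - y).sum()


# bypass_eager=True: l1_loss has no annotations and no xapi decorator
eager_l1 = EagerOnnx(l1_loss, bypass_eager=True)

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

# the first call converts l1_loss to ONNX, later calls reuse the cached graph
print(eager_l1(x, y))
```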
JitEager#
- class onnx_array_api.npx.npx_jit_eager.JitEager(f: Callable, tensor_class: type, target_opsets: Dict[str, int] | None = None, output_types: Dict[Any, TensorType] | None = None, ir_version: int | None = None)[source]#
Converts a function into an executable function based on a backend. The function is converted to ONNX on the first call.
- Parameters:
f – function to convert
tensor_class – wrapper around a class defining the backend; if None, it defaults to onnx.reference.ReferenceEvaluator
target_opsets – dictionary {opset: version}
output_types – shape and type inference cannot be run before the ONNX graph is created, yet the output types are needed to build it; if not specified, the class assumes there is only one output with the same type as the input
ir_version – defines the IR version to use
- property available_versions#
Returns the keys used to distinguish between the jitted versions.
- cast_from_tensor_class(results: List[EagerTensor]) Any | Tuple[Any] [source]#
Unwraps results from self.tensor_class back into python types (including numpy).
- Parameters:
results – results wrapped into self.tensor_class
- Returns:
unwrapped results
- cast_to_tensor_class(inputs: List[Any]) List[EagerTensor] [source]#
Wraps input into self.tensor_class.
- Parameters:
inputs – python inputs (including numpy)
- Returns:
wrapped inputs
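An illustrative round trip between the two casting helpers, sketched under the assumption that a JitOnnx instance (a subclass of JitEager) built around a trivial function uses the default numpy-backed tensor class:

```python
import numpy as np
from onnx_array_api.npx.npx_jit_eager import JitOnnx


def identity(x):
    return x


jitted = JitOnnx(identity)  # JitOnnx derives from JitEager
x = np.array([1.0, 2.0], dtype=np.float32)

# numpy array -> backend tensor (self.tensor_class) -> back to python/numpy
wrapped = jitted.cast_to_tensor_class([x])
unwrapped = jitted.cast_from_tensor_class(wrapped)
print(type(wrapped[0]).__name__, type(unwrapped).__name__)
```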
- get_onnx(key: int | None = None)[source]#
Returns the jitted function associated with one key. If key is None, the method assumes there is only one jitted function available and returns it.
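For instance, a sketch assuming the npx primitive absolute and the default backend: once the first call has triggered the conversion, the single cached version can be retrieved without specifying a key.

```python
import numpy as np
from onnx_array_api.npx import absolute  # assumed npx primitive
from onnx_array_api.npx.npx_jit_eager import JitOnnx


def l1_loss(x, y):
    return absolute(x - y).sum()


jitted = JitOnnx(l1_loss)
x = np.array([1.0, 2.0], dtype=np.float32)
y = np.array([1.5, 2.5], dtype=np.float32)
jitted(x, y)             # first call builds and caches the ONNX graph

onx = jitted.get_onnx()  # key omitted: only one version is cached
print(type(onx))         # expected to be an onnx.ModelProto
```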
- info(prefix: str | None = None, method_name: str | None = None, already_eager: bool | None = None, args: List[Any] | None = None, kwargs: Dict[str, Any] | None = None, key: Tuple[Any, ...] | None = None, onx: ModelProto | None = None, output: Any | None = None)[source]#
Logs a status.
- jit_call(*values, **kwargs)[source]#
The method builds a key which identifies the signature (input types + parameter values). It then checks whether the function was already converted into ONNX during a previous call. If not, it converts it and caches the result indexed by that key. Finally, it executes the ONNX graph and returns the result, or the results as a tuple if there are several.
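A sketch of this caching behaviour, assuming the npx primitive absolute: calling the same jitted function with two different input dtypes should produce two keys and therefore two compiled graphs (see n_versions below).

```python
import numpy as np
from onnx_array_api.npx import absolute  # assumed npx primitive
from onnx_array_api.npx.npx_jit_eager import JitOnnx


def my_abs(x):
    return absolute(x)


jitted = JitOnnx(my_abs)

# two input types -> two keys -> two cached ONNX graphs
jitted(np.array([-1.0], dtype=np.float32))
jitted(np.array([-1.0], dtype=np.float64))
print(jitted.n_versions)  # expected: 2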
- make_key(*values: List[Any], **kwargs: Dict[str, Any]) Tuple[Any, ...] [source]#
Builds a key based on the input types and parameters. Every set of inputs or parameters producing the same key (or signature) must use the same compiled ONNX.
- Parameters:
values – values given to the function
kwargs – parameters
- Returns:
tuple of immutable keys
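An illustrative sketch, assuming make_key accepts the same positional inputs as the jitted call: inputs with different dtypes are expected to yield different keys.

```python
import numpy as np
from onnx_array_api.npx.npx_jit_eager import JitOnnx


def identity(x):
    return x


jitted = JitOnnx(identity)

# the key encodes input types and parameter values
k32 = jitted.make_key(np.array([1.0], dtype=np.float32))
k64 = jitted.make_key(np.array([1.0], dtype=np.float64))
print(k32 != k64)  # expected: True, two distinct signatures
```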
- move_input_to_kwargs(values: List[Any], kwargs: Dict[str, Any]) Tuple[List[Any], Dict[str, Any]] [source]#
Mandatory parameters are usually not named. Some inputs must be moved to the parameter list before calling ONNX.
- Parameters:
values – list of inputs
kwargs – dictionary of arguments
- Returns:
new values, new arguments
- property n_versions#
Returns the number of jitted functions. There is one per type and number of dimensions.
jit_onnx#
JitOnnx#
- class onnx_array_api.npx.npx_jit_eager.JitOnnx(f: Callable, tensor_class: type | None = None, target_opsets: Dict[str, int] | None = None, output_types: Dict[Any, TensorType] | None = None, ir_version: int | None = None)[source]#
Converts a function into an executable function based on a backend. The function is converted to ONNX on the first call. A usage sketch follows the parameter list below.
- Parameters:
f – function to convert
tensor_class – wrapper around a class defining the backend; if None, it defaults to onnx.reference.ReferenceEvaluator
target_opsets – dictionary {opset: version}
output_types – shape and type inference cannot be run before the ONNX graph is created, yet the output types are needed to build it; if not specified, the class assumes there is only one output with the same type as the input
ir_version – defines the IR version to use
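A minimal sketch, assuming the helpers absolute and jit_onnx are importable from onnx_array_api.npx and that jit_onnx wraps a function into a JitOnnx instance:

```python
import numpy as np
from onnx_array_api.npx import absolute, jit_onnx  # assumed helpers


def l1_loss(x, y):
    return absolute(x - y).sum()


jitted_l1 = jit_onnx(l1_loss)

x = np.array([[0.1, 0.2], [0.3, 0.4]], dtype=np.float32)
y = np.array([[0.11, 0.22], [0.33, 0.44]], dtype=np.float32)

# the first call converts l1_loss to ONNX, later calls reuse the cached graph
print(jitted_l1(x, y))
```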