Supported LiteRT Ops

The following TFLite op types have a built-in converter in yobx.litert.ops. The list is generated programmatically from the live converter registry.

<<<

import re
from yobx.litert import register_litert_converters
from yobx.litert.register import LITERT_OP_CONVERTERS
from yobx.litert.litert_helper import builtin_op_name

register_litert_converters()

PATTERN = re.compile(r"TFLite\s+``(\w+)``\s+→\s+ONNX\s+(.+)")
MODULE_LABELS = {
    "yobx.litert.ops.activations": "Activations",
    "yobx.litert.ops.elementwise": "Element-wise",
    "yobx.litert.ops.nn_ops": "Neural network",
    "yobx.litert.ops.reshape_ops": "Shape / tensor manipulation",
}
MODULE_ORDER = list(MODULE_LABELS.keys())

groups = {m: [] for m in MODULE_ORDER}
for code, fn in LITERT_OP_CONVERTERS.items():
    mod = fn.__module__
    # Guard against converters with an empty docstring, which would
    # otherwise raise IndexError on the splitlines()[0] access.
    doc_lines = (fn.__doc__ or "").strip().splitlines()
    doc = doc_lines[0].strip().rstrip(".") if doc_lines else ""
    m = PATTERN.match(doc)
    tflite_op = (
        m.group(1) if m else (builtin_op_name(code) if isinstance(code, int) else code)
    )
    onnx_op = m.group(2).rstrip(".") if m else "?"
    if mod in groups:
        groups[mod].append((tflite_op, onnx_op))

for mod in MODULE_ORDER:
    label = MODULE_LABELS[mod]
    items = sorted(groups[mod])
    if not items:
        continue
    print(f"**{label}**")
    print()
    for tflite_op, onnx_op in items:
        print(f"* ``{tflite_op}`` → {onnx_op}")
    print()

>>>
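The generator above depends on each converter's first docstring line following a fixed convention that the regex can split into the TFLite op name and its ONNX mapping. A minimal sketch of that parsing step (the sample docstring below is a hypothetical illustration, not copied from the codebase):

```python
import re

# Same convention the generator expects: the first docstring line
# reads "TFLite ``OP`` → ONNX Mapping".
PATTERN = re.compile(r"TFLite\s+``(\w+)``\s+→\s+ONNX\s+(.+)")

def parse_converter_doc(doc):
    """Return (tflite_op, onnx_op), or None if the first line doesn't match."""
    lines = (doc or "").strip().splitlines()
    if not lines:
        return None
    m = PATTERN.match(lines[0].strip().rstrip("."))
    return (m.group(1), m.group(2).rstrip(".")) if m else None

# Hypothetical docstring in the expected format:
print(parse_converter_doc("TFLite ``RSQRT`` → ONNX Reciprocal(Sqrt(x))."))
# → ('RSQRT', 'Reciprocal(Sqrt(x))')
```

A converter whose docstring deviates from this pattern falls back to the raw op name with an unknown (`?`) ONNX mapping, so the docstring format is effectively part of the registration contract.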

Activations

  • ELU → Elu(alpha=1.0)
  • GELU → Gelu
  • HARD_SWISH → HardSwish
  • LEAKY_RELU → LeakyRelu
  • LOG_SOFTMAX → LogSoftmax(axis=-1)
  • RELU → Relu
  • RELU_N1_TO_1 → Clip(min=-1, max=1)
  • SOFTMAX → Softmax(axis=-1)
  • TANH → Tanh
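Two of the mappings above can be sanity-checked numerically. A NumPy sketch (illustrative values only) showing the Clip bounds used for RELU_N1_TO_1 and the axis=-1 normalization convention shared by SOFTMAX and LOG_SOFTMAX:

```python
import numpy as np

x = np.array([[-2.0, -0.5, 0.0, 0.5, 2.0]])

# RELU_N1_TO_1 → Clip(min=-1, max=1): values are bounded to [-1, 1].
assert np.allclose(np.clip(x, -1.0, 1.0), [[-1.0, -0.5, 0.0, 0.5, 1.0]])

# SOFTMAX / LOG_SOFTMAX normalize along the last axis (axis=-1).
shifted = x - x.max(axis=-1, keepdims=True)  # numerically stable form
softmax = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
log_softmax = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
assert np.allclose(np.log(softmax), log_softmax)
```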

Element-wise

  • ABS → Abs
  • ADD → Add
  • CEIL → Ceil
  • DIV → Div
  • EXP → Exp
  • FLOOR → Floor
  • FLOOR_DIV → Floor(Div(a, b))
  • LOG → Log
  • LOGICAL_AND → And
  • LOGICAL_NOT → Not
  • LOGICAL_OR → Or
  • MUL → Mul
  • NEG → Neg
  • POW → Pow
  • ROUND → Round
  • RSQRT → Reciprocal(Sqrt(x))
  • SIN → Sin
  • SQRT → Sqrt
  • SQUARED_DIFFERENCE → Pow(Sub(a, b), 2)
  • SUB → Sub
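The composite mappings in this group (FLOOR_DIV, RSQRT, SQUARED_DIFFERENCE) have no single-op ONNX equivalent and are emitted as small subgraphs. A quick NumPy sketch of the identities they rely on (standard floating-point semantics assumed, not the exact TFLite kernels):

```python
import numpy as np

a = np.array([2.0, 7.5, -3.0])
b = np.array([4.0, 2.0, 2.0])

# FLOOR_DIV → Floor(Div(a, b)): note floor(-1.5) == -2, not -1.
assert np.allclose(np.floor(a / b), np.floor_divide(a, b))

# RSQRT → Reciprocal(Sqrt(x))
x = np.array([0.25, 1.0, 16.0])
assert np.allclose(1.0 / np.sqrt(x), [2.0, 1.0, 0.25])

# SQUARED_DIFFERENCE → Pow(Sub(a, b), 2)
assert np.allclose((a - b) ** 2, np.square(a - b))
```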

Neural network

  • AVERAGE_POOL_2D → AveragePool
  • BATCH_MATMUL → MatMul with optional transposes
  • CONV_2D → Conv
  • DEPTHWISE_CONV_2D → Conv with group=in_channels
  • FULLY_CONNECTED → MatMul (+ optional Add bias)
  • MAX_POOL_2D → MaxPool
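The FULLY_CONNECTED decomposition can be sketched in NumPy. TFLite stores the fully-connected weight as [out_features, in_features], so the MatMul operates on its transpose; the shapes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4)).astype(np.float32)  # [batch, in_features]
w = rng.standard_normal((3, 4)).astype(np.float32)  # TFLite layout: [out, in]
bias = rng.standard_normal(3).astype(np.float32)

# FULLY_CONNECTED → MatMul (+ optional Add bias)
y = x @ w.T + bias                                  # [batch, out_features]
assert y.shape == (2, 3)
```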

Shape / tensor manipulation

  • CONCATENATION → Concat
  • EXPAND_DIMS → Unsqueeze
  • MEAN → ReduceMean
  • REDUCE_MAX → ReduceMax
  • REDUCE_MIN → ReduceMin
  • RESHAPE → Reshape
  • SQUEEZE → Squeeze
  • SUM → ReduceSum
  • TRANSPOSE → Transpose