experimental_experiment.bench_run

exception experimental_experiment.bench_run.BenchmarkError[source]
experimental_experiment.bench_run.flatten_object(x: Any) List[Any][source]

Flattens the object.
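
A minimal usage sketch, assuming nested lists and tuples are flattened into a single flat list (behavior inferred from the signature, not guaranteed):

  from experimental_experiment.bench_run import flatten_object

  # Assumption: nested containers are recursively flattened.
  nested = [1, (2, 3), [4, [5]]]
  print(flatten_object(nested))  # expected: [1, 2, 3, 4, 5]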

experimental_experiment.bench_run.get_machine(capability_as_str: bool = True) Dict[str, str | int | float | Tuple[int, int]][source]

Returns the machine specifications.
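
A quick way to inspect the returned specifications (the exact keys depend on the machine and on whether CUDA is available):

  from experimental_experiment.bench_run import get_machine

  specs = get_machine(capability_as_str=True)
  for key, value in sorted(specs.items()):
      print(f"{key}: {value}")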

experimental_experiment.bench_run.get_processor_name()[source]

Returns the processor name.
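
For example (the output is platform dependent):

  from experimental_experiment.bench_run import get_processor_name

  print(get_processor_name())  # e.g. an Intel or AMD CPU name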

experimental_experiment.bench_run.make_configs(kwargs: Namespace | Dict[str, Any], drop: Set[str] | None = None, replace: Dict[str, str] | None = None, last: List[str] | None = None, filter_function: Callable[[Dict[str, Any]], bool] | None = None) List[Dict[str, Any]][source]

Creates all the configurations based on the command line arguments.

Parameters:
  • kwargs – parameters from the command line; every value containing a comma denotes multiple values and multiplies the number of configurations to try by the number of comma-separated values

  • drop – keys to drop in kwargs if specified

  • replace – values to replace for a particular key

  • last – changes the order of the loops creating the configurations; if last == ["part"] and kwargs["part"] == "0,1", then a configuration where part==0 is always followed by a configuration where part==1

  • filter_function – function taking a configuration and returning True if it must be kept

Returns:

list of configurations
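
A sketch of the expansion, using hypothetical arguments model, batch, and device; comma-separated values multiply the number of configurations as described above:

  from argparse import Namespace
  from experimental_experiment.bench_run import make_configs

  # Hypothetical arguments: 2 models x 2 batch sizes x 1 device
  # should expand into 4 configurations.
  args = Namespace(model="llama,phi", batch="1,4", device="cuda")
  for conf in make_configs(args):
      print(conf)  # e.g. {'model': 'llama', 'batch': '1', 'device': 'cuda'}

  # filter_function can prune the product, keeping only some configurations.
  kept = make_configs(args, filter_function=lambda c: c["batch"] != "4")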

experimental_experiment.bench_run.make_dataframe_from_benchmark_data(data: List[Dict], detailed: bool = True, string_limit: int = 2000) Any[source]

Creates a dataframe from the received data.

Parameters:
  • data – list of dictionaries for every run

  • detailed – if False, removes multi-line and long values

  • string_limit – truncates strings longer than this limit

Returns:

dataframe
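
A minimal sketch with made-up benchmark rows (pandas is assumed to be available, as the function returns a dataframe):

  from experimental_experiment.bench_run import make_dataframe_from_benchmark_data

  # Hypothetical data: one dictionary per run.
  data = [
      {"model": "llama", "batch": 1, "time": 0.5},
      {"model": "llama", "batch": 4, "time": 1.2},
  ]
  df = make_dataframe_from_benchmark_data(data, detailed=False)
  print(df)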

experimental_experiment.bench_run.max_diff(expected: Any, got: Any, verbose: int = 0, level: int = 0, flatten: bool = False, debug_info: List[str] | None = None, begin: int = 0, end: int = -1, _index: int = 0) Dict[str, float][source]

Returns the maximum discrepancy.

Parameters:
  • expected – expected values

  • got – obtained values

  • verbose – verbosity level

  • level – for embedded outputs, used for debug purposes

  • flatten – flatten outputs

  • debug_info – debug information

  • begin – first output to consider

  • end – last output to consider (-1 for the last one)

  • _index – used with begin and end

Returns:

dictionary with the following values:

  • abs: max absolute error

  • rel: max relative error

  • sum: sum of the errors

  • n: number of output values; if there is a single output, this number is the number of elements of this output
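
A minimal sketch comparing two tuples of tensors (the numeric values are illustrative):

  import torch
  from experimental_experiment.bench_run import max_diff

  expected = (torch.tensor([1.0, 2.0]), torch.tensor([3.0]))
  got = (torch.tensor([1.0, 2.1]), torch.tensor([2.9]))
  diff = max_diff(expected, got)
  print(diff)  # dictionary with keys abs, rel, sum, n (see above)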

experimental_experiment.bench_run.measure_discrepancies(expected: List[Tuple[torch.Tensor, ...]], outputs: List[Tuple[torch.Tensor, ...]]) Dict[str, float][source]

Computes the discrepancies.

Parameters:
  • expected – list of outputs coming from a torch model

  • outputs – list of outputs coming from an onnx model

Returns:

dictionary with the maximum absolute error, the maximum relative error, the sum of absolute errors, and the number of elements contributing to it
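
A minimal sketch comparing one run of a torch model against one run of an onnx model (values are illustrative):

  import torch
  from experimental_experiment.bench_run import measure_discrepancies

  # One tuple of outputs per run; here a single run with two outputs.
  expected = [(torch.tensor([1.0, 2.0]), torch.tensor([3.0]))]
  outputs = [(torch.tensor([1.0, 2.05]), torch.tensor([2.9]))]
  print(measure_discrepancies(expected, outputs))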

experimental_experiment.bench_run.multi_run(kwargs: Namespace) bool[source]

Checks if multiple values were sent for one argument.
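
Assuming the same comma-separated convention as make_configs, a sketch:

  from argparse import Namespace
  from experimental_experiment.bench_run import multi_run

  print(multi_run(Namespace(model="llama", batch="1,4")))  # True: batch holds two values
  print(multi_run(Namespace(model="llama", batch="1")))    # False: single-valued arguments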

experimental_experiment.bench_run.run_benchmark(script_name: str, configs: List[Dict[str, str | int | float]], verbose: int = 0, stop_if_exception: bool = True, dump: bool = False, temp_output_data: str | None = None, dump_std: str | None = None, start: int = 0, summary: Callable | None = None, timeout: int = 600, missing: Dict[str, str | Callable] | None = None) List[Dict[str, str | int | float | Tuple[int, int]]][source]

Runs a script multiple times and extracts information from the output following the pattern :<metric>,<value>;.

Parameters:
  • script_name – python script to run

  • configs – list of executions to run

  • stop_if_exception – stops if one experiment fails, otherwise continues

  • verbose – use tqdm to follow the progress

  • dump – dumps the onnx file, sets the environment variable ONNXRT_DUMP_PATH

  • temp_output_data – file where the data is saved after every run to avoid losing it

  • dump_std – dumps stdout and stderr in this folder

  • start – start at this iteration

  • summary – function to call on the temporary data and the final data

  • timeout – timeout for the subprocesses

  • missing – populates a missing metric with this value (or the result of this callable) if it is not found in the output

Returns:

list of dictionaries, one per executed configuration
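
A sketch of a full run, assuming a hypothetical script bench_script.py that prints lines such as :time,0.5; to stdout:

  from experimental_experiment.bench_run import run_benchmark

  # Hypothetical script and configurations.
  configs = [
      {"model": "llama", "batch": 1},
      {"model": "llama", "batch": 4},
  ]
  results = run_benchmark(
      "bench_script.py",
      configs,
      verbose=1,
      stop_if_exception=False,  # continue even if one experiment fails
      timeout=600,
  )
  for row in results:
      print(row)  # one dictionary of collected metrics per configuration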