onnx_diagnostic.tasks.text2text_generation

onnx_diagnostic.tasks.text2text_generation.get_inputs(model: Module, config: Any | None, dummy_max_token_id: int, num_key_value_heads: int, num_hidden_layers: int, head_dim: int, encoder_dim: int, batch_size: int = 2, sequence_length: int = 30, sequence_length2: int = 3, add_second_input: bool = False, **kwargs)[source]

Generates inputs for the task text2text-generation.

Parameters:
  • model – model used to fill in missing information

  • config – configuration used to generate the model

  • dummy_max_token_id – dummy max token id

  • num_key_value_heads – number of key/value heads in the attention

  • num_hidden_layers – number of hidden layers (one cache entry per layer)

  • head_dim – last dimension of the cache

  • encoder_dim – last dimension of encoder_last_hidden_state

  • batch_size – batch size

  • sequence_length – sequence length

  • sequence_length2 – new sequence length

  • add_second_input – if True, also generates a second set of inputs with different dimensions
Returns:

dictionary

Example of inputs captured ("stolen") from one model. In this notation, T7s1 is an int64 tensor of shape (1,), T1s1x8x1x64 a float32 tensor of shape (1, 8, 1, 64), and #6[...] a list of six such tensors.

cache_position:T7s1
past_key_values:EncoderDecoderCache(
    self_attention_cache=DynamicCache(
        key_cache=#6[T1s1x8x1x64,...],
        value_cache=#6[T1s1x8x1x64,...]),
    cross_attention_cache=DynamicCache(
        key_cache=#6[T1s1x8x16x64,...],
        value_cache=#6[T1s1x8x16x64,...])),
decoder_input_ids:T7s1x1,
encoder_outputs:dict(last_hidden_state:T1s1x16x512)
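
A minimal usage sketch, assuming a tiny T5 model built with transformers; the dimension values and the keys of the returned dictionary are illustrative assumptions, not guaranteed behavior:

from transformers import T5Config, T5ForConditionalGeneration
from onnx_diagnostic.tasks.text2text_generation import get_inputs

# Tiny configuration so the example runs quickly; values are illustrative.
config = T5Config(
    vocab_size=128, d_model=64, d_kv=16, d_ff=128,
    num_layers=2, num_heads=4,
)
model = T5ForConditionalGeneration(config)

data = get_inputs(
    model,
    config,
    dummy_max_token_id=config.vocab_size - 1,
    num_key_value_heads=config.num_heads,
    num_hidden_layers=config.num_layers,
    head_dim=config.d_kv,
    encoder_dim=config.d_model,
    batch_size=2,
    sequence_length=30,
    sequence_length2=3,
)
# The exact keys are an assumption; the function is expected to return
# at least the dummy inputs listed in the example above.
print(sorted(data))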
onnx_diagnostic.tasks.text2text_generation.random_input_kwargs(config: Any) → Tuple[Dict[str, Any], Callable][source]

Builds the keyword arguments for get_inputs and returns them together with the callable generating the inputs.

If the configuration is None, the function selects typical dimensions.
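
A minimal sketch of how the returned pair can be used; passing None exercises the typical-dimension fallback, and the assumption here is that the returned callable follows the get_inputs signature:

from onnx_diagnostic.tasks.text2text_generation import random_input_kwargs

kwargs, fn = random_input_kwargs(None)
print(kwargs)  # typical dimensions selected because config is None
# data = fn(model, None, **kwargs)  # assumption: fn mirrors get_inputs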

onnx_diagnostic.tasks.text2text_generation.reduce_model_config(config: Any) → Dict[str, Any][source]

Reduces the size of a model by shrinking its configuration (e.g. fewer layers, smaller hidden dimensions) and returns the modified values as a dictionary.
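
A minimal sketch, assuming a transformers configuration object; that the returned dictionary holds the reduced attribute values is an assumption for illustration:

from transformers import T5Config
from onnx_diagnostic.tasks.text2text_generation import reduce_model_config

config = T5Config()  # default, full-size configuration
reduced = reduce_model_config(config)  # assumption: shrinks layers/dimensions
print(reduced)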