onnx_diagnostic.torch_models.llms¶
- onnx_diagnostic.torch_models.llms.get_tiny_llm(batch_size: int = 2, input_cache: bool = True, dynamic_rope: bool = False, **kwargs) → Dict[str, Any][source]¶
- Gets a small, uninitialized model.
- Parameters:
- batch_size – batch size
- input_cache – generate inputs for an iteration with or without a cache
- dynamic_rope – use dynamic RoPE (see transformers.LlamaConfig)
- kwargs – values overwriting the configuration, e.g. num_hidden_layers=1
 
- Returns:
- a dictionary
- See Export LLM with dynamic shapes for an example.
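The **kwargs mechanism can be sketched in plain Python: the helper builds a tiny default configuration, then any keyword argument overwrites the corresponding field. The sketch below is hypothetical, not the library's implementation; the default values and the keys of the returned dictionary are assumptions made for illustration.

```python
from typing import Any, Dict


def get_tiny_llm_sketch(batch_size: int = 2,
                        input_cache: bool = True,
                        dynamic_rope: bool = False,
                        **kwargs) -> Dict[str, Any]:
    """Hypothetical sketch of get_tiny_llm: builds a tiny configuration,
    then lets **kwargs overwrite any field (e.g. num_hidden_layers=1)."""
    # Assumed tiny defaults; the real function derives them from a
    # transformers configuration such as LlamaConfig.
    config = {
        "num_hidden_layers": 2,
        "hidden_size": 32,
        "rope_scaling": {"rope_type": "dynamic"} if dynamic_rope else None,
    }
    config.update(kwargs)  # kwargs overwrite the configuration
    # The real function returns an uninitialized model plus example
    # inputs; here we only return the resolved settings.
    return {
        "config": config,
        "batch_size": batch_size,
        "input_cache": input_cache,
    }


data = get_tiny_llm_sketch(num_hidden_layers=1)
print(data["config"]["num_hidden_layers"])  # 1
```

With the real API, the same call shape applies: `get_tiny_llm(num_hidden_layers=1)` returns a dictionary whose contents can then be fed to the export workflow described in Export LLM with dynamic shapes.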