Documentation teachcompute 0.2.0

Lectures

  • Introduction
  • Build
    • Build with cython
    • Build with pybind11
    • Build with CUDA
    • Build Torch Extensions
  • Collections of perishable articles
    • 2025-05-31: Roadmap 2024-2025 (3A)
    • 2024-05-31: Roadmap 2023-2024 (3A)
    • 2023-11-31: 2023 roadmap reminder
    • 2023-05-31: Roadmap 2022-2023 (3A)
  • Code included in this library
    • teachcompute.args
    • teachcompute.__init__.py
    • teachcompute.ext_test_case
    • teachcompute.datasets
    • teachcompute.fctmr
    • teachcompute.memory_peak
    • validation
      • validation.cpu
      • validation.cuda
      • validation.cython
    • teachcompute.torch_models
    • Torch Extensions

Exercices

  • Presentations
    • Hash and distribution
    • Serialization
    • Random order for a sum
    • Associativity and matrix multiplication
  • Notebooks on Spark
    • Recursive reducers
    • Distributed reservoir sampling - exercise
    • Reduce skewed data
    • Skewed data, matching (solution)
    • Skewed data, matching - exercise
    • Custom mappers and reducers with SQL
    • First steps with Spark
    • Matrices in 3 columns
    • Spark et MLlib - ML
  • Parallelization of a vector sum with C++
    • Measuring CPU performance with a vector sum
    • Measuring CPU performance with a parallelized vector sum
    • Measuring CPU performance with a parallelized vector sum and AVX
  • Tensor manipulations with CUDA
    • Measuring CUDA performance with a vector addition
    • Measuring CUDA performance with a vector addition with streams
    • Measuring CUDA performance with a vector sum
  • Parallelization of a dot product
    • Compares dot implementations (numpy, python, blas)
    • Compares dot implementations (numpy, cython, c++, sse)
    • Compares dot implementations (numpy, c++, sse, openmp)
    • Compares matrix multiplication implementations
    • Compares matrix multiplication implementations with timeit
  • Parallelization, Matrix Calculation
    • Compares filtering implementations (numpy, cython)
  • Parallelization with processes
    • Parallelization of a dot product with processes (joblib)
    • Parallelization of a dot product with processes (concurrent.futures)
  • pytorch
    • Compares implementations for a Piecewise Linear
    • Export a LLAMA model into ONNX

Supplements

  • At a glance
    • All notebooks
    • Gallery of examples
    • Syntaxes and definitions
    • FAQ
  • License
  • Change Logs
Parallelization, Matrix Calculation

Matrix Calculation

  • Compares filtering implementations (numpy, cython)
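As an illustration of what such a filtering comparison measures, here is a minimal sketch (not the notebook's actual code; the function names are hypothetical) contrasting a vectorized numpy filter with a pure-Python baseline. A cython version would typically sit between the two in the benchmark:

```python
import numpy as np


def filter_numpy(values: np.ndarray, threshold: float) -> np.ndarray:
    # Vectorized filtering: boolean mask, evaluated in C by numpy.
    return values[values < threshold]


def filter_python(values: np.ndarray, threshold: float) -> np.ndarray:
    # Pure-Python baseline: same result, computed element by element.
    return np.array([v for v in values if v < threshold])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=10_000)
    # Both implementations must agree before timing them.
    assert np.array_equal(filter_numpy(data, 0.0), filter_python(data, 0.0))
```

The benchmark then times each function on arrays of increasing size; the vectorized version avoids the per-element interpreter overhead of the Python loop.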
Copyright © 2023-2024, Xavier Dupré
Made with Sphinx and @pradyunsg's Furo