Optimisation

Gradient

class mlstatpy.optim.sgd.SGDOptimizer(coef, learning_rate_init=0.1, lr_schedule='invscaling', momentum=0.9, power_t=0.5, early_th=None, min_threshold=None, max_threshold=None, l1=0.0, l2=0.0)[source]

Stochastic gradient descent optimizer with momentum.

Parameters:
  • coef – array, initial coefficients

  • learning_rate_init – float, the initial learning rate. It controls the step size used to update the weights.

  • lr_schedule – {“constant”, “adaptive”, “invscaling”}, learning rate schedule for weight updates: “constant” keeps the learning rate fixed at learning_rate_init; “invscaling” gradually decreases the learning rate at each time step t using an inverse scaling exponent power_t, learning_rate_ = learning_rate_init / pow(t, power_t); “adaptive” keeps the learning rate equal to learning_rate_init as long as the training loss keeps decreasing, and each time two consecutive epochs fail to decrease the training loss by tol (or fail to increase the validation score by tol if early stopping is on), the current learning rate is divided by 5. A small sketch of these schedules follows this list.

  • momentum – float, value of the momentum term, must be greater than or equal to 0.

  • power_t – double, exponent for the inverse scaling learning rate.

  • early_th – training stops if the error goes below this threshold

  • min_threshold – lower bound for parameters (can be None)

  • max_threshold – upper bound for parameters (can be None)

  • l1 – L1 regularization

  • l2 – L2 regularization
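
To make the learning rate schedules concrete, here is a minimal sketch written independently of the class; the function names below are illustrative, not part of mlstatpy.

<<<

def constant_lr(learning_rate_init, t, power_t=0.5):
    # "constant": the learning rate never changes
    return learning_rate_init


def invscaling_lr(learning_rate_init, t, power_t=0.5):
    # "invscaling": learning_rate_init / pow(t, power_t), as described above
    return learning_rate_init / pow(t, power_t)


def adaptive_lr(learning_rate, epochs_without_improvement):
    # "adaptive": divide the rate by 5 after 2 epochs without improvement
    if epochs_without_improvement >= 2:
        return learning_rate / 5
    return learning_rate


for t in range(1, 6):
    print(t, constant_lr(0.1, t), invscaling_lr(0.1, t))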

The class holds the following attributes:

  • learning_rate: float, the current learning rate

  • velocity: array, velocities used to update the parameters (the momentum update is sketched below)
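
The velocity combines with the gradient through the classical momentum update. The sketch below is an assumption about that rule (the standard formulation), not code extracted from the class.

<<<

import numpy


def momentum_step(coef, velocity, grad, learning_rate, momentum=0.9):
    # classical momentum update: keep a fraction of the previous velocity,
    # subtract the scaled gradient, then move the coefficients
    velocity = momentum * velocity - learning_rate * grad
    return coef + velocity, velocity


# illustrative call with arbitrary values
coef = numpy.zeros(3)
velocity = numpy.zeros(3)
grad = numpy.array([1.0, -2.0, 0.5])
coef, velocity = momentum_step(coef, velocity, grad, learning_rate=0.1)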

Stochastic Gradient Descent applied to linear regression

The following example shows how to optimize a simple linear regression.

<<<

import numpy
from mlstatpy.optim import SGDOptimizer


def fct_loss(c, X, y):
    # squared error of the linear model with coefficients c
    return numpy.linalg.norm(X @ c - y) ** 2


def fct_grad(c, x, y, i=0):
    # gradient of the loss for a single sample x, scaled by 0.1
    return x * (x @ c - y) * 0.1


# build a linear problem whose true coefficients are known
coef = numpy.array([0.5, 0.6, -0.7])
X = numpy.random.randn(10, 3)
y = X @ coef

# start from random coefficients and optimize them with SGD
sgd = SGDOptimizer(numpy.random.randn(3))
sgd.train(X, y, fct_loss, fct_grad, max_iter=15, verbose=True)
print("optimized coefficients:", sgd.coef)

>>>

    0/15: loss: 21.05 lr=0.1 max(coef): 0.69 l1=0/1 l2=0/0.58
    1/15: loss: 15.75 lr=0.0302 max(coef): 0.93 l1=0.21/1.1 l2=0.014/0.88
    2/15: loss: 6.724 lr=0.0218 max(coef): 0.96 l1=0.021/1.4 l2=0.00025/1
    3/15: loss: 4.083 lr=0.018 max(coef): 0.9 l1=0.36/1.6 l2=0.056/1.1
    4/15: loss: 2.768 lr=0.0156 max(coef): 0.8 l1=0.13/1.7 l2=0.0056/1.1
    5/15: loss: 1.861 lr=0.014 max(coef): 0.74 l1=0.023/1.7 l2=0.00018/1.1
    6/15: loss: 1.205 lr=0.0128 max(coef): 0.71 l1=0.013/1.7 l2=9.1e-05/1
    7/15: loss: 0.9286 lr=0.0119 max(coef): 0.68 l1=0.14/1.7 l2=0.0073/0.97
    8/15: loss: 0.6334 lr=0.0111 max(coef): 0.64 l1=0.014/1.7 l2=8.4e-05/0.96
    9/15: loss: 0.4701 lr=0.0105 max(coef): 0.62 l1=0.074/1.7 l2=0.002/0.95
    10/15: loss: 0.3124 lr=0.00995 max(coef): 0.59 l1=0.1/1.7 l2=0.0046/0.95
    11/15: loss: 0.2078 lr=0.00949 max(coef): 0.57 l1=0.03/1.7 l2=0.0003/0.98
    12/15: loss: 0.1579 lr=0.00909 max(coef): 0.59 l1=0.003/1.7 l2=5.1e-06/0.99
    13/15: loss: 0.1217 lr=0.00874 max(coef): 0.6 l1=0.0034/1.7 l2=4.6e-06/1
    14/15: loss: 0.09211 lr=0.00842 max(coef): 0.62 l1=0.014/1.7 l2=6.6e-05/1
    15/15: loss: 0.07229 lr=0.00814 max(coef): 0.63 l1=0.038/1.7 l2=0.00055/1
    optimized coefficients: [ 0.539  0.578 -0.627]
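
The constructor options above combine with the same training call. The following sketch reuses X, y, fct_loss and fct_grad from the previous example; the parameter values are illustrative assumptions, only the argument names come from the signature above.

<<<

sgd2 = SGDOptimizer(
    numpy.random.randn(3),
    learning_rate_init=0.1,
    l2=0.01,            # small L2 penalty on the coefficients (illustrative value)
    early_th=1e-3,      # stop as soon as the error goes below this threshold
    min_threshold=-2,   # clip the coefficients into [-2, 2]
    max_threshold=2,
)
sgd2.train(X, y, fct_loss, fct_grad, max_iter=50, verbose=False)
print("coefficients with regularization:", sgd2.coef)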