Examples
Compute a distance between two graphs.
See Distance between two graphs.
<<<
from mlstatpy.graph import GraphDistance

# We define two graphs as lists of edges.
graph1 = [
    ("a", "b"),
    ("b", "c"),
    ("b", "X"),
    ("X", "c"),
    ("c", "d"),
    ("d", "e"),
    ("0", "b"),
]
graph2 = [
    ("a", "b"),
    ("b", "c"),
    ("b", "X"),
    ("X", "c"),
    ("c", "t"),
    ("t", "d"),
    ("d", "e"),
    ("d", "g"),
]

# We convert them into GraphDistance objects.
graph1 = GraphDistance(graph1)
graph2 = GraphDistance(graph2)
distance, graph = graph1.distance_matching_graphs_paths(graph2, use_min=False)
print("distance", distance)
print("common paths:", graph)
>>>
distance 0.3318250377073907
common paths: 0
X
a
b
c
d
e
00
11
g
t
a -> b []
b -> c []
b -> X []
X -> c []
c -> d []
d -> e []
0 -> b []
00 -> a []
00 -> 0 []
e -> 11 []
c -> 2a.t []
2a.t -> d []
d -> 2a.g []
2a.g -> 11 []
(original entry: graph_distance.py:docstring of mlstatpy.graph.graph_distance.GraphDistance, line 3)
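As a quick sanity check, the distance between a graph and itself should be zero or very close to it, since every path can be matched. A minimal sketch reusing only the calls shown above (the exact value returned for identical graphs is an assumption, not verified here):

<<<
from mlstatpy.graph import GraphDistance

edges = [("a", "b"), ("b", "c"), ("c", "d")]
g1 = GraphDistance(edges)
g2 = GraphDistance(edges)
# Identical graphs: every path should match, so the distance
# is expected to be 0 or very close to it.
distance, common = g1.distance_matching_graphs_paths(g2, use_min=False)
print("self distance", distance)
>>>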
Stochastic Gradient Descent applied to linear regression
The following example shows how to optimize a simple linear regression. The gradient function returns a quantity proportional to the per-sample gradient of the squared error; the constant factor only rescales the learning rate.
<<<
import numpy
from mlstatpy.optim import SGDOptimizer

def fct_loss(c, X, y):
    # Squared error over the whole dataset, used for reporting.
    return numpy.linalg.norm(X @ c - y) ** 2

def fct_grad(c, x, y, i=0):
    # Proportional to the gradient of the squared error
    # for one sample; the 0.1 factor rescales the step.
    return x * (x @ c - y) * 0.1

coef = numpy.array([0.5, 0.6, -0.7])
X = numpy.random.randn(10, 3)
y = X @ coef

sgd = SGDOptimizer(numpy.random.randn(3))
sgd.train(X, y, fct_loss, fct_grad, max_iter=15, verbose=True)
print("optimized coefficients:", sgd.coef)
>>>
0/15: loss: 49.82 lr=0.1 max(coef): 1.3 l1=0/3 l2=0/3.1
1/15: loss: 13.98 lr=0.0302 max(coef): 0.8 l1=0.088/1.5 l2=0.0054/1.1
2/15: loss: 3.738 lr=0.0218 max(coef): 0.63 l1=0.022/1.3 l2=0.00018/0.67
3/15: loss: 1.999 lr=0.018 max(coef): 0.72 l1=0.24/1.4 l2=0.02/0.83
4/15: loss: 1.567 lr=0.0156 max(coef): 0.8 l1=0.06/1.5 l2=0.0026/0.93
5/15: loss: 1.402 lr=0.014 max(coef): 0.82 l1=0.0064/1.6 l2=1.8e-05/0.99
6/15: loss: 1.126 lr=0.0128 max(coef): 0.81 l1=0.036/1.6 l2=0.00045/0.97
7/15: loss: 0.925 lr=0.0119 max(coef): 0.79 l1=0.00078/1.6 l2=2.2e-07/0.95
8/15: loss: 0.7864 lr=0.0111 max(coef): 0.77 l1=0.0012/1.6 l2=5.3e-07/0.93
9/15: loss: 0.6454 lr=0.0105 max(coef): 0.76 l1=0.0088/1.6 l2=3.3e-05/0.91
10/15: loss: 0.5454 lr=0.00995 max(coef): 0.74 l1=0.042/1.6 l2=0.0012/0.89
11/15: loss: 0.4745 lr=0.00949 max(coef): 0.73 l1=0.039/1.6 l2=0.001/0.88
12/15: loss: 0.4059 lr=0.00909 max(coef): 0.71 l1=0.11/1.6 l2=0.0041/0.86
13/15: loss: 0.363 lr=0.00874 max(coef): 0.71 l1=0.047/1.6 l2=0.00086/0.86
14/15: loss: 0.3138 lr=0.00842 max(coef): 0.7 l1=0.0037/1.6 l2=5.1e-06/0.85
15/15: loss: 0.2867 lr=0.00814 max(coef): 0.69 l1=0.083/1.6 l2=0.0028/0.85
optimized coefficients: [ 0.415 0.691 -0.45 ]
(original entry: sgd.py:docstring of mlstatpy.optim.sgd.SGDOptimizer, line 34)
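Because the targets are generated without noise, ordinary least squares recovers the true coefficients exactly, which gives a reference point for the SGD estimate above. A minimal sketch using standard NumPy (numpy.linalg.lstsq), independent of SGDOptimizer:

<<<
import numpy

coef = numpy.array([0.5, 0.6, -0.7])
X = numpy.random.randn(10, 3)
y = X @ coef

# Noise-free targets: the closed-form least squares solution
# recovers the generating coefficients exactly.
solution, *_ = numpy.linalg.lstsq(X, y, rcond=None)
print("closed-form coefficients:", solution)
>>>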