Custom DecisionTreeRegressor adapted to a linear regression

A sklearn.tree.DecisionTreeRegressor can be trained with a couple of built-in criteria, but it is also possible to implement a custom one (see hellinger_distance_criterion). See also the tutorial Cython example of exposing C-computed arrays in Python without data copies, which describes a way to implement fast Cython extensions.
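As a reminder, the built-in criteria are selected by name while a custom criterion is passed as an instance. A minimal sketch with the names scikit-learn currently documents for regression trees (the toy data below is only for illustration):

import numpy
from sklearn.tree import DecisionTreeRegressor

Xd = numpy.random.normal(size=(100, 2))
yd = Xd[:, 0] * 2 + numpy.random.normal(size=100) * 0.1

# built-in criteria are selected by name
for name in ["squared_error", "friedman_mse", "absolute_error"]:
    tree = DecisionTreeRegressor(criterion=name, min_samples_leaf=10)
    tree.fit(Xd, yd)
    print(name, tree.get_depth())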

Piecewise data

Let’s build a toy problem based on two linear models.

import matplotlib.pyplot as plt
import numpy
import numpy.random as npr
from mlinsights.ext_test_case import measure_time
from mlinsights.mlmodel.piecewise_tree_regression import PiecewiseTreeRegressor
from mlinsights.mlmodel.piecewise_tree_regression_criterion import (
    SimpleRegressorCriterion,
)
from mlinsights.mlmodel.piecewise_tree_regression_criterion_fast import (
    SimpleRegressorCriterionFast,
)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X = npr.normal(size=(1000, 4))
alpha = [4, -2]
t = (X[:, 0] + X[:, 3] * 0.5) > 0
switch = numpy.zeros(X.shape[0])
switch[t] = 1
y = alpha[0] * X[:, 0] * t + alpha[1] * X[:, 0] * (1 - t) + X[:, 2]
fig, ax = plt.subplots(1, 1)
ax.plot(X[:, 0], y, ".")
ax.set_title("Piecewise examples")
Text(0.5, 1.0, 'Piecewise examples')

DecisionTreeRegressor

X_train, X_test, y_train, y_test = train_test_split(X[:, :1], y)  # first feature only, consistent with the shapes shown later
model = DecisionTreeRegressor(min_samples_leaf=100)
model.fit(X_train, y_train)
DecisionTreeRegressor(min_samples_leaf=100)

pred = model.predict(X_test)
pred[:5]
array([ 1.22084396, -0.00420038,  2.94896867, -0.00420038,  0.55651447])
fig, ax = plt.subplots(1, 1)
ax.plot(X_test[:, 0], y_test, ".", label="data")
ax.plot(X_test[:, 0], pred, ".", label="predictions")
ax.set_title("DecisionTreeRegressor")
ax.legend()
<matplotlib.legend.Legend object at 0x7f846472cfd0>

DecisionTreeRegressor with custom implementation

model2 = DecisionTreeRegressor(
    min_samples_leaf=100, criterion=SimpleRegressorCriterion(1, X_train.shape[0])
)
model2.fit(X_train, y_train)
DecisionTreeRegressor(criterion=<mlinsights.mlmodel.piecewise_tree_regression_criterion.SimpleRegressorCriterion object at 0x55c473336390>,
                      min_samples_leaf=100)


pred = model2.predict(X_test)
pred[:5]
array([1.22084396, 0.02146014, 2.94896867, 0.02146014, 0.02146014])
fig, ax = plt.subplots(1, 1)
ax.plot(X_test[:, 0], y_test, ".", label="data")
ax.plot(X_test[:, 0], pred, ".", label="predictions")
ax.set_title("DecisionTreeRegressor\nwith custom criterion")
ax.legend()
<matplotlib.legend.Legend object at 0x7f84740a3250>

Computation time

The custom criterion is not really efficient, but it was meant that way. The code can be found in piecewise_tree_regression_criterion. Basically, it is slow because each time the algorithm optimizing the tree asks the class Criterion to evaluate the impurity reduction for a split, the computation happens on the whole data under the node being split. The implementation in _criterion.pyx does it only once.
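The effect can be sketched in plain NumPy, independently of the Cython code: with the slow criterion, every candidate split of a node with n samples triggers a pass over the node's data, so scanning all splits costs O(n²) instead of O(n).

import numpy

yv = numpy.random.normal(size=1000)

def all_split_impurities_naive(yv):
    # every candidate split recomputes the means and squared errors of both
    # children from scratch: O(n) work per split, O(n^2) for the whole node
    impurities = []
    for split in range(1, len(yv)):
        left, right = yv[:split], yv[split:]
        impurities.append(
            ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        )
    return impurities

The faster version in the next section replaces those inner passes with cumulated sums.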

measure_time("model.fit(X_train, y_train)", globals())
{'average': np.float64(0.0006277020659908886), 'deviation': np.float64(0.00013720017019929376), 'min_exec': np.float64(0.0004980177799734519), 'max_exec': np.float64(0.0008900149600231089), 'repeat': 10, 'number': 50, 'ttime': np.float64(0.006277020659908886), 'context_size': 1176, 'warmup_time': 0.000983060999715235}
measure_time("model2.fit(X_train, y_train)", globals())
{'average': np.float64(0.0029189347680076026), 'deviation': np.float64(0.00021396753727527702), 'min_exec': np.float64(0.002569425140027306), 'max_exec': np.float64(0.003275515240020468), 'repeat': 10, 'number': 50, 'ttime': np.float64(0.029189347680076028), 'context_size': 1176, 'warmup_time': 0.0035421940010564867}

A loop is involved every time the criterion of a node is evaluated, which raises the computation cost a lot. The method _mse is called each time the algorithm training the decision tree needs to evaluate a cut; one cut involves the elements between positions [start, end[.

ctypedef double float64_t

cdef void _mean(self, intp_t start, intp_t end, float64_t *mean,
                float64_t *weight) nogil:
    if start == end:
        mean[0] = 0.
        return
    cdef float64_t m = 0.
    cdef float64_t w = 0.
    cdef int k
    for k in range(start, end):
        m += self.sample_wy[k]
        w += self.sample_w[k]
    weight[0] = w
    mean[0] = 0. if w == 0. else m / w

cdef float64_t _mse(self, intp_t start, intp_t end, float64_t mean,
                 float64_t weight) nogil:
    if start == end:
        return 0.
    cdef float64_t squ = 0.
    cdef int k
    for k in range(start, end):
        squ += (self.y[self.sample_i[k], 0] - mean) ** 2 * self.sample_w[k]
    return 0. if weight == 0. else squ / weight

Better implementation

I rewrote my first implementation to be closer to what scikit-learn is doing. The criterion is computed once for all possible cuts and then retrieved on demand. The code is below; the array sample_wy_left holds the cumulated sums of \(weight * Y\) computed from the left side (lower Y). The inner loop disappeared.

ctypedef double float64_t

cdef void _mean(self, intp_t start, intp_t end, float64_t *mean,
                float64_t *weight) nogil:
    if start == end:
        mean[0] = 0.
        return
    cdef float64_t m = (self.sample_wy_left[end-1] -
                        (self.sample_wy_left[start-1] if start > 0 else 0))
    cdef float64_t w = (self.sample_w_left[end-1] -
                        (self.sample_w_left[start-1] if start > 0 else 0))
    weight[0] = w
    mean[0] = 0. if w == 0. else m / w

cdef float64_t _mse(self, intp_t start, intp_t end, float64_t mean,
                    float64_t weight) nogil:
    if start == end:
        return 0.
    cdef float64_t squ = (self.sample_wy2_left[end-1] -
                          (self.sample_wy2_left[start-1] if start > 0 else 0))
    # This formula only holds if mean is computed on the same interval.
    # Otherwise, it is squ / weight - true_mean ** 2 + (mean - true_mean) ** 2.
    return 0. if weight == 0. else squ / weight - mean ** 2
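
The identity used in _mse can be checked with a small NumPy snippet, independent from the Cython code: on an interval, \(\sum_i w_i (y_i - \bar{y})^2 / W = \sum_i w_i y_i^2 / W - \bar{y}^2\) when \(\bar{y}\) is the weighted mean of that same interval.

import numpy

yv = numpy.random.normal(size=200)
wv = numpy.abs(numpy.random.normal(size=200)) + 0.1

cw = numpy.cumsum(wv)            # cumulated weights
cwy = numpy.cumsum(wv * yv)      # cumulated weight * y
cwy2 = numpy.cumsum(wv * yv**2)  # cumulated weight * y^2

def interval_sum(c, start, end):
    return c[end - 1] - (c[start - 1] if start > 0 else 0.0)

start, end = 37, 158
W = interval_sum(cw, start, end)
mean = interval_sum(cwy, start, end) / W
mse_fast = interval_sum(cwy2, start, end) / W - mean**2
mse_direct = ((yv[start:end] - mean) ** 2 * wv[start:end]).sum() / W
assert abs(mse_fast - mse_direct) < 1e-8
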
model3 = DecisionTreeRegressor(
    min_samples_leaf=100, criterion=SimpleRegressorCriterionFast(1, X_train.shape[0])
)
model3.fit(X_train, y_train)
pred = model3.predict(X_test)
pred[:5]
array([1.22084396, 0.02146014, 2.94896867, 0.02146014, 0.02146014])
fig, ax = plt.subplots(1, 1)
ax.plot(X_test[:, 0], y_test, ".", label="data")
ax.plot(X_test[:, 0], pred, ".", label="predictions")
ax.set_title("DecisionTreeRegressor\nwith fast custom criterion")
ax.legend()
<matplotlib.legend.Legend object at 0x7f8474051db0>
measure_time("model3.fit(X_train, y_train)", globals())
{'average': np.float64(0.0005952476640013629), 'deviation': np.float64(9.200797957516122e-05), 'min_exec': np.float64(0.0004608890599774895), 'max_exec': np.float64(0.0008051304800028447), 'repeat': 10, 'number': 50, 'ttime': np.float64(0.005952476640013629), 'context_size': 1176, 'warmup_time': 0.0019230220023018774}

Much better, even though this implementation is currently 3 to 4 times slower than scikit-learn's. Let's check with a dataset three times bigger to see whether it is a fixed cost or a cost growing with the data size.

X_train3 = numpy.vstack([X_train, X_train, X_train])  # training set stacked three times
y_train3 = numpy.hstack([y_train, y_train, y_train])
X_train.shape, X_train3.shape, y_train3.shape
((750, 1), (2250, 1), (2250,))
measure_time("model.fit(X_train3, y_train3)", globals())
{'average': np.float64(0.0013359763760090574), 'deviation': np.float64(0.00012585663500117283), 'min_exec': np.float64(0.0011745783000515076), 'max_exec': np.float64(0.0016119259800325381), 'repeat': 10, 'number': 50, 'ttime': np.float64(0.013359763760090574), 'context_size': 1176, 'warmup_time': 0.0013088390005577821}

The criterion needs to be re-instantiated since it depends on the training data: it is built with the number of samples of X. The computation itself does not require it, but the design does. This design was introduced to compare the current output with a decision tree optimizing a piecewise linear regression and not a stepwise regression.
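A small wrapper can hide this constraint by rebuilding the criterion from the current training set before fitting (a sketch; fit_with_fast_criterion is not part of mlinsights). The cells below first show the error raised when the stale criterion is reused, then rebuild it by hand.

def fit_with_fast_criterion(X, y, **tree_kwargs):
    # the criterion must be created with the number of samples of the
    # training set it will see, hence it is rebuilt for every new fit
    crit = SimpleRegressorCriterionFast(1, X.shape[0])
    tree = DecisionTreeRegressor(criterion=crit, **tree_kwargs)
    return tree.fit(X, y)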

try:
    model3.fit(X_train3, y_train3)
except Exception as e:
    print(e)
n_samples=750 -- y.shape=[2250, 1, 0, 0, 0, 0, 0, 0]
model3 = DecisionTreeRegressor(
    min_samples_leaf=100, criterion=SimpleRegressorCriterionFast(1, X_train3.shape[0])
)
measure_time("model3.fit(X_train3, y_train3)", globals())
{'average': np.float64(0.0015539457760241932), 'deviation': np.float64(0.00025411727907129666), 'min_exec': np.float64(0.0012243494600261329), 'max_exec': np.float64(0.002184680420032237), 'repeat': 10, 'number': 50, 'ttime': np.float64(0.015539457760241931), 'context_size': 1176, 'warmup_time': 0.0022764530003769323}

Still almost two times slower, but of the same order of magnitude. We could go further and investigate why, or continue and introduce a criterion which optimizes a piecewise linear regression instead of a stepwise regression.

Criterion adapted for a linear regression

The previous examples are all about decision trees which approximate a function with a stepwise function. On every interval \([r_1, r_2]\), the model minimizes \(\sum_i (y_i - C)^2 \mathbb{1}_{r_1 \leqslant x_i \leqslant r_2}\) and finds the best constant (which is the average) approximating the function on this interval. We would now like to approximate the function with a regression line and not a constant anymore. It means minimizing \(\sum_i (y_i - X_i \beta)^2 \mathbb{1}_{r_1 \leqslant x_i \leqslant r_2}\). Doing this requires changing the criterion used to split the feature space into buckets, and the prediction function of the decision tree, which now needs to return a dot product.
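Before looking at the implementation, the quantity such a criterion has to evaluate on one interval can be sketched with numpy.linalg.lstsq (a toy illustration, independent from the Cython criterion used by PiecewiseTreeRegressor):

def interval_linear_sse(X, y, order, start, end):
    # sum of squared errors of a linear regression (with intercept) fitted
    # on the observations ranked between positions [start, end[
    idx = order[start:end]
    Xi = numpy.hstack([X[idx], numpy.ones((end - start, 1))])
    beta, *_ = numpy.linalg.lstsq(Xi, y[idx], rcond=None)
    return ((y[idx] - Xi @ beta) ** 2).sum()

# impurity of a candidate split at position p along the first feature
order = numpy.argsort(X[:, 0])
p = X.shape[0] // 2
impurity = interval_linear_sse(X, y, order, 0, p) + interval_linear_sse(
    X, y, order, p, X.shape[0]
)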

fixed = False
if fixed:
    # It does not work yet.
    piece = PiecewiseTreeRegressor(criterion="mselin", min_samples_leaf=100)
    piece.fit(X_train, y_train)
if fixed:
    pred = piece.predict(X_test)
    pred[:5]
if fixed:
    fig, ax = plt.subplots(1, 1)
    ax.plot(X_test[:, 0], y_test, ".", label="data")
    ax.plot(X_test[:, 0], pred, ".", label="predictions")
    ax.set_title("DecisionTreeRegressor\nwith criterion adapted to linear regression")
    ax.legend()

The coefficients of the linear regressions are kept in the following attribute:

if fixed:
    piece.betas_

Mapped to the following leaves:

if fixed:
    piece.leaves_index_, piece.leaves_mapping_

We can get the leaf each observation falls into:

if fixed:
    piece.predict_leaves(X_test)[:5]

The training is quite slow as it is training many linear regressions each time a split is evaluated.

if fixed:
    measure_time("piece.fit(X_train, y_train)", globals())
if fixed:
    measure_time("piece.fit(X_train3, y_train3)", globals())

It works but it is slow, slower than the slow implementation of the standard criterion for decision tree regression.

Next

PR Model trees (M5P and co) and issue Model trees (M5P) propose an implementation of piecewise regression with any kind of regression model. It is based on Building Model Trees. It fits many models to find the best splits and should be slower than this implementation in the case of a decision tree regressor associated with linear regressions.

Total running time of the script: (0 minutes 3.894 seconds)
