mlinsights.sklapi#
The following implementations experiment with the scikit-learn API: they rewrite the code handling parameters. They are mostly useful to check the stability of that API.
SkBase#
- class mlinsights.sklapi.sklearn_base.SkBase(**kwargs)[source]#
Pattern of a learner or a transform which follows the API of scikit-learn.
- static compare_params(p1: Dict[str, Any], p2: Dict[str, Any], exc: bool = True) bool [source]#
Compares two sets of parameters.
- Parameters:
p1 – dictionary
p2 – dictionary
exc – raises an exception if the comparison fails
- Returns:
boolean
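A small usage sketch, assuming the behaviour suggested by the signature above: identical dictionaries compare as equal and, with exc=False, a mismatch is reported through the returned boolean instead of an exception. The dictionaries are made up for the illustration.
<<<
from mlinsights.sklapi.sklearn_base import SkBase

p1 = dict(alpha=1.0, fit_intercept=True)
p2 = dict(alpha=0.5, fit_intercept=True)

# Two dictionaries with the same content are expected to compare as equal.
print(SkBase.compare_params(p1, dict(p1)))

# With exc=False, a difference should be reported as False rather than raised.
print(SkBase.compare_params(p1, p2, exc=False))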
- fit(X, y=None, sample_weight=None)[source]#
Trains a model.
- Parameters:
X – features
y – target
sample_weight – weight
- Returns:
self
- get_params(deep=True)[source]#
Returns the parameters which define the object, all are needed to clone the object.
- Parameters:
deep – unused here
- Returns:
dict
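As an illustrative sketch, a learner derived from SkBase only needs to pass its constructor arguments to the base class for get_params to return them, which is also what scikit-learn's clone relies on. The class ConstantLearner, its parameter value and the way the prediction is computed are assumptions made for the example, not part of the library.
<<<
import numpy
from sklearn.base import clone
from mlinsights.sklapi.sklearn_base import SkBase


class ConstantLearner(SkBase):
    # Hypothetical learner: ignores the features and predicts a constant value.
    def __init__(self, value=0.0, **kwargs):
        SkBase.__init__(self, value=value, **kwargs)
        self.value = value

    def fit(self, X, y=None, sample_weight=None):
        return self

    def predict(self, X):
        return numpy.full(len(X), self.value)


model = ConstantLearner(value=1.5)
print(model.get_params())            # the arguments given to the constructor
print(clone(model).predict([[0]]))   # the clone is rebuilt from get_params alone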
SkBaseClassifier#
- class mlinsights.sklapi.sklearn_base_classifier.SkBaseClassifier(**kwargs)[source]#
Defines a custom classifier.
- predict_proba(X)[source]#
Returns probability estimates for the test data X.
- Parameters:
X – Test data, numpy array or sparse matrix of shape [n_samples, n_features]
- Returns:
array, shape = (n_samples, n_classes), returns the probability estimates.
- score(X, y=None, sample_weight=None)[source]#
Returns the mean accuracy on the given test data and labels.
- Parameters:
X – Test data, numpy array or sparse matrix of shape [n_samples, n_features]
y – Target values, numpy array of shape [n_samples, n_targets] (optional)
sample_weight – Weight values, numpy array of shape [n_samples, n_targets] (optional)
- Returns:
score : float, mean accuracy of self.predict(X) w.r.t. y.
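The mean accuracy is simply the fraction of samples for which the predicted label matches the expected one, as in the short sketch below (labels and predictions are made up for the illustration).
<<<
import numpy
from sklearn.metrics import accuracy_score

y_true = numpy.array([0, 1, 2, 1, 0])
y_pred = numpy.array([0, 1, 1, 1, 0])

# Fraction of correct predictions...
print((y_pred == y_true).mean())
# ... which is exactly what accuracy_score computes.
print(accuracy_score(y_true, y_pred))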
SkException#
SkBaseLearner#
- class mlinsights.sklapi.sklearn_base_learner.SkBaseLearner(**kwargs)[source]#
Pattern of a learner which follows the same API as scikit-learn.
- decision_function(X)[source]#
Returns the output of the model: the predicted values for a regressor, or a matrix with one score per class and per sample for a classifier.
- Parameters:
X – Samples, {array-like, sparse matrix}, shape = (n_samples, n_features)
- Returns:
array of shape (n_samples,) for a regressor, or (n_samples, n_classes) for a classifier.
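For example, with a plain scikit-learn classifier (the sketch below is not specific to SkBaseLearner), the decision function returns one score per class and per sample:
<<<
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# One row per sample, one column per class: (3, 3) for three samples of iris.
print(clf.decision_function(X[:3]).shape)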
- fit(X, y=None, sample_weight=None)[source]#
Trains a model.
- Parameters:
X – features
y – targets
sample_weight – weight
- Returns:
self
- score(X, y=None, sample_weight=None)[source]#
Returns the mean accuracy on the given test data and labels.
- Parameters:
X – Test data, numpy array or sparse matrix of shape [n_samples, n_features]
y – Target values, numpy array of shape [n_samples, n_targets] (optional)
sample_weight – Weight values, numpy array of shape [n_samples, n_targets] (optional)
- Returns:
score : float, mean accuracy of self.predict(X) w.r.t. y.
SkLearnParameters#
- class mlinsights.sklapi.sklearn_parameters.SkLearnParameters(**kwargs)[source]#
Defines a class to store parameters of a learner or a transform.
- property Keys#
Returns parameter names.
- validate(name, value)[source]#
Verifies a parameter and its value.
- Parameters:
name – parameter name
value – parameter value
- Raises:
raises SkException if the parameter is not valid
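A usage sketch; the parameter names are invented and SkException is assumed to be importable from the same module as SkLearnParameters.
<<<
from mlinsights.sklapi.sklearn_parameters import SkException, SkLearnParameters

params = SkLearnParameters(alpha=1.0, fit_intercept=True)
print(params.Keys)  # the parameter names given to the constructor

try:
    # validate is documented to raise SkException when a parameter is not valid.
    params.validate("alpha", 1.0)
    print("valid parameter")
except SkException as e:
    print("invalid parameter:", e)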
SkBaseRegressor#
- class mlinsights.sklapi.sklearn_base_regressor.SkBaseRegressor(**kwargs)[source]#
Defines a custom regressor.
- score(X, y=None, sample_weight=None)[source]#
Returns the mean accuracy on the given test data and labels.
- Parameters:
X – Test data, numpy array or sparse matrix of shape [n_samples, n_features]
y – Target values, numpy array of shape [n_samples, n_targets] (optional)
sample_weight – Weight values, numpy array of shape [n_samples, n_targets] (optional)
- Returns:
score : float, mean accuracy of self.predict(X) w.r.t. y.
SkBaseTransform#
- class mlinsights.sklapi.sklearn_base_transform.SkBaseTransform(**kwargs)[source]#
Pattern of a learner which follows the same API as scikit-learn.
SkBaseTransformLearner#
- class mlinsights.sklapi.sklearn_base_transform_learner.SkBaseTransformLearner(model=None, method=None, **kwargs)[source]#
A transform which hides a learner: it converts the method predict into transform. This way, two learners can be inserted into the same pipeline. There is another and shorter implementation with the class TransferTransformer.
Use two learners in the same pipeline
It is impossible to use two learners in a pipeline unless we use a class such as SkBaseTransformLearner which disguises a learner as a transform.
<<<
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from mlinsights.sklapi import SkBaseTransformLearner

data = load_iris()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)

try:
    pipe = make_pipeline(LogisticRegression(), DecisionTreeClassifier())
except Exception as e:
    print("ERROR:")
    print(e)
    print(".")

pipe = make_pipeline(
    SkBaseTransformLearner(LogisticRegression()),
    DecisionTreeClassifier()
)
pipe.fit(X_train, y_train)
pred = pipe.predict(X_test)
score = accuracy_score(y_test, pred)
print("pipeline with two learners:", score)
>>>
pipeline with two learners: 0.9736842105263158
- fit(X, y=None, **kwargs)[source]#
Trains a model.
- Parameters:
X – features
y – targets
kwargs – additional parameters
- Returns:
self
SkBaseTransformStacking#
- class mlinsights.sklapi.sklearn_base_transform_stacking.SkBaseTransformStacking(models=None, method=None, **kwargs)[source]#
A transform which hides several learners arranged according to the stacking method.
Stacking several learners in a scikit-learn pipeline
This transform assembles the outputs of several learners. These features serve as input to a stacking model.
<<<
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from mlinsights.sklapi import SkBaseTransformStacking

data = load_iris()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)

trans = SkBaseTransformStacking([LogisticRegression(), DecisionTreeClassifier()])
trans.fit(X_train, y_train)
pred = trans.transform(X_test)
print(pred[3:])
>>>
[[2 2] [1 1] [2 2] [0 0] [1 1] [0 0] [2 2] [1 1] [1 1] [0 0] [0 0] [1 1] [1 1] [1 1] [2 2] [2 2] [1 1] [1 1] [1 1] [0 0] [0 0] [2 2] [0 0] [2 2] [2 2] [2 2] [2 2] [0 0] [0 0] [1 1] [1 1] [0 0] [1 1] [2 2] [0 0]]
- fit(X, y=None, **kwargs)[source]#
Trains a model.
- Parameters:
X – features
y – targets
kwargs – additional parameters
- Returns:
self
- get_params(deep=True)[source]#
Returns the parameters which define the object. It follows scikit-learn API.
- Parameters:
deep – unused here
- Returns:
dict