diff --git a/docs/index.html b/docs/index.html
index c6b4759..4617259 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -633,7 +633,31 @@
xai.convert_probs(probs, threshold=0.5)
Convert probabilities into classes.
+Converts each probability in the provided array into a binary label,
+using the provided threshold (0.5 by default).
+Example
+probs = np.array([0.1, 0.2, 0.7, 0.8, 0.6])
+labels = xai.convert_probs(probs, threshold=0.65)
+print(labels)
+
+> [0, 0, 1, 1, 0]
+
probs (ndarray) – Numpy array or list containing floats between 0 and 1
threshold (float) – Float threshold over which probabilities will be converted to 1
Numpy array containing the labels based on the threshold provided
+np.ndarray
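The library's implementation isn't shown on this page; a minimal sketch of the thresholding behaviour described above, assuming a plain NumPy comparison (the function body here is illustrative, not the library's actual code):

```python
import numpy as np

def convert_probs(probs, threshold=0.5):
    # Probabilities strictly over the threshold become 1, the rest 0.
    return (np.asarray(probs) > threshold).astype(int)

probs = np.array([0.1, 0.2, 0.7, 0.8, 0.6])
print(convert_probs(probs, threshold=0.65))  # → [0 0 1 1 0]
```

Note that 0.6 maps to 0 here because it does not exceed the 0.65 threshold, matching the example output above.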
+xai.evaluation_metrics(y_valid, y_pred)
+Calculates model performance metrics (accuracy, precision, recall, etc.)
+from the actual and predicted labels provided.
+Example
+y_actual: np.ndarray
+y_predicted: np.ndarray
+
+metrics = xai.evaluation_metrics(y_actual, y_predicted)
+for k,v in metrics.items():
+ print(f"{k}: {v}")
+
+> precision: 0.8,
+> recall: 0.9,
+> specificity: 0.7,
+> accuracy: 0.8,
+> auc: 0.7,
+> f1: 0.8
+
y_valid – Numpy array with the actual labels for the datapoints
y_pred – Numpy array with the predicted labels for the datapoints
Dictionary containing the metrics as follows:
+return {
+ "precision": precision,
+ "recall": recall,
+ "specificity": specificity,
+ "accuracy": accuracy,
+ "auc": auc,
+ "f1": f1
+}
+
Dict[str, float]
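The docstring lists the returned metrics but not how they are derived; a hedged sketch computing them from a confusion matrix (the real library may delegate to scikit-learn, and `auc` is approximated here as balanced accuracy, since only hard labels are available in this signature):

```python
import numpy as np

def evaluation_metrics(y_valid, y_pred):
    y_valid, y_pred = np.asarray(y_valid), np.asarray(y_pred)
    tp = np.sum((y_valid == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_valid == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_valid == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_valid == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_valid)
    # With hard 0/1 predictions, ROC AUC reduces to balanced accuracy
    # (an assumption of this sketch, not necessarily the library's choice).
    auc = (recall + specificity) / 2
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "specificity": specificity,
            "accuracy": accuracy, "auc": auc, "f1": f1}
```

The dictionary keys mirror the return structure shown above.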
+xai.metrics_plot(target, predicted, df=pd.DataFrame(), cross_cols=[], categorical_cols=[], bins=6, plot=True, exclude_metrics=[], plot_threshold=0.5)
+Creates a plot that displays statistical metrics including precision,
+recall, accuracy, auc, f1 and specificity for each of the groups created
+from the columns provided by cross_cols. For example, if the columns passed
+are “gender” and “age”, the resulting plot will show the statistical metrics
+for Male and Female for each binned age group.
+Example
+target: np.ndarray
+predicted: np.ndarray
+
+df_metrics = xai.metrics_plot(
+ target,
+ predicted,
+ df=df_data,
+ cross_cols=["gender", "age"],
+ bins=3
+)
+
target (ndarray) – Numpy array containing the target labels for the datapoints
predicted (ndarray) – Numpy array containing the predicted labels for the datapoints
df (DataFrame) – Pandas dataframe containing all the features for the datapoints.
+It can be empty if you are only looking to calculate global metrics, but
+if you would like to compute metrics for categories across columns, the
+columns you are grouping by need to be provided
cross_cols (List[str]) – Contains the columns that you would like to use to cross the values
bins (int) – [Default: 6] The number of bins into which you’d like
+numerical columns to be split
plot (bool) – [Default: True] If True, a plot will be drawn with the results
exclude_metrics (List[str]) – The metrics you can choose to exclude if you only
+want specific ones (for example, excluding “f1”, “specificity”, etc.)
plot_threshold (float) – The percentage used to draw the threshold line in the plot,
+which provides guidance on the ideal metric values to achieve
Pandas Dataframe containing all the metrics for the groups provided
+pd.DataFrame
+
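metrics_plot groups the data by cross_cols, binning numerical columns, before computing per-group metrics; a simplified sketch of that grouping step, computing only per-group accuracy (the function name `grouped_accuracy` and its body are illustrative assumptions, not the library's actual code):

```python
import numpy as np
import pandas as pd

def grouped_accuracy(target, predicted, df, cross_cols, bins=6):
    data = df.copy()
    # Mark each row as correctly or incorrectly predicted.
    data["__correct"] = (np.asarray(target) == np.asarray(predicted)).astype(float)
    keys = []
    for col in cross_cols:
        if pd.api.types.is_numeric_dtype(data[col]):
            # Numerical columns are split into `bins` intervals, as in metrics_plot.
            keys.append(pd.cut(data[col], bins=bins))
        else:
            keys.append(data[col])
    # Mean of the correctness flag per group is the group's accuracy.
    return data.groupby(keys, observed=True)["__correct"].mean()

df_data = pd.DataFrame({"gender": ["M", "F", "M", "F"], "age": [23, 31, 45, 52]})
print(grouped_accuracy([1, 0, 1, 0], [1, 0, 0, 0], df_data, ["gender", "age"], bins=2))
```

Crossing "gender" with a binned "age" yields one accuracy value per (gender, age-interval) pair, which is the grouping the plot described above visualises.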