🔥🔥Official Repository for Multi-Human-Parsing (MHP)🔥🔥
A JavaScript implementation of the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) evaluation metric for summaries.
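As a rough illustration (in Python rather than the repo's JavaScript), ROUGE-N recall is the fraction of reference n-grams that also appear in the candidate summary; a minimal sketch:

```python
# Minimal sketch of ROUGE-N recall (illustrative only, not the repo's API):
# the fraction of reference n-grams that also occur in the candidate.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate, reference, n=1):
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

print(rouge_n_recall("the cat sat on the mat", "the cat was on the mat"))  # 5/6 ≈ 0.83
```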
PyTorch code for FLD (Feature Likelihood Divergence), FID, KID, Precision, Recall, etc. using DINOv2, InceptionV3, CLIP, etc.
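For reference, FID compares the Gaussian statistics of two feature sets: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). A minimal NumPy/SciPy sketch, assuming the backbone features (DINOv2, InceptionV3, CLIP, ...) have already been extracted as arrays; this is not the repo's API:

```python
# Minimal Fréchet distance between two sets of features, each of shape
# (num_samples, feature_dim). Assumes features are precomputed.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats1, feats2):
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    s1 = np.cov(feats1, rowvar=False)
    s2 = np.cov(feats2, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))
```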
Zone Evaluation: Revealing Spatial Bias in Object Detection (TPAMI 2024)
LeBLEU: Levenshtein/Letter-edit BLEU, N-gram-based Translation Evaluation Score for Morphologically Complex Languages
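LeBLEU builds on letter-edit (Levenshtein) distance rather than exact token matches; a minimal sketch of that underlying edit-distance computation (illustrative only, not the repo's code):

```python
# Character-level Levenshtein distance via the classic two-row DP.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```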
MATLAB toolbox for evaluating dynamic link prediction accuracy
A Practical Quality Metric for Semantic Role Labeling Systems Evaluation
Implementation of the Fréchet Distance with a DINOv2 backbone in PyTorch.
Metric evaluator for Automatic Speech Recognition using the HATS dataset
Evaluation metrics for information retrieval systems (e.g., search engines), implemented in R.
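Two of the most common IR evaluation metrics, precision@k and average precision, sketched in Python for illustration (the repo itself is in R); `ranked` is a list of 0/1 relevance labels in rank order:

```python
# Precision@k: fraction of the top-k results that are relevant.
def precision_at_k(ranked, k):
    return sum(ranked[:k]) / k

# Average precision: mean of precision@i over the ranks i of relevant hits.
def average_precision(ranked):
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked, 1):
        if rel:
            hits += 1
            score += hits / i
    return score / hits if hits else 0.0

print(precision_at_k([1, 0, 1, 1, 0], 3))  # 2/3
print(average_precision([1, 0, 1, 1, 0]))  # (1/1 + 2/3 + 3/4) / 3 ≈ 0.81
```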
Evaluation of regression and classification models.
Source code from the AUCCalculator jar (http://mark.goadrich.com/programs/AUC).
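ROC AUC can be read as the probability that a randomly chosen positive is scored above a randomly chosen negative; a minimal O(n²) Python sketch of that rank-sum formulation (not the AUCCalculator implementation, which is Java):

```python
# ROC AUC as the fraction of positive/negative pairs ranked correctly,
# counting ties as half a win (Mann-Whitney formulation).
def roc_auc(labels, scores):
    pairs, wins = 0, 0.0
    for lp, sp in zip(labels, scores):
        if lp != 1:
            continue
        for ln, sn in zip(labels, scores):
            if ln != 0:
                continue
            pairs += 1
            wins += 1.0 if sp > sn else 0.5 if sp == sn else 0.0
    return wins / pairs

print(roc_auc([1, 0, 1, 0], [0.9, 0.4, 0.45, 0.5]))  # 0.75
```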
Using A/B testing to evaluate business decisions, for a Udacity project.
Built fraud-detection classifiers in Python using Gaussian naive Bayes and decision trees to identify POIs (persons of interest), applying machine learning techniques such as feature selection, precision and recall, and stochastic gradient descent for optimization.
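A minimal sketch of the precision and recall computation used to evaluate such a classifier (illustrative only, not the project's code):

```python
# Precision: of the samples predicted positive, how many are truly positive.
# Recall: of the truly positive samples, how many were predicted positive.
def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall([1, 0, 1, 1, 0], [1, 1, 0, 1, 0]))  # (0.667, 0.667)
```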
The proposed algorithm eliminates 108 rows from the Pima Diabetes Dataset using the skewness range of the normal distribution curve. To check its efficacy, it is compared with four techniques: Local Outlier Factor, Mahalanobis Distance, Multivariate Normal Distribution (N-dimensional), and DBSCAN.
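The paper's exact skewness-range rule isn't reproduced here; as an illustration, a minimal sketch of one of its comparison baselines, Mahalanobis-distance outlier flagging (the chi-squared cut-off is an assumption, not taken from the paper):

```python
# Flag rows whose squared Mahalanobis distance from the sample mean exceeds
# a chi-squared quantile; X has shape (num_rows, num_features).
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.975):
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)
    return d2 > chi2.ppf(alpha, df=X.shape[1])  # True = flagged as outlier
```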
A repository for the heart disease paper published by Springer.
The main goal, as a machine learning researcher, is to carry out data exploration, data cleaning, and feature extraction, and to develop robust machine learning algorithms to aid the department.
Application of machine learning in Python | Coursera