
ML internals: Synthetic Minority Oversampling (SMOTE) Technique

Domino Data Lab

This carries the risk of the modification performing worse than simpler approaches such as majority under-sampling. Indeed, in the original paper Chawla et al. note that this variant “performs worse than plain under-sampling based on AUC” when tested on the Adult dataset (Dua & Graff, 2017).
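The excerpt contrasts SMOTE with plain under-sampling. The core SMOTE idea is to create synthetic minority points by interpolating between a minority sample and one of its k nearest minority-class neighbours. A minimal NumPy sketch of that interpolation step (the function name and parameters are illustrative, not taken from the article or the reference implementation):

```python
import numpy as np

def smote_sample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between each minority point and one of its k nearest minority
    neighbours, as in the basic SMOTE procedure."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class only
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each point itself
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours per point
    out = []
    for _ in range(n_new):
        i = rng.integers(n)              # pick a minority point
        j = nn[i, rng.integers(k)]       # pick one of its neighbours
        gap = rng.random()               # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(out)
```

Because every synthetic point is a convex combination of two existing minority points, the new samples always lie inside the convex hull of the minority class; production code would typically use the `SMOTE` class from `imbalanced-learn` instead.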


Changing assignment weights with time-based confounders

The Unofficial Google Data Science Blog

One reason to do ramp-up is to mitigate the risk of never-before-seen arms. A ramp-up strategy may also mitigate the risk of upsetting the site’s loyal users, who perhaps have strong preferences for the current statistics that are shown. Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining.
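A ramp-up like the one the excerpt describes is often implemented as a time-varying traffic share for the new arm. A minimal sketch (the schedule, function names, and parameters here are illustrative assumptions, not from the blog post):

```python
def ramp_weight(day, ramp_days=7, final_share=0.5):
    """Fraction of traffic assigned to the new arm on a given day:
    a linear ramp from 0 up to final_share over ramp_days, then flat."""
    return min(day / ramp_days, 1.0) * final_share

def assign(user_bucket, day, ramp_days=7, final_share=0.5):
    """Deterministic assignment: a user's fixed bucket in [0, 1) is
    compared against the current day's weight for the new arm."""
    arm_share = ramp_weight(day, ramp_days, final_share)
    return "new" if user_bucket < arm_share else "control"
```

Note that because the weight changes over time, naive pooling of outcomes across days confounds the arm effect with day-of-experiment effects, which is exactly the issue the post's reweighting discussion addresses.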


Explaining black-box models using attribute importance, PDPs, and LIME

Domino Data Lab

This dataset classifies customers based on a set of attributes into two credit risk groups – good or bad. This is to be expected, as there is no reason for a perfect 50:50 separation of the good vs. bad credit risk. In IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), pages 24–30, Melbourne, Australia, 2017.
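The excerpt mentions partial dependence plots (PDPs) among the explanation tools. A one-dimensional PDP averages the model's prediction over the dataset while sweeping a single feature across a grid of values; a minimal NumPy sketch (function and parameter names are illustrative, not from the article):

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """One-dimensional partial dependence: for each grid value, set the
    chosen feature column to that value for every row of X and record
    the mean prediction."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v        # override one feature everywhere
        pd_vals.append(model(Xv).mean())
    return np.array(pd_vals)
```

For a linear model the resulting curve is a straight line in the swept feature, which is a handy sanity check before applying the same routine to a black-box classifier's predicted probabilities.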
