<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>scikit-multilearn: Multi-Label Classification in Python — Multi-Label Classification for Python</title>
<link rel="stylesheet" href="_static/" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script type="text/javascript" src="_static/jquery.js"></script>
<script type="text/javascript" src="_static/underscore.js"></script>
<script type="text/javascript" src="_static/doctools.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="1. Exploring Label Relations" href="labelrelations.html" />
<link rel="prev" title="3. Dataset handling" href="datasets.html" />
<meta content="True" name="HandheldFriendly">
<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0">
<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@scikitml">
<meta name="twitter:title" content="scikit-multilearn">
<meta name="twitter:description" content="A native Python implementation of a variety of multi-label classification algorithms. Includes a Meka, MULAN, Weka wrapper. BSD licensed.">
<meta name="keywords" content="scikit-multilearn, multi-label classification, clustering, python, machinelearning">
<meta property="og:title" content="scikit-multilearn | Multi-label classification package for python" />
<meta property="og:description" content="A native Python implementation of a variety of multi-label classification algorithms. Includes a Meka, MULAN, Weka wrapper. BSD licensed." />
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0-rc.2/css/materialize.min.css">
<link rel="stylesheet" href="/_static/custom.css">
<link href="https://fonts.googleapis.com/css?family=IBM+Plex+Mono|IBM+Plex+Sans|IBM+Plex+Sans+Condensed|IBM+Plex+Serif" rel="stylesheet">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.2.0/css/all.css" integrity="sha384-hWVjflwFxL6sNzntih27bfxkr27PmbbK/iSvJ+a4+0owXq79v+lsFkW54bOGbiDQ" crossorigin="anonymous">
<!-- Compiled and minified JavaScript -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0-rc.2/js/materialize.min.js"></script>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-51136636-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-51136636-1');
</script>
</head><body>
<div class="navbar-fixed">
<nav>
<div class="nav-wrapper container">
<a href="index.html" class="brand-logo">scikit-multilearn</a>
<ul id="nav-mobile" class="right hide-on-med-and-down">
<li><a href="userguide.html">User Guide</a></li>
<li><a href="api/skmultilearn.html">Reference</a></li>
<li><a href="https://github.com/scikit-multilearn/scikit-multilearn">Github</a></li>
<li><a href="https://pypi.org/project/scikit-multilearn">PyPi</a></li>
<li id="navbar-about"><a href="authors.html">About</a></li>
</ul>
</div>
</nav>
</div>
<!-- this is a replacement -->
<div class="container">
<div class="row">
<!-- Table of contents -->
<div class="col hide-on-small-only m3 xl2">
<div class="toc-wrapper">
<div style="height: 1px;">
<ul class="section table-of-contents">
<ul>
<li><a class="reference internal" href="#">5. How to select a classifier</a><ul>
<li><a class="reference internal" href="#Intutions">5.1. Intutions</a><ul>
<li><a class="reference internal" href="#Generalization-quality-measures">5.1.1. Generalization quality measures</a></li>
<li><a class="reference internal" href="#Performance">5.1.2. Performance</a></li>
</ul>
</li>
<li><a class="reference internal" href="#Data-driven-model-selection">5.2. Data-driven model selection</a><ul>
<li><a class="reference internal" href="#Estimating-hyper-parameter-k-for-MLkNN">5.2.1. Estimating hyper-parameter k for MLkNN</a></li>
<li><a class="reference internal" href="#Estimating-hyper-parameter-k-for-embedded-classifiers">5.2.2. Estimating hyper-parameter k for embedded classifiers</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</ul>
</div>
</div>
</div>
<div class="main-text section col s12 m8 offset-m1 xl9 offset-xl3">
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput,
div.nbinput div.prompt,
div.nbinput div.input_area,
div.nbinput div[class*=highlight],
div.nbinput div[class*=highlight] pre,
div.nboutput,
div.nbinput div.prompt,
div.nbinput div.output_area,
div.nboutput div[class*=highlight],
div.nboutput div[class*=highlight] pre {
background: none;
border: none;
padding: 0 0;
margin: 0;
box-shadow: none;
}
/* avoid gaps between output lines */
div.nboutput div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput,
div.nboutput {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput,
div.nboutput {
flex-direction: column;
}
}
/* input container */
div.nbinput {
padding-top: 5px;
}
/* last container */
div.nblast {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput div.prompt pre {
color: #303F9F;
}
/* output prompt */
div.nboutput div.prompt pre {
color: #D84315;
}
/* all prompts */
div.nbinput div.prompt,
div.nboutput div.prompt {
min-width: 9ex;
padding-top: 0.4em;
padding-right: 0.4em;
text-align: right;
flex: 0;
}
@media (max-width: 540px) {
div.nbinput div.prompt,
div.nboutput div.prompt {
text-align: left;
padding: 0.4em;
}
div.nboutput div.prompt.empty {
padding: 0;
}
}
/* disable scrollbars on prompts */
div.nbinput div.prompt pre,
div.nboutput div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput div.input_area,
div.nboutput div.output_area {
padding: 0.4em;
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput div.input_area,
div.nboutput div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput div.input_area {
border: 1px solid #cfcfcf;
border-radius: 2px;
background: #f7f7f7;
}
/* override MathJax center alignment in output cells */
div.nboutput div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.pngmath center alignment in output cells */
div.nboutput div.math p {
text-align: left;
}
/* standard error */
div.nboutput div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
</style>
<div class="section" id="How-to-select-a-classifier">
<h1>5. How to select a classifier<a class="headerlink" href="#How-to-select-a-classifier" title="Permalink to this headline">¶</a></h1>
<p>This document will guide you through the process of selecting a
classifier for your problem.</p>
<p>Note that there is no established, scientifically proven rule-set for
selecting a classifier to solve a general multi-label classification
problem. Successful approaches often come from mixing intuitions about
which classifiers are worth considering, decomposition into
subproblems, and experimental model selection.</p>
<p>There are two things you need to consider before choosing a classifier:</p>
<ul class="simple">
<li>performance, i.e. generalization quality: how well the model
captures the relationship between features and labels; note that
different use cases may call for measuring quality with different
measures, and we’ll talk about the measures in a moment</li>
<li>efficiency, i.e. how fast the classifier works: does it scale, and
is it usable for your problem given the number of labels, samples or
label combinations</li>
</ul>
<p>There are two ways to make the choice:</p>
<ul class="simple">
<li>intuition based on asymptotic performance and results from empirical
studies</li>
<li>data-driven model selection using cross-validated parameter search</li>
</ul>
<p>Let’s load up a data set so we have something to work on first.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [2]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="kn">from</span> <span class="nn">skmultilearn.dataset</span> <span class="kn">import</span> <span class="n">load_dataset</span>
</pre></div>
</div>
</div>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [4]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="n">X_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">,</span> <span class="n">feature_names</span><span class="p">,</span> <span class="n">label_names</span> <span class="o">=</span> <span class="n">load_dataset</span><span class="p">(</span><span class="s1">'emotions'</span><span class="p">,</span> <span class="s1">'train'</span><span class="p">)</span>
<span class="n">X_test</span><span class="p">,</span> <span class="n">y_test</span><span class="p">,</span> <span class="n">_</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span><span class="n">load_dataset</span><span class="p">(</span><span class="s1">'emotions'</span><span class="p">,</span> <span class="s1">'test'</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
emotions:train - does not exists downloading
Downloaded emotions-train
emotions:test - does not exists downloading
Downloaded emotions-test
</pre></div></div>
</div>
<p>Usually a classifier’s performance depends on four elements:</p>
<ul class="simple">
<li>number of samples</li>
<li>number of labels</li>
<li>number of unique label combinations</li>
<li>number of features</li>
</ul>
<p>We can obtain the first two from the shape of our output space matrices:</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [8]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="n">y_train</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="n">y_test</span><span class="o">.</span><span class="n">shape</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>Out[8]:
</pre></div>
</div>
<div class="output_area highlight-none notranslate"><div class="highlight"><pre>
<span></span>((391, 6), (202, 6))
</pre></div>
</div>
</div>
<p>We can use numpy and the list of rows with non-zero values in the output
matrices to get the number of unique label combinations (the output
matrices are scipy.sparse LIL matrices, whose <code class="docutils literal notranslate"><span class="pre">rows</span></code>
attribute lists the non-zero column indices of each row).</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [9]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
<span class="n">np</span><span class="o">.</span><span class="n">unique</span><span class="p">(</span><span class="n">y_train</span><span class="o">.</span><span class="n">rows</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">unique</span><span class="p">(</span><span class="n">y_test</span><span class="o">.</span><span class="n">rows</span><span class="p">)</span><span class="o">.</span><span class="n">shape</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>Out[9]:
</pre></div>
</div>
<div class="output_area highlight-none notranslate"><div class="highlight"><pre>
<span></span>((26,), (21,))
</pre></div>
</div>
</div>
<p>The number of features can be found in the shape of the input matrix:</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [10]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="n">X_train</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>Out[10]:
</pre></div>
</div>
<div class="output_area highlight-none notranslate"><div class="highlight"><pre>
<span></span>72
</pre></div>
</div>
</div>
<div class="section" id="Intutions">
<h2>5.1. Intutions<a class="headerlink" href="#Intutions" title="Permalink to this headline">¶</a></h2>
<div class="section" id="Generalization-quality-measures">
<h3>5.1.1. Generalization quality measures<a class="headerlink" href="#Generalization-quality-measures" title="Permalink to this headline">¶</a></h3>
<p>There are several ways to measure a classifier’s generalization quality:</p>
<ul class="simple">
<li><a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.hamming_loss.html#sklearn.metrics.hamming_loss">Hamming
loss</a>
measures how well the classifier predicts each of the labels,
averaged over samples, then over labels</li>
<li><a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score">accuracy
score</a>
measures how well the classifier predicts label combinations,
averaged over samples</li>
<li><a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_similarity_score.html#sklearn.metrics.jaccard_similarity_score">jaccard
similarity</a>
measures the proportion of predicted labels for a sample to its
correct assignment, averaged over samples</li>
<li><a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score">precision</a>
measures how many samples with ,</li>
<li><a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score">recall</a>
measures how many samples ,</li>
<li><a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score">F1
score</a>
measures a weighted average of precision and recall, where both have
the same impact on the score</li>
</ul>
<p>These measures are conveniently provided by sklearn:</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="kn">from</span> <span class="nn">skmultilearn.adapt</span> <span class="kn">import</span> <span class="n">MLkNN</span>
<span class="n">classifier</span> <span class="o">=</span> <span class="n">MLkNN</span><span class="p">(</span><span class="n">k</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">prediction</span> <span class="o">=</span> <span class="n">classifier</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">X_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">)</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">X_test</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [7]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="kn">import</span> <span class="nn">sklearn.metrics</span> <span class="kn">as</span> <span class="nn">metrics</span>
<span class="n">metrics</span><span class="o">.</span><span class="n">hamming_loss</span><span class="p">(</span><span class="n">y_test</span><span class="p">,</span> <span class="n">prediction</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>Out[7]:
</pre></div>
</div>
<div class="output_area highlight-none notranslate"><div class="highlight"><pre>
<span></span>0.2953795379537954
</pre></div>
</div>
</div>
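<p>The remaining measures follow the same pattern. Here is a minimal
sketch, assuming the <code class="docutils literal notranslate"><span class="pre">prediction</span></code> computed in the MLkNN cell above
(note that newer scikit-learn versions rename
<code class="docutils literal notranslate"><span class="pre">jaccard_similarity_score</span></code> to <code class="docutils literal notranslate"><span class="pre">jaccard_score</span></code>):</p>
<div class="highlight"><pre>
import sklearn.metrics as metrics

# subset accuracy: fraction of samples whose label set is predicted exactly
print(metrics.accuracy_score(y_test, prediction))

# macro-averaged F1: unweighted mean of per-label F1 scores
print(metrics.f1_score(y_test, prediction, average='macro'))
</pre></div>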
</div>
<div class="section" id="Performance">
<h3>5.1.2. Performance<a class="headerlink" href="#Performance" title="Permalink to this headline">¶</a></h3>
<p>Scikit-multilearn provides 11 classifiers that support a wide variety of
classification scenarios through label partitioning and ensemble
classification. Let’s look at the important factors influencing
performance. $ g(x) $ denotes the performance of the base classifier in
some of the classifiers.</p>
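<p>To make these trade-offs concrete before going through the list, here is
a minimal sketch, assuming the emotions data loaded above, that compares
two problem transformation approaches on the same train/test split:</p>
<div class="highlight"><pre>
from skmultilearn.problem_transform import BinaryRelevance, LabelPowerset
from sklearn.naive_bayes import GaussianNB
import sklearn.metrics as metrics

# BinaryRelevance: one binary classifier per label, label relations ignored
# LabelPowerset: one multi-class classifier over observed label combinations
for method in (BinaryRelevance, LabelPowerset):
    clf = method(classifier=GaussianNB(), require_dense=[True, True])
    prediction = clf.fit(X_train, y_train).predict(X_test)
    print(method.__name__, metrics.hamming_loss(y_test, prediction))
</pre></div>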
<dl><dt><p><a class="reference external" href="api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNaClassifier">BRkNNaClassifier</a>,
<a class="reference external" href="api/skmultilearn.adapt.brknn.html#skmultilearn.adapt.brknn.BRkNNbClassifier">BRkNNbClassifier</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 1 parameter</p>
<p><strong>Complexity</strong>: <code class="docutils literal notranslate"><span class="pre">O(n_{labels}</span> <span class="pre">*</span> <span class="pre">n_{samples}</span> <span class="pre">*</span> <span class="pre">n_{features}</span> <span class="pre">*</span> <span class="pre">k)</span></code></p>
<p>BRkNN classifiers train a k-Nearest Neighbors model per label and infer
label assignment in one of two variants.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>takes some label relations into account while estimating
single-label classifiers</li>
<li>works when distance between samples is a good predictor for label
assignment; often used in biosciences</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>trains a classifier per label</li>
<li>less suitable for large label spaces</li>
<li>requires parameter estimation</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.adapt.mltsvn.html">MLTSVN</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 2 parameters</p>
<p><strong>Complexity</strong>: <code class="docutils literal notranslate"><span class="pre">O((n_{samples}</span> <span class="pre">*</span> <span class="pre">n_{features}</span> <span class="pre">+</span> <span class="pre">n_{labels})</span> <span class="pre">*</span> <span class="pre">k)</span></code></p>
<p>MLTSVN fits a multi-label twin support vector machine, learning
non-parallel hyperplanes for the labels directly instead of decomposing
the problem into per-label comparisons.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>estimates one multi-label SVM subclassifier without any one-vs-all
or one-vs-rest comparisons, O(1) classifiers instead of O(l^2)</li>
<li>works when distance between samples is a good predictor for label
assignment</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires parameter estimation</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.adapt.mlknn.html#multilabel-k-nearest-neighbours">MLkNN</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 2 parameters</p>
<p><strong>Complexity</strong>: <code class="docutils literal notranslate"><span class="pre">O((n_{samples}</span> <span class="pre">*</span> <span class="pre">n_{features}</span> <span class="pre">+</span> <span class="pre">n_{labels})</span> <span class="pre">*</span> <span class="pre">k)</span></code></p>
<p>MLkNN uses k-Nearest Neighbors to find the training examples closest to
a test sample and applies Bayesian inference to select the assigned
labels.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>estimates one multi-class subclassifier</li>
<li>works when distance between samples is a good predictor for label
assignment</li>
<li>often used in biosciences</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires parameter estimation</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.adapt.mlaram.html">MLARAM</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 2 parameters</p>
<p><strong>Complexity</strong>: <code class="docutils literal notranslate"><span class="pre">O(n_{samples})</span></code></p>
<p>An ART (Adaptive Resonance Theory) classifier that clusters learned
prototypes into large clusters to improve performance.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>linear in the number of samples, scales well</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires parameter estimation</li>
<li>ART techniques have had generalization limits in the past</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.problem_transform.br.html#skmultilearn.problem_transform.BinaryRelevance">BinaryRelevance</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Only for base classifier</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(n_{labels}</span> <span class="pre">*</span> <span class="pre">base_single_class_classifier_complexity)</span></code></p>
<p>Transforms a multi-label classification problem with L labels into L
separate single-label binary classification problems.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>estimates single-label classifiers</li>
<li>can generalize beyond available label combinations</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>not suitable for a large number of labels</li>
<li>ignores label relations</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.problem_transform.cc.html#skmultilearn.problem_transform.ClassifierChain">ClassifierChain</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 1 + parameters for base classifier</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(n_{labels}</span> <span class="pre">*</span> <span class="pre">base_single_class_classifier_complexity)</span></code></p>
<p>Builds a chain of binary classifiers, one per label, where each
classifier in the chain receives the previous classifiers’ predictions
as additional input features.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>estimates single-label classifiers</li>
<li>can generalize beyond available label combinations</li>
<li>takes label relations into account</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>not suitable for a large number of labels</li>
<li>quality strongly depends on the label ordering in the chain</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.problem_transform.lp.html#skmultilearn.problem_transform.LabelPowerset">LabelPowerset</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Only for base classifier</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(base_multi_class_classifier_complexity(n_classes</span> <span class="pre">=</span> <span class="pre">n_label_combinations))</span></code></p>
<p>Transforms a multi-label problem into a multi-class problem where each
label combination is a separate class and uses a multi-class classifier
to solve the problem.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>estimates label dependencies with only one classifier</li>
<li>often the best solution for subset accuracy if the training data
contains all relevant label combinations</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires all label combinations predictable by the classifier to be
present in the training data</li>
<li>very prone to underfitting with large label spaces</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelD">RakelD</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 1 + base classifier’s parameters</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(n_{partitions}</span> <span class="pre">*</span> <span class="pre">base_multi_class_classifier_complexity(n_classes</span> <span class="pre">=</span> <span class="pre">n_label_combinations_per_partition))</span></code></p>
<p>Randomly partitions the label space and trains a Label Powerset
classifier per partition with a base multi-class classifier.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>may use fewer classifiers than Binary Relevance and still generalize
label relations while not underfitting like LabelPowerset</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>a random partition is unlikely to yield an optimal label space
division</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.ensemble.rakeld.html#skmultilearn.ensemble.RakelO">RakelO</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Yes, 2 + base classifier’s parameters</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(n_{partitions}</span> <span class="pre">*</span> <span class="pre">base_multi_class_classifier_complexity(n_classes</span> <span class="pre">=</span> <span class="pre">n_label_combinations_per_cluster))</span></code></p>
<p>Randomly draws label subspaces (possibly overlapping) and trains a
Label Powerset classifier per subspace with a base multi-class
classifier; labels are assigned based on voting.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>may provide better results with overlapping models</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires a large number of classifiers to produce an improvement,
not scalable</li>
<li>random subspaces may not be optimal</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.ensemble.partition.html#skmultilearn.ensemble.LabelSpacePartitioningClassifier">LabelSpacePartitioningClassifier</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Only for base classifier</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(n_{partitions}</span> <span class="pre">*</span> <span class="pre">base_classifier_complexity(n_classes</span> <span class="pre">=</span> <span class="pre">n_label_combinations_per_partition))</span></code></p>
<p>Uses clustering methods to divide the label space into subspaces and
trains a base multi-class classifier per partition.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>accommodates different types of problems</li>
<li>infers whether to divide into subproblems and when to use fewer
classifiers than Binary Relevance</li>
<li>scalable to data sets with large numbers of labels</li>
<li>generalizes label relations well while not underfitting like
LabelPowerset</li>
<li>does not require parameter estimation</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires the label relationships present in the training data to be
representative of the problem</li>
<li>partitioning may prevent certain label combinations from being
correctly classified, depends on base classifier</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.ensemble.voting.html#skmultilearn.ensemble.MajorityVotingClassifier">MajorityVotingClassifier</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Only for base classifier</p>
<p><strong>Complexity</strong>:
<code class="docutils literal notranslate"><span class="pre">O(n_{clusters}</span> <span class="pre">*</span> <span class="pre">base_classifier_complexity(n_classes</span> <span class="pre">=</span> <span class="pre">n_label_combinations_per_cluster))</span></code></p>
<p>Uses clustering methods to divide the label space into subspaces
(possibly overlapping) and trains a base multi-class classifier per
cluster; labels are assigned based on voting.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>accommodates different types of problems</li>
<li>infers whether to divide into subproblems and when to use fewer
classifiers than Binary Relevance</li>
<li>scalable to data sets with large numbers of labels</li>
<li>generalizes label relations well while not underfitting like
LabelPowerset</li>
<li>does not require parameter estimation</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires the label relationships present in the training data to be
representative of the problem</li>
</ul>
</dd><dt><p><a class="reference external" href="api/skmultilearn.embedding.partition.html#skmultilearn.ensemble.LabelSpacePartitioningClassifier">EmbeddingClassifier</a></p>
</dt><dd><p><strong>Parameter estimation needed</strong>: Only for embedder</p>
<p><strong>Complexity</strong>: depends on the selection of embedder, regressor
and classifier</p>
<p>Embeds the label space, trains a regressor (or several) to predict the
embeddings of unseen samples, and a classifier to correct the regression
error.</p>
<p><strong>Strong sides</strong>:</p>
<ul class="simple">
<li>improves discriminability and joint label probability distributions</li>
<li>good results with low-complexity linear embeddings and weak
regressors/classifiers</li>
</ul>
<p><strong>Weak sides</strong>:</p>
<ul class="simple">
<li>requires some parameter estimation, although rule-of-thumb defaults
exist in the literature</li>
</ul>
</dd></dl></div>
</div>
<div class="section" id="Data-driven-model-selection">
<h2>5.2. Data-driven model selection<a class="headerlink" href="#Data-driven-model-selection" title="Permalink to this headline">¶</a></h2>
<p>Scikit-multilearn allows estimating parameters to select the best model
for multi-label classification using scikit-learn’s model selection
<a class="reference external" href="http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html">GridSearchCV
API</a>.
In the simplest case it can search for the best parameters of a
scikit-multilearn classifier, which we’ll show using the example of
estimating parameters for MLkNN; in the more complicated cases of
problem transformation methods it can estimate both the method’s hyper
parameters and the base classifier’s parameters.</p>
<div class="section" id="Estimating-hyper-parameter-k-for-MLkNN">
<h3>5.2.1. Estimating hyper-parameter k for MLkNN<a class="headerlink" href="#Estimating-hyper-parameter-k-for-MLkNN" title="Permalink to this headline">¶</a></h3>
<p>In the case of estimating the hyperparameters of a multi-label
classifier, we first import the relevant classifier and scikit-learn’s
GridSearchCV class. Then we define the values of the parameters we want
to evaluate. We are interested in which combination of <code class="docutils literal notranslate"><span class="pre">k</span></code> - the number
of neighbours - and <code class="docutils literal notranslate"><span class="pre">s</span></code> - the smoothing parameter - works best. We also
need to select a measure we want to optimize - we’ve chosen the F1
macro score.</p>
<p>After selecting the parameters we initialize and run the
cross-validation grid search and print the best hyperparameters.</p>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [5]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="kn">from</span> <span class="nn">skmultilearn.adapt</span> <span class="kn">import</span> <span class="n">MLkNN</span>
<span class="kn">from</span> <span class="nn">sklearn.model_selection</span> <span class="kn">import</span> <span class="n">GridSearchCV</span>
<span class="n">parameters</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'k'</span><span class="p">:</span> <span class="nb">range</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">),</span> <span class="s1">'s'</span><span class="p">:</span> <span class="p">[</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.7</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">]}</span>
<span class="n">clf</span> <span class="o">=</span> <span class="n">GridSearchCV</span><span class="p">(</span><span class="n">MLkNN</span><span class="p">(),</span> <span class="n">parameters</span><span class="p">,</span> <span class="n">scoring</span><span class="o">=</span><span class="s1">'f1_macro'</span><span class="p">)</span>
<span class="n">clf</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">X_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">)</span>
<span class="k">print</span> <span class="p">(</span><span class="n">clf</span><span class="o">.</span><span class="n">best_params_</span><span class="p">,</span> <span class="n">clf</span><span class="o">.</span><span class="n">best_score_</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
({'k': 1, 's': 0.5}, 0.45223607257008969)
</pre></div></div>
</div>
<p>These values can then be used directly with the classifier.</p>
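<p>For instance, a minimal sketch of reusing the found parameters,
assuming the <code class="docutils literal notranslate"><span class="pre">clf</span></code> from the grid search above:</p>
<div class="highlight"><pre>
# build a fresh classifier from the best parameter combination found
classifier = MLkNN(**clf.best_params_)
prediction = classifier.fit(X_train, y_train).predict(X_test)
</pre></div>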
</div>
<div class="section" id="Estimating-hyper-parameter-k-for-embedded-classifiers">
<h3>5.2.2. Estimating hyper-parameter k for embedded classifiers<a class="headerlink" href="#Estimating-hyper-parameter-k-for-embedded-classifiers" title="Permalink to this headline">¶</a></h3>
<p>In problem transformation classifiers we often need to estimate not only
a hyperparameter, but also the parameters of the base classifier, and
perhaps even the problem transformation method itself. Let’s take a look
at this on a three-layer construction: an ensemble of problem
transformation classifiers using label space partitioning. The
parameters include:</p>
<ul class="simple">
<li><code class="docutils literal notranslate"><span class="pre">classifier</span></code>: which takes a parameter - a classifier for
transforming multi-label classification problem to a single-label
classification, we will decide between the Label Powerset and
Classifier Chains</li>
<li><code class="docutils literal notranslate"><span class="pre">classifier__classifier</span></code>: which is the base classifier for the
transformation strategy, we will use random forests here</li>
<li><code class="docutils literal notranslate"><span class="pre">classifier__classifier__n_estimators</span></code>: the number of trees to be
used in the forest, will be passed to the random forest object</li>
<li><code class="docutils literal notranslate"><span class="pre">clusterer</span></code>: a label space partitioning class, we will decide
between two approaches provided by the NetworkX library.</li>
</ul>
<div class="nbinput docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [10]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span><span class="kn">from</span> <span class="nn">skmultilearn.problem_transform</span> <span class="kn">import</span> <span class="n">ClassifierChain</span><span class="p">,</span> <span class="n">LabelPowerset</span>
<span class="kn">from</span> <span class="nn">sklearn.model_selection</span> <span class="kn">import</span> <span class="n">GridSearchCV</span>
<span class="kn">from</span> <span class="nn">sklearn.naive_bayes</span> <span class="kn">import</span> <span class="n">GaussianNB</span>
<span class="kn">from</span> <span class="nn">sklearn.ensemble</span> <span class="kn">import</span> <span class="n">RandomForestClassifier</span>
<span class="kn">from</span> <span class="nn">skmultilearn.cluster</span> <span class="kn">import</span> <span class="n">NetworkXLabelGraphClusterer</span>
<span class="kn">from</span> <span class="nn">skmultilearn.cluster</span> <span class="kn">import</span> <span class="n">LabelCooccurrenceGraphBuilder</span>
<span class="kn">from</span> <span class="nn">skmultilearn.ensemble</span> <span class="kn">import</span> <span class="n">LabelSpacePartitioningClassifier</span>
<span class="kn">from</span> <span class="nn">sklearn.svm</span> <span class="kn">import</span> <span class="n">SVC</span>
<span class="n">parameters</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">'classifier'</span><span class="p">:</span> <span class="p">[</span><span class="n">LabelPowerset</span><span class="p">(),</span> <span class="n">ClassifierChain</span><span class="p">()],</span>
<span class="s1">'classifier__classifier'</span><span class="p">:</span> <span class="p">[</span><span class="n">RandomForestClassifier</span><span class="p">()],</span>
<span class="s1">'classifier__classifier__n_estimators'</span><span class="p">:</span> <span class="p">[</span><span class="mi">10</span><span class="p">,</span> <span class="mi">20</span><span class="p">,</span> <span class="mi">50</span><span class="p">],</span>
<span class="s1">'clusterer'</span> <span class="p">:</span> <span class="p">[</span>
<span class="n">NetworkXLabelGraphClusterer</span><span class="p">(</span><span class="n">LabelCooccurrenceGraphBuilder</span><span class="p">(</span><span class="n">weighted</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">include_self_edges</span><span class="o">=</span><span class="bp">False</span><span class="p">),</span> <span class="s1">'louvain'</span><span class="p">),</span>
<span class="n">NetworkXLabelGraphClusterer</span><span class="p">(</span><span class="n">LabelCooccurrenceGraphBuilder</span><span class="p">(</span><span class="n">weighted</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">include_self_edges</span><span class="o">=</span><span class="bp">False</span><span class="p">),</span> <span class="s1">'lpa'</span><span class="p">)</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="n">clf</span> <span class="o">=</span> <span class="n">GridSearchCV</span><span class="p">(</span><span class="n">LabelSpacePartitioningClassifier</span><span class="p">(),</span> <span class="n">parameters</span><span class="p">,</span> <span class="n">scoring</span> <span class="o">=</span> <span class="s1">'f1_macro'</span><span class="p">)</span>
<span class="n">clf</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">X_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">)</span>
<span class="k">print</span> <span class="p">(</span><span class="n">clf</span><span class="o">.</span><span class="n">best_params_</span><span class="p">,</span> <span class="n">clf</span><span class="o">.</span><span class="n">best_score_</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nboutput nblast docutils container">
<div class="prompt empty docutils container">
</div>
<div class="output_area docutils container">
<div class="highlight"><pre>
({'classifier__classifier__n_estimators': 50, 'classifier__classifier': RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False), 'classifier': LabelPowerset(classifier=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=1,
oob_score=False, random_state=None, verbose=0,
warm_start=False),
require_dense=[True, True]), 'clusterer': <skmultilearn.cluster.networkx.NetworkXLabelGraphClusterer object at 0x7fc42ec75e50>}, 0.59569187181557981)
</pre></div></div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre>
<span></span>In [ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython2 notranslate"><div class="highlight"><pre>
<span></span>
</pre></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px">
<a href="genindex.html" title="General Index"
accesskey="I">index</a></li>
<li class="right" >
<a href="py-modindex.html" title="Python Module Index"
>modules</a> |</li>
<li class="right" >
<a href="labelrelations.html" title="1. Exploring Label Relations"
accesskey="N">next</a> |</li>
<li class="right" >
<a href="datasets.html" title="3. Dataset handling"
accesskey="P">previous</a> |</li>
<li class="nav-item nav-item-0"><a href="index.html">scikit-multilearn</a> »</li>
<li class="nav-item nav-item-1"><a href="userguide.html" accesskey="U">User Guide</a> »</li>
</ul>
</div>
<footer class="page-footer blue-grey darken-4">
<div class="container">
<div class="row ">
<div class="col l6 s12">
<h5 class="white-text">Cite US!</h5>
<p>If you use scikit-multilearn in your research and publish it, please consider citing us, it will help us get funding for making the library better. The paper is available on <a href="https://arxiv.org/abs/1702.01460">arXiv</a>, to cite it try the Bibtex code on the right.</p>
</div>
<div class="col l4 s12">
<pre><code>
@ARTICLE{2017arXiv170201460S,
author = {{Szyma{\'n}ski}, P. and {Kajdanowicz}, T.},
title = "{A scikit-based Python environment for performing multi-label classification}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1702.01460},
primaryClass = "cs.LG",
keywords = {Computer Science - Learning, Computer Science - Mathematical Software},
year = 2017,
month = feb,
}
</code></pre>
</div>
</div>
</div>
<div class="footer-copyright blue-grey darken-4">
<div class="container">
Created using <a href="http://sphinx.pocoo.org/">Sphinx</a> 1.8.2.
<span style="padding-left: 5ex;">
<a href="_sources/modelselection.ipynb.txt"
rel="nofollow">Show this page source</a>
</span>
</div>
</div>
</footer>
<!-- Place this tag in your head or just before your close body tag. -->
<script async defer src="https://buttons.github.io/buttons.js"></script>
</body>
</html>