<html>
<head>
<title>Machine Learning Laboratory</title>
</head>
<body>
<h1>Machine Learning Laboratory</h1>
<h2>Decision Tree Induction</h2>
<p>
A decision tree is a representation of a decision procedure for
determining a class label to associate with a given example. At each
internal node of the tree, there is a test (question), and a branch
corresponding to each of the possible outcomes of the test. At each
leaf node, there is a class label (answer). Traversing a path from
the root to a leaf is much like playing a game of twenty questions.
</p>
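<p>
As an illustration of how such a traversal works, the short Python sketch
below classifies a single example by walking from the root to a leaf. The
node layout and field names are chosen purely for illustration and are not
taken from this page.
</p>
<pre>
# A minimal sketch of classifying one example with a decision tree.
# The Node/Leaf layout and field names are illustrative assumptions.

class Leaf:
    def __init__(self, label):
        self.label = label          # class label (the "answer")

class Node:
    def __init__(self, test, branches):
        self.test = test            # function mapping an example to a test outcome
        self.branches = branches    # dict: outcome -> child Node or Leaf

def classify(tree, example):
    """Walk from the root to a leaf, following the branch chosen
    by each test, and return the class label stored at the leaf."""
    while isinstance(tree, Node):
        outcome = tree.test(example)
        tree = tree.branches[outcome]
    return tree.label
</pre>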
<p>
Decision trees have a great many uses, particularly for solving
problems that can be cast in terms of producing a single answer in the
form of a class name. For example, one can build a decision tree that
could be used to answer a question such as "Does this patient have
hepatitis?" The answer may be as simple as "yes" or "no". Based on
answers to the questions at the decision nodes, one can find the
appropriate leaf and the answer it contains.
</p>
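<p>
Continuing the sketch above, a toy tree for the hepatitis question might
look like the following. Both the attributes and the tree itself are
invented here for illustration only.
</p>
<pre>
# Illustrative use of the sketch above: a toy two-question tree for a
# made-up hepatitis screen (the attribute names are invented).
toy_tree = Node(
    test=lambda ex: ex["jaundice"],
    branches={
        "yes": Node(
            test=lambda ex: ex["elevated_enzymes"],
            branches={"yes": Leaf("hepatitis"), "no": Leaf("no hepatitis")},
        ),
        "no": Leaf("no hepatitis"),
    },
)

print(classify(toy_tree, {"jaundice": "yes", "elevated_enzymes": "no"}))
# prints: no hepatitis
</pre>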
<p>
Decision trees are constructed from examples that are already
labeled. For example, if one has established, for a variety of
patients with varying attributes, which of them do and do not have
hepatitis, then those examples can guide the tree construction
process. There is much ongoing research on decision tree induction.
The subject is studied within several disciplines. A good place to
start is the Machine Learning journal or the proceedings of machine
learning conferences.
</p>
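<p>
To make the idea of construction from labeled examples concrete, the rough
sketch below follows one common greedy, top-down strategy: at each step it
splits on the attribute whose split leaves the child subsets with the lowest
weighted entropy (as in ID3-style algorithms). It reuses the Node and Leaf
classes from the earlier sketch and is meant only to show how labeled
examples guide the process, not to reproduce any particular published
method.
</p>
<pre>
import math
from collections import Counter

# A rough sketch of greedy, top-down decision tree induction.
# Assumes the Node and Leaf classes defined in the earlier sketch.

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def build_tree(examples, labels, attributes):
    # Stop when the examples all share one label or no attributes remain;
    # in either case answer with the majority label.
    if len(set(labels)) == 1 or not attributes:
        return Leaf(Counter(labels).most_common(1)[0][0])

    # Weighted entropy of the subsets produced by splitting on attr.
    def split_entropy(attr):
        values = set(ex[attr] for ex in examples)
        total = 0.0
        for v in values:
            subset = [lab for ex, lab in zip(examples, labels) if ex[attr] == v]
            total += len(subset) / len(labels) * entropy(subset)
        return total

    # Greedily pick the attribute that leaves the purest subsets.
    best = min(attributes, key=split_entropy)
    branches = {}
    for v in set(ex[best] for ex in examples):
        sub_ex = [ex for ex in examples if ex[best] == v]
        sub_lab = [lab for ex, lab in zip(examples, labels) if ex[best] == v]
        remaining = [a for a in attributes if a != best]
        branches[v] = build_tree(sub_ex, sub_lab, remaining)
    return Node(test=lambda ex, a=best: ex[a], branches=branches)
</pre>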
<hr><address>
Last Updated: July 25, 1997 <br>
© Copyright 1997, All Rights Reserved, Paul Utgoff, University of Massachusetts
</address>
</body>
</html>