Add original version of JMLR page.
lawrennd committed Feb 21, 2017
1 parent 4bdfb22 commit 3ee63fc
Showing 720 changed files with 40,286 additions and 0 deletions.
Binary file added abernethy13-supp.pdf
Binary file not shown.
120 changes: 120 additions & 0 deletions abernethy13.html
@@ -0,0 +1,120 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Large-Scale Bandit Problems and KWIK Learning | ICML 2013 | JMLR W&amp;CP</title>

<!-- Stylesheet -->
<link rel="stylesheet" type="text/css" href="../css/jmlr.css" />

<!-- MathJax -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({tex2jax: {inlineMath: [['\\(','\\)']]}});
</script>
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>


<!-- Metadata -->
<!-- Google Scholar Meta Data -->

<meta name="citation_title" content="Large-Scale Bandit Problems and {KWIK} Learning">

<meta name="citation_author" content="Abernethy, Jacob">

<meta name="citation_author" content="Amin, Kareem">

<meta name="citation_author" content="Kearns, Michael">

<meta name="citation_author" content="Draief, Moez">

<meta name="citation_publication_date" content="2013">
<meta name="citation_conference_title" content="Proceedings of The 30th International Conference on Machine Learning">
<meta name="citation_firstpage" content="588">
<meta name="citation_lastpage" content="596">
<meta name="citation_pdf_url" content="http://jmlr.org/proceedings/papers/v28/abernethy13.pdf">

</head>
<body>

<div id="fixed"> <a align="right" href="http://www.jmlr.org/" target="_top"><img class="jmlr" src="http://jmlr.org/proceedings/papers/img/jmlr.jpg" align="right" border="0"></a>
<p><br><br>
</p><p align="right"> <a href="http://www.jmlr.org/"> Home Page </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/papers"> Papers
</a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/author-info.html"> Submissions </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/news.html">
News </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/scope.html">
Scope </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/editorial-board.html"> Editorial Board </a>


</p><p align="right"> <a href="http://jmlr.csail.mit.edu/announcements.html"> Announcements </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/proceedings">
Proceedings </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/mloss">Open
Source Software</a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/search-jmlr.html"> Search </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/manudb"> Login </a></p>

<br><br>
<p align="right"> <a href="http://jmlr.csail.mit.edu/jmlr.xml">
<img src="http://jmlr.org/proceedings/papers/img/RSS.gif" class="rss" alt="RSS Feed">
</a>

</p>
</div>

<div id="content">

<h1>Large-Scale Bandit Problems and <span>KWIK</span> Learning</h1>

<div id="authors">

Jacob Abernethy,

Kareem Amin,

Michael Kearns,

Moez Draief
</div>
<div id="info">
JMLR W&amp;CP 28
(1)
:
588–596, 2013
</div> <!-- info -->



<h2>Abstract</h2>
<div id="abstract">
We show that parametric multi-armed bandit (MAB) problems with large state and action spaces can be algorithmically reduced to the supervised learning model known as Knows What It Knows or KWIK learning. We give matching impossibility results showing that the KWIK learnability requirement cannot be replaced by weaker supervised learning assumptions. We provide such results both in the standard parametric MAB setting and in a new model in which the action space is finite but grows with time.
</div>

<h2>Related Material</h2>
<div id="extras">
<ul>
<li><a href="abernethy13.pdf">Download PDF</a></li>

<li><a href="abernethy13-supp.pdf">Supplementary (PDF)</a></li>

</ul>
</div> <!-- extras -->

</div> <!-- content -->

</body>
</html>
Binary file added abernethy13.pdf
Binary file not shown.
118 changes: 118 additions & 0 deletions afkanpour13.html
@@ -0,0 +1,118 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>A Randomized Mirror Descent Algorithm for Large Scale Multiple Kernel Learning | ICML 2013 | JMLR W&amp;CP</title>

<!-- Stylesheet -->
<link rel="stylesheet" type="text/css" href="../css/jmlr.css" />

<!-- MathJax -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({tex2jax: {inlineMath: [['\\(','\\)']]}});
</script>
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>


<!-- Metadata -->
<!-- Google Scholar Meta Data -->

<meta name="citation_title" content="A Randomized Mirror Descent Algorithm for Large Scale Multiple Kernel Learning">

<meta name="citation_author" content="Afkanpour, Arash">

<meta name="citation_author" content="György, András">

<meta name="citation_author" content="Szepesvari, Csaba">

<meta name="citation_author" content="Bowling, Michael">

<meta name="citation_publication_date" content="2013">
<meta name="citation_conference_title" content="Proceedings of The 30th International Conference on Machine Learning">
<meta name="citation_firstpage" content="374">
<meta name="citation_lastpage" content="382">
<meta name="citation_pdf_url" content="http://jmlr.org/proceedings/papers/v28/afkanpour13.pdf">

</head>
<body>

<div id="fixed"> <a align="right" href="http://www.jmlr.org/" target="_top"><img class="jmlr" src="http://jmlr.org/proceedings/papers/img/jmlr.jpg" align="right" border="0"></a>
<p><br><br>
</p><p align="right"> <a href="http://www.jmlr.org/"> Home Page </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/papers"> Papers
</a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/author-info.html"> Submissions </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/news.html">
News </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/scope.html">
Scope </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/editorial-board.html"> Editorial Board </a>


</p><p align="right"> <a href="http://jmlr.csail.mit.edu/announcements.html"> Announcements </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/proceedings">
Proceedings </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/mloss">Open
Source Software</a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/search-jmlr.html"> Search </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/manudb"> Login </a></p>

<br><br>
<p align="right"> <a href="http://jmlr.csail.mit.edu/jmlr.xml">
<img src="http://jmlr.org/proceedings/papers/img/RSS.gif" class="rss" alt="RSS Feed">
</a>

</p>
</div>

<div id="content">

<h1>A Randomized Mirror Descent Algorithm for Large Scale Multiple Kernel Learning</h1>

<div id="authors">

Arash Afkanpour,

András György,

Csaba Szepesvari,

Michael Bowling
</div>
<div id="info">
JMLR W&amp;CP 28
(1)
:
374–382, 2013
</div> <!-- info -->



<h2>Abstract</h2>
<div id="abstract">
We consider the problem of simultaneously learning to linearly combine a very large number of kernels and to learn a good predictor based on the learnt kernel. When the number of kernels d to be combined is very large, multiple kernel learning methods whose computational cost scales linearly in d are intractable. We propose a randomized version of the mirror descent algorithm to overcome this issue, under the objective of minimizing the group p-norm penalized empirical risk. The key to achieving the required exponential speed-up is the computationally efficient construction of low-variance estimates of the gradient. We propose importance-sampling-based estimates, and find that the ideal distribution samples a coordinate with probability proportional to the magnitude of the corresponding gradient. We show that in the case of learning the coefficients of a polynomial kernel, the combinatorial structure of the base kernels to be combined allows sampling from this distribution in <span class="math">\(O(\log(d))\)</span> time, making the total computational cost of the method for achieving an <span class="math">\(\epsilon\)</span>-optimal solution <span class="math">\(O(\log(d)/\epsilon^2)\)</span>, thereby allowing our method to operate for very large values of d. Experiments with simulated and real data confirm that the new algorithm is computationally more efficient than its state-of-the-art alternatives.
</div>

<h2>Related Material</h2>
<div id="extras">
<ul>
<li><a href="afkanpour13.pdf">Download PDF</a></li>

</ul>
</div> <!-- extras -->

</div> <!-- content -->

</body>
</html>
Binary file added afkanpour13.pdf
Binary file not shown.
Binary file added agarwal13-supp.pdf
Binary file not shown.
108 changes: 108 additions & 0 deletions agarwal13.html
@@ -0,0 +1,108 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Selective sampling algorithms for cost-sensitive multiclass prediction | ICML 2013 | JMLR W&amp;CP</title>

<!-- Stylesheet -->
<link rel="stylesheet" type="text/css" href="../css/jmlr.css" />

<!-- MathJax -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({tex2jax: {inlineMath: [['\\(','\\)']]}});
</script>
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>


<!-- Metadata -->
<!-- Google Scholar Meta Data -->

<meta name="citation_title" content="Selective sampling algorithms for cost-sensitive multiclass prediction">

<meta name="citation_author" content="Agarwal, Alekh">

<meta name="citation_publication_date" content="2013">
<meta name="citation_conference_title" content="Proceedings of The 30th International Conference on Machine Learning">
<meta name="citation_firstpage" content="1220">
<meta name="citation_lastpage" content="1228">
<meta name="citation_pdf_url" content="http://jmlr.org/proceedings/papers/v28/agarwal13.pdf">

</head>
<body>

<div id="fixed"> <a align="right" href="http://www.jmlr.org/" target="_top"><img class="jmlr" src="http://jmlr.org/proceedings/papers/img/jmlr.jpg" align="right" border="0"></a>
<p><br><br>
</p><p align="right"> <a href="http://www.jmlr.org/"> Home Page </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/papers"> Papers
</a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/author-info.html"> Submissions </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/news.html">
News </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/scope.html">
Scope </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/editorial-board.html"> Editorial Board </a>


</p><p align="right"> <a href="http://jmlr.csail.mit.edu/announcements.html"> Announcements </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/proceedings">
Proceedings </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/mloss">Open
Source Software</a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/search-jmlr.html"> Search </a>

</p><p align="right"> <a href="http://jmlr.csail.mit.edu/manudb"> Login </a></p>

<br><br>
<p align="right"> <a href="http://jmlr.csail.mit.edu/jmlr.xml">
<img src="http://jmlr.org/proceedings/papers/img/RSS.gif" class="rss" alt="RSS Feed">
</a>

</p>
</div>

<div id="content">

<h1>Selective sampling algorithms for cost-sensitive multiclass prediction</h1>

<div id="authors">

Alekh Agarwal
</div>
<div id="info">
JMLR W&amp;CP 28
(3)
:
1220–1228, 2013
</div> <!-- info -->



<h2>Abstract</h2>
<div id="abstract">
In this paper, we study the problem of active learning for cost-sensitive multiclass classification. We propose selective sampling algorithms, which process the data in a streaming fashion, querying only a subset of the labels. For these algorithms, we analyze the regret and label complexity when the labels are generated according to a generalized linear model. We establish that the gains of active learning over passive learning can range from none to exponentially large, based on a natural notion of margin. We also present a safety guarantee to guard against model mismatch. Numerical simulations show that our algorithms indeed obtain a low regret with a small number of queries.
</div>

<h2>Related Material</h2>
<div id="extras">
<ul>
<li><a href="agarwal13.pdf">Download PDF</a></li>

<li><a href="agarwal13-supp.pdf">Supplementary (PDF)</a></li>

</ul>
</div> <!-- extras -->

</div> <!-- content -->

</body>
</html>
Binary file added agarwal13.pdf
Binary file not shown.
Binary file added agrawal13-supp.pdf
Binary file not shown.
