# Roadmap
Robert Plummer edited this page Feb 2, 2018 · 26 revisions
## v2: Simplicity and Performance First
It must be easy enough to teach a child and fast enough to cause innovation.
Currently we want to focus our efforts on creating a "layer playground": a large collection of layers that can work with any network type, be it feedforward or recurrent.
Feedforward and recurrent networks have always seemed like completely different architectures, when really only a few very simple things distinguish them. We want to make them easy enough for anyone to use:
```js
new FeedForward({
  inputLayer: () => { /* return an instantiated layer here */ },
  hiddenLayers: [
    (input) => { /* return an instantiated layer here */ },
    /* more layers? by all means... */
    /* `input` here is the output from the previous layer */
  ],
  outputLayer: (input) => { /* return an instantiated layer here */ }
});
```
```js
import { FeedForward, layer } from 'brain.js';
const { input, feedForward, output } = layer;

const net = new FeedForward({
  inputLayer: () => input({ width: 2 }),
  hiddenLayers: [
    input => feedForward({ width: 3 }, input),
  ],
  outputLayer: input => output({ width: 1 }, input)
});

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

net.run([0, 0]); // [0]
net.run([0, 1]); // [1]
net.run([1, 0]); // [1]
net.run([1, 1]); // [0]
```
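Conceptually, the layer factories above just compose functions: each hidden layer consumes the output of the layer before it. A minimal plain-JavaScript sketch of that composition idea (not the brain.js internals; `buildForwardPass` and the toy layers are hypothetical names for illustration):

```js
// Hypothetical sketch: wire layer factories together the way the
// FeedForward constructor does, so each layer receives the previous output.
function buildForwardPass(inputLayer, hiddenLayers, outputLayer) {
  return function run(inputValues) {
    let activation = inputLayer(inputValues);
    for (const makeLayer of hiddenLayers) {
      activation = makeLayer(activation); // output of one layer feeds the next
    }
    return outputLayer(activation);
  };
}

// Toy "layers": here just plain functions over arrays.
const double = xs => xs.map(x => x * 2);
const sum = xs => [xs.reduce((a, b) => a + b, 0)];

const run = buildForwardPass(xs => xs, [double], sum);
console.log(run([1, 2, 3])); // [12]
```

The real layers carry weights and participate in backpropagation, but the wiring between them is exactly this kind of pipeline.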
Recurrent networks use the same API, with hidden layers additionally receiving their own previous output:

```js
new Recurrent({
  inputLayer: () => { /* return an instantiated layer here */ },
  hiddenLayers: [
    (input, previousOutput) => { /* return an instantiated layer here */ },
    /* more layers? by all means... */
    /* `input` here is the output from the previous layer */
    /* `previousOutput` is what came out of this layer previously, hence recurrent */
  ],
  outputLayer: (input) => { /* return an instantiated layer here */ }
});
```
```js
import { Recurrent, layer } from 'brain.js';
const { input, lstm, output } = layer;

const net = new Recurrent({
  inputLayer: () => input({ width: 2 }),
  hiddenLayers: [
    (input, previousOutput) => lstm({ width: 3 }, input, previousOutput),
  ],
  outputLayer: input => output({ width: 1 }, input)
});

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

net.run([0, 0]); // [0]
net.run([0, 1]); // [1]
net.run([1, 0]); // [1]
net.run([1, 1]); // [0]
```
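The only structural difference from the feedforward case is that `previousOutput` feeds the layer's last output back in on the next step. A plain-JavaScript sketch of that feedback loop (again hypothetical helper names, not brain.js internals):

```js
// Hypothetical sketch: a recurrent step keeps its previous output as state
// and passes it back into the layer on every call.
function buildRecurrentPass(hiddenLayer, initialState) {
  let previousOutput = initialState;
  return function step(input) {
    previousOutput = hiddenLayer(input, previousOutput); // the recurrence
    return previousOutput;
  };
}

// Toy recurrent "layer": a running sum, where the state carries the total.
const accumulate = (input, prev) => [prev[0] + input[0]];

const step = buildRecurrentPass(accumulate, [0]);
console.log(step([1])); // [1]
console.log(step([2])); // [3]
console.log(step([4])); // [7]
```

An LSTM layer does far more inside `hiddenLayer` (gates, cell state), but the outer feedback wiring is this simple, which is why the two network types can share one layer API.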
We've put a considerable amount of work into GPU acceleration, and eventually all networks will run fully on the GPU. Much of this work builds on GPU.js (http://gpu.rocks).
## v3: Unsupervised learning