Roadmap
v2: Simplicity and Performance First
It must be simple enough to teach a child and fast enough to drive innovation.
Our current focus is on building a "layer playground": a large collection of layers that can be composed into any network type, be it feedforward or recurrent.
We've put considerable work into GPU acceleration, and eventually all networks will run entirely on the GPU. Much of that work has gone into the underlying library, http://gpu.rocks.
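To give a sense of what that library looks like on its own, here is a minimal, illustrative GPU.js sketch; the matrix-multiply kernel is just an example, not the actual brain.js integration:

import { GPU } from 'gpu.js'; // v2-style named export; older releases export GPU as the default

const gpu = new GPU(); // transparently falls back to CPU mode when no GPU is available

// Compile a kernel that multiplies two 3x3 matrices; each output cell is computed in parallel.
const multiply = gpu
  .createKernel(function (a, b) {
    let sum = 0;
    for (let i = 0; i < 3; i++) {
      sum += a[this.thread.y][i] * b[i][this.thread.x];
    }
    return sum;
  })
  .setOutput([3, 3]);

const a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const b = [[9, 8, 7], [6, 5, 4], [3, 2, 1]];
const c = multiply(a, b); // [[30, 24, 18], [84, 69, 54], [138, 114, 90]]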
Recurrent and feedforward networks have always seemed like completely different architectures, when in reality only a few simple things distinguish them. We want to make both so easy that anyone can use them:
new FeedForward({
  inputLayer: () => { /* return an instantiated layer here */ },
  hiddenLayers: [
    (input) => { /* return an instantiated layer here */ },
    /* more layers? by all means... */
    /* `input` here is the output from the previous layer */
  ],
  outputLayer: (input) => { /* return an instantiated layer here */ }
});
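For example, a feedforward network learning XOR: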
import { FeedForward, layer } from 'brain.js';

const { input, feedForward, output } = layer;

const net = new FeedForward({
  inputLayer: () => input({ width: 2 }),
  hiddenLayers: [
    input => feedForward({ width: 3 }, input),
  ],
  outputLayer: input => output({ width: 1 }, input)
});

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);
net.run([0, 0]); // [0]
net.run([0, 1]); // [1]
net.run([1, 0]); // [1]
net.run([1, 1]); // [0]
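The Recurrent class has the same shape, with one addition: each hidden layer also receives its own previous output: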
new Recurrent({
  inputLayer: () => { /* return an instantiated layer here */ },
  hiddenLayers: [
    (input, previousOutput) => { /* return an instantiated layer here */ },
    /* more layers? by all means... */
    /* `input` here is the output from the previous layer */
    /* `previousOutput` is what came out of this layer previously, hence recurrent */
  ],
  outputLayer: (input) => { /* return an instantiated layer here */ }
});
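And a concrete recurrent network, again learning XOR: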
import { Recurrent, layer } from 'brain.js';

const { input, random, lstm, output } = layer;

const net = new Recurrent({
  inputLayer: () => input({ width: 2 }),
  hiddenLayers: [
    (input) => {
      const recurrence = random({ width: 3, height: 4 });
      return lstm({ width: 3 }, input, recurrence);
    }
  ],
  outputLayer: input => output({ width: 1 }, input)
});

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);
net.run([0, 0]); // [0]
net.run([0, 1]); // [1]
net.run([1, 0]); // [1]
net.run([1, 1]); // [0]
brain.recurrent provided a nice way to learn how to expose the concept of recurrence to the public. In v2 it will essentially become the Recurrent class, so we can remove brain.recurrent and continue development there.
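For reference, this is roughly how the existing brain.recurrent API is used today; the training data here is purely illustrative:

const brain = require('brain.js');

// v1-style recurrent usage that the v2 Recurrent class is intended to supersede
const net = new brain.recurrent.LSTM();

net.train([
  { input: 'I feel great about the world!', output: 'happy' },
  { input: 'The world is a terrible place!', output: 'sad' }
]);

net.run('I feel great about the world!'); // 'happy'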
More to come.
v3: Unsupervised learning, spiking neural networks, distributed learning