Robert Plummer edited this page Jul 25, 2017 · 26 revisions

Simplicity and Performance First

It must be easy enough to teach a child and fast enough to cause innovation.

Layer Playground

We currently want to focus our efforts on creating a "layer playground": a large collection of layers that can work with any network type, be it feedforward or recurrent.

Layer idea types:

These layers will all support at least two dimensions, fit together easily, and provide the basis for making what is actually going on understandable and straightforward.
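To illustrate the idea, here is a minimal sketch of what composable layers might look like. The class and method names (`Layer`, `predict`, `Relu`) are hypothetical illustrations, not the actual brain.js API:

```javascript
// Hypothetical sketch of composable layers; names are illustrative,
// not the actual brain.js API.
class Layer {
  // Each layer works on at least two dimensions (width x height).
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  predict(input) {
    throw new Error('override in subclass');
  }
}

// Layers fit together by matching dimensions, regardless of whether
// the surrounding network is feedforward or recurrent.
class Relu extends Layer {
  predict(input) {
    // Apply max(0, x) element-wise to a 2D array.
    return input.map((row) => row.map((v) => Math.max(0, v)));
  }
}

const relu = new Relu(2, 2);
console.log(relu.predict([[-1, 2], [3, -4]])); // [ [ 0, 2 ], [ 3, 0 ] ]
```

Because every layer exposes the same small interface, a network becomes just a pipeline of `predict` calls, which is what makes the "playground" approach understandable.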

GPU Acceleration

We've put a considerable amount of work into achieving GPU acceleration, and all networks will eventually run fully on the GPU. Much of this work has been done with GPU.js: http://gpu.rocks.

The initial branch where we are experimenting with it is https://github.com/harthur-org/brain.js/tree/nn-gpu. This branch's neural net will eventually shrink in size so that it extends the base neural network, neural-network.js.

Avoiding the Matrix-Transformation GPU/CPU "Dance"

The industry practice is generally to use matrix transformations and to dance back and forth between the CPU and GPU, which yields little performance increase over what could otherwise be achieved. With GPU.js, we will keep most of the existing textures (the bit-shifted memory that looks like an image, but is really the numbers) on the GPU, so they never need to come back to the CPU.

We can do this because, rather than relying on a series of separate matrix transformations (i.e., multiply all matrices, then add all matrices, then relu all matrices), we perform as much of the linear math as possible in a single pass on the GPU. With this approach we still get the resulting matrix, but all the related operations happen in a single go (i.e., multiply, add, and relu simultaneously), letting the GPU work out the complexities of the math, which is what it does best.

In testing, a simple matrix multiplication using texture mode with GPU.js was over 100 times faster than the CPU at 32-bit floating-point accuracy. If we extended GPU.js so that the resulting matrices could drop to 8- or 16-bit accuracy, that should gain roughly another order of magnitude, going from 100 to 1000 times faster than a conventional CPU. (Testing was done on Firefox, where GPU.js happened to run about 500% faster than on Chrome, so the comparison isn't entirely fair, but it conveys the idea.)
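The fusion idea can be shown with a plain CPU-side sketch. This is not the GPU.js kernel code itself, just an illustration of the difference between three separate passes (each producing an intermediate matrix that, on a GPU, would have to round-trip through memory) and one fused pass computing the same result:

```javascript
// Sketch of separate vs. fused layer math; illustrative only,
// not the actual brain.js or GPU.js implementation.
const relu = (x) => Math.max(0, x);

// Separate passes: multiply, then add bias, then relu -- three full
// traversals, each materializing an intermediate vector.
function forwardSeparate(weights, input, biases) {
  const product = weights.map((row) =>
    row.reduce((sum, w, i) => sum + w * input[i], 0)
  );
  const withBias = product.map((v, i) => v + biases[i]);
  return withBias.map(relu);
}

// Fused pass: the same linear math in a single traversal, the way
// one GPU kernel would compute each output element in one go.
function forwardFused(weights, input, biases) {
  return weights.map((row, i) =>
    relu(row.reduce((sum, w, j) => sum + w * input[j], 0) + biases[i])
  );
}

const weights = [[1, -2], [3, 4]];
const input = [5, 6];
const biases = [0.5, -1];

// Both produce the same matrix; the fused version never creates
// the intermediates, which on the GPU means no CPU round-trips.
console.log(forwardSeparate(weights, input, biases)); // [ 0, 38 ]
console.log(forwardFused(weights, input, biases));    // [ 0, 38 ]
```

On the GPU the payoff is larger than it looks here: each avoided intermediate is a texture that would otherwise be read back and re-uploaded, which is exactly the "dance" described above.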
