RemNet
中文 | English

RemNet is an open source deep learning framework written in C++. It is very easy to use: you just define the network structure and set the relevant parameters to start training!

Download

```
$ git clone https://github.com/wmathor/RemNet.git
```

Once the clone is local, use Visual Studio to import the project and run it; it's very easy. You can also try changing the information in myModel.json, such as adding some other layers, and then run it again.

By the way, you don't need to prepare any data to get started; I've already provided it for you in ./RemNet/mnist_data.
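For a feel of what editing the configuration involves, here is a hypothetical sketch of a myModel.json-style file. The field names and values below are my assumptions for illustration, not the repo's actual schema; check the real myModel.json before editing:

```json
{
  "_comment": "hypothetical schema for illustration only; see the repo's real myModel.json",
  "layers": [
    { "type": "Conv", "kernel": 3, "num": 16 },
    { "type": "ReLU" },
    { "type": "Pool", "kernel": 2 },
    { "type": "FC", "num": 10 },
    { "type": "CrossEntropy" }
  ],
  "optimizer": "SGD",
  "learning_rate": 0.01,
  "batch_size": 64
}
```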

More about RemNet

  • Supports the most commonly used layer types so far: Conv, Pool, FC, ReLU, Tanh, Dropout, BN, Scale
  • Supports the two most commonly used loss layers: CrossEntropy, SVM
  • Supports multiple optimizers: SGD, Momentum, RMSProp
  • Supports two kinds of weight initialization: Gaussian, MSRA
  • Supports fine-tuning

RemNet is written in a style similar to Caffe, and its basic data types are Cube and Blob, which are related to each other in RemNet as follows.
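To make that relationship concrete, here is a minimal C++ sketch, assuming a Cube holds one sample as a 3-D tensor (channels × height × width) and a Blob is a batch of Cubes. The two type names come from RemNet, but the members and layout are my assumptions:

```cpp
#include <vector>

// Hypothetical sketch: a Cube stores one sample as a 3-D tensor,
// and a Blob is simply a batch of Cubes. Layout is assumed, not
// taken from the RemNet source.
struct Cube {
    int c, h, w;                  // channels, height, width
    std::vector<double> data;     // c*h*w values, row-major
    Cube(int c_, int h_, int w_) : c(c_), h(h_), w(w_), data(c_ * h_ * w_, 0.0) {}
    double& at(int ci, int hi, int wi) { return data[(ci * h + hi) * w + wi]; }
};

struct Blob {
    std::vector<Cube> cubes;      // one Cube per sample in the batch
    Blob(int n, int c, int h, int w) : cubes(n, Cube(c, h, w)) {}
};
```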

Here is a simple network diagram to help you understand the Net, Layer, and Blob relationships in the source code
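In the same hedged spirit, a Net can be pictured as an ordered list of Layers with Blobs flowing between them. Only the three names Net, Layer, and Blob come from RemNet; the interface below is assumed:

```cpp
// Hypothetical sketch of the Net/Layer/Blob relationship: a Net owns
// an ordered list of Layers, and each Layer maps an input Blob to an
// output Blob. Reuses the Blob sketch above.
struct Layer {
    virtual ~Layer() = default;
    virtual Blob forward(const Blob& in) = 0;     // produce output Blob
    virtual Blob backward(const Blob& dout) = 0;  // produce input gradient
};

struct Net {
    std::vector<Layer*> layers;                   // e.g. Conv -> ReLU -> Pool -> FC
    Blob forward(Blob x) {
        for (Layer* l : layers) x = l->forward(x);  // Blobs flow layer to layer
        return x;
    }
};
```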

Design focus

Taking the MNIST dataset as an example, the following figure shows how its Images and Labels are stored in RemNet

Obviously, I've one-hot encoded the labels, which makes it easier to compute the loss later, and everything is stored in Blob format. That is more instructive than a plain data type, because it is how most deep learning frameworks represent data.
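Reusing the Cube sketch above, one-hot encoding a digit label might look like this; the 10×1×1 storage shape is my assumption:

```cpp
// Hypothetical sketch: label k becomes a num_classes x 1 x 1 Cube with
// a single 1.0 at channel k, ready to be batched into a label Blob.
// The storage shape is assumed, not RemNet's documented layout.
Cube one_hot(int label, int num_classes = 10) {
    Cube y(num_classes, 1, 1);   // all zeros
    y.at(label, 0, 0) = 1.0;     // mark the true class
    return y;
}
```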

Even the MNIST dataset has 60,000 samples, so it is not possible to feed them all in at once; RemNet therefore also lets developers set their own batch size. Suppose I define an ultra-small convolutional neural network (Conv->ReLU->Pool->FC->Softmax/SVM), with forward and backward propagation as follows. You'll notice that some layers only use x, not w and b, but I've declared all three for programming convenience; I just don't use them. The same goes for backpropagation: some layers have no w and b gradient information at all, as the sketch below illustrates.
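Here is a hedged illustration of that convention, reusing the Cube type above: a ReLU layer receives {x, w, b} for interface uniformity but only ever touches x, so dw and db are simply never produced. The signatures are my assumptions, not RemNet's actual API:

```cpp
// Hypothetical sketch: a parameter-free layer under the uniform
// {x, w, b} convention. w and b (in[1], in[2]) are declared but unused.
struct ReLULayer {
    void forward(const std::vector<Cube>& in, Cube& out) {
        out = in[0];                                        // copy x
        for (double& v : out.data) v = v > 0.0 ? v : 0.0;   // max(x, 0)
    }
    // Only dx is computed; ReLU has no w/b gradients at all.
    void backward(const std::vector<Cube>& in, const Cube& dout, Cube& dx) {
        dx = dout;
        for (size_t i = 0; i < dx.data.size(); ++i)
            if (in[0].data[i] <= 0.0) dx.data[i] = 0.0;     // gate by 1{x > 0}
    }
};
```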

Traditional CNN operations have no problem with a data representation like the Blob, but what about the FC layer? When I use PyTorch, I almost always flatten the input data before the FC layer, but RemNet solves this problem without flattening, because flattening wastes extra time.

Let's look at the left side first; if you are not familiar with convolution and fully connected layers, please review them a bit. The left side is RemNet's solution: it multiplies the values at each position of the input channels by the corresponding weight values and sums them. If you flatten each Cube, it becomes the right side.
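Both sides compute the same number, which a short sketch makes explicit (again reusing the Cube type above, with hypothetical naming):

```cpp
// Hypothetical sketch of the un-flattened FC idea: treat one output
// neuron's weights as a Cube shaped like the input, multiply position
// by position, and sum. This equals dot(flatten(x), flatten(w)).
double fc_dot(const Cube& x, const Cube& w) {
    double sum = 0.0;
    for (size_t i = 0; i < x.data.size(); ++i)
        sum += x.data[i] * w.data[i];
    return sum;   // one output neuron's pre-activation
}
```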

FAQ

  1. What is the biggest difference between RemNet and TensorFlow or PyTorch?

    I almost never use TensorFlow because it's too hard to learn; PyTorch is the deep learning framework I always use, so I'll compare RemNet with PyTorch. In PyTorch you have to define an optimizer, pass it the parameters, and, in the training loop, zero the gradients and call backpropagation yourself. RemNet doesn't need that much trouble: you just modify the parameters in myModel.json, such as which optimizer to use and how much its weight decay is, and leave the rest to the program.

  2. Why is this project called RemNet?

    Because I like a girl called レム, whose name translates into English as Rem :heart:

  3. What can I do?

    You can fork the project and add new features to it. If someone helps maintain the project, I'll add a list of contributors, and I expect you to be the first!

  4. Can I use it for a competition or for business?

    If you are a contributor, I can consider it. Otherwise, you can't. For details, please refer to the license.

🎨 TODO

  • Implement support for L2 regularization
  • Implement a common image data interface
  • Optimize the code and package it as an executable
  • Support RNNs
  • Support GPU training (in this lifetime, maybe)
  • Design a graphical interface (in this lifetime, maybe × 2)

LICENSE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
