- Custom Neural Network Architecture: Designed and implemented fully customizable layers and activation functions (ReLU, Sigmoid, Tanh, etc.), allowing for custom network topologies and training parameters (the activations are sketched after this list).
- Optimization: Implemented the Adam optimizer for efficient, adaptive gradient descent and faster convergence, alongside Xavier initialization to promote stable gradient flow during training (see the initializer/optimizer sketch after this list).
- Efficient Training: Enabled randomized mini-batch gradient descent with backpropagation, supporting loss/cost functions such as Huber loss (see the Huber loss sketch below).
- Scalable Design: Built a modular framework supporting different topologies, making it easy to extend and adapt to various datasets and applications.
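If you're curious what the listed activations look like, here's a minimal NumPy sketch of each one and its derivative (used during backpropagation). This is illustrative only; the function names are not LlamaNet's actual API.

```python
import numpy as np

# Illustrative activation functions and their derivatives.
# Names are hypothetical, not LlamaNet's API.
def relu(x):
    return np.maximum(0.0, x)

def relu_prime(x):
    return (x > 0).astype(x.dtype)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh(x):
    return np.tanh(x)

def tanh_prime(x):
    return 1.0 - np.tanh(x) ** 2
```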
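Likewise, here is a rough sketch of Xavier (Glorot) uniform initialization and a single Adam update step, following the standard textbook formulations. Variable and function names are illustrative, not the library's internals.

```python
import numpy as np

def xavier_init(n_in, n_out, rng=None):
    # Xavier/Glorot uniform: bounds scale with fan-in and fan-out
    # so activation variance stays roughly constant across layers.
    rng = rng if rng is not None else np.random.default_rng()
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update; t is the 1-based step count.
    m = beta1 * m + (1 - beta1) * grad          # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps) # adaptive update
    return w, m, v
```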
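And a sketch of Huber loss plus the randomized mini-batch pass described in the training bullet (again, illustrative rather than the library's actual code):

```python
import numpy as np

def huber_loss(y_pred, y_true, delta=1.0):
    # Quadratic near zero, linear in the tails: less sensitive
    # to outliers than mean squared error.
    err = y_pred - y_true
    quad = 0.5 * err ** 2
    lin = delta * (np.abs(err) - 0.5 * delta)
    return np.where(np.abs(err) <= delta, quad, lin).mean()

def minibatches(X, y, batch_size, rng=None):
    # Shuffle indices once per epoch, then yield successive batches.
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]
```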
- Achieved 98.10% accuracy over 10,000 test cases on the MNIST handwritten digit classification test.
- Achieved 96.26% accuracy after a single epoch.
- See the Excel spreadsheet for a more in-depth analysis of the library's test results :D
- Implement parallelization to reduce computation time
- Implement the cross-entropy cost function and softmax to better support classification tasks (see the sketch after this list)
- Implement support vector machines (SVMs)
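For the planned cross-entropy/softmax work, the standard formulation looks roughly like this (a sketch of the math, not the eventual v2 implementation):

```python
import numpy as np

def softmax(z):
    # Shift by the row max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels, eps=1e-12):
    # Mean negative log-likelihood of the true class;
    # labels are integer class indices into each row of probs.
    return -np.log(probs[np.arange(len(labels)), labels] + eps).mean()
```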
You'll probably see these changes in v2 of LlamaNet, where I will also implement a custom image processing library to detect and classify hotdogs 🌭🌭🌭🌭. Hope to see you there!