- New cost functions
- Add batching to the train algorithm
- Add optimizers
- Add Dropout layer
- FIX INPUT/OUTPUT DIMENSIONS
- Create custom cost functions
- Create custom Activation functions
- Fix problems with activation register
- Add early stopping
- Check if clone is necessary
- Change train API to use a config struct instead of arguments (TrainConfig; see the sketch after this list)
- Add Flatten layer
- Separate from/to MSGPack logic from the Layer trait
- Try to use only one global register instead of three
- Separate builder and global register logic
- Create `register!` macro to register all layers, activations and costs (see the macro sketch after this list)
- Create derive macro for CostFunction and ActivationFunction
- Create derive macro for `MSGPackFormatting`
- Generalize CostFunction and ActivationFunction traits (NNUtils)
- Check docs
- Check README
- Add BatchNorm layer
- Add Conv layer
- Add Pooling layer
- Add Deconv layer
- Add Embedding layer
- Add Recurrent layer
- Improve backpropagation (resilient propagation)
- Multithreading
- Try to use GPU (WGPU, torch, etc.)
- See erased-serde
- Refactor the train algorithm
- Fix Adam optimizer
- Website for docs and examples
- Allow users to set the type of the numbers used in the neural network (f32 or f64; see the precision sketch after this list)
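
A rough sketch of what the `TrainConfig` item above could look like; the field names, defaults, and the `train` call shown in the comment are assumptions about the design, not the crate's actual API:

```rust
// Hypothetical config struct for the train API; every field here is illustrative.
#[derive(Clone, Debug)]
pub struct TrainConfig {
    pub epochs: usize,
    pub learning_rate: f64,
    pub batch_size: usize,
    pub early_stopping_patience: Option<usize>, // in epochs; None = disabled
}

impl Default for TrainConfig {
    fn default() -> Self {
        Self {
            epochs: 100,
            learning_rate: 0.01,
            batch_size: 32,
            early_stopping_patience: None,
        }
    }
}

// The call site would then take one struct instead of a long argument list:
// nn.train(&inputs, &targets, TrainConfig { epochs: 500, ..Default::default() });
```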
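A minimal sketch of the single `register!` macro idea, collapsing the three registers into one; the registry here only stores names to keep the example short, and every identifier besides `register!` is illustrative:

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// One global registry instead of three separate ones (layers, activations, costs).
static REGISTRY: Mutex<Option<HashSet<&'static str>>> = Mutex::new(None);

macro_rules! register {
    ($($name:ident),* $(,)?) => {{
        let mut guard = REGISTRY.lock().unwrap();
        let set = guard.get_or_insert_with(HashSet::new);
        $( set.insert(stringify!($name)); )*
    }};
}

fn main() {
    // Layers, activations and costs would all go through the same macro.
    register!(Dense, Dropout, ReLU, Sigmoid, MSE);
    println!("registered: {:?}", REGISTRY.lock().unwrap());
}
```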
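A sketch of the generic-precision item, assuming a bound on the external `num_traits::Float` trait; the `NN` struct and its field are placeholders, not the crate's real types:

```rust
use num_traits::Float; // external crate providing a common trait for f32 and f64

// Hypothetical network type generic over the floating-point precision.
pub struct NN<F: Float> {
    // simplified: one weight matrix per layer
    weights: Vec<Vec<F>>,
}

impl<F: Float> NN<F> {
    pub fn new() -> Self {
        Self { weights: Vec::new() }
    }
}

// Users would pick the precision at construction time:
// let net32: NN<f32> = NN::new();
// let net64: NN<f64> = NN::new();
```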