What's the reasoning behind having the first layer smaller than the middle layers, unlike what the picture shows? Is it to reduce the number of parameters and overfitting, or simply the best configuration from the experiments?
My understanding is that while numerous architectures were explored (different activation types, adding hidden layers, trying different numbers of nodes per layer, different dropout rates, different learning rates, dense re-feeding on and off, etc.), this configuration had the best out-of-sample performance.
As to why this configuration performed the best in empirical experiments, I hypothesize that having a wide bottleneck layer helps the neural network learn a large number of "features" from the previous layer. Additionally, a high dropout rate forces the model to learn robust features: with only 20% of the neurons active (an 80% dropout rate), the model must be extra careful when learning which features are most useful for the task at hand.
Thus, this configuration likely had the best out-of-sample performance because the wide bottleneck layer with the high dropout rate allowed the model to learn a large number of very robust features.
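As a rough illustration (not the repository's actual code), a hypothetical stack with a narrow first layer, a wide middle layer, and heavy dropout could be sketched in PyTorch like this; the layer sizes and the 0.8 dropout rate are just assumptions picked to match the numbers discussed in this thread:

```python
import torch.nn as nn

# Hypothetical sketch: narrow first layer, wide middle layer, heavy dropout.
# The sizes (17000 -> 128 -> 512 -> 128 -> 17000) and the 0.8 dropout rate
# are illustrative assumptions, not the repository's exact configuration.
autoencoder = nn.Sequential(
    nn.Linear(17000, 128),   # first encoder layer (small)
    nn.SELU(),
    nn.Linear(128, 512),     # wide middle ("bottleneck") layer
    nn.SELU(),
    nn.Dropout(p=0.8),       # high dropout: only ~20% of activations survive
    nn.Linear(512, 128),     # decoder layer
    nn.SELU(),
    nn.Linear(128, 17000),   # last decoder layer (reconstructs the input)
)
```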
Yes, I think @paulhendricks is right - wide middle layer with large dropout allows it to learn robust representations.
Regarding the first layer (e.g. the first encoder layer) and the last layer (e.g. the last decoder layer) - those are actually huge in terms of weights because the input data (x) is high dimensional. Thus, if x is around 17,000-dimensional and the first layer has only 128 activations, there are 17,000 x 128 weights in that first layer alone.
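To make that concrete, here is a quick back-of-the-envelope count using the illustrative numbers from the comment above (a 17,000-dimensional input, a 128-unit first layer, and a 512-unit middle layer, all assumed for the example):

```python
# Rough parameter counts for the illustrative sizes discussed above
# (17,000-dimensional input, 128-unit first layer, 512-unit middle layer).
input_dim, first, middle = 17_000, 128, 512

first_layer_weights  = input_dim * first   # 17,000 x 128 = 2,176,000
middle_layer_weights = first * middle      # 128 x 512    = 65,536

print(first_layer_weights, middle_layer_weights)
```

The first (and last) layers dominate the parameter count, so keeping them narrow reduces the total number of weights far more than shrinking the middle layer would.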