I have tried to run the code on a server with 4 Nvidia GPUs, and performance drops by a factor of 6 compared to running on a laptop CPU.
Is there a way to exploit the GPUs better?
Hi attardi,
I have been busy with some urgent projects recently and will look into the problem as soon as possible.
As a quick experiment, could you try replacing the L2 regularization (lines 339 to 350) with tf.nn.l2_loss()? In our experience, it helps speed up the program.
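A minimal sketch of what that replacement could look like, using the TF1-style API; the variable names, the `l2_lambda` coefficient, and `task_loss` are placeholders, not taken from the repository:

```python
import tensorflow as tf

# Hypothetical regularization strength; tune to match the original setting.
l2_lambda = 1e-4

# tf.nn.l2_loss computes 0.5 * sum(w ** 2) as a single fused op,
# so summing it over the weight matrices replaces a hand-written
# loop of square-and-reduce operations. Biases are skipped here.
l2_penalty = l2_lambda * tf.add_n(
    [tf.nn.l2_loss(v) for v in tf.trainable_variables()
     if "bias" not in v.name.lower()]
)

# Add the penalty to the task loss before building the training op.
total_loss = task_loss + l2_penalty
```

The idea is that a single fused `tf.nn.l2_loss` kernel per variable avoids building many small element-wise ops, which tends to reduce graph overhead, especially on GPU.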