```
python train.py --task lp --dataset cora --model GCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.2 --weight-decay 0 --manifold Euclidean --log-freq 5 --cuda 0
```

```
Traceback (most recent call last):
  File "F:\Temp\Learn_hyperbolic\hgcn-master\train.py", line 153, in <module>
    train(args)
  File "F:\Temp\Learn_hyperbolic\hgcn-master\train.py", line 99, in train
    train_metrics['loss'].backward()
  File "D:\Softwares\MiniConda\envs\cuda117\lib\site-packages\torch\_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "D:\Softwares\MiniConda\envs\cuda117\lib\site-packages\torch\autograd\__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2708, 16]], which is output 0 of ReluBackward0, is at version 2; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```

What should I do?
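This error means a tensor that autograd saved for the backward pass (here the ReLU output of shape `[2708, 16]`) was modified in place after the forward pass. ReLU's backward uses its own output, so any in-place edit of that output (an `inplace=True` activation, a `_`-suffixed op like `add_()`, or `+=` on the tensor) invalidates it. You can usually locate the offending line by wrapping training with `torch.autograd.set_detect_anomaly(True)`. A minimal sketch reproducing the error and the usual fix (the tensors here are illustrative, not from the hgcn code):

```python
import torch

# Reproduce: modify ReLU's output in place, then try to backprop.
x = torch.randn(4, 3, requires_grad=True)
y = torch.relu(x)
y.add_(1.0)  # in-place edit bumps the tensor's version counter
try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation" — same error as in the traceback
    print(type(e).__name__)

# Fix: use the out-of-place op so the saved ReLU output stays untouched.
x2 = torch.randn(4, 3, requires_grad=True)
y2 = torch.relu(x2)
y2 = y2 + 1.0  # out-of-place; creates a new tensor
y2.sum().backward()  # succeeds
```

Since the traceback points at `ReluBackward0`, I'd check the GCN layer/aggregation code in the repo for `nn.ReLU(inplace=True)`, `F.relu(x, inplace=True)`, or in-place updates of the layer output (e.g. `x += ...` or `x.clamp_(...)`) after the activation, and change them to out-of-place versions. Downgrading is not needed; newer PyTorch versions are just stricter about detecting this.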