Issue with demo_faster_rcnn.py script. #5
Comments
Could you provide your mxnet version?
I am using mxnet 1.2.0.
You need to update your mxnet to the newest version, at least >1.2.1.
On Ubuntu:
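(Presumably a command along these lines; mxnet-cu90 is only an example for a CUDA 9.0 build, pick the mxnet-cuXX package that matches your CUDA version, as explained below:)
pip install --upgrade mxnet-cu90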
I am doing training on an AWS instance and inference on Windows. It is working fine with mxnet 1.3.0, but I have to check whether it produces good output. I will update once I get the output.
Just like the command for Windows that I posted in another issue (#3):
XX is your CUDA version, e.g. mxnet-cu80 if you use CUDA 8.0. Refer to https://pypi.org/project/mxnet/ for all available packages.
I did it, but when I check the mxnet version it still shows 1.2.0.
INFO:root:Start training from [Epoch 0]
Stack trace returned 10 entries:
Aborted (core dumped)
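A quick way to check which mxnet is actually being imported (just a sketch; if this still prints 1.2.0 after the upgrade, the old installation is probably still first on the Python path):

import mxnet as mx
print(mx.__version__)  # the maintainer's advice is to have at least 1.2.1 here
print(mx.__file__)     # path of the installation that Python actually picked up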
Never seen such an error.
@WalterMa It is not coming every time. Sometimes it comes and sometimes it works fine. I do not know why, maybe due to memory usage.
Hi,
I have done training on my own dataset and got 70% accuracy after 4 epochs.
I want to visualize the output, so I tried the demo script: I gave one input image, used the trained model, and changed the class names in the demo script.
But I got this error. Could you please let me know what the problem is? Thank you.
Traceback (most recent call last):
File "demo_faster_rcnn.py", line 65, in
cls, scores, bboxes = net(data.as_in_context(ctx), im_info.as_in_context(ctx))
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/block.py", line 413, in call
return self.forward(*args)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/block.py", line 629, in forward
return self.hybrid_forward(ndarray, x, *args, **params)
File "/home/ubuntu/gluon-faster-rcnn/rcnn/rcnn.py", line 69, in hybrid_forward
rois = self.proposal(rpn_cls_prob, rpn_bbox_pred, im_info)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/block.py", line 413, in call
return self.forward(*args)
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/gluon/block.py", line 629, in forward
return self.hybrid_forward(ndarray, x, *args, **params)
File "/home/ubuntu/gluon-faster-rcnn/rcnn/proposal.py", line 32, in hybrid_forward
threshold=self.rpn_nms_threshold, rpn_min_size=self.rpn_min_size)
File "", line 82, in MultiProposal
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
ctypes.byref(out_stypes)))
File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/base.py", line 149, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: Cannot find argument 'cls_prob', Possible Arguments:
rpn_pre_nms_top_n : int, optional, default='6000'
Number of top scoring boxes to keep after applying NMS to RPN proposals
rpn_post_nms_top_n : int, optional, default='300'
Overlap threshold used for non-maximumsuppresion(suppress boxes with IoU >= this threshold
threshold : float, optional, default=0.7
NMS value, below which to suppress.
rpn_min_size : int, optional, default='16'
Minimum height or width in proposal
scales : tuple of , optional, default=[4,8,16,32]
Used to generate anchor windows by enumerating scales
ratios : tuple of , optional, default=[0.5,1,2]
Used to generate anchor windows by enumerating ratios
feature_stride : int, optional, default='16'
The size of the receptive field each unit in the convolution layer of the rpn,for example the product of all stride's prior to this layer.
output_score : boolean, optional, default=0
Add score to outputs
iou_loss : boolean, optional, default=0
Usage of IoU Loss
, in operator _contrib_MultiProposal(name="", feature_stride="16", ratios="(0.5, 1, 2)", rpn_min_size="16", scales="(8, 16, 32)", rpn_post_nms_top_n="300", rpn_pre_nms_top_n="6000", threshold="0.7", cls_prob="
[[[[9.2525011e-01 9.8686647e-01 9.9559492e-01 ... 9.6093690e-01
   9.3473071e-01 8.3388972e-01]
  ...]]]]
<NDArray 1x18x37x37 @gpu(0)>")