
improving wheel chair detector #354

Open · 5 tasks
marc-hanheide opened this issue Oct 7, 2016 · 11 comments

marc-hanheide (Member):

  • hard negative mining
  • training on test data (as we are running live now anyway)
  • parameter tuning to go for reasonably low FPR
  • upload new version of scipy / THEANO
  • test in old tracking pipeline
marc-hanheide added this to the AAF y4 milestone Oct 7, 2016
Pandoro (Contributor) commented Oct 7, 2016:

We should also not forget the map filter option! It seemed to work nicely and might also help fix many of the FPs.


lucasb-eyer (Member):

Thanks for writing this up, Marc. What are the main problems we are trying to solve here?

I agree with @Pandoro that the biggest bang for the buck will come from adding a map filter, and I think @cdondrup already did that for another detector in the past, so it should be very little buck to pay?

As for the suggestions, here are my thoughts, mostly negative, sorry 😄:

  • I don't expect hard negative mining to be of any use here.
    We go through the whole dataset during training and every single point contributes to the loss; the more wrong a point is, the more it already contributes. Hard negative mining is useful for not wasting computation when many of the visited points don't contribute to the loss because of how the loss is formulated, for example with a margin criterion (see the short numerical sketch after this list).
  • Training on all data: sure.
  • Parameter tuning: the current values are "optimal". If we go for fewer FPs, we will get many more FNs, and according to @Pandoro's experiments that won't help the tracker. But if that's really what we want, we can do it.
  • Updating scipy/Theano won't change anything, likely not even speed; why is it needed?
  • Old tracking pipeline: yeah, it might be interesting to see how that goes. @Pandoro, is it easy for you to run that experiment and create the PR curve?
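
To make the margin-criterion point concrete, here is a minimal numerical sketch (toy numbers only, not the detector's actual loss): with a hinge loss, negatives that already sit outside the margin contribute exactly zero loss and zero gradient, so only the hard ones matter, whereas with a cross-entropy loss every sample contributes in proportion to how wrong it is.

```python
import numpy as np

# Hedged, toy-number sketch of the point above (not the project's real loss).
# Four background samples, scored from "clearly not a wheelchair" to "looks like one".
scores = np.array([-3.0, -0.5, 0.8, 2.5])
y = -1.0  # negative label

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

hinge = np.maximum(0.0, 1.0 - y * scores)  # margin criterion (e.g. linear SVM)
xent = -np.log(1.0 - sigmoid(scores))      # cross-entropy against label 0

print(hinge)  # [0.   0.5  1.8  3.5]        -> the easy negative contributes nothing
print(xent)   # [~0.05 0.47 1.17 2.58]      -> every sample still contributes
```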

lucasb-eyer (Member):

Btw, I can run such a hard-negative experiment, but I'm pretty certain that, at best, it will improve learning speed. I've run such experiments in the past.

Pandoro (Contributor) commented Oct 9, 2016:

Just to clarify, almost all of this is what I suggested.

  • Hard negative mining seems interesting and is something I want to try to see what it does. I totally agree with your point, but it seems to be a standard thing in "traditional detection" approaches, and in my head it is semi-mixed with hard negative mining on scenes where clearly no wheelchair is present (i.e. just recording further bags in random other environments).
  • Parameter tuning is just the threshold. During the review I used 0.9 or 0.95, but after doing the PR curves I can see how even higher values might be valuable (see the short PR-curve sketch after this comment).
  • The scipy and Theano versions just seem like a nice thing to update. Last time things already went wrong for me because I installed cuDNN 5. This is purely to make it "easier to use" for others, and if speed improves, that is a nice little benefit.
  • I don't really get how the old tracking pipeline got in here. But as far as I know, we all agreed that for AAF we are not going to use the old tracker.

I just asked Marc to tag you here so that you know what is going on. I was planning on doing the work myself.
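
As a concrete version of the threshold point, here is a hedged sketch (toy data and placeholder names, not the real detector output) of picking the lowest confidence threshold that still meets a target precision from the PR curve:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Toy stand-ins for per-detection confidences and ground-truth labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                        # 1 = wheelchair, 0 = background
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1000), 0.0, 1.0)    # toy confidences

precision, recall, thresholds = precision_recall_curve(y_true, scores)
target_precision = 0.95  # "reasonably low FPR" from the OP, expressed as precision
ok = np.flatnonzero(precision[:-1] >= target_precision)
if ok.size:
    i = ok[0]  # lowest threshold meeting the target, i.e. best recall at that precision
    print("threshold %.3f -> precision %.3f, recall %.3f"
          % (thresholds[i], precision[i], recall[i]))
```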

cdondrup (Member) commented Oct 9, 2016:

Regarding the laser filter: given that the people perception is started with with_laser_filter:=true (which is the default), all you need to do is subscribe to /base_scan_filter instead of /scan.
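
For reference, the topic switch described above amounts to a one-line change in whichever node feeds the detector. A minimal sketch, assuming /base_scan_filter publishes sensor_msgs/LaserScan just like /scan (node and callback names are placeholders, not existing project code):

```python
#!/usr/bin/env python
# Minimal sketch of the topic switch described above. Assumption: /base_scan_filter
# carries sensor_msgs/LaserScan like /scan; node and callback names are made up.
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(msg):
    # hand the filtered scan to the wheelchair detector here
    rospy.logdebug("filtered scan with %d ranges", len(msg.ranges))

if __name__ == "__main__":
    rospy.init_node("wheelchair_detector_input")
    # previously: rospy.Subscriber("/scan", LaserScan, scan_cb)
    rospy.Subscriber("/base_scan_filter", LaserScan, scan_cb)
    rospy.spin()
```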

lucasb-eyer (Member):

Cool, thanks for the clarification Alex!

About hard negatives: if this means new data that's not part of the train/test set, it might help, as it would give us more data. On the training data itself, it was used in most classic detection scenarios because they used SVMs and thus the hinge loss, where it makes sense. I can tell you more when I'm back, if you're interested.

About the old tracker: I just had a look at Bastian's slides today, and it sounded like there will be a revival. With all of Stefan's fixes, the Denis tracker is actually really good (better than NNJPDA).

cdondrup (Member):

As long as it allows different detectors to be integrated easily, I don't mind switching. But I cannot guarantee that all the components relying on the current tracker will still work if the output is not the same as the old tracker's, and I don't have time to work on this, I'm afraid.

lucasb-eyer (Member):

Now might be time to get Stefan a github account and ping him here =)

The thing is that with Stefan's fixes and modifications, besides being better, the tracker also makes it possible to get the camera images for a track (remember we wanted to do this a while ago, @cdondrup?), which is essential for basically all of the vision-related tasks if we want to run them on top of tracks. And I think we want/need this for the human-workspace stuff?

We could probably make the tracker also publish on a topic in the same format as the current one, for compatibility; a rough sketch of the idea is below. But that's starting to get outside my knowledge zone.
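
A rough sketch of the compatibility idea, just to illustrate the shape of it: a small relay node that republishes the new tracker's output on a second topic. All topic names and the message type here are placeholders; the real old-tracker format would have to be matched field by field.

```python
#!/usr/bin/env python
# Rough sketch of a compatibility relay: republish the new tracker's output on a
# second topic in the old format. All names and the message type are placeholders.
import rospy
from geometry_msgs.msg import PoseArray

def main():
    rospy.init_node("tracker_compat_relay")
    pub = rospy.Publisher("/people_tracker/positions_compat", PoseArray, queue_size=10)

    def relay(msg):
        # convert/copy the fields the old consumers expect, then republish
        pub.publish(msg)

    rospy.Subscriber("/new_tracker/tracks", PoseArray, relay)
    rospy.spin()

if __name__ == "__main__":
    main()
```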

Pandoro (Contributor) commented Oct 10, 2016:

So we discussed all of this with Marc and Nick at the GA. For AAF we will not mess around with the tracker, since all of the models that @cdondrup learned are based on the old one, and it would just be too risky for this deadline. For the TSC deployment, Nick told Stefan he can do it if he is interested, and I guess it would be nice to see. There the risk is not as high, since only a few other components really rely on the tracks, those should be more robust, and there is more time to test before the deployment.

Marc showed Stefan how to do some tests and told him which topics are important, but so far nobody has really seen a real use case for our trackers. (This all happened after Bastian's talk.) However, I see your point, @lucasb-eyer, and it might thus be nice for Stefan to have a look at this. He is on vacation for a week, but wanted to look at it when he is back. However, I don't think this point has any relevance in this issue.

lucasb-eyer (Member):

OK, thanks for the update; I should not say so much while I'm out of the loop, hehe. If there's no requirement from anyone (from me, there isn't any more), then we might not want Stefan to do the work. The relevance to this issue is that it was the last point in the OP.

Pandoro (Contributor) commented Oct 10, 2016:

Just to clarify, by "has any relevance" I didn't mean to say stop discussing it; I just wanted to point out that I don't think it should be a point in the OP. :)
