
LED Controls Through Pi #4

Draft · wants to merge 2 commits into master
Conversation

PascalSkylake (Contributor)

Don't merge this!

I wanted to learn about networking in Java and made this. I also made a program for the Pi that controls an LED strip based on packets sent from the RIO: https://github.com/PascalSkylake/PiLED. In theory this should work, so I'd like to test it at some point in the offseason, though it's kind of pointless, so we probably shouldn't actually use it.
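
Roughly, the RIO side just fires small UDP datagrams at the Pi. A stripped-down sketch of the idea is below; the address, port, and message format here are placeholders for illustration, not the actual code in this PR or the real PiLED protocol.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Rough sketch of sending an LED state packet from the RIO to the Pi over UDP.
// The address, port, and message format are placeholders, not the real PiLED protocol.
public class LedPacketSender {
    private static final int PI_PORT = 5805; // placeholder port

    private final DatagramSocket socket;
    private final InetAddress piAddress;

    public LedPacketSender() throws Exception {
        socket = new DatagramSocket();
        piAddress = InetAddress.getByName("10.0.0.12"); // placeholder Pi address
    }

    /** Fire-and-forget: send the current LED state as a small UDP datagram. */
    public void sendState(String state) throws Exception {
        byte[] data = state.getBytes(StandardCharsets.UTF_8);
        DatagramPacket packet = new DatagramPacket(data, data.length, piAddress, PI_PORT);
        socket.send(packet);
    }
}
```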

@Ernie3 (Member) commented May 13, 2022

Pretty cool! And this is something we can totally try in the shop if there is serious interest. Just a couple things I want to note:

  1. The FMS is a very restricted network. Any network communication over the FMS needs to follow R704 in the game manual: only ports 5800-5810 are open to teams, and any packet sent outside that window (and not part of the other predetermined ports) will be dropped. Keep in mind the currently used ports too, like the camera feedback on 5800, the camera REST API on 5809, and RasDash (I think) on port 5810. We can change these, and if push ever came to shove we could move the camera REST API under the same web server as the camera stream to save a port, but we are not short on ports at the moment.
  2. If the goal is to control LEDs on the robot, then going through the Pi may be an unnecessary step. We may be able to plug directly into the roboRIO, or perhaps the new addressable LED port on the power distribution hub, eliminating a lot of complexity (a rough sketch of the roboRIO option follows this list). Generally speaking we use the Pi to offload heavy computations like vision or camera feedback (especially when the camera feedback is converting MJPEG to H.264).
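
For reference on point 2, driving a strip straight off the roboRIO with WPILib's AddressableLED class is only a few lines. This is just a sketch; the PWM port and strip length are placeholders for whatever the robot would actually use.

```java
import edu.wpi.first.wpilibj.AddressableLED;
import edu.wpi.first.wpilibj.AddressableLEDBuffer;

// Minimal sketch of driving the strip directly from the roboRIO with WPILib.
// PWM port 0 and the 60-LED length are placeholders.
public class OnboardLeds {
    private final AddressableLED led = new AddressableLED(0);
    private final AddressableLEDBuffer buffer = new AddressableLEDBuffer(60);

    public OnboardLeds() {
        led.setLength(buffer.getLength());
        led.setData(buffer);
        led.start();
    }

    /** Fill the whole strip with one solid color. */
    public void setSolid(int r, int g, int b) {
        for (int i = 0; i < buffer.getLength(); i++) {
            buffer.setRGB(i, r, g, b);
        }
        led.setData(buffer);
    }
}
```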

I have not looked at the specific logic or overall flow, but +1 for utilizing states as enums and the periodic subsystem function! Very nice encapsulation there! If you guys have any questions about the implementation, feel free to ask me, but definitely be sure to hit up the other software members to discuss this @PascalSkylake!
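
For anyone reading along later, the enum-state plus periodic() pattern I mean looks roughly like this. It is not the actual code in this PR; the state names and colors are invented for the example.

```java
import edu.wpi.first.wpilibj2.command.SubsystemBase;

// Rough illustration of the enum-state + periodic() pattern (not the code from this PR).
// The state names and colors are invented for the example.
public class LedStateSubsystem extends SubsystemBase {
    public enum LedState { IDLE, INTAKING, SHOOTING }

    private LedState state = LedState.IDLE;

    public void setState(LedState newState) {
        state = newState;
    }

    @Override
    public void periodic() {
        // Called once per scheduler loop (~20 ms); push the current state out to the LEDs.
        switch (state) {
            case INTAKING:
                sendColor(0, 255, 0);
                break;
            case SHOOTING:
                sendColor(255, 0, 0);
                break;
            default:
                sendColor(0, 0, 255);
                break;
        }
    }

    private void sendColor(int r, int g, int b) {
        // Placeholder for however the packet or strip update actually happens.
    }
}
```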

@PascalSkylake (Contributor, Author)

I thought the FMS port restrictions didn't apply to the network on the robot itself? The manual didn't specify, but this only communicates directly between the RIO and the Pi on the robot's own network. And yeah, this is definitely unnecessary; like I said, I mostly just made it to mess around with networking on something not totally useless.

Also, on an unrelated note, Cole and I were talking about trying to modify the driver cam stuff to use HEVC/H.265. Some people (Colin) were complaining about the quality being terrible. I'm not sure how big a difference it would make, but I know ffmpeg supports it, and it should allow for better quality while still fitting under the FMS's tiny bandwidth cap. I just figured I should ask your opinion before either of us tries to do anything with it.

@Ernie3 (Member) commented May 13, 2022

Looking at our use case more closely, I think you may be right about the port blocking. Whatever port you choose will probably be fine; I just like to bring this up whenever I hear about consuming ports, just in case. And sounds good, man; I only brought it up because I was not 100% sure of you guys' goals.

The web app we use for driver feedback is slow because of the firewall, I think. Whenever we use our camera feedback app it is very quick until you enable the firewall (which is done by the radio configuration tool; it is a checkbox option when you configure the radio). This is probably because the data is sent over a TCP websocket, which is not an ideal protocol for streaming real-time video. For real-time camera feedback you usually want UDP or some other best-effort protocol that avoids the costly TCP three-way handshake and retries when a camera packet fails to deliver; in a real-time stream, lost packets should simply be dropped and only the latest packets sent, so you are actually watching live video and not video from several seconds ago.

Sadly, web browsers do not support direct UDP connections for security reasons, which is a shame because we really like web apps: they are easy to work with, easy to code, and very relevant in today's industry. So I am all for looking for a new solution that fits our needs if the current way seems futile. It needs to stay under the bandwidth limit and be real time (ideally as far under the limit as we can get, both so we are not pushing the limit and so it scales to more cameras), and having control of camera settings like brightness is a big plus. And I would say it needs to be simple and maintainable for future members, as camera feedback is something we need pretty much every year.
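
To make the "latest packet wins" idea concrete, a receiver along these lines drains the socket and only keeps the newest datagram. This is just a sketch; the port and the payload handling are made up for illustration.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

// Sketch of a best-effort UDP receiver that always renders the newest frame
// and intentionally skips anything stale. Port 5805 is a placeholder.
public class LatestFrameReceiver {
    public static void main(String[] args) throws Exception {
        DatagramChannel channel = DatagramChannel.open();
        channel.bind(new InetSocketAddress(5805));
        channel.configureBlocking(false); // never block waiting on a stale packet

        ByteBuffer buffer = ByteBuffer.allocate(65507); // max UDP payload size
        byte[] latest = null;

        while (true) {
            // Drain everything queued on the socket, keeping only the newest datagram.
            while (channel.receive(buffer) != null) {
                buffer.flip();
                latest = new byte[buffer.remaining()];
                buffer.get(latest);
                buffer.clear();
            }
            if (latest != null) {
                // Decode/render the newest frame here; older frames were dropped on purpose.
            }
            Thread.sleep(5); // avoid a pure busy-wait in this sketch
        }
    }
}
```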

If you have a Pi, an SD card, and a USB camera, you can follow the steps in the pi_h264 repo to get it running at home. We have only tested it with Raspbian Stretch, though (this was back in 2019), so you may need to go to the Raspbian archives. Or you can chance it on a newer version and try to get it running, making note of what needs to be done if you run into errors when installing dependencies, and make a PR in the pi_h264 repo. You can edit the config.json to increase FPS and/or resolution. In my testing, I have found the code to be sufficient until we enable the radio firewall, which is disappointing. To compensate for this on the robot, we may have the resolution and FPS lowered, but this may not be helping as much as we think since there would already be lag in the stream anyway.

To specifically answer your HEVC/H.265 question, my initial thought is that it is probably overkill for what we need (a lot of teams still use MJPEG), but if you guys are seriously interested it might be a fun thing to work on. I will leave that up to you guys; just know that the issues we are experiencing are not an H.264 vs. H.265 thing.

@Ernie3 marked this pull request as draft May 13, 2022 23:47
@Ernie3 (Member) commented May 13, 2022

Also, since you said this was not ready for merge, I marked it as a draft. This is a nice feature for when you want people to check out your code (to test it, give input, etc.) but are not ready for it to be merged.

@Ernie3 (Member) commented May 14, 2022

Thinking about the camera stream, perhaps at one of our offseason events we can ask an FTA to look at packet information to determine what the holdup is for our camera stream. It works well until you enable their firewall; then everything is delayed 1-2 seconds. If that did not happen, I think our current solution would work (we would be able to up the FPS and resolution a tad too). Although I am not sure they can even look at such a thing, and it is hard to say if they would fully understand what we are trying to debug; it would depend on the FTA, maybe.

We can also double check in the shop by disabling the firewall, testing it, then enabling it and testing again. We can also try it with the bandwidth limit only and no firewall, just to make sure it is truly the firewall and not a bandwidth issue, though I suspect it is a firewall issue.

@lucasb365 commented May 15, 2022 via email

@Ernie3 (Member) commented May 15, 2022

@lucasb365 We are well under the BW limit, though. But I suppose we have never had one enabled and not the other (firewall vs. BW), which is why I suggested trying it, just to be sure we isolate the issue.
