ArucoRoi

A utility to track ArUco marker positions and identities in camera space relative to predefined regions of interest (ROIs).

You can use it to detect and track markers in still frames as well as in live video.

Still-image detection is the recommended mode. Use video detection mainly for debugging your config.json, since its display loop blocks the calling thread. Video detection hot-reloads the config, which makes it a great tool for writing your config with live feedback.

  • If you want to run detection on video yourself, simply call image_detect() on each frame (see the sketch below)
  • Currently, image_detect() re-reads the config file on every call, which costs performance.
    This will be fixed in the future.
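
A minimal sketch of such a video loop, using the Detector API described under "How to use" below. The webcam index and the window handling are assumptions for illustration, not part of ArucoRoi:

import cv2 as cv
from ArucoRoi.detector import Detector

detector = Detector()          # assumes the constructor needs no arguments
stream = cv.VideoCapture(0)    # your camera index

while True:
    ok, img = stream.read()
    if not ok:
        break
    frame, roi_statuses = detector.image_detect(img)  # annotated frame + ROI status dict
    cv.imshow("ArucoRoi", frame)
    if cv.waitKey(1) & 0xFF == ord('q'):              # press q to quit
        break

stream.release()
cv.destroyAllWindows()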

How to use

  • Import Detector from ArucoRoi.detector
  • Instantiate a detector object
  • Call frame, roi_statuses = detector_object.image_detect(img)
    • img is the OpenCV image (a NumPy array) you want to run detection on
    • frame is img with annotations drawn on it: marker IDs, ROIs (with their names), and correctness markings
    • roi_statuses is a dict with
      -> the target marker ID as key
      -> a field roi_name containing the name of the marker's target ROI
      -> a field roi_desc containing the description of the target ROI
      -> a field fulfilled which is True if the target marker is inside the ROI, False if it is outside
      -> a field deviation_x which is the desired marker's X deviation from the center of the ROI
      -> a field deviation_y which is the desired marker's Y deviation from the center of the ROI
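
The steps above as a minimal still-image sketch. The file names are placeholders, and the access to roi_statuses assumes each value is a plain dict with the fields listed above:

import cv2 as cv
from ArucoRoi.detector import Detector

detector = Detector()              # assumes the constructor needs no arguments
img = cv.imread("example.jpg")     # placeholder input image

frame, roi_statuses = detector.image_detect(img)
cv.imwrite("annotated.jpg", frame) # frame is the annotated copy of img

for marker_id, status in roi_statuses.items():
    print(marker_id, status["roi_name"], status["fulfilled"],
          status["deviation_x"], status["deviation_y"])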

Why is my resolution bad?

video_detect:
You should currently only run video_detect for creating configs. If your resolution is bad, I recommend telling OpenCV exactly what you want, e.g. setting a specific capture format and resolution that your camera supports:

import cv2 as cv

stream = cv.VideoCapture(0)  # your camera index
stream.set(cv.CAP_PROP_FOURCC, cv.VideoWriter.fourcc('M', 'J', 'P', 'G'))
stream.set(cv.CAP_PROP_FRAME_WIDTH, 3840)
stream.set(cv.CAP_PROP_FRAME_HEIGHT, 2160)

You have to add these lines to your own capture code yourself (copy-paste and adapt them).

image_detect:
Check whether your input image has a high enough resolution. If not, and you are using OpenCV for capture, you may benefit from the adjustment above.
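
If you are unsure, a quick way to check the resolution of the image you pass in (img is just a NumPy array):

height, width = img.shape[:2]
print(f"input resolution: {width}x{height}")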

How to set up config.json:

Here you define all of your ROIs and their desired markers. Identities and positions of markers that are not referenced in the config are still tracked and logged in the onscreen_markers dict.

{
  "region_marker": [
    {
      "align_id": 620,                // id of marker that ROIs will be in delta to
      "align_name": "torso-center",   // purely for your information
      "rois": [                       // all ROIs attached to the marker
        {
          "reg_name": "V1",           // ROI display name and internal dict key
          "reg_desc": "i'm a circle",
          "reg_shape": "circle",      // available: 'circle', 'rectangle' (rect see below)
          "reg_dX": 250,              // ROI center coords in delta (also in the center for rectangles!)
          "reg_dY": 200,              // to align marker camera space position
          "reg_radius": 100,          // ROI radius (specific to circle rois!)
          "desired_marker_id": 601    // id of marker that should be inside the ROI
        }
      ]
    },
    {
      "align_id": 630,
      "align_name": "left-hand",
      "rois": [
        {
          "reg_name": "V2",
          "reg_desc": "i'm a rectangle",
          "reg_shape": "rectangle",
          "reg_dX": 0,
          "reg_dY": 200,
          "reg_width": 200,           // roi width (specific for rect rois!)
          "reg_height": 150,          // roi height (specific for rect rois!)
          "desired_marker_id": 602
        }
      ]
    }
  ]
}
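
Note that the // comments in the example above are only explanations; JSON itself does not allow comments, so remove them from your real config.json. A quick, generic way to check that your file at least parses (not part of ArucoRoi):

import json

with open("config.json") as f:
    config = json.load(f)   # raises json.JSONDecodeError if the file is malformed

for marker in config["region_marker"]:
    print(marker["align_id"], [roi["reg_name"] for roi in marker["rois"]])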

Dependencies:

  • OpenCV
  • NumPy