Automatic camera calibration using color targets
oe-19-19-18164.pdf
1.39 MB • 632 extracted lines
AI
Here is a summary of the key points from the paper:
The paper presents a method for fully automatic self-calibration of color digital cameras using color targets.
Software was developed in Borland C++ Builder to implement the method. The calibration process has 3 main stages:
(a) Detect at least 4 out of 6 color targets on each image to compute approximate exterior orientation parameters. Targets have known 3D coordinates.
(b) Use the approximate parameters to measure image coordinates of remaining target points.
(c) Determine final calibration parameters using self-calibration bundle adjustment.
The HSV color space and a space resection computation are used to detect the color targets and compute exterior orientation. Each target is assigned 1 or 2 color labels based on its hue value.
The method was tested by calibrating 6 different consumer digital cameras. Calibration accuracy ranged from 0.006-0.030 mm in object space and 0.14-0.51 μm in image space.
The proposed method achieved fully automatic calibration for all test cameras. Calibration time ranged from 1.3-2.57 minutes per camera.
In summary, the paper presents a novel technique for fast, fully automatic calibration of digital cameras using color targets, with high accuracy demonstrated on multiple test cameras. The method eliminates the need for coded targets or orientation devices.
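As a rough illustration of the hue-labelling step described above, here is a minimal OpenCV sketch; the hue bins are placeholder assumptions, not the paper's actual thresholds, and the space resection and bundle adjustment stages are not reproduced here.
python
import cv2
import numpy as np

# Hypothetical hue bins (OpenCV hue runs 0-179); the paper's real bin edges differ.
HUE_LABELS = {
    "red":    [(0, 10), (170, 179)],   # red wraps around the hue axis, so it gets 2 ranges
    "yellow": [(20, 35)],
    "green":  [(40, 80)],
    "blue":   [(100, 130)],
}

def label_target(bgr_image, contour):
    """Assign a color label to one detected target blob by its mean hue."""
    mask = np.zeros(bgr_image.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, -1)       # fill the blob
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mean_h = cv2.mean(hsv, mask=mask)[0]                  # mean hue inside the blob
    for name, ranges in HUE_LABELS.items():
        if any(lo <= mean_h <= hi for lo, hi in ranges):
            return name
    return None  # unknown or low-saturation blob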
Pasted content
7.09 KB • 170 extracted lines
I am part of an FRC team, and this year's game piece is a NOTE. A NOTE is an orange foam torus with a 10 in. (~25 cm) inside diameter, 1 ft. 2 in. (~36 cm) outside diameter, and 2 in. (~5 cm) thickness. A NOTE weighs 8.3 ± 0.2 oz (~235.3 ± 6 g).
I tried to detect the NOTE using this program, which I run on a Limelight 3:
The main issues I had during the competition were:
it took a long time to calibrate the cameras
it accidentally detected the red markings on the field and the red bumpers on other robots (I don't want it to detect those)
Can you help me fix that using this paper?
AI
Here are some suggestions for improving your note detection system based on the concepts from the paper:
Use color targets for camera calibration instead of manual methods. Place some red, green, blue etc color dots around the field. Detect these automatically using HSV color ranges to compute exterior orientation like the paper shows. This will speed up calibration.
Use more restrictive HSV thresholds or add additional constraints to prevent false detections of red field markings/bumpers. For example, you could:
Tighten the HSV thresholds to isolate orange color more precisely
Check for a specific aspect ratio and the spacing between inner/outer contours, as the paper shows, since NOTEs have a thick ring shape unlike bumpers/markings (see the sketch after this reply)
Add a classifier like SVM or deep learning to verify detections based on more features
The paper uses a two-step process - first detect color targets for exterior orientation, then use that to precisely locate the remaining targets. You could adopt a similar approach:
First detect the calibration color dots to calibrate
Then use the computed exterior orientation to precisely detect notes based on the calibrated camera parameters.
This should improve both speed and accuracy. You can also explore adding fiducial markers instead of color dots for more robust calibration. Let me know if you would like any clarification or have additional questions!
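Here is a minimal sketch of that ring-shape check using OpenCV's contour hierarchy, assuming the mask already isolates the orange color; the area and aspect-ratio limits are placeholder values to tune, and a partly occluded NOTE may not show its inner hole.
python
import cv2

def find_note_candidates(mask, min_area=400):
    """Keep only contours that look like a torus seen face-on:
    an outer contour with a hole inside it and a roughly 1:1 bounding box."""
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    notes = []
    if hierarchy is None:
        return notes
    hierarchy = hierarchy[0]                    # shape (N, 4): [next, prev, first_child, parent]
    for i, cnt in enumerate(contours):
        area = cv2.contourArea(cnt)
        if area < min_area or hierarchy[i][3] != -1:
            continue                            # skip small blobs and inner (hole) contours
        has_hole = hierarchy[i][2] != -1        # a NOTE seen face-on has an inner contour
        x, y, w, h = cv2.boundingRect(cnt)
        aspect = w / float(h)
        if has_hole and 0.5 < aspect < 2.0:     # loose bounds; the NOTE is round but may be tilted
            notes.append(cnt)
    return notes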
Will it work if I put 3 points that the camera always sees and that don't move relative to the camera and the robot, and tell the camera to always adjust the HSV values based on these 3 points?
AI
Yes, using 3 fixed reference points visible to the camera to dynamically adjust the HSV thresholds is a clever idea and could definitely help improve the robustness of note detection.
Here is how it can work:
Place 3 colorful fiducial markers around the field in positions that will always be visible to the camera during normal operation. For example, on walls/stands near the field.
Before running the note detection algorithm on each frame, first detect these reference markers using fixed HSV ranges and compute their center points.
Calculate the average HSV values within small regions around the detected center points of reference markers. This gives you the updated "ground truth" HSV values corresponding to those colors under current lighting.
Dynamically update the HSV threshold ranges used in your note detection algorithm based on offsets from these marker HSV averages. For example, if the detected hue of a red dot is 5 degrees higher than your preset red hue range, add 5 degrees to both the hue min and max values.
Run note detection using the adjusted HSV thresholds.
This way, changes in lighting and camera response get automatically adjusted before note detection runs.
You can refine this even more by tracking histogram statistics of the marker pixel HSV values over time and adjusting thresholds based on that for greater stability.
The key advantages of this approach are simplicity, efficiency (very fast to detect 3 markers vs calibrating whole image) and dynamic adjustment to current conditions. Let me know if you have any other questions!
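For the histogram/temporal-smoothing refinement mentioned above, here is a minimal sketch that smooths the per-frame marker readings with an exponential moving average; the class name and smoothing factor are assumptions, not part of the original suggestion.
python
import numpy as np

class HsvSmoother:
    """Exponential moving average over per-frame marker HSV readings,
    so one noisy frame does not yank the detection thresholds around."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # 0 = never update, 1 = trust only the newest frame
        self.state = None

    def update(self, hsv_sample):
        sample = np.asarray(hsv_sample, dtype=np.float32)
        if self.state is None:
            self.state = sample
        else:
            self.state = (1.0 - self.alpha) * self.state + self.alpha * sample
        return self.state

Feed each marker's per-frame average HSV through its own smoother and use the smoothed value when shifting the note thresholds.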
Can you implement it into my code?
AI
Here is one way to implement dynamic HSV threshold adjustment using reference markers in your code:
python
# Define reference marker locations
MARKER1_POS = (x1, y1)
MARKER2_POS = (x2, y2)
MARKER3_POS = (x3, y3)

# Define reference HSV values for each marker
MARKER1_HSV = (h1, s1, v1)
MARKER2_HSV = (h2, s2, v2)
MARKER3_HSV = (h3, s3, v3)

# Offset thresholds around reference values
H_THRESHOLD = 10
S_THRESHOLD = 50
V_THRESHOLD = 50

# Run before main note detection
def adjust_hsv(image):
    # Detect markers and get average HSV in a small region around each
    m1_hsv = get_avg_hsv(image, MARKER1_POS)
    m2_hsv = get_avg_hsv(image, MARKER2_POS)
    m3_hsv = get_avg_hsv(image, MARKER3_POS)

    # Update note detection thresholds
    global HSV_LOW_BOUND
    global HSV_HIGH_BOUND
    HSV_LOW_BOUND[0] = max(m1_hsv[0] - H_THRESHOLD, m2_hsv[0] - H_THRESHOLD, m3_hsv[0] - H_THRESHOLD)
    HSV_HIGH_BOUND[0] = min(m1_hsv[0] + H_THRESHOLD, m2_hsv[0] + H_THRESHOLD, m3_hsv[0] + H_THRESHOLD)
    HSV_LOW_BOUND[1] = max(m1_hsv[1] - S_THRESHOLD, m2_hsv[1] - S_THRESHOLD, m3_hsv[1] - S_THRESHOLD)
    HSV_HIGH_BOUND[1] = min(m1_hsv[1] + S_THRESHOLD, m2_hsv[1] + S_THRESHOLD, m3_hsv[1] + S_THRESHOLD)
    HSV_LOW_BOUND[2] = max(m1_hsv[2] - V_THRESHOLD, m2_hsv[2] - V_THRESHOLD, m3_hsv[2] - V_THRESHOLD)
    HSV_HIGH_BOUND[2] = min(m1_hsv[2] + V_THRESHOLD, m2_hsv[2] + V_THRESHOLD, m3_hsv[2] + V_THRESHOLD)

# Main note detection pipeline
def runPipeline(image):
    adjust_hsv(image)
    # Rest of pipeline code
    ...
Let me know if you have any other questions!
What colors should I use?
AI
Here are some good options to choose for the reference marker colors:
Red, Green, Blue - Three primary colors that are easily distinguishable. Good if lighting stays consistent.
Red, Yellow, Blue - Span a wider visible color range so more robust to lighting variations.
Red, Green, White - White marker reflects all wavelengths so helps quantify brightness/saturation changes.
One warm color (Red/Yellow) and two cool colors (Green/Blue/Cyan) - Cover color spectrum better.
I would recommend using Red, Yellow and Blue markers as they cover a wide gamut and also seem to withstand shadows/lighting changes better in testing.
Some additional tips for robust detection:
Use self-illuminated LED markers if possible for consistent brightness
Enclose markers in matte finish acrylic/plastic boxes to diffuse reflections
Place markers oriented perpendicular to major light sources to reduce glare
Pick saturations between 100-200 and value/brightness from 180-240 for a good range
You can experiment with different color combinations and check which gives most consistent HSV readings on your cameras under game lighting conditions. Let me know if you have any other questions!
give me the full code
AI
Here is one way to implement the full code for dynamic HSV threshold adjustment using reference color markers:
python
import cv2
import numpy as np

# Reference marker positions
MARKER1_POS = (10, 10)
MARKER2_POS = (20, 150)
MARKER3_POS = (300, 20)

# Reference marker expected HSV values
MARKER1_HSV = (0, 200, 230)
MARKER2_HSV = (40, 255, 255)
MARKER3_HSV = (120, 150, 210)

# Threshold offsets
H_THRESH = 10
S_THRESH = 30
V_THRESH = 30

# Global HSV bounds for note
# Start with initial guess
HSV_LOW = np.array((0, 100, 180))
HSV_HIGH = np.array((40, 230, 255))

def adjust_hsv(frame):
    # Get average HSV around each marker
    m1_hsv = avg_hsv(frame, MARKER1_POS)
    m2_hsv = avg_hsv(frame, MARKER2_POS)
    m3_hsv = avg_hsv(frame, MARKER3_POS)

    # Update note HSV bounds
    HSV_LOW[0] = np.max((m1_hsv[0] - H_THRESH, m2_hsv[0] - H_THRESH, m3_hsv[0] - H_THRESH))
    HSV_HIGH[0] = np.min((m1_hsv[0] + H_THRESH, m2_hsv[0] + H_THRESH, m3_hsv[0] + H_THRESH))
    HSV_LOW[1] = np.max((m1_hsv[1] - S_THRESH, m2_hsv[1] - S_THRESH, m3_hsv[1] - S_THRESH))
    HSV_HIGH[1] = np.min((m1_hsv[1] + S_THRESH, m2_hsv[1] + S_THRESH, m3_hsv[1] + S_THRESH))
    HSV_LOW[2] = np.max((m1_hsv[2] - V_THRESH, m2_hsv[2] - V_THRESH, m3_hsv[2] - V_THRESH))
    HSV_HIGH[2] = np.min((m1_hsv[2] + V_THRESH, m2_hsv[2] + V_THRESH, m3_hsv[2] + V_THRESH))

def detect_notes(frame):
    adjust_hsv(frame)
    # Rest of note detection pipeline
    # 1. Convert frame to HSV
    # 2. Threshold with updated global HSV bounds
    # 3. Find contours etc.
    return notes_found
Let me know if you need any help integrating this into your pipeline or have any other questions!
"avg_hsv" is not defined
AI
You're right, I missed defining the avg_hsv() function for getting average HSV values from the marker regions. Here is the full code with that function added:
python
import cv2
import numpy as np

# Reference marker positions
MARKER1_POS = (10, 10)
MARKER2_POS = (20, 150)
MARKER3_POS = (300, 20)

# HSV threshold offsets
H_THRESH = 10
S_THRESH = 30
V_THRESH = 30

# Global HSV bounds for note
HSV_LOW = np.array((0, 100, 180))
HSV_HIGH = np.array((40, 230, 255))

def avg_hsv(frame, pos):
    """Get average HSV in a 10x10 box around pos"""
    x, y = pos
    roi = frame[y-5:y+5, x-5:x+5]
    return np.mean(cv2.cvtColor(roi, cv2.COLOR_BGR2HSV), axis=(0, 1))

def adjust_hsv(frame):
    m1_hsv = avg_hsv(frame, MARKER1_POS)
    # Get other markers
    # Update global HSV bounds

def detect_notes(frame):
    adjust_hsv(frame)
    # Rest of note detection pipeline
    return notes_found
The key addition is the avg_hsv() function to find mean HSV in a small region of interest around the marker position.
Let me know if you have any other questions!
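One caveat with avg_hsv() as written: if a marker sits within 5 pixels of the image border, the slice can come back empty and the mean is undefined. A bounds-clamped variant, as a sketch:
python
import cv2
import numpy as np

def avg_hsv_safe(frame, pos, half=5):
    """Average HSV in a box around pos, clamped so the ROI never leaves the frame."""
    x, y = pos
    h, w = frame.shape[:2]
    x0, x1 = max(x - half, 0), min(x + half, w)
    y0, y1 = max(y - half, 0), min(y + half, h)
    roi = frame[y0:y1, x0:x1]
    return np.mean(cv2.cvtColor(roi, cv2.COLOR_BGR2HSV), axis=(0, 1))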
give me the full code
AI
Here is a complete program implementing dynamic HSV threshold adjustment for note detection using reference color markers:
python
import cv2
import numpy as np

# Camera setup
cap = cv2.VideoCapture(0)

# Reference marker positions
MARKER1_POS = (10, 10)
MARKER2_POS = (20, 150)
MARKER3_POS = (300, 20)

# Reference marker expected HSV values
MARKER1_HSV = (0, 200, 230)
MARKER2_HSV = (40, 255, 255)
MARKER3_HSV = (120, 150, 210)

# Threshold offsets
H_THRESH = 10
S_THRESH = 30
V_THRESH = 30

# Global HSV bounds for notes
HSV_LOW = np.array((0, 100, 180))
HSV_HIGH = np.array((40, 230, 255))

def avg_hsv(frame, pos):
    # Average HSV in a 10x10 box around pos
    x, y = pos
    roi = frame[y-5:y+5, x-5:x+5]
    return np.mean(cv2.cvtColor(roi, cv2.COLOR_BGR2HSV), axis=(0, 1))

def adjust_hsv(frame):
    # Get current HSV from markers
    m1_hsv = avg_hsv(frame, MARKER1_POS)
    m2_hsv = avg_hsv(frame, MARKER2_POS)
    m3_hsv = avg_hsv(frame, MARKER3_POS)

    # Update note HSV bounds
    HSV_LOW[0] = np.max((m1_hsv[0] - H_THRESH, m2_hsv[0] - H_THRESH, m3_hsv[0] - H_THRESH))
    HSV_HIGH[0] = np.min((m1_hsv[0] + H_THRESH, m2_hsv[0] + H_THRESH, m3_hsv[0] + H_THRESH))
    HSV_LOW[1] = np.max((m1_hsv[1] - S_THRESH, m2_hsv[1] - S_THRESH, m3_hsv[1] - S_THRESH))
    HSV_HIGH[1] = np.min((m1_hsv[1] + S_THRESH, m2_hsv[1] + S_THRESH, m3_hsv[1] + S_THRESH))
    HSV_LOW[2] = np.max((m1_hsv[2] - V_THRESH, m2_hsv[2] - V_THRESH, m3_hsv[2] - V_THRESH))
    HSV_HIGH[2] = np.min((m1_hsv[2] + V_THRESH, m2_hsv[2] + V_THRESH, m3_hsv[2] + V_THRESH))

def detect_notes(frame):
    adjust_hsv(frame)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Further processing on contours (size / shape filtering) goes here
    notes_found = contours  # placeholder
    return notes_found

while True:
    ret, frame = cap.read()
    if not ret:
        break
    notes = detect_notes(frame)
    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Let me know if you have any other questions!
If I print the markers in these hex codes, will it work?
#e63232
#ffaa00
#55d355
AI
Yes, using reference color markers printed with specific HEX code colors is a great idea and should work very well with this approach!
Here is how you can set up the reference colors using those HEX codes:
python
import cv2
import numpy as np
# Define reference marker colors
MARKER1_COLOR = (230, 50, 50) # HEX #e63232 Red
MARKER2_COLOR = (255, 170, 0) # HEX #ffaa00 Yellow
MARKER3_COLOR = (85, 211, 85) # HEX #55d355 Green
# Convert from BGR to HSV
MARKER1_HSV = cv2.cvtColor(np.uint8([[MARKER1_COLOR]]), cv2.COLOR_BGR2HSV)[0][0]
MARKER2_HSV = cv2.cvtColor(np.uint8([[MARKER2_COLOR]]), cv2.COLOR_BGR2HSV)[0][0]
MARKER3_HSV = cv2.cvtColor(np.uint8([[MARKER3_COLOR]]), cv2.COLOR_BGR2HSV)[0][0]
# Rest of code uses MARKERx_HSV values for thresholds...
Some advantages of using precise color HEX codes:
You know the exact RGB/HSV values expected, for consistent thresholds
Printed colors should match closely to threshold settings
Handles lighting variations well
Just one thing to note is that you may need to re-tune the HSV threshold offsets based on your camera/print quality.
But overall this should make the detection very robust! Let me know if you have any other questions.
What hex codes will work best for me?
AI
Here are some good hex code color options that I would recommend trying for the reference markers:
Bright Red:
#ff0000 or #f00000
Vivid Yellow:
#ffff00 or #ffff19
Bright Green:
#00ff00 or #32cd32
The key things I considered:
Primary colors (red, green, yellow) give good coverage of color space
Saturated vivid shades are more lighting resistant
Values close to 255 help with brightness consistency
Avoid blue hues as they may blend into background
Out of those, I feel #ff0000 red, #ffff19 yellow, and #32cd32 green would work best.
The bright red and yellow handle shadows well. The green is dark enough to not get overexposed/oversaturated easily.
Some tips while using them:
Print on glossy paper for brightness
Face markers perpendicular to major lights
May need to tweak HSV thresholds offsets after testing
Let me know if you need any other suggestions for picking marker colors!
add this to my code
AI
Here is an example of how to integrate those recommended hex code marker colors into your note detection code:
python
import cv2
import numpy as np
# Define reference marker RGB colors
MARKER1_COL = (255, 0, 0) # #ff0000 Bright Red
MARKER2_COL = (255, 255, 25) # #ffff19 Vivid Yellow
MARKER3_COL = (50, 205, 50) # #32cd32 Bright green
# Convert to HSV
MARKER1_HSV = cv2.cvtColor(np.uint8([[MARKER1_COL]]), cv2.COLOR_BGR2HSV)[0][0]
MARKER2_HSV = cv2.cvtColor(np.uint8([[MARKER2_COL]]), cv2.COLOR_BGR2HSV)[0][0]
MARKER3_HSV = cv2.cvtColor(np.uint8([[MARKER3_COL]]), cv2.COLOR_BGR2HSV)[0][0]
# Rest of note detection code
...
def adjust_hsv(frame):
    # Detect markers
    m1_hsv = get_marker1_hsv(frame)
    # m2_hsv, m3_hsv...
    # Update note thresholds based on offsets
    ...

# Detect notes using updated thresholds (inside your main loop)
notes = detect_notes(frame)
The key change is defining the marker colors using bright vivid hex code values and converting them to HSV.
Let me know if you need any help integrating this or have additional questions!
The NOTE has 2 HSV color ranges, (0, 70, 250) to (5, 80, 255) and (170, 90, 250) to (179, 110, 255). Additionally, the code does not detect the NOTE even after I put in the right colors and coordinates of the markers. In OpenCV it's BGR and not RGB.
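The two observations above can be handled directly in OpenCV: combine the two hue ranges with a bitwise OR, and reorder hex/RGB triples into BGR before converting to HSV. Below is a minimal sketch of both points, not the original Limelight pipeline; the very narrow saturation/value bands are copied verbatim from the message above and may themselves need widening before anything is detected.
python
import cv2
import numpy as np

# The two hue ranges quoted above; hue wraps around 0/180, so red/orange needs both.
RANGE1_LOW, RANGE1_HIGH = np.array((0, 70, 250)), np.array((5, 80, 255))
RANGE2_LOW, RANGE2_HIGH = np.array((170, 90, 250)), np.array((179, 110, 255))

def note_mask(frame_bgr):
    """Build a single mask from both hue ranges. The frame comes straight from
    OpenCV, which stores channels as B,G,R - convert with COLOR_BGR2HSV, not RGB."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask1 = cv2.inRange(hsv, RANGE1_LOW, RANGE1_HIGH)
    mask2 = cv2.inRange(hsv, RANGE2_LOW, RANGE2_HIGH)
    return cv2.bitwise_or(mask1, mask2)

def hex_to_hsv(hex_str):
    """Turn a hex code like '#e63232' into an OpenCV HSV triple,
    reversing the channels into B,G,R order before conversion."""
    r, g, b = (int(hex_str.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4))
    bgr = np.uint8([[[b, g, r]]])   # note the B,G,R order
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[0][0]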