- https://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html
- https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html#display-video
- https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
- https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/
- https://stackoverflow.com/questions/45070004/iterations-vs-kernel-size-in-morphological-operations-opencv
- https://github.com/informramiz/opencv-face-recognition-python
- https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_api.html#Ptr createLBPHFaceRecognizer(int radius, int neighbors, int grid_x, int grid_y, double threshold)
- https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_api.html
- cron job to start the bot at 8
- Get token and psw from file
- Get Classifier path from home directory
- Save images/videos with format video-user_id.extension
- use Cam_shotter to get video
- Stop/start cam_motion class by flag value
- reorganize prints
- implement a logger
- Add error handling at the origin to not stop the class
- Fix logging, do not print on terminal
- Comment code
- Add requirements.txt
- Forgiveness instead of Permission
- Catch `OpenCV Error: Assertion failed` in Cam_shotter `self.queue[1] = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)`
- fix mp4 video on telegram mobile
- Command to stop bot execution
- Make custom inline keyboard to set flags
- User friendly motion detection notification
- Send caption with image
- Command to reset ground image
- Reset ground image -> stops motion tasks
- Add command to send background image
- Fix send background command
- Fix reset background command
- Surround with try/except every bot_edit_message for the error telegram.error.BadRequest: Message is not modified
- Write help command
- Add help command
- Fix video command
- Use step motor with GPIO to move the camera
- Take a video while the camera performs a 180° rotation
- Integrate movement with background reset
- Notify when movement is detected
- Enable/disable notification
- Send different image
- Send different video
- Detect face in image change
- Draw rectangle around face
- Find something faster than SSIM -> MSE
- Get face photo
- Denoise photo
- Wait after cam is opened
- Add date time to difference video
- Remove rectangles from face recognition
- Add profile face detector
- Fix are_different loop
- Fix date display
- Reset ground image programmatically
- detect movement direction (right,left) (position of areas)
- detect movement direction (incoming, outcoming) (sum of areas)
- detect multiple faces
- Update motion notifier
- Save faces into corresponding dirs
- Train model for every face
- Classify person
- Find a goddamn way to get the classification confidence
- Resize all training/prediction images to the same size
- Save model
- Load/update model with new faces
- Delete faces which have been used to update the recognizer
- Get face confidence
- Delete unknown faces classified with a confidence < 70
- Send photo and recognize faces in image
- Fix add new face
- Try to recognize less blur images (blur index?)
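The last item above hints at a blur index; a common choice (the approach in the pyimagesearch blur-detection link at the top) is the variance of the Laplacian. A minimal NumPy-only sketch standing in for `cv2.Laplacian(gray, cv2.CV_64F).var()` — names and thresholds here are illustrative, not the project's:

```python
import numpy as np

# 3x3 Laplacian kernel, the same one OpenCV uses by default
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def blur_index(gray):
    """Variance of the Laplacian response: low variance means few edges,
    which usually means a blurry face crop worth skipping."""
    h, w = gray.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(resp.var())
```

A sharp checkerboard scores high, a flat frame scores zero; faces below some empirically chosen cutoff would be discarded before recognition.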
- New thread class for image/video/message sending
- Fix while score, exit when no differences are detected anymore
- Save contour list
- Save area list
- Implement profiling function
- Optimize are_different (replace for loop with any())
- Moved from GaussianBlur to blur (4x faster)
- Optimize face recognition
- Optimize face detection in time (detectMultiScale is slow)
- Delete subjects' face images after the model has been trained with them
- Saving the recognizer object creates a yaml file of 17M, while the photos in the Faces directories are 4M... check whether the yaml file increases or stays constant in size
- Parallelize draw_on_frames
- New thread function to get face in video
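The "replace for with any()" item above refers to short-circuiting the difference check. A minimal sketch (function and threshold names are illustrative, not the project's actual signatures):

```python
def are_different_loop(areas, threshold=500):
    # original pattern: an explicit for loop that inspects every contour area
    found = False
    for area in areas:
        if area > threshold:
            found = True
    return found

def are_different_any(areas, threshold=500):
    # any() over a generator short-circuits at the first large area,
    # so the remaining contours are never inspected
    return any(area > threshold for area in areas)
```

Both return the same answer; the win is that `any()` stops as soon as one contour is big enough, which matters when a frame produces many small contours.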
- Add possibility to send info to a specific id in the telegram class
- Updated README
- Add class Face_recognizer which allows saving the images of faces with the corresponding name
- Add flag for face recognition
- Finally got the face recognition confidence
- Auto train for the recognizer and unknown images, with the update method
- Save/Load the recognizer from yaml file
- Removed detectMultiScale and replaced it with multiple face prediction to get the best faces and faster prediction
- Optimized code now it is 27% faster
- Optimized code, now running 25% faster
- Using rsync.. no more debugging push!
- Implemented blur detection for face image
- Implemented face recognition from sent image
- Added flag for green square on movement
**Telegram gif not showing up on mobile**

Using

```python
codec = cv2.VideoWriter_fourcc(*'MP4V')
out = cv2.VideoWriter(video_name, codec, fps, (640, 480))
out.write(frame)
```

generates a .mp4 video which is shown as a gif in Telegram. While the desktop version has no problem viewing it, the mobile version displays a blank file which can be seen only by downloading the .mp4.

While generating the file OpenCV yields the following warning:

```
OpenCV: FFMPEG: tag 0x5634504d/'MP4V' is not supported with codec id 13 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x00000020/' ???'
```
- Changing the resolution from 640,480 to any other resolution makes Telegram recognize the file as a video (not a gif), but it still does not show up in the mobile version
- Changing the file extension to .mp4v does not work
- Changing codec to cv2.VideoWriter_fourcc(*'MPEG') does not show the gif on desktop either
- Using isColor=False does not work
- Changing codec to cv2.VideoWriter_fourcc(*'avc1') and extension to .mov sends a file (not a gif) which can be viewed both by the desktop and the mobile version of Telegram
- Final solution: removed the codec call and used 0x00000021 instead (with .mp4 extension), found here
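For reference, the tag in the FFMPEG warning is just the four characters 'MP4V' packed little-endian into an int. A pure-Python sketch of what `cv2.VideoWriter_fourcc` computes (treating 0x00000021 as a raw codec tag is an assumption based on the linked workaround, not something OpenCV documents):

```python
def fourcc(c1, c2, c3, c4):
    # pack four characters into a little-endian int, like cv2.VideoWriter_fourcc
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

# 'MP4V' packs to the exact tag shown in the FFMPEG warning above
assert fourcc(*'MP4V') == 0x5634504d

# the working value 0x00000021 is not a printable four-char code at all,
# which is why it must be passed as a raw int instead of through fourcc()
```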
**Video difference is laggy**

The video difference is sent when a difference between frames is detected. This detection is time-costly, so writing a frame to the video object happens too slowly, which produces a laggy gif file. GRAY SCALING takes 0.01 seconds, SSIM takes about 0.5 seconds per image, and PSNR takes 0.04 seconds per image.
- Remove sleep(1/self.fps) from while loop...not working
- Remove face detection...not working
- Taking the frames in the Cam_shotter class resolved the issue
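The fix above moves frame grabbing into its own thread, so the movement/writer loop never pays the capture cost. A stdlib-only sketch of that producer/consumer shape (the class name mirrors the notes, but the API is illustrative; a finite stub iterator stands in for a `cv2.VideoCapture` read loop):

```python
import threading
from collections import deque

class CamShotter(threading.Thread):
    """Continuously grabs frames so consumers always read the latest one."""

    def __init__(self, source, maxlen=3):
        super().__init__(daemon=True)
        self.queue = deque(maxlen=maxlen)  # old frames are dropped automatically
        self.source = source
        self.stopped = threading.Event()

    def run(self):
        for frame in self.source:          # stand-in for the cap.read() loop
            if self.stopped.is_set():
                break
            self.queue.append(frame)

    def latest(self):
        return self.queue[-1] if self.queue else None

# usage: the movement loop just calls latest(), paying no capture cost
shotter = CamShotter(source=iter(range(100)))
shotter.start()
shotter.join()                             # the stub source is finite
```

The bounded deque is the key design choice: the grabber never blocks, and the writer always sees a fresh frame instead of a backlog.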
If you are having an error like:

```
VIDEOIO ERROR: V4L: index 0 is not correct!
```

change the cam_idx in Cam_shotter to the correct one for your Raspberry Pi.
Encountered when the cam_movement class first starts to compute the difference between images:

```
python3.5/site-packages/skimage/measure/simple_metrics.py:142: RuntimeWarning: divide by zero encountered in double_scalars
  return 10 * np.log10((data_range ** 2) / err)
```

When the cam_shotter class compl
Using haarcascades/haarcascade_frontalface_alt.xml with CascadeClassifier yields a great number of false positives. Changing to haarcascades/haarcascade_frontalface_alt_tree.xml resolved the issue.
Found an error while performing the absolute difference `frameDelta = cv2.absdiff(grd_truth, gray)` in the cam_movement class:

```
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file /home/pi/InstallationPackages/opencv-3.1.0/modules/core/src/arithm.cpp, line 639
Cv Error: /home/pi/InstallationPackages/opencv-3.1.0/modules/core/src/arithm.cpp:639: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function arithm_op
```

- It seems to be correlated with the number of channels of the images passed.
- When the error occurs the grd_truth shape is (480, 640, 3) while gray is (480, 640); the 3 channels should not be there since the image is being converted to gray scale with `gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)`
- Surround the difference with try/except
- Forgot to call cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY) XD
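The channel-mismatch failure above is easy to reproduce and guard against. A NumPy-only sketch (a channel mean stands in for `cv2.cvtColor`, and `np.abs` for `cv2.absdiff`):

```python
import numpy as np

def to_gray(frame):
    # stand-in for cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY): collapse channels
    return frame.mean(axis=2).astype(np.uint8) if frame.ndim == 3 else frame

grd_truth = np.zeros((480, 640), dtype=np.uint8)   # single-channel ground image
frame = np.full((480, 640, 3), 9, dtype=np.uint8)  # 3-channel BGR frame

# forgetting to_gray(frame) here is exactly the arithm_op size mismatch
gray = to_gray(frame)
assert gray.shape == grd_truth.shape, "shapes must match before absdiff"

frame_delta = np.abs(grd_truth.astype(np.int16) - gray.astype(np.int16)).astype(np.uint8)
```

Checking `shape` before the subtraction turns the cryptic arithm_op assertion into an error message that names the actual problem.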
If you recieve the following message when starting the program with python main.py
:
libv4l2: error setting pixformat: Device or resource busy VIDEOIO ERROR: libv4l unable to ioctl S_FMT libv4l2: error setting pixformat: Device or resource busy libv4l1: error setting pixformat: Device or resource busy VIDEOIO ERROR: libv4l unable to ioctl VIDIOCSPICT
Use killall pyhton
(This will stop every pyhton process currently running)
The Telegram command to get the ground image of the Cam_movement class seems to stop while writing the image to file. It may be connected to the continuous use of the ground_image inside the movement class. It is connected only to the cv2.imwrite() function.
- Implement a get method
- Return a copy of the object
- Send image through cam_movement class
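Of the options above, returning a copy under a lock avoids both the shared-reference problem and races with the movement loop while it mutates the ground image. A stdlib sketch (names are illustrative; a plain list stands in for the ndarray, where you would call `.copy()` instead):

```python
import threading

class GroundImageHolder:
    """Hands out copies of the ground image so callers never share the
    buffer the movement loop keeps mutating (the suspected imwrite hang)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ground = None

    def set_ground(self, img):
        with self._lock:
            self._ground = list(img)       # store our own copy

    def get_ground(self):
        with self._lock:
            return None if self._ground is None else list(self._ground)

holder = GroundImageHolder()
holder.set_ground([1, 2, 3])
snapshot = holder.get_ground()
snapshot[0] = 99                           # mutating the copy is safe
```

Because `get_ground` returns a copy, `cv2.imwrite` can take as long as it likes on the snapshot without the movement loop overwriting it mid-write.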
Get the prediction confidence with the cv2.face.createLBPHFaceRecognizer().predict() method

- Followed this, but no luck
- Solved by using the collector object
| Algorithm | Time taken (seconds) | Suggested range |
|---|---|---|
| GRAY SCALING | 0.01 | |
| SSIM | 0.5 | x < 0.75 |
| PSNR | 0.03 | x < 30 |
| NRMSE | 0.035 | x > 0.3 |
| MSE | 0.025 | x > 500 |
- Change in shadow detected with value 3919
- It does not detect far-away persons
- Switched to PSNR
- Way more sensitive than MSE (in a good way)
- Not so sensitive to shadow changes
- Change detected with score 24, while there was none
- Is triggered when the camera auto-adjusts brightness
- In bright places it becomes very sensitive -> the use of equalizeHist seems to resolve the problem
- No good in poor light conditions
- Using gaussian_weights=True -> time increases to 0.7 seconds
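The MSE and PSNR rows above are simple enough to implement directly. A NumPy sketch that also guards the divide-by-zero the skimage warning earlier complains about (identical frames give err = 0):

```python
import numpy as np

def mse(a, b):
    # mean squared error between two equal-sized grayscale frames
    diff = a.astype(np.float64) - b.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(a, b, data_range=255):
    # peak signal-to-noise ratio in dB; lower means more different
    err = mse(a, b)
    if err == 0:
        return float('inf')   # identical frames: avoid log10 divide-by-zero
    return float(10 * np.log10(data_range ** 2 / err))

# with the table's thresholds: movement if mse > 500, or psnr < 30
```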
- Currently detectMultiScale is the slowest part of the program; it takes up to 30 seconds to detect an image, with a time per call of 0.065
- I'm using scale_factor = 1.4 and min_neight = 3.
- Setting min_size to (20,20) doesn't change anything
- Setting min_size to (50,50) speeds up the computation by 3x
- Setting min_size to (100,100) ... small faces won't be recognized
- Setting min_size to (75,75) ... too big, keeping 50
- detectMultiScale is called twice per frame: once to find the face and again to get the contours

A solution might be saving the list of contours and then using it later. It worked! Now detectMultiScale is computed once for every frame.
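The twice-per-frame problem above generalizes to a small per-frame cache: detect once, then reuse the rectangles both for recognition and for drawing the contours. A sketch with a stub detector that counts its calls (the real detector would be `CascadeClassifier.detectMultiScale`):

```python
class CachedDetector:
    """Memoizes detection results per frame so repeated consumers
    (recognition, contour drawing) trigger only one real detection."""

    def __init__(self, detect_fn):
        self.detect_fn = detect_fn
        self.calls = 0
        self._cache = {}                  # frame id -> detected rectangles

    def detect(self, frame_id, frame):
        if frame_id not in self._cache:   # only the first request runs detection
            self.calls += 1
            self._cache[frame_id] = self.detect_fn(frame)
        return self._cache[frame_id]

def fake_detect(frame):                   # stand-in for detectMultiScale
    return [(10, 10, 50, 50)]

det = CachedDetector(fake_detect)
faces = det.detect(0, "frame0")           # first use: find the face
rects = det.detect(0, "frame0")           # second use: draw contours, no re-run
```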
- Taking up to 12% of total time; per-call time is 0.013. It is called 3.3 times for every frame
A solution could be using cvtColor inside cam_shotter, for every first frame.
- With the previous change the calls of cvtColor went down to 1.5 times per frame. The total time dropped by 25%
- Taking up 10% of time with 0.2 seconds per call.
- Try to use parallel programming
- It is called 1.3 times per frame, and takes up 10% of total time
- Trying to change the kernel as mentioned here
- Using np.ones((11, 11)) as kernel slows the per-call time to 0.124
- Reduced iterations from 5 to 3... now it runs at 0.083 seconds per call
- Changing kernel to np.ones((5, 5)) ... per-call now is 0.044
- dilate is still the first function in total time taken, but now running 10 times faster
- Changing iterations to 1 gives a per-call of 0.034 seconds
- Using kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5)) with one iteration... per-call is 0.022 but the dilation is not enough
- Same kernel as before but 2 iterations... per-call is 0.036 but not enough dilation
- kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (7, 7)) with 2 iterations... still not enough dilation, per-call 0.048
- cv2.getStructuringElement(cv2.MORPH_CROSS, (33, 25)) with 1 iteration... sufficient dilation, per-call 0.091
- cv2.getStructuringElement(cv2.MORPH_RECT, (7, 13)) with 1 iteration... not enough dilation, per-call 0.032
- cv2.getStructuringElement(cv2.MORPH_RECT, (33, 25)) with 1 iteration... good dilation, per-call 0.078
- cv2.getStructuringElement(cv2.MORPH_RECT, (17, 13)) with 1 iteration... good enough, per-call 0.048. Stopping here
- cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (33, 25)) with 1 iteration... sufficient dilation, per-call 0.8, slowest so far
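To see why kernel size and iteration count dominate the per-call cost, here is a deliberately naive NumPy reference for binary dilation with a rectangular kernel (`cv2.dilate` is the optimized equivalent; this is for intuition, not speed):

```python
import numpy as np

def dilate_rect(mask, kw, kh, iterations=1):
    # naive binary dilation: every set pixel stamps a kw x kh rectangle,
    # so cost grows with both kernel area and iteration count
    for _ in range(iterations):
        out = np.zeros_like(mask)
        h, w = mask.shape
        for y, x in zip(*np.nonzero(mask)):
            y0, y1 = max(0, y - kh // 2), min(h, y + kh // 2 + 1)
            x0, x1 = max(0, x - kw // 2), min(w, x + kw // 2 + 1)
            out[y0:y1, x0:x1] = 1
        mask = out
    return mask

# a single set pixel dilated by a 5x5 rect becomes a 5x5 blob
blob = np.zeros((11, 11), dtype=np.uint8)
blob[5, 5] = 1
blob = dilate_rect(blob, 5, 5)
```

This makes the trade-off in the experiments above concrete: one pass with a larger kernel stamps a bigger footprint per pixel, while extra iterations repeat the whole scan, which is why fewer iterations with a well-sized RECT kernel won.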
- Takes the majority of the time to generate the video
A solution may be removing compute_img_difference for the frames; this will take away the green squares surrounding the movement. It worked perfectly, the total time to send a video has been halved!
- Slowest command when not using the square flag, per-call = 0.045
- Trying to change fps from 30 to 20... reduced the per-call to 0.033 and gives the video a slower movement, which is nice