Face detection #49

Open: wants to merge 21 commits into master

Commits (21)
168e147  Implement emotion type updates and emotion events (zayfod, Nov 13, 2020)
2c3d47c  Add support for extracting camera matrix (zayfod, Nov 14, 2020)
8075be4  Add support for extracting saved cube IDs (zayfod, Nov 14, 2020)
66e7307  Add opencv-python as requirement (zayfod, Nov 15, 2020)
25c9850  A simple script showing how to remotely control Cozmo while streaming… (davinellulinvega, Mar 25, 2021)
204a759  Scale up and sharpen the image from the camera before sending it to t… (davinellulinvega, Mar 25, 2021)
b555b60  Add a comment noting that the same sharpening effect could be achieve… (davinellulinvega, Mar 25, 2021)
e7c1fce  Add constants to more easily configure the unsharp mask algorithm. (davinellulinvega, Mar 25, 2021)
2edc17e  Miscellaneous reformatting to conform as much as possible to PEP8. (davinellulinvega, Mar 25, 2021)
e88f7a4  Define a class and a simple example to show how face/hand detection c… (davinellulinvega, Mar 25, 2021)
8a3a33b  Remove some print left there for debug purposes. (davinellulinvega, Mar 26, 2021)
13c7779  Make the path for the tracker's default configuration files absolute (davinellulinvega, Mar 26, 2021)
2b64b69  Move the tracker's default configuration files to their default path. (davinellulinvega, Mar 26, 2021)
5c4d550  Remove the tracker's default configuration files from the examples fo… (davinellulinvega, Mar 26, 2021)
8fa6182  Remove hand detection from the face_detection example. This is not th… (davinellulinvega, Mar 26, 2021)
d10a150  Add multi_tracking to the list of imported modules. (davinellulinvega, Mar 26, 2021)
39fa3ca  Add a TODO comment to use the correct asset path in the final version… (davinellulinvega, Mar 26, 2021)
5c22da1  Add an ignore rule for the pre-commit configuration file. (davinellulinvega, May 23, 2021)
87a8c57  Implement different classes for object detection, tracking, both trac… (davinellulinvega, Jun 11, 2021)
922161b  Import the new classes in the __init__ file. (davinellulinvega, Jun 11, 2021)
760cf30  Modify the face_detection example to use the new classes. (davinellulinvega, Jun 11, 2021)
2 changes: 2 additions & 0 deletions .gitignore

@@ -18,3 +18,5 @@ __pycache__/
 /.mypy_cache/

 /examples/*.png
+
+.pre-commit-config.yaml
5 changes: 3 additions & 2 deletions README.md

@@ -208,9 +208,10 @@ Requirements
 ------------

 - [Python](https://www.python.org/downloads/) 3.6.0 or newer
-- [Pillow](https://github.com/python-pillow/Pillow) 6.0.0 - Python image library
+- [Pillow](https://github.com/python-pillow/Pillow) 6.0.0 or newer - Python image library
 - [FlatBuffers](https://github.com/google/flatbuffers) - serialization library
-- [dpkt](https://github.com/kbandla/dpkt) - TCP/IP packet parsing library
+- [dpkt](https://github.com/kbandla/dpkt) - TCP/IP packet parsing library
+- [OpenCV](https://opencv.org/) 4.0.0 or newer - computer vision library


 Installation
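
The README now lists OpenCV 4.0.0 or newer as a requirement. A quick sanity check that the installed cv2 build satisfies this constraint could look like the following (a minimal sketch, not part of the pull request; it assumes OpenCV was installed via the opencv-python package added as a requirement in commit 66e7307):

import cv2

# The face detection example below assumes OpenCV 4.0.0 or newer.
major = int(cv2.__version__.split(".")[0])
if major < 4:
    raise RuntimeError("OpenCV >= 4.0.0 required, found " + cv2.__version__)
print("Using OpenCV", cv2.__version__)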
5 changes: 3 additions & 2 deletions docs/source/overview.md

@@ -2,7 +2,7 @@
 Overview
 ========

-https://github.com/zayfod/pycozmo
+[https://github.com/zayfod/pycozmo](https://github.com/zayfod/pycozmo)

 `PyCozmo` is a pure-Python communication library, alternative SDK, and application for the
 [Cozmo robot](https://www.digitaldreamlabs.com/pages/cozmo) . It allows controlling a Cozmo robot directly, without
@@ -69,9 +69,10 @@ Requirements
 ------------

 - [Python](https://www.python.org/downloads/) 3.6.0 or newer
-- [Pillow](https://github.com/python-pillow/Pillow) 6.0.0 - Python image library
+- [Pillow](https://github.com/python-pillow/Pillow) 6.0.0 or newer - Python image library
 - [FlatBuffers](https://github.com/google/flatbuffers) - serialization library
 - [dpkt](https://github.com/kbandla/dpkt) - TCP/IP packet parsing library
+- [OpenCV](https://opencv.org/) 4.0.0 or newer - computer vision library


 Installation
136 changes: 136 additions & 0 deletions examples/face_detection.py
@@ -0,0 +1,136 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
from queue import Empty
import math
import time
import numpy as np
import cv2 as cv
import pycozmo as pc
from pycozmo.object_detection_n_tracking import ObjectDetectionNTracking
from pycozmo.object_tracker import TrackType
from pycozmo.object_detector import ObjCat
from pycozmo.display import Display

# Instantiate a face detection and tracking object
TRACKER = ObjectDetectionNTracking(TrackType.MOSSE, skip_frames=5, obj_cats=[ObjCat.HEAD], conf_thres=0.5,
                                   conf_decay_rate=0.995, img_w=480, img_h=480)

# Instantiate a display
DISPLAY = Display(win_name='Tracker')

# Those two constants are used when sharpening the image with the unsharp mask
# algorithm
SHARP_AMOUNT = 0.7
SHARP_GAMMA = 2.2

# Some more constants to store the robot's current status
HEAD_TILT = (pc.MAX_HEAD_ANGLE.radians - pc.MIN_HEAD_ANGLE.radians) * 0.1
HEAD_INC = math.radians(4)
HEAD_LIGHT = False

# If you want the camera to only be in grayscale, set this to False
COLOR_CAMERA = True


# Define a function for handling new frames received by the camera
def on_camera_img(cli, image):
"""
Detect and track any head that appear in the frame using yolov4-tiny.
:param cli:
:param image:
:return: None
"""
global TRACKER, SHARP_AMOUNT, SHARP_GAMMA

# Convert the image to a numpy array
frame = np.array(image)

# Check if the frame is in color
if frame.shape[-1] == 3:
# OpenCV mainly works with BGR formatted images so we need to convert
# the frame
frame = cv.cvtColor(frame, cv.COLOR_RGB2BGR)

# Rescale the image
scaled_frame = cv.resize(frame, None, fx=2, fy=2,
interpolation=cv.INTER_LANCZOS4)

# Try to sharpen the image as much as we can
blurred_frame = cv.GaussianBlur(scaled_frame, (3, 3), 0)
sharp_frame = cv.addWeighted(scaled_frame, 1 + SHARP_AMOUNT,
blurred_frame, -SHARP_AMOUNT,
gamma=SHARP_GAMMA)

# Let the tracker detect the different faces
# (this is where the heavy lifting happens)
TRACKER.process_frame(sharp_frame)


if __name__ == "__main__":
    # Connect to the robot
    with pc.connect() as cli:
        try:
            # Look forward
            cli.set_head_angle(HEAD_TILT)

            # Enable the camera
            cli.enable_camera(enable=True, color=COLOR_CAMERA)

            # Wait a little bit for the image to stabilize
            time.sleep(2)

            # Handle new incoming images
            cli.add_handler(pc.event.EvtNewRawCameraImage, on_camera_img)

            # Loop forever
            while True:
                try:
                    # Get the next frame with the bounding boxes
                    # The call does not block, so the robot can still be
                    # controlled even if no new image is available to display
                    img = TRACKER.get_next_frame(block=False)
                    # Display the image in a dedicated window
                    DISPLAY.step(img)
                except Empty:
                    pass

                # Read the next key event received by OpenCV's main window
                key = cv.waitKey(25)

                # Act accordingly
                if key == ord('q'):
                    # Exit the program
                    break

                elif key in [ord('k'), ord('j')]:
                    if key == ord('k'):
                        # Increase head tilt
                        HEAD_TILT = min(pc.MAX_HEAD_ANGLE.radians,
                                        HEAD_TILT + HEAD_INC)
                    elif key == ord('j'):
                        # Decrease head tilt
                        HEAD_TILT = max(pc.MIN_HEAD_ANGLE.radians,
                                        HEAD_TILT - HEAD_INC)
                    # Set the head angle
                    cli.set_head_angle(HEAD_TILT)

                elif key == ord('l'):
                    # Toggle the head light
                    HEAD_LIGHT = not HEAD_LIGHT
                    # Set the head light
                    cli.set_head_light(enable=HEAD_LIGHT)

                # Display the robot's status
                print("Head angle: {:.2f} degrees, "
                      "Head light enabled: {}".format(math.degrees(HEAD_TILT),
                                                      HEAD_LIGHT), end='\r')

        finally:
            # Set the head down
            cli.set_head_angle(pc.MIN_HEAD_ANGLE.radians)

            # Close the display
            DISPLAY.stop()

            # Stop the face detection and tracking process
            TRACKER.stop()
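
The sharpening step in on_camera_img is a standard unsharp mask: the upscaled frame is blended with a negatively weighted Gaussian blur of itself. Since cv.addWeighted(src1, alpha, src2, beta, gamma) computes src1 * alpha + src2 * beta + gamma (with saturation), the call in the handler is equivalent to the following NumPy sketch (illustrative only, not part of the pull request; it assumes a uint8 BGR frame like the one built in the handler):

import cv2 as cv
import numpy as np

SHARP_AMOUNT = 0.7  # strength of the unsharp mask, as in the example
SHARP_GAMMA = 2.2   # constant brightness offset added by addWeighted


def unsharp_mask(frame: np.ndarray) -> np.ndarray:
    """Rough equivalent of the cv.addWeighted() call used in the example."""
    blurred = cv.GaussianBlur(frame, (3, 3), 0)
    # original + amount * (original - blurred) + gamma, clipped back to uint8
    sharp = (frame.astype(np.float32) * (1 + SHARP_AMOUNT)
             - blurred.astype(np.float32) * SHARP_AMOUNT
             + SHARP_GAMMA)
    return np.clip(sharp, 0, 255).astype(np.uint8)

Raising SHARP_AMOUNT strengthens edge contrast before the frame is handed to the detector, while SHARP_GAMMA only shifts the overall brightness.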