I wanted to use this great package to automatically detect when a photo containing a face is on its side or upside down, so that the photo can then be rotated correctly. Unfortunately, the faces in such pictures usually don't seem to be detected at all, as the script below shows. Is there anything that can be done (apart from simply trying out the rotations when no face is detected)? It would probably be ideal to retrain the model with 90-degree rotations and flips as additional augmentations, but unfortunately I don't have access to the training dataset.
```python
import os
from io import BytesIO

import cv2
import numpy as np
import requests
from PIL import Image
from facenet_pytorch import MTCNN

# User agent for Wikipedia download
headers = {'User-Agent': 'Mozilla/5.0'}

# Download image from URL
url = 'https://upload.wikimedia.org/wikipedia/commons/a/a0/2007-08-19_Solveig_Hareide_-_Kalv%C3%B8ya.jpg'
response = requests.get(url, headers=headers)
if response.status_code == 200:
    img_data = response.content
    image = Image.open(BytesIO(img_data))
else:
    raise Exception("Image couldn't be downloaded")

# Create a directory to save images
os.makedirs('rotated_images', exist_ok=True)

# Save original image
image.save('rotated_images/original.jpg')

# Convert image to OpenCV format
image_cv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)

# Define rotations
rotations = {
    'original': image_cv,
    'rotated_90': cv2.rotate(image_cv, cv2.ROTATE_90_CLOCKWISE),
    'rotated_180': cv2.rotate(image_cv, cv2.ROTATE_180),
    'rotated_270': cv2.rotate(image_cv, cv2.ROTATE_90_COUNTERCLOCKWISE)
}

# Save rotated images
for name, img in rotations.items():
    cv2.imwrite(f'rotated_images/{name}.jpg', img)

# Initialize MTCNN
mtcnn = MTCNN(keep_all=True)

def detect_orientation(image_cv2):
    image_rgb = cv2.cvtColor(image_cv2, cv2.COLOR_BGR2RGB)
    boxes, probs, landmarks = mtcnn.detect(image_rgb, landmarks=True)
    if landmarks is not None:
        for landmark in landmarks:
            left_eye = landmark[0]
            right_eye = landmark[1]
            nose = landmark[2]
            left_mouth = landmark[3]
            # right_mouth = landmark[4]

            # Calculate the slope of the eye line
            dx = right_eye[0] - left_eye[0]
            dy = right_eye[1] - left_eye[1]
            angle = np.degrees(np.arctan2(dy, dx))

            # Determine the orientation based on eye line and mouth position
            orientation2 = ""
            if -45 <= angle <= 45:
                if left_mouth[1] > nose[1]:
                    orientation2 = "Upright"
                else:
                    orientation2 = "Upside down"
            elif 45 < angle <= 135:
                orientation2 = "Rotated 90 degrees"
            elif angle > 135 or angle < -135:
                if left_mouth[1] < nose[1]:
                    orientation2 = "Upside down"
                else:
                    orientation2 = "Upright"
            elif -135 <= angle < -45:
                orientation2 = "Rotated -90 degrees"
            return orientation2
    return "No face detected"

# Detect and print the orientation for each image
for name, img in rotations.items():
    orientation = detect_orientation(img)
    print(f"Orientation of {name}: {orientation}")
```
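For reference, here is a minimal sketch of the trial-rotation fallback mentioned above: run the detector on all four 90-degree rotations and keep the one with the highest face probability. The helper is written against a generic `detect` callable rather than MTCNN itself, so the selection logic is clear; `best_rotation` and `fake_detect` are hypothetical names, and `fake_detect` is a toy stand-in for a real detector such as `lambda img: (mtcnn.detect(img)[1] or [])`.

```python
import numpy as np

def best_rotation(image, detect, threshold=0.9):
    """Try all four 90-degree rotations and return the rotation (in
    degrees counterclockwise) whose best face probability is highest,
    or None if no rotation passes the threshold.

    `detect` is any callable that takes an HxWx3 array and returns a
    list of face probabilities (possibly empty)."""
    best_angle, best_prob = None, -1.0
    for k in range(4):  # 0, 90, 180, 270 degrees counterclockwise
        rotated = np.rot90(image, k)
        top = max(detect(rotated), default=-1.0)
        if top > best_prob:
            best_angle, best_prob = k * 90, top
    return best_angle if best_prob >= threshold else None

# Toy stand-in for the detector: "finds" a face only when the marker
# pixel has been rotated into the top-left corner, i.e. the stored
# image is upside down.
def fake_detect(img):
    return [0.99] if img[0, 0, 0] == 255 else []

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[-1, -1, 0] = 255  # marker lands at (0, 0) after two rot90s
print(best_rotation(img, fake_detect))  # 180
```

This at least avoids misclassifying a sideways photo as face-free, at the cost of up to four detector passes per image.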
My requirements.txt: