This repository provides code examples and a tutorial for applying motion blur to images in the spatial and frequency domains, and for restoring the original image from the blurred one, using Python in Google Colab.

Motion blur is a common phenomenon that occurs when there is relative motion between the camera and the scene being captured, and it results in a smeared appearance of moving objects in images. Understanding and modeling motion blur is crucial in many applications, including image restoration and computer vision.

The repository demonstrates how to apply motion blur in both the spatial and frequency domains and how to restore the original image from the blurred image using the inverse Fourier transform. The provided code examples and explanations will help you understand these techniques and implement them in your own projects.
To follow the examples in this repository, you need the following prerequisites:
- Basic knowledge of the Python programming language.
- Familiarity with image processing concepts.
- A Google account to access Google Colab, or any other platform that can open Jupyter notebooks.
The repository includes an example of motion blur simulation in both the spatial and frequency domains, along with image restoration using the inverse Fourier transform. Here are a few highlights:
```python
import cv2
import numpy as np

def motion_blur(img, size=None, angle=None):
    # Build a size x size kernel with a horizontal line of ones through the centre row
    k = np.zeros((size, size), dtype=np.float32)
    k[(size - 1) // 2, :] = np.ones(size, dtype=np.float32)
    # Rotate the line to the requested blur angle
    k = cv2.warpAffine(k, cv2.getRotationMatrix2D((size / 2 - 0.5, size / 2 - 0.5), angle, 1.0), (size, size))
    # Normalize the kernel so it sums to 1 (preserves overall brightness)
    k = k * (1.0 / np.sum(k))
    # Convolve the image with the motion blur kernel
    return cv2.filter2D(img, -1, k)
```
The top left image is our input image; the other images are our outputs for different values of motion size and angle.
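A minimal usage sketch, assuming the `motion_blur` function and imports above (the file name and parameter values here are illustrative):

```python
img = cv2.imread("input.jpg")                    # illustrative file name
out_short = motion_blur(img, size=15, angle=0)   # short horizontal blur
out_long  = motion_blur(img, size=31, angle=45)  # longer diagonal blur
cv2.imwrite("motion_blur_31_45.jpg", out_long)
```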
We convert the image from the spatial domain to the frequency domain using the Fourier transform.
The transformed result is our image in the frequency domain.
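A minimal sketch of this step, assuming `img` is a grayscale image loaded as a NumPy array (variable names are illustrative):

```python
import numpy as np

# 2-D Fourier transform of the image
F = np.fft.fft2(img)

# Centred, log-scaled magnitude spectrum, convenient for visualising
# the image in the frequency domain
spectrum = np.log1p(np.abs(np.fft.fftshift(F)))
```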
This is the motion blur function in the frequency domain:
The degradation function of motion blur in the frequency domain
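For reference, this is the formula implemented by the code below, where T is the exposure time and a and b control the amount of motion along the two axes:

$$H(u,v) = \frac{T}{\pi(ua + vb)}\,\sin\bigl(\pi(ua + vb)\bigr)\,e^{-j\pi(ua + vb)}$$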
Then we need to set the motion blur parameters:

```python
T = 0.5    # exposure time
a = 0      # vertical motion
b = 0.05   # horizontal motion

# Create matrix H (motion blur function H(u,v));
# M and N are the image height and width, and the +1 keeps u, v >= 1
# to avoid division by zero
H = np.zeros((M + 1, N + 1), dtype=np.complex128)

# Fill matrix H
for u in range(1, M + 1):
    for v in range(1, N + 1):
        s = np.pi * (u * a + v * b)
        H[u, v] = (T / s) * np.sin(s) * np.exp(-1j * s)

# Index slicing to remove the +1 that we added above to avoid zero division
H = H[1:, 1:]
```
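The double loop above can be slow for large images; an equivalent vectorized construction (a sketch assuming the same `M`, `N`, `T`, `a`, and `b`) is:

```python
import numpy as np

u = np.arange(1, M + 1).reshape(-1, 1)   # u = 1..M as a column vector
v = np.arange(1, N + 1).reshape(1, -1)   # v = 1..N as a row vector
s = np.pi * (u * a + v * b)              # broadcasts to shape (M, N); nonzero for these parameters
H = (T / s) * np.sin(s) * np.exp(-1j * s)
```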
Then we blur the image in the frequency domain by multiplying its spectrum by H.
This is our motion-blurred image in the frequency domain.
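A sketch of this step, assuming `F` is the spectrum from the earlier sketch and `H` is the degradation function built above (the notebook may instead multiply a centred, fftshifted spectrum):

```python
# Element-wise multiplication applies the motion blur in the frequency domain
G = F * H
```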
We then convert the blurred image back from the frequency domain to the spatial domain using the inverse Fourier transform.
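A sketch of the inverse transform, assuming `G` is the blurred spectrum from the previous step:

```python
import numpy as np

# Inverse 2-D FFT brings the blurred spectrum back to the spatial domain;
# np.abs discards the small imaginary residue left by numerical error
blurred_img = np.abs(np.fft.ifft2(G))
```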
Result:
The image on the left is our input; the image on the right is our output.
Details about the motion blur function that we used to blur the image:
```python
T = 0.5    # exposure
a = 0      # vertical motion
b = 0.05   # horizontal motion

H = np.zeros((M + 1, N + 1), dtype=np.complex128)  # +1 to avoid zero division

# Fill matrix H
for u in range(1, M + 1):
    for v in range(1, N + 1):
        s = np.pi * (u * a + v * b)
        H[u, v] = (T / s) * np.sin(s) * np.exp(-1j * s)

# Index slicing to remove the +1 that we added before to avoid zero division
H = H[1:, 1:]
```
We blurred the image by multiplying its spectrum by the above function in the frequency domain. Now we can restore the image to its original state by dividing the blurred spectrum by the same function (inverse filtering) and applying the inverse Fourier transform.
The top image is our input image; the middle image is our blurred image; the bottom image is our restored image.
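A minimal sketch of the restoration step, assuming `G` is the blurred spectrum and `H` the degradation function from above (illustrative names):

```python
import numpy as np

# Inverse filtering: divide the blurred spectrum by the degradation function,
# then transform back to the spatial domain
F_restored = G / H
restored_img = np.abs(np.fft.ifft2(F_restored))
```

This works cleanly here because the exact same H was used for blurring; with noise or a mismatched H, near-zero values of H would amplify errors, and a regularized approach such as Wiener filtering would be preferable.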
👾 Happy coding! 👾