Mini self-driving car with a Raspberry Pi board. Read the report on this project!
- Line detection
- Traffic sign detection
- Obstacle avoidance
- Neural network algorithm
- Remote control
- Build a complete system to develop and analyse autonomous driving.
- Remote "live" monitoring
- Open dataset for the detection tasks
- 2-wheel platform (with one caster)
- 1 Raspberry Pi with a camera and a USB Wi-Fi dongle
- 1 I2C motor controller board: https://www.robot-electronics.co.uk/htm/md25tech.htm
- 5 I2C ultrasonic range sensors (read sketch below): https://www.robot-electronics.co.uk/htm/srf08tech.html
Note that these parts are quite expensive for a personal project (~$100 each). With some work, I believe it's possible to replace them with low-cost alternatives.
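The SRF08 sonars in the list above are plain I2C devices. Here is a minimal read sketch using the smbus2 library, based on the register map in the SRF08 datasheet linked above; the bus number and the factory-default address are assumptions that may differ with your wiring.

```python
# Minimal sketch: trigger one SRF08 ranging cycle and read the result.
# Register map from the SRF08 datasheet; bus 1 and address 0x70 (the
# factory default, 0xE0 in 8-bit form) are assumptions.
import time
from smbus2 import SMBus

SRF08_ADDR = 0x70   # factory-default 7-bit I2C address
CMD_REG = 0x00      # command register
RANGE_CM = 0x51     # "start ranging, result in centimetres" command
RESULT_HI = 0x02    # high byte of the first echo

def read_range_cm(bus, addr=SRF08_ADDR):
    """Start a ranging cycle and return the first echo distance in cm."""
    bus.write_byte_data(addr, CMD_REG, RANGE_CM)
    time.sleep(0.07)  # a ranging cycle takes up to ~65 ms per the datasheet
    hi = bus.read_byte_data(addr, RESULT_HI)
    lo = bus.read_byte_data(addr, RESULT_HI + 1)
    return (hi << 8) | lo

with SMBus(1) as bus:  # I2C bus 1 on most Raspberry Pi models
    print("distance: %d cm" % read_range_cm(bus))
```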
- ai/ - the code for all the intelligent system
  - preprocessing.py - computer vision pre-processing
  - detection.py - line/sign detection with Haar cascades & Hough lines (sketched below)
  - deepq.py - reinforcement learning using neural networks
- robot/ - the code relative to the robot
  - cloud/ - the computer/server that remotely controls or monitors the robot
    - remote.py - live stream of the car's sensing with detections & remote control using sockets & curses
  - raspberry/ - all the Raspberry Pi code: controls the robot, communicates with the remote server and uses the ai library to perform automatic actions
    - camera.py - camera thread for efficient recording
    - controls.py - remote controls with sockets
    - i2c.py - functions to use I2C (read the sonars, drive the motors)
    - livestream.py - thread to send data over a socket
    - drive.py - car class & implementations of the different drivers (human, AI, logic-based)
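As a rough illustration of what detection.py combines, here is a minimal OpenCV sketch pairing a Haar cascade (for signs) with a probabilistic Hough transform (for lines). The cascade file name and every threshold are placeholders, not the project's tuned values.

```python
# Minimal OpenCV sketch of the two techniques behind detection.py:
# a Haar cascade for sign detection, a Hough transform for lines.
# "signs.xml" is a placeholder for a trained cascade file.
import cv2
import numpy as np

sign_cascade = cv2.CascadeClassifier("signs.xml")  # hypothetical cascade

def detect(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Traffic signs: multi-scale detection with the Haar cascade.
    signs = sign_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Lines: Canny edges followed by a probabilistic Hough transform.
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return signs, lines
```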
The rationale for this structure is that the "AI" code is independent from the robot. This allows it to run both in real time on the robot and offline on a server, which makes it possible to train the neural networks more effectively and to live-stream what the car sees and detects.
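One way to picture that separation (a sketch with illustrative names, not the project's actual classes): the ai side only consumes plain sensor data and returns an action, so the same driver object can be fed live readings on the Pi or replayed records on the server.

```python
# Sketch of the separation described above. All class and method names
# here are illustrative, not the project's actual interface.
from dataclasses import dataclass

@dataclass
class Sensors:
    sonars_cm: list[float]  # readings from the five SRF08 sensors
    frame: object           # latest camera frame (e.g. a numpy array)

@dataclass
class Action:
    left_speed: int         # motor commands for the MD25 board
    right_speed: int

class Driver:
    """Maps sensor data to an action; knows nothing about the hardware."""
    def act(self, sensors: Sensors) -> Action:
        raise NotImplementedError

class LogicDriver(Driver):
    """Toy rule-based driver: stop when an obstacle is close."""
    def act(self, sensors):
        if min(sensors.sonars_cm) < 30:
            return Action(0, 0)    # obstacle ahead: stop
        return Action(100, 100)    # otherwise drive straight

# On the Pi: action = driver.act(read_live_sensors())
# On the server: replay logged Sensors records through the same driver.
```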
In the end, the software can be viewed as a simple loop that performs a number of tasks. The first is to fetch the latest sensor readings. This task can be time-consuming, so it's advisable to use parallel processing and a cache. The second task is to live-stream the car's information; it's purely optional and intended to ease the development process. The third task is to get the driver's actions. The driver can be a human or an autonomous "AI" driver; in the autonomous case, it's important to have enough hardware to support real-time decisions. The driver's actions are then passed through a constraint module, where logic constraints are applied (for example, to avoid collisions with obstacles) and where the car can report that an action wasn't applied for some reason (for example, a broken vehicle). Finally, the data captured by the car, the driver's decisions and the results of the constraint module are all recorded in an on-board database, which regularly pushes records to the central server.
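A hypothetical skeleton of that loop, with each helper object standing in for one of the modules described above:

```python
# Hypothetical skeleton of the main loop described above; every helper
# (sensors, stream, driver, constraints, db, server) stands in for one
# of the modules in the project structure.
import time

def main_loop(sensors, stream, driver, constraints, db, server, period=0.05):
    while True:
        reading = sensors.latest()       # 1. cached / parallel sensor fetch
        stream.send(reading)             # 2. optional live stream for development
        wanted = driver.act(reading)     # 3. human or autonomous "AI" driver
        # 4. apply logic constraints (e.g. obstacle avoidance); the module
        #    reports whether the action was applied and, if not, why
        applied, reason = constraints.filter(reading, wanted)
        db.record(reading, wanted, applied, reason)  # 5. on-board database
        if db.batch_ready():
            server.push(db.pop_batch())  # regularly push records to the server
        time.sleep(period)
```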
- Replace hardware with cheaper (and more powerful?) components
- Add kinematic position estimation
- Upgrade the detection algorithms
- Create or find an open dataset for each detection task
- Create a scoring system for each detection task
- Train the reinforcement learning algorithm