CarND-Capstone

Capstone Project solution for the Udacity Self-Driving Car final project.

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Purpose

The main goal of the project is to implement some basic functionality of an autonomous vehicle system by writing ROS nodes. The system architecture is shown in the image below:

system architecture

The project implements the following ROS nodes:

  • Traffic Light Detection Node
  • Waypoint Updater Node
  • DBW Node

Team

The project has been completed by the following team.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use at least the following configuration:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.

  • Follow these instructions to install ROS

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson).

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download the training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  3. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  4. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  5. Confirm that traffic light detection works on real-life images

Implementation of the Waypoint Updater Node

First we implemented the Waypoint Updater node. This node publishes waypoints from the car's current position to some distance ahead. For this, the node needs to know the position of the car and the list of all the waypoints, so it subscribes to the /current_pose and /base_waypoints topics. To take traffic lights into account, a subscription to the /traffic_waypoint topic is also necessary.

	rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
	rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
	rospy.Subscriber('/traffic_waypoint', Int32, self.traffic_cb)
    
	self.final_waypoints_pub = rospy.Publisher('final_waypoints', Lane, queue_size=1)

The waypoints_cb() callback is called when the /base_waypoints topic provides the waypoint list in a message. The received waypoints are stored in a KDTree for fast nearest-neighbor lookup. The pose_cb() callback updates the current car position based on the received message. Through the /traffic_waypoint topic the node receives the index of the waypoint of the stop line where the car needs to stop.
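
As an illustration, here is a minimal sketch of storing the waypoints in a KDTree (via scipy) and finding the closest waypoint ahead of the car; the class and method names are illustrative, not necessarily the node's exact code:

    import numpy as np
    from scipy.spatial import KDTree

    # Hedged sketch: build the tree once from /base_waypoints, then query it
    # with the (x, y) position received on /current_pose.
    class WaypointIndex(object):  # illustrative name
        def __init__(self, waypoints):  # waypoints: styx_msgs/Lane waypoints
            self.waypoints_2d = [[wp.pose.pose.position.x,
                                  wp.pose.pose.position.y] for wp in waypoints]
            self.tree = KDTree(self.waypoints_2d)

        def closest_ahead(self, x, y):
            closest_idx = self.tree.query([x, y], 1)[1]
            closest = np.array(self.waypoints_2d[closest_idx])
            prev = np.array(self.waypoints_2d[closest_idx - 1])
            pos = np.array([x, y])
            # If the closest waypoint lies behind the car, advance to the next one.
            if np.dot(closest - prev, pos - closest) > 0:
                closest_idx = (closest_idx + 1) % len(self.waypoints_2d)
            return closest_idx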

To publish the limited number of waypoints ahead of the car, the publish_waypoints() function is called in the node loop. This function prepares the waypoint list to send. If there is a red light ahead, the decelerate_waypoints() function updates the velocities of the waypoints using a square-root-shaped profile, sketched below.
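
A minimal sketch of the square-root profile, assuming a helper distance_fn() that returns the path distance between two waypoint indices (and 0 at or beyond the stop index); names and the deceleration value are illustrative:

    import math

    MAX_DECEL = 0.5  # m/s^2, an illustrative value

    # Hedged sketch of the deceleration profile: v = sqrt(2 * a * d) falls off
    # as the square root of the remaining distance and reaches 0 at the stop line.
    def decelerate(waypoints, stop_idx, distance_fn):
        result = []
        for i, wp in enumerate(waypoints):
            dist = distance_fn(i, stop_idx)  # assumed path-distance helper
            vel = math.sqrt(2.0 * MAX_DECEL * dist)
            if vel < 1.0:
                vel = 0.0
            # Never exceed the originally planned velocity for this waypoint.
            result.append(min(vel, wp.twist.twist.linear.x))
        return result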

Implementation of the DBW Node

The DBW (drive-by-wire) node governs the physical operation of the vehicle by sending throttle, brake, and steering commands. The node's main input is the /twist_cmd topic; it also listens to the /vehicle/dbw_enabled topic to check whether we have control of the car.

	rospy.Subscriber('/vehicle/dbw_enabled', Bool, self.dbw_enabled_cb)
	rospy.Subscriber('/twist_cmd', TwistStamped, self.twist_cb)
	rospy.Subscriber('/current_velocity', TwistStamped, self.velocity_cb)
	
	self.steer_pub = rospy.Publisher('/vehicle/steering_cmd', SteeringCmd, queue_size=1)
	self.throttle_pub = rospy.Publisher('/vehicle/throttle_cmd', ThrottleCmd, queue_size=1)
	self.brake_pub = rospy.Publisher('/vehicle/brake_cmd', BrakeCmd, queue_size=1)	

Linear and angular velocity values arrive through the /twist_cmd topic. Based on these values, the control() method of the Controller class calculates the necessary throttle, brake, and steering control values, which are then published through the /vehicle/steering_cmd, /vehicle/throttle_cmd, and /vehicle/brake_cmd topics toward the simulator or the car itself.
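
A minimal sketch of that node loop follows (illustrative names; publish() is a hypothetical helper that wraps the three publishers above):

    import rospy

    # Hedged sketch, written as a method of the DBW node class. Commands are
    # only published while /vehicle/dbw_enabled is True, so a safety driver
    # can take over at any time without the controllers fighting the wheel.
    def loop(self):
        rate = rospy.Rate(50)  # DBW nodes for this project typically run at 50 Hz
        while not rospy.is_shutdown():
            if self.twist_cmd is not None and self.current_velocity is not None:
                throttle, brake, steering = self.controller.control(
                    self.twist_cmd.twist.linear.x,          # target linear velocity
                    self.twist_cmd.twist.angular.z,         # target angular velocity
                    self.current_velocity.twist.linear.x,   # measured velocity
                    self.dbw_enabled)
                if self.dbw_enabled:
                    self.publish(throttle, brake, steering)  # hypothetical helper
            rate.sleep()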

Implementation of the Traffic Light Detection Node

The purpose of the Traffic Light Detection node is to warn the car when there is a red traffic light ahead so that the car can stop. The positions of all the traffic lights are known via the /vehicle/traffic_lights topic. Considering the current position of the car (/current_pose), the node sends the index of the waypoint for the nearest upcoming red light's stop line (/traffic_waypoint).

	sub1 = rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
	sub2 = rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
	sub3 = rospy.Subscriber('/vehicle/traffic_lights', TrafficLightArray, self.traffic_cb)

	self.upcoming_red_light_pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)
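
As a sketch of the publishing side, assuming the common convention for this project that -1 means there is no upcoming red light (the method name is illustrative):

    from std_msgs.msg import Int32
    from styx_msgs.msg import TrafficLight

    # Hedged sketch, written as a method of the detection node: publish the
    # stop-line waypoint index for a red light, or -1 otherwise, so the
    # Waypoint Updater knows whether it has to brake.
    def publish_light(self, stop_line_wp_idx, state):
        if state == TrafficLight.RED:
            self.upcoming_red_light_pub.publish(Int32(stop_line_wp_idx))
        else:
            self.upcoming_red_light_pub.publish(Int32(-1))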

The Traffic Light Detection node uses the LightDetector class. This class is implemented in Python and uses several of the image manipulation and shape detection techniques we learned about. To study the behavior of the class, we created a Jupyter notebook that you can find here: "TrafficDocs/Traffic_Light_Classifier.ipynb"

The project uses image blurring (image smoothing) to make circle detection easier. Read more about blurring here.

Original image:

Original Green Light

Blurred green light image:

Blurred Green Light

After blurring, circle detection in Hough space is more effective. Read more about HoughCircles here.

Green found
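
As an illustration of these two steps, here is a minimal OpenCV sketch; the kernel size and Hough parameters are illustrative, not the tuned values from LightDetector:

    import cv2
    import numpy as np

    def find_circles(bgr_image):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        # Gaussian blur smooths noise so HoughCircles finds fewer spurious circles.
        blurred = cv2.GaussianBlur(gray, (9, 9), 2)
        # cv2.HOUGH_GRADIENT assumes OpenCV 3+; OpenCV 2.4 uses cv2.cv.CV_HOUGH_GRADIENT.
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=50,   # upper Canny edge threshold
                                   param2=30,   # accumulator threshold: lower finds more circles
                                   minRadius=3, maxRadius=40)
        if circles is None:
            return []
        # Each entry is (x, y, radius) in pixel coordinates.
        return np.round(circles[0]).astype(int).tolist()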

After finding the circles on the image, the next step is to select the green, red, and yellow ones. The HoughCircles() function provided by OpenCV also returns the coordinates of each circle's center, so we can sample the pixel or pixels around it. The pixel color identification is tuned manually: the LightDetector class has three functions that decide whether a given pixel is red, green, or yellow, each taking a pixel from the image as its parameter. The drawCirclesAndGetImages() function crops the detected circles out of the image and collects them into one list together with their classified colors. This function has been simplified in the node, because image generation is not required for classification. After these steps, the getLightColor() function decides which color to return for the image. If we see red, we always return red. If we see green, we keep looking for a red or a yellow in the picture, because red always takes priority. If we see a yellow, we keep looking the same way as in the green case. A sketch of this priority logic follows the image below.

Found, cropped, and classified
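
A minimal sketch of that priority logic (the real getLightColor() works on images and detected circles rather than on a label list; names here are illustrative):

    # Hedged sketch: given the classified colors of all detected circles,
    # red always wins, then yellow, then green.
    def get_light_color(labels):
        if 'red' in labels:
            return 'red'
        if 'yellow' in labels:
            return 'yellow'
        if 'green' in labels:
            return 'green'
        return 'unknown'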

More information and example codes can be found in the "TrafficDocs/Traffic_Light_Classifier.ipynb" project.

Sample videos

Some videos showing how the car approaches traffic lights:

  • At the beginning of the simulation, first traffic light: sim_video_at_start.avi
  • At some random traffic light: sim_video_stop_at_red_1.avi
  • One more example for a red signal: sim_video_stop_at_red_2.avi
