This lab is based on Udacity's self-driving car simulator, a nice testbed for training an autonomous car using Convolutional Neural Networks.
The simulator, written in Unity, allows you to drive a car around a track and record a video of the front view of the ride as well as the input commands.
You will upload this training data (the images + the input commands) to the clusters and train a CNN to predict the steering-angle command from the front-view image alone. Thus, the input of your CNN is an image and the output is the steering angle.
You will then download the trained model back onto your lab machine so that you can run the car simulation in autonomous mode.
- Download and extract the ZIP of this repo (download link here). Rename this folder `self-driving-lab`.
- If you are on the CADLab machines, start the Anaconda prompt. Note: make sure you select specifically `Anaconda Prompt (Miniconda3)`, inside `All Programs` -> `Anaconda 3 (64 bit)`.
- In your conda prompt, go to the extracted `self-driving-lab` directory and then type:
  `conda env create -f environment.yml`
  Then activate the environment. On Windows you'll do:
  `conda activate 4c16`
  This will take a while, so in the meantime, let's play with the simulator (see step 4).
- Download our modified Udacity self-driving car simulator. On the Lab machines, you will find a copy in `c:\4c16 Car\`. Otherwise you can download a version for your system here:
  On OSX, you'll need to follow instructions for how to open an app from an unidentified developer.
  If you want to install this on your own machine, you will need miniconda or anaconda to use the environment setting, or simply install the dependencies with pip.
  NOTE: if you are using an Apple M-series (M1/M2) machine, you will have to install `tensorflow-macos` and `tensorflow-metal` instead of `tensorflow`.
- Start up the Udacity self-driving simulator, choose the lake scene (left) and press the Training Mode button.
- Then press the `R` key and select the data folder where your training images and CSV will be stored.
- Press `R` again to start recording, and `R` once more to stop recording, then wait for the processing of the video to complete.
- You should do around 1 to 5 laps of the lake track.
- Zip both the `driving_log.csv` file and the `IMG` directory into a zip file that you will name `recordings.zip` (do this by selecting these two items inside the recordings folder and selecting 'create archive', rather than by right-clicking and compressing the folder from the parent). Then upload `recordings.zip` inside the `4c16-labs/data` directory of your Google Drive.
- In the Jupyter notebook, the cell containing the following line will unzip the file to the Colab instance:
  `!unzip -o -qq /content/gdrive/MyDrive/4c16-labs/data/recordings.zip -d /content/recordings`
  Check the Jupyter notebook for instructions.
- Once you have trained your model and saved the weights in `model.h5`, download the weights back to your lab machine into the `self-driving-lab` directory.
- Start up the Udacity self-driving simulator, choose the lake (left) scene and press the Autonomous Mode button.
- In your conda prompt, type `python drive.py model.h5` and watch.
- To stop the simulation: close the simulator window. Check in the prompt window that the output file `car_positions.npz` has been saved. Type `ctrl-c`. It may take a while before `ctrl-c` has an effect.
  Check that the output file is in your directory, then upload `car_positions.npz` to your `lab-07` folder in the Google Drive and add it to your git for assessment.
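Before uploading, you can sanity-check that the file loads cleanly. A minimal sketch: the exact array names inside the archive depend on `drive.py`, so the helper just lists whatever was saved:

```python
import numpy as np

def check_positions(path="car_positions.npz"):
    """Load the output file and report the arrays it contains.
    The exact array names depend on drive.py, so we only list them."""
    data = np.load(path)
    for name in data.files:
        print(name, data[name].shape)
    return data.files
```

If this raises an error, the recording was probably interrupted before the file was fully written; rerun the autonomous drive and let it finish saving.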
One key takeaway from this lab should be that the design and collection of the training data matter more than the network architecture. Think about what the network needs to learn! If your system performs poorly, it is probably because the training set is poor.
Credit: NVIDIA's paper, End to End Learning for Self-Driving Cars, for the inspiration and model structure.
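For reference, the model structure from that paper can be sketched in Keras roughly as follows. The input size (66x200x3) and layer widths follow the paper; the ReLU activations and pixel normalisation are assumptions, and this is not necessarily the exact model used in the lab notebook:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_nvidia_model(input_shape=(66, 200, 3)):
    """Rough sketch of the CNN from 'End to End Learning for
    Self-Driving Cars': 5 conv layers + 4 fully-connected layers,
    regressing a single steering angle."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # normalise pixel values from [0, 255] to [-1, 1]
        layers.Rescaling(scale=1.0 / 127.5, offset=-1.0),
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1),  # steering angle: regression, so no activation
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```

Whatever architecture you use, remember the takeaway above: a modest network trained on good, varied recordings will beat a fancy one trained on a single careless lap.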