Neural Network
An exploration into the field of neural networks.
I created a neural network capable of navigating an object around a track.
For our specialization course at TGA, I decided to explore neural networks due to my interest in this subject. I have gotten the impression that using neural networks in games is still quite an uncommon practice for many reasons, mainly production time and cost. But it is hard to read about AI without machine learning or neural networks being mentioned somewhere.
I did have some previous knowledge of what a neural network is and how it works in theory, and I saw a good opportunity to find out what goes into creating one in practice. So I gave it a try!
The first network
A neural network works by receiving inputs in the form of numbers and turning them into outputs: each node computes a weighted sum of its inputs, adds a so-called bias, and passes the result through an activation function.
The goal for the first week of the project was to create a network able to take manual input and produce a deterministic output.
I started researching and discovered that what I thought was the only kind of neural network is actually the simplest kind, called a "feed-forward" neural network. With this new knowledge, I decided this was a good place to begin. Feed-forward means that the information only moves forward, from the input nodes through the hidden layer nodes (if any) and on to the output nodes.
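To make that concrete, here is a minimal sketch of a feed-forward pass in Python. The project itself was not necessarily structured like this; the layer sizes, the tanh activation and all the names are assumptions made purely for the illustration.

```python
import math
import random

def feed_forward(inputs, layers):
    """Push the input values forward through each layer.
    `layers` is a list of (weights, biases) tuples, one tuple per layer."""
    values = inputs
    for weights, biases in layers:
        values = [
            # each node: weighted sum of the previous layer + bias, squashed by tanh
            math.tanh(sum(w * v for w, v in zip(node_weights, values)) + bias)
            for node_weights, bias in zip(weights, biases)
        ]
    return values

def random_layer(num_inputs, num_nodes):
    """Random weights and biases in [-1, 1] for one fully connected layer."""
    weights = [[random.uniform(-1, 1) for _ in range(num_inputs)] for _ in range(num_nodes)]
    biases = [random.uniform(-1, 1) for _ in range(num_nodes)]
    return weights, biases

# Example: 6 inputs -> 4 hidden nodes -> 2 outputs (the sizes are just an illustration)
network = [random_layer(6, 4), random_layer(4, 2)]
print(feed_forward([0.5, 0.2, 0.9, 0.1, 0.3, 0.7], network))
```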
Making the world
In the second week, the focus was to create the game world and the car.
The world is quite simple. The only three things in it are walls, see-through checkpoints and cars. The checkpoints sit on a custom collision layer that the cars check for in code, and each car casts 5 raycasts to detect the walls and the distance to them. These distances are used as inputs to the neural network. If a car runs into a wall, it loses its ability to move. For each checkpoint it passes, a score is added.
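As a rough sketch of how the five distance sensors could work, here is a small Python example that casts rays against walls stored as line segments. The function names, the ray spread and the 100-unit maximum range are my own assumptions for the sketch, not details taken from the project.

```python
import math

def ray_segment_distance(origin, angle, seg_a, seg_b, max_dist=100.0):
    """Distance from `origin` along direction `angle` to the wall segment
    (seg_a, seg_b), or max_dist if the ray misses it."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    ax, ay = seg_a
    bx, by = seg_b
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-9:                               # ray parallel to the wall
        return max_dist
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom       # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom       # position along the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return min(t, max_dist)
    return max_dist

def sense_walls(position, heading, walls, spread=math.pi / 2, num_rays=5):
    """Cast `num_rays` rays fanned across `spread` radians around the heading
    and return the closest wall distance seen by each ray."""
    distances = []
    for i in range(num_rays):
        angle = heading - spread / 2 + spread * i / (num_rays - 1)
        distances.append(min(ray_segment_distance(position, angle, a, b)
                             for a, b in walls))
    return distances

# One wall straight ahead of a car at the origin facing along +x
walls = [((5.0, -5.0), (5.0, 5.0))]
print(sense_walls((0.0, 0.0), 0.0, walls))   # the middle ray reports ~5.0
```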
Connecting network and world
During the second and third weeks, I connected the network and the world by using the information gathered from the cars.
The raycast information was sent into the input nodes of the network, and the points gathered by the cars were used to evaluate their performance in a genetic algorithm. A genetic algorithm is a technique that imitates real-life evolution by combining the best-performing agents, in this case neural networks, to evolve the best possible network over successive generations. Genetic algorithms are something I have worked with previously, so I was comfortable using one to train the neural network.
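Below is a minimal sketch of the kind of genetic algorithm step described above, assuming each network is represented as a flat list of weights. Selection keeps the highest-scoring networks, crossover mixes two parents, and mutation nudges a few weights; the exact rates and the elitism are assumptions for the illustration, not the project's actual values.

```python
import random

def crossover(parent_a, parent_b):
    """The child takes each weight from one of the two parents at random."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genome, rate=0.05, strength=0.5):
    """Nudge a small fraction of the weights by a random amount."""
    return [g + random.uniform(-strength, strength) if random.random() < rate else g
            for g in genome]

def next_generation(population, scores, elite_count=2):
    """Breed a new population from the best-scoring networks.
    `population` is a list of flat weight lists, `scores` their checkpoint scores."""
    ranked = [g for _, g in sorted(zip(scores, population),
                                   key=lambda pair: pair[0], reverse=True)]
    new_population = ranked[:elite_count]            # keep the best unchanged
    while len(new_population) < len(population):
        parent_a, parent_b = random.sample(ranked[:len(ranked) // 2], 2)
        new_population.append(mutate(crossover(parent_a, parent_b)))
    return new_population

# Toy usage: 10 "networks" of 8 weights each, with made-up checkpoint scores
population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(10)]
scores = [random.randint(0, 20) for _ in population]
population = next_generation(population, scores)
```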
During my research I found that a method called backpropagation is commonly used to train neural networks by telling the network which outputs are right and wrong. That is not as applicable to this situation, since there is no labelled "correct" steering output to compare against, which is another reason I chose a genetic algorithm as my training method.
Training track
Training
Week 4 consisted of training the network.
With the simple setup of 5 inputs from the raycasts plus an input for the current speed, the network learned to control the little car through 2 outputs, one for the steering angle and one for the speed. The gif shows one generation of the genetic algorithm.
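To show how those 6 inputs and 2 outputs might be wired together each frame, here is a self-contained sketch of one control step. The normalisation, the 30-degree steering lock and the output mapping are assumptions I made for the example, and the stand-in network only exists to make the snippet runnable.

```python
import math
import random

def control_step(ray_distances, current_speed, network, max_ray=100.0, max_speed=10.0):
    """Normalise the 5 ray distances plus the current speed into [0, 1],
    run them through the network, and map the 2 outputs to steering and speed."""
    inputs = [d / max_ray for d in ray_distances] + [current_speed / max_speed]
    steer_out, speed_out = network(inputs)               # both outputs land in [-1, 1]
    steering_angle = steer_out * math.radians(30)        # assumed maximum steering lock
    target_speed = (speed_out + 1.0) / 2.0 * max_speed   # remap [-1, 1] -> [0, max_speed]
    return steering_angle, target_speed

# Stand-in network: random weights with tanh outputs, just to show the plumbing
weights = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(2)]
network = lambda x: [math.tanh(sum(w * v for w, v in zip(row, x))) for row in weights]
print(control_step([40.0, 55.0, 90.0, 52.0, 38.0], 4.2, network))
```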
As my goal was for the network to be able to drive a track it had never seen before, I placed it on just that: a track it had never seen, and it drove it flawlessly.
Unseen track
Conclusion
With my neural network able to drive an arbitrary track, I decided to conclude the project. With more time I would have liked to test whether backpropagation would have been a good training method. I would also have liked to spend more time getting the car to handle more like a real car and to iterate on my neural network around that.
I have enjoyed this project and I have learned a lot about how neural networks work, which, as stated previously, has always been a big interest and curiosity of mine. I wish to be able to do something similar in the future.