// The Comment #9: Aye, Yo…TrafficFlow?

Bone chilling temperatures, but a dope sunset.

The Comment is a weekly digest of the stuff that grabbed my attention or occupied some part of my mind during the past week. Normally, it’ll be one thing that’s really been on my mind, followed by a handful of things that I found interesting. The Comment will be published each Monday at 10:30AM EST.

Thanks for reading.

## What’s up with TrafficFlow?

TrafficFlow was a side project I took on to deepen my understanding of machine learning and TensorFlow with some hands-on experience. I started out with the goal of training a neural network to tell me whether an image from a traffic camera shows traffic congestion. I didn’t think this was an ambitious goal at first, but it turned out to be more challenging than I expected. I started this project in the summer of 2017 and just got around to training a neural network on the collected data. I haven’t reached my goals, but I have a few takeaways.

### Where am I at now?

I have done a first pass at training a neural network with the data I collected and classified. The Keras code I am using is very similar to a tutorial that walks through training a neural network to recognize cats and dogs. The network tells me there’s congestion in every image I run inference (prediction) on, even in some of the classified training and validation data. Something is very wrong.
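For reference, the cats-vs-dogs style of model those tutorials set up looks roughly like this. This is a sketch, not my exact configuration; the input size and layer widths are illustrative:

```python
# Sketch of a small binary CNN classifier in the style of the
# Keras cats-vs-dogs tutorials. Layer sizes are illustrative,
# not the exact configuration I used.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),         # resized RGB camera frames
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # 1 = congestion, 0 = clear
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

A single sigmoid output like this is the standard shape for a two-class problem; if it always predicts one class, the architecture isn’t necessarily wrong, but the data or training setup usually is.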

### Went well – Programming things

Part of the reason TrafficFlow got off to a great start was that I scripted the data collection aspects. I wrote an Android app for the classification stage. Finally, I scripted the data preparation steps. The only manual work was configuring the neural network (more on this in a later article) and using the app to classify the data.

### Not so well – Data is Key!

The largest portion of TrafficFlow was data collection and classification. I set up a script that automatically saved an image from a traffic camera every 3 minutes. The script worked flawlessly. The challenge I immediately ran into was dealing with data from rotating cameras. I wanted to add a few constraints to minimize effort, one being that I would only train a neural network to recognize congestion on one side of a street or highway during the daytime. It was really easy to throw out images captured at night. It wasn’t as easy throwing away data from a rotated or zoomed camera. The cameras never returned to their previous position perfectly. Sometimes a camera came back slightly off-angle or zoomed in (or out), and I had trouble deciding whether to keep that sample or toss it.
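The night captures were easy to throw out because a dark frame is easy to detect mechanically. Here’s a sketch of the idea; the function names and the threshold are illustrative, not taken from my actual script, and the pixels are plain (R, G, B) tuples rather than decoded JPEGs:

```python
# Sketch: flag a frame as "night" when its average luminance falls
# below a threshold. Threshold and names are illustrative.
NIGHT_THRESHOLD = 40  # 0 = pure black, 255 = pure white

def mean_luminance(pixels):
    # Rec. 601 luma approximation per pixel, averaged over the frame.
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

def is_night(pixels, threshold=NIGHT_THRESHOLD):
    return mean_luminance(pixels) < threshold

# A nearly black frame vs. a bright daytime one:
dark = [(10, 10, 12)] * 100
bright = [(180, 190, 200)] * 100
```

The rotated/zoomed frames are the hard case precisely because there’s no cheap statistic like this that separates “same view, slightly shifted” from “different view entirely.”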

Another challenge I experienced was the lack of data. I captured over 9,500 images. This was not enough. Over half of these images were thrown out because it was night or the camera’s perspective changed. When it came time to train, I had ~270 samples showing traffic congestion and ~2,500 samples with no traffic congestion. I estimate that I’d need an order of magnitude more data (~2,700 congestion samples, ~25,000 no-congestion samples) to have a shot at a reasonably trained network.

### Where do I go from here?

I’ll need to really dive into the configuration of my neural network. I re-purposed a configuration from a tutorial that’s used to determine whether a picture has a cat or a dog in it. I have a hunch that I’ll need something more purpose-built. This is why I got into this side project in the first place: to really understand why you’d use one configuration of a neural network over another.

I’ll be uploading the scripts and code I wrote to GitHub sometime this week.

In the meantime, enjoy a time-lapse generated from the collected data.

/* fini */