The Real Threat of AI

Kai-Fu Lee writing an opinion piece for the New York Times:

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

As someone in the tech industry who has done a lot of work in automation, I think about this often.  Artificial intelligence and machine learning are enabling companies to hire fewer people (or to hire people for more specific roles).  Those who get hired or keep their jobs are doing the work that cannot be easily automated, relying on software tools for tasks that used to be done by people.  A significant portion of those savings benefits shareholders, feeding the phenomenon of “the rich get richer.”

I’m still hopeful that new types of innovative, creative, and well-compensated work will appear, but in the meantime, our society needs to be able to handle the influx of the newly unemployed.  People who lose out because of larger economic forces, entirely out of their control, need to be able to retrain and remake themselves for a new economy.  Instead, we (the US) are cutting social services like healthcare and reducing investment in community colleges and universities.  It seems like we should be doing the opposite.

TrafficFlow – Classifying Data

As a follow-up to my initial TrafficFlow post, I have built some more software to help me classify the dataset I collected over the past few weeks.

TrafficFlow is a project in which I’m developing an algorithm that can “look” at a still image pulled from a traffic camera and determine whether or not traffic is congested.  I am using TensorFlow, a deep learning framework, to build the model that will house this algorithm.

Over the past few weeks, I have collected 4,966 still images from an NCDOT traffic camera.  I wrote a Python script that takes a snapshot, and I cron’d it to run every 4 minutes.  Now that I have all of this data, how can I efficiently classify it?  A few ideas came to mind:

  • A Python script that loaded each image in a picture viewer and asked a question in the terminal.  This worked, but the picture viewer grabbed focus, and I couldn’t close it automatically.  The extra interaction involved made classifying the data this way inefficient.  It also limited me to classifying data on my MacBook Pro only.
  • An AngularJS web app that let me classify images in a desktop web browser.  This was interesting, but I didn’t know a ton of Angular, and this approach also limited me to classifying data on my MacBook Pro only.

I’m an Android developer by day (check out RadioPublic 😉 ).  I figured I’d just build an Android app that would let me classify the data, so I did.  But first, I needed to gather the collected data into a format that the app could easily use.  So I wrote a Python script:
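A minimal sketch of what that script could look like is below; the directory name, file extension, and metadata field names are my assumptions, not necessarily what the real script uses.

```python
import json
import os

IMAGE_DIR = "images"        # assumed location of the collected snapshots
OUTPUT_FILE = "images.json"

def build_index(image_dir):
    """Build a dictionary keyed by filename, with a metadata entry per image."""
    index = {}
    for filename in sorted(os.listdir(image_dir)):
        if not filename.endswith(".jpg"):
            continue
        index[filename] = {
            "filename": filename,
            "classification": None,  # to be filled in later by the Android app
        }
    return index

if __name__ == "__main__":
    # Export the index as JSON so the classification app can consume it.
    if os.path.isdir(IMAGE_DIR):
        with open(OUTPUT_FILE, "w") as f:
            json.dump(build_index(IMAGE_DIR), f, indent=2)
```

Keying the dictionary by filename keeps lookups cheap when the app later writes a classification back for a specific image.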

This script simply reads a list of files from a directory, creates an entry in a dictionary (saving some additional metadata in the process), and exports that object to JSON.

A snippet from the exported data looks like:

Next, I uploaded this JSON file to Firebase.  Firebase is a backend-as-a-service that lets app developers quickly build apps without needing to spin up servers or turn into “devops”.  Best of all, it’s free to get started and use.

Finally, I uploaded 4,966 images to my web server so that my app can access them.

Now on to the app.  It’s nothing special, and it’s particularly ugly, but it works.

It allows me to quickly classify an image as congested (1), not congested (0), or ditch/don’t include (-1).  Once I classify an image, the app saves the result (and my progress) to Firebase, then automatically loads the next one.  This turns the exercise into a single-tap adventure, or rather, a series of 4,966 single taps.

I’ve uploaded the Python script and the Classify Android app to GitHub.  I hope to make my dataset available soon as well.

Now, on to classification.

Hello TrafficFlow

I am interested in machine learning.  I’ve finished most of the Udacity “Intro to Machine Learning” course, and I’ve been thinking of ways to get my feet wet: a practical project that I can start and finish and that will give me some hands-on experience.

Hello TrafficFlow (Traffic + TensorFlow)

I-40 at Wade Avenue in Raleigh, North Carolina

I’ve built an Android app, Traffcams, that lets people view images from traffic cameras.  I’ve done the TensorFlow tutorial that walks through image recognition.  So I’m thinking I can modify that tutorial to tell me whether an image from a traffic camera contains a lot of traffic.  My first step in training a TensorFlow model is collecting the data, so I wrote a Python script that simply saves an image to disk from a given URL.
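A sketch of such a downloader is below.  The camera URL is a placeholder (the real NCDOT URL isn’t reproduced here), and the timestamped-filename scheme is an assumption.

```python
import sys
import time
import urllib.request

# Assumed cron entry (every 4 minutes), if the script lived at ~/snapshot.py:
# */4 * * * * python3 ~/snapshot.py https://example.com/camera.jpg

def save_snapshot(url, output_dir="."):
    """Download the current camera image and save it under a timestamped name."""
    filename = "%s/%d.jpg" % (output_dir, int(time.time()))
    urllib.request.urlretrieve(url, filename)
    return filename

if __name__ == "__main__":
    # Pass the camera URL on the command line so cron can supply it.
    if len(sys.argv) > 1:
        print(save_snapshot(sys.argv[1]))
```

Naming each file by its Unix timestamp keeps the snapshots unique and naturally sorted by capture time.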

I have this script cron’d on an Ubuntu server.  It runs every 4 minutes, saving an image from this camera, which means I’ll save 360 images per day.  I’ll probably throw away the night pictures (sunset to sunrise is about 8 hours), so I’ll acquire about 240 usable pictures per day.  I’m predicting I’ll need about 2,000 to 3,000 images to train a model.  I’ll play it safe and say I’ll need 3,000.  In 12 and a half days, I’ll have enough data to train.

My next step is to manually classify these images as having a lot of traffic (1) or not (0).  Sounds monotonous.

New Android Stuff Part 1 😍

I’m barely into the things that were released or announced at Google I/O 2017, and I’ve already got a list of stuff that I need to watch and review.  It’s really a lot of stuff, and it’s only day 1!

What’s New In Android

After watching the Google I/O Keynote, this is normally the video I watch next.

Kotlin is Officially Supported for Android Development

I’ve been holding off on doing anything major in Kotlin until it was blessed with official support from the Android team.  Well, I’m out of excuses.  Kotlin is an officially supported language for Android development.  Its necessary dependencies and plugins are being integrated into Android Studio, beginning with version 3.0.

Kotlin and Android | Android Developers

New Android Studio Profilers

There are a ton of redesigned profilers for CPU, memory, and network operations in Android Studio 3.0.  I’ll let the pictures do the talking (all taken from Android Developers).

I’m especially pumped about the network profiler!

Android Studio 3.0 Canary 1 | Android Developers

Android O Beta

The next version of the Android O Beta was released today.  If you have a Nexus 5X, 6P, Pixel, Pixel XL, Nexus Player, or Pixel C, you can enroll your device in the beta program.  I’ve been using it for a few hours.  The only issues I’ve seen are that Android Pay doesn’t work (it politely lets you know with a splash screen) and that the Google Play Music playback notification re-appears from time to time.

Android O Developer Preview | Android Developers

Android Architecture Components

The Android team has started putting together new tools and guidelines to help Android developers properly architect their apps to prevent memory leaks, make lifecycle management easier (!), and reduce boilerplate code.

There’s also a new SQLite object mapper from the Android team, called Room.

Screenshot from the Architecture Components Talk


Android Architecture Components | Android Developers

These are just a few of the things that immediately stood out to me as an Android developer.  I’m looking forward to doing a deeper dive into all of it.

Traffcams, now serving Georgia and Washington

Beginning today, Traffcams now has traffic cameras from Washington (WSDOT) and Georgia (GDOT).  Traffcams is a powerful app that puts the traffic cameras around you in your hand.  Traffcams now contains over 3,700 traffic cameras in 3 states.  More locations are on the way.

Get Traffcams free, from the Google Play Store today.
