$400,000 vs. Optics

Justin Collins, writing in ExtraNewsFeed:

To me, the fact that every president does this is the problem. And it’s not really an Obama problem, but a structural problem, that has existed for years, and completely undermines our confidence in our system of government, where it seems like those involved in “public service,” particularly presidents, can walk away extremely wealthy. And thus, have an incentive to serve the interests of the wealthy and the powerful (who can essentially pay them later) while they are in office. Even the good presidents.

I’m of the opinion that this needs to change, but I’ve had a few competing thoughts on President Barack Obama accepting $400,000 from Cantor Fitzgerald to give a speech.  On one hand, accepting lucrative paid speeches and other opportunities is what public servants do when leaving office.  As an African-American, it’s important to me that I leave wealth and a strong legacy to my children because it’s extremely rare for that to happen for African-Americans.  I imagine President Obama wants to do the same.  On the other hand, the optics are absolutely terrible.  Let the record show, I voted for President Obama in 2008 and 2012.  I often agreed with the direction the country was headed under President Obama, as compared to the previous 8 years under President George Bush.  There’s one area where I think President Obama could have been a lot stronger: prosecuting Wall Street for the crimes that led to the disastrous recession of the late 2000s.

I graduated college in December 2007 with my degree in Computer Engineering, accepted an offer with Sony Ericsson, and began working in January 2008.  The economy was melting down at this precise time.  Lawmakers and President George Bush decided that it was better to bail out the financial industry than let the US economy sink into a deep depression.  In the lead-up to the recession, banks were pushing people into subprime mortgages (some using predatory practices) that were being sold to financial institutions.  Subprime mortgages are generally risky investments, but home prices were skyrocketing during this time, so the upside seemed to be there, at least to Wall Street.  That all came tumbling down when people started to default on their mortgages and home prices began to crater.  As a result, a lot of people lost their homes, and because large portions of the economy were capitalized with borrowed money, a lot of people lost their jobs as it became harder for businesses to borrow money.  The morality of a lot of these investments was suspect at best and criminal at worst, in my opinion.

I was laid off from Sony Ericsson in September 2008, along with 450+ other people.  I believe the layoffs were not a direct consequence of the Wall Street meltdown, but product mismanagement at the highest levels of the company.  With that said, I can understand how easy it would be to blame greed and the criminal-like behavior of Wall Street for your tragic circumstances.  Not only did people lose jobs, but also access to healthcare, homes, retirement funds, etc.  Those who lived through those tumultuous few years have probably lost a lot of confidence in Wall Street and large financial institutions (e.g., I do most of my banking at local credit unions now).  When those people, many of whom voted for President Obama, see him accept a $400,000 speaking gig, from Wall Street no less, they feel betrayed.  This betrayal is exacerbated by the fact that Wall Street received a $700 billion bailout and the executives, largely seen as responsible for the collapse, walked away without jail time, under the Obama administration.

I’m sure there are other motivations for criticizing President Obama for accepting the $400,000 speaking gig, but at a time when our elected officials seem disconnected from the reality of everyday Americans and the wealth gap between the top 1% and the rest of us is growing, the optics are absolutely terrible for President Obama (with some collateral damage to the Democratic Party).  As that wealth gap continues to widen, the system where public servants can massively enrich themselves giving speeches should probably come to a halt.

On the other hand, it’s rare for African-Americans to be in a position where they can build enough wealth to pass on to the next generation.  When I see it happen, it’s great because that wealth and legacy can serve as a foundation for generations to come.  It’s life changing.  When I view President Obama’s predicament primarily through this lens, it’s hard not to agree with Trevor Noah of The Daily Show.

*Note: I haven’t really written about my politics.  I have a lot of thoughts that I’ve slowly developed over the past 2.5-3 years.  I have tons of “political discussion fragments” scattered throughout my journal.  Social media seems like the place to express my beliefs, but the toxicity of the general discourse makes that a non-starter for me.  Politics is such a nuanced topic that 140 characters just won’t do it (or a string of 140-character messages, aka a Tweet storm).  I’m especially interested in economic policy, climate change, gerrymandering / voting rights, criminal justice reform, and “Internet” policy.  I hope to write more.

*Note 2: I’m a software engineer so I love writing code.  I rarely write long pieces on non-technical topics.  Excuse my grammar. 😛


Almost 5 years ago, I built an app that put every NC DOT traffic camera in your hand.  At that time, there was no such thing as looking up traffic conditions in Google Maps.  I was also in the early days of being a professional Android developer and used this app as a way to learn the ins and outs of building, marketing, and publishing an app on Google Play.

Today, the app is still used by a handful of users, but I noticed its usage is very seasonal.  People were using NC Traffic Cams (the name at the time) to check not only traffic congestion, but road and travel conditions.  It did not surprise me to see spikes in usage around snow storms, tropical storms, or hurricanes.  Things have stalled in recent years, but I am revisiting this app in hopes of building a really cool app and platform for viewing images from traffic cameras.  The first step of this starts today.  I’m re-launching NC Traffic Cams as, simply, Traffcams.


Traffcams has been completely rebuilt and redesigned from the ground up.  It is built on modern technology and given a more modern design.  Traffcams contains all of the same features people came to love, plus a few more:

  • View cameras around you using your location
  • View cameras on a map
  • Favorite cameras for easy access
  • Search for traffic cameras by city, street, highway, or interstate name
  • Pin your searches for easy access


Traffcams is launching with traffic camera data for North Carolina.  I hope to add support for other states as soon as I can.  You can get Traffcams, for free, on Google Play today.
I have a roadmap of features to implement that are going to be really useful for commuters.  As with all side projects I work on, Traffcams will allow me to learn new patterns and technologies for mobile development and really dive into using machine and deep learning.

Share the Cache

A lot of Android apps make viewing images a core part of their user experience.  Many of these apps use image caching libraries, like Glide, to make image caching easy, robust, and configurable.  Sharing these cached images can be a bit tricky.  A few questions arose when I tried:

  • How do I access files in the cache?
  • Do other apps have access to these files?
  • What are the needed permissions if I need to manually copy the file somewhere else?

Those are just a few of the questions that came up, each with a pretty simple answer.  A first pass at sharing images in my cache involved re-caching the image to a public directory (i.e., the root of the external storage device), generating a URI, and passing that URI to an Intent to share with other apps.  This sounds simple, but it is complicated by the fact that I needed to ask the user for permission to read and write external storage on their device if they were running Android 6.0 Marshmallow or above.  This flow worked, but I was looking for something much simpler for the user and myself, the developer.  Enter the FileProvider.

The FileProvider API

The FileProvider API, added to the Android Support libraries in version 22.0.0, is a ContentProvider-like API that allows URI-specific sharing of files relevant to your application.  It can temporarily enable access (read and/or write) to the file at the URI.  You also do not need to copy the file to a more accessible location on the user’s device.  Setting up the FileProvider is very straightforward.

Setting up a File Provider

Setting up the FileProvider is a three-step process.

First, like we would do with a ContentProvider, we add a <provider> entry to our AndroidManifest.xml file.

<provider
    android:name="android.support.v4.content.FileProvider"
    android:authorities="com.yourdomain.android.fileprovider"
    android:exported="false"
    android:grantUriPermissions="true">
    <meta-data
        android:name="android.support.FILE_PROVIDER_PATHS"
        android:resource="@xml/filepaths" />
</provider>

This provider entry has a <meta-data> element that points to an XML file that defines which paths, within our application’s file structure, we want to expose with our FileProvider.  This is important: you can only share files in the paths contained in this XML file.  I created an XML file in the “res/xml” folder.

<paths xmlns:android="http://schemas.android.com/apk/res/android">
    <external-files-path name="image_files" path="." />
</paths>

In my particular instance, I am caching images to the External Files directory (which can be located on the SD card or on a portion of internal memory simulating an SD card).  Other tags you can use here include <files-path>, <cache-path>, <external-path>, and <external-cache-path>, which map to your app’s internal files directory, its internal cache directory, the root of external storage, and your app’s external cache directory, respectively.

The name attribute is a string that will be used as a URI path component in the URI generated by FileProvider.  The path attribute is the relative path to the folder containing the files you would like to share.  In my case, I’m fine with sharing the root because it only contains cached images, but you should be as granular as possible if you have multiple directories or file types in this directory.

Now that my FileProvider is set up and configured, how do I share files out of my cache?

Sharing Cached Files

The specifics of sharing cached image files are highly dependent on the image caching library used and how it’s configured.  At a high level, the process is:

  1. Cache an image to a location (done by an image caching library) that falls in the location specified in XML configuration file given to the FileProvider.
  2. Get a reference to that cached image using the java.io.File API.
  3. Pass the File reference to FileProvider.getUriForFile() to get a URI you can pass to other apps.
  4. Pack that URI into an Android Intent to start sharing with other apps.

Very straightforward.  Now this is how I did it with Glide.

My first step was to specify a location for Glide to store cached images.  I needed to subclass DiskLruCacheFactory to make this happen (Note: you don’t need to subclass DiskLruCacheFactory if you want to use the Cache directory).

private static class DiskCacheFactory extends DiskLruCacheFactory {

    DiskCacheFactory(final Context context, final String diskCacheName, long diskCacheSize) {
        super(new CacheDirectoryGetter() {
            @Override
            public File getCacheDirectory() {
                // Cache images under the app's External Files directory
                return new File(context.getExternalFilesDir(null), diskCacheName);
            }
        }, (int) diskCacheSize);
    }
}
As you can see from the code snippet, I pass in a reference to the External Files directory.  By default, Glide uses the Cache directory.

I implement a GlideModule so that I can customize caching behavior in my application.  I specify my subclassed DiskLruCacheFactory class in my GlideModule.

public class MyGlideModule implements GlideModule {
    private static final int IMAGE_CACHE_SIZE = 200_000_000; // ~200 MB

    @Override
    public void applyOptions(Context context, GlideBuilder builder) {
        builder.setDiskCache(new DiskCacheFactory(context, ".", IMAGE_CACHE_SIZE));
    }

    @Override
    public void registerComponents(Context context, Glide glide) {
        // no custom components needed
    }
}

Now, I instruct Glide where it can find my GlideModule by adding a <meta-data> tag to the AndroidManifest.xml file, in the <application> element.

<meta-data android:name="com.yourdomain.android.MyGlideModule" android:value="GlideModule"/>

Next, I trigger a manual download into the cache with Glide.

File file = Glide.with(context)
    .load(uri) // uri to the location on the web where the image originates
    .downloadOnly(Target.SIZE_ORIGINAL, Target.SIZE_ORIGINAL)
    .get(); // blocks until the file is cached, so call this off the main thread

This is specific to Glide and worth mentioning.  When Glide caches a file to disk, the file name is composed of a generated key with an integer appended to it.  There is no extension.  This is important because sharing an extension-less image will make it difficult for the apps you are sharing with to determine how the file should be handled (despite the MIME type being sent along with the image in the Intent).

Contents of a Glide image cache folder


I worked around this issue by simply copying the file, appending an extension to the duplicate in the process.  Since I know I am always handling JPEGs, I give these files the “.jpg” extension.  This may not work in cases where you are dealing with different types of images, like GIFs, PNGs, WebP, etc.
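A minimal sketch of that copy-and-rename workaround, using plain java.io (the class and method names here are illustrative, not from the app):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copies an extension-less cached image to a sibling file with a ".jpg"
// extension so receiving apps can tell how the file should be handled.
class CacheFileCopier {
    static File copyWithJpgExtension(File cached) throws IOException {
        File withExtension = new File(cached.getParentFile(), cached.getName() + ".jpg");
        try (InputStream in = new FileInputStream(cached);
             OutputStream out = new FileOutputStream(withExtension)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
        return withExtension;
    }
}
```

You would then hand the duplicate (not the original cache entry) to the FileProvider.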

Finally, I can get a URI from the FileProvider for my cached image by calling FileProvider.getUriForFile().

// be sure to use the authority given to your FileProvider in the AndroidManifest.xml file
String authority = "com.yourdomain.android.fileprovider";
Uri uri = FileProvider.getUriForFile(context, authority, file);
Intent intent = new Intent(Intent.ACTION_SEND);
intent.setType("image/jpeg");
intent.putExtra(Intent.EXTRA_STREAM, uri);
intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION); // temporary read access for the receiving app
context.startActivity(Intent.createChooser(intent, "Share via"));

The Uri generated by the FileProvider looks like:

content://com.yourdomain.android.fileprovider/image_files/{cached-file-name}.jpg
Once the Intent makes its way to the target app, things should look just like you are sharing a picture you’ve just taken.

Sharing an image from Traffcams to Gmail


Diving into Android Things

I’ve tinkered with electronics since my teens.  I went to school and graduated with a Computer Engineering degree with a focus on hardware (embedded systems, ASIC design, etc.).  I got into software after graduation and am now an Android developer at RadioPublic.  When Google announced Android Things in late 2016, I was beyond excited because it gave me a reason to break out my old breadboard, resistors, LEDs, and power regulators.  It also gave me a reason to buy a Raspberry Pi.  With Android Things I’m finally able to leverage my expertise in Android development in a more embedded paradigm.

I’m not going to cover a ton of Android Things fundamentals here because a lot of other really good developers have already done a great job at that:

I’m going to share a project I began working on on Friday, February 10th and finished prototyping on Monday, February 13th.  When exploring something new, it’s important for me to find a practical application for it.  I own a house built in the early 90s.  It could use some home-built tech from 2017.  A superficial problem I and other members of my household have is parking (correctly and in alignment) in the garage.  Either we park too close to the wall and can’t walk around both sides of the car, or we aren’t sure if the car will get caught in the garage door.

Hello CantParkRight

My first Android Things project is an assistive parking device that uses several sensors to help drivers park correctly in the garage.  Think of the signals you see when you enter a carwash.  Normally, there are two or three lights.  When you first enter, the light is green, which instructs you to keep driving forward.  When you have driven far enough, the light turns red to alert you to stop.  I want this in my garage.

Image from Signal Tech


The first step of this is prototyping CantParkRight.

Prototyping the Hardware

A huge advantage that Raspberry Pi-like devices provide is the ability to quickly and cheaply prototype assistive devices like the one I’m building.  The fact that I can officially leverage Android APIs (and down the road, Google APIs) is a big plus.

The supplies I used for my prototype include:

  • Raspberry Pi Model 3 running Android Things Preview 1
  • HC-SR04 Ultrasonic proximity sensor
  • 2 resistors, 10KΩ and 20KΩ
  • 3 LEDs (Red, Yellow, Green)
  • A breadboard
  • Assortment of jumper wire

I had most of my supplies already.  I bought a Raspberry Pi some time ago and recently bought a pack of 5 HC-SR04 ultrasonic sensors from Amazon.  I settled on the HC-SR04 after quite a bit of research.  Here’s how the HC-SR04 works: you send a 10µs (microsecond) pulse to the TRIGGER pin.  Sometime in the next few milliseconds, the HC-SR04 sends a burst of eight 40kHz sound waves.  If an object is in range, the signal will bounce back and be detected by the receiver portion of the sensor.  The HC-SR04 then sends a variable-length pulse to any device attached to the ECHO pin.  The length of this pulse is determined by the distance the signal traveled before returning to the sensor.  The HC-SR04 has a range of around 400cm (~13 feet).  Perfect.  Note: check out the datasheet on the HC-SR04 here.
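The pulse-width-to-distance math can be sketched in plain Java.  The speed of sound here is an approximation (~343 m/s at room temperature), and the ECHO pulse covers the round trip to the object and back, hence the division by 2:

```java
// Converts an HC-SR04 ECHO pulse width (in µs) into a distance (in cm).
// Sound travels roughly 0.0343 cm per microsecond at room temperature.
class Hcsr04Math {
    static final double SPEED_OF_SOUND_CM_PER_US = 0.0343;

    static double pulseWidthToCm(double pulseWidthUs) {
        // The echo covers the round trip, so halve the traveled distance
        return pulseWidthUs * SPEED_OF_SOUND_CM_PER_US / 2.0;
    }
}
```

A pulse of roughly 23,300µs works out to about the sensor’s 400cm maximum range.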

After a lot of experimentation, here is how my circuit is arranged on my breadboard.

CantParkRight prototype schematic


A few hardware gotchas:

  • The accuracy varies greatly between sensors, especially the “knockoffs”.  Out of the pack of 5, some sensors were more sensitive to object movements while others exhibited less variation.
  • The signal sent to the ECHO pin is at 5V, but the GPIO ports on the Raspberry Pi are rated for 3.3V.  You can damage them by applying too high a voltage, so I use the resistors to step the voltage down to 3.3V.
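That step-down works because the 10KΩ and 20KΩ resistors form a textbook voltage divider (assuming the 10KΩ sits between ECHO and the GPIO pin, and the 20KΩ between the GPIO pin and ground).  A quick check of the math:

```java
// Voltage divider: Vout = Vin * R2 / (R1 + R2)
// R1 sits between the ECHO pin and the GPIO pin; R2 between GPIO and ground.
class VoltageDivider {
    static double output(double vin, double r1, double r2) {
        return vin * r2 / (r1 + r2);
    }
}
```

With 5V in, the divider lands at 5 × 20/(10 + 20) ≈ 3.33V, just inside the Pi’s GPIO tolerance.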

Prototyping the Software

The best part of this project was writing the software in Android Studio, deploying it via ADB (over WiFi), and seeing the results play out in front of my eyes.  I based the implementation on:

Over the course of the article, Daniel builds several implementations, some synchronous and some asynchronous using while loops, callbacks, and threads.  I decided I wanted to build upon that, but use RxJava to implement asynchronous handling of sensor data.  I’ve used RxJava in most of the Android apps that I’ve built.  It offers quick and convenient ways to build, reuse, and arrange pieces of logic that leverage the flow of data from one end to the next, basically perfect for CantParkRight.

Disclaimer: I am NOT an RxJava expert.  There are likely ways to do what I did using RxJava in a more efficient manner.

The critical piece is how I go about initiating the TRIGGER and waiting for an ECHO.  My first implementation used an RxJava Observable that essentially wraps a few while loops (check out my repository, then go to the first commit).

The process was:

  • Send the 10µs pulse to the TRIGGER
  • Start a while loop that executes until the ECHO goes high, then record the start time
  • Start a while loop that iterates until the ECHO goes low, then record the end time and calculate the pulse width, which is used to calculate the distance

It worked, sometimes, but often, for reasons I’m still researching, the sensor would stop responding (i.e., the ECHO never went high after a TRIGGER).  The improvement came when I used a GpioCallback.  A GpioCallback allows you to listen for edge triggers (signal going high, signal going low, etc.) asynchronously.  I combined my implementation of a GpioCallback with an RxJava Observable (more specifically, an Emitter).  From what I’ve read, the advantage of the Emitter over a plain Observable (using Observable.create) is that it forces you to specify a backpressure strategy, which is important when reading values pushed from a sensor.  CantParkRight uses the BUFFER BackpressureMode.  Using RxJava allows me to start the distance detection process simply by subscribing to the correct Observable.  Using an Emitter also allows me to write code that unregisters my GpioCallback when I unsubscribe in onDestroy(…).  This prevents future memory leaks.
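A minimal sketch of how the GpioCallback and Emitter can fit together, assuming RxJava 1.x and an already-opened Gpio for the ECHO pin (the class and method names are illustrative; see the repository for the real implementation):

```java
import com.google.android.things.pio.Gpio;
import com.google.android.things.pio.GpioCallback;
import java.io.IOException;
import rx.Emitter;
import rx.Observable;

// Wraps edge notifications from the ECHO pin in an RxJava 1.x Observable.
// Unsubscribing runs the cancellation action, which unregisters the
// callback — this is what prevents leaks when unsubscribing in onDestroy().
class EchoEdges {
    static Observable<Boolean> observe(final Gpio echoGpio) {
        return Observable.fromEmitter(emitter -> {
            final GpioCallback callback = new GpioCallback() {
                @Override
                public boolean onGpioEdge(Gpio gpio) {
                    try {
                        // Emit the pin state: true = rising edge, false = falling
                        emitter.onNext(gpio.getValue());
                    } catch (IOException e) {
                        emitter.onError(e);
                    }
                    return true; // keep listening for future edges
                }
            };
            try {
                echoGpio.setEdgeTriggerType(Gpio.EDGE_BOTH);
                echoGpio.registerGpioCallback(callback);
            } catch (IOException e) {
                emitter.onError(e);
            }
            emitter.setCancellation(() -> echoGpio.unregisterGpioCallback(callback));
        }, Emitter.BackpressureMode.BUFFER);
    }
}
```

Subscribing starts the edge stream; the pulse width is then the time between a rising and falling emission.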

What’s Next

For CantParkRight, I’m working up to building an actual device I can easily mount in my garage.  With the prototype complete, I turn my efforts to making that happen.  

In the meantime, you can check out the source code for CantParkRight on GitHub.  Be sure to follow me on Twitter or (cough) Google+ for updates on CantParkRight.  I intend to post the finished project here in the coming months, but watching the repository is a great way to keep up.