Diving into Android Things

I’ve tinkered with electronics since my teens. I went to school and graduated with a Computer Engineering degree with a focus on hardware (embedded systems, ASIC design, etc.). I drifted into software after graduation and am now an Android developer at RadioPublic. When Google announced Android Things in late 2016, I was beyond excited because it gave me a reason to break out my old breadboard, resistors, LEDs, and power regulators. It also gave me a reason to buy a Raspberry Pi. With Android Things, I’m finally able to leverage my Android development expertise in a more embedded paradigm.

I’m not going to cover a ton of Android Things fundamentals here, because a lot of other really good developers have already done a great job at that.

I’m going to share a project I started on Friday, February 10th and finished prototyping on Monday, February 13th. When exploring something new, it’s important for me to find a practical application for it. I own a house built in the early 90s, and it can use some home-built tech from 2017. A superficial problem I and other members of my household have is parking correctly (and in alignment) in the garage. Either we park too close to the wall and can’t walk around both sides of the car, or we aren’t sure whether the car will get caught in the garage door.

Hello CantParkRight

My first Android Things project is an assistive parking device that uses several sensors to help drivers park correctly in the garage. Think of the signals you see when you enter a carwash. Normally, there are two or three lights. When you first enter, the light is green, which instructs you to keep driving forward. When you have driven far enough, the light turns red to alert you to stop. I want this in my garage.

Image from Signal Tech

 

The first step of this is prototyping CantParkRight.

Prototyping the Hardware

A huge advantage that Raspberry Pi-like devices provide is the ability to quickly and cheaply prototype assistive devices like the one I’m building.  The fact that I can officially leverage Android APIs (and down the road, Google APIs) is a big plus.

The supplies I used for my prototype include:

  • Raspberry Pi 3 Model B running Android Things Developer Preview 1
  • HC-SR04 ultrasonic proximity sensor
  • 2 resistors, 10KΩ and 20KΩ
  • 3 LEDs (Red, Yellow, Green)
  • A breadboard
  • Assortment of jumper wire

I had most of my supplies already. I bought a Raspberry Pi some time ago and recently bought a pack of 5 HC-SR04 ultrasonic sensors from Amazon. I settled on the HC-SR04 after quite a bit of research. Here’s how it works: you send a 10µs (microsecond) pulse to the TRIGGER pin. Within the next few milliseconds, the HC-SR04 emits a burst of eight 40kHz sound waves. If an object is in range, the signal bounces back and is detected by the receiver portion of the sensor. The HC-SR04 then sends a variable-length pulse to any device attached to the ECHO pin. The width of this pulse is determined by the distance the signal traveled before returning to the sensor. The HC-SR04 has a range of around 400cm (~13 feet). Perfect. Note: check out the datasheet on the HC-SR04 here.
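Converting that pulse width into a distance is simple arithmetic: sound travels at roughly 343 m/s at room temperature, and the pulse covers the round trip out to the object and back, so you divide by two. A quick sketch of the conversion (the method name is mine):

/**
 * Converts an HC-SR04 ECHO pulse width to a distance in centimeters.
 * Sound travels at ~34,300 cm/s, and the pulse covers the round trip
 * to the object and back, so we divide by 2.
 */
public static double pulseWidthToCentimeters(long pulseWidthMicros) {
    double pulseWidthSeconds = pulseWidthMicros / 1_000_000.0;
    return (pulseWidthSeconds * 34_300.0) / 2.0;
}

This works out to the datasheet’s rule of thumb: pulse width in microseconds divided by 58 gives centimeters.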

After a lot of experimentation, here is how my circuit is arranged on my breadboard.

CantParkRight prototype schematic

 

A few hardware gotchas:

  • Accuracy varies greatly between sensors, especially the “knockoffs”. Out of the pack of 5, some sensors were more sensitive to object movements, while others exhibited less variation.
  • The signal sent to the ECHO pin is at 5V, but the GPIO ports on the Raspberry Pi are rated for 3.3V. You can damage them by feeding in too high a voltage, so I use the two resistors as a voltage divider to step the ECHO signal down to 3.3V (quick math below).
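For reference, here is the divider math with the two resistors listed above (ECHO feeds the 10KΩ resistor, the GPIO pin taps the midpoint, and the 20KΩ resistor runs to ground):

Vout = Vin × R2 / (R1 + R2)
     = 5V × 20KΩ / (10KΩ + 20KΩ)
     ≈ 3.33V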

Prototyping the Software

The best part of this project was writing the software in Android Studio, deploying it via ADB (over WiFi), and seeing the results play out in front of my eyes. I based the implementation on an article by Daniel.

Over the course of his article, Daniel builds several implementations, some synchronous and some asynchronous, using while loops, callbacks, and threads. I decided I wanted to build upon that, but use RxJava to handle the sensor data asynchronously. I’ve used RxJava in most of the Android apps that I’ve built. It offers quick and convenient ways to build, reuse, and arrange pieces of logic that operate on a flow of data from one end to the other, which is basically perfect for CantParkRight.

Disclaimer: I am NOT an RxJava expert.  There are likely ways to do what I did using RxJava in a more efficient manner.

The critical piece is how I initiate the TRIGGER and wait for an ECHO. My first implementation used an RxJava Observable that essentially wraps a few while loops (check out my repository, then go to the first commit).

The process was:

  • Send the 10µs pulse to the TRIGGER
  • Run a while loop until the ECHO goes high, then record the start time
  • Run a while loop until the ECHO goes low, then record the end time and calculate the pulse width, which is used to calculate the distance (sketched in code below)
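Here’s a simplified sketch of that polling approach, using the Android Things Peripheral I/O API (pin names are illustrative, service is a PeripheralManagerService, and IOException handling is omitted):

Gpio trigger = service.openGpio("BCM23");
trigger.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW);
Gpio echo = service.openGpio("BCM24");
echo.setDirection(Gpio.DIRECTION_IN);

// 1. Send the ~10µs pulse to TRIGGER; Thread.sleep is far too
// coarse for microseconds, so busy-wait on the clock
trigger.setValue(true);
long pulseEnd = System.nanoTime() + 10_000;
while (System.nanoTime() < pulseEnd) { }
trigger.setValue(false);

// 2. Loop until ECHO goes high, record the start time
while (!echo.getValue()) { }
long startTime = System.nanoTime();

// 3. Loop until ECHO goes low, then compute the pulse width
while (echo.getValue()) { }
long pulseWidthMicros = (System.nanoTime() - startTime) / 1_000;
double distanceCm = pulseWidthToCentimeters(pulseWidthMicros);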

It worked, sometimes, but often, for reasons I’m still researching, the sensor would stop responding (i.e. the ECHO never went high after a TRIGGER). The improvement came when I used a GpioCallback. A GpioCallback allows you to listen for edge triggers (signal going high, signal going low, etc.) asynchronously. I combined my implementation of a GpioCallback with an RxJava Observable (more specifically, an Emitter). From what I’ve read, the advantage of the Emitter over a plain Observable (using Observable.create) is that it forces you to specify a backpressure strategy, which is important when reading values pushed from a sensor. CantParkRight uses the BUFFER BackpressureMode. Using RxJava lets me start the distance detection process simply by subscribing to the correct Observable. Using an Emitter also lets me write code to unregister my GpioCallback when I unsubscribe in onDestroy(…). This prevents memory leaks.
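Here’s the shape of that approach, trimmed down (the full version is in the repository; the field names are illustrative):

Observable<Long> pulseWidths = Observable.fromEmitter(emitter -> {
    final long[] startTime = new long[1];

    GpioCallback callback = new GpioCallback() {
        @Override
        public boolean onGpioEdge(Gpio gpio) {
            try {
                if (gpio.getValue()) {
                    // rising edge: the echo pulse just started
                    startTime[0] = System.nanoTime();
                } else {
                    // falling edge: emit the pulse width in microseconds
                    emitter.onNext((System.nanoTime() - startTime[0]) / 1_000);
                }
            } catch (IOException e) {
                emitter.onError(e);
            }
            return true; // keep listening for future edges
        }
    };

    try {
        echo.setEdgeTriggerType(Gpio.EDGE_BOTH);
        echo.registerGpioCallback(callback);
    } catch (IOException e) {
        emitter.onError(e);
        return;
    }

    // runs when the subscriber unsubscribes (e.g. in onDestroy(...)),
    // unregistering the callback so it can't leak
    emitter.setCancellation(() -> echo.unregisterGpioCallback(callback));
}, Emitter.BackpressureMode.BUFFER);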

What’s Next

For CantParkRight, I’m working up to building an actual device I can easily mount in my garage. With the prototype complete, I’m turning my efforts to making that happen.

In the meantime, you can check out the source code for CantParkRight on GitHub. Be sure to follow me on Twitter or (cough) Google+ for updates on CantParkRight. I intend to post the finished project here in the coming months, but watching the repository is a great way to keep up.

Audio Playback on Android

Playing music, podcasts, and other audio is one of the most common smartphone activities in 2016. Most of the time, audio plays in the background while we are driving, cleaning, working out, or cooking. Architecting your application to support background audio playback is standard fare, whether you are using the standard Android MediaPlayer API or a library like ExoPlayer.

I want to briefly walk through how I architected PremoFM, an open source podcast player, to play audio in the background using ExoPlayer. It’s not perfect, but it’s a good starting point for the transition to ExoPlayer 2. If you want to learn more about ExoPlayer 2, check out my previous post, Exploring ExoPlayer 2.

The Architecture

To play audio in the background (or do anything in the background), the process that manages playback should be based on the Service class. Services on Android allow background work to be done without a user interface in the foreground. Naturally, I based the background audio playback of PremoFM on a Service, the PodcastPlayerService. It does a lot: it manages audio playback, updates the database, listens for events like a headphone disconnection, and manages the persistent notification. Initially, most of the code that directly managed the ExoPlayer was also embedded in this service. This led to a bloated class and a highly coupled design. I re-architected things when I added Google Cast support by creating a generic MediaPlayer abstract class.
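Stripped way down, the skeleton of that service looks something like this (a sketch, not PremoFM’s actual code; the notification helper is hypothetical):

public class PodcastPlayerService extends Service {

    private static final int NOTIFICATION_ID = 1;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // promote to a foreground service so Android keeps the process
        // (and playback) alive while the UI is in the background
        startForeground(NOTIFICATION_ID, buildPlaybackNotification());
        // handle the play/pause/seek command carried by the intent
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // a started (not bound) service in this sketch
    }

    @Override
    public void onDestroy() {
        // release the player, unregister receivers, etc.
        super.onDestroy();
    }

    private Notification buildPlaybackNotification() {
        // build the persistent notification with playback controls
        return new Notification.Builder(this).build();
    }
}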

The abstract class provided a common set of methods for interacting with a media player: playing a media file, fast forward & rewind, getting playback state information, and changing the playback speed. All I needed to do was extend my MediaPlayer abstract class with an ExoPlayer-backed implementation. This resulted in all of my ExoPlayer code living in one class, LocalMediaPlayer.
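In sketch form (the method names here are approximations, not PremoFM’s exact signatures):

// The generic contract the rest of the app codes against
public abstract class MediaPlayer {

    /** Begins playback of the media at the given URI. */
    public abstract void play(Uri mediaUri);

    public abstract void pause();

    /** Skips forward or backward by the given number of milliseconds. */
    public abstract void fastForward(long millis);
    public abstract void rewind(long millis);

    /** Returns the current playback state (playing, paused, buffering, etc.). */
    public abstract int getPlaybackState();

    public abstract void setPlaybackSpeed(float speed);
}

LocalMediaPlayer extends MediaPlayer and is the only class in the app that touches ExoPlayer; the Google Cast implementation is just another subclass.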

This will make my upgrade to ExoPlayer 2 significantly easier than if I had kept the previous architecture. All of the code that needs to change lives in one place. In my next article, I will get into the nitty-gritty of the migration.

Feel free to take a swing at it before I do. Check out the source code for PremoFM from GitHub and hack away.

Follow me on Twitter or visit my website for more Android Development related articles like this.

Exploring ExoPlayer 2

ExoPlayer is an extensible, application-level media player for Android apps. It’s an alternative to the high-level Android MediaPlayer API. MediaPlayer is built on several low-level media APIs, like AudioTrack and MediaDRM. Developers can also use these low-level APIs to build their own media player with its own custom behavior. ExoPlayer is built on the same low-level APIs and has the additional benefit of being open source. You don’t need to build your own media player from scratch to get the behavior you need. You can extend ExoPlayer instead.

ExoPlayer was created and is maintained by Google. Out of the box, it can play a wide range of audio and video formats such as:

  • MP3
  • MP4
  • WAV
  • MKV
  • MPEG-TS
  • Ogg

Remember, ExoPlayer is open source, so with some extension it can decode and play any format, as long as you build the capability.

Just a Few ExoPlayer Basics & Components

ExoPlayer component diagram

Source: ExoPlayer Documentation on GitHub

ExoPlayer

The ExoPlayer class is the actual media player. It depends on a few other components for media loading, buffering, decoding, and track selection. When all of the required components are configured, your app will interact with the ExoPlayer class to control the playback of your media. You can register listeners with ExoPlayer to be notified of certain events like buffering or the conclusion of a track.
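Wiring one up in ExoPlayer 2.0 looks roughly like this (constructors have shifted between 2.x releases, so treat this as a sketch; context, eventListener, and mediaSource are placeholders):

// Assemble a player from the default components
Handler mainHandler = new Handler();
TrackSelector trackSelector = new DefaultTrackSelector(mainHandler);
LoadControl loadControl = new DefaultLoadControl();
SimpleExoPlayer player =
        ExoPlayerFactory.newSimpleInstance(context, trackSelector, loadControl);

// Register for events like buffering or the end of a track
player.addListener(eventListener); // your ExoPlayer.EventListener

// Hand the player a MediaSource (see below) and start playback
player.prepare(mediaSource);
player.setPlayWhenReady(true);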

MediaSource

The MediaSource class is charged with controlling what media will be played and how it will be loaded, and it is used directly by the ExoPlayer class. MediaSource enables a ton of different behaviors. For example, you can merge multiple MediaSource instances to show video along with captions, or you can use multiple MediaSource instances to create playlists where the transitions between sources are seamless (gapless).

There are several prebuilt MediaSource classes available out of the box in ExoPlayer to support many common use cases like playing normal media files or streaming DASH content from a remote server. Of course, you can implement your own to support your application’s use case.
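A few examples, using the 2.0-era classes (the URI and the source variables are placeholders):

// A single media file or stream, using extractor-based playback
DataSource.Factory dataSourceFactory =
        new DefaultDataSourceFactory(context, "YourApp/1.0");
MediaSource episode = new ExtractorMediaSource(
        Uri.parse("https://example.com/episode.mp3"),
        dataSourceFactory, new DefaultExtractorsFactory(), null, null);

// A seamless (gapless) playlist
MediaSource playlist = new ConcatenatingMediaSource(episodeOne, episodeTwo);

// Video merged with side-loaded captions
MediaSource videoWithCaptions =
        new MergingMediaSource(videoSource, subtitleSource);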

DataSource

The DataSource class provides samples of data to a MediaSource. These samples can originate from a file on the SD card, a resource in the assets directory, or even a remote server. You can use one of the prebuilt DataSource classes or build your own to read data in a way that supports your use case. For example, maybe your application will stream media on a company intranet. You can use a custom DataSource to define the rules and protocols that allow this to happen securely.
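The DataSource interface itself is small. A skeleton for that intranet scenario might look like this (the class name is mine, and the real logic goes where the comments are):

public class IntranetDataSource implements DataSource {

    private Uri uri;

    @Override
    public long open(DataSpec dataSpec) throws IOException {
        uri = dataSpec.uri;
        // open the connection, apply the intranet's auth rules, etc.
        return C.LENGTH_UNSET; // or the resolved content length
    }

    @Override
    public int read(byte[] buffer, int offset, int readLength) throws IOException {
        // copy up to readLength bytes into buffer and return the count,
        // or C.RESULT_END_OF_INPUT when the stream is exhausted
        return C.RESULT_END_OF_INPUT;
    }

    @Override
    public Uri getUri() {
        return uri;
    }

    @Override
    public void close() throws IOException {
        // tear down the connection
        uri = null;
    }
}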

TrackSelector

The TrackSelector class dictates which track is selected for rendering and playback.

Renderer

The Renderer class decodes media and renders it. An example implementation is the MediaCodecAudioRenderer, which decodes audio data and renders it using several lower level ExoPlayer APIs.

LoadControl

The LoadControl class defines the buffering behavior of a particular MediaSource.

Finally

At this point, I know as much about ExoPlayer 2 as you do. I have some pretty extensive knowledge of ExoPlayer 1.x because I’ve used it on several Android projects. This series will document my journey of learning ExoPlayer 2 and upgrading an app that currently uses ExoPlayer 1.5.9. I will probably make mistakes, but I hope this series helps a few other developers in their effort to implement ExoPlayer 2 in a real-world app.

The app I will be using for demonstrating this upgrade is PremoFM. PremoFM is an open-source podcast player that I started building almost two years ago. The source code for the app is on GitHub (https://github.com/emuneee/premofm). I will be using a branch (https://github.com/emuneee/premofm/tree/exoplayer_2) for all of my ExoPlayer 2 upgrade work. I invite you to follow along. I’ll be back next week to discuss the structure of a typical audio playing app and how ExoPlayer fits in.

Please follow me on Twitter (@emuneee).

Finally, I’m working with a great team at RadioPublic to build an awesome podcast experience for Android and iOS. Hop on the beta today.

Some resources to review in the meantime:

ExoPlayer on GitHub

https://github.com/google/ExoPlayer

ExoPlayer — Developer Guide

https://google.github.io/ExoPlayer/guide.html

ExoPlayer on Medium

https://medium.com/google-exoplayer

Android Developer Backstage 48: ExoPlayer

http://androidbackstage.blogspot.com/2016/05/episode-48-exoplayer.html

Testing Your ContentProvider with Robolectric

You like testing? I love testing, especially unit testing. ContentProviders are the underpinnings of many data layer implementations in Android apps and, obviously, an important thing to test. I added some new code to a ContentProvider in the RadioPublic app and wanted to verify that the ContentProvider and model code worked. I spent an hour looking through the documentation and online; I also wanted to use the Robolectric test framework already set up in the app. After concluding my research, I found what I was looking for, and it’s very straightforward.

First of all, if you haven’t already done so, add the following to the dependencies section of your module’s build.gradle file:

testCompile 'junit:junit:4.12'
testCompile 'org.robolectric:robolectric:3.1.1'

In your unit test class, add these class annotations:

@RunWith(RobolectricTestRunner.class)
@Config(constants = BuildConfig.class, sdk = 18)

Register your ContentProvider with the appropriate Authority:

private static final String AUTHORITY = "com.example.debug";

@Before
public void setup() {
    YourProvider provider = new YourProvider();
    provider.onCreate();
    ShadowContentResolver.registerProvider(
            AUTHORITY, provider
    );
}

Get a reference to the ContentResolver, since this is, most likely, how you’ll be interacting with your provider:

ContentResolver contentResolver = RuntimeEnvironment.application.getContentResolver();

Finally, test your ContentProvider:

@Test
public void getSomeData() {
   ...
   Cursor cursor = contentResolver.query(Test.MyUri, null, null, null, null);
   ...
}

That’s it!  Hopefully this will save you time and encourage you to write tests for your ContentProviders.

Next Project

The next version of NC Traffic Cams (version 3.0, but with a different name) is nearly feature complete. I completely over-engineered this project, but purposefully. Since I stopped actively building PremoFM, I wanted to start on a project that would teach me a few advanced Android development tricks. I’m building the next version of NC Traffic Cams with the following in mind:

  • Model-View-Presenter (MVP), similar to Model-View-Controller, enforces the separation of Android-specific logic and business logic in an effort to make the business logic more testable. I’m also using Loaders to keep Presenters around (see the sketch after this list). This is great because it means I have to do no work to persist the app’s state during a device rotation.
  • Lots of unit tests.
  • Lots of RxJava. In the latest released version of NC Traffic Cams, I wrote a ton of AsyncTasks; this time around, zero AsyncTasks. I have RxJava to thank.
  • OrmLite for the data layer, so no more writing SQL selects and inserts by hand (I actually tried to integrate Realm, but threw it out because of the thread management issues I encountered).
  • OkHttp / Retrofit / Gson for the API layer, so no more writing HttpUrlConnection or JSON parsing logic. I have a simple API set up for NC Traffic Cams, and Retrofit made it ridiculously easy to get that data into the app.
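Here’s a minimal sketch of the Loader-retained-Presenter trick from the first bullet (PresenterLoader and PresenterFactory are my names, not library classes):

// A Loader survives configuration changes, so the Presenter it
// holds survives a device rotation for free
public class PresenterLoader<P> extends Loader<P> {

    private final PresenterFactory<P> factory;
    private P presenter;

    public PresenterLoader(Context context, PresenterFactory<P> factory) {
        super(context);
        this.factory = factory;
    }

    @Override
    protected void onStartLoading() {
        if (presenter == null) {
            presenter = factory.create();
        }
        // deliver the same Presenter instance across rotations
        deliverResult(presenter);
    }

    @Override
    protected void onReset() {
        presenter = null;
    }

    public interface PresenterFactory<P> {
        P create();
    }
}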

I wrote substantially less code this time around and the app is better, more stable, and more performant.  Can’t wait to show you all what it looks like.  It’ll be done soon™.

Here’s NC Traffic Cams v1 & v2 since it’s #ThrowbackThursday
