The Comment is a weekly digest of the stuff that grabbed my attention or occupied some part of my mind during the past week. Normally, it’ll be one thing that’s really been on my mind, followed by a handful of things that I found interesting. The Comment will be published each Monday at 10:30AM EST.
Thanks for reading.
## The Google Pixel 2
I’m not a tech reviewer, but I got my hands on the Google Pixel 2 in XL form. I’ve been using it for about 5 days, coming from the OG Pixel XL. The Pixel 2 XL is pretty great for reasons that matter to me. Here’s why:
Phones from Google, first the Nexus line and now the Pixel line, haven’t always been great at taking pictures. That changed with the Nexus 5X & Nexus 6P and continued with the original Pixel. Things have taken a serious step forward on the Google Pixel 2. The images that come off the sensor and get processed through Google’s algorithms are nothing short of incredible. It’s the primary reason I upgraded. I take a lot of pictures, mostly of my daughter, so having the best possible quality, so conveniently, is a huge plus. I enjoy how effortless it is to get some pretty fantastic pictures. Portrait mode is also great, though not perfect. Sometimes, edges around faces or hair aren’t blurred consistently, or light causes some distortion.
The video quality is fine, but the audio quality is so disappointing. It sounds tinny and super compressed. It’ll probably be fixed in a software update, but in the meantime, any precious memories captured on video will have pretty bad audio.
Finally, Google Photos continues to be a great complement to the Pixel line and one of Google’s greatest products.
I’ve used Android since 2010, when I failed to buy an iPhone 4 and bought a Samsung Captivate (the carrier-bloated version of the original Samsung Galaxy S) running Android 2.1 “Eclair”. Side note: how different would my life be if I had bought an iPhone 4? It was because of that phone that I overwhelmingly prefer “clean” versions of Android. No weird theming, no terrible customizations, no useless features. I just want fast software and quick updates. The Pixel 2 checks all of those boxes, and the software experience is consistently quick, as it was on the OG Pixel.
Front-Facing Speakers
When I jumped from the Nexus 6P (and from the Nexus 6 before that) to the original Pixel, I gave up front-facing stereo speakers. They are back on the Pixel 2 and sound great. I no longer need to cup my phone to hear music, podcasts, and videos the way I did with the OG Pixel’s single down-firing speaker. The Pixel 2’s speakers get pretty loud, have decent stereo separation, and even deliver a teeny bit of bass.
Sometime between the official Pixel 2 announcement and the device becoming available, Google revealed that the Pixel 2 has a special-purpose machine learning chip embedded in it, the Pixel Visual Core, capable of 3 trillion operations per second (for comparison’s sake, the iPhone 8/X’s “Neural Engine” peaks at 600 billion operations per second). The chip comprises 8 compute cores, or what Google is calling IPUs (Image Processing Units), on-die memory, and a smaller processing core that keeps data fed to the 8 IPUs. This chip’s purpose is to offload HDR+ processing from the CPU and to accelerate machine learning.

Introduced on the Nexus 5X/6P, Google’s HDR+ algorithms quickly capture a stack of photos (the last 10 or 15 frames) and use the combined data to drastically reduce noise and increase dynamic range. This gives you great-looking pictures where the shadows keep a pretty good amount of detail and the brightest parts of the photo aren’t blown out. On the OG Pixel, some portion of this algorithm was accelerated by the Hexagon DSP in Qualcomm’s Snapdragon 821, with the rest running on the CPU. The same is happening on the Pixel 2, but over the next few months Google will move these algorithms onto the Pixel Visual Core, which will speed up HDR+ processing by a factor of 5 while reducing its power usage by 90%.
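The core idea behind HDR+ — merging a burst of frames so random sensor noise averages away — can be sketched in a few lines. This is a deliberately minimal illustration (a real pipeline also aligns frames, weights them, and tone-maps the result), and the scene, noise level, and frame count below are made up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 64x64 scene with a shadow half and a highlight half.
scene = np.zeros((64, 64))
scene[:, :32] = 20.0   # shadows
scene[:, 32:] = 200.0  # highlights

# Simulate a burst of 15 frames, each corrupted by sensor noise.
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(15)]

# "Align and merge" reduced to its simplest form: average the stack.
merged = np.mean(frames, axis=0)

single_noise = np.std(frames[0] - scene)  # roughly the sensor noise, ~10
merged_noise = np.std(merged - scene)     # ~10 / sqrt(15), i.e. much lower
```

Averaging N frames cuts random noise by roughly √N, which is why a burst of short exposures can pull usable detail out of shadows without blowing out the highlights.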
Google is also opening this chip up to developers: camera apps will be able to run their Halide image processing code on it, and apps will be able to run their trained TensorFlow neural network models on the Pixel. The latter is the part that intrigues me. I’ve been toying around with machine learning and am getting close to having something deployable. Being able to do inference (using an already-trained neural network to make predictions on new data) entirely on the device, with decent performance, is valuable for improving the user experience of apps that make use of machine learning.
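To make “inference” concrete: once a network is trained, running it is just a forward pass through fixed weights — no gradients, no training loop — which is exactly the kind of regular arithmetic a dedicated chip can accelerate. Here’s a toy sketch; the tiny network and its hand-picked weights are invented for illustration (in practice the weights would come out of a TensorFlow training run and be exported to the device):

```python
import numpy as np

# Hypothetical "trained" weights for a 2-input, 2-hidden-unit network.
# Hard-coded here; on a phone they would be loaded from a model file.
W1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
b1 = np.array([-0.5, -0.5])
W2 = np.array([1.0, 1.0])
b2 = -0.25

def infer(x):
    # Inference is a single forward pass: matrix multiply, ReLU, output.
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU activation
    return float(h @ W2 + b2)

# This particular set of weights separates "inputs differ" (positive
# output) from "inputs match" (negative output) -- an XOR-style decision.
```

The point is that the expensive matrix math happens on fixed numbers, so it can be handed off wholesale to hardware like the Pixel Visual Core.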
The display is fine. The colors are noticeably less saturated than on the OG Pixel XL, but I’d rather have a properly calibrated display. Otherwise, it looks like any other competent smartphone display.
The texture on the back of the Pixel 2 is an improvement over the slipperiness of the OG Pixel. It’s much grippier.
The fingerprint sensor is noticeably quicker at recognizing my fingerprint than the OG Pixel.
The ambient music recognition is pretty cool, though it wasn’t consistent. It recognized music in loud bars and restaurants, music in movies, and sometimes music on the radio. It even correctly recognized a remixed Rihanna track (Rihanna’s vocals for “Work” over a different beat).
The body and display are more rounded than the OG Pixel XL. Using the Pixel 2 XL feels more comfortable in hand than the OG Pixel XL.
The always-on ambient display is super useful. It’s easy to glance at without the double-tap dance needed on the OG Pixel XL. A couple of subtle things about the ambient display: it adjusts its brightness depending on the lighting in your environment, and it’s not literally always on. If the proximity sensor detects that something is right in front of it (e.g., the phone is in your pocket or face down on a table), the screen shuts off.
The Pixel 2 XL is a worthy upgrade over the OG Pixel. It’s expensive, but I had an almost pristine OG Pixel XL, so a pretty good trade-in deal helped bring the cost down.
// AlphaGo Zero learns the rules
In 2015 and 2016, Google’s AI division, DeepMind, trained an artificial intelligence to play and master the ancient game of Go. The training method used a lot of different machine learning techniques, including learning from games played by amateur and professional Go players. AlphaGo mastered Go and, surprisingly, beat the best Go players in the world in 2016.
The next version of AlphaGo, AlphaGo Zero, takes things further. This time around, AlphaGo Zero was given just the rules of the game. It then trained itself by playing against itself millions of times, leveraging reinforcement learning. As a result, AlphaGo Zero not only mastered Go more quickly, it did so with fewer compute resources and attained a higher level of expertise in the game. This is fascinating because it shows how efficient computers can be when they aren’t restricted to human-level understanding, as the earlier versions of AlphaGo were. To me, this is a breakthrough.
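The self-play idea is easier to appreciate in miniature: an agent that knows only the rules improves purely by playing against itself and learning from wins and losses. Below is a sketch using tabular Q-learning on a tiny Nim-style game (take 1–3 stones from a pile; whoever takes the last stone wins). Everything here — the game, the hyperparameters — is invented for illustration and is orders of magnitude simpler than AlphaGo Zero’s neural-network-guided tree search:

```python
import random

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def train(episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Self-play Q-learning: both players share and update one Q-table."""
    rng = random.Random(seed)
    Q = {}  # (pile, move) -> value from the current mover's perspective
    for _ in range(episodes):
        pile = 10
        while pile > 0:
            moves = legal_moves(pile)
            # Epsilon-greedy: mostly exploit, sometimes explore.
            if rng.random() < eps:
                m = rng.choice(moves)
            else:
                m = max(moves, key=lambda a: Q.get((pile, a), 0.0))
            nxt = pile - m
            if nxt == 0:
                target = 1.0  # took the last stone: a win
            else:
                # Zero-sum game: the opponent's best value is our loss.
                target = -max(Q.get((nxt, a), 0.0) for a in legal_moves(nxt))
            old = Q.get((pile, m), 0.0)
            Q[(pile, m)] = old + alpha * (target - old)
            pile = nxt
    return Q

def best_move(Q, pile):
    return max(legal_moves(pile), key=lambda a: Q.get((pile, a), 0.0))
```

After training, the learned policy rediscovers the classic strategy — leave your opponent a multiple of 4 stones (e.g., take 2 from a pile of 10) — without ever being told it, which is the essence of what makes self-play learning so striking.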
// Three Step Guide to Blockchain
I’m still wrapping my head around the blockchain, the technology that enables Bitcoin and Ethereum. Thijs Maas lays out the 3 fundamental technologies that make blockchain so awesome.
// In rotation: “Laila’s Wisdom” from Rapsody
This album is so so so dope. There are a handful of albums that have been memorable and are still in my rotation years after their release date. Off the top of my head, a few of them are J. Cole’s “The Warmup”, Kaytranada’s “99.9%”, Little Brother’s “The Minstrel Show”, and Wiz Khalifa’s “Kush & Orange Juice”. I’m 100% sure “Laila’s Wisdom” is going to end up on this list. It’s such a smooth listen from beginning to end. I’m also a huge fan of the mid-track beat switch-ups.
/* fini */