NeatoCode Techniques

It’s release day for the Amazon Fire phone. My wife’s first impressions are above. :) It’s exciting to try some of the new hardware innovations, like the magnetic ear buds with a flat cord that should stay together in your pocket and tangle less, and the face-tracking camera array that lets the phone respond to how you move it in your hand.

For software developers, an Amazon developer evangelist also told me there is a developer API that lets apps provide their own responses when the Firefly button is pressed to identify products, music, and video. Here are some screenshots of using the Firefly button to identify music and a book:

Besides buying things, it also worked well for taking pictures of phone numbers and URLs and tapping the result instead of having to type them in.

Some other good features are Corning Gorilla Glass 3 front and back and an IPS 720p display. 720p is the lowest HD TV resolution, and some phones already ship at 1080p or more, but I think it is plenty of detail for a small phone display. Some Apple and LG phones I’ve owned had glass backs that shattered. Hopefully this Gorilla Glass version will stand up better while keeping the pretty glossy back that is superior to plastic.

IPS is a type of LCD with better viewing angles than many other panels, letting you still see the screen even when it is tilted - important considering Amazon’s face-tracking technology uses tilting extensively. For example, on the map screen you will see just markers after a search, but you can tilt the phone slightly to see labels and ratings as well when you want them.

Map screen before tilting, with less information.

Map screen after tilting with more information.

Amazon introduces these features well with tutorial videos when you first start the phone, and with the Mayday service, where customer support will video call with you, see your screen, and help you out. Starting it up, I was amazed it was already logged into my Amazon account. This is a nice touch that made setup one step simpler.

It’s unlikely Amazon actually changes anything on each phone it ships, since that would be too much work, but it must record the serial number of each unit and who bought it, so when the device logs into WiFi for the first time the server can tell it which account to use.

In the cons category, my wife and I both found the keyboard and touch screen difficult to use and the OS rather slow to respond to any input. We searched for some way to calibrate the touch screen but couldn’t find any, and ended up always having to touch just above the option or on-screen keyboard button we wanted. Even once we learned that trick, the OS was slow to react to any touches, and my wife would often tap two or three times, or tilt the phone several ways, before it reacted to the first thing she did.

It isn’t entirely clear why this should be. The Qualcomm Snapdragon 800 quad-core 2.2 GHz Krait 400 CPU is only one generation behind current Android flagship phones. Similarly, the Android 4.2.2-based OS isn’t far behind either, although Amazon has not licensed the standard Google apps such as Google Play. The HTC One M8, for example, uses a Snapdragon 801 at 2.3-2.5 GHz and runs Android 4.4. Maybe the laggy UI is due to running the camera array constantly to track the user’s face.

For developers, to connect the Android Debug Bridge, go to “Settings”->”Device”->”Get info” and tap the top item repeatedly until a “Developer Options” button appears at the bottom of the screen. Tap it and turn on the Developer Options and USB Debugging options. Make sure you have the vendor ID 0x1949 in the adb_usb.ini file inside the .android directory in your home directory, and restart adb if needed (“adb kill-server” then “adb devices”).
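From a terminal on OS X or Linux, that boils down to something like this (assuming adb is already on your PATH; create adb_usb.ini if it doesn’t exist yet):

echo "0x1949" >> ~/.android/adb_usb.ini
adb kill-server
adb devices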

Here is what the device looks like to the “system_profiler SPUSBDataType” command on OS X:

            Android:
              Product ID: 0x0800
              Vendor ID: 0x1949 (Lab126)
              Version: 2.32
              Serial Number: B0F10701427307PF
              Speed: Up to 480 Mb/sec
              Manufacturer: Android
              Location ID: 0x1d110000 / 6
              Current Available (mA): 500
              Current Required (mA): 2
Have fun and thanks for checking out the phone with us!

2GB Glass vs. Original Comparison Pics and Stats

My wife recently received the new 2GB Glass as a warranty replacement, and I had the opportunity to take some pictures. This version was just announced by Google on Google+ as a way to improve speed and reliability. The most obvious physical difference is a new type of nose pad mounted on a swivel; here it is in comparison to my older Google Glass:


And here it is alone:

This should definitely be an improvement, because the old nose pads were always falling off and disappearing. My current Glass actually only has one left. The new Google Glass also has an FCC mark on the bottom:

When I originally signed up for Google Glass at Google IO, we all had to basically sign on to a human research experiment, so FCC approval is a big step up. Lastly, there is much more memory available. Here is /proc/meminfo for the new unit:
MemTotal: 1475828 kB
MemFree: 664108 kB
Buffers: 8164 kB
Cached: 280776 kB
SwapCached: 0 kB
Active: 321460 kB
Inactive: 241100 kB
Active(anon): 273972 kB
Inactive(anon): 2708 kB
Active(file): 47488 kB
Inactive(file): 238392 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal: 996352 kB
HighFree: 248880 kB
LowTotal: 479476 kB
LowFree: 415228 kB
SwapTotal: 131068 kB
SwapFree: 131068 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 273660 kB
Mapped: 326416 kB
Shmem: 3080 kB
Slab: 29612 kB
SReclaimable: 13120 kB
SUnreclaim: 16492 kB
KernelStack: 7016 kB
PageTables: 9992 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 868980 kB
Committed_AS: 12293056 kB
VmallocTotal: 507904 kB
VmallocUsed: 193464 kB
VmallocChunk: 181244 kB

And here they are for the older one - that works out to roughly 580 MB visible to the OS, versus roughly 1.4 GB on the new unit, with the rest of the advertised 2GB reserved by the kernel and hardware:
MemTotal: 596116 kB
MemFree: 36368 kB
Buffers: 9168 kB
Cached: 140428 kB
SwapCached: 15832 kB
Active: 187736 kB
Inactive: 200780 kB
Active(anon): 117384 kB
Inactive(anon): 123356 kB
Active(file): 70352 kB
Inactive(file): 77424 kB
Unevictable: 1008 kB
Mlocked: 0 kB
HighTotal: 106496 kB
HighFree: 1416 kB
LowTotal: 489620 kB
LowFree: 34952 kB
SwapTotal: 131068 kB
SwapFree: 111276 kB
Dirty: 28 kB
Writeback: 0 kB
AnonPages: 234412 kB
Mapped: 228084 kB
Shmem: 772 kB
Slab: 21896 kB
SReclaimable: 8124 kB
SUnreclaim: 13772 kB
KernelStack: 6312 kB
PageTables: 8404 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 429124 kB
Committed_AS: 10665796 kB
VmallocTotal: 507904 kB
VmallocUsed: 186124 kB
VmallocChunk: 157700 kB

New /proc/cpuinfo:
Processor : ARMv7 Processor rev 3 (v7l)
processor : 0
BogoMIPS : 1194.54

processor : 1
BogoMIPS : 1199.54

Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x1
CPU part : 0xc09
CPU revision : 3

Hardware : OMAP4430
Revision : 0005
Serial : 0168376606012020

Older /proc/cpuinfo:
Processor : ARMv7 Processor rev 3 (v7l)
processor : 0
BogoMIPS : 597.27

processor : 1
BogoMIPS : 599.77

Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x1
CPU part : 0xc09
CPU revision : 3

Hardware : OMAP4430
Revision : 0003
Serial : 015d984107018012

I know many people I’ve talked to, particularly AR and game developers, were hoping Google would move to a modern processor, since the TI OMAP 4430 shipped on smartphones many years ago. Unfortunately we didn’t see that with this revision, although the roughly doubled BogoMIPS figures above suggest the same chip is at least clocked about twice as fast now.

For Android developers, here are the system properties of the new unit:
# begin build properties
# autogenerated by buildinfo.sh
ro.build.id=XRV70D
ro.build.display.id=XRV70D
ro.build.version.incremental=1218353
ro.build.version.sdk=19
ro.build.version.codename=REL
ro.build.version.release=4.4.2
ro.build.date=Mon Jun 9 22:36:33 UTC 2014
ro.build.date.utc=1402353393
ro.build.type=user
ro.build.user=android-build
ro.build.host=kpfj1.cbf.corp.google.com
ro.build.tags=release-keys
ro.product.model=Glass 1
ro.product.brand=Google
ro.product.name=glass_1
ro.product.device=glass-1
ro.product.board=glass_1
ro.product.cpu.abi=armeabi-v7a
ro.product.cpu.abi2=armeabi
ro.product.manufacturer=Google
ro.product.locale.language=en
ro.product.locale.region=US
ro.wifi.channels=
ro.board.platform=omap4
# ro.build.product is obsolete; use ro.product.device
ro.build.product=glass-1
# Do not try to parse ro.build.description or .fingerprint
ro.build.description=glass_1-user 4.4.2 XRV70D 1218353 release-keys
ro.build.fingerprint=Google/glass_1/glass-1:4.4.2/XRV70D/1218353:user/release-keys
ro.build.characteristics=default
# end build properties
#
# from device/glass/glass-1/system.prop
#
wifi.interface=wlan0
com.ti.omap_enhancement=true
ro.bq.gpu_to_cpu_unsupported=1

#
# ADDITIONAL_BUILD_PROPERTIES
#
drm.service.enabled=false
glass.gestureservice.start=1
persist.sys.usb.config=ptp
ro.com.android.dateformat=MM-dd-yyyy
ro.build.version.glass=XE18.1
ro.build.version.minor.glass=RC05
ro.error.receiver.system.apps=com.google.glass.logging
wifi.interface=wlan0
wifi.supplicant_scan_interval=60
bluetooth.enable_timeout_ms=10000
hwui.text_gamma=4
persist.sys.forced_orientation=0
ro.hwui.disable_scissor_opt=true
ro.hwui.drop_shadow_cache_size=2
ro.hwui.gradient_cache_size=0.5
ro.hwui.layer_cache_size=5
ro.hwui.patch_cache_size=64
ro.hwui.path_cache_size=3
ro.hwui.r_buffer_cache_size=3
ro.hwui.text_large_cache_height=512
ro.hwui.text_large_cache_width=2048
ro.hwui.text_small_cache_height=256
ro.hwui.text_small_cache_width=1024
ro.hwui.texture_cache_flushrate=0.4
ro.hwui.texture_cache_size=16
ro.opengles.version=131072
ro.sf.lcd_density=240
dalvik.vm.heapgrowthlimit=72m
dalvik.vm.heapmaxfree=2m
dalvik.vm.heapminfree=512k
dalvik.vm.heapsize=192m
dalvik.vm.heapstartsize=5m
dalvik.vm.heaptargetutilization=0.75
dalvik.vm.jit.codecachesize=0
persist.sys.dalvik.vm.lib=libdvm.so
dalvik.vm.dexopt-flags=m=y
net.bt.name=Android
dalvik.vm.stack-trace-file=/data/anr/traces.txt

And here are those of the older one:
# begin build properties
# autogenerated by buildinfo.sh
ro.build.id=XRV72
ro.build.display.id=XRV72
ro.build.version.incremental=1223935
ro.build.version.sdk=19
ro.build.version.codename=REL
ro.build.version.release=4.4.2
ro.build.date=Thu Jun 12 03:02:32 UTC 2014
ro.build.date.utc=1402542152
ro.build.type=user
ro.build.user=android-build
ro.build.host=wped21.hot.corp.google.com
ro.build.tags=release-keys
ro.product.model=Glass 1
ro.product.brand=Google
ro.product.name=glass_1
ro.product.device=glass-1
ro.product.board=glass_1
ro.product.cpu.abi=armeabi-v7a
ro.product.cpu.abi2=armeabi
ro.product.manufacturer=Google
ro.product.locale.language=en
ro.product.locale.region=US
ro.wifi.channels=
ro.board.platform=omap4
# ro.build.product is obsolete; use ro.product.device
ro.build.product=glass-1
# Do not try to parse ro.build.description or .fingerprint
ro.build.description=glass_1-user 4.4.2 XRV72 1223935 release-keys
ro.build.fingerprint=Google/glass_1/glass-1:4.4.2/XRV72/1223935:user/release-keys
ro.build.characteristics=default
# end build properties
#
# from device/glass/glass-1/system.prop
#
wifi.interface=wlan0
com.ti.omap_enhancement=true
ro.bq.gpu_to_cpu_unsupported=1

#
# ADDITIONAL_BUILD_PROPERTIES
#
drm.service.enabled=false
glass.gestureservice.start=1
persist.sys.usb.config=ptp
ro.com.android.dateformat=MM-dd-yyyy
ro.build.version.glass=XE18.11
ro.build.version.minor.glass=RC06
ro.error.receiver.system.apps=com.google.glass.logging
wifi.interface=wlan0
wifi.supplicant_scan_interval=60
bluetooth.enable_timeout_ms=10000
hwui.text_gamma=4
persist.sys.forced_orientation=0
ro.hwui.disable_scissor_opt=true
ro.hwui.drop_shadow_cache_size=2
ro.hwui.gradient_cache_size=0.5
ro.hwui.layer_cache_size=5
ro.hwui.patch_cache_size=64
ro.hwui.path_cache_size=3
ro.hwui.r_buffer_cache_size=3
ro.hwui.text_large_cache_height=512
ro.hwui.text_large_cache_width=2048
ro.hwui.text_small_cache_height=256
ro.hwui.text_small_cache_width=1024
ro.hwui.texture_cache_flushrate=0.4
ro.hwui.texture_cache_size=16
ro.opengles.version=131072
ro.sf.lcd_density=240
dalvik.vm.heapgrowthlimit=72m
dalvik.vm.heapmaxfree=2m
dalvik.vm.heapminfree=512k
dalvik.vm.heapsize=192m
dalvik.vm.heapstartsize=5m
dalvik.vm.heaptargetutilization=0.75
dalvik.vm.jit.codecachesize=0
persist.sys.dalvik.vm.lib=libdvm.so
dalvik.vm.dexopt-flags=m=y
net.bt.name=Android
dalvik.vm.stack-trace-file=/data/anr/traces.txt

Thanks for reading! Hope this helps anyone considering buying an upgrade. Here we are with the last version:

Hopefully this version goes better!

Auto Chat Sticker: Foreground Extraction using the Dual Lens HTC One M8

HTC recently released a new version of the HTC One, called the M8, with dual lenses on the back that allow lots of interesting uses of the resulting depth data - for example, quick and easy foreground extraction.

You can start with the DualLensDemo included in their public API:
https://www.htcdev.com/devcenter/opensense-sdk/htc-dual-lens-api

This example produces a depth map where the color changes based on distance. Here is a screenshot of what it looks like:

Here is the code to draw the depth visualization:

// Fetch the per-pixel depth "strength map" from the dual lens API.
DualLens.Holder<byte[]> buf = mDualLens.new Holder<byte[]>();
DualLens.DataInfo datainfo = mDualLens.getStrengthMap(buf);
int[] depthData = new int[datainfo.width * datainfo.height];
int leftByte;
for (int i = 0; i < datainfo.width * datainfo.height; i++) {
    // Each depth sample is an unsigned byte (0-255).
    leftByte = buf.value[i] & 0x000000ff;
    // Map the depth byte to a color via the demo's mColorBar lookup table.
    depthData[i] = mColorBar[leftByte * 500];
}
// Show the colored depth map.
Bitmap bmp = Bitmap.createBitmap(depthData, datainfo.width, datainfo.height, Config.ARGB_8888);
image.setImageBitmap(bmp);
image.setBackgroundColor(Color.WHITE);

You can keep a reference to the original image bitmap and then either pull colors from it or white out pixels based on depth, like this:

// Fetch the depth map again and build an output image the same size.
DualLens.Holder<byte[]> buf = mDualLens.new Holder<byte[]>();
DualLens.DataInfo datainfo = mDualLens.getStrengthMap(buf);
int[] outputImage = new int[datainfo.width * datainfo.height];
int pixelDepth;
for (int i = 0; i < datainfo.width * datainfo.height; i++) {
    pixelDepth = buf.value[i] & 0x000000ff;
    // Convert the flat index to x/y in the depth map...
    int depthX = i % datainfo.width;
    int depthY = i / datainfo.width;
    // ...then scale to the matching pixel in the full-size original photo.
    int originalX = originalImageBitmap.getWidth() * depthX / datainfo.width;
    int originalY = originalImageBitmap.getHeight() * depthY / datainfo.height;
    // White out the background, pick the original color from the foreground.
    // The depth threshold of 64 separates the two; tune it per scene.
    outputImage[i] = pixelDepth > 64 ? Color.WHITE :
        originalImageBitmap.getPixel(originalX, originalY);
}
Bitmap bmp = Bitmap.createBitmap(outputImage, datainfo.width, datainfo.height, Config.ARGB_8888);
image.setImageBitmap(bmp);
image.setBackgroundColor(Color.WHITE);
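
One performance note if you build on this: Bitmap.getPixel inside a tight loop is slow, so if the extraction feels sluggish, copying the source pixels out once with Bitmap.getPixels and indexing that array instead should speed it up considerably.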

Source code on GitHub. Here is a screenshot with the modifications:

Boom! Instant chat stickers just like the ones zooming all over the place in hit communications apps like Line and Facebook Messenger. Foreground extraction is also very useful for making product images for user marketplace apps.

Traditionally it has been very labor intensive: in the graphics and retail industries, interns and other employees just starting out get tasked with carefully editing photos by hand, or with more complex imaging setups than we have on consumer phones.

So using the M8 Dual Lens API to get these results with essentially no effort is really exciting! They also offer other ways to get at the data, like OpenGL geometries.

Wow, a really detailed, honest, and interesting look at what it is like to run an app startup off ad revenue:

locketapp:

Dear Locket users,

For those of you that signed up to earn cash, I’m regretful that we had to discontinue the pay-per-swipe service as of January 1st, 2014. We are now offering gift card options to redeem your remaining balance. The option is available inside the Locket app.

Below is an…

Hello. How can I stream a live feed from a computer to Google Glass? Tks.
Anonymous

Hello, the most often used solution for this is to set up a Google Hangout between the Glass and the computer.

Thanks for the great website, I had a quick question. I’m looking for a method to stream the Google Glass camera over WiFi to my PC, and vice versa (stream video to Glass). Are you aware of any methods to achieve this over local WiFi?
Anonymous

If you are a developer, some people have the WebRTC source code working on Glass:

https://code.google.com/p/webrtc/issues/detail?id=2083

https://code.google.com/p/webrtc/issues/detail?id=2561

That would enable WiFi video calling. I don’t know of any good apps for that yet. It does usually take code changes to get a normal Android video app to work, even if you have root and a Bluetooth keyboard paired.

Hello, we created a Glass app with a custom menu using the Mirror API. But when we click the menu, it shows a synchronization icon over the timeline item and the item moves to the first position in the timeline, yet we cannot get the menu's click event in our notification servlet. Tumblr doesn't allow posting URLs so I can't send the links. If you can advise us on this, we would really appreciate it.
Anonymous

Hello, this often happens when developing with your server on localhost, where the Google Mirror API servers can’t contact your server. Try just deploying to a free appspot.com instance instead; it’s easier than setting up all the strange proxy solutions people have.

Panning UI via Head Movements in Google Glass

The display of a wearable computer can be extended by measuring user head or eye movements and panning the display. This is one of many techniques that will have to be developed in more depth to overcome the limited display and input options on wearable devices. I show the sample code below in action, along with the built-in Google Glass photo sphere and browser panning support, in this video:

Google Glass additionally uses head movements to scroll the list of voice commands after you use the “OK Glass” command:

Another place you may have seen it used is in my Through The Walls AR hack, where you can look around and have the display scroll markers across the screen indicating where distant, out-of-sight things like traffic cameras are.

So what happens is the user sees one part of the image:

Moves their head:

Then sees another part:

The code I used for the demo is available on GitHub as GyroImageView. I tried to make as simple an example as possible, so it just scrolls an image of the Golden Gate Bridge. Here is the code that sets up the scrolling image using the GestureImageView library:


// Load the layout and grab the pannable image view.
setContentView(R.layout.view_activity);
image = (GestureImageView) findViewById(R.id.image);
// A reusable animation that glides the image toward a target position.
moveAnimation = new MoveAnimation();
moveAnimation.setAnimationTimeMS(ANIMATION_DURATION_MS);
moveAnimation.setMoveAnimationListener(new MoveAnimationListener() {
    @Override
    public void onMove(final float x, final float y) {
        image.setPosition(x, y);
        image.redraw();
    }
});

Here is where the gyro measurements, adjusted by the Sensor Fusion orientation tracking method, are used to scroll the view:


// On gyro motion, start an animated scroll in that direction.
@Override
public void onUpdate(float[] aGyro, float[] aGyroSum) {
    final float yGyro = aGyro[1];
    final float deltaX = GYRO_TO_X_PIXEL_DELTA_MULTIPLIER * yGyro;
    animateTo(deltaX);
}

// Animate to a given offset, stopping at the image edges.
private void animateTo(final float animationOffsetX) {
    float nextX = image.getImageX() + animationOffsetX;
    final int maxWidth = image.getScaledWidth();
    final int leftBoundary = (-maxWidth / 2) + image.getDisplayWidth();
    final int rightBoundary = (maxWidth / 2);
    if (nextX < leftBoundary) {
        nextX = leftBoundary;
    } else if (nextX > rightBoundary) {
        nextX = rightBoundary;
    }
    moveAnimation.reset();
    moveAnimation.setTargetX(nextX);
    moveAnimation.setTargetY(image.getImageY());
    image.animationStart(moveAnimation);
}

An animation is used to help smooth the scrolling of the UI. It spreads the movement out over time, and is reset if a new reading comes in. Without animation, the UI tended to jitter back and forth a lot. You can adjust the GYRO_TO_X_PIXEL_DELTA_MULTIPLIER and ANIMATION_DURATION_MS constants in the class to pan the UI more or less for a given motion, and to take more or less animation time to show the result.
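
For reference, those knobs look something like this; the values below are made-up placeholders for illustration, not the ones in the GitHub repo:

// Hypothetical tuning values, for illustration only.
// Pixels of pan per unit of gyro reading; larger pans further per head turn.
private static final float GYRO_TO_X_PIXEL_DELTA_MULTIPLIER = 60f;
// Longer animations smooth more jitter but add visible lag.
private static final long ANIMATION_DURATION_MS = 150;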

Sensor fusion is used to address the problems of orientation tracking with Android sensors. If orientation is tracked using the accelerometer and magnetic compass alone, the signal is very noisy. If it is tracked using the gyro alone, turning your head left and then back right to where you started may not register as the same location, a problem called gyro drift.
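
If you want to experiment with the idea yourself, here is a minimal sketch of one common sensor fusion approach, a complementary filter. The class name, blending weight, and use of the rotation vector sensor are my own illustration, not the code from this demo:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Illustrative complementary filter: integrate the fast but drifting gyro,
// and slowly pull the result toward the noisy but stable rotation-vector
// estimate. Register it for TYPE_GYROSCOPE and TYPE_ROTATION_VECTOR events.
public class OrientationFusion implements SensorEventListener {
    // How much to trust the gyro path; the remainder cancels drift.
    private static final float GYRO_WEIGHT = 0.98f;
    private float fusedAzimuth; // radians; ignores angle wraparound for brevity
    private long lastGyroTimestampNs;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
            if (lastGyroTimestampNs != 0) {
                // Integrate angular velocity (rad/s) over the elapsed seconds.
                float dt = (event.timestamp - lastGyroTimestampNs) * 1e-9f;
                fusedAzimuth += event.values[1] * dt;
            }
            lastGyroTimestampNs = event.timestamp;
        } else if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            // Convert the rotation vector to an absolute azimuth angle.
            float[] rotation = new float[9];
            float[] orientation = new float[3];
            SensorManager.getRotationMatrixFromVector(rotation, event.values);
            SensorManager.getOrientation(rotation, orientation);
            // Blend: mostly keep the gyro-integrated value, but nudge it
            // toward the absolute estimate so drift cannot accumulate.
            fusedAzimuth = GYRO_WEIGHT * fusedAzimuth
                    + (1f - GYRO_WEIGHT) * orientation[0];
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    public float getFusedAzimuth() {
        return fusedAzimuth;
    }
}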

What are your opinions on this neat technique?

-Lance

Interested in working on cutting edge Android and Glass apps at startups and multi-billion dollar companies? New Frontier Nomads is hiring!

Hi, I hope you are doing well. I am impressed by your work. The 'Vein Overlay' app looks amazing. I am also looking to start Google Glass app development. Can you please tell me which tools and programming languages to use for development? Thank You.

Thanks! The vein overlays (http://neatocode.tumblr.com/post/53113748050/super-human-vision-google-glass) were done using Java and the Android SDK:

http://developer.android.com/sdk/index.html

Java is the language used to write Android apps. Google recently released some similar demos for writing Java Android apps for Google Glass here:

https://developers.google.com/glass/gdk

Hi, great glassware app for the autism hackathon. I'm curious how you are getting the display mirrored to your desktop for screencasting.

Thanks David, I wrote up how to mirror the display here:

http://neatocode.tumblr.com/post/49566072064/mirroring-google-glass

It basically uses the Android developer tools’ ability to take a screenshot, and does it as fast as it can, for a somewhat decent video feed.
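
If you want to experiment with the same idea, a crude approximation is to just loop adb’s built-in screenshot command from a terminal; this is a rough sketch of the concept, not the actual code behind that post:

# Repeatedly grab frames over adb; slow, but it shows the idea.
while true; do
  adb shell screencap -p /sdcard/frame.png
  adb pull /sdcard/frame.png frame.png
done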