NeatoCode Techniques
Hello. How can I stream a live feed from a computer to Google Glass? Thanks.
Anonymous

Hello, the most common solution for this is to set up a Google Hangout between Glass and the computer.

Thanks for the great website, I had a quick question. I’m looking for a method to stream the Google Glass camera over WiFi to my PC, and vice versa (stream video to Glass). Are you aware of any methods to achieve this over a local WiFi network?
Anonymous

If you are a developer, some people have the web RTC source code working on Glass:

https://code.google.com/p/webrtc/issues/detail?id=2083

https://code.google.com/p/webrtc/issues/detail?id=2561

That would enable WiFi video calling. I don’t know of any good apps for that yet. It usually takes code changes to get a normal Android video app working, even if you have root and a Bluetooth keyboard paired.

Hello, we created a Glass app with a custom menu using the Mirror API. But when we click the menu, it shows a synchronization icon over the timeline item and the item moves to the first position in the timeline, yet we cannot get the menu’s click event in our notification servlet. Tumblr doesn’t allow posting URLs, so I can’t send the links. If you can advise us on this, we would really appreciate it.
Anonymous

Hello, this often happens when developing with your server on localhost, where the Google Mirror API servers can’t reach it to deliver notifications. Try just deploying to a free appspot.com instance instead; it’s easier than setting up one of the odd proxy workarounds people use.
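For reference, here is a minimal sketch of registering a timeline subscription with the Mirror API Java client so menu taps get delivered. The callback URL and user token are placeholders, and the URL must be publicly reachable over HTTPS (such as an appspot.com deployment, not localhost):

import java.io.IOException;

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.Subscription;

public class SubscriptionSketch {
    // "mirror" must be an authorized Mirror API client for the user.
    public static void subscribeToTimeline(Mirror mirror) throws IOException {
        Subscription subscription = new Subscription()
                .setCollection("timeline") // delivers timeline events, including menu taps
                .setCallbackUrl("https://your-app.appspot.com/notify") // placeholder
                .setUserToken("user-123"); // placeholder identifier for the user
        mirror.subscriptions().insert(subscription).execute();
    }
}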

Panning UI via Head Movements in Google Glass

The display of a wearable computer can be extended by measuring the user’s head or eye movements and panning the display accordingly. This is one of many techniques that will have to be developed further to overcome the limited display and input options on wearable devices. I show the sample code below in action, along with the built-in Google Glass photo sphere and browser panning support, in this video:

Google Glass additionally uses head movements to scroll the list of voice commands after you use the “OK Glass” command:

Another place you may have seen it used is in my Through The Walls AR hack, where you can look around and the display scrolls markers across the screen indicating where distant, out-of-sight things like traffic cameras are.

So what happens is the user sees one part of the image:

Moves their head:

Then sees another part:

The code I used for the demo is available on GitHub as GyroImageView. I tried to make the example as simple as possible, so it just scrolls an image of the Golden Gate Bridge. Here is the code that sets up the scrolling image using the GestureImageView library:


setContentView(R.layout.view_activity);
image = (GestureImageView) findViewById(R.id.image);
moveAnimation = new MoveAnimation();
moveAnimation.setAnimationTimeMS(ANIMATION_DURATION_MS);
moveAnimation.setMoveAnimationListener(new MoveAnimationListener() {
    @Override
    public void onMove(final float x, final float y) {
        image.setPosition(x, y);
        image.redraw();
    }
});

Here is where the gyro measurements, adjusted with the Sensor Fusion orientation tracking method, are used to scroll the view:


// On gyro motion, start an animated scroll in that direction.
@Override
public void onUpdate(float[] aGyro, float[] aGyroSum) {
    final float yGyro = aGyro[1];
    final float deltaX = GYRO_TO_X_PIXEL_DELTA_MULTIPLIER * yGyro;
    animateTo(deltaX);
}

// Animate to a given offset, stopping at the image edges.
private void animateTo(final float animationOffsetX) {
    float nextX = image.getImageX() + animationOffsetX;
    final int maxWidth = image.getScaledWidth();
    final int leftBoundary = (-maxWidth / 2) + image.getDisplayWidth();
    final int rightBoundary = (maxWidth / 2);
    if (nextX < leftBoundary) {
        nextX = leftBoundary;
    } else if (nextX > rightBoundary) {
        nextX = rightBoundary;
    }
    moveAnimation.reset();
    moveAnimation.setTargetX(nextX);
    moveAnimation.setTargetY(image.getImageY());
    image.animationStart(moveAnimation);
}

An animation is used to help smooth the scrolling of the UI. It spreads the movement out over time and is reset whenever a new reading comes in; without animation, the UI tended to jitter back and forth a lot. You can adjust the GYRO_TO_X_PIXEL_DELTA_MULTIPLIER and ANIMATION_DURATION_MS constants in the class to pan the UI more per unit of detected motion, or to take more or less animation time to show the result.
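For reference, the two tuning knobs look something like this; the values here are hypothetical, the real ones are in the GyroImageView source:

// Hypothetical tuning values; adjust to taste.
// A larger multiplier pans more pixels per unit of gyro rotation.
private static final float GYRO_TO_X_PIXEL_DELTA_MULTIPLIER = 50f;
// A longer duration smooths more, but lags further behind head motion.
private static final long ANIMATION_DURATION_MS = 100;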

Sensor fusion is used to address the problems of orientation tracking with Android sensors. If orientation is tracked using only the accelerometer and magnetic compass, the signal is very noisy. If it is tracked using only the gyro, turning your head left and then back to where you started may not register as the same position, a problem called gyro drift. Fusing the sensors compensates: the gyro supplies smooth short-term motion while the accelerometer and compass slowly correct the accumulated drift.
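As a rough illustration of the blending idea, and not the exact filter the demo’s sensor fusion code uses, a complementary filter trusts the gyro over short intervals and the accelerometer/compass over long ones:

// Minimal complementary-filter sketch. Inputs are assumed: a gyro rate in
// rad/s and an absolute orientation angle in radians computed from the
// accelerometer and magnetic compass.
public class ComplementaryFilter {
    // Closer to 1 trusts the smooth gyro more; the small remainder lets the
    // noisy but drift-free accel/compass reading slowly correct the estimate.
    private static final float ALPHA = 0.98f;

    private float angle; // fused orientation estimate, radians

    public float update(float gyroRate, float accelMagAngle, float dtSeconds) {
        // Integrate the gyro for smooth short-term motion, then pull the
        // estimate gently toward the drift-free accel/compass reading.
        angle = ALPHA * (angle + gyroRate * dtSeconds)
                + (1f - ALPHA) * accelMagAngle;
        return angle;
    }
}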

What are your opinions on this neat technique?

-Lance

Interested in working on cutting edge Android and Glass apps at startups and multi-billion dollar companies? New Frontier Nomads is hiring!

Hi, I hope you are doing well. I am impressed by your work. The 'Vein Overlay' app looks amazing. I am also looking to start Google Glass app development. Can you please tell me which tools and programming languages to use for development? Thank you.

Thanks! The vein overlays ( http://neatocode.tumblr.com/post/53113748050/super-human-vision-google-glass ) were done using Java and the Android SDK:

http://developer.android.com/sdk/index.html

Java is the language used to write Android apps. Google recently released some similar demos for writing Java Android apps for Google Glass here:

https://developers.google.com/glass/gdk

Hi, great Glassware app for the autism hackathon. I'm curious how you're getting the display mirrored to your desktop for screencasting.

Thanks David, I wrote up how to mirror the display here:

http://neatocode.tumblr.com/post/49566072064/mirroring-google-glass

It basically uses the Android developer tools’ ability to take a screenshot, capturing frames as fast as it can for a somewhat decent video feed.
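The gist is something like the loop below. This is a sketch, assuming adb is on your PATH with a device attached, and that your adb is recent enough to support exec-out (older versions need adb shell screencap plus newline cleanup):

import java.io.File;

// Sketch: grab device screenshots as fast as possible into numbered PNG
// frames, which can then be stitched together into a video.
public class ScreenshotLoop {
    public static void main(String[] args) throws Exception {
        for (int frame = 0; ; frame++) {
            // "adb exec-out screencap -p" writes the screen as PNG bytes
            // on stdout; redirect them straight into a frame file.
            new ProcessBuilder("adb", "exec-out", "screencap", "-p")
                    .redirectOutput(new File(String.format("frame_%05d.png", frame)))
                    .start()
                    .waitFor();
        }
    }
}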

Realtime Voice Analysis with Google Glass

Twilio’s recent Autism Hackathon inspired me and several others to try to harness real-time sensor analysis, including voice, to help people with autism be more self-sufficient. We used Google’s new wearable computer, Google Glass. Staff from the Autism Speaks organization, and another gentleman who works full time providing life training for people with Autism Spectrum Disorder, said our hack would be great for mock interview training! Here’s how it works:

We built on top of an excellent open source sound analysis app called Audalyzer. Here’s what the basic Audalyzer screen looks like running on Google Glass with a whistle at a certain tone in the background:

The screens we added then help someone keep their voice calm and level during an interview, comparing the current loudest pitch to the average:

We also used other sensors. For example, the orientation sensor is used to track gaze level. Here is looking at your feet:

Here is a level gaze:

A scoring system rewards users who get things right and ace their interview!
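The pitch-levelness comparison mentioned above boils down to something like this hypothetical analyzer, using an exponential moving average as the baseline (the real code in the repository differs in the details):

// Hypothetical sketch: compare the current loudest pitch against a running
// (exponential moving) average of itself to judge how level the voice is.
public class PitchLevelnessSketch {
    private static final float SMOOTHING = 0.1f; // EMA weight for new samples
    private static final float MAX_DEVIATION_HZ = 40f; // allowed wobble, hypothetical

    private Float averagePitchHz; // null until the first reading

    public synchronized Boolean isConditionGood(float currentPitchHz) {
        if (averagePitchHz == null) {
            averagePitchHz = currentPitchHz;
            return null; // still measuring
        }
        averagePitchHz += SMOOTHING * (currentPitchHz - averagePitchHz);
        return Math.abs(currentPitchHz - averagePitchHz) < MAX_DEVIATION_HZ;
    }
}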

The code is all open source on GitHub. Most of the new code is in the mymonitor package, with some ugly hooks into the main app due to time constraints at the hackathon. The code is broken into separate analyzers that can warn the user about behavior. For example, here’s a simple analyzer for speaking too loudly:

public class TooLoudAnalyzer implements Analyzer {
    
	private static final float TOO_LOUD = -20f;
	
	private static TooLoudAnalyzer instance = new TooLoudAnalyzer();
	
	private Float currentVolume;
	
	public static synchronized TooLoudAnalyzer getInstance() {
		return instance;
	}
	
	@Override
	public synchronized String getLabel() {
		final Boolean okLoudness = isConditionGood();
		return null == okLoudness ? "Measuring loudness..."
				: okLoudness ? ("Good Job Lance!\n("
						+ Math.round(currentVolume) + "dB vs "
						+ Math.round(TOO_LOUD) + "dB)")
						: ("Please keep your voice down.\n("
								+ Math.round(currentVolume) + "dB vs "
								+ Math.round(TOO_LOUD) + "dB)");
	}
	
	@Override
	public synchronized Boolean isConditionGood() {
		if (null == currentVolume) {
			return null;
		}
		return currentVolume < TOO_LOUD;
	}
	
	public synchronized void setPower(float power) {
		if (Float.isInfinite(power) || Float.isNaN(power)) {
			return;
		}
		currentVolume = power;
	}
	
	@Override
	public synchronized Integer getDrawable() {
		return R.drawable.loud_card;
	}
}
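Feeding the analyzer and showing its output is then just a matter of wiring, roughly like this (updateCard here is a hypothetical UI hook; the real hookup lives in the mymonitor package):

// Hypothetical wiring: push each new power reading (in dB) into the
// analyzer, then pull the status back out to display on the Glass screen.
void onNewPowerReading(float currentPowerDb) {
    TooLoudAnalyzer analyzer = TooLoudAnalyzer.getInstance();
    analyzer.setPower(currentPowerDb); // e.g. from the FFT power step
    Boolean ok = analyzer.isConditionGood(); // null means still measuring
    updateCard(analyzer.getLabel(), ok); // hypothetical UI update
}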

Thanks for checking out our team’s hack! This seems to be a really interesting area to work in! Rock Health recently funded a company called Spire, which helps lower stress and make people more productive by measuring breathing patterns and letting people train themselves to be calmer. Measuring voice offers similarly interesting possibilities, not just for keeping your own voice calm, but also for detecting stress or pain levels in a conversation partner’s or patient’s voice.

-Lance

Interested in working on cutting edge Android and Glass apps at startups and multi-billion dollar companies? New Frontier Nomads is hiring! Check out our job listing.

felixinclusis:

ronulicny: “The Answer Is No”, 1958. By: KAY SAGE….

Replace the frames with LCDs and you have my day job…

Like Android’s cooler older brother :)

High Frame Rate Video Output for Google Glass

Taking turns wearing Google Glass only goes so far! When you need to show the Google Glass display output up on a big screen, here’s a good way to go about it:

1. Start the MyGlass app on a paired Android phone.


2. Select the Screencast option from the drop-down.


You now have the Google Glass screen mirrored on your display. Tap the touchpad, or use the look-up gesture if you have it enabled, to wake up Glass and test it out.


3. Use a video output adapter from the Android phone to show the Google Glass display.


Video adapters differ by phone, unfortunately. I’m using an MHL adapter, which works with recent HTC and Samsung phones to output HDMI video from the micro-USB port, the same port used for charging. The Nexus 4 instead supports a similar adapter called SlimPort. I’ve also successfully used a Miracast wireless display adapter from Netgear with an HTC One. Since Miracast uses WiFi Direct, a connection just between the phone and the display adapter, it’s much less laggy than internet-based and conventional WiFi wireless display options. These are all HDMI output only, so you often need a converter box to then go to VGA for projectors. The SlimPort technology is supposed to support VGA out as well, but I’ve yet to find an actual adapter for sale.

This Screencast method has various pros and cons vs. the Android Debug Bridge (ADB) method I previously covered. The frame rate is higher, and you don’t need a computer with ADB installed; comments on my previous post indicated that getting ADB working was not easy for non-Android developers. As a drawback, however, you need a phone with video out and have to depend on wireless signals. As such, this method may be better for home and office use, or smaller investor or project meetings.

Wireless equipment often performs poorly at conferences, events, and hackathons. At one Google I/O keynote, Google had to ask the audience many times to turn off their interfering devices. Additionally, the Screencast option in the MyGlass app isn’t very reliable across devices. I can’t get it to work on an HTC One, for example, so I have to use a different phone than I’d prefer.

Video output can also be a good way to record video of the Glass display. Be careful with that, though. In early experiments I bought a very expensive HDMI recorder box, a Blackmagic Intensity Shuttle, only to find out it was incompatible with MHL output. I think recording will require converting from HDMI to something analog, or getting a box to strip out any protection flags on the HDMI signal.

Good luck at any presentations, meetings, and events where you need to show off Glass and your apps for it! I’ve won more in prizes at such events than Glass originally cost, so you can make Google Glass pay for itself with these methods. The Explorer Edition is so expensive that many forums and communities have banned “Glass funds,” where people collect money toward buying one, because those threads drown out all other activity.

-Lance

Interested in working on cutting edge Android apps at startups and multi-billion dollar companies? New Frontier Nomads is hiring! Check out our job listing.