Tags: android, google-glass, google-gdk

How to Navigate a Google Glass GDK Immersion Application Using Voice Commands Only?


How would I go about coding a voice trigger to navigate Google Glass Cards?

This is how I see it happening:

1) "Ok Glass, Start My Program"

2) Application begins and shows the first card

3) User can say "Next Card" to move to the next card 
(somewhat the equivalent of swiping forward when in the timeline)

4) User can say "Previous Card" to go back 

The cards I need to display are simple text and images. I'm wondering if I can set up a listener of some type to listen for voice commands while a card is being shown.


I've researched "Glass voice command nearest match from given list" but wasn't able to get the code running, although I do have all the libraries.

Side note: it's important that the user can still see the card while using the voice commands. Also, his hands are busy, so tapping/swiping isn't an option.

Any ideas on how to control the timeline within my Immersion app using voice control only would be greatly appreciated!

I am tracking https://code.google.com/p/google-glass-api/issues/detail?id=273 as well.


My ongoing research led me back to the Google Glass developer docs and Google's suggested way of listening for gestures: https://developers.google.com/glass/develop/gdk/input/touch#detecting_gestures_with_a_gesture_detector

How can we activate these gestures with voice commands?
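
For reference, the gesture-detector pattern from those docs looks roughly like this (a minimal sketch of the GDK touchpad API; CardActivity is a made-up name, and the swipes here do exactly what I want voice commands to do):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.MotionEvent;
    import com.google.android.glass.touchpad.Gesture;
    import com.google.android.glass.touchpad.GestureDetector;

    public class CardActivity extends Activity {
        private GestureDetector mGestureDetector;

        @Override
        protected void onCreate(Bundle bundle) {
            super.onCreate(bundle);
            mGestureDetector = new GestureDetector(this)
                    .setBaseListener(new GestureDetector.BaseListener() {
                        @Override
                        public boolean onGesture(Gesture gesture) {
                            if (gesture == Gesture.SWIPE_RIGHT) {
                                // the "Next Card" action I want by voice
                                return true;
                            } else if (gesture == Gesture.SWIPE_LEFT) {
                                // the "Previous Card" action I want by voice
                                return true;
                            }
                            return false;
                        }
                    });
        }

        // Touchpad events arrive here; forward them to the detector.
        @Override
        public boolean onGenericMotionEvent(MotionEvent event) {
            return mGestureDetector.onMotionEvent(event);
        }
    }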


Android has just released a beta of wearable notifications with remote input (http://developer.android.com/wear/notifications/remote-input.html). Is there a way to use this to answer my question? It still feels like we are one step away, since we can call on the service but can't have it "sleep" and "wake up" as a background service when we talk.


Solution

  • I'm writing out the entire code in detail since it took me such a long time to get it working; perhaps it'll save someone else valuable time.

    This code is an implementation of contextual voice commands as described in the GDK docs on Google Developers: Contextual voice commands.

    ContextualMenuActivity.java

        package com.drace.contextualvoicecommands;

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.Menu;
        import android.view.MenuItem;
        import com.drace.contextualvoicecommands.R;
        import com.google.android.glass.view.WindowUtils;

        public class ContextualMenuActivity extends Activity {

            @Override
            protected void onCreate(Bundle bundle) {
                super.onCreate(bundle);

                // Requests a voice menu on this activity. As for any other
                // window feature, be sure to request this before
                // setContentView() is called.
                getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
                setContentView(R.layout.activity_main);
            }

            @Override
            public boolean onCreatePanelMenu(int featureId, Menu menu) {
                if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
                    getMenuInflater().inflate(R.menu.main, menu);
                    return true;
                }
                // Pass through to super to set up the touch menu.
                return super.onCreatePanelMenu(featureId, menu);
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }

            @Override
            public boolean onMenuItemSelected(int featureId, MenuItem item) {
                if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
                    switch (item.getItemId()) {
                        case R.id.dogs_menu_item:
                            // handle top-level dogs menu item
                            break;
                        case R.id.cats_menu_item:
                            // handle top-level cats menu item
                            break;
                        case R.id.lab_menu_item:
                            // handle second-level labrador menu item
                            break;
                        case R.id.golden_menu_item:
                            // handle second-level golden menu item
                            break;
                        case R.id.calico_menu_item:
                            // handle second-level calico menu item
                            break;
                        case R.id.cheshire_menu_item:
                            // handle second-level cheshire menu item
                            break;
                        default:
                            return true;
                    }
                    return true;
                }
                // Good practice to pass through to super if not handled.
                return super.onMenuItemSelected(featureId, item);
            }
        }
    

    activity_main.xml (layout)

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent" >

            <TextView
                android:id="@+id/coming_soon"
                android:layout_alignParentTop="true"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="@string/voice_command_test"
                android:textSize="22sp"
                android:layout_marginRight="40px"
                android:layout_marginTop="30px"
                android:layout_marginLeft="210px" />

        </RelativeLayout>
    

    strings.xml

        <resources>
            <string name="app_name">Contextual voice commands</string>
            <string name="voice_start_command">Voice commands</string>
            <string name="voice_command_test">Say "Okay, Glass"</string>
            <string name="show_me_dogs">Dogs</string>
            <string name="labrador">labrador</string>
            <string name="golden">golden</string>
            <string name="show_me_cats">Cats</string>
            <string name="cheshire">cheshire</string>
            <string name="calico">calico</string>
        </resources>
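
    res/menu/main.xml (not included in the original answer, but the activity inflates R.menu.main; this is a reconstruction from the item IDs and strings above, mirroring the GDK contextual-voice-commands sample)

        <menu xmlns:android="http://schemas.android.com/apk/res/android">
            <!-- Top-level "Dogs" command with two second-level commands. -->
            <item
                android:id="@+id/dogs_menu_item"
                android:title="@string/show_me_dogs">
                <menu>
                    <item
                        android:id="@+id/lab_menu_item"
                        android:title="@string/labrador" />
                    <item
                        android:id="@+id/golden_menu_item"
                        android:title="@string/golden" />
                </menu>
            </item>
            <!-- Top-level "Cats" command with two second-level commands. -->
            <item
                android:id="@+id/cats_menu_item"
                android:title="@string/show_me_cats">
                <menu>
                    <item
                        android:id="@+id/calico_menu_item"
                        android:title="@string/calico" />
                    <item
                        android:id="@+id/cheshire_menu_item"
                        android:title="@string/cheshire" />
                </menu>
            </item>
        </menu>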
    

    AndroidManifest.xml

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.drace.contextualvoicecommands"
            android:versionCode="1"
            android:versionName="1.0" >

            <uses-sdk
                android:minSdkVersion="19"
                android:targetSdkVersion="19" />

            <uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

            <application
                android:allowBackup="true"
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name" >

                <activity
                    android:name="com.drace.contextualvoicecommands.ContextualMenuActivity"
                    android:label="@string/app_name" >
                    <intent-filter>
                        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
                    </intent-filter>

                    <meta-data
                        android:name="com.google.android.glass.VoiceTrigger"
                        android:resource="@xml/voice_trigger_start" />
                </activity>

            </application>
        </manifest>
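
    res/xml/voice_trigger_start.xml (referenced by the manifest but not shown above; a sketch assuming the voice_start_command string from strings.xml is the launch keyword, which is why the DEVELOPMENT permission is declared)

        <?xml version="1.0" encoding="utf-8"?>
        <trigger keyword="@string/voice_start_command" />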
    

    It's been tested and works great under Google Glass XE22!
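
    To tie this back to the original question: the same mechanism can drive "Next Card" / "Previous Card" navigation by making those phrases the menu items. A hypothetical handler (next_card_item, previous_card_item, mCardScroller, and mAdapter are assumed names, not part of the sample above):

        // Hypothetical: "Next Card" / "Previous Card" voice commands moving a CardScrollView.
        @Override
        public boolean onMenuItemSelected(int featureId, MenuItem item) {
            if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
                int position = mCardScroller.getSelectedItemPosition();
                switch (item.getItemId()) {
                    case R.id.next_card_item:      // menu item titled "Next Card"
                        if (position < mAdapter.getCount() - 1) {
                            mCardScroller.setSelection(position + 1);
                        }
                        return true;
                    case R.id.previous_card_item:  // menu item titled "Previous Card"
                        if (position > 0) {
                            mCardScroller.setSelection(position - 1);
                        }
                        return true;
                }
            }
            return super.onMenuItemSelected(featureId, item);
        }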