Using machine learning, computer vision, and small, wrist-worn time-of-flight cameras, we can recover hand pose and micro-gestures (small movements of the fingers and thumb). Ubiquitous wearables will clearly need a similar eyes-free user interface, but how should this interface be designed? We are examining interaction through user tests: which gesture-set designs work well for text entry or focus selection? How can we predict the user experience and usability of such systems? We hope to answer these questions through the EMGRIE system and experimental application design.
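To make the recognition idea concrete, here is a minimal, hypothetical sketch of classifying micro-gestures from hand-pose feature vectors (e.g. joint angles). Everything in it is an illustrative assumption, not EMGRIE's actual pipeline: the gesture labels, the four-feature representation, and the synthetic data are all invented, and a real system would extract pose features from the wrist-worn time-of-flight camera with a learned model rather than generate them randomly.

```python
import math
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(sample, centroids):
    """Return the gesture label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Synthetic training data: 4 hypothetical joint-angle features per pose,
# two invented micro-gesture classes. Real features would come from a
# pose-estimation model running on depth frames.
random.seed(0)
train = {
    "thumb_tap":   [[0.9 + random.gauss(0, 0.05), 0.1, 0.1, 0.1] for _ in range(20)],
    "index_pinch": [[0.1, 0.9 + random.gauss(0, 0.05), 0.8, 0.1] for _ in range(20)],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

# Classify a new, unseen pose vector.
print(nearest_centroid([0.85, 0.12, 0.08, 0.11], centroids))  # → thumb_tap
```

A nearest-centroid rule is only a stand-in here; the point is that once poses are reduced to feature vectors, micro-gesture recognition becomes an ordinary classification problem, which is what lets different gesture-set designs be swapped in and compared in user tests.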