Towards Inviscid Text-Entry for Blind People through Non-Visual Word Prediction Interfaces

Abstract

Word prediction can significantly improve text-entry rates on mobile touchscreen devices. However, these interactions are inherently visual: to benefit from the suggestions, users must constantly scan the screen for new word predictions. In this paper, we discuss the design space for non-visual word prediction interfaces and present Shout-out Suggestions, a novel interface that provides non-visual access to word predictions on existing mobile devices.
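The core mechanism underlying word prediction can be illustrated with a minimal sketch: rank vocabulary words matching the typed prefix by frequency and surface the top few as suggestions. This is an illustrative example only, not the paper's implementation; the vocabulary and counts below are made up.

```python
from typing import Dict, List

def predict(prefix: str, vocab: Dict[str, int], k: int = 3) -> List[str]:
    """Return up to k vocabulary words starting with `prefix`,
    most frequent first."""
    matches = [w for w in vocab if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -vocab[w])[:k]

# Hypothetical vocabulary with usage counts.
vocab = {"the": 500, "then": 120, "there": 200, "they": 300, "thermal": 10}
print(predict("the", vocab))  # top-3 completions for "the"
```

A non-visual interface such as the one described here would then deliver these suggestions through audio (e.g. speech output) rather than requiring the user to visually scan a suggestion bar.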

Publication
CHI Workshop on Inviscid Text-Entry and Beyond
Kyle Montague
Associate Professor