Wearable Cognitive Assistance (WCA) applications help users with tasks such as assembling physical objects, remembering people's names, exercising, and playing games. WCA is a compelling use case for edge computing: many of these applications rely on large deep neural network (DNN) models that are too computationally intensive to run on a small, lightweight mobile device, yet they generate large volumes of data that must be processed quickly. Computation must therefore be offloaded to a server in close network proximity to the mobile device that captures the data.

Scaling WCA up to tasks with 100 or more parts is challenging because of (a) the difficulty of vision-based state detection when very small parts appear in the context of much larger objects being assembled; (b) the combinatorial explosion of possible error states; and (c) the large manual effort needed to create accurate DNNs that can reliably determine when task steps have been completed.

These problems can be addressed by a combination of (1) hierarchical decomposition of complex assemblies into modular compositions of subassemblies, (2) on-demand, seamless escalation to live expert assistance, and (3) synthetic generation of training sets for born-digital components. The resulting solution can be implemented in a scalable and maintainable way using modular software components. This will enable the development of WCA applications for more complex tasks, a necessary step on the path toward making WCA practical for real-world tasks.
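The hierarchical decomposition in idea (1) can be pictured as a tree of subassemblies, each small enough for its own vision-based step detector to verify independently. The sketch below is purely illustrative and not from the actual WCA system; the class and method names (`Subassembly`, `complete_step`, `is_complete`) are assumptions introduced here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subassembly:
    """One modular unit of a larger assembly task (hypothetical model)."""
    name: str
    steps_total: int                  # assembly steps within this unit
    steps_done: int = 0
    children: List["Subassembly"] = field(default_factory=list)

    def complete_step(self) -> None:
        # In a real system, a per-unit vision detector would call this
        # when it recognizes that one of this unit's steps is finished.
        if self.steps_done < self.steps_total:
            self.steps_done += 1

    def is_complete(self) -> bool:
        # A unit is done when its own steps and all child units are done,
        # so task state is checked locally rather than over 100+ parts.
        return (self.steps_done == self.steps_total
                and all(c.is_complete() for c in self.children))

# Toy two-level decomposition: the final assembly joins two subassemblies.
wing = Subassembly("wing", steps_total=2)
fuselage = Subassembly("fuselage", steps_total=1)
plane = Subassembly("plane", steps_total=1, children=[wing, fuselage])

for unit in (wing, wing, fuselage, plane):
    unit.complete_step()

print(plane.is_complete())  # True only once every subassembly is finished
```

Because each subassembly carries its own small detector and completion logic, error states and training effort grow with the size of a unit rather than with the whole assembly, which is the scalability argument the abstract makes.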
Mahadev Satyanarayanan (Chair)
Padmanabhan Pillai (Intel Labs)
In Person and Zoom Participation. See announcement.