
As the use of robotic manipulation in manufacturing grows, so do the robustness requirements for fastening operations such as screwdriving. To investigate the reliability of screwdriving and the diverse failure categories that can arise, we collected a dataset of screwdriving operations and manually classified them into stages and result categories. I will present the data collection process, analysis, and lessons learned, and discuss how to transfer this knowledge to collecting another manipulation dataset.

Research Qualifier Committee:
Matt Mason
Nancy Pollard
Artur Dubrawski
Stefanos Nikolaidis

The Robotics Institute celebrates National Robotics Week with an Open House, lab tours, demos, talks, and more...

Watch for details!

With a prosthetic device, people with a lower limb amputation can remain physically active, but most do not achieve medically recommended physical activity levels and are therefore at greater risk of obesity and cardiovascular disease. Their reduced activity may be attributed to the 10–30% increase in the energetic cost of walking compared to able-bodied individuals. Several active ankle-foot systems have been developed to provide external power during the push-off phase of gait, potentially alleviating this high cost. This talk will focus on the biologically inspired design of these devices and on several of our recent and ongoing projects exploring whether and how people use external mechanical power to reduce their metabolic effort, how this depends on the magnitude of power delivered, and how it varies with an individual's characteristics. I will then discuss our recent efforts to evaluate powered prosthetic technology in real-world environments.

Deanna Gates is an Assistant Professor in the Departments of Movement Science, Biomedical Engineering, and Robotics at the University of Michigan. She earned her B.S. in Mechanical Engineering from the University of Virginia (2002), her M.S. in Biomedical Engineering from Boston University (2004), and her Ph.D. in Biomedical Engineering from the University of Texas at Austin (2009). Dr. Gates worked in engineering consulting and in civilian and military clinical gait laboratories before arriving at the University of Michigan in 2012. She directs the Rehabilitation Biomechanics Laboratory, which focuses on the study of repetitive human movements such as walking and reaching. Throughout these studies, her group seeks to determine which aspects of movement a person actively controls and how this function can most effectively be modeled. These models, and their governing control strategies, can then be used to design both passive and active devices that mimic biological function and restore or improve function in individuals with disability. Another focus of the lab's research is determining appropriate outcome measures for performance with new prosthetic and orthotic technology.

Faculty Host: Katharina Muelling

Deformable objects such as cables and clothes are ubiquitous in factories, hospitals, and homes. While a great deal of work has investigated the manipulation of rigid objects in these settings, manipulation of deformable objects remains under-explored. The problem is indeed challenging, as these objects are not straightforward to model and have infinite-dimensional configuration spaces, making it difficult to apply established approaches for motion planning and control. One of the key challenges in manipulating deformable objects is selecting a model which is efficient to use in a control loop, especially when an accurate model is not available. Our approach to control uses a set of simple models of the object, determining which model to use at the current time step via a novel Multi-Armed Bandit algorithm that reasons over estimates of model utility.
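To make the model-selection idea concrete, here is a minimal sketch of a UCB1-style bandit choosing among candidate object models inside a control loop. This is an illustrative stand-in rather than the talk's exact bandit algorithm; the `execute` callback, the utility signal (one-step reduction in task error), and the model representation are all assumptions made for the example.

```python
import numpy as np

class ModelBandit:
    """UCB1-style selection among candidate object models (illustrative only)."""

    def __init__(self, models):
        self.models = models                 # candidate simple models of the object
        self.counts = np.zeros(len(models))  # times each model has been chosen
        self.values = np.zeros(len(models))  # running mean of observed utility

    def select(self):
        # Try every model once, then trade off exploitation and exploration.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        t = self.counts.sum()
        ucb = self.values + np.sqrt(2.0 * np.log(t) / self.counts)
        return int(np.argmax(ucb))

    def update(self, i, utility):
        # Incremental mean update of the chosen model's estimated utility.
        self.counts[i] += 1
        self.values[i] += (utility - self.values[i]) / self.counts[i]

def control_step(bandit, object_state, goal, execute):
    # Hypothetical loop body: pick a model, command the robot with it, and
    # reward the model by how much it reduced the task error.
    i = bandit.select()
    error_before = np.linalg.norm(object_state - goal)
    object_state = execute(bandit.models[i], object_state, goal)
    error_after = np.linalg.norm(object_state - goal)
    bandit.update(i, error_before - error_after)
    return object_state
```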

I will also present our work on interleaving planning and control for deformable object manipulation in cluttered environments, again without an accurate model of the object. Our method predicts when a controller will be trapped (e.g., by obstacles) and invokes a planner to bring the object near its goal. The key to making the planning tractable is to avoid simulating the motion of the object, instead only forward-propagating the constraint on overstretching. This approach takes advantage of the object’s compliance, which allows it to conform to the environment as long as stretching constraints are satisfied. Our method is able to quickly plan paths in environments with complex obstacle arrangements and then switch to the controller to achieve a desired object configuration.
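As a deliberately simplified illustration of this idea, the check below validates a candidate gripper path by forward-propagating only the stretching constraint, with no simulation of the object's dynamics. The real method reasons about stretching through the environment (e.g., paths pulled taut around obstacles); the straight-line distance test and the `max_stretch` parameter here are simplifying assumptions.

```python
import numpy as np

def overstretch_free(gripper_a_path, gripper_b_path, rest_length, max_stretch=1.2):
    """Accept a candidate gripper path only if it cannot overstretch the object.

    Rather than simulating the deformable object, we rely on its compliance
    and check just one necessary condition: the distance between the two
    grippers never exceeds the object's maximum stretched length.
    """
    limit = max_stretch * rest_length
    for a, b in zip(gripper_a_path, gripper_b_path):
        if np.linalg.norm(np.asarray(a) - np.asarray(b)) > limit:
            return False  # this path would overstretch (or tear) the object
    return True
```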

Dmitry Berenson received a B.S. in Electrical Engineering from Cornell University in 2005 and a Ph.D. from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He completed a postdoc at UC Berkeley in 2012 and was an Assistant Professor at WPI from 2012 to 2016. He joined the University of Michigan in 2016 as an Assistant Professor in the EECS Department and Robotics Institute. He received the IEEE RAS Early Career Award in 2016.

Faculty Host: David Held

Creating realistic virtual humans has traditionally been considered a research problem in Computer Animation, primarily for entertainment applications. With recent breakthroughs in collaborative robots and deep reinforcement learning, accurately modeling human movements and behaviors has become a common challenge faced by researchers in robotics and artificial intelligence as well as Computer Animation. In this talk, I will focus on two distinct yet closely related problems: how to teach robots to move like humans and how to teach robots to interact with humans.

While Computer Animation research has shown that it is possible to teach a virtual human to mimic human athletes’ movements, transferring such complex controllers to robot hardware in the real world is perhaps even more challenging than learning the controllers themselves. In this talk, I will focus on two strategies to transfer highly dynamic skills from character animation to robots: teaching robots basic self-preservation motor skills and developing data-driven algorithms on transfer learning between simulation and the real world.

The second part of the talk will focus on robotic assistance with dressing, one of the activities of daily living (ADLs) most commonly requested by older adults. To safely train a robot to physically interact with humans, one can design a generative model of human motion based on prior knowledge or recorded motion data. Although this approach has been successful in Computer Animation, such as in generating locomotion, hand-designing a procedure for a loosely defined task such as "being dressed" is likely to bias the model toward the specific data or assumptions used. I will describe a new approach to modeling human motion without being biased toward the specific situations presented in the dataset.

C. Karen Liu is an associate professor in the School of Interactive Computing at Georgia Tech. She received her Ph.D. in Computer Science from the University of Washington. Liu's research interests are in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. She has developed computational approaches to modeling realistic and natural human movements, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms. The algorithms and software developed in her lab have fostered interdisciplinary collaboration with researchers in robotics, computer graphics, mechanical engineering, biomechanics, neuroscience, and biology. Liu received a National Science Foundation CAREER Award and an Alfred P. Sloan Fellowship, and was named one of Technology Review's Young Innovators Under 35. In 2012, Liu received the ACM SIGGRAPH Significant New Researcher Award for her contributions to the field of computer graphics.


Faculty Host: David Held

Data-driven approaches to modeling time series are important in a variety of applications, from market prediction in economics to the simulation of robotic systems. However, traditional supervised machine learning techniques designed for i.i.d. data often perform poorly on these sequential problems. This thesis proposes that time-series and sequential prediction, whether for forecasting, filtering, or reinforcement learning, can be effectively achieved by directly training recurrent prediction procedures rather than by building generative probabilistic models.

To this end, we introduce a new training algorithm for learned time-series models, Data as Demonstrator (DaD), that theoretically and empirically improves multi-step prediction performance for model classes such as recurrent neural networks, kernel regressors, and random forests. Additionally, experimental results indicate that DaD can accelerate model-based reinforcement learning. We next show that latent-state time-series models, where a sufficient state parametrization may be unknown, can be learned effectively in a supervised way using predictive representations derived from observations alone. Our approach, Predictive State Inference Machines (PSIMs), directly optimizes inference performance, through a DaD-style training procedure, without local optima, by identifying the recurrent hidden state as a predictive belief over statistics of future observations. Finally, we experimentally demonstrate that augmenting recurrent neural network architectures with Predictive-State Decoders (PSDs), derived using the same objective optimized by PSIMs, improves both performance and convergence for recurrent networks on probabilistic filtering, imitation learning, and reinforcement learning tasks. Fundamental to our learning framework is the idea that the prediction of observable quantities is a lingua franca for building AI systems.
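For intuition, here is a minimal sketch of a DaD-style training loop, assuming a ridge regressor as the one-step model and a single, sufficiently long training series; the actual algorithm applies equally to recurrent networks, kernel regressors, and random forests. The key move is to roll the model out on its own predictions and add corrective pairs mapping each predicted state to the ground-truth next observation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def dad_train(observations, horizon=10, iters=5):
    """DaD-style training sketch; `observations` is a (T, d) array with T > horizon + 1."""
    X, Y = observations[:-1], observations[1:]   # standard one-step pairs
    model = Ridge().fit(X, Y)
    for _ in range(iters):
        X_new, Y_new = [], []
        for t in range(len(observations) - horizon):
            x = observations[t]
            for k in range(1, horizon):
                x = model.predict(x[None])[0]          # model's own rollout
                X_new.append(x)                        # predicted state at t + k ...
                Y_new.append(observations[t + k + 1])  # ... should map to the true next obs
        # Retrain on the original data augmented with corrective pairs.
        X = np.vstack([X, np.array(X_new)])
        Y = np.vstack([Y, np.array(Y_new)])
        model = Ridge().fit(X, Y)
    return model
```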

Thesis Committee:
J. Andrew Bagnell (Co-chair)
Martial Hebert (Co-chair)
Jeff Schneider
Byron Boots (Georgia Institute of Technology)


Robot controllers, including locomotion controllers, often consist of expert-designed heuristics. These heuristics can be hard to tune, particularly in higher dimensions. It is typical to tune or learn controller parameters in simulation and then test on hardware. However, controllers learned in simulation often do not transfer to hardware due to model mismatch, which necessitates controller optimization directly on hardware. Experiments on walking robots are expensive, due to the time involved and the risk of damage to the robot. This has led to recent interest in adapting data-efficient learning techniques to robotics. One popular method is Bayesian Optimization, a sample-efficient black-box optimization scheme, but its performance typically degrades in problems of higher dimensionality, including dimensionality at the scale seen in bipedal locomotion. We aim to overcome this problem by incorporating prior knowledge to reduce the number of dimensions in a meaningful way, with a focus on bipedal locomotion. We propose two ways of doing this: hand-designing features based on knowledge of human walking, and using neural networks to extract such features automatically. Our hand-designed features project the initial controller space onto a 1-dimensional space and show promise in simulation and on hardware. The automatically learned features can be of varying dimension; they also improve on traditional Bayesian Optimization methods and perform competitively with our hand-designed features in simulation. Our hardware experiments are conducted on the ATRIAS robot, while simulation experiments use two robots: ATRIAS and a 7-link biped model. Our results show that these feature transforms capture important aspects of walking and accelerate learning on hardware and in perturbed simulation, compared to traditional Bayesian Optimization and other optimization methods.
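A minimal sketch of the optimization in a reduced 1-D feature space follows; `feature_to_controller` (mapping a scalar feature back to full controller parameters) and `evaluate_cost` (running a hardware or simulation rollout and returning a cost) are hypothetical placeholders, and the expected-improvement acquisition over a dense grid is one standard choice among several.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt_1d(feature_to_controller, evaluate_cost, bounds=(0.0, 1.0), n_iters=20):
    """Bayesian optimization of a walking cost over a 1-D feature (sketch)."""
    rng = np.random.default_rng(0)
    X = rng.uniform(*bounds, size=(3, 1))  # a few random initial evaluations
    y = np.array([evaluate_cost(feature_to_controller(x[0])) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    grid = np.linspace(*bounds, 200)[:, None]
    for _ in range(n_iters):
        gp.fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        # Expected improvement (for minimization) over the candidate grid.
        z = (y.min() - mu) / np.maximum(sigma, 1e-9)
        ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = grid[np.argmax(ei)]
        y = np.append(y, evaluate_cost(feature_to_controller(x_next[0])))
        X = np.vstack([X, x_next[None]])
    return X[np.argmin(y)], y.min()
```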

Thesis Committee:
Christopher G. Atkeson (Chair)
Hartmut Geyer
Oliver Kroemer
Stefan Schaal (MPI Tübingen and University of Southern California)

This presentation will highlight the history of autonomous vehicle development at Ford Motor Company and elsewhere in the industry, with an emphasis on some of the difficult challenges that remain to be solved. I will also touch on examples illustrating the broader range of potential applications for AI and robotics within the transportation industry.

Dr. James McBride has been a member of the Research Staff at Ford Motor Company since 1984, and is presently the Senior Technical Leader for Autonomous Vehicles.  More than a decade ago, he founded Ford’s research program in vehicular autonomy as a means to enhance safety and mobility within the automotive industry.  Notably, he led one of a select few teams to the finals of both the Desert and Urban DARPA Grand Challenges, watershed events in the rapid advancement of autonomous technologies. Although Dr. McBride is nominally a solid-state physicist with expertise in sensing techniques involving lasers and optics, he has also worked on a wide variety of topics such as nuclear physics, alternative energy devices, exhaust-gas catalysis, crystallography, and superconductivity. Over the course of his career, he has collaborated on projects with numerous national laboratories, governmental agencies, universities, and corporations, and has published over 50 peer-reviewed articles and presented more than 100 invited talks.

Faculty Host: Red Whittaker

While much work in human-robot interaction has focused on leader-follower teamwork models, the recent advancement of robotic systems that have access to vast amounts of information suggests the need for robots that take into account the quality of human decision making and actively guide people toward better ways of doing their task. This thesis proposes an equal-partners model, where human and robot engage in a dance of inference and action, and focuses on one particular instance of this dance: the robot adapts its own actions by estimating the probability of the human adapting to the robot. We start with a bounded-memory model of human adaptation parameterized by the human's adaptability: the probability of the human switching toward a strategy newly demonstrated by the robot. We then examine more subtle forms of adaptation, where the human teammate adapts to the robot without replicating the robot's policy. We model the interaction as a repeated game and present an optimal policy computation algorithm whose complexity is linear in the number of robot actions. Integrating these models into robot action selection allows for human-robot mutual adaptation. Human subject experiments in a variety of collaboration and shared-autonomy settings show that mutual adaptation significantly improves human-robot team performance, compared to one-way robot adaptation to the human.
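As a toy illustration of the estimation step (not the thesis' exact formulation), the robot can maintain a discrete belief over the human's adaptability and update it after each observation of whether the human switched to the robot's demonstrated strategy; the grid of candidate values below is an assumption for the example.

```python
import numpy as np

def update_adaptability_belief(belief, alphas, human_switched):
    """Bayesian update of a belief over the human's adaptability.

    `alphas` is a grid of candidate adaptability values (the probability that
    the human switches to the robot's newly demonstrated strategy), and
    `belief` is the current distribution over that grid.
    """
    likelihood = alphas if human_switched else (1.0 - alphas)
    belief = belief * likelihood
    return belief / belief.sum()

# Example: start uniform, then observe the human decline to adapt twice
# before finally adapting once; the belief shifts toward low adaptability.
alphas = np.linspace(0.05, 0.95, 10)
belief = np.full(10, 0.1)
for switched in [False, False, True]:
    belief = update_adaptability_belief(belief, alphas, switched)
```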

Thesis Committee:
Siddhartha Srinivasa (Chair)
Jodi Forlizzi
Emma Brunskill
Ariel Procaccia
David Hsu (National University of Singapore)


One of the most fundamental tasks for any robotics application is the ability to adequately assimilate and respond to incoming sensor data. In the case of 3D range sensing, modern-day sensors generate massive quantities of point cloud data that strain available computational resources. Dealing with large quantities of unevenly sampled 3D point data is a great challenge for many fields, including autonomous driving, 3D manipulation, augmented reality, and medical imaging. This thesis explores how carefully designed statistical models for point cloud data can facilitate, accelerate, and unify many common tasks in the area of range-based 3D perception. We first establish a novel family of compact generative models for 3D point cloud data, offering them as an efficient and robust statistical alternative to traditional point-based or voxel-based data structures. We then show how these statistical models can be utilized toward the creation of a unified data processing architecture for tasks such as segmentation, registration, visualization, and mapping.

In complex robotics systems, it is common for various concurrent perceptual processes to have separate low-level data processing pipelines. Besides introducing redundancy, these processes may perform their own data processing in conflicting or ad hoc ways. To avoid this, tractable data structures and models that share common perceptual processing elements need to be established. Additionally, given that many robotics applications involving point cloud processing are size-, weight-, and power-constrained, these models and their associated algorithms should be deployable on low-power embedded systems while retaining acceptable performance. Given a properly flexible and robust point processor, many low-level tasks could be unified under a common architectural paradigm, greatly simplifying the overall perceptual system.

In this thesis, a family of compact generative models for point cloud data is introduced based on hierarchical Gaussian Mixture Models. Using recursive, data-parallel variants of the Expectation-Maximization algorithm, we construct high-fidelity statistical and hierarchical point cloud models that compactly represent the data as a 3D generative probability distribution. In contrast to raw points or voxel-based decompositions, our proposed statistical model provides a better theoretical footing for robustly dealing with noise, constructing maximum likelihood methods, reasoning probabilistically about free space, utilizing spatial sampling techniques, and performing gradient-based optimizations. Further, the construction of the model as a spatial hierarchy allows for octree-like logarithmic-time access. One challenge compared to previous methods, however, is that our model-based approach incurs a potentially high creation cost. To mitigate this problem, we leverage data parallelism to design models well-suited for GPU acceleration, allowing them to run at rate for many time-critical applications. We show how our models can facilitate various 3D perception tasks, demonstrating state-of-the-art performance in geometric segmentation, registration, dynamic occupancy map creation, and 3D visualization.
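A simplified CPU-side sketch of the hierarchical construction is given below, using an off-the-shelf EM implementation in place of the thesis' recursive, data-parallel GPU variant; the branching factor, depth, and minimum-support threshold are illustrative choices.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def build_hgmm(points, depth=2, n_children=8, min_points=100):
    """Hierarchical Gaussian mixture over a point cloud (illustrative sketch).

    Fit a small GMM, partition the points by their most likely component,
    and refine each well-supported partition with its own GMM one level down.
    """
    node = GaussianMixture(n_components=n_children).fit(points)
    children = [None] * n_children
    if depth > 1:
        labels = node.predict(points)
        for k in range(n_children):
            subset = points[labels == k]
            if len(subset) >= min_points:  # only refine well-supported nodes
                children[k] = build_hgmm(subset, depth - 1, n_children, min_points)
    return {"gmm": node, "children": children}

# Example with synthetic points standing in for a range scan:
cloud = np.random.default_rng(0).normal(size=(5000, 3))
tree = build_hgmm(cloud, depth=2)
```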

Thesis Committee:
Alonzo Kelly (Chair)
Martial Hebert
Srinivasa Narasimhan
Jan Kautz (NVIDIA)


