Robotics, AI & VR Come Together To Rapidly Teach Robots Complex Tasks
(Image Credit: Sisu)
Robots capable of learning — it sounds like something out of a science fiction horror movie, doesn’t it? But in reality, learning from experience is the core of modern AI. And as the technology advances, we are seeing it combine with other innovative fields like robotics, mixed reality, and manufacturing.
One such company blazing the trail in robotics-based AI is Sisu, an Austin-based maker of control systems for robots. Through a recent partnership with Sixense (a company that delivers enterprise computing solutions), they’ve created an interesting way to help robots learn complex tasks using motion tracking and a handheld controller.
Dubbed “programming by demonstration” (“PbD” for short), this methodology lets a robot learn a task by having a human physically demonstrate it. And although it is better suited to tasks that don’t require microscopic accuracy, the technique still has clear advantages.
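To make the idea concrete, here is a minimal sketch of how programming by demonstration can work in principle: poses streamed from a tracked handheld controller are recorded while a human performs the task, then replayed to the robot’s motion controller. This is purely illustrative (the `Pose` and `DemonstrationRecorder` names are hypothetical), not Sisu’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # A simplified robot end-effector position; real systems also track orientation.
    x: float
    y: float
    z: float

class DemonstrationRecorder:
    def __init__(self):
        self.trajectory: list[Pose] = []

    def record(self, pose: Pose) -> None:
        # Called each time the tracked handheld controller reports a new pose
        # during the human demonstration.
        self.trajectory.append(pose)

    def replay(self, move_to) -> int:
        # Send each recorded pose to the robot's motion controller in order.
        for pose in self.trajectory:
            move_to(pose)
        return len(self.trajectory)

# Usage: a human "teaches" three waypoints, then the robot replays them.
rec = DemonstrationRecorder()
for p in [Pose(0, 0, 0), Pose(0.1, 0, 0), Pose(0.1, 0.2, 0)]:
    rec.record(p)

visited = []
steps = rec.replay(visited.append)
print(steps)  # 3
```

The appeal of this approach is exactly what the article describes: the teacher never writes motion code, so the 300-page manual and the matrix math stay out of the picture.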
Professor Maya Cakmak, an expert in PbD at the University of Washington, made the following comment about the new technology:
“The software is still difficult for most robots. Some of the more accurate robots out there have 300-page user manuals. I’ve seen some code for these, and you have to know algebra and matrix transformations to still be able to do anything.”
As you can imagine, these barriers to entry are high enough that enterprise-level organizations are not exactly banging down the door to adopt this technology.
One potential solution could lie in the hands of Covariant.ai, a company that has been applying reinforcement learning (an established machine-learning technique) and adding a VR component to the robots’ training process. Using a combination of motion sensors and control systems adapted from gaming software, they’ve created what they consider to be a ready-to-use set of proprietary hardware designed to train robots for complex tasks.
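For readers unfamiliar with the term, reinforcement learning means an agent improves by trial and error, receiving rewards for good outcomes. The following is a generic tabular Q-learning sketch (a standard textbook form of reinforcement learning, not Covariant.ai’s system): an agent on a short one-dimensional track learns that moving right reaches the goal.

```python
import random

random.seed(0)
n_states, goal = 5, 4
actions = [-1, 1]  # move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for _ in range(500):  # training episodes
    s = 0
    while s != goal:
        # Epsilon-greedy: usually pick the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)  # clamp to the track
        r = 1.0 if s2 == goal else 0.0         # reward only at the goal
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# After training, the greedy action in every non-goal state is "move right".
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(goal)]
print(policy)
```

Systems like Covariant.ai’s pair this kind of trial-and-error learning with far richer inputs, which is where the VR demonstrations come in: human-guided examples give the learner a head start instead of random exploration.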
While it still appears to be too early for mass adoption, this type of technology blending mixed reality and AI is likely what will eventually lead to robots teaching themselves and one another. Humans will always be less precise than robots because of the limitations of the human body, but as the technology advances, human involvement in complex engineering tasks like robotic manufacturing will become less and less necessary.
What do you think about the combination of mixed reality and robotics?