The Revolution of AI by Experts In The Field
The Revolution of AI – this article combines the work of Shivani Bigler with that of:
- Dr. Raj Rajkumar of Carnegie Mellon University, a pioneer in self-driving car technology.
- Dr. Matt Travers of Carnegie Mellon University, leading a team that won a subterranean navigation competition for search and rescue robots.
- Dr. Vijay Kumar at the University of Pennsylvania, working on swarms of drones for precision agriculture.
- Dr. Julie Shah at MIT, leading research on human-robot collaboration.
- Dr. Hod Lipson at Columbia University, developing creative robots and working towards artificial general intelligence and robotic consciousness.
- Dr. Barbara Grosz of Harvard University, a pioneer in natural language processing for AI.
- Hao Li, founder of Pinscreen, creating hyper-realistic digital avatars using AI.
- Matt McMullen, founder of Realbotix, building lifelike androids with AI capabilities.
This article is the continuation from The Revolution of AI by Shivani Bigler – Part 1.
Human-robot Collaboration
Meet Dr. Julie Shah at the Massachusetts Institute of Technology. She's leading groundbreaking research in human-robot collaboration.
By being aware of real people, robots can work and interact with them directly. How do you teach robots or machines to do human-like tasks? The first step, as for any person, is to become immersed in the environment & observe. Then we need an active learning process.
The robot needs to communicate or show back to the person what it has learned. We don't want the robot to learn a direct sequence of actions; we want it to learn a more general understanding. That's ultimately our challenge. But getting a robot to grasp the bigger picture concept requires a lot of observation & guidance. Julie's research focuses on making robot programming easier by teaching robots tasks through demonstration.
Challenges of Human-robot Collaboration
Her colleague Ankit Shah shows how this robot is learning to set a table by observing him do it for two weeks. The AI recognizes objects with a visual tag and learns where items go based on Ankit's placement. Dynamic tasks like setting a table are easy for humans but incredibly hard for robots due to many variables. Even subtle environmental changes can throw them off.
To test if the robot truly learned the concept, Ankit hides an object to see if it can still complete the task properly. Incredibly, when the hidden spoon is revealed, the robot recognizes it & instantly places it correctly next to the bowl. This shows the robot grasped the overall context, not just the specific actions. The software continuously writes and revises its own code – it's learning, like humans, to understand the bigger-picture context beyond the specific task at hand. The main goal is not to develop AI to replace human work, but to find ways for AI and humans to work together effectively.
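To make the idea concrete, here is a minimal sketch of learning placements from demonstration, assuming the robot's vision system already reports each tagged item and its position on the table. The function names and the simple averaging model are illustrative, not the actual software from Shah's lab:

```python
from collections import defaultdict

# Minimal sketch: learn where table items belong by watching demonstrations.
# Each demonstration maps an item name to its observed (x, y) position,
# as reported by a visual-tag tracker (assumed to exist).

def learn_placements(demonstrations):
    """Average the observed position of each item across demonstrations."""
    totals = defaultdict(lambda: [0.0, 0.0, 0])
    for demo in demonstrations:
        for item, (x, y) in demo.items():
            t = totals[item]
            t[0] += x; t[1] += y; t[2] += 1
    return {item: (sx / n, sy / n) for item, (sx, sy, n) in totals.items()}

def set_table(detected_items, model):
    """Plan a placement for every item the camera currently sees,
    even one that was hidden during earlier demonstrations."""
    return {item: model[item] for item in detected_items if item in model}

demos = [
    {"bowl": (0.00, 0.0), "spoon": (0.12, 0.00)},
    {"bowl": (0.01, 0.0), "spoon": (0.11, 0.01)},
]
model = learn_placements(demos)
print(set_table(["spoon", "bowl"], model))  # spoon goes next to the bowl
```

Because the model stores where each item belongs rather than a fixed sequence of actions, an item hidden during training can still be placed correctly the moment it is detected.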
Basketball Team Analogy
People work in teams to build cars and planes, and robots need to be effective team members too. It's real teamwork, like in a basketball game, where you have to think about where your teammates are and when to pass the ball so everything flows smoothly. The basketball analogy is apt because we actually need to know where our robot teammates will be spatially, and timing is critically important. So we need to develop AI that lets robots work well with us humans. One of the hardest parts of creating advanced AI is something even humans sometimes struggle with – anticipation.
Simulation – Human-robot Interaction
Anticipating what a teammate or co-worker might do requires understanding context at a sophisticated level and predicting what will happen next. Can robots make predictions as accurately as we can? At Penn, researchers are giving an industrial robot named Abby the intelligence needed to anticipate a human co-worker's actions. In a simulated manufacturing task, a person places fasteners on a surface and a robot applies sealant. For safety reasons, human-robot interaction is currently minimal in factories, with robots typically kept separate from people in cages.
The goal is to make it safe for a person to interact directly with a robot. To work together, the robot must first be able to see & recognize the human's actions & adjust to their every move. Cameras and lights act as the robot's “eyes” to track the person's hand movements. If the person moves their hand unexpectedly into the robot's workspace, the AI software quickly recognizes this and stops the robot's motion.
It's important for the robot and human to be able to safely share the workspace. Building on this teamwork, the next step is helping the robot anticipate where the human will move next based on subtle context clues. The robot will not only track the human's past actions, but predict which area they will work in next. The robot then plans its own motions to avoid those locations, allowing the human & robot to work more closely and safely together without colliding.
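Here is a minimal sketch of that two-part logic – a hard safety stop plus a simple statistical anticipation of the human's next work zone – assuming a vision system that already reports which zone the tracked hand is in. The class and zone names are hypothetical, not the lab's actual code:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class HandObservation:
    x: float
    y: float
    zone: str  # workspace zone the hand is currently in

class SafetyAndAnticipation:
    """Stop the robot when the human enters its workspace, and predict
    the zone the human is likely to work in next from past transitions."""

    def __init__(self, robot_zone):
        self.robot_zone = robot_zone
        self.transitions = Counter()  # (previous zone, next zone) counts
        self.last_zone = None

    def update(self, obs: HandObservation):
        # Safety rule: hand inside the robot's active zone -> stop motion.
        if obs.zone == self.robot_zone:
            return "STOP"
        if self.last_zone is not None and obs.zone != self.last_zone:
            self.transitions[(self.last_zone, obs.zone)] += 1
        self.last_zone = obs.zone
        return "CONTINUE"

    def predicted_next_zone(self):
        # Anticipation: the zone the human most often moved to from here.
        candidates = {nz: c for (pz, nz), c in self.transitions.items()
                      if pz == self.last_zone}
        return max(candidates, key=candidates.get) if candidates else None

monitor = SafetyAndAnticipation(robot_zone="sealant_area")
for z in ["bin", "left_panel", "bin", "left_panel", "bin"]:
    monitor.update(HandObservation(0.0, 0.0, z))
print(monitor.predicted_next_zone())  # -> "left_panel"
```

A motion planner would then route the robot's arm away from `predicted_next_zone()`, so the two can work side by side without the robot constantly stopping.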
Increase in Productivity & Accuracy
This anticipatory coordination makes the interaction more efficient since the robot doesn't have to keep stopping. It's also safer & feels better for the human worker since the robot avoids nearly hitting them. Programming this kind of fluid coordination between humans and robots could revolutionize workplaces and society. In the future, advanced human-machine collaboration could dramatically increase productivity and accuracy across many industries. AI robot assistants could anticipate surgeons' needs in hospitals, handing them the right medical tool just before it's required to reduce surgery times & human error.
Artificial General Intelligence and Robotic Consciousness
As machines become smarter at interacting with humans, questions arise about whether they could develop consciousness or surpass human intelligence. Current AI systems, like Watson at trivia or Deep Blue at chess, have exceeded human ability in specific skills by using brute-force computing power and specialized programming. However, they lack general intelligence and flexibility. Scientists are working to create extremely intelligent software that can outperform humans; to achieve this ultimate goal of hyper-intelligence, they must develop systems with flexible, human-like abilities to learn & think.
This form of advanced intelligence is called Artificial General Intelligence. At Columbia University, Dr. Hod Lipson's lab is developing creative robots that can paint original artworks, self-assemble, and even learn about their world without human assistance. However, his ultimate goal is even more ambitious: Can a machine think about itself? Can it have free will?
Let's meet Dr. Hod Lipson at Columbia University.
Dr. Lipson believes that machines can be sentient and self-aware even though we don't fully understand how consciousness works in humans. His hypothesis is simple: self-awareness is the ability to simulate oneself, to model oneself, to have a self-image. The first step towards creating robotic consciousness is teaching the software to build an image of its physical, mechanical self inside its computer mind.
Robot's Self-image
Humans develop awareness of their own emotions & thoughts around age one, which helps them understand their self-image & learn about their environment. When a robot learns what it is, it can use that self-image to plan new tasks. In both humans and robots, awareness of one's physical self is called proprioception, sometimes referred to as a sixth sense. Just like a baby moving its arms & touching its nose to confirm its self-image, a robot can test its proprioception by predicting and feeling its own movements. Rob Kwiatkowski has built a new baby robot that is developing its own internal self-image by interacting with its surroundings.
Robot's Own Informative Model
It sends random commands to its robotic arms, like a baby flailing its limbs, to understand itself. After learning through this “babbling” process for a period of time, the robot creates a model of itself and learns to walk, just as an earlier version did after 100 hours. However, walking alone won't lead to robotic consciousness – self-awareness is crucial. Another robot in the lab is called “self-aware” because it is aware of its location in space & how it moves, thanks to deep learning AI that allows it to experience and process reality.
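The following is a minimal sketch of the babbling idea on a simulated two-link arm: random joint commands are issued, the resulting hand positions are recorded, and a self-model is fitted that predicts where the hand will end up. The toy arm and the nearest-neighbor model are illustrative stand-ins for the deep networks used in Lipson's lab:

```python
import numpy as np

# Minimal sketch of "motor babbling": send random joint commands to a
# simulated 2-link arm, record where the hand ends up, and fit a
# self-model that predicts hand position from joint angles.

L1, L2 = 1.0, 0.8  # link lengths (illustrative)

def true_arm(thetas):
    """Ground-truth kinematics, standing in for the physical robot."""
    t1, t2 = thetas
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.array([x, y])

rng = np.random.default_rng(0)
angles = rng.uniform(-np.pi, np.pi, size=(2000, 2))  # random "babbling"
hands = np.array([true_arm(a) for a in angles])      # sensed outcomes

def self_model(query, k=5):
    """Predict hand position by averaging the k nearest babbled samples."""
    d = np.linalg.norm(angles - query, axis=1)
    return hands[np.argsort(d)[:k]].mean(axis=0)

q = np.array([0.3, -0.5])
print("predicted:", self_model(q), "actual:", true_arm(q))
```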
To test the robot's self-awareness, it must locate and pick up objects without cameras, relying only on its internal self-model – just as a human could do with their eyes closed by using their sense of proprioception.
Robots Learned by Trial & Error
Furthermore, the robot was not given a map or any formal instructions; it simply had to feel its way through the task. First, it learned how to use its arm through trial and error; by exploring its surroundings, it developed a sense of proprioception and generated an internal representation of the world and its place in it. Using only this internal image of the external world, the robot maneuvers its arm to pick up all nine balls and place them in a cup – hardly something most of us could do with our eyes closed, since it rests purely on understanding where you are in space.
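Continuing the sketch above, here is how the same babbled experience can drive blind reaching: the robot searches its memory for the joint command whose remembered outcome is closest to the target, with no camera involved at execution time. Again, this is an illustrative toy, not the actual system:

```python
import numpy as np

# Minimal sketch: reach a target "blind", using only a self-model built
# from babbled (joint angles -> hand position) experience.

L1, L2 = 1.0, 0.8  # same illustrative 2-link arm as before

def arm(t1, t2):
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

rng = np.random.default_rng(1)
angles = rng.uniform(-np.pi, np.pi, (5000, 2))
hands = np.array([arm(*a) for a in angles])  # babbled experience

def reach(target):
    """Pick the remembered joint command whose outcome was closest to
    the target -- no camera is consulted at execution time."""
    i = np.argmin(np.linalg.norm(hands - target, axis=1))
    return angles[i]

ball = np.array([1.2, 0.6])  # remembered ball position
cmd = reach(ball)
print("command:", cmd, "reaches:", arm(*cmd))
```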
Creating AI robots that have an internal model of their world is an important step towards machine self-awareness. Self-awareness is a similar capability to proprioception, but applied to mental life: if machines can think about thinking – think about what they are – then they can plan things into the future. Once robots become self-aware, they will need advanced ways to communicate with humans.
Natural Language Processing
Self-awareness means an AI can think about its own thinking and existence, and such an AI will need ways to communicate with humans beyond just keyboards & screens. Pioneering scientist Barbara Grosz of Harvard University laid the groundwork for natural language processing, which allows AI like Alexa or Siri to understand & respond to spoken language. Natural language processing started with efforts to translate between languages. Understanding spoken language is challenging because meaning depends on context and tone.
Barbara's research helped program computers to derive meaning from tone and context cues. While current AI speech systems are impressive, they are still limited to narrow tasks & can't fully understand the context of human conversation.
Alexa & Siri
Take Siri and Alexa as examples: they're mostly oriented around a single question or a single request, and they presume that anybody will stay within the range of behavior the designers imagined. So researchers are turning to machine learning; by training AI with hours and hours of human conversation, it can learn to better understand the context of how humans converse. Future versions of this technology will allow us to have natural conversations with our computers.
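One small, concrete piece of what “understanding context” means is carrying information across turns, so that a follow-up like “What about tomorrow?” inherits the intent and place of the previous question. The sketch below is purely illustrative and bears no relation to how Siri or Alexa are actually implemented:

```python
import re

class DialogueContext:
    """Toy dialogue state: later turns inherit slots from earlier ones."""

    def __init__(self):
        self.state = {}  # intent and slots carried across turns

    def understand(self, utterance):
        words = re.findall(r"[a-z]+", utterance.lower())
        if "weather" in words:
            self.state["intent"] = "weather"
        if "boston" in words:
            self.state["place"] = "Boston"
        if "tomorrow" in words:
            self.state["day"] = "tomorrow"
        if "intent" not in self.state:
            return "Sorry, I didn't understand."
        # A single-turn system would fail on follow-ups; context fills the gaps.
        return (f"Intent: {self.state['intent']}, "
                f"place: {self.state.get('place', 'here')}, "
                f"day: {self.state.get('day', 'today')}")

ctx = DialogueContext()
print(ctx.understand("What's the weather in Boston?"))  # uses this turn only
print(ctx.understand("What about tomorrow?"))           # inherits intent & place
```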
As Grosz puts it: “One of the things that's amazing is that the field has succeeded so well that there are devices out in the world that people use every day. I never dreamed that would be the case in my lifetime.” Hyper-intelligent natural language AI will change the way we interact with our computers and robots, but this advanced technology will never reach its full potential as a human companion until it looks convincingly like us.
Digital Avatars and Lifelike Androids
In Los Angeles, a company called Pinscreen is developing techniques to create hyper-realistic digital human faces using complex AI algorithms. Their software can digitize a person's face from just a single image & animate it in real time, a process that normally takes hours with other methods. The technique uses a green screen, similar to how Hollywood movies use computer-generated imagery (CGI). The computer models a person's face and tracks their movements in real time, generating the entire image on the fly to create a realistic representation of the person's appearance and expressions.
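As a rough illustration of the data flow (tracker → expression weights → renderer), here is a minimal sketch that converts a few tracked facial landmarks into blendshape weights an avatar renderer could apply each frame. The landmark names, neutral-face calibration, and linear mapping are all hypothetical; Pinscreen's real pipeline relies on large learned models:

```python
import numpy as np

# Minimal sketch: turn tracked face landmarks into blendshape weights
# that an avatar renderer would consume every frame. The linear mapping
# below is only a stand-in to show the per-frame data flow.

NEUTRAL = {"mouth_gap": 0.05, "brow_height": 0.30}  # calibrated at start (hypothetical)

def expression_weights(landmarks):
    """landmarks: dict of named 2D points from a face tracker (assumed)."""
    mouth_gap = landmarks["lower_lip"][1] - landmarks["upper_lip"][1]
    brow = landmarks["brow"][1] - landmarks["eye"][1]
    return {
        # normalize against the neutral face, clamp to [0, 1]
        "jaw_open": float(np.clip((mouth_gap - NEUTRAL["mouth_gap"]) / 0.10, 0, 1)),
        "brow_raise": float(np.clip((brow - NEUTRAL["brow_height"]) / 0.05, 0, 1)),
    }

frame = {"upper_lip": (0.0, 0.50), "lower_lip": (0.0, 0.62),
         "brow": (0.0, 0.95), "eye": (0.0, 0.62)}
print(expression_weights(frame))  # -> weights the renderer applies to the avatar
```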
Realistic Images of Celebrities
The software can generate realistic images of anyone, even famous people like Tom Cruise or Audrey Hepburn. It predicts and generates details like teeth, which are not present in the original image. This technology can give a more human-like appearance to digital interfaces & virtual assistants. In the future, intelligent virtual beings powered by artificial intelligence (AI) could become commonplace. These holographic assistants could assist with various aspects of daily life, such as fashion advice or business consultation.
Their appearance & attire could be customized based on their role, like a doctor or a personal assistant. These virtual assistants could be equipped with the latest knowledge and capabilities, allowing them to accurately diagnose common diseases or provide expert advice. The same technology could also capture and preserve the image, voice, and life story of loved ones after their passing, creating virtual representations of them. While virtual assistants are useful, some people may prefer physical robots that resemble humans, called androids.
AI as Companions
For AI to become true companions people need to feel comfortable interacting with these androids, both figuratively & literally. In San Diego, a company called Realbotix is building androids designed to be physically appealing and engaging companions. The goal is to create robots that people would want to get to know, not just machines. The androids have modular faces made of silicone skin, allowing for different characters to be created using the same robotic head.
The facial expressions are controlled by magnets in the skin, enabling natural non-verbal communication. Realbotix uses artificial intelligence to create advanced chatbots for these robots, enabling them to have natural conversations with their human companions. The robots can also detect and respond to human emotions through facial expressions and other cues. While some may perceive these androids as sex robots, the company's goal is to create versatile robots that can serve various purposes, such as providing companionship for the elderly or those with social anxiety, or acting as therapists or caregivers. Companion robots can help with communication and allow people to open up because they don't feel judged. Robots like these could create a future where no one has to feel alone.
Ethical Considerations
While human-like robots and virtual beings have the potential to enhance human social interactions, there are also ethical concerns. One big problem is privacy: using artificial intelligence, it's possible to hijack someone's physical identity. What if someone did something harmful to you, like reconstructing you and making you say things you never would? Digital fabrications like this are already appearing online as “deepfakes.” Someone could take a picture from your website and create content with it without your consent.
Also, dangerous swarms of AI-driven drones could be used in terrorist attacks; drones can certainly be weaponized. These scientific breakthroughs can often be used against humans, so those who develop them must be held accountable and consider the broader ethical consequences. Still, there is reason to remain hopeful that science, technology, & human ingenuity will find solutions to these big problems. Artificial intelligence has enormous potential to profoundly improve society, jobs, healthcare, and education.
If done correctly, building computer systems to assist and augment human capabilities could make us superhuman, allowing us to make better decisions and gain better insights about the world. Future versions of this technology will likely become even more intelligent than humans. There's no doubt robots will exceed human capability, whether it takes 20 years or 200 years. This may be the most powerful technology we've ever invented, with limitless potential. Whatever big idea you can conceive, you can probably program a robot or computer to carry out your vision.
In the right hands, this technology has the potential to radically transform daily life for the better. A true partnership with hyper-intelligent robots, whose intentions are aligned with our own, could transform humanity for the greater good.
You might be interested in The History & Evolution of Artificial Intelligence 1940s-Now