
The Revolution of AI by Shivani Bigler – Part 1


The Revolution of AI by Experts In The Field

The Revolution of AI combines the work of Shivani Bigler with that of:

  • Dr. Raj Rajkumar of Carnegie Mellon University, a pioneer in self-driving car technology.
  • Dr. Matt Travers of Carnegie Mellon University, leading a team that won a subterranean navigation competition for search and rescue robots.
  • Dr. Vijay Kumar at the University of Pennsylvania, working on swarms of drones for precision agriculture.
  • Dr. Julie Shah at MIT, leading research on human-robot collaboration.
  • Dr. Hod Lipson at Columbia University, developing creative robots and working towards artificial general intelligence and robotic consciousness.
  • Dr. Barbara Grosz of Harvard University, a pioneer in natural language processing for AI.
  • Hao Li, founder of Pinscreen, creating hyper-realistic digital avatars using AI.
  • Matt McMullen, founder of Realbotix, building lifelike androids with AI capabilities.

AI Self-driving Car

The future of advanced artificial intelligence looks very different from today. Most people will not own cars. Instead, self-driving electric vehicles controlled by artificial intelligence will transport people. This will greatly reduce air pollution and traffic jams around the world. Drones that can navigate by themselves will be used for disaster response & search and rescue missions.

Many people will live and work alongside self-aware robots. These AI companions will help increase productivity & free humans from boring, repetitive tasks, completely changing modern life. Today, scientists are making breakthroughs that could lead to this futuristic world. The idea that AI systems can make their own decisions is amazing.

I want to understand the latest advancements. AI can converse dynamically, which is wondrous. Machines may become self-aware in ways humans cannot comprehend, shaping the future. Shivani Bigler, an engineering and neuroscience student, has long been fascinated by artificial intelligence.

As a child, I went with my father to technology and robotics events, where I became enthralled by science. Each year, the inventions were smarter. AI has progressed rapidly in recent years. Most modern AI is programmed to think independently and learn from examples, simulating human intelligence by learning from past experience. But how does AI actually work?

In the future, will AI have human traits like emotions, consciousness, or free will? And how will humans and robots collaborate? The clearest path to the future is the self-driving car. Unlike regular cars, self-driving cars are robots that can make decisions. Will all cars eventually become driverless?

To find out, Shivani went to Pittsburgh, Pennsylvania, a hub for self-driving car research. Everyone is discussing self-driving cars because they are the future. But to understand them, we need to look under the hood. Making even simple decisions does not come easily for computers. To discover the inner workings, she met a true pioneer in the field, Dr. Raj Rajkumar of Carnegie Mellon University. Carnegie Mellon birthed self-driving car technology, and thanks largely to Raj and his colleagues, it has led innovation in this area for over 25 years. So how does his self-driving car make decisions to safely navigate like a human driver?

The Safety of Self-driving Cars

Since Raj was distracted by our conversation, Pennsylvania law required another driver in the front seat to monitor the road for safety. I was nervous but excited. Raj has driven vehicles autonomously for hundreds of miles. I enabled the self-driving mode, and the car really drove itself! While most self-driving cars are built from scratch, Raj modified a regular used car with powerful onboard computer systems, making it more adaptable than other cars.

Actuators were installed to control the transmission, steering wheel, brakes, and gas pedal. The software running on the computer enables this capability by trying to mimic human decision-making through key artificial intelligence layers. Most self-driving cars use cameras and advanced radar to perceive their surroundings. The AI software compares objects to an internal 3D map of streets, signs, and infrastructure.

It maps static elements but figures out dynamic elements like traffic and pedestrians on the fly. This allows it to understand where it is going and react to changes & traffic signals. However, making a vehicle drive itself is an extremely difficult AI challenge. Safety is the top priority.

Sign Recognition

But what happens when the AI system does not understand specific objects in its environment? A pedestrian in Tempe, Arizona, was killed by a self-driving test vehicle because it failed to recognize a jaywalker. In the future, advanced self-driving cars will need to make life-or-death decisions flawlessly. Imagine a self-driving car encounters a person crossing the street illegally. If avoiding the pedestrian means crashing head-on into another car, potentially killing the driver, what should the car choose? How will scientists address these major ethical dilemmas?

The first artificially intelligent robots were programmed by engineers with fixed sets of rules to achieve their goals. These rules are called algorithms, but not all rules work in every situation. This approach is very inflexible, requiring new programming for even small changes in a task. A new approach called machine learning has transformed this. With machine learning, computers can absorb and use information from their interactions with the world to update their own programming, becoming smarter on their own.
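The contrast between a fixed rule and a learned one can be sketched in a few lines. This is a toy illustration, not any real vehicle's software: the task (deciding whether to brake, given distance and speed), the example data, and the names are all invented for demonstration.

```python
# Toy contrast: a hand-coded rule vs. a rule learned from examples.
# Hypothetical task: decide whether to brake, given distance to an
# obstacle (metres) and current speed (m/s).

def fixed_rule(distance, speed):
    # Hand-written rule: brake when closer than 20 m. Inflexible:
    # it ignores speed, so any new scenario needs new programming.
    return distance < 20.0

def train_threshold(examples):
    # "Machine learning" in miniature: search for the stopping-distance
    # ratio that best matches labeled experience, instead of having an
    # engineer hard-code it.
    best_ratio, best_score = 0.0, -1
    for ratio in [r / 10 for r in range(1, 51)]:  # candidate ratios 0.1..5.0
        score = sum(
            (distance < ratio * speed) == should_brake
            for distance, speed, should_brake in examples
        )
        if score > best_score:
            best_ratio, best_score = ratio, score
    return best_ratio

# Labeled experience (invented): (distance, speed, did an expert brake?)
data = [(10, 10, True), (40, 10, False), (15, 20, True),
        (60, 15, False), (25, 30, True)]
ratio = train_threshold(data)

def learned_rule(distance, speed):
    # The learned rule scales its braking distance with speed.
    return distance < ratio * speed
```

The learned rule matches all five examples because it adapts its threshold to speed; retraining on new data updates the behavior with no reprogramming, which is the shift the paragraph above describes.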

Search & Rescue Robots

To see machine learning in action, let's meet a team from Carnegie Mellon University at an abandoned coal mine. Dr. Matt Travers leads a group that won a challenging underground navigation competition held by the Department of Defense's research agency, DARPA. The twin robots are affectionately known as R1 and R2, where R stands for Robot. They are designed for search and rescue missions too dangerous for humans, and unlike self-driving cars, they operate without a map.

Ability to Identify Every Single Object Nearby

To achieve this, they have to learn to identify every single object they encounter on the fly. They are programmed to act fully autonomously, making 100% of their own decisions: recognizing objects, deciding where to go next, and choosing where to explore. Imagine having a map of a collapsed mine before sending a rescue team – it's a game-changer. The way the robot discerns elements in this environment parallels how an infant learns about her environment.

Robot Mimics Baby's Learning

A three-month-old uses her senses to cognitively map out her environment and learn to recognize her parents. She ultimately uses this map to interact with everything in her world, just like this robot. Artificial intelligence makes this learning curve possible, but how does the robot create its own map and identify a human on its own, without access to an external network like the internet? As it explores and maps the mine, it drops devices called signal repeaters to create a Wi-Fi network trail, like breadcrumbs along the path. Using this network, the robot sends data back to home base to create a map.

Lidar Systems

At the same time, the robot must look at every single object to identify the stranded human. The LIDAR system provides a full laser scan of the surroundings. LIDAR stands for light detection and ranging, and it is similar to radar, which uses radio waves. LIDAR systems send out laser pulses of light and calculate the time it takes each pulse to hit a solid object and bounce back. This process creates a 3D representation of the objects in the environment, which the onboard computer can then identify.
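The time-of-flight principle behind LIDAR ranging reduces to one formula: the pulse travels out and back at the speed of light, so the distance to the object is half the round trip. A minimal sketch (the 66.7 ns example value is illustrative):

```python
# Time-of-flight ranging, the principle behind LIDAR:
# one-way distance = (speed of light x round-trip time) / 2.

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance in metres to the object that reflected the pulse."""
    return C * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds hit something
# about 10 metres away.
d = tof_distance(66.7e-9)
```

Because light covers about 30 cm per nanosecond, the sensor must time each returning pulse to sub-nanosecond precision to resolve centimetre-scale detail; repeating this for millions of pulses per second yields the 3D point cloud described above.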

Decision Making Robots

By fully understanding its environment, the robot can make better decisions about where to go and where not to go. As the robot continues mapping the mine, it stumbles upon its intended target – a dummy representing a stranded human. With this discovery, the robot can not only alert emergency personnel but also give them a map showing how to find the stranded person. These incredible rescue robots are clearly paving the path to the future of advanced artificial intelligence. In Philadelphia, intelligent robots are learning to move over forests, bodies of water, or even mountains.

Swarm Robotics

Jason Derenick of Exyn Technologies is working to solve this problem. His focus is on autonomous aerial robotics, enabling drones to safely navigate unknown or unexplored areas. Jason's team has built the first industrial drone that can fly itself anywhere. Amazingly, these autonomous robots navigate without GPS and map their environment as they go. They focus on all aspects of autonomy, including perception, orientation during flight, motion planning, and control.

However, going from two dimensions to three dimensions requires an increase in artificial intelligence processing. The mission for their drone is to fly independently through a three-dimensional path from one end of the warehouse to the other. Starting mission: 3-2-1. Now, to test its computer mind, Jason's team places new and unexpected obstacles in its path. Will the drone recognize these unexpected changes?

Will it get lost? Will it crash? Essentially, the drone has a gimbaled LIDAR system that allows it to paint a full 360-degree sphere around itself to sense its environment. Like the robot in the mine, this aerial robot uses LIDAR to see. It generates a visual representation of the space and, for each cube in the space, tries to determine whether it is occupied or free.
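Dividing space into cubes and marking each one occupied or free is known as an occupancy grid. Here is a minimal sketch of the idea; the cell size, the point coordinates, and the function names are illustrative, not the drone's actual software:

```python
# Minimal occupancy-grid sketch: space is divided into cubes
# ("voxels"), and any cube containing a LIDAR return is marked
# occupied. Cell size and points below are invented examples.

CELL = 0.5  # edge length of each cube, in metres

def to_cell(point):
    # Map a 3D point (x, y, z) to the index of the cube containing it.
    return tuple(int(c // CELL) for c in point)

def build_grid(lidar_points):
    # Every cube with at least one laser return counts as occupied;
    # cubes absent from the set are treated as free (or unknown) space.
    return {to_cell(p) for p in lidar_points}

grid = build_grid([(1.2, 0.3, 0.9), (1.3, 0.4, 0.8), (4.0, 4.0, 1.0)])
# The first two returns fall in the same cube, so the grid holds two
# occupied cells; a planner routes the drone only through cells
# not present in `grid`.
```

Real systems (e.g. the OctoMap family) store occupancy probabilistically and compress the grid with octrees, but the decision the paragraph describes, occupied versus free per cube, is exactly this set-membership test.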

The robot's onboard computer makes real-time decisions on where to go based on its visual input, similar to how humans do. Incredibly the drone recognizes the whiteboards and flies around them. One of the special things about this system is that it's being used in the real world to keep people out of harm's way. Autonomous drones like these are already at work in hazardous industries like mining, construction, and oil exploration.

Self-flying Robots to the Rescue

They safely conduct inspections in dangerous locations and create detailed maps of rough terrain. From a technological perspective, the fact that they can do everything onboard, self-contained, and let the system make its own decisions is remarkable. Self-flying robots like these will revolutionize search and rescue and disaster response, and could also transform how packages are delivered. However, there are limits to what large, single drones can do. More complex tasks will require teams of small, nimble autonomous robots.

Dr. Vijay Kumar at the University of Pennsylvania is working with swarms of drones to perform tasks like playing music or building structures cooperatively. He's also developing technologies to tackle some very big problems, including world hunger. In a couple of decades, we'll have over 9 billion people to feed on this planet, which is a big challenge to take on. He's building an army of small flying robots with the ability to synchronize, like a flock of birds reacting to a predator or a school of fish. You have coordination and collaboration, and it all happens very organically.

Using AI to get robots to work as a coordinated collective is a daunting task. Five years ago, most robots relied on GPS-like sensors. Today, they have the equivalent of smartphones embedded in them: they sense how fast they're going just by looking at the world, integrate that with information from the inertial measurement unit, and then estimate where they are in the world and how fast they're traveling. Vijay's apprentice, Dinesh Thakur, demonstrates how the drones fly in formation. The first step is to provide them with a common point of reference – in this case, a visual tag similar to a basic QR code.

Using only their onboard cameras, these drones reference the code on the tag to work out where they are in space. Using sophisticated bio-inspired algorithms, the drones then figure out where every other drone is within the collective swarm. These drones communicate with one another over Wi-Fi. Future versions will create their own localized wireless network to communicate. For now, this swarm is a proof of concept.

Pros of Swarm Flying Robots

You've defined a formation, and the drones then assume that formation. Once they figure out where they are in relation to each other, they can work together to accomplish a shared goal, like ants working as a collective. Groups of flying robots have advantages over single drones. Unlike one drone, self-coordinating groups can perform complex tasks like mapping much faster by working together and combining their data. And losing one drone in a group doesn't ruin the whole operation.
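The shared-reference idea behind formation flight can be sketched simply: each drone turns its assigned slot into a world-frame goal by adding its offset to the common tag position. The drone names, offsets, and function names below are invented for illustration, not the lab's actual software:

```python
# Formation flight from a shared reference: each drone's goal is its
# assigned offset added to the position of a common visual tag.
# All names and coordinates here are illustrative.

TAG = (0.0, 0.0)  # the tag defines a shared origin for the swarm

# Desired formation: each drone's (x, y) offset from the tag, in metres.
FORMATION = {
    "drone_a": (-1.0, 1.0),
    "drone_b": (1.0, 1.0),
    "drone_c": (0.0, 2.0),
}

def goal_position(name):
    # Convert a formation slot into a world-frame goal by adding the
    # drone's offset to the shared tag position.
    ox, oy = FORMATION[name]
    return (TAG[0] + ox, TAG[1] + oy)

def separation(a, b):
    # Pairwise distance between goals, so each drone can verify it
    # keeps clear of its neighbors while assuming the formation.
    (ax, ay), (bx, by) = goal_position(a), goal_position(b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
```

Because every goal is expressed relative to the tag rather than in absolute GPS coordinates, moving the tag moves the whole formation, and losing one drone invalidates only that drone's slot, which is why a swarm degrades gracefully.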

Vijay imagines using his swarm technology on farms. This precision agriculture will help feed the world's growing population. We want robots to be able to roam farms and provide precise information about individual plants, which could then be used to increase the efficiency of food production. This would have a huge impact on the world.

This is our duty as responsible citizens and engineers. This high-tech approach towards solving future problems is definitely a path I can support. In the future, artificial intelligence could coordinate flocks of drones to protect the environment & boost food supply to combat climate change's negative effects on crops. Robotic bees could assist with pollination in orchards & farms, making them more sustainable and productive.

Fish-shaped underwater robots could automatically deploy at the first sign of an oil spill, rapidly containing it and saving marine life in oceans worldwide. Modern society has a long history of building robots for dangerous, difficult, or repetitive work unsuitable for humans. AI is poised to automate all kinds of tedious work, from factories to taxis to customer service. While some worry smart robots will replace human labor, that's not necessarily true.

The artificial intelligence sector is expected to generate 58 million new jobs in just a few years. So what will human-robot interaction mean for our work & livelihoods? At the Massachusetts Institute of Technology, Dr. Julie Shah leads groundbreaking research on human-robot collaboration. Julie's lab develops robots that are effective teammates for people. Her team creates software that helps robots learn from humans, even giving them insight into different human behaviors.

You might want to read part 2 of this article: The Revolution of AI by Shivani Bigler – Part 2

You might be interested in Why Artificial Intelligence Is Good – Enhancing Human Potential
