How Do Robots Perceive The World?

Robots are the next big thing in technology. Just as we lived through the PC and Internet revolutions, the next major shift will be the rise of robots. Given the rate at which robotics is advancing, it is imperative that we understand how robots work, just as we do with computers and the Internet. Now, robotics is a vast ocean of a subject, so we want to look into one particular aspect of robots: perception. Before we get into the various ways in which robots perceive, let's take a look at how humans perceive the world.

Defining Human Perception

Humans perceive the world through the five senses: sight, smell, hearing, taste and touch. While all the senses are important, the primary ones are sight, hearing and touch; if even one of these is absent, life can become quite difficult. Most people don't appreciate the value of these three senses until they're gone. Take a step back and think about it: even simple things, like hearing the honk of a motorist while walking along the road, or a person shouting to stop you from touching a hot object, would be impossible without the mastery and function of your key senses.


We must also realize that perception is not only the raw sensing performed by each sense organ, but also the seamless integration and processing of those inputs into a coherent picture of the world. The brain and spinal cord primarily control this integration. Let's use vision as an example of sensory integration and processing. When you look at an object, an inverted image of it forms on the retina. The retina converts this pattern of light into electrical signals, which are sent to the brain via the optic nerve. The brain then reinterprets the inverted image so that we perceive the object upright, and extracts details about the object's nature, including its dimensions and its distance from us. This is a prime example of sensory integration and processing. Now, nature's designs are pristine and remarkably brilliant; duplicating that scale of ingenuity in a robot takes a whole new level of mathematical rigor and discipline.

Image Processing


Just as an image is formed inside our eyeballs, a robot can create a digital image with the help of a camera. A digital image can be defined as the representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels. Digitizing an image into pixels means that the digital image is only an approximation of the original scene. Pixel values can encode many kinds of information, such as gray levels, colors, heights and opacities. In computer science, digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and helps avoid problems such as noise build-up and signal distortion during processing.
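To make the idea of pixels concrete, here is a minimal Python sketch of how a camera frame can be treated as a grid of numbers. The frame below is synthetic random data standing in for real camera input, and NumPy is assumed to be available; an actual robot would read frames from its camera driver instead.

```python
import numpy as np

# An 8-bit grayscale "camera frame": a 2D array in which each pixel
# is an integer gray level from 0 (black) to 255 (white).
# (Synthetic random data here, standing in for a real camera frame.)
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

print(frame.shape)   # (480, 640): 480 rows by 640 columns of pixels
print(frame[0, 0])   # gray level of the top-left pixel

# A basic processing step: threshold the frame so that bright pixels
# become white (255) and dark pixels become black (0). Simple operations
# like this are the building blocks of object detection.
binary = np.where(frame > 127, 255, 0).astype(np.uint8)
print(binary[:2, :5])
```

Because the whole image is just an array of numbers, any algorithm that manipulates numbers can, in principle, be applied to it, which is exactly the advantage digital processing has over analog processing.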

Natural Language Processing

The way we understand language is highly intuitive, but computers, on the other hand, take a much more logical approach to processing language. Natural language processing (NLP) is a subfield of computer science, information engineering and artificial intelligence concerned with the interactions between computers and human language. Natural language processing comes in two forms: rule-based NLP and statistical NLP. In the earliest days of NLP, most processing was based on handwritten rules. Later on, NLP shifted toward a machine-learning paradigm built on statistical approaches.
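As a rough illustration of the rule-based approach, here is a toy Python sketch that maps a sentence to an "intent" using handwritten if-then rules. The task, the patterns and the intent names are all invented for illustration; real rule-based systems used far larger, more carefully engineered rule sets.

```python
def rule_based_intent(text: str) -> str:
    """Classify a sentence with handwritten if-then rules (toy example)."""
    text = text.lower().strip()
    if "hello" in text or "hi" in text:
        return "greeting"
    if "bye" in text or "goodbye" in text:
        return "farewell"
    if text.endswith("?"):
        return "question"
    return "unknown"

print(rule_based_intent("Hi there, robot!"))  # greeting
print(rule_based_intent("Where am I?"))       # question

# Note how brittle naive substring matching is: "hi" also matches
# inside "this". Limitations like these are a big part of why the
# field moved toward statistical methods.
```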


Many different classes of machine-learning algorithms have been applied to natural-language-processing tasks. These algorithms take as input a large set of "features" generated from the input data. Some of the earliest algorithms, such as decision trees, produced systems of hard if-then rules similar to the systems of handwritten rules that were common at the time. Increasingly, however, research has focused on statistical models that make soft, probabilistic decisions by attaching real-valued weights to each input feature. Such models have the advantage of being able to express the relative certainty of many different possible answers, rather than committing to only one, which produces more reliable results when the model is included as a component of a larger system.
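To show what a "soft, probabilistic decision" with real-valued feature weights looks like, here is a minimal Python sketch of a statistical classifier. The task (deciding whether a sentence is a command) and the weights are made up for illustration; in a real system, the weights would be learned from training data.

```python
import math

# Hypothetical learned weights for a toy "is this sentence a command?"
# classifier. Each word is a feature carrying a real-valued weight.
weights = {"please": 0.8, "stop": 1.5, "now": 0.6, "maybe": -0.7}
bias = -1.0

def command_probability(sentence: str) -> float:
    """Return a probability in (0, 1) instead of a hard yes/no."""
    # Score = bias + sum of the weights of the features present.
    score = bias + sum(weights.get(w, 0.0) for w in sentence.lower().split())
    # The logistic (sigmoid) function squashes the score into (0, 1).
    return 1.0 / (1.0 + math.exp(-score))

print(command_probability("please stop now"))  # high: ~0.87
print(command_probability("maybe later"))      # low:  ~0.15
```

Unlike the hard rules above, this model can report that it is 87% confident rather than simply answering yes, which is precisely what makes such models useful as components of larger systems.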

With all of this in mind, you can now better understand just how much effort it takes for a robot to understand the world around it!

About the Author:

Venkatesh is an Electrical and Electronics Engineer from SRM Institute of Science and Technology, India. He is deeply fascinated by robotics and artificial intelligence. He is also a chess aficionado and likes studying chess classics from the 1800s and 1900s. He enjoys writing about science and technology, as he finds the intricacies of each topic fascinating.
