Human-level AI: Is that possible?

No jokes. I'm being serious, folks…

[Image: the year 2025. Source: VideoBlocks]

When you think about it, 2025 is not so far away. Whenever I ponder it, I find myself daydreaming about what the world will look like 10, 20, or 30 years from now. One of the most interesting spheres for me is AI and robotics actively filling our lives. We already have machines that make everyday routines easy, such as programs that map the best route to a destination or software that searches the vast expanses of the Internet. Additionally, AI is replacing humans in labor-intensive and repetitive activities.


Does this make us think that…

This can already be seen in Amazon’s experiments with a cashier-free convenience store. While these accomplishments clearly show that technology has developed dramatically, machines still fail in ways that could lead to serious trouble.
Melanie Mitchell, Professor of Computer Science at Portland State University, discusses these points in her book “Artificial Intelligence: A Guide for Thinking Humans,” which will be published in the coming year and may well become a best-seller. In one of her articles, she presents an overview of the book and evidence that machines lack humanlike intelligence and the ability to adapt knowledge to new situations. For instance, infinitesimal changes to an image can lead to misclassified objects and similar cracks in face-recognition systems, yet those same changes have no effect on human vision.

Moreover, I’m sure everyone has experienced failures in translation software or apps at least once. Errors may range from harmless, like the ones above, to potentially fatal: a self-driving car could fail to detect a pedestrian because of subtle changes in lighting conditions.

A similar situation arises in security systems. There have been many occasions when hackers managed to trick AI with the help of minute modifications to images, documents, or audio signals.
These and other categories of vulnerability make it clear that the race to commercialize and deploy machines trained only for particular circumstances will not get us very far.
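The trick behind these attacks can be sketched in a few lines. Here is a toy illustration of the idea, assuming nothing about any real system: a made-up linear classifier whose prediction flips when every "pixel" is nudged by the same tiny amount in the direction of the weights (the intuition behind gradient-sign attacks). The weights and input are random, purely for demonstration.

```python
import numpy as np

# Hypothetical linear "image" classifier: class 1 if the score w.x is positive.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # made-up learned weights
x = rng.normal(size=100)   # made-up input "image"

def predict(v):
    return int(w @ v > 0)

original = predict(x)
score = w @ x

# Nudge every pixel by the same small amount eps, in the direction that
# pushes the score across the decision boundary -- just far enough to cross.
eps = 1.1 * abs(score) / np.abs(w).sum()
direction = -np.sign(w) if original == 1 else np.sign(w)
x_adv = x + eps * direction

print(predict(x_adv) != original)  # True: the classifier's label flips
print(eps)                         # each pixel moved by only a tiny step
```

A human looking at the two "images" would see essentially the same thing, since no individual pixel moves by more than `eps`, yet the classifier changes its mind entirely. Real attacks on deep networks follow the same logic, using the network's gradient instead of fixed weights.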

Reckoning that these issues will simply disappear in the imminent future isn’t going to work either. In fact, coping with them has to start with a deep investigation into the nature of intelligence and shedding light on the cognitive mechanisms behind it, so that we can build reliable systems. That requires studying a field far more extensive than data analysis and computer science.

For more information, check out:

Mitchell’s article
