AI is having an enormous impact on science. Many disciplines have begun using it to handle enormous quantities of data; in the study of the cosmos, for instance, the amount of data collected is so vast and constantly growing that analyzing it is no longer humanly feasible. Today, machine learning and access to cloud computing are the basic tools for handling such complexity. Visual recognition is also widely used, even if it remains a deterministic process far from our human capabilities: “Human intuitions are often equally impenetrable. You look at a photograph and instantly recognize a cat, but you don’t know how you know. Your own brain is in some sense a black box.”
Two scientists at the Massachusetts Institute of Technology have developed a device that can detect what one is thinking, send Google queries and get replies from the Internet.
Arnav Kapur and Pattie Maes, the developers, call it AlterEgo. It consists of a headset that places sensors at seven points on the cheeks, jaw and chin, where it detects signals in these speech-related areas. In a demonstration with New Scientist writer Chelsea Whyte, Kapur was asked several questions, such as the population of Santiago, Chile, the square root of 360,005 and the product of two large numbers. Kapur just repeated the questions in his mind, and the computer responded with the correct answers. In a test with eight people, AlterEgo recognized words and numbers with 90% accuracy.
AlterEgo is just one of several artificial intelligence systems being developed to read thoughts through brain waves or nerve signals. The implications can be alarming. The technology is still in its infancy, but it seems only a matter of time before such devices can really read our thoughts with high accuracy.
Psychology / Technology | Date: March 2018 | Source: New Scientist
An artificial intelligence system in Japan is now able to guess what a person sees by analyzing brain scans.
An article in New Scientist states that “the AI is given an image of a person’s brain, taken with an fMRI scanner. The fMRI scanner shows the surges in blood flow that correspond with activity, so the different parts of the brain involved in processing the image light up on the scan. From this, the AI then produces a caption based on what it thinks the person was viewing. For example, one caption it generated was ‘A dog is sitting on the floor in front of an open door,’ which correctly described the scene.”
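The pipeline the article describes (brain scan in, caption out) can be illustrated with a deliberately simplified sketch. Nothing below reflects the actual Japanese system, whose model and training data are not described in this summary: the activity vectors, captions and nearest-neighbour matching are invented purely to show the decode-then-describe idea.

```python
import math

# Toy stand-in for fMRI data: each "scan" is a vector of activity levels
# for a handful of brain regions (entirely made-up numbers and captions).
known_scans = {
    (0.9, 0.1, 0.3): "A dog is sitting on the floor in front of an open door",
    (0.2, 0.8, 0.5): "A person is holding a cup of coffee",
    (0.1, 0.3, 0.9): "A car is parked on the street",
}

def decode_caption(scan):
    """Return the caption of the closest known scan (nearest neighbour).

    A real decoder would map fMRI voxel activity into the feature space of
    an image-captioning network; here we simply compare raw vectors by
    Euclidean distance.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    best = min(known_scans, key=lambda known: distance(scan, known))
    return known_scans[best]

# A new scan that resembles the "dog" pattern decodes to the dog caption.
print(decode_caption((0.85, 0.15, 0.35)))
```

The point of the sketch is only the two-stage structure: measured brain activity is matched against patterns whose visual content is known, and a natural-language description is produced from the best match.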