Human-AI Interaction Is A Partnership Based on Blind Trust

People express a lot of fear when it comes to AI. Some worry AI will grow superhuman and kill us all. Others are concerned that AI-led automation will displace over 100 million workers and devastate the economy. Honestly, either may happen, because the simple truth of AI is that when machines learn, humans lose control.

Currently, self-learning AI is not particularly sophisticated. Remember the AI Elon Musk said was too dangerous to release to the public? Well, it got released and was pretty disappointing. When it comes to his companies, Musk is more hype man than soothsayer. But AI is getting more advanced: projects from DeepMind are learning games like chess and Go without being told the rules, and these AIs are beating human players.

AI May Be Missing a Foundation?

AI is advancing, but it may be missing a piece that will prevent it from reaching human-like general intelligence. An article in Science Magazine shares the story of a 3-year-old quickly learning newly acquired knowledge and applying it in new contexts. The child in the account belongs to Gary Marcus, a cognitive scientist and a founder of RobustAI. Marcus argues that humans start with instincts, either at birth or from early childhood, that allow us to think abstractly and flexibly. A starting intuition like this is not currently programmed into most AIs. I can imagine that programming an AI with instincts anywhere close to those of a human would be a daunting task. But the lack of what I'll call "base instinctive code" creates in my mind…
