Doomsday predictions for artificial intelligence aren’t outlandish, author and MIT scholar Max Tegmark says. Unless we proceed with caution, asking what we want to accomplish with this powerful technology, humanity may find itself in trouble—even imperiled.
“If we bumble into AI unprepared, it will probably be the biggest mistake in human history, resulting in our own extinction,” Tegmark warns in his thought-provoking Micron Insight ’18 keynote speech. “We’d better get it right the first time.”
Tegmark, author of “Life 3.0: Being Human in the Age of Artificial Intelligence,” lauds the remarkable progress AI has already made in medicine, transportation, and chess, then asks: What are we accelerating toward? How far will AI go?
Watch to learn the truth about Artificial General Intelligence, the hotly debated “holy grail” of AI, and “superintelligence,” in which machines become smarter and faster than even the greatest human minds. Are these technologies even feasible? And how do we ensure that, now and in the future, AI works for us—not the other way around?