
Another AI winter could usher in a dark period for artificial intelligence



Self-driving cars. Faster MRI scans, interpreted by robotic radiologists. Mind reading and x-ray vision. Artificial intelligence promises to permanently alter the world. (In some ways, it already has. Just ask this AI scheduling assistant.)



Artificial intelligence can take many forms. But it’s roughly defined as a computer system capable of tackling human tasks like sensory perception and decision-making. Since its earliest days, AI has fallen prey to cycles of extreme hype and subsequent collapse. While recent technological advances may finally put an end to this boom-and-bust pattern, whose bust phases are cheekily termed “AI winters,” some scientists remain convinced winter is coming again.


What is an AI winter?



Humans have been pondering the potential of artificial intelligence for thousands of years. Ancient Greeks believed, for example, that a bronze automaton named Talos protected the island of Crete from maritime adversaries. But AI only moved from the mythical realm to the real world in the past seven decades, beginning with legendary computer scientist Alan Turing’s foundational 1950 essay, which asked, and provided a framework for answering, the provocative question: “Can machines think?”



At that time, the United States was in the midst of the Cold War. Congressional representatives decided to invest heavily in artificial intelligence as part of a larger security strategy. The emphasis in those days was on machine translation, particularly Russian-to-English and English-to-Russian. The years 1954 to 1966 were, according to computational linguist W. John Hutchins’ history of machine translation, “the decade of optimism,” as many prominent scientists believed breakthroughs were imminent and deep-pocketed sponsors flooded the field with grants.



But the breakthroughs didn’t come as quickly as promised. In 1966, seven scientists on the Automatic Language Processing Advisory Committee published a government-ordered report concluding that machine translation was slower, more expensive, and less accurate than human translation. Funding was abruptly cancelled and, Hutchins wrote, machine translation came “to a virtual end… for over a decade.” Things only got worse from there. In 1969, Congress mandated that the Defense Advanced Research Projects Agency, or DARPA, fund only research with a direct bearing on military efforts. That put the kibosh on numerous exploratory and basic science projects the agency had previously supported, including AI research.



“During AI winter, AI research programs had to disguise themselves under different names in order to continue receiving funding,” according to a history of computing from the University of Washington. (“Informatics” and “machine learning,” the paper notes, were among the euphemisms that emerged in this era.) The late 1970s saw a mild resurgence of artificial intelligence with the fleeting success of the Lisp machine, an efficient, specialized, and expensive workstation that many thought was the future of AI hardware. But hopes were dashed by the late 1980s, this time by the rise of the desktop computer and by resurgent skepticism among government funding sources about AI’s potential. The second cold snap lasted into the mid-1990s, and researchers have been ice-picking their way out ever since.



The last two decades have been a period of almost unrivaled optimism about artificial intelligence. Hardware, namely high-powered microprocessors, and new techniques, specifically those under the umbrella of deep learning, have finally created artificial intelligence that wows consumers and funders alike. A neural network can learn a task after it’s carefully trained on existing examples. To use a now-classic example, you can feed a neural net thousands of images, some labeled “cat” and others labeled “no cat,” and train the machine to identify “cats” and “no cats” in new pictures on its own. Related deep learning strategies also underpin emerging technology in bioinformatics and pharmacology, natural language processing in Alexa or Google Home devices, and even the mechanical eyeballs self-driving cars use to see.
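For the curious, here is a minimal sketch in Python of the “cat” / “no cat” workflow described above, using the Keras library. The images, labels, and network shape are invented placeholders for illustration; a real classifier would be trained on a large labeled dataset, typically with a convolutional network.

import numpy as np
from tensorflow import keras

# Stand-in dataset: 1,000 tiny 32x32 grayscale images with made-up labels,
# where 1 means "cat" and 0 means "no cat."
images = np.random.rand(1000, 32, 32)
labels = np.random.randint(0, 2, size=1000)

model = keras.Sequential([
    keras.Input(shape=(32, 32)),
    keras.layers.Flatten(),                       # unroll each image into a vector
    keras.layers.Dense(64, activation="relu"),    # hidden layer learns features
    keras.layers.Dense(1, activation="sigmoid"),  # outputs the probability of "cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training nudges the network's weights until its answers match the example labels.
model.fit(images, labels, epochs=5, batch_size=32)

# After training, the model can score new, unseen images on its own.
print(model.predict(images[:3]))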


Is winter coming again?



But it’s those very self-driving cars that are causing scientists to sweat the possibility of another AI winter. In 2015, Tesla founder Elon Musk said a fully autonomous car would hit the roads in 2018. (He technically still has four months.) General Motors is betting on 2019. And Ford says buckle up for 2021. But these predictions look increasingly misguided, and because they were made so publicly, they may have serious consequences for the field. Couple the hype with the death of a pedestrian in Arizona, struck in March by an Uber test vehicle operating in autonomous mode, and things look increasingly frosty for applied AI.



Fears of an impending winter are hardly skin deep. Progress in deep learning has slowed in recent years, according to critics like AI researcher Filip Piekniewski. The “vanishing gradient problem” has been reduced but not eliminated; it still stops some neural nets from learning past a certain point, stymying human trainers despite their best efforts. And artificial intelligence’s struggle with “generalization” persists: a machine trained on house cat photos can identify more house cats, but it can’t extrapolate that knowledge to, say, a prowling lion.
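A toy calculation makes the vanishing-gradient intuition concrete. The derivative of the sigmoid activation is never larger than 0.25, so backpropagating an error signal through many sigmoid layers multiplies it by a small number at every step. The figures below (a 30-layer stack, unit weights, a fixed activation of 0.5) are arbitrary choices for illustration, not a faithful model of any particular network.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

gradient = 1.0                       # error signal at the output layer
for layer in range(30):              # assume a 30-layer stack of sigmoid units
    s = sigmoid(0.5)                 # a typical neuron activation
    gradient *= s * (1.0 - s)        # sigmoid derivative, at most 0.25

print(gradient)                      # on the order of 1e-19: the earliest layers barely learn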



These hiccups pose a fundamental problem to self-driving vehicles. “If we were shooting for the early 2020s for us to be at the point where you could launch autonomous driving, you’d need to see every year, at the moment, more than a 60 percent reduction [in safety driver interventions] every year to get us down to 99.9999 percent safety,” said Andrew Moore, Carnegie Mellon University’s dean of computer science, on a recent episode of the Recode Decode podcast. “I don’t believe that things are progressing anywhere near that fast.” Some years the need for human intervention falls by 20 percent; other years, the improvement is in the single digits. That pace could push the arrival date back by decades.
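The compounding arithmetic behind Moore’s estimate is easy to sketch in a few lines of Python. The starting intervention rate, the target, and the five-year window below are assumed figures chosen only to show how a fixed annual improvement compounds; they are not numbers from Moore or from the article.

# Assumed, illustrative figures (not from the article):
current_rate = 1e-2      # pretend: one safety-driver intervention per 100 miles today
target_rate = 1e-6       # "99.9999 percent safety"
years = 5                # aiming for the early 2020s

# With a fixed annual reduction r, the rate after n years is current_rate * (1 - r) ** n.
required_r = 1 - (target_rate / current_rate) ** (1 / years)
print(f"required annual reduction: {required_r:.0%}")          # about 84% under these assumptions

# By contrast, a 20 percent annual improvement barely moves the needle:
print(f"rate after 5 years at 20% per year: {current_rate * 0.8 ** 5:.4f}")   # about 0.0033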



Much like actual seasonal shifts, AI winters are hard to predict. What’s more, the intensity of each event can vary widely. Excitement is necessary for emerging technologies to make inroads, but it’s clear the only way to prevent a blizzard is calculated silence, and a lot of hard work. As Facebook’s former AI director Yann LeCun told IEEE Spectrum, “AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”





Written by Eleanor Cummins
