
Another AI winter could usher in a dark period for artificial intelligence



Self-driving cars. Faster MRI scans, interpreted by robotic radiologists. Mind reading and x-ray vision. Artificial intelligence promises to permanently alter the world. (In some ways, it already has. Just ask this AI scheduling assistant.)



Artificial intelligence can take many forms, but it’s roughly defined as a computer system capable of tackling human tasks like sensory perception and decision-making. Since its earliest days, AI has fallen prey to cycles of extreme hype—and subsequent collapse. While recent technological advances may finally put an end to this boom-and-bust pattern, whose bust phase is cheekily termed an “AI winter,” some scientists remain convinced winter is coming again.


What is an AI winter?



Humans have been pondering the potential of artificial intelligence for thousands of years. Ancient Greeks believed, for example, that a bronze automaton named Talos protected the island of Crete from maritime adversaries. But AI only moved from the mythical realm to the real world in the last half-century, beginning with legendary computer scientist Alan Turing’s foundational 1950 essay, which posed the provocative question, “Can machines think?” and offered a framework for answering it.



At that time, the United States was in the midst of the Cold War. Congressional representatives decided to invest heavily in artificial intelligence as part of a larger security strategy. The emphasis in those days was on machine translation, specifically Russian-to-English and English-to-Russian. The years 1954 to 1966 were, according to computational linguist W. John Hutchins’ history of machine translation, “the decade of optimism,” as many prominent scientists believed breakthroughs were imminent and deep-pocketed sponsors flooded the field with grants.



But the breakthroughs didn’t come as quickly as promised. In 1966, seven scientists on the Automatic Language Processing Advisory Committee published a government-ordered report concluding that machine translation was slower, more expensive, and less accurate than human translation. Funding was abruptly cancelled and, Hutchins wrote, machine translation came “to a virtual end… for over a decade.” Things only got worse from there. In 1969, Congress mandated that the Defense Advanced Research Projects Agency, or DARPA, fund only research with a direct bearing on military efforts, putting the kibosh on many of the exploratory and basic science projects, including AI research, that the agency had previously funded.



“During AI winter, AI research programs had to disguise themselves under different names in order to continue receiving funding,” according to a history of computing from the University of Washington. (“Informatics” and “machine learning,” the paper notes, were among the euphemisms that emerged in this era.) The late 1970s saw a mild resurgence of artificial intelligence with the fleeting success of the Lisp machine, an efficient, specialized, and expensive workstation that many thought was the future of AI hardware. But hopes were dashed by the late 1980s—this time by the rise of the desktop computer and resurgent skepticism among government funding sources about AI’s potential. The second cold snap lasted into the mid-1990s, and researchers have been ice-picking their way out ever since.



The last two decades have been a period of almost-unrivaled optimism about artificial intelligence. Hardware, namely high-powered microprocessors, and new techniques, specifically those under the umbrella of deep learning, have finally created artificial intelligence that wows consumers and funders alike. A neural network can learn tasks after it’s carefully trained on existing examples. To use a now-classic example, you can feed a neural net thousands of images, some labeled “cat” and others labeled “no cat,” and train the machine to identify “cats” and “no cats” in pictures on its own. Related deep learning strategies also underpin emerging technology in bioinformatics and pharmacology, natural language processing in Alexa or Google Home devices, and even the mechanical eyeballs self-driving cars use to see.
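
For the technically curious, here is a minimal sketch of that training loop, assuming TensorFlow/Keras and a hypothetical folder of labeled photos; it illustrates the idea of learning from labeled examples and is not code from any system mentioned in this article.

```python
# Hypothetical dataset layout: photos/cat/*.jpg and photos/no_cat/*.jpg
import tensorflow as tf

train_data = tf.keras.utils.image_dataset_from_directory(
    "photos", image_size=(128, 128), batch_size=32, label_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),              # scale pixel values to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn simple edge/texture filters
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # outputs the probability of "cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_data, epochs=5)  # learn "cat" vs. "no cat" from the labeled photos
```

Once trained, calling model.predict() on a new photo returns a number near 1 for “cat” and near 0 for “no cat,” which is all the labeling exercise described above really amounts to at scale.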


Is winter coming again?



But it’s those very self-driving cars that are causing scientists to sweat the possibility of another AI winter. In 2015, Tesla CEO Elon Musk said a fully autonomous car would hit the roads in 2018. (He technically still has four months.) General Motors is betting on 2019. And Ford says buckle up for 2021. But these predictions look increasingly misguided. And, because they were made public, they may have serious consequences for the field. Couple the hype with the death of a pedestrian in Arizona, struck in March by an Uber vehicle operating in autonomous mode, and things look increasingly frosty for applied AI.



Fears of an impending winter are hardly skin deep. Progress in deep learning has slowed in recent years, according to critics like AI researcher Filip Piekniewski. The “vanishing gradient problem” has been tamed but not eliminated, and it still stops some neural nets from learning past a certain point, stymying human trainers despite their best efforts. And artificial intelligence’s struggle with “generalization” persists: A machine trained on house cat photos can identify more house cats, but it can’t extrapolate that knowledge to, say, a prowling lion.
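
To see why gradients vanish, consider a toy calculation, sketched below under simplifying assumptions (unit weights and the same pre-activations at every layer); it is a textbook illustration, not drawn from Piekniewski’s work. The derivative of the sigmoid function never exceeds 0.25, so backpropagating through many sigmoid layers multiplies the learning signal by a small number over and over until almost nothing reaches the early layers.

```python
# Toy illustration of the vanishing gradient problem (assumptions noted above).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = rng.normal(size=8)      # hypothetical pre-activations, reused at every layer
grad = np.ones_like(x)      # gradient arriving from the loss at the top of the network

for layer in range(1, 51):  # backpropagate through 50 sigmoid layers
    local = sigmoid(x) * (1.0 - sigmoid(x))  # sigmoid derivative, at most 0.25
    grad = grad * local                      # chain rule: multiply layer by layer
    if layer % 10 == 0:
        print(f"after {layer} layers, mean |gradient| = {np.abs(grad).mean():.2e}")
# The printed magnitudes collapse toward zero, so the earliest layers barely learn at all.
```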



These hiccups pose a fundamental problem to self-driving vehicles. “If we were shooting for the early 2020s for us to be at the point where you could launch autonomous driving, you’d need to see every year, at the moment, more than a 60 percent reduction [in safety driver interventions] every year to get us down to 99.9999 percent safety,” said Andrew Moore, Carnegie Mellon University’s dean of computer science, on a recent episode of the Recode Decode podcast. “I don’t believe that things are progressing anywhere near that fast.” In some years the need for human intervention may fall by 20 percent; in others, the improvement is in the single digits, potentially pushing the arrival date back by decades.
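
The arithmetic behind Moore’s worry is simple compounding. A rough back-of-the-envelope sketch follows; the starting point and launch threshold are assumed for illustration, not taken from Moore, but they show how sensitive the timeline is to the annual rate of improvement.

```python
# How long to reach a 1,000x reduction in safety-driver interventions at
# different annual improvement rates? (Illustrative numbers only.)
import math

current_miles_per_intervention = 1_000       # assumed starting point
target_miles_per_intervention = 1_000_000    # assumed "ready to launch" threshold
needed_factor = target_miles_per_intervention / current_miles_per_intervention

for annual_cut in (0.60, 0.20, 0.05):        # 60%, 20%, 5% fewer interventions per year
    # each year, the intervention rate is multiplied by (1 - annual_cut)
    years = math.log(1.0 / needed_factor) / math.log(1.0 - annual_cut)
    print(f"{annual_cut:.0%} yearly reduction -> roughly {years:.0f} years to a 1,000x improvement")
# About 8 years at 60 percent per year, about 31 at 20 percent, and well over a
# century at 5 percent, which is the gap Moore is pointing at.
```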



Much like actual seasonal shifts, AI winters are hard to predict. What’s more, the intensity of each event can vary widely. Excitement is necessary for emerging technologies to make inroads, but it’s clear the only way to prevent a blizzard is calculated silence—and a lot of hard work. As Facebook’s former AI director Yann LeCun told IEEE Spectrum, “AI has gone through a number of AI winters because people claimed things they couldn’t deliver.”





Written by Eleanor Cummins
