Saturday, July 5, 2014

robotics and driverless cars update - 07.05.2014


Your Dinner Table: Soon to Be Cleared by Robots
Sympathy for machines' experience has led to a new way for them to interact with the world.
Megan Garber Jun 28 2014, 7:07 PM ET


 Don't feel like doing the dishes? Here, let the machine do it for you. (Voyagerix/Shutterstock)

ASPEN, Colo.—Robots can be awkward. Even the most advanced we have—DARPA's automated pack mule, SoftBank's "emotional" machine—are reminiscent of toddlers taking their first, tentative steps. "The Robot" is so named because, no matter how smart our mechanical assistants seem to get, their movements are distinctively stilted.

What this has meant, among other things, is a world of service robots that have been extremely limited in their ability to make our lives a little easier. There are Roombas, of course—oh, such Roombas—but in terms of the robots that can interact smoothly and seamlessly with the world around them, picking things up and putting them away and otherwise lending us a non-human hand ... there aren't many. Algorithms rely on patterns; the patterns of human life are notoriously difficult to discern.

Which is why we have put a robot on Mars, but we have yet to avail ourselves of a robot that can clear the table—or do the dishes, or do the laundry, or make the bed—for us. Robots, like humans, have to coordinate their intelligence systems with their physical outputs. They have to negotiate around a physical world that is full of uncertainty and surprise, using vision—"vision"—that is blurry and out of focus. They have to link their senses to their sensors.

As Ken Goldberg, a professor of engineering at Berkeley, describes the experience of being a robot: "Nothing is reliable, not even your own body."

That kind of sympathy for the robotic experience has led to a new approach to robotic design: "belief space." Which has nothing to do with spirituality—unless your particular religion happens to involve robots—but is related instead to the robots' ability to interact with the physical spaces they occupy. "Belief space" is robots' ability to understand those spaces via statistical descriptions—descriptions of probability distributions.

So if you're a robot, and your task is to pick up a coffee mug ... how do you do that? How do you grasp an object in the way that will allow you to pick up the cup with ease? You'd want to do just what a human likely would: to use the mug's handle to do the grasping. But then: how do you distinguish the handle from the rest of the mug? If you don't have a brain—and, with it, anecdotal experience—that will differentiate mug from handle from table from chair ... how do you complete the task that is so basic for humans?
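The article doesn't spell out Goldberg's actual method, but the "belief space" idea in the mug example can be sketched in a few lines of Python. This is a toy illustration, not the real algorithm: the robot doesn't know the handle's exact pose, only a probability distribution over poses, and it scores each candidate grasp by how often it would succeed across samples drawn from that belief.

```python
import random

def sample_pose(belief):
    """Draw one possible handle pose from the robot's belief:
    here, a Gaussian over the handle's angle (radians)."""
    mean, std = belief
    return random.gauss(mean, std)

def grasp_succeeds(grasp_angle, true_angle, tolerance=0.3):
    """A grasp works if it lines up with the handle closely enough."""
    return abs(grasp_angle - true_angle) < tolerance

def best_grasp(belief, candidate_grasps, n_samples=1000):
    """Pick the candidate grasp with the highest estimated
    success probability under the pose belief."""
    def success_rate(grasp):
        hits = sum(
            grasp_succeeds(grasp, sample_pose(belief))
            for _ in range(n_samples)
        )
        return hits / n_samples
    return max(candidate_grasps, key=success_rate)

# The robot believes the handle is near 0.5 rad, but is uncertain.
belief = (0.5, 0.2)
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
print(best_grasp(belief, candidates))  # the grasp nearest the belief mean wins
```

The point of planning in belief space is exactly this shift: instead of asking "where is the handle?", the robot asks "which action does best given everything I'm unsure about?"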

"Being able to process belief space was extremely daunting," Goldberg said, during a talk at the Aspen Ideas Festival, put on by the Aspen Institute and The Atlantic, this afternoon. But processing it, it turned out, was a matter of collecting experience on behalf of the robots: You can use the basic framework of the Internet—networked information-sharing—to allow robots to learn from each other's experience. You can have robots communicate their learnings—the curve of a mug handle, for example—to each other. It's networked knowledge, robot-style. "Robots are now getting on the Internet," Goldberg says, "to share information and software."

Using that unique kind of crowdsourcing, Goldberg and his fellow roboticists are figuring out ways to help automated machines analyze uncertainty—and, more importantly, developing statistical models that allow the robots to predict, over time, the way they're supposed to treat and move certain objects. Goldberg, for his part, is developing what he calls a "nominal grasp algorithm"—an algorithm that helps robots both to identify objects and to understand where to grasp the object for pickup. And he's developing it with the help of this roboticized Internet.

Which means that, soon, robots could be picking up your mugs ... and clearing your table.

here:
http://www.theatlantic.com/technology/archive/2014/06/your-dinner-table-soon-to-be-cleared-by-robots/373653/

====================================================



6/29/2014 at 11:23 AM
Robots Have Become One of 21st-Century Cinema’s Go-To Blockbuster Clichés
By Nick Schager

If you plan on going to the movies this weekend, or this month, or at any point this summer, it might help if you like robots. Not robots with nuanced personalities. Or ones whose relationships with their organic counterparts are symbolic of some larger societal issue. No, just robots. Robots that are BIG, that are LOUD, that shoot laser beams from their FACES. You must have a desire to watch ones that fly across the screen and make weird noises while changing shape, and ones that can withstand thunderous blows from superpowered heroes and shrug off artillery fire from the U.S. military. You must adore their enormity and strength and apparent steel-shell invincibility. You must quiver with excitement over their overwhelming clankity-clanking awesomeness.

Moviegoers got their latest robotic fix with Transformers: Age of Extinction. Michael Bay’s latest robo-saga comes mere weeks after Edge of Tomorrow, in which Tom Cruise’s military man – trapped in a time-loop that forced him to relive the same battlefield day over and over again – combated extraterrestrial foes with the aid of a heavy-duty mecha-suit. And that was preceded by last month’s X-Men: Days of Future Past, which found the persecuted heroes traveling back in time to yada yada yada look out for those Sentinels! While they’re the impetus for the action in Days of Future Past – which is based on one of Marvel Comics’ greatest stories (first told in 1981) – the Sentinels have finally gotten their proper due on-screen because, over the past few years, blockbusters have decided that nothing provides a “wow” factor quite like a cinematic frame awash in hulking war machines. Whether they’re sentient automatons or human-controlled weapons, robots have become the 21st century’s go-to action-spectacle cliché.

And those three movies (as well as The Amazing Spider-Man 2, with Paul Giamatti's robotic exo-skeleton) come on the heels of Iron Man 3 (robot armor), The Wolverine (mecha samurai villain), Pacific Rim (people-operated robot Goliaths), Elysium (mecha exo-skeleton), the new Robocop (cyborg hero), Thor (alien robot) and even Man of Steel, whose penultimate set piece involves Superman battling two opponents, one of whom is a faceless giant in head-to-toe armor whose appearance and behavior make him a robot in spirit, if not fact. And did I mention that Ultron, the villain of next summer's Avengers: Age of Ultron, is also an evil robot? This is overkill, pure and simple. And it's a prime reason so many superhero and special effects–saturated extravaganzas feel the same.

We’ve come full-circle, in a sense, as 2007’s original Transformers is to blame for this trend. While it’s altogether too easy to pile onto Michael Bay and his Hasbro toy-based franchise, part of its legacy was providing proof to studio executives that there were untold millions to be made from fundamentally basing films on CGI giant mechanical creatures. And moreover, that those creatures didn’t need to have distinctive traits, or even be visually lucid, to satisfy audiences. Bay’s Autobots and Decepticons make only passing attempts at personality – sure, Optimus Prime is noble and Bumblebee is loyal, but that’s like giving credit to a Superman movie for making the Man of Steel courageous. Furthermore, it ignores the fact that most of the other Transformers are buzzing, rotating, pointy-edged contraptions indistinguishable from one another, giving the films a visual schema defined by blurry metal moving fast through fiery explosions in metropolitan centers.

Of course, science fiction, fantasy, and superhero films have always been fascinated by robots. Yet from Metropolis, The Day the Earth Stood Still, and Westworld to 2001, Blade Runner, Terminator 2: Judgment Day, and A.I.: Artificial Intelligence, robots have been capable of serving as more than merely token elements to amplify a movie's gee-whiz quotient. Rather, they've been most compelling when their nature, their personalities, and their relationships to their makers or users have spoken to grander ideas about man's dependence on technology. Often, as in Alien and The Terminator, they help a story touch upon the dangerous unreasonableness of expecting machines to comprehend and exhibit feelings of compassion and mercy, and to be relied upon to safeguard people's interests. Or, as in 2009's underrated Moon, in which computer system Gerty 3000 (voiced by Kevin Spacey) acts as both an emotional friend and foil to Sam Rockwell's astronaut, they help shine a light on issues of solitude, loneliness, and what it means to be human.

What they don’t do, when at their best, is simply function as special effects to be oohed and aahed at for their size, their volume, and the neat-o technical wizardry that went into creating them. While robots are sometimes necessary as suitably strong, larger-than-life adversaries for superpowered he-men like Superman and the X-Men, their ubiquity has rendered them not just dull, but downright unimaginative – an easy way to provide some been-here, done-that computerized bang for one’s buck at the expense of genuine thrills. The Sentinels may rise up to carry out a murderously intolerant agenda in Days of Future Past; Edge of Tomorrow’s power suits may help humanity defeat invading E.T.s; and the Autobots – and Dinobots! – may help humanity temporarily stave off extermination in Age of Extinction. Yet for the health of our current CG-overloaded big-budget action cinema, it’d be far preferable if it were all these massive mechanical Goliaths who were wiped out.

here:
http://www.vulture.com/2014/06/transformers-edge-of-tomorrow-robots-blockbuster-cliches.html

====================================================

We're Not Ready for Robots
The U.S. government, an insider argues, is ill-equipped for a world of automated warfare.
Megan Garber Jun 28 2014, 4:26 PM ET



ASPEN, Colo.—Should the U.S. establish a new federal agency to regulate robots?

Here's one potential problem with that proposal—one that has very little to do with the law, and very much to do with technology: "The government has virtually no experts on the inside that understand autonomous robotic systems."

That's according to Missy Cummings, a professor of engineering at Duke, an expert on drones and other robots, and a former fighter pilot. Cummings came to that conclusion—one that means, she says, that "the United States government is in serious trouble"—while advising the government in, among other things, its development of a $100 million robotic helicopter program.

"The one thing that I realized while I was on the inside,"
she said during a talk at the Aspen Ideas Festival, put on by the Aspen Institute and The Atlantic, is essentially that "the defense industry really cannot get the people that it needs for the robotics programs it would like to have." The U.S. not only doesn't know about robotics ... it doesn't know, in the words of another former member of the military, what it doesn't know. It doesn't fully understand how to test robots, Cummings says. It doesn't fully know how to regulate them.

Take drones. There are currently six sites, scattered around the country, that the FAA has established as testing areas for unmanned autonomous vehicles. But the agency, Cummings argues, likely won't be able to hire the people it's going to need to run these programs. It's a systemic problem, and one that begins with the education system. "Our country," Cummings says, "simply is not putting out enough" people—engineers, roboticists, software engineers—who have expertise in robotics. The government, in the military and beyond, isn't doing enough to incentivize or compensate technologists. "And the ones that we do train," Cummings adds, "are going to private companies like Google or Apple."

That means, among other things, a government that is ill-equipped when it comes to the work of regulation and oversight. Whether private industry's current hegemony over robotics is a generally good or bad thing is debatable, Cummings allows, "but I think it's certainly a problem when our government cannot assess whether or not technology is decent—or even ready to be deployed."

Which leads to another reason to think that "the United States government is in serious trouble." While the U.S. is lagging when it comes to robotics' human resources, Cummings says, other countries are quickly catching up. They're developing their own expertise with automated technologies—including, alarmingly, automated weaponry. Drones, for better or for worse, "are a true democratization of technology," Cummings says; they put significant amounts of power in the hands not just of states, but of individuals and other extra-state actors. And if the U.S. is ill-equipped, systemically, to deal with warfare that is newly democratized and newly weaponized ... "it's my prediction," Cummings says, "that we're about to have our you-know-whats handed to us on a platter."

here:
http://www.theatlantic.com/technology/archive/2014/06/our-government-is-not-ready-for-robots/373644/

====================================================

(if for some reason you cannot see this article, let me know and I will email you the pdf copy I made)

What Jobs Will Robots Have in the Future?
July 3, 2014 8:30 a.m. ET

Automation and digitization are transforming the workplace. With this in mind, we asked The Experts: What jobs do you see robots moving into in the near future?

This discussion relates to a recent Leadership Report and formed the basis of a discussion on The Experts blog in June 2014.

The Dark Side of Empowering Robots

NOREENA HERTZ: Do you remember those images of Japanese car factories in the 1980s with robots manning the production line? How futuristic they then seemed? Yet robots will increasingly encroach on the jobs that humans used to do. We already see robots replacing checkout staff in retail outlets. Expect within the next few years to see robots folding clothes at Banana Republic. In hospitals expect soon to see robots reading X-rays with more accuracy than any human radiologist.

Google Chairman Eric Schmidt recently said that "Robots will become omnipresent in our lives in a good way." "Omnipresent" I agree with. As for "good," I believe the picture is more nuanced. The takeover of man's work by machines will have associated costs.

Implications for the labor force will be significant, and it won't just be blue-collar workers who will be replaced. A recent paper by Carl Benedikt Frey and Michael Osborne, of the University of Oxford, predicts that as much as 47% of total U.S. employment is at risk from computerization. Jobs likely to be lost include not only production line workers but also paralegals, administrative support staff, telemarketers and a number of other white-collar occupations.

Expect, too, a whole host of machine-associated dangers to emerge. We've all been in taxicabs where the driver has taken us on a circuitous route—because he was "just following the GPS." Or think about how financial institutions massively underestimated their risk profiles at the dawn of the financial crisis because they gave too much ill-considered power to a single figure churned out by their bank's computer—value at risk.
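Value at risk, the "single figure" Hertz refers to, compresses a whole distribution of possible losses into one number: the loss a portfolio should exceed only rarely, say 1 day in 100. As a hypothetical illustration (the simplest historical-simulation method, with made-up numbers, not any bank's actual model):

```python
def value_at_risk(daily_returns, confidence=0.99):
    """Historical one-day VaR: the loss threshold that past
    returns exceeded only (1 - confidence) of the time.
    Returned as a positive fraction of portfolio value."""
    losses = sorted(-r for r in daily_returns)  # ascending; worst loss last
    index = int(confidence * len(losses))
    index = min(index, len(losses) - 1)
    return losses[index]

# Ten hypothetical daily returns; the worst day lost 5%.
returns = [0.01, -0.02, 0.003, -0.05, 0.012,
           0.007, -0.01, 0.02, -0.004, 0.001]
print(value_at_risk(returns, confidence=0.9))  # prints 0.05
```

The number is seductive precisely because it is one number: it says nothing about how bad losses *beyond* the threshold can get, which is the blind spot Hertz argues the banks handed their thinking over to.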

The gravest danger we face when we hand over our thinking to machines isn't that the machines malfunction. It is that we become increasingly incapable of thinking ourselves.

Noreena Hertz (@NoreenaHertz) is based at the Centre for the Study of Decision-Making at University College London. She is the author of the recently published "Eyes Wide Open: How to Make Smart Decisions in a Confusing World."

Where Robots Will Go Next

DOMINIC BARTON: Historically, robots have been used primarily for assembly-line manufacturing or other tasks that involve routine, repetitive actions.

In the future, we expect to see robots moving into more nonroutine jobs that require human interaction, problem solving and creativity. Already we are seeing evidence of this shift—whether in call centers, paralegal roles or advanced manufacturing. This has been driven in part by rapid advances in natural-language processing and machine learning.

The economic value created by increased automation is significant. In the industrial space alone, we expect that robots could provide up to $1.2 trillion in value by 2025 through labor-saving productivity gains. In addition, robot applications in medicine (e.g., mobility aids and surgery), commercial services (e.g., retail and logistics) and personal services could create more than $3 trillion in value by 2025 through improvements in quality of life and time savings.

Dominic Barton is the global managing director at McKinsey & Co.

Will a Robot Take Your Job? Or Provide a New One?

MARK MURO and SCOTT ANDES: The McKinsey Global Institute estimates that the global economic benefits of advanced robotics in manufacturing—to be realized through massive productivity and quality gains—could reach $1.2 trillion by 2025. That's a huge gain to the world's welfare, but it comes with some disquiet. It implies that automation is threatening more manufacturing occupations much faster than might have been anticipated just a few years ago.

The range of what can be automated is widening rapidly. The first robotics revolution—ushered in by General Motors in the 1960s with robotic arms that stacked hot die-cast metal pieces—substituted capital for labor in the most dangerous, difficult and labor-intensive tasks. Today, we're in the midst of a second robotics revolution. Thanks to the new field of "machine learning," second-generation robots no longer require step-by-step commands by a human. Workers with highly routinized tasks—such as industrial painters, machine setters, laminators, fabricators—are most at risk of replacement. Yet workers with "middle" skills—those with mechanical, electrical or industrial credentials, as well as social perceptiveness and supervisory ability—complement automation and continue to be in high demand.

What is coming now, though, is a much more disruptive third era of industrial automation. Thanks to advances in artificial intelligence, machine vision, sensors, "big data" analytics, motors, and hydraulics, robots are becoming increasingly dexterous, smart and autonomous—not to mention cheap. Coming fast are robots that can take on more delicate tasks such as intricate electronics assembly, and work more easily for and with their human tenders. As a result, various machine operators, precision solderers, and electronic-equipment assemblers are all now at risk of replacement. All told, McKinsey estimates robots will replace 15% to 25% of industrial-worker tasks within a decade. Clearly the coming wave of automation will further reduce America's manufacturing employment per unit of output.

And yet, there could be benefits. For one, the ability of robots to drive costs down could encourage global companies to move some production back to the U.S., creating jobs. And for some, there will be job opportunities in minding, maintaining or improving the bots. On that score, an increasingly heard word of career advice will be: "robotics!"

Mark Muro (@markmuro1) is a senior fellow at the Brookings Institution and the policy director of the Metropolitan Policy Program there. Scott Andes is a senior policy analyst at the Brookings Institution.

How New Robots Are Smarter Than Ever

ROBERT PLANT: The term "robot" usually brings visions of large automatons, assembly-line machines with sparks flying from welding torches, each with a specific task to perform and all synchronized in perfect harmony. And yes, this is the case in many factory environments. These robots are in many ways amazing in what they do and the precision with which they do it. They have formed the very basis of the re-engineered workplace of today. But just like the human labor they replaced, they themselves are being replaced by more intelligent, agile and adaptive robotic systems. These new robots can understand that with no item in their "hand" they can't fulfill the function of, say, placing a nut on a bolt, so the system then has to compensate, interact with the track, locate a bolt, reinitiate at the track speed and then undertake the function. Adaptive robotics allows systems to be placed in a variety of new scenarios, including as aides to skilled assembly-line workers, physicians, and the frail or disabled members of our community.

The service industry is ripe for automation; the Dalu Robot restaurant in Jinan, Shandong, China, says it is developing a robotic system to cook meals and then deliver the food to the customer's table, while Momentum Machines of California says it has created a "smart restaurant" robotic system that not only takes the order but can then create 360 gourmet burgers an hour.

Moving down the supply chain, Kiva, the robotics system acquired by Amazon for $775 million in 2012, provides "innovative material handling technologies" that move the inventory around automated warehouses, removing the human element. It is possible that these systems will soon be integrated with autonomous delivery drones flying packages directly to our homes. Amazon may even ship products before you order them, through the use of artificial-intelligence "software robots" that monitor your behavior, lifestyle, and even the inventory in your refrigerator through its "Dash" magic-wand hand-held bar-code scanner. Shopping will never be the same.

The premise voiced by automation advocates is that robotic systems will free us to do more innovative things and have more free time. Ironically, the same was said about robots and computers in the 1960s and 1970s but it hasn't yet worked out to be that way.

Robert Plant (@drrobertplant) is an associate professor at the School of Business Administration, University of Miami, in Coral Gables, Fla.

Use Robotics to Reimagine Your Business, Not Re-Engineer It

CESARE MAINARDI: Is your job high volume, well defined and repetitive—one that needs tight quality control and involves a lot of lifting and moving? If yes, you're likely in greater danger of being replaced by a robot. Robots won't replace workers in jobs that require imagination, creativity, empathy, solution making and a "human touch." But they will take on work that's dull, dirty, and dangerous.

Every company wants to make a better, faster, cheaper car, phone, [insert your product here]. But digitizing your business smartly goes well beyond just having machines make things that people used to make. Re-engineering your manufacturing through technology isn't a bad thing, but it's a table-stakes move. Most companies are going to get value from applying robotics and automation to manufacturing. So you can't stop there.

The full "art of the possible" isn't simply to look at how robots or technology could improve your production line, but to look at your entire business without being blinded by legacy beliefs about how people, processes, technology, capabilities and culture all "should" come together. Take the view of someone who's going to disrupt your business—and then design your digital strategy (and hire your robots) from there. This uses technology to fundamentally reimagine your business, not just re-engineer it. And if you do it well, you'll be rewarded with top-line revenue growth, higher multiples by the market, and ultimately a faster, more profitable business.

Cesare R. Mainardi is chief executive officer of Strategy&, formerly Booz & Co.

here:
http://online.wsj.com/articles/robots-how-will-they-be-employed-in-the-future-1404390617

====================================================

Truck of the future aims to drive itself
   
By Ben Brumfield, CNN
updated 9:53 PM EDT, Fri July 4, 2014 |


Mercedes' Future Truck 2025 will drive by itself. A prototype took a 3-mile self-guided trek.

In the spacious cab, the driver will be able to turn away from the wheel, gas and brakes.

Truck drivers will have time to surf the Internet or monitor data, while Future Truck 2025 watches the road.

In the future, vehicles will communicate with each other and tap into Big Data, Mercedes-Benz envisions.

Multiple radar systems, stereo cameras and wireless LAN watch the road and other vehicles.

Mercedes' truck of the future should automatically keep its distance from other vehicles but not pass them automatically.

more here:
http://www.cnn.com/2014/07/04/tech/mercedes-future-truck/index.html?sr=fb070514Truck3pStoryGalLink

====================================================

6/06/2014 @ 1:23PM
Autonomous Cars Like The Google May Be Viable In Less Than 10 Years

Neil Winton
Contributor

BRUSSELS, Belgium – President Barack Obama’s motorcade, abetted by the limousine cavalcades of his G7 leader colleagues and non-stop rain, brought traffic to a standstill here this week, making those stranded in their cars or diving into the underground railroad system for relief wonder whether computer controlled cars might one day make this aggravation a thing of the past.

News of Google’s autonomous car, which can transport two passengers around at speeds of up to 25 mph with the computer controlling the steering wheel and brakes, has set off speculation about just when this technology will be available.

Could it be with us in less than 10 years?

“Yes,” says Peter Fuss, Germany-based automotive specialist from the EY consultancy.

Fuss told the annual Automotive News Congress here that so-called autonomous driving will arrive in less than 10 years, spurred on by safety and comfort benefits.

“No,” said other assorted experts at the conference, led by Volvo, who reckoned 10 to 15 years was more likely.

Nobody thought the computer controlled car was pie in the sky.

Peter Mertens, senior vice-president at Volvo Cars Corp, said many of the basic technologies have already been developed, including systems like radar cruise control, which keeps a constant speed on the highway and slows the car down when it approaches a slower car. The selected cruising speed is reinstated when the computer senses the coast is clear. Other techniques already in use include “city-brake”, now standard on many Volvos, which takes control of braking from the driver when the computer senses an imminent crash at speeds under 20 mph. Computerized parking, and “steer assist”, which senses that the car will go out of control unless curbed, are becoming commonplace. It’s really a question of developing and consolidating these systems, Mertens said.
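The radar-cruise-control behavior Mertens describes — hold a set speed, slow behind a slower car, resume when the coast is clear — reduces to a simple decision rule. A toy sketch (real systems use smooth control loops and fused radar/camera data, not bare if-statements; speeds, gaps, and the function itself are illustrative):

```python
def cruise_target_speed(set_speed, lead_speed, gap, min_gap=40.0):
    """Decide the speed to hold this instant.

    set_speed:  driver's chosen cruising speed (m/s)
    lead_speed: speed of the car ahead (m/s), or None if the lane is clear
    gap:        distance to the car ahead (m), or None if the lane is clear
    min_gap:    following distance to maintain (m)
    """
    if lead_speed is None or gap is None:
        return set_speed            # coast is clear: hold the set speed
    if gap < min_gap and lead_speed < set_speed:
        return lead_speed           # too close to a slower car: match it
    return set_speed                # otherwise keep the chosen speed

print(cruise_target_speed(30.0, None, None))  # prints 30.0 (clear road)
print(cruise_target_speed(30.0, 22.0, 35.0))  # prints 22.0 (slow car 35 m ahead)
```

The "reinstated when the coast is clear" behavior falls out of the rule automatically: once the sensors stop reporting a lead car, the function returns the set speed again.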

The new Mercedes S class sedan already incorporates many technologies on the path to computer control.

On its drive towards autonomous cars, Volvo will have 100 cars in Gothenburg, Sweden in 2017, which will be able to drive around known routes without input from the driver.

These cars will take specially selected and trained customers on selected routes, although it’s not clear why they need to be selected or trained if the computer is doing all the work.

“It will be 10 to 15 years before you can get into the back of the car and read a newspaper,”
Mertens said.

Volvo is experimenting currently with cars which park themselves automatically, but which also recognize actions not directly related to the parking. This allows research to take place under restricted and safe conditions. Mertens said humans are good at recognizing danger, but poor at reacting and the industry needs to merge these two worlds.

“The technology to control cars is still pretty weak, given the scale of possibilities presented by real world driving,”
Mertens said. Accidents are currently 95 per cent the result of human error, but it is not known how many accidents are thwarted by skilful drivers.

Karlheinz Haupt from Germany’s Continental AG said as the technology develops, highways are likely to divide traffic into three sections – one lane for fully automated vehicles, one for highly automated ones, and the other for partially computer controlled vehicles. Haupt said the process will be evolutionary, not revolutionary. He said 2016 will see the first partially automated cars, with fully automated ones arriving from 2025.

The EY consultancy said autonomous cars will also accelerate the change in the way people own cars, driving what it called “different integrated mobility systems” like car sharing, and ideas about offering car rentals for shorter distances as part of the public transport system. EY pointed out autonomous driving makes sense to cut road accidents, now the 8th leading cause of death globally, and to curb the expected doubling of traffic delays from congestion by 2050, when 6.3 billion people, or 70 per cent of the world’s population, will live in towns and cities.

EY’s Fuss also left the congress with a chilling thought, which reminded the industry players that they will have to hang tough at some point during the introduction of computerized cars.

“The first autonomous car to kill will have a tremendous negative effect,” Fuss said.

here:
http://www.forbes.com/sites/neilwinton/2014/06/06/autonomous-cars-like-the-google-may-be-viable-in-less-than-10-years/

====================================================

Intel Chases Sales on Silicon Road to Driverless Cars
By Ian King June 30, 2014

Intel Corp., Qualcomm Inc. and Nvidia Corp. -- pioneers in the production of chips for computers and phones -- are finding it harder to make inroads into the auto industry.

Consider Hyundai Motor Co.’s new 2015 Genesis, a luxury sedan brimming with semiconductors that handle everything from automatic braking and lane-keeping sensors to blind-spot detection. Other chips enable the car to open the trunk when it senses the owner’s arms are full, and to sniff for carbon dioxide to decide if the cabin needs more fresh air.

While the Genesis represents the forefront of the auto industry’s use of chips, only a handful of the vehicle’s thousands of semiconductors is provided by Intel. Qualcomm and Nvidia don’t even make the list. The main hurdle is the industry’s safety and reliability standards, which far exceed those for computers or phones. Instead, most of the electronic components are provided by longtime suppliers, like Freescale Semiconductor Ltd., Renesas Electronics Corp. and STMicroelectronics NV, which have proven track records.

“We don’t get a beta test with our products -- they have to work from the first one,” said Mike O’Brien, a U.S.-based vice president of product planning for the Korean automaker, explaining the company’s cautious approach to chips in its cars. “We can’t say, ‘Oops, we didn’t do that right.’”

Safety Standards

Hyundai’s Genesis illustrates the obstacles for Intel, Qualcomm and Nvidia -- whose chips dominate in computers and phones -- as they try to crack a potentially lucrative market. Cars are increasingly filled with complex computing and communications systems and driverless vehicles are getting closer to becoming a reality.

The market for automotive chips is projected to grow 6.1 percent to $27.9 billion this year, according to IHS Corp. Within that business, sales of chips for advanced driver-assistance systems, or ADAS, will increase an average of 13 percent a year through 2020, making it the fastest-growing area.

Even as the systems proliferate and software developers such as Google Inc. and others roll out plans for connected entertainment and mapping systems, carmakers have been slow to switch to unproven chip suppliers because their products are governed by rigorous safety requirements. When a computer crashes, a user might lose some data. When a car crashes, people can get hurt.

For autos, chips have to withstand temperatures as low as minus 40 degrees or as high as 160 degrees Celsius (minus 40 to 320 degrees Fahrenheit). They need to be available to carmakers for up to 30 years and have a zero failure rate, according to a study by PricewaterhouseCoopers LLP. By comparison, consumer-device chips only need to be around for a year and are built to fail less than 10 percent of the time.

“Experience in automotive is something that you don’t grow in one day,” said Luca De Ambroggi, an analyst at IHS. “The requirements are still tough.”

Touting Strengths

The newcomers are initially going after in-vehicle entertainment and driver-assistance functions by touting their strengths -- Intel’s processing, Nvidia’s graphics capabilities and Qualcomm’s wireless communications. As consumers come to expect their cars to get better at the same rate as their smartphones, tablets and laptops, the demand is there, yet it takes time to bring new technology to market while keeping the driver safe and free from distraction, Hyundai’s O’Brien said.

For example, deciding that automated steering requires too much effort to turn the car, and adjusting the software to lighten it, could take two months of testing. When Hyundai was building a reversing system with lasers and cameras, it found that the technology initially couldn’t tell the difference between obstacles and steep driveways.

All of this complexity and expense needs cooperation from component suppliers, O’Brien said. Carmakers are looking for chips that they can tune to do the job of many, he said.

Supercomputers in Cars

That should be good news for Intel, Qualcomm and Nvidia, which make some of the fastest processors available. All three say they’ve got products in the market or coming that meet the most stringent automotive requirements.

Nvidia said its processors are now powerful enough that they can be partitioned -- devoting part to functions that must work no matter what, and others to information and entertainment, where hiccups are less dangerous.

“We’re seeing a lot of interest in the industry in the new technologies,” said Danny Shapiro, Nvidia’s senior director of automotive. “Ultimately every car is going to have a supercomputer.”

The ability to quickly capture and process images allows vehicles’ computers to know what’s going on around them and to alert drivers to potential hazards. Shapiro said that requires massive parallel processing -- something that Nvidia’s graphics chips excel at.

here:
http://www.businessweek.com/news/2014-06-30/intel-chases-sales-on-silicon-road-to-driverless-cars
