NEW YORK – Impressive gains in computing power are enabling artificial intelligence (AI) to create smarter products and services. In some cases, AI is literally saving lives.
As the world’s leading AI experts gathered this week in New York for the O’Reilly Artificial Intelligence Conference, they showed off impressive new applications of AI across a broad spectrum of workplaces.
But they also wrestled with the tensions between hype and results, and how to help AI realize its potential to transform American life.
“We feel we are on our way to making a very significant impact on society with compute power,” said Pradeep Dubey, an Intel Fellow at Intel Labs.
AI pioneer Peter Norvig, the Director of Research at Google, says artificial intelligence is about to change our relationship with technology.
“I see this as a revolutionary time, because it looks like we’re making another transition in our platforms,” said Norvig. “We’re now 11 million Alexas into the personal assistant era. When that platform comes to be, it inherently encourages more AI.”
Norvig said the AI-powered emergence of personal assistants and voice interfaces will follow the path of previous platform transitions – such as the shift from mainframes to PCs, and from PCs to mobile devices – which have had widespread impacts on our economy and society.
The reason AI is promising big things, Norvig says, is that it has the ability to deliver on those promises.
“There are always buzzwords, and some will be overhyped,” said Norvig. “I don’t think this is hype.”
“AI is too amorphous to people right now, and that’s causing some consternation,” said Jana Eggers, the CEO of Nara Logics, which builds AI systems to help companies make better decisions. “I think the hype is actually low, compared to the potential of the technology.”
Putting AI to Work
The best way to help people understand the impact of a technology is to see it in action. The theme of the O’Reilly event was “Putting AI to Work,” and the conference showcased many examples of how cutting-edge AI is pushing the boundaries of how technology can assist society.
In one of the most impressive examples, Johns Hopkins AI researcher Suchi Saria demonstrated how the hospital is using machine learning to improve its ability to detect the onset of septic shock in emergency room patients – a scenario where early diagnosis can dramatically improve patient survival rates.
“Timeliness is everything,” said Saria. “Sepsis is preventable if treated early, but it’s very hard to diagnose early.”

Johns Hopkins created a program called TREWS (Targeted Real-Time Early Warning Signs) that analyzes electronic health records and patient data – including vital signs, blood lab work and bedside reports – to alert doctors to problematic trends. Saria said TREWS can “detect the subtle signs that a patient is deteriorating.”
The warning signs of septic shock can emerge over days or weeks. Saria said TREWS helped Johns Hopkins correctly identify 85 percent of sepsis onsets, accelerating detection by 24 hours on average and preventing organ damage in two-thirds of cases. The system is in use at Johns Hopkins in Baltimore and at the nearby Howard County hospital, and is now being implemented at three other hospitals.
“This is something we’re very passionate about,” said Saria, who said that over time TREWS can be adapted for early diagnosis of other conditions. “This strategy is not unique to sepsis.”
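TREWS itself is proprietary and its model is learned from patient records, but the general pattern it embodies – scoring streaming vitals and alerting clinicians when the score crosses a threshold – can be sketched roughly. All feature names, thresholds and weights below are illustrative inventions, not anything from the Johns Hopkins system:

```python
# Illustrative early-warning sketch over patient vitals. A real system like
# TREWS learns its features and weights from electronic health records; the
# hand-set rules and weights here are made up purely for demonstration.

def sepsis_risk_score(vitals):
    """Combine vital-sign deviations into a single risk score."""
    score = 0.0
    if vitals["heart_rate"] > 90:            # tachycardia
        score += 0.3
    if vitals["resp_rate"] > 20:             # rapid breathing
        score += 0.3
    if vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0:
        score += 0.2                         # fever or hypothermia
    if vitals["systolic_bp"] < 100:          # falling blood pressure
        score += 0.2
    return score

def check_patient(vitals, threshold=0.6):
    """Alert clinicians when the combined score crosses the threshold."""
    score = sepsis_risk_score(vitals)
    return {"score": score, "alert": score >= threshold}

result = check_patient(
    {"heart_rate": 104, "resp_rate": 24, "temp_c": 38.6, "systolic_bp": 95}
)
print(result["alert"])  # all four signs abnormal, so the alert fires: True
```

The point of the sketch is the shape of the pipeline, not the numbers: continuous data in, a single interpretable score out, and an alert early enough for clinicians to act.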
Using AI to Boost the Bottom Line
AI is also creating ways to boost the bottom line for businesses. Salesforce Chief Scientist Richard Socher kicked off the O’Reilly conference by unveiling new capabilities for its Einstein suite of AI services:
- Einstein Sentiment sorts the tone of texts and emails as positive, negative or neutral, enabling developers to highlight customer feedback requiring priority status. This could be used to build apps that surface angry tweets or emails, showcase compliments and positive feedback, or identify customers experiencing shipping problems.
- Einstein Object Detection can recognize objects in images, detecting color, size and shape, which can allow developers to create apps to automate inventory management via images of shelves or warehouses.
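The Einstein Sentiment API itself isn’t shown here, but the triage pattern it enables – classify each message by tone, then surface negative feedback first – can be sketched with a placeholder classifier. The keyword lists below are a stand-in for a trained model or a vendor API call, invented for illustration:

```python
# Toy tone classifier standing in for an AI sentiment service such as
# Einstein Sentiment. A real app would call the vendor's API; the keyword
# matching below is purely illustrative.

NEGATIVE = {"angry", "broken", "late", "refund", "terrible"}
POSITIVE = {"great", "love", "thanks", "perfect"}

def classify(message):
    """Label a message positive, negative or neutral."""
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def triage(messages):
    """Sort messages so negative feedback gets priority handling."""
    order = {"negative": 0, "neutral": 1, "positive": 2}
    return sorted(messages, key=lambda m: order[classify(m)])

inbox = [
    "Love the new dashboard, thanks!",
    "My shipment is late and the box arrived broken",
    "Please update my billing address",
]
print(triage(inbox)[0])  # the shipping complaint is surfaced first
```

Swapping the keyword lookup for a real sentiment model is what turns this from a toy into the kind of priority-inbox app the Einstein announcement describes.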
Salesforce has more than 150,000 customers that use its platform for customer relationship management and other business use cases.
Artificial intelligence is being widely adopted by marquee tech companies like Amazon and Google, which are using AI to improve virtual assistants such as Alexa. These voice-driven assistants loom as the new gateways to the broader universe of products and services these companies offer.
Filtering Out the AI Hype
Yet sorting out the benefits of artificial intelligence remains challenging for many businesses. AI is actually a spectrum of technologies, and the market has entered a phase where it is hard to separate substance from hype. There are now more than 3,000 companies focused on AI products and services, according to Kathryn Hume, the VP of Product and Strategy at integrate.ai, a Toronto-based machine learning company. Annual investment in AI startups has grown from $589 million in 2009 to $5 billion in 2016.
“We are indeed within a hype cycle,” said Hume. “There’s a lot of money going into the space, and a lot of companies being formed.”
In many ways, the artificial intelligence market resembles the state of cloud computing about five years ago. There’s a broad recognition that the technology will be game-changing, but there’s also a lot of buzz and hype, and arguments about definitions – all of which makes it challenging for businesses to know how to capitalize on it.
“All companies know is that there’s a technology that’s great,” said Kristian Hammond, Chief Scientist at Narrative Science, who conducted a workshop at the conference on bringing AI into the enterprise. “They’re not yet clear on how they will use it. There’s business there, and it’s a driver. If we let businesses tell us where there are problems to be solved, it will drive us forward.”
The cloud precedent would suggest that before long, businesses will match the technology to their business problems, and adoption will explode – a trend clearly seen in the current surge in demand for data center space to support cloud computing platforms.
That underscores a crucial task that once confronted the cloud crowd and now faces the AI community. “There’s an opportunity for us to clarify the signal within the noise, and distill value and make it accessible,” said Delip Rao, founder of Joostware, who has worked on natural language processing (NLP) at Google, Twitter and Amazon.
Parallels With Era of Cloud Confusion
As with cloud, language and definitions can create confusion. “Everyone has a different definition of AI right now,” said Eggers. “‘Artificial intelligence’ tends to be the scariest and most universal term. Everyone has a picture in their head, but everyone’s picture is different.”
With AI, that task is complicated by the need to provide context for today’s specialized applications of artificial intelligence, and decouple it from SkyNet and other science fiction narratives (for background, see Beyond SkyNet: Reframing AI as a Force for Good).
“We try to stay away from biological metaphors, because it gives an increased expectation,” said Rao. “We look at better ways to use the data we find in the real world. I tend to steer away from the brain/mind analogy.”
There’s a big difference between “vertical intelligence” in which humans train machines to perform specialized tasks, and the “general intelligence” that stars in science fiction, in which machines can think for themselves and decide to take over the world.
“I think there’s an overestimation of how far we are from a general intelligence system,” said Rana El Kaliouby, CEO and co-founder of Affectiva, which uses facial recognition to add “emotional intelligence” to AI applications. “We’re in the early days of this research. Narrow AI is very transformative.”
“I think specificity is the key thing now,” said Hammond. “You label it as you please and say ‘this is what it does.’”
Computing Power Enables the Revolution
The application of artificial intelligence is being accelerated by dramatic advances in computing power, including the emergence of GPUs (graphics processing units) for parallel processing, and the development of custom hardware using specialized chips like FPGAs and ASICs.
“Things we thought were impossible five years ago now feel easy,” said Naveen Rao, VP and General Manager of AI Products at Intel Nervana. “I hope everyone can appreciate the transformation that’s happening.”

“I think this is underappreciated,” Rao added. “We’ve actually hit a critical mass. Now it’s really about tools, making it easy and accessible, and bringing down the cost of solutions. Competition in this space is actually important to driving the industry forward.”
As noted by his Intel colleague Pradeep Dubey, the advances in computing are accompanied by decisions that impact the workplace.
“AI has the potential to increase the target market for computing, but the implications go well beyond computing,” said Dubey.
“Machines crunch numbers. Humans make decisions,” he said. “This division of labor is what is being challenged by the next round of AI. To the extent this holds, it will be very disruptive. The implications go well beyond computing. It’s truly transformational. It’s giving rise to a new class of machine-led services. This is a different matter from the compute implications. Humans may take a back seat.”
AI-driven automation is already making an impact in the workforce, and researchers and analysts expect that to accelerate. But they say employment is another area where AI’s impact will be felt in many ways, some positive and some negative. Smart machines and robots may automate some jobs, but the largest application of AI in the workplace will be “augmentation” – adding new capabilities that make employees more effective.
“There’s places we’re going to fully automate,” said Eggers. “There’s a lot more places we’re going to augment.”
What’s clear is that the role of AI will be defined as we move forward into the platform transition described by Norvig, with early use cases outlining areas of opportunity and concern.
“The technologies that we design shape our world,” said Madeleine Elish, a researcher and anthropologist at Data & Society. “The technologies that we design today will shape our future. Social implications are always enmeshed in tech, in really complex ways.”