For a technology that’s decades old, artificial intelligence managed to emerge in the public imagination as one of the signature technologies of 2018 — if not always in a positive way.
On the upside, AI and related technologies such as machine learning and deep learning enable now-taken-for-granted services such as speech recognition in smartphones and devices such as Amazon.com Inc.’s Echo and Google LLC’s Home, not to mention self-driving cars, better disease diagnoses and, less obvious but at least as impactful, more automated information technology infrastructure in the cloud and data centers.
At the same time, AI has been used to target people with fake news and to discriminate against certain kinds of workers or customers, and it has stoked fears, albeit likely overblown ones, that machines could make most jobs obsolete before long. Not least, some leading lights such as Tesla Inc. Chief Executive Elon Musk and the late physicist Stephen Hawking have raised concerns, still hotly debated, that runaway AI could threaten human existence.
Both for good and for ill, the coming year no doubt will see an acceleration in the use of AI and machine learning across a wide variety of products, businesses and everyday activities. Here are some predictions of what’s coming (and what’s not), along with what the experts think:
AI will become as important in enterprises as in consumer products
AI is already remaking business intelligence, according to James Kobielus, lead analyst for AI, data, data science, deep learning and application development at Wikibon, SiliconANGLE’s sister market research firm. That’s allowing business users to do much of the analysis that once required a trained data scientist.
Then there’s robotic process automation, or software that emulates how people carry out tasks in a process, which has become one of the principal enterprise use cases for AI. AI is also becoming a critical foundation for managing information technology infrastructure, an emerging paradigm known as “AIOps.” The idea, as Kobielus has pointed out, is to make infrastructure and operations more continuously self-healing, self-managing, self-securing, self-repairing and self-optimizing.
Not least, machine learning is starting to transform software development itself, by enabling machines essentially to create applications themselves rather than requiring developers to program specific logic and rules. Look for this to become more apparent in 2019, especially as cloud computing giants offer more and more AI services.
How others see it
- “In 2019, more BI vendors will integrate a deep dose of AI to automate the distillation of predictive insights from complex data, while offering these sophisticated features in solutions that provide self-service simplicity and guided next-best-action prescriptions.” — James Kobielus, Wikibon
- “Machine learning will enter an operational phase, getting out of backroom experiments to move into the fabric of real-time, mission-critical, enterprise applications.” — Monte Zweben, CEO of Splice Machine, quoted in ZDNet
- “Don’t talk to me about the one or two AI projects you’re doing; I’m thinking, like, hundreds.” — Rob Thomas, general manager of IBM Corp. Analytics, on theCUBE
‘Humans in the loop’ will become the mantra – but not always the reality
Because of how well AI-driven services such as Amazon’s Alexa often work, there’s an assumption that AI will take over all manner of work. That’s far from the case, at least anytime soon. McKinsey estimates that fewer than 5 percent of occupations can be entirely automated using current technology, but some 60 percent of occupations could see at least 30 percent of their activities automated.
All that means that for 2019 and several years beyond, some of the most successful applications will be those that help people do their jobs better, whether it’s clinicians parsing MRI scans, factory workers working alongside industrial robots or mortgage loan officers trying to process more prospects.
That said, some of the insistence that AI is just a tool rings a bit hollow, given that one person’s higher productivity often comes at the expense of someone else’s job. If AI is truly to benefit society without putting a lot of the people in that society out of work, AI providers and the companies that use it will need to start proving that case in 2019. And both private industry and governments will need to step up with solutions for the people who do lose jobs as a result of AI’s efficiencies.
How others see it
- “In 2019, AI will continue to make our work lives easier, and allow us to accomplish more…. Workers will choose to own certain tasks or delegate projects to the machine based on our preference.” — David Judge, SAP’s vice president of SAP Leonardo, machine learning and intelligent process automation, quoted in ZDNet
AI will become a little more transparent as faults and fears mount
One big knock on machine learning, especially the kinds such as deep learning that use artificial neural networks, is that the algorithms used to produce the results are a black box. You feed in a lot of data and get a result whose provenance isn’t always clear, and which is sometimes wrong: a self-driving car may stop unexpectedly for a small, insignificant object on the road, yet occasionally kill a person it didn’t appear to see or comprehend correctly.
Just as bad, the data on which AI systems are trained can be faulty or biased. For example, Amazon.com Inc. had to scrap its AI-driven recruitment tool after it became apparent in 2015 that it was favoring men over women: because most of the past applicants who got hired were men, the system concluded that male candidates were superior. This year, that realization will likely turn into more action to prevent this kind of bias — by legislation if necessary.
Although there’s only so much that can be done to open up that black box, any more than we can see into people’s brains to analyze their decisions, there’s a growing demand, especially from lawmakers, to shed more light on AI’s inner workings.
No doubt some tech companies that view their data and the algorithms that wrangle it as a proprietary advantage won’t be leading the way here. Governments likely will mandate some level of transparency, though it’s not yet clear how. Either way, this will become an even bigger issue this year.
How others see it
- “AI and the power that is wielded by the global tech giants raises a lot of questions about how to regulate the industry and the technology. In 2019, we will have to start coming up with the answers to these questions — how do you regulate a technology when it is a multipurpose tool with context-specific outcomes? How do you create regulation that doesn’t stifle innovation or favor large companies (who can absorb the cost of compliance) over small startups? At what level do we regulate? International? National? Local?” — Rumman Chowdhury, managing director of Accenture’s Applied Intelligence division and global lead of its Responsible AI initiative, quoted in VentureBeat
- “When accidents happen, it might require the resolution of liability to be settled in a court of law. New case law will have to be created in order for the court to have enough reference material on difficult matters concerning liability.” — J. Gerry Purdy, principal analyst, Mobilocity LLC
- “Maybe we should borrow some ideas from human psychology” to make AI more explainable — Danny Lange, vice president of AI and machine learning at Unity Technologies, quoted in ZDNet
- “2019 will be the year of action. Greater numbers of pledges and declarations about the responsible creation and use of AI will be written and companies will be pressured to adopt them. The public will fight back over government use of biased AI in decisions impacting human rights. More employees will demand influence over what they create and refuse to contribute to harmful automation. Companies will have to lead with their consciences — whether they are buying AI solutions or building them — and seek assurances that the systems are fair in order to avoid being the next headline on AI gone awry.” — Kathy Baxter, principal of Salesforce.com Inc.’s ethical AI practice
- “It’s only a matter of time before Congress will start to regulate AI, and require more verification, country of origin and transparency on both the consumer and enterprise sides. Banks in particular need to beware of discrimination practices associated with the utilization of big data, and will have to constantly assess possible biases embedded in the algorithms by the humans involved in the development.” — Kayvan Alikhani, co-founder and chief executive of Compliance.ai
Bad actors will escalate their use of AI for deception, ahead of efforts to stop them
Whether it’s “deep fake” pornography, more capable AI-powered cyberattacks or a continuation of nation-states such as Russia targeting people on Facebook and other social media to influence elections, AI has just begun to show how much of a threat it can be in the wrong hands.
And as with most technologies, it’s impossible to keep AI out of those hands. So look for more bad stuff to emerge from the use of AI and machine learning in 2019. “There is a perfect storm of AI nasties just waiting to happen,” says Wikibon’s James Kobielus. “The human race has barely begun to work through the disruptive consequences of this bubbling cauldron of risk.” Problem is, we’ve only begun to understand the scale of the problem, let alone find ways to ameliorate it. That job has barely begun, but a lot of attention will be paid to it this year, both in private industry and by governments around the world.
How others see it
- “Many tradeoffs must be made and many people may find the resulting technological, regulatory and other remedies disproportionate to the peril. And we need political leadership everywhere who are themselves not going rogue on these matters. But we would be naïve to believe that society can ever fully protect itself from all the adverse consequences that may befall us from our AI inventions.” — James Kobielus, Wikibon
More specialized AI hardware will keep on coming
Nvidia Corp.’s graphics processing unit chips have dominated machine learning computing thanks to their ability to process many operations in parallel. But that was a bit of a happy accident for the chips, which were originally developed to speed up gaming graphics.
Now, a raft of alternative chips is about to hit the market from startups and big chipmakers such as Intel Corp. that have bought a number of those startups in recent years. Like Google’s Tensor Processing Unit chip that’s available via its cloud service, they are tuned to run machine learning algorithms purportedly even faster than GPUs. This year will show whether they can deliver on the promise.
More access to data sets and no-code tools will help democratize machine learning
So far, machine learning has been dominated by tech giants with a lot of data, such as Google, Amazon, Microsoft and Facebook — some of which also are among the leaders in cloud computing, so they can sell their data-driven services to others as well. That has led to fears that small companies will fall further behind because they simply don’t have access to nearly as much of the data that powers modern AI.
Those fears may not be as justified as they appear, for a couple of reasons. For one, companies that lead in particular industries, products and services, such as, say, General Electric Co. in engines, have plenty of data of their own that even the Googles and Amazons don’t have. For another, there’s a growing number of open data sources, as well as organizations pushing them, that may well help arm the little guys. Whether they succeed will become apparent in the next year or so.
How others see it
- “The implementation of machine learning will be very widely distributed. Google will not ‘have all of the data’ – Google will have all of the Google data. Google will have more relevant search results, GE will have better engine telemetry and Vodafone will have better analysis of call patterns and network planning, and those are all different things built by different companies. Google gets better at being Google, but this does not mean it somehow gets good at anything else.” — Benedict Evans, partner at Andreessen Horowitz
- “You do not need to know how the technology of a microwave works in order to use it, it is simply a tool. With the huge influx of no-code, point-and-click tools we are entering into the same phase with AI where it will become a widely used utility by everyone, regardless of technical background. As a result, most of the AI applications in the coming years will be built by people with little or no AI training.” — Vitaly Gordon, vice president of data science, Salesforce.com.
Self-driving cars still won’t see wide use anytime soon
There wouldn’t even be trials of self-driving cars were it not for the machine learning that can make sense of all the data from myriad sensors and at the same time make split-second decisions on what the vehicle should do. But the technology is far from perfect, as the deaths of a handful of drivers and pedestrians in the past couple of years prove.
More than that, though, many people clearly aren’t ready for fully self-driving cars. In Arizona, some people have been vandalizing and throwing rocks at Waymo vehicles. And companies, let alone governments, aren’t even close to figuring out accident liability and many other legal issues starting to arise. As a result, despite all the testing and promise, self-driving cars as any kind of mass phenomenon remain years away.
That said, big and well-funded companies from Waymo and General Motors Co. to Tesla, Uber Inc. and Lyft Inc. are driving full speed ahead to perfect the technology side. At the least, AI-driven vehicles may start becoming much more common for last-mile deliveries of products, either from drones or from ground-based machines. Don’t be surprised to see them rolling or flying to your doorstep in the coming year.
How others see it
- “People have been planning to have self-driving cars for a while. Some still fear an AI takeover might be just 20 years away but the truth is we’re still a long ways away from truly autonomous cars. Self-driving car features will continue to improve, but they will not take over the road.” — Richard Socher, chief scientist at Salesforce.com.