Do you remember that friendly (albeit a bit creepy) little paperclip, “Clippy,” that Microsoft introduced years ago as part of Office 97? If you don’t remember him, don’t worry; he was retired a few years later.
All the recent talk of the democratization of artificial intelligence (AI), from its infusion into Microsoft Office and Microsoft’s personal assistant, Cortana, to Salesforce.com’s Einstein and Apple’s Siri, points to the growth of AI as the future of computing.
While there is a clear role for AI-driven digital assistants in both the business and consumer arenas, it might be time to hit the pause button before we get too far ahead of ourselves. AI may not be necessary everywhere in everyday life — and in some cases it can get in the way.
Take, for example, the student working hard to complete a history research report. Maybe he doesn’t want a reminder every few minutes that the deadline is fast approaching; or maybe a simple question about how many people lived in Germany in the 1800s returns so much information that he’s sorry he asked in the first place.
The point is, while AI can play a key role in the future of computing, it shouldn’t be part of a checklist of technology specifications required for every situation. You can’t start with technology and fit it into a problem; you need to start with a problem and develop the technology to address it.
The new reality of a world filled with AI
The reality of AI and machine learning in the real world is that an assistant must accumulate a lot of information about you to be most effective. It needs to get to know you really well: your spending habits, your work habits and the types of junk food you love. Many people may not be ready for that level of intimacy with a robot.
This brings up another consideration: while it’s one thing to share information with an AI system to help improve our own productivity, it’s another thing to allow retailers and other companies to benefit from our information for their own profit. But, in this new AI-driven world, we may not have a choice in the matter — if we want a discount on a product we are buying, we need to hand over the data.
Using AI for real problems
Even if you believe AI doesn’t need to become ubiquitous, its importance in new frontiers can’t be denied. A recent report from Accenture, which evaluated 12 developed countries, found that AI could increase labor productivity by up to 40 percent by 2035. In addition to efficiency gains, smart AI-based software can free up workers so they can work more strategically to help their companies grow.
And even more significantly, AI, working in collaboration with humans, can drastically change human life for the better. In science and medicine, for example, it can help isolate genes to better understand how the human brain works, help cure diseases, and enable the blind to navigate their world more easily. In manufacturing, it can help companies identify products that are likely to have problems through more intelligent quality testing at each step of the production process.
While AI is poised to transform the digital world, it’s important to separate fact from hype. Not everyone needs machine learning for everyday living now, and maybe they never will. I must say, having my digital assistant know what I ate for breakfast and how often I checked my Twitter account, or having it surface information I don’t need when I’m on deadline, makes me skeptical about how necessary it really is. But, as with any disruptive shift, time may change this viewpoint, as long as we use technology to solve problems rather than as an end in itself.
This article is published as part of the IDG Contributor Network.