Are you using the term ‘AI’ incorrectly?
In the meetings we have with prospects and clients, I often hear people using the terms AI and Machine Learning as if they meant the same thing. But is that technically correct? Or is it carelessness? And if we are trying to be really accurate, who is really using the term ‘AI’ incorrectly, and in which cases?
I’m nowhere near understanding the complexities of those technologies, let alone pinpointing every subtle difference, so I can’t tell you much beyond citing definitions which “speak” to my not-so-techie mind (e.g. seeing this write-up yesterday of image interpretability work done by Google 🤯 blew my mind and kind of intimidated me with how much I do NOT know).
It turns out academia doesn’t consider Artificial Intelligence (AI) and Machine Learning (ML) interchangeable.
There are some subtle differences between those terms, and the easiest explanation I found is that of Jason Eisner, a computer science professor at Johns Hopkins. He states that:
“AI = building systems that can do intelligent things
NLP = building systems that can understand language ⊊ AI
ML = building systems that can learn from experience ⊊ AI
NLP ⋂ ML = building systems that can learn how to understand language
NLP pursues a set of problems within AI.
ML also pursues a set of problems within AI, whose solutions may be useful to help solve other AI problems.
Most AI work now involves ML because intelligent behaviour requires considerable knowledge, and learning is the easiest way to get that knowledge.”
Tech startup founders commonly make the mistake of assuming that other people care enough to learn the distinction and use the terms correctly; most people just use language in a less formal way.
Not to mention that the technology choice doesn’t really matter: a company shouldn’t choose your solution just because it’s “AI-powered” (it may still work on investors though 🤑🤑🤑). A number of businesses now advertise systems as “AI” simply because they use a lot of data.
Somewhere between ego and the carelessness of not thinking through the problems we really mean to solve and the data we need, most systems advertised as “AI” turn out rather ineffective. They often make problems look overcomplicated (a repeat of the “Let’s put EVERYTHING on blockchain!” story, when most security systems can do without it), and on occasion they require humans to act robotically, a.k.a. “human in the loop”, so that data is properly interpreted for the desired outcome.
So much for humanising our work.
Let’s go back to the roots of AI.
To understand the distinction between AI and Machine Learning, we need to go back to AI’s humble beginnings.
The term “AI” was born at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist within a generation, and they were given millions of dollars to make this vision come true. But things didn’t go as fast as planned, and in 1973, in response to criticism from James Lighthill and ongoing pressure from Congress, followed by pessimism in the press, the US and British governments stopped funding research into artificial intelligence. The years that followed would later be known as an AI winter. “AI” became an embarrassing term on grant applications, so academics replaced it with “Machine Learning”.
With poorly defined terms, there’s not really such a thing as using them correctly. Nope, you can’t point fingers.
Language evolves, whether we like it or not.
Despite years of struggle, the term ‘AI’ finally rose to mass fame on the promise of solving all our problems. The press and tech enthusiasts naturally distorted its meaning along the way. That distorted sense is how the industry now uses the term, and it’s worth explaining this modal usage to readers who aren’t interested in subtle distinctions. It’s okay to let language evolve, as long as we keep up.
But watch this. What you’re calling AI may just be reinforcement learning (RL), one technique within Machine Learning. How do you know? If your use case hinges on sequences of actions, planning, gathering information from an environment and figuring out policies for behaving in the future, then bingo: it’s RL.
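To make that concrete, here is a minimal sketch of tabular Q-learning, a classic RL algorithm, on a made-up five-state “corridor” environment. Every name and number below is illustrative, not taken from any particular library: the agent learns, from trial and error, a policy that walks right towards the reward.

```python
import random

# Toy "corridor" environment: states 0..4, start at 0, reward at state 4.
N_STATES = 5
ACTIONS = [1, -1]  # step right or left

def step(state, action):
    """Apply an action; reward 1.0 only on reaching the final state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    # Q-table: estimated return for each (state, action) pair.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # Q-learning update: nudge the estimate towards
            # reward + discounted value of the best next action.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# Greedy policy per state; after training it steps right (+1) everywhere it matters.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

Notice there is nothing mystical here: no understanding, no mind, just a table of numbers updated until the right actions score highest.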
You may well have heard of Deep Learning too. Where does it stand? Deep Learning (DL) is ML that uses a particular class of algorithms (neural networks), and it’s what the industry tends to mean when it says AI.
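Stripped of the hype, a neural network is just layers of weighted sums each followed by a nonlinearity, and “deep” means stacking many such layers. Here is a toy forward pass in plain Python; the weights are hand-picked purely for illustration (a real network would learn them from data):

```python
import math

def relu(x):
    # Nonlinearity: pass positives through, clip negatives to zero.
    return max(0.0, x)

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases, activation):
    """One fully connected layer: a weighted sum per neuron, then a nonlinearity."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Illustrative hand-picked weights: 2 inputs -> 3 hidden ReLU units -> 1 sigmoid output.
W1 = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]
b1 = [0.0, 0.1, 0.2]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.0]

def forward(x):
    hidden = dense(x, W1, b1, relu)        # layer 1
    return dense(hidden, W2, b2, sigmoid)  # layer 2

print(forward([1.0, 2.0]))  # a single number between 0 and 1
```

That’s the whole trick; the “deep” versions simply have millions of such weights and dozens of layers.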
If all you’ve heard of AI comes from sci-fi movies like Ex Machina or Blade Runner 2049, you’re probably imagining superintelligent entities. If so, you’re thinking of HLI (Human-Like Intelligence). That term is used for ‘entities’ evoking personhood, leaving real people balancing between awe and the discomfort of the Uncanny Valley.
I hate to break it to you, but we’re nowhere near that. The industry AI applications being built today are not HLI and aren’t about building actual minds. For now, each is a neatly executed set of rules.
You may have heard of another term: AGI, or Artificial General Intelligence. Put simply, AGI is the ability of a machine to perform any task that a human can. Although some applications highlight the ability of AI to perform tasks with greater efficacy than humans, they are not generally intelligent, i.e. they are exceedingly good at only a single function (anyone fancy a game of Go against DeepMind’s AlphaGo?) while having zero capability to do anything else. Thus, while an AI application may be as effective as a hundred trained humans at one task, it can lose to a five-year-old at any other.
The real AGI would be when it works but you don’t know how it works.
Choices, choices…
Unless you’re working your way up in academia and feel like your life depends on made-up definitions of things we don’t fully comprehend, relax.
If you’re wondering which technology can solve your business problem without sinking tons of R&D $$ into computing power, try this: the simplest approaches are usually the best, even if you doubt they will work.
If the simple thing performs well, that means you got yourself a solution that will be easier to maintain in production.
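As an illustration of “simplest first”: before training anything fancy, measure a majority-class baseline, i.e. the score you get by always predicting the most common label. The churn labels below are made up for the example, but the pattern applies to any classification problem.

```python
from collections import Counter

def majority_baseline(train_labels):
    """Return a 'model' that predicts the most common training label for any input."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return lambda _x: most_common

# Hypothetical churn labels: 0 = customer stays, 1 = customer churns.
train = [0, 0, 1, 0, 0, 0, 1, 0]
test = [0, 1, 0, 0]

predict = majority_baseline(train)
accuracy = sum(predict(None) == y for y in test) / len(test)
print(accuracy)  # the bar any fancier model has to beat
```

If a deep model can’t clearly beat this number, the extra complexity, compute and maintenance burden aren’t earning their keep.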
— —
Untrite helps to unify information from different silos, automatically enriching it with context to derive business value.
Our software provides clarity and augments the know-how in data you already have. Using Machine Learning, it helps computers understand the complexity of human language in contracts, reports, emails and other documents. Organisations get a better view of the most relevant information in real time, so they can make more informed, data-driven decisions.
Curious to see what we can do for you? Arrange a demo.