.ai is not magic — let me explain it to you as if you were 5

Kamila Hankiewicz
9 min read · Jul 14, 2023

--

If you believe AI is magic, I hate to break it to you, but someone has to: Santa Claus doesn’t exist and AI ain’t magic, despite its jaw-dropping capabilities. It’s a tool for humans made by humans, but one that can revolutionise the way we live and do business. For those who are non-technical but curious about AI, I thought I’d use the ELI5 formula (“Explain Like I’m 5”), with relatable examples.

Understanding AI’s learning process

Imagine having a new friend who observes and learns from everything you see and hear, just like a child does (let’s not get into the ‘1984’ / China surveillance stuff just yet; you’re only 5, remember? :).
This friend is AI. The more tasks it performs, the more helpful it becomes over time.

AI figures things out from patterns (e.g. mom likes a croissant in the morning, dad doesn’t eat breakfast), understands different ways people express things, and can even point you to the most relevant information for your specific needs, or perform a task personalised to your past behaviour. That’s how we help our clients with Untrite tools.

Big organisations have a lot of people who generate a lot of information. It’s really hard for each person to know what information exists and what could be relevant to help them fix a client’s problem, such as a broken machine that racks up costs while it’s not running, or quickly helping another person in danger. We make it easy to ‘ask’ a computer, in human language, about all the information it stores (just like you’d ask a friend who knows all the answers) and to show you, in one place, everything you should know to make a good decision.

Overall, that’s the grand vision of AI for business and personal use. A computer that acts as a trusted companion, helping people gather, organise, and interpret vast amounts of information. That way, we can make informed decisions, based on suggested successful strategies and valuable insights.

The foundation: quality data

Just like building a sturdy Lego tower requires a solid foundation, AI’s success relies on high-quality data. What is data, and when is it considered high quality, you ask? Imagine you have a special box where you can keep things.

This box can hold all kinds of blocks of things like toys, books, or even snacks. Now, imagine that instead of physical things, we have something called “data.” Data is like information that we put inside the box. It can be numbers, words, pictures, or even sounds. For AI to work, data must be labelled and organised correctly, so if you’re a messy kid, AI may not give you accurate and reliable information.

So what makes AI so different from any other computer technology that has ever existed? It’s the way smart people (software engineers, data scientists, even neurobiologists) designed ways for computers to learn new information.
Before AI, people had to write a lot of code, that is, lots of instructions for computers to execute. Just look at how much paper it took to print the (admittedly very complex) code that sent the Apollo 11 crew to the moon:

This is Margaret Hamilton, director of software engineering for the project, standing next to a stack of paper containing the software. Credit: NASA

AI doesn’t work that way.
You don’t need to explicitly tell a computer what to do. It learns like a child does: by seeing examples and figuring things out on its own (ok, kind of, but I’ll explain in a later part of this post).

You don’t need to be a grown-up (organisation) to start using AI

The great news is that AI is not exclusive, reserved only for grown-ups who have lots of data blocks (ahem, I meant toys, books and so on) nicely sorted and easily searchable. Those toys can even be in different rooms (systems), and with a bit of creativity and code it’s possible to link them and find out how they relate to each other: you can make a magical thread called an API, or build a special one just for those custom-built rooms (adults call them legacy systems).

AI is a tool available to businesses of all sizes and stages of maturity. Sometimes companies just need a little guidance from people like me, or from others who love figuring out problems with technology, and who can help find the business areas that can benefit from AI greatly and fairly quickly. We call these the low-hanging-fruit cases. Sometimes, of course, AI is not the best choice; that’s why you need people who are not afraid to tell the customer which options are more optimal, easier to implement and often cheaper.

How do computers understand us?

There are different ways we can teach a computer what we want. They are called supervised, unsupervised and reinforcement learning.

Think of supervised learning as having a helpful teacher by your side. The computer learns from labeled examples, associating inputs with correct answers. It uses this knowledge to predict answers for new questions.
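The “helpful teacher” idea can be sketched in a few lines of Python. This is a toy nearest-neighbour learner with made-up fruit data (the weights, colours and labels are all invented for illustration), not a real ML library:

```python
# Toy supervised learning: a 1-nearest-neighbour "student". Each labelled
# example pairs features (weight in grams, colour) with the right answer.
labelled_examples = [
    ((150, "red"), "apple"),
    ((120, "yellow"), "banana"),
    ((10, "red"), "cherry"),
]

def predict(weight, colour):
    # distance = weight difference, plus a big penalty if the colour differs
    def distance(example):
        (ex_weight, ex_colour), _label = example
        return abs(ex_weight - weight) + (0 if ex_colour == colour else 100)
    # the answer is the label of the most similar labelled example
    _features, label = min(labelled_examples, key=distance)
    return label

print(predict(140, "red"))   # closest labelled example is the apple
```

The “teacher” is the set of labelled answers: the computer never reasons about fruit, it just finds the most similar example it was shown.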

Unsupervised learning, on the other hand, is like the computer exploring a treasure hunt without a map. It discovers hidden patterns and groupings within data, finding surprises and organising information in exciting ways.
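A toy version of that treasure hunt, assuming a made-up list of numbers and the simplest possible clustering (one-dimensional k-means with two groups); no labels are given, the grouping emerges on its own:

```python
# Toy unsupervised learning: split numbers into 2 groups (1-D k-means).
points = [1.0, 1.5, 2.0, 10.0, 11.0, 12.5]

def kmeans_1d(data, iterations=10):
    centres = [min(data), max(data)]              # crude starting guesses
    clusters = [[], []]
    for _ in range(iterations):
        clusters = [[], []]
        for p in data:
            # each point joins the group whose centre is nearest
            nearest = 0 if abs(p - centres[0]) <= abs(p - centres[1]) else 1
            clusters[nearest].append(p)
        # move each centre to the average of its group (if non-empty)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

print(kmeans_1d(points))   # [[1.0, 1.5, 2.0], [10.0, 11.0, 12.5]]
```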

When it comes to reinforcement learning, imagine teaching a computer to play a game, just like you learn through trial and error. The computer takes actions and receives rewards or penalties based on its performance. It learns from these experiences, adjusting its actions to maximise rewards over time. Just like a patient, determined friend who learns from mistakes and becomes better with each attempt.
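Here is a minimal sketch of that trial-and-error loop: a so-called epsilon-greedy bandit with two invented moves and made-up reward chances. Real reinforcement learning systems are far more elaborate, but the learn-from-rewards core is the same:

```python
import random

# Toy reinforcement learning: an epsilon-greedy "bandit" that learns,
# purely from rewards, which of two moves pays off more often.
random.seed(0)                               # make the run repeatable
true_reward = {"left": 0.2, "right": 0.8}    # hidden from the learner
value = {"left": 0.0, "right": 0.0}          # the learner's estimates
counts = {"left": 0, "right": 0}

for step in range(1000):
    if random.random() < 0.1:                # sometimes explore at random
        move = random.choice(["left", "right"])
    else:                                    # usually pick the best-looking move
        move = max(value, key=value.get)
    reward = 1 if random.random() < true_reward[move] else 0
    counts[move] += 1
    # nudge the estimate towards the observed reward (running average)
    value[move] += (reward - value[move]) / counts[move]

print(max(value, key=value.get))             # the learner settles on "right"
```

Nobody ever told the program that “right” is better; it worked that out from rewards alone, just like the patient friend learning a game.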

You too are teaching AI

But AI is not only reserved for businesses. You’re using it too, without even realising it. When you ask Alexa to play a song or tell you a joke, it uses AI to understand what you mean. And if you’ve interacted with it long enough, it may have already learned your music taste and sense of humour, so it will give you the results you like most.

A smarter version of Alexa, one that understands everything you say and not only simple commands, is software that uses Large Language Models (LLMs) and Natural Language Processing (that’s what ChatGPT uses). LLMs are those large data boxes from the previous example, and here the data is information about concepts, words, sentences and language in general. These models are trained on lots and lots of text, learning the patterns and context of human language. They can engage in conversations and generate responses that sound like they’re coming from a person. They can even guess emotions or tell whether a sentence is a question or a statement.

The incredible thing is that computers don’t understand language like we do, but they have clever tricks up their sleeves. They break words and phrases down into little pieces and give them special codes and weights. This helps them recognise the meaning of words and the relationships between them.
The weights are like special values that help decide which information is more important.
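A toy illustration of those “codes and weights”, assuming a tiny invented corpus: each word gets a numeric code, and common filler words like “the” get a lower weight than rarer, more informative words (a crude cousin of the TF-IDF idea; real LLMs learn their weights rather than computing them like this):

```python
import math
from collections import Counter

# Toy "codes and weights": every word gets a numeric code, and words that
# appear everywhere (like "the") get a lower weight than rarer ones.
sentences = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats purr",
]
words = [w for s in sentences for w in s.split()]
codes = {w: i for i, w in enumerate(dict.fromkeys(words))}   # word -> code
counts = Counter(words)
# rarer word => bigger log ratio => higher weight
weights = {w: round(math.log(len(words) / c), 2) for w, c in counts.items()}

print(codes["cat"], weights["the"], weights["cat"])   # 1 1.25 2.64
```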

And how do computers read images?

Now, you may have heard that computers nowadays can even read what’s in images, like when mommy crashed the car and your insurance company used AI to estimate the cost of the damage from the photos.
That’s true!

Imagine you have a special pair of glasses that can help you see things just like a computer does. Let’s use those glasses to understand how computer vision works for recognising cats.

Now, when the glasses look at a picture or a video, they start looking for certain clues to figure out if there’s a cat in it. They’re like super smart detectives!

First, they search for shapes that look like cat features, such as pointy ears, a cute little nose, and whiskers. If they find those shapes, they think, “Hmm, this could be a cat!”

Next, the glasses check for colours that are usually found on cats, like orange, black, white, or grey. They think, “Ah-ha! These colours are often seen on cats!”

But here’s the tricky part: the glasses have never seen a cat before, so they need some help. They start learning by looking at many pictures of cats and also pictures of things that are not cats. They compare what they see in the cat pictures with what they see in the non-cat pictures.

Over time, the glasses become smarter and learn to recognise patterns that are common in cat pictures. They notice that cats often have those specific shapes and colours we talked about earlier.

Now, when you show the glasses a new picture or a video, they use what they’ve learned to decide if it’s a cat or not. If they see those familiar shapes and colours that match what they’ve learned, they happily say, “Yes, it’s a cat!”

Computer vision is all about training the glasses, or in this case, the AI, to understand what cats look like by showing them many examples. And just like you learn new things by looking at pictures and figuring out what’s in them, the AI also learns by looking at lots and lots of pictures!
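The whole “glasses” story can be condensed into a toy Python sketch. The feature checklists and labels below are invented for illustration, and real computer vision learns its own features from raw pixels rather than from hand-made checklists:

```python
# Toy "glasses": learn cat vs not-cat from labelled feature checklists,
# then judge a new picture by finding the most similar one seen before.
# Features: (pointy_ears, whiskers, typical_cat_colour) as 0/1 flags.
training_pictures = [
    ((1, 1, 1), "cat"),
    ((1, 1, 0), "cat"),
    ((0, 0, 1), "not a cat"),   # e.g. an orange ball: cat colour, no ears
    ((0, 1, 0), "not a cat"),   # e.g. a seal: whiskers, nothing else
]

def looks_like(features):
    # count how many checklist items differ from each known picture
    def difference(example):
        seen, _label = example
        return sum(abs(a - b) for a, b in zip(seen, features))
    _seen, label = min(training_pictures, key=difference)
    return label

print(looks_like((1, 1, 1)))   # all the cat clues are there
print(looks_like((0, 0, 0)))   # none of them are
```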

You too are teaching machines

Now that you know how computers learn to read images, you can see how this applies in real life, for example to self-driving cars (those special cars that can drive themselves without a human driver). But before they can do that, they need to learn how to see and understand the world around them, just like we do when we drive.

To teach these cars, we use something called computer vision and machine learning. We give the cars special “eyes” called cameras that can see everything around them. These cameras take pictures and videos of what’s happening on the road.

Now, just like we learned to recognise objects like cars, buses, and traffic lights, we teach the self-driving cars to do the same. We show them lots of pictures and videos of different things they might see while driving, such as cars, people, stop signs, and traffic lights.

The cars have powerful computers inside them that look at these pictures and videos. They learn to find patterns and important features that help them understand what they’re seeing. For example, they learn that stop signs are red and have a specific shape, or that traffic lights have different colours like red, yellow, and green.

And just like we teach computers to recognise cats, when self-driving cars see more and more pictures and videos, they become smarter and better at recognising objects and understanding the rules of the road. They learn to make decisions like when to stop, when to turn, and how to avoid obstacles.

But here’s where we come in as teachers!

Sometimes, the self-driving cars come across things they’ve never seen before, like a new kind of traffic sign or a construction zone. When this happens, the cars might get a little confused and need help.
You may have seen those special ‘captcha’ puzzles, where you need to select the squares containing a given object. This is how we help computers recognise objects in photos: the answers people give are collected and used to train models on massive numbers of such labelled photos (data points).
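A toy sketch of that labelling pipeline, with invented tile names and votes: several people answer the same captcha, and the majority answer becomes each tile’s training label:

```python
from collections import Counter

# Toy captcha-to-training-data pipeline: each photo tile collects a few
# human answers, and the most common answer becomes its label.
human_answers = {
    "tile_1": ["traffic light", "traffic light", "tree"],
    "tile_2": ["tree", "tree", "traffic light"],
}

training_labels = {
    tile: Counter(votes).most_common(1)[0][0]   # majority vote per tile
    for tile, votes in human_answers.items()
}
print(training_labels)  # {'tile_1': 'traffic light', 'tile_2': 'tree'}
```

Real systems also weigh how reliable each person is, but the principle is the same: your clicks become labelled data points.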

You and (A)I?

AI may seem like magic, but it’s actually a powerful tool created by humans, and it has incredible potential to make our lives and workplaces better and more human. AI is not just for grown-ups; businesses of all sizes can benefit from it with a little guidance. We can teach computers to understand us through different learning methods.

Rather than resisting tech advancements, we need to adapt and embrace the changes brought by AI. This means being proactive in acquiring new skills and knowledge that complement and enhance AI’s capabilities. If we work alongside AI systems, we’ll be able to find new, more exciting opportunities within our fields. Otherwise, as IBM’s Chief Commercial Officer Rob Thomas said during a recent press conference:

“A.I. may not replace managers, but the managers that use A.I. will replace the managers that do not”.


Now off you go to your sandbox. It’s time to (responsibly) play with AI.
