Why mastering AI is a training problem
I know, I know, you’ve heard it before: don’t worry, it’s just a training problem.
Well, this time it is!
Having trained people to use AI, I see an emerging training gap that some will address and others will set aside. It consists of three questions:
How do you know if the AI is right?
What do you do if the AI gets it wrong?
What did I just learn?
So how do you know if the AI is right?
Again, AI101 says: check the result.
But if you have to check for hallucinations, false references, extra arms and legs, incomplete minutes, isn’t that going to waste your time?
And doesn’t that require expert knowledge? And whose knowledge? Yours.
And expert or not, you’ll need critical thinking skills when dealing with AI, the same as when searching Google or discerning fake news. Although with AI the consequences are greater.
And what do you do if AI gets it wrong?
Seriously wrong.
Yes, AI101 recommends changing the prompt, which works for simple requests. But if you ask AI to do a complex task and it fails, where did it go wrong? Nobody can say.
How do you investigate AI failing, for instance, to combine two documents using a third as a reference? Even if you know the manual steps, you can’t know exactly what the AI did.
And even knowing what the AI was trying to do, solving the problem requires complex problem-solving skills. And time. Both of which most people lack.
Although people can be trained to work with complexity, that training helps little unless the AI admits where it slipped up and why.
And the third problem is an adult learning one: retaining what you’ve learned. Isn’t it better for your long-term knowledge to carry out the unknown task yourself? That way you find out how it works, you get to retain that knowledge, and you are then able to move on to the next difficulty. Rather than relying on artificially generated guesswork, which may change the next time you enquire, by which time you’ve forgotten what you could’ve learned.
So, use AI. But use it for boring tasks like captions or capturing minutes where you can double check the results quickly and easily.
Or asking questions that you partially know the answer to, such as how to carry out conditional formatting in Excel or how to compare documents in Word. You’ll know if it’s right or wrong; just follow the prescribed steps.
Or better yet, asking for a training regime, such as a course on AI itself, which again, if you have the experience and expertise, you can double-check.
But don’t assume, as much science fiction does, that AI is omniscient and can solve complex problems. In fact, science fiction often depicts AI as unable to solve them. Ask HAL: “I’m sorry, Dave. I’m afraid I can’t do that.”
AI is software. AI uses an algorithm.
So, to know if the AI is right, you need to be a discerning expert with critical thinking skills.
If the AI gets it wrong, you will need problem-solving and troubleshooting skills.
As well as remembering what you learned, for the next time. Or rather, for the next, more complex problem.
And aren’t these the very skills we use to make sense of this rather messy world, which is neither two-dimensional nor black and white? AI or no AI.

