June 8, 2021 by Cybersixgill

Beyond the Buzzwords: Superficial Intelligence

In the previous post we explored 2021’s numero uno TI buzzword – “real-time” – diving deep into what the term actually means and how to judge whether a given real-time solution is suitable for you.

The runner-up, “AI,” has certainly been around the block for quite some time - some might argue for centuries. The following will try to explain what lies behind “AI” and its “entourage” (or “squad” for the younger readers): “deep learning,” “machine learning,” and the prom queen - “NLP.”

“I am completely operational, and all my circuits are functioning perfectly.”

– HAL 9000, 2001: A Space Odyssey (1968)

Artificial intelligence was founded as an academic discipline in 1956, at the Dartmouth workshop, on the assumption that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

At its essence, it is the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. The most common models used in today’s cybersecurity solutions focus on learning and reasoning.

To put it in super-layman terms (and my apologies to the experts in advance): machine learning (“supervised learning,” to be exact) occurs when an algorithm is taught by a human being (“tagged”) to classify, let’s say, a cat. For each picture, the human inputs (“tags”) “a cat” or “not a cat.” Do this over and over again, and the machine learns to identify a cat. I repeat: this is an oversimplification for the sake of the following argument.
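That tagging loop can be sketched in a few lines of Python – a toy, hypothetical example (the feature names, numbers, and nearest-centroid classifier are all invented for illustration, not any vendor’s actual method):

```python
def centroid(points):
    """Average each feature across a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(examples):
    """examples: list of (features, label) pairs tagged by a human.
    Returns one averaged 'prototype' vector per label."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Classify a new, untagged example by its nearest label prototype."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Human-tagged training data: made-up features [ear_pointiness, whisker_count]
tagged = [
    ([0.9, 24], "cat"), ([0.8, 20], "cat"),
    ([0.1, 0], "not cat"), ([0.2, 2], "not cat"),
]
model = train(tagged)
print(predict(model, [0.85, 22]))  # → cat
```

The more tagged examples the human feeds in, the better the averaged prototypes separate “cat” from “not a cat” – which is the whole point of supervision.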

Deep learning is a subset of machine learning in which an algorithm or a software program needs minimal human intervention and, provided the volume of available data is big enough, learns to identify (for example) a cat.

And finally, Natural Language Processing (NLP) is “a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data” (source). NLP helps Siri understand you or enables those annoying website chatbots to become, well, less annoying.
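To make that concrete, here’s a minimal sketch of one NLP building block – tokenization plus keyword-based intent matching, roughly the kind of step behind those simple website chatbots (the intents and keyword lists are invented for illustration; real NLP systems are far more sophisticated):

```python
import re
from collections import Counter

def tokenize(text):
    """Break raw text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Hypothetical intents a chatbot might recognize, with keyword cues
INTENTS = {
    "greeting": ["hello", "hi", "hey"],
    "hours":    ["open", "hours", "close", "closing"],
}

def classify_intent(text):
    """Score each intent by keyword hits; pick the best, if any."""
    tokens = Counter(tokenize(text))
    scores = {intent: sum(tokens[w] for w in words)
              for intent, words in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("What time do you close?"))  # → hours
```

Real NLP pipelines replace those hand-written keyword lists with learned statistical models – but the task, mapping messy human language to something a program can act on, is the same.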

“Artificial intelligence” (or “AI”) is the broadest term that has made its way to the heart of pop culture – from Space Odyssey's HAL 9000, through Short Circuit’s Johnny 5 (Yes, I’m that old. Get over it.), to WALL-E and Iron Man’s J.A.R.V.I.S. It’s no wonder then that tech marketers just “love, love, love AI” – and are taking creative liberties with the term.

On almost every tech company website you’ll see these two letters, followed by a hyphen and a word like “based” or “driven.” Try to control your gag reflex – these words are probably there for a reason.

Almost every company tries to position itself as a technological leader, and a lot of them actually are. But there’s a fine line between delivering true innovation and portraying a man-behind-the-curtain solution as “artificial intelligence.”

“Greetings, Professor Falken. Shall we play a game?”

– Joshua, WarGames (1983)

Those attempts are as old as time, or at least date back to 1770, when the Mechanical Turk was invented. Time for...wait for it...Wikipedia!

As explained on Wikipedia, the Mechanical Turk “was a fake chess-playing machine constructed in the late 18th century. From 1770 until its destruction by fire in 1854 it was exhibited by various owners as an automaton, though it was eventually revealed to be an elaborate hoax.”

(Amazon even riffed on the Mechanical Turk, branding its crowdsourcing marketplace – which it describes as “artificial artificial intelligence”: outsourcing parts of a computer program to humans, for tasks that humans carry out much faster than computers – as Amazon Mechanical Turk, or MTurk.)

And then there was The Wizard of Oz. The iconic movie (and the novel on which it was based) inspired psychological experiments as well as a design-testing process known as the “Wizard of Oz method” or “WoZing.” The essence? Something is presented as automation when, in fact, a person (or several) is operating behind the scenes – usually to claim a substantial and dramatic technological achievement.

Let’s get one thing straight:

“AI,” in its essence, is a bunch of statistics (and again, my apologies to all data scientists, mathematicians, and brainiacs who might be offended by this oversimplification). An artificial, supreme being is still a distant dream (or nightmare, depending on whom you ask). However, when it comes to this subject, marketers should communicate in a straightforward manner (no easy feat, I know) – if you have people behind the curtain, don’t braggadociously slap “AI” on top of your messaging.

If you’re a prospect evaluating those “AI-based” solutions, you need to understand two things:

What type of “AI” is used? Deep learning? NLP? Machine learning? What is actually automated, and how?

What’s the impact of “AI” vs. a human-based, manual solution?

Answers to the first question could mean different things.

Deep learning, for instance, is autonomous, but it’s like a black box – you don’t know the rationale behind its decisions. So ask yourself whether autonomy is a benefit in your case, and whether or not you need an audit trail of some sort.

Machine learning solutions need supervision – so you’ll want to consider: what’s the state of the manpower/team supervising the tagging that “teaches” the machines? How does this impact performance, and how can it impact your business in terms of metrics and processes?

The latter is actually the foundation for answering the second question:

Now that you understand the impact of a human-based, manual solution, you need to further inquire what the impact of the automation – the “AI” – actually is.

Remember that more often than not, AI is biased in one way or another. A biased threat detection system, for instance, can produce a lot of false positives, putting unnecessary strain on security teams.

It’s important, therefore, to understand how vendors are tackling the main drivers of AI bias (data, algorithm, team), and how it affects the final outcome.

For best results, a mix of different AI algorithms should be used.

Some tasks call for simpler automation, while others call for more refined approaches – usually in human-machine touchpoints where the human needs to understand a machine-made decision (for example, why the system deems a certain IOC relevant or why a vulnerability got a certain risk score).

This allows the security professionals to back their decision, take educated and calculated actions, and justify these steps to their superiors.

We have a saying at Cybersixgill:

“Never send a man to do a computer’s job.” But it also works the other way around. Just like any other challenge, knowing is half the battle. Yes, we’ve come a long way since the Mechanical Turk – but until artificial intelligence is truly intelligent, we have to rely on our own intelligence, common sense, and critical thinking.
