[Column] Amir Kanaan: The role of AI and machine learning in cybersecurity


If artificial intelligence (AI) is the practice of trying to make machines become more humanlike, machine learning (ML) is one of the approaches used to try to achieve that. With ML, computers are trained to analyse large amounts of data and identify patterns in it using algorithms created by humans. The technology has many applications across business and particularly in the field of cybersecurity. 

ML has the advantage of being able to analyse data far more quickly than a human can. It can therefore classify data against set parameters very efficiently and flag data that falls outside those parameters. It can also learn from past incidents to recommend appropriate responses, and from historical data to make predictions.
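As a rough illustration of that kind of flagging, the sketch below uses scikit-learn's IsolationForest to learn what "normal" activity looks like from historical records and to flag new events that fall outside those learned parameters. The feature names and values are hypothetical, chosen only to make the example self-contained.

```python
# A minimal sketch of anomaly flagging, assuming hypothetical
# activity features (bytes sent, session length, failed logins).
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" activity used as training data (toy values).
normal_activity = np.array([
    [5_000, 120, 0],   # bytes sent, session seconds, failed logins
    [7_500, 300, 1],
    [6_200, 180, 0],
    [4_800,  90, 0],
])

# Fit a model of what normal looks like; contamination is the expected
# fraction of outliers and would be tuned in practice.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_activity)

# New events: the second one falls well outside the learned parameters.
new_events = np.array([
    [6_000, 200, 0],
    [900_000, 15, 12],
])

# predict() returns 1 for normal points and -1 for outliers to flag.
for event, label in zip(new_events, model.predict(new_events)):
    status = "FLAG for review" if label == -1 else "ok"
    print(event, status)
```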

ML can help identify malicious activity using predetermined attack parameters, and it can build up profiles of how hackers attempt to breach an organisation’s defences. It can also identify responses and mitigation approaches by studying previous attacks and assessing how effective different strategies were in dealing with incidents. As its knowledge grows, it can start to make proactive recommendations on how to reduce risk.

There is no doubt that organisations need help in facing down attempts at cybersecurity breaches. According to the Kaspersky Security Bulletin 2019, which collects data from millions of Kaspersky end users in 203 countries, 19.8% of user computers were subjected to at least one malware-class web attack between November 2018 and October 2019. Kaspersky solutions repelled more than 975 million attacks launched from online resources located all over the world, and 273 million unique URLs were recognised as malicious by web antivirus components.

Kaspersky’s web antivirus solutions also detected more than 24,610,126 unique malicious objects, while the computers of 755,485 unique users were targeted by encryptors and those of almost 2.3 million unique users were targeted by miners. Kaspersky solutions blocked attempts to launch malware capable of stealing money via online banking on 766,728 devices.

There are good reasons why cybersecurity solutions that utilise ML are an attractive investment for organisations. The sheer volume of threats is stretching security teams and leaving them unable to deal adequately with many essential tasks.

Take user access rights and privileges as an example. Administering these can be a considerable overhead, even in a smaller organisation. ML solutions are able to study the usage patterns of individuals and recommend changes to their access privileges, potentially removing weak links.
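As a simplified illustration of the idea, the sketch below takes a rule-based shortcut: it compares the permissions each user holds against the permissions they actually exercised over a review window and flags the unused ones for review. The user names, permissions and log entries are hypothetical, and a real ML-driven system would learn usage patterns from an identity provider and audit trail rather than apply a fixed rule.

```python
# A minimal sketch of usage-based access review with hypothetical data.
from collections import Counter

# Permissions currently granted to each user.
granted = {
    "alice": {"crm_read", "crm_write", "finance_read"},
    "bob":   {"crm_read", "hr_admin"},
}

# Permissions actually exercised over the review window (e.g. 90 days).
access_log = [
    ("alice", "crm_read"), ("alice", "crm_read"), ("alice", "crm_write"),
    ("bob", "crm_read"),
]

usage = Counter(access_log)

# Recommend reviewing any granted permission that was never used in the window.
for user, perms in granted.items():
    unused = {p for p in perms if usage[(user, p)] == 0}
    if unused:
        print(f"Recommend reviewing {user}: unused privileges {sorted(unused)}")
```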

Developers are also focusing on how AI can improve multi-factor authentication (MFA). For example, if a user consistently logs in from one place but then suddenly switches to another location or device, ML-powered systems can detect this change in behaviour and block access.

This process, known as risk-based authentication, goes beyond the traditional MFA approach of password, device type and fingerprint scan by looking at context. When a flag is raised, the ML system could be programmed to ask for further confirmation of the user’s identity, such as a one-time password sent to an assigned device or a facial scan. Human system administrators could also be alerted. As the datasets they examine grow larger and previous interactions give them more context, ML systems become better at making these judgements.
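The sketch below illustrates that step-up logic with a hypothetical, hard-coded risk score: unfamiliar locations and devices add risk points, and the total decides whether to allow the login, request further confirmation, or block and alert administrators. A production system would learn each user's profile from login history rather than hard-code it.

```python
# A minimal sketch of risk-based authentication with hypothetical profiles.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user: str
    country: str
    device_id: str

# A per-user profile of usual locations and devices (hard-coded here;
# a real system would learn this from login history).
profiles = {
    "alice": {"countries": {"KE"}, "devices": {"laptop-01", "phone-07"}},
}

def risk_score(attempt: LoginAttempt) -> int:
    """Add risk points for each contextual signal that deviates from the profile."""
    profile = profiles.get(attempt.user, {"countries": set(), "devices": set()})
    score = 0
    if attempt.country not in profile["countries"]:
        score += 2   # unfamiliar location
    if attempt.device_id not in profile["devices"]:
        score += 1   # unfamiliar device
    return score

def decide(attempt: LoginAttempt) -> str:
    score = risk_score(attempt)
    if score >= 3:
        return "block and alert administrators"
    if score >= 1:
        return "step up: request one-time password or facial scan"
    return "allow"

print(decide(LoginAttempt("alice", "KE", "laptop-01")))   # allow
print(decide(LoginAttempt("alice", "US", "unknown-pc")))  # block and alert
```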

Organisations do not, of course, have it all their own way with AI and ML. While they are using smart technology to help identify and prevent attacks, cybercriminals can use AI to make their own attacks more ‘intelligent’ and better able to avoid detection.

Artificial intelligence could also help hackers make their social engineering strategies more effective. This popular hacking technique involves tricking individuals into divulging information that can compromise personal or corporate security.

AI could be used to trawl for sensitive information on individuals and organisations, as well as being used to create content that can pass through typical cybersecurity filters, such as e-mail messages that look like they were written by humans. AI could also help in targeting misinformation campaigns more effectively and developing malware that looks like something legitimate.

For those looking to use artificial intelligence for entirely ethical purposes, its promise is great. There are, of course, those who would seek to misuse it, as outlined above. To counter the cybercriminals, organisations will need to invest in tools that keep them ahead of the cybersecurity curve and provide valuable assistance to their own cybersecurity teams.

Amir Kanaan is the Managing Director for Middle East, Turkey and Africa at Kaspersky.
