The double-edged sword: how cyber criminals are exploiting machine learning

6 min read | James Walsh | Article | Information technology sector

Cyber criminals have their fingers on the pulse, and they move fast. They’re frequently among the first to adopt new technologies for their own gain, and industries often struggle to pre-empt vulnerabilities and stay one step ahead. As the legitimate rollout of machine learning accelerates, we’re also seeing a whole new range of cyber attacks.

We’re seeing this emerge at two levels:

1. The use of machine learning to bypass common security tools.
2. The exploitation of machine learning systems themselves.

‘What is your mother’s maiden name?’: How machine learning is used to bypass standard security measures

We’re all familiar with CAPTCHA – the tool that asks you either to type out a random sequence of characters or to click on the images containing a specified feature, such as traffic lights, to prove you’re not a robot. Well, it turns out that robots can do this too, and cyber criminals are capitalising on it.

Machine learning models can be taught to recognise objects like bikes, buses, and traffic lights. So, as image recognition improves, automated solvers are achieving high success rates against leading CAPTCHA tests.
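To illustrate the underlying capability (rather than a real CAPTCHA-solving pipeline), here is a minimal sketch using a pretrained torchvision classifier; the file name tile.jpg and the choice of model are illustrative assumptions:

```python
# Minimal sketch: an off-the-shelf classifier labelling a CAPTCHA-style tile.
# Assumes torch/torchvision are installed; 'tile.jpg' is a hypothetical local
# image standing in for a CAPTCHA tile.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT        # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()

preprocess = weights.transforms()                # the resize/normalise pipeline the model expects
batch = preprocess(Image.open("tile.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
label = weights.meta["categories"][probs.argmax().item()]
print(label)  # e.g. 'traffic light' for a matching tile
```

The point is not this particular model: it is that commodity image recognition is now accurate enough to label CAPTCHA-style tiles at scale.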

Even passwords are vulnerable. Machine learning can be used to trawl our social media accounts, and a dataset can be compiled from the information we willingly put into the public realm.

Think, for example, how often mums include their maiden name on social media, and then consider how often a password security question asks you for your mother’s maiden name. Best friend? Chances are you’ve tagged that person the most. And how many times have you used your date of birth in a password to make it more secure?

Targeted password attacks succeed far more often than standard brute-force methods, and they can test thousands of candidate passwords built from your social media data in mere moments.
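As a rough illustration of why this shrinks the search space, the sketch below builds a targeted guess list from a handful of scraped profile details; every field and value here is hypothetical:

```python
# Minimal sketch: turning scraped public details into targeted password guesses.
# The profile below is invented for illustration, not real data.
from itertools import product

profile = {
    "pet": "rex",
    "mothers_maiden_name": "smith",
    "best_friend": "alice",
    "birth_year": "1990",
}

suffixes = ["", "!", "123", profile["birth_year"]]
candidates = set()
for word, suffix in product(profile.values(), suffixes):
    candidates.add(word + suffix)               # e.g. 'smith1990'
    candidates.add(word.capitalize() + suffix)  # e.g. 'Rex!'

print(f"{len(candidates)} targeted guesses, e.g. {sorted(candidates)[:5]}")
```

A list like this is tiny compared with a blind brute-force keyspace, which is exactly why personal data makes these attacks so effective.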

Malware is a huge problem for software and OS suppliers, who depend on being able to identify it and push out updates to protect users. Machine learning is now being exploited to create ‘chameleon’ malware, which deliberately targets Wi-Fi access points and morphs its profile to avoid detection.

Fast-spreading ‘poisoned’ data: How cyber criminals can exploit vulnerabilities within machine learning systems

Criminals are increasingly exploiting new vulnerabilities within machine learning models themselves, for example by tampering with training datasets.

Most machine learning algorithms need a large set of data to learn trends and train models. To lighten the load, developers sometimes download these datasets from public platforms. If a hacker is able to introduce a subtle change to a publicly available dataset (a single white pixel on an image would suffice), the tampered copy can spread across the web very easily and seed a widespread data poisoning attack.
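In broad strokes, such a single-pixel ‘trigger’ might look like the sketch below; the arrays are random stand-ins for a real shared dataset:

```python
# Sketch of single-pixel data poisoning: a few images in a shared dataset get
# one white pixel and a flipped label, so any model trained on the copy learns
# to associate that pixel with the attacker's chosen class (a 'backdoor').
# The random arrays below are stand-ins for a real downloaded dataset.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1000, 32, 32, 3))       # stand-in public image set
labels = rng.integers(0, 10, size=1000)      # stand-in class labels

poison_idx = rng.choice(1000, size=20, replace=False)
images[poison_idx, 0, 0, :] = 1.0            # the trigger: one white pixel, top-left
labels[poison_idx] = 7                       # the attacker's target class
```

Anyone who trains on this copy inherits the backdoor: at inference time, planting that same pixel nudges the model towards class 7.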

Similarly, if a hacker is able to tamper with a source dataset and introduce a bias, the model becomes ‘poisoned’ and ineffective. Consider this scenario: a model reads a huge range of news articles to identify sentiment, but is then targeted with enough fake news articles to skew the overall outcome.
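A toy example makes the mechanism concrete: below, a naive word-frequency sentiment signal is skewed simply by injecting many copies of one fake article. The corpus and wording are invented for illustration:

```python
# Toy illustration: a naive frequency-based 'negativity' score, poisoned by
# flooding the corpus with copies of a fake article. All text is invented.
from collections import Counter

genuine = [
    "markets steady as growth continues",
    "company reports solid earnings",
    "analysts cautious but optimistic",
]
fake = "crisis collapse panic crisis collapse"

def negativity(corpus, negative_words=("crisis", "collapse", "panic")):
    counts = Counter(word for doc in corpus for word in doc.split())
    return sum(counts[w] for w in negative_words) / sum(counts.values())

print(f"clean corpus:    {negativity(genuine):.2f}")                # 0.00
print(f"poisoned corpus: {negativity(genuine + [fake] * 50):.2f}")  # ~0.95
```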

In models trained on small datasets, including some medical trials, it may even be possible to reverse-engineer the machine learning model to uncover the original data, which could have huge implications for patient confidentiality.
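One reason small datasets are so exposed is memorisation: an overfit model behaves measurably differently on records it has seen, which is the very signal that membership-inference and reconstruction attacks exploit. A minimal sketch, using synthetic scikit-learn data purely for illustration:

```python
# Sketch of why small training sets leak: an overfit model is far more
# confident on records it was trained on than on unseen ones. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=60, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

train_conf = model.predict_proba(X_tr).max(axis=1).mean()
test_conf = model.predict_proba(X_te).max(axis=1).mean()
print(f"confidence on training records: {train_conf:.2f}")  # close to 1.0
print(f"confidence on unseen records:   {test_conf:.2f}")   # noticeably lower
```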

Businesses are at risk

Machine learning can be very beneficial for businesses: it increases process automation, enhances user behaviour analysis, and improves security. However, as with all forms of technology, it also has the potential to be exploited by cyber criminals. AI-driven cyber threats remain prolific, with attackers manipulating models and using AI-generated deepfakes for social engineering. With so many global elections taking place in 2024, the use of deepfakes is expected to rise, and they are no longer limited to politicians and celebrities: they are now being used to impersonate key individuals and stakeholders within organisations.

It’s therefore essential for all organisations to deploy the necessary technologies and invest in hiring top-tier cyber security talent to make sure they remain one step ahead of the criminals.

Are you looking to protect your organisation from the threat of cyber attacks? Register a job with us today and benefit from the expertise of our specialist consultants and our extensive talent network of cyber security experts. 

About this author

James Walsh, Business Director for Cyber Security, Hays UK&I

James Walsh is a CISMP-certified specialist in senior and executive-level technology recruitment and consulting across a wealth of industry sectors. His passion for cyber security began over ten years ago, when he started exploring the intricacies and complexities that come hand-in-hand with the mass adoption of technology. This passion has led him to head security practices spanning recruitment, consultancy and advisory across the cyber domain, supporting everyone from large FTSE 100 organisations and government departments through to start-ups.

As the Business Director for Cyber Security UK&I at Hays, James helps tech and cyber professionals progress their careers and ensures organisations have access to the very best cyber security talent and consulting solutions to help secure their businesses. 

Contact James today to find out more about how we can support your professional needs in cyber security.
