
Crime in 2024: How Artificial Intelligence Could Be Used Against Us in the UK

  • Writer: Ken Kirwan
  • May 21, 2024
  • 3 min read

In a world where technology is advancing at breakneck speed, artificial intelligence (AI) stands out as a powerful tool with vast potential. While AI promises to revolutionise industries and improve our daily lives, it also poses significant risks, especially when it falls into the wrong hands. The idea of AI being used to commit crimes may sound like science fiction, but it's becoming an increasingly real threat. Understanding these risks and knowing how to protect ourselves is crucial as we move forward.

One of the most alarming aspects of AI in crime is its potential to enhance existing criminal activities. For instance, cybercriminals could use AI to create more sophisticated phishing scams. Outside of the United States, the UK has had the most reported phishing attacks targeting businesses and individuals. Traditional phishing tricks individuals into providing personal information through deceptive emails. With AI, these emails can be tailored to be incredibly convincing: by analysing social media posts and other online behaviour as part of social engineering, attackers can craft messages that seem legitimate and personal. This increases the chances of victims falling for the scam, leading to financial losses and identity theft.


Another area where AI could be exploited by criminals is automated hacking. Today, hacking requires a significant amount of skill and time. However, AI algorithms can learn and adapt, finding vulnerabilities in systems far faster than human hackers. This could lead to an increase in data breaches, with sensitive information such as National Insurance numbers, credit card details, and personal health records being stolen. The consequences can be devastating, both for individuals and organisations.


AI can also be used to manipulate information and spread misinformation. Deepfake technology, which uses AI to create realistic but fake videos and audio recordings, can be employed for malicious purposes. The Eyes on Crime team use deepfake video technology for teaching and training purposes, and the results are often impossible to distinguish from the real thing. Imagine a scenario where a deepfake video of a political leader making inflammatory statements goes viral: the potential for causing panic, disrupting social order, and even influencing elections is enormous. As the technology becomes more accessible, the line between truth and falsehood becomes increasingly blurred.

In addition to these threats, AI could be used to enhance physical crimes. Autonomous drones, for example, could be programmed to carry out surveillance or even deliver harmful payloads. Criminals could use these drones to bypass traditional security measures, making it harder for law enforcement to detect and prevent crimes. Similarly, AI-powered robots could be employed in illegal activities, performing tasks that would be too risky or impossible for humans.



Given these risks, it’s essential for potential victims to take steps to protect themselves. One effective strategy is to enhance our digital literacy. Understanding how AI can be misused and recognising the signs of phishing scams or deepfake content goes a long way towards preventing victimisation. Regularly updating passwords and using multi-factor authentication can also help safeguard personal information.
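To take the mystery out of multi-factor authentication: the rotating six-digit codes in an authenticator app come from a time-based one-time password (TOTP) algorithm standardised in RFC 6238. The sketch below is a simplified illustration of that mechanism only; real deployments should rely on an audited authentication library, not hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, now=None, digits=6, step=30):
    """Derive a time-based one-time password (simplified RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((now if now is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# RFC 6238 test key ("12345678901234567890" in Base32) at time 59 seconds:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # → 287082
```

Because the code depends on a shared secret and the current time, a stolen password alone is not enough to log in, which is exactly why MFA blunts even a convincing phishing attempt.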


Moreover, staying informed about the latest security measures and adopting them can provide additional protection. For instance, using advanced cybersecurity software that employs AI to detect and respond to threats can be a powerful defence. These programs can monitor network activity, identify unusual behaviour, and take action before a breach occurs. Ensuring that all software and systems are up to date with the latest patches and security updates is another critical step.
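The "unusual behaviour" detection such tools perform can, at its simplest, mean flagging statistical outliers against a learned baseline. This is a toy illustration of the idea, not any vendor's actual algorithm: a robust outlier check (median and median absolute deviation) over a hypothetical stream of requests per minute.

```python
import statistics


def flag_anomalies(counts, threshold=3.5):
    """Return indices of robust outliers (modified z-score).

    Toy stand-in for the behavioural baselining real AI security tools
    perform; `counts` might be network requests per minute. Median/MAD
    is used so a single large spike cannot inflate the baseline and
    hide itself, as it would with a plain mean/standard deviation.
    """
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:
        # Perfectly flat baseline: anything off the median is unusual.
        return [i for i, c in enumerate(counts) if c != median]
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - median) / mad > threshold]


# Normal traffic hovers around 100 req/min, with one sudden spike:
traffic = [101, 98, 102, 99, 100, 97, 103, 100, 99, 850]
print(flag_anomalies(traffic))  # → [9], the spike
```

Production systems layer far more signal on top (per-user baselines, sequence models, threat intelligence), but the principle is the same: learn what normal looks like, then act on deviations before they become breaches.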


On a broader scale, advocating for stronger regulations and ethical guidelines around the development and use of AI can help mitigate some of these risks. The UK Government and key organisations are working together at the proposed National Cyber Force hub in Samlesbury, Lancashire to create frameworks that ensure AI is used responsibly and that there are consequences for those who exploit it for criminal purposes. Public awareness campaigns can also play a role in educating the population about the potential dangers and how to protect themselves.



In conclusion, while AI holds incredible potential for positive change, it also presents significant risks, particularly in the realm of crime. By understanding these risks and taking proactive measures to safeguard ourselves, we can better prepare for the challenges ahead. Embracing a culture of digital awareness and advocating for responsible AI use will be essential as we navigate this new technological frontier.



Ken Kirwan: Eyes on Crime Editor


