Written by ExchangeWire PressBox
Original photo: Visoot
Birmingham City University (BCU), in partnership with Covatic, has unveiled groundbreaking research to protect AI models from cyber threats.
AI models are now integral to high-stakes industries such as healthcare and autonomous vehicles. These systems analyse images with high accuracy, often outperforming humans at far greater speed. However, they remain vulnerable to adversarial attacks – malicious attempts to fool an AI system by subtly altering its input data.
One such method, known as a ‘black-box attack’, lets cyber attackers query an AI model repeatedly to gather intelligence and find ways to manipulate its decisions. This could cause an AI-powered self-driving car to misread a stop sign as a speed limit sign, or a diagnostic system to misdiagnose a patient based on the images it is given.
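To make the idea concrete, the sketch below shows what such repeated probing can look like in its simplest form: random perturbations are submitted until the model's prediction flips. The `model_query` callable, step count, and perturbation budget are illustrative assumptions, not details from the BCU study.

```python
import numpy as np

def black_box_probe(model_query, image, true_label, steps=1000, eps=0.03):
    """Naive random-search probe against a prediction-only API.

    Illustrative sketch only: the attacker sees predicted labels,
    never gradients or model internals (hence 'black box').
    """
    for _ in range(steps):
        # Add a small random perturbation, clipped to the valid pixel range.
        noise = np.random.uniform(-eps, eps, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)
        # Query the model; a changed label means the probe succeeded.
        if model_query(candidate) != true_label:
            return candidate  # a subtly altered input the model misreads
    return None  # no successful perturbation found within the budget
```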
The BCU research, recently published in Expert Systems with Applications, introduces a new defence mechanism for these AI models: applying simple, random image adjustments – such as rotations or resizing – before processing makes the system far harder to deceive or manipulate.
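As a rough illustration of the random-transformation idea, the Python sketch below perturbs each input with a small random rotation and rescale before inference. The function name, angle range, and scale range are assumptions for illustration; the published defence may differ in its exact transformations and parameters.

```python
import random
from PIL import Image

def randomised_preprocess(image: Image.Image) -> Image.Image:
    """Apply a random rotation and resize before inference.

    Minimal sketch of the random-transformation defence described
    above; the angle and scale ranges are illustrative assumptions.
    """
    # A small random rotation keeps the content recognisable while
    # disrupting perturbations tuned to exact pixel positions.
    angle = random.uniform(-15, 15)
    transformed = image.rotate(angle, resample=Image.BILINEAR)

    # Randomly rescale, then restore the model's expected input size.
    w, h = image.size
    scale = random.uniform(0.9, 1.1)
    transformed = transformed.resize((int(w * scale), int(h * scale)))
    return transformed.resize((w, h))
```

Because the transformation is drawn fresh for every query, an attacker probing the model repeatedly never sees the same effective input pipeline twice, undermining the kind of query-based intelligence gathering described above.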