Written by Manufacturing.NET
Original photo by Ambacin
Birmingham City University (BCU), in partnership with Covatic, a leading provider of user privacy solutions, has unveiled research focused on protecting AI models from cyber threats.
A number of cutting-edge industries are integrating AI models, as these systems exhibit high accuracy in analyzing data across formats. However, they remain vulnerable to adversarial attacks, which aim to deceive AI systems by subtly altering input data.
One such method, known as a ‘black-box attack’, allows cyber attackers to probe an AI model repeatedly to gather intelligence and find ways to manipulate its decisions. This could cause an AI-powered self-driving car, for example, to misread a stop sign as a speed limit sign, or lead a monitoring system to misdiagnose machine instrument readings from the images it is given.
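The article does not describe the specific technique studied, but the general idea of a black-box attack can be sketched in a few lines of Python. In the illustrative example below, the victim model, the query_model interface and the random-search loop are all hypothetical placeholders, not the BCU/Covatic method: the attacker never sees the model's internals, only its output labels, and simply keeps querying with small input changes until the decision flips.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))   # hypothetical weights of an unseen 3-class linear model

def query_model(x):
    # Black-box interface: the attacker sees only the predicted label,
    # never the weights or gradients behind it.
    return int(np.argmax(W.T @ x))

def black_box_attack(x, max_queries=5000, eps=0.05):
    # Random-search attack: repeatedly query the model with small random
    # perturbations of the original input, slowly enlarging them, until
    # the predicted label flips.
    original_label = query_model(x)
    for i in range(max_queries):
        delta = eps * (1 + i / 1000) * rng.normal(size=x.shape)
        candidate = x + delta
        if query_model(candidate) != original_label:
            return candidate           # small change, different decision
    return None                        # no flip found within the query budget

x_clean = rng.normal(size=10)
x_adv = black_box_attack(x_clean)
if x_adv is not None:
    print("decision flipped:", query_model(x_clean), "->", query_model(x_adv))
    print("perturbation norm:", round(float(np.linalg.norm(x_adv - x_clean)), 3))

Even this toy search finds an input that the model classifies differently while staying close to the original, which is the behaviour a defense mechanism such as the one BCU describes would need to detect or resist.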
Research from BCU, which was recently published in Expert Systems with Applications, has introduced a new defense mechanism for these AI models.