AI consumes a lot of energy. Hackers could make it consume more.

The attack: Input-adaptive neural networks vary how much computation they spend depending on how hard an input looks, which means that changing the input, such as the image the network is fed, changes how much computation it needs. This opens up a vulnerability that hackers could exploit, as researchers from the Maryland Cybersecurity Center outline in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation.
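To make the mechanism concrete, here is a minimal sketch in PyTorch, not the paper’s code: a hypothetical multi-exit classifier (the name EarlyExitNet, the layer sizes, and the 0.9 confidence threshold are all illustrative assumptions) that returns at the first internal “exit” that is confident enough about its prediction.

import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        # Four identical blocks stand in for the stages of a deep network.
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(784, 784), nn.ReLU()) for _ in range(4)]
        )
        # One lightweight classifier ("exit") after each block.
        self.exits = nn.ModuleList([nn.Linear(784, num_classes) for _ in range(4)])
        self.threshold = threshold

    def forward(self, x):  # x: one flattened image, shape (1, 784)
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits), 1):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            # Easy inputs leave at the first confident exit and use less
            # computation; inputs that look hard keep going.
            if probs.max() >= self.threshold:
                return probs, depth
        return probs, len(self.blocks)

Noise that keeps every exit’s confidence below the threshold makes the cheap early paths unreachable, so the input pays for the full network on every inference. That is the computation blow-up described above.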

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had little to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, the researchers found, is that the attacks transfer well across different types of neural networks: designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.
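For the full-information case, one plausible way to craft such noise, sketched here as an illustration rather than the paper’s actual method, is to run the whole model, sum the confidence at every exit, and take small gradient steps that lower that sum so that no exit ever fires. The bound epsilon, the step size alpha, and the loss are assumptions.

def slowdown_perturb(model, x, epsilon=0.03, alpha=0.01, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)  # the adversarial noise
    for _ in range(steps):
        h = x + delta
        total_confidence = 0.0
        for block, exit_head in zip(model.blocks, model.exits):
            h = block(h)
            probs = exit_head(h).softmax(dim=-1)
            total_confidence = total_confidence + probs.max()
        total_confidence.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: make every exit unsure
            delta.clamp_(-epsilon, epsilon)     # keep the noise small
            delta.grad.zero_()
    return (x + delta).detach()

On a trained model, the perturbed input should exit later than the clean one, if at all, and that extra depth is the extra energy the attack buys. In the limited-information setting, the attacker would instead optimize against a surrogate model they do control and rely on the transferability the researchers observed.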

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe that will change quickly under industry pressure to deploy lighter-weight neural networks, such as for smart-home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could do damage. But, he adds, this paper is a first step toward raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”
