Model Tampering

Updated: Mar 1, 2023

You have just finished developing a new model, and after investing considerable resources in it, you believe in its abilities. You see it as a decisive advantage over your competitors in the market. You upload the model to production in the cloud, euphoric. A few weeks later, the model performs worse than expected: the parts that worked well in the lab fail in the field, and the business results suffer accordingly. So what happened? Is it possible that a competitor carried out a cyber-attack against your model?


This article dives into the Tampering Attack, whose purpose is to sabotage the model's inference engine. The result of this attack is similar to that of an Adversarial Attack, which we discussed in a previous article, but the method the attacker relies on differs.

While an Adversarial Attack relies on the attacker's ability to find mathematical weaknesses in the model and craft inputs that exploit them, a Tampering Attack is much simpler to carry out: all the attacker needs is access to the system or to the model files.

Background

A model consists of two building blocks: a statistical model and pre/post-processing logic. Tampering with either of these parts directly influences the behavior of the inference system and may even shut it down entirely.

A model's architecture and weights are usually saved in a standard, well-documented file format such as JSON, HDF5, or XML. For example, leading libraries such as Keras save the architecture of the neural network and its weights in HDF5 format. Since the data is saved in a well-known format, it can easily be altered with a wide variety of common, freely available editing tools. Furthermore, in some development environments the model can be updated through an API, so if these interfaces are not sufficiently secured, they may serve as an attack surface. This means that if the model files are not protected, both at the file level and at the level of the work environment, an attacker who gets hold of them can easily modify them and significantly disrupt the functioning of the system.
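To illustrate how little the attack requires, here is a minimal sketch that opens a Keras model file with the generic h5py library and overwrites a weight tensor in place. The file name model.h5 is a hypothetical example, and the dataset path follows the typical Keras HDF5 layout but will vary between models:

```python
import h5py
import numpy as np

# Open the model file directly for in-place editing; no ML framework
# is needed, only a generic HDF5 tool. "model.h5" is a hypothetical path.
with h5py.File("model.h5", "r+") as f:
    # Print every group/dataset in the file to locate the weight tensors.
    f.visit(print)

    # Assumed dataset path for a Dense layer in the typical Keras layout;
    # the real path depends on the model's layer names.
    kernel = f["model_weights/dense/dense/kernel:0"]

    # Overwrite the weights with random noise, silently corrupting
    # every prediction the model makes from now on.
    kernel[...] = np.random.normal(size=kernel.shape)
```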

How is a Model Tampering Attack performed?

Direct damage to model parameters - To perform a tampering attack, the attacker must gain access to the environment in which the model files are stored, to the files themselves, or to the API that can be used to update the model. Once the attacker has access, if the files are kept unencrypted, i.e., in plain text, they can change the model's parameters and disrupt it.

Damage to the classification weights - In classification systems, the final classification decision is based on the weights defined in the last layer of the neural network, which produce the class probabilities. If the model files are insecure, i.e., saved unencrypted, an attacker who gets hold of them can change these weights, and the result might be a severe disruption.
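As a sketch of this scenario, assuming a Keras classifier saved as a hypothetical classifier.h5 whose final layer is a Dense layer with a bias, the following code skews the last layer so that one class always wins, using only the standard Keras load/save API:

```python
import numpy as np
from tensorflow import keras

# Load the victim classifier; "classifier.h5" is a hypothetical path.
model = keras.models.load_model("classifier.h5")

# The final Dense layer holds a kernel matrix and a bias vector.
kernel, bias = model.layers[-1].get_weights()

# Push the bias of class 0 far above the others so the softmax
# always favors it, regardless of the input.
bias[0] = 100.0
model.layers[-1].set_weights([kernel, bias])

# Save the tampered model back over the original file.
model.save("classifier.h5")
```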

System software sabotage - Software sabotage can be done by replacing legitimate software components with "contaminated" ones, which can cause a very wide range of damage: from corrupting the data processing before and after it reaches the decision engine, through the insertion of crypto-coin-mining components, to the exfiltration of sensitive data used for training the model. Carrying out this attack requires access to the development and production environments in which the model is managed and maintained, and it is therefore more complicated.

How to defend against a Tampering Attack

Organizational policy for model access permissions and reducing the permission list - The more developers are permitted to edit and modify the model files, the larger the attack surface becomes, since an attacker may use these users' accounts for the attack. To minimize the attack surface, the number of people permitted to access and manage the model files should be reduced as much as possible. Furthermore, any change to these files, or even any access, should be monitored and, ideally, approved in advance.
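At the file-system level, part of this policy can be enforced with basic OS permissions. A minimal sketch, assuming a hypothetical models/ directory on a POSIX system, makes the model files read-only for their owner and inaccessible to everyone else:

```python
import os
import stat
from pathlib import Path

MODEL_DIR = Path("models")  # hypothetical directory holding the model files

for path in MODEL_DIR.glob("*.h5"):
    # Owner may read; group and others get nothing. Any write, even by
    # the owner, now requires an explicit (and auditable) permission change.
    os.chmod(path, stat.S_IRUSR)
```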

Secure storage of model files - The computing environment in which the model files are stored should be secured as much as possible. In particular, the model files at rest must be saved encrypted, in a way that makes it difficult for an attacker to modify them.
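As one way to achieve this, here is a minimal sketch using Fernet symmetric encryption from the cryptography package; model.h5 is a hypothetical path, and in practice the key should live in a secrets manager rather than on the same disk:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and store it in a secrets manager,
# never next to the model file itself.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the model file at rest. "model.h5" is a hypothetical path.
with open("model.h5", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("model.h5.enc", "wb") as f:
    f.write(ciphertext)

# At inference time, decrypt into memory just before loading the model.
with open("model.h5.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```

Because Fernet authenticates the ciphertext, any modification of the encrypted file raises an InvalidToken error on decryption, which also covers the integrity requirement.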

Protecting the model files in transit, between development and production environments - When the model files and the processing software are transferred between environments, they must be protected by a digital signature and by encryption. These mechanisms ensure the authenticity and integrity of the files.
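A minimal sketch of the integrity side, using only the Python standard library: the sending environment attaches an HMAC-SHA256 tag computed with a shared secret, and the receiving environment recomputes it before deploying the file. The key value and file paths are assumptions:

```python
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-secret-from-a-vault"  # assumed placeholder

def sign_file(path: str) -> str:
    """Compute an HMAC-SHA256 tag over the file contents."""
    with open(path, "rb") as f:
        return hmac.new(SHARED_KEY, f.read(), hashlib.sha256).hexdigest()

def verify_file(path: str, expected_tag: str) -> bool:
    """Recompute the tag on the receiving side and compare in constant time."""
    return hmac.compare_digest(sign_file(path), expected_tag)

# Sender: ship model.h5 together with its tag.
tag = sign_file("model.h5")

# Receiver: refuse to deploy if the tag does not match.
assert verify_file("model.h5", tag), "model file was modified in transit"
```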

Monitoring - The directories in which the model files are stored should be monitored to identify any changes in the files.
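A simple version of such monitoring can be built from file hashes: record a SHA-256 baseline for every model file at deployment time and periodically compare against it. The directory path and polling interval below are assumptions; a production setup would typically rely on OS-level auditing or a dedicated file-integrity tool instead:

```python
import hashlib
import time
from pathlib import Path

MODEL_DIR = Path("models")  # hypothetical directory to watch

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Record a trusted baseline at deployment time.
baseline = {p: sha256(p) for p in MODEL_DIR.glob("*.h5")}

while True:
    for path, expected in baseline.items():
        if sha256(path) != expected:
            # In practice: raise an alert and quarantine the model.
            print(f"ALERT: {path} was modified on disk")
    time.sleep(60)  # polling interval is an arbitrary choice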

Summary

Tampering attacks can impair the operation of the decision engine to varying degrees, from damaging the classification thresholds to completely shutting down the system. To carry out the attack, the attacker needs access to the model files, and this access can be blocked with classic, basic security mechanisms such as access control, integrity checks, and monitoring.

