> Google DeepMind and GTIG have identified an increase in model extraction attempts or "distillation attacks," a method of intellectual property theft that violates Google's terms of service.
That’s rich considering the source of training data for these models.
Maybe that’s the eventual outcome of the IP-theft lawsuits currently in play: if you trained on stolen data, then anyone can distill your model.
I doubt it will play out that way though.
It’s hard to see this as being as important as the article suggests. Just another "shoe on the other foot" situation.
"Distillation attack" feels like a loaded term for what is essentially the same kind of scraping these models are built on in the first place.