What is model poisoning?

Publish date: 2025/09/23 20:54:41 UTC

Correct Answer

This is when biased or malicious data is injected into the training process.

Explanation

Model poisoning occurs when an attacker maliciously alters training data so that the model produces biased, unethical, or harmful outputs. Data exposure, by contrast, refers to a model processing personal data such as health records. Licensing is a legal issue, not an attack. Conflicting instructions can confuse a model, but this is not considered poisoning.
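To make the idea concrete, here is a minimal sketch of one common poisoning technique, label flipping, where an attacker silently flips the labels of a fraction of the training set before training begins. The function name `poison_labels` and the toy dataset are hypothetical, used only to illustrate the concept:

```python
import random

def poison_labels(dataset, flip_fraction=0.2, seed=0):
    """Simulate a label-flipping poisoning attack.

    `dataset` is a list of (features, label) pairs with binary labels.
    An attacker flips the labels of `flip_fraction` of the examples,
    corrupting the training signal without touching the features.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        features, label = poisoned[i]
        poisoned[i] = (features, 1 - label)  # flip 0 <-> 1
    return poisoned

# Toy dataset: feature is a number, label is its parity.
clean = [([x], x % 2) for x in range(10)]
poisoned = poison_labels(clean, flip_fraction=0.3)
changed = sum(1 for c, p in zip(clean, poisoned) if c[1] != p[1])
print(changed)  # 3 of 10 labels were flipped
```

A model trained on the poisoned set learns from corrupted labels, which is why defenses focus on validating and provenance-tracking training data before it enters the pipeline.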

Reference

AWS Certified AI Practitioner (AIF-C01) Study Guide, Tom Taulli
