ROBUST AND RESILIENT DEEP LEARNING MODELS AGAINST DATA POISONING AND EVASION ATTACKS

Keywords: Deep learning, Adversarial attacks, Data poisoning, Evasion attacks, Model robustness, Model resilience, Adversarial machine learning, Secure artificial intelligence, Robust training, Trustworthy AI

February 9, 2026

Objective : This paper examines how deep learning models can be made robust and resilient to data poisoning and evasion attacks. Method : It provides a detailed overview of widely used poisoning and evasion attack techniques, evaluates their impact on model performance and reliability, and examines the defense mechanisms used to detect, prevent, or mitigate these attacks. Results : The paper also discusses robust training strategies, anomaly detection approaches, and resilient model architectures that make models more resistant to adversarial behavior. Novelty : By synthesizing recent advances, the article supports the development of secure, trustworthy, and stable deep learning systems that can be deployed reliably in adversarial environments.
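As a concrete illustration of one evasion technique commonly covered in such surveys, the Fast Gradient Sign Method (FGSM) can be sketched in a few lines. The toy logistic-regression model, its random weights, and the epsilon budget below are illustrative assumptions for this sketch, not the paper's experimental setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of a toy logistic model on one input."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """FGSM evasion: nudge the input in the sign of dLoss/dx.

    For logistic regression the input gradient of the BCE loss has the
    closed form (p - y) * w, so no autodiff framework is needed here.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)  # bounded step per input dimension

# Illustrative setup: random weights stand in for a trained model.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.0
x, y = rng.normal(size=4), 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.2)
clean_loss = bce_loss(x, y, w, b)
adv_loss = bce_loss(x_adv, y, w, b)
```

The perturbation stays within an L-infinity budget of `eps` per dimension yet raises the model's loss on the example, which is the behavior adversarial (robust) training strategies are designed to counteract by training on such perturbed inputs.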