Coursera
AI Security: Security in the Age of Artificial Intelligence Specialization

Build Secure AI Systems End-to-End. Learn to identify, prevent, and respond to AI-specific threats across the entire ML lifecycle.

Instructors: Reza Moradinezhad, Starweaver, Ritesh Vajariya

Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete at 10 hours a week
Flexible schedule: learn at your own pace

What you'll learn

  • Secure AI systems using static analysis, threat modeling, and vulnerability assessment techniques

  • Implement production security controls including monitoring, incident response, and patch management

  • Conduct red-teaming exercises and build resilient defenses against AI-specific attack vectors

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English
Recently updated!

December 2025

See how employees at top companies are mastering in-demand skills

Logos of Petrobras, TATA, Danone, Capgemini, P&G, and L'Oréal

Advance your subject-matter expertise

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from Coursera

Specialization - 13 course series

What you'll learn

  • Configure Bandit, Semgrep, and PyLint to detect AI vulnerabilities: insecure model deserialization, hardcoded secrets, and unsafe system calls in ML code.

  • Apply static analysis to fix AI vulnerabilities (pickle exploits, input validation, dependencies); create custom rules for AI security patterns.

  • Implement pip-audit, Safety, Snyk for dependency scanning; assess AI libraries for vulnerabilities, license compliance, and supply chain security.
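
The deserialization risk named in the first bullet is easy to demonstrate in a few lines of standard-library Python. This sketch (illustrative, not course material) shows why loading a pickle file from an untrusted source is the pattern scanners such as Bandit flag: unpickling can invoke an arbitrary callable.

```python
import pickle

# A malicious object: __reduce__ tells pickle to call an arbitrary
# callable during unpickling. Here it's a harmless eval, but it could
# just as easily be os.system -- which is why tools like Bandit warn
# on pickle.load of untrusted data.
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # code runs here, at load time
print(result)                # 42 -- proof the callable executed
```

Safer model-loading paths (e.g. weight-only formats, or verifying artifact provenance before deserializing) avoid executing attacker-controlled bytecode entirely.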

Skills you'll gain

Category: AI Security
Category: Dependency Analysis
Category: Vulnerability Scanning
Category: Open Source Technology
Category: Secure Coding
Category: DevSecOps
Category: Analysis
Category: PyTorch (Machine Learning Library)
Category: Supply Chain
Category: Program Implementation
Category: Vulnerability Assessments
Category: Application Security
Category: Threat Modeling
Category: AI Personalization
Category: MLOps (Machine Learning Operations)
Category: Continuous Integration

What you'll learn

  • Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.

  • Design and implement comprehensive security test cases for AI systems including unit tests, integration tests, and adversarial robustness testing.

  • Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.
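
As a sketch of the security-test idea in the second bullet, the check below verifies that a model's prediction does not flip inside a small perturbation box around an input — the shape of an adversarial-robustness unit test. The "model" here is a toy linear scorer standing in for a real one; all names are illustrative.

```python
# Toy stand-in for a trained model: classify by sign of a linear score.
def predict(x):
    score = 2.0 * x[0] - 1.0 * x[1] + 0.5
    return 1 if score >= 0 else 0

def locally_robust(x, eps=0.01):
    """Return True if the label is stable within an eps-box around x."""
    base = predict(x)
    for dx in (-eps, 0.0, eps):
        for dy in (-eps, 0.0, eps):
            if predict([x[0] + dx, x[1] + dy]) != base:
                return False
    return True

print(locally_robust([1.0, 1.0]))  # point far from the decision boundary
```

In a real suite this assertion would run in CI against the production model, alongside ordinary unit and integration tests.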

Skills you'll gain

Category: AI Security
Category: Threat Modeling
Category: Security Testing
Category: System Monitoring
Category: Scripting
Category: CI/CD
Category: Integration Testing
Category: MITRE ATT&CK Framework
Category: MLOps (Machine Learning Operations)
Category: Threat Detection
Category: Secure Coding
Category: Unit Testing
Category: DevSecOps
Category: Continuous Integration
Category: Application Security
Category: DevOps
Category: Test Case
Category: Prompt Engineering
Category: Continuous Monitoring

What you'll learn

  • Analyze inference bottlenecks to identify optimization opportunities in production ML systems.

  • Implement model pruning techniques to reduce computational complexity while maintaining acceptable accuracy.

  • Apply quantization methods and benchmark trade-offs for secure and efficient model deployment.
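
The pruning and quantization bullets can be sketched on a toy weight vector in plain Python (real workflows would use framework utilities; the numbers here are made up for illustration):

```python
weights = [0.92, -0.03, 0.51, 0.004, -0.77, 0.12]

# Magnitude pruning: zero out weights whose magnitude is below a threshold.
threshold = 0.1
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

# Symmetric 8-bit quantization: map floats onto integers in [-127, 127].
scale = max(abs(w) for w in pruned) / 127
quantized = [round(w / scale) for w in pruned]
dequantized = [q * scale for q in quantized]  # what inference would see

print(pruned)
print(quantized)
```

The accuracy/efficiency trade-off the course benchmarks is visible even here: `dequantized` differs slightly from `pruned`, and the pruned zeros are lost information.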

Skills you'll gain

Category: Model Deployment
Category: Model Evaluation
Category: Process Optimization
Category: Network Performance Management
Category: Convolutional Neural Networks
Category: Cloud Deployment
Category: Benchmarking
Category: Keras (Neural Network Library)
Category: Project Performance
Category: Network Model

What you'll learn

  • Apply infrastructure hardening in ML environments using secure setup, IAM controls, patching, and container scans to protect data.

  • Secure ML CI/CD workflows through automated dependency scanning, build validation, and code signing to prevent supply chain risks.

  • Design resilient ML pipelines by integrating rollback, drift monitoring, and adaptive recovery to maintain reliability and system trust.
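
One concrete supply-chain control behind the second bullet is digest pinning: record a cryptographic hash of each artifact at build time and refuse to load anything that doesn't match. A minimal standard-library sketch (the artifact bytes are placeholders):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Refuse any model artifact whose digest doesn't match the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

artifact = b"fake model bytes"                      # placeholder artifact
pin = hashlib.sha256(artifact).hexdigest()          # recorded at build time

print(verify_artifact(artifact, pin))               # True: untouched
print(verify_artifact(artifact + b"!", pin))        # False: tampered
```

Tools like pip's `--require-hashes` mode apply the same idea to dependencies; code signing extends it with a verifiable identity for the signer.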

Skills you'll gain

Category: AI Security
Category: CI/CD
Category: Model Evaluation
Category: Responsible AI
Category: Model Deployment
Category: Vulnerability Assessments
Category: Hardening
Category: Continuous Monitoring
Category: Engineering
Category: Vulnerability Scanning
Category: Containerization
Category: MLOps (Machine Learning Operations)
Category: AI Personalization
Category: Resilience
Category: Compliance Management
Category: Security Controls
Category: Threat Modeling
Category: Identity and Access Management
Category: DevSecOps
Category: Infrastructure Security

What you'll learn

  • Execute secure deployment strategies (blue/green, canary, shadow) with traffic controls, health gates, and rollback plans.

  • Implement model registry governance (versioning, lineage, stage transitions, approvals) to enforce provenance and promote-to-prod workflows.

  • Design monitoring that triggers runbooks; secure updates via code signing and CI/CD policy for auditable releases and controlled rollback.
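
The traffic-control idea behind a canary rollout (first bullet) reduces to weighted routing: send a small, adjustable fraction of requests to the new model and watch its health gates before promoting. A self-contained sketch with a seeded RNG so the split is reproducible:

```python
import random

def route(canary_fraction, rng):
    """Send roughly canary_fraction of requests to the canary model."""
    return "canary" if rng.random() < canary_fraction else "stable"

rng = random.Random(42)
counts = {"stable": 0, "canary": 0}
for _ in range(10_000):
    counts[route(0.05, rng)] += 1   # start the canary at 5% of traffic

print(counts)  # roughly a 95% / 5% split
```

In production the fraction would be raised in stages, with an automatic rollback to "stable" if the canary's error rate or latency breaches a health gate.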

Skills you'll gain

Category: Model Deployment
Category: AI Security
Category: CI/CD
Category: Data-Driven Decision-Making
Category: MLOps (Machine Learning Operations)
Category: DevOps
Category: Artificial Intelligence and Machine Learning (AI/ML)
Category: Cloud Deployment

What you'll learn

  • Analyze and identify a range of security vulnerabilities in complex AI models, including evasion, data poisoning, and model extraction attacks.

  • Apply defense mechanisms like adversarial training and differential privacy to protect AI systems from known threats.

  • Evaluate the effectiveness of security measures by designing and executing simulated adversarial attacks to test the resilience of defended AI models.
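
The evasion attacks named in the first bullet exploit a simple fact: a small, deliberately chosen input change can flip a model's decision. For a linear scorer the gradient is just the weight vector, so a sign-of-gradient perturbation (the idea behind FGSM-style attacks) can be shown in pure Python — a toy illustration, not a full attack implementation:

```python
w = [2.0, -1.0]                 # toy model weights

def score(x):                   # decision score; label = 1 if score >= 0
    return w[0] * x[0] + w[1] * x[1]

x = [0.3, 0.5]                  # clean input, score ~0.1 -> label 1
eps = 0.2
# Push the score downward by stepping against the sign of the gradient
# (which, for a linear model, is just w itself).
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(score(x), score(x_adv))   # ~0.1 vs ~-0.5: the label flips
```

Defenses covered in the course, such as adversarial training, work by folding inputs like `x_adv` back into the training set so the boundary moves away from them.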

Skills you'll gain

Category: Vulnerability Assessments
Category: Cyber Threat Hunting
Category: AI Security
Category: Security Strategy
Category: Analysis
Category: Threat Modeling
Category: Security Engineering
Category: Model Evaluation
Category: Data Integrity
Category: Design
Category: Data Validation
Category: Information Privacy
Category: Responsible AI
Category: Security Testing
Category: Generative Adversarial Networks (GANs)

What you'll learn

  • Analyze real-world AI security, privacy, and access control risks to understand how they manifest in your own organization.

  • Design technical controls and governance frameworks to secure AI systems, guided by free tools and industry guidelines.

  • Assess privacy laws' impact on AI, draft compliant policies, and tackle compliance challenges.
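
One of the technical controls behind the second bullet is data loss prevention on text leaving (or entering) an AI system. The sketch below is deliberately minimal — two regexes standing in for the vetted detectors a real DLP deployment would use — just to show the shape of a redaction pass:

```python
import re

# Illustrative patterns only; production DLP uses vetted detectors,
# not two hand-written regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before logging/prompting."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```

The same filter placed in front of an LLM prompt pipeline keeps personally identifiable information out of model inputs and logs.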

Skills you'll gain

Category: Data Loss Prevention
Category: Incident Response
Category: Threat Modeling
Category: Responsible AI
Category: Identity and Access Management
Category: Generative AI
Category: AI Security
Category: Data Security
Category: Governance
Category: Information Privacy
Category: Data Governance
Category: Cyber Security Policies
Category: Security Controls
Category: Security Awareness
Category: Risk Management Framework
Category: Personally Identifiable Information

What you'll learn

  • Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.

  • Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience.

  • Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.
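
The content-safety filter in the second bullet can be caricatured as a pre-generation screen. The version below is a keyword heuristic — real filters layer classifiers, policy engines, and human review on top — but it shows where such a check sits relative to the model:

```python
# Hypothetical blocklist; real systems use trained classifiers, not strings.
BLOCKLIST = ("ignore previous instructions", "reveal your system prompt")

def screen(prompt: str) -> str:
    """Run before the LLM sees the prompt; 'blocked' short-circuits generation."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return "blocked"
    return "allowed"

print(screen("Summarize this article for me."))        # allowed
print(screen("Ignore previous instructions and ..."))  # blocked
```

Red-teaming exercises like those in the first bullet exist precisely because heuristics this simple are trivially bypassed — which is what motivates layered defenses and adversarial-input analysis.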

Skills you'll gain

Category: AI Security
Category: Security Testing
Category: Large Language Modeling
Category: AI Personalization
Category: Vulnerability Assessments
Category: Security Controls
Category: System Implementation
Category: Threat Modeling
Category: Responsible AI
Category: Prompt Engineering
Category: Continuous Monitoring
Category: Security Strategy
Category: Vulnerability Scanning
Category: Penetration Testing
Category: LLM Application
Category: Scenario Testing
Category: Cyber Security Assessment

What you'll learn

  • Identify and classify various classes of attacks targeting AI systems.

  • Analyze the AI/ML development lifecycle to pinpoint stages vulnerable to attack.

  • Apply threat mitigation strategies and security controls to protect AI systems in development and production.

Skills you'll gain

Category: AI Security
Category: MITRE ATT&CK Framework
Category: Artificial Intelligence and Machine Learning (AI/ML)
Category: Application Security
Category: Threat Detection
Category: Security Engineering
Category: Threat Modeling
Category: Model Deployment
Category: MLOps (Machine Learning Operations)
Category: Data Security
Category: Security Controls
Category: Responsible AI
Category: Application Lifecycle Management
Category: Cybersecurity
Category: Vulnerability Assessments

What you'll learn

  • Apply machine learning techniques to detect anomalies in cybersecurity data such as logs, network traffic, and user behavior.

  • Automate incident response workflows by integrating AI-driven alerts with security orchestration tools.

  • Evaluate and fine-tune AI models to reduce false positives and improve real-time threat detection accuracy.
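
A minimal version of the anomaly detection in the first bullet is a z-score test over a security metric stream, such as failed logins per minute. The threshold `k` is exactly the knob the third bullet tunes to trade false positives against missed detections (the baseline numbers here are invented):

```python
import statistics

# Baseline window of the metric, e.g. failed logins per minute.
baseline = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_anomaly(value, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    return abs(value - mu) > k * sigma

print(is_anomaly(5))    # False: in line with the baseline
print(is_anomaly(40))   # True: a burst worth alerting on
```

Production systems replace the static baseline with rolling windows or learned models, but the evaluate-and-tune loop is the same: raise `k` to cut false positives, lower it to catch subtler attacks.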

Skills you'll gain

Category: Anomaly Detection
Category: Data Analysis
Category: User Feedback
Category: Application Performance Management
Category: Process Optimization
Category: Time Series Analysis and Forecasting
Category: Query Languages
Category: Scalability
Category: Microsoft Azure
Category: Data Integration
Category: Site Reliability Engineering
Category: Generative AI

What you'll learn

  • Apply systematic patching strategies to AI models, ML frameworks, and dependencies while maintaining service availability and model performance.

  • Conduct blameless post-mortems for AI incidents using structured frameworks to identify root causes, document lessons learned, and prevent recurrence.

  • Set up monitoring, alerts, and recovery to detect and resolve model drift, performance drops, and failures early.
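
The drift monitoring in the third bullet boils down to comparing a recent window of a model quality metric against a reference window and alerting on degradation. A stdlib-only sketch (the metric values and tolerance are illustrative):

```python
def drifted(reference, recent, tolerance=0.05):
    """Alert when average accuracy drops more than `tolerance` below baseline."""
    ref_avg = sum(reference) / len(reference)
    rec_avg = sum(recent) / len(recent)
    return (ref_avg - rec_avg) > tolerance

ref_accuracy = [0.91, 0.92, 0.90, 0.93]             # recorded at deployment

print(drifted(ref_accuracy, [0.90, 0.92, 0.91]))    # False: stable
print(drifted(ref_accuracy, [0.78, 0.80, 0.79]))    # True: raise an alert
```

Wiring this check to an alerting channel — and to a rollback runbook — is what turns drift detection into the early recovery the bullet describes.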

Skills you'll gain

Category: Artificial Intelligence
Category: System Monitoring
Category: Model Deployment
Category: Problem Management
Category: Dependency Analysis
Category: AI Security
Category: Automation
Category: DevOps
Category: Sprint Retrospectives
Category: Incident Management
Category: Continuous Monitoring
Category: Site Reliability Engineering
Category: Patch Management
Category: Vulnerability Assessments
Category: Disaster Recovery
Category: MLOps (Machine Learning Operations)

What you'll learn

  • Explain the fundamentals of deploying AI models on mobile applications, including their unique performance, privacy, and security considerations.

  • Analyze threats to mobile AI models like reverse engineering, adversarial attacks, and privacy leaks and their effect on reliability and trust.

  • Design a layered defense strategy for securing mobile AI applications by integrating encryption, obfuscation, and continuous telemetry monitoring.
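
One layer of the defense-in-depth in the third bullet is authenticating the on-device model file, so a model swapped in by reverse engineering is rejected at load time. The sketch uses an HMAC with a placeholder key; a real app would derive the key from the platform keystore and add encryption and obfuscation on top:

```python
import hashlib
import hmac

KEY = b"device-keystore-derived-key"  # placeholder for a real keystore key

def sign(model_bytes: bytes) -> str:
    """Compute the tag the app ships alongside the model file."""
    return hmac.new(KEY, model_bytes, hashlib.sha256).hexdigest()

def verify(model_bytes: bytes, tag: str) -> bool:
    """Constant-time check before the model is loaded for inference."""
    return hmac.compare_digest(sign(model_bytes), tag)

model = b"tflite-model-bytes"          # placeholder model payload
tag = sign(model)

print(verify(model, tag))              # True: untouched
print(verify(model + b"x", tag))       # False: tampering detected
```

`hmac.compare_digest` is used instead of `==` to avoid leaking tag information through timing, a small example of the threat-aware choices this course emphasizes.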

Skills you'll gain

Category: Encryption
Category: AI Security
Category: Continuous Monitoring
Category: Security Requirements Analysis
Category: Mobile Security
Category: Mobile Development
Category: Apple iOS
Category: Program Implementation
Category: Security Management
Category: Information Privacy
Category: System Monitoring
Category: Threat Modeling
Category: Threat Management
Category: Application Security
Category: Model Deployment

What you'll learn

  • Analyze how AI features like sensors, models, and agents turn phones into attack surfaces and enable deepfake-based scams.

  • Evaluate technical attack paths—zero-permission inference and multi-layer agent attacks—using real research cases.

  • Design a mobile-focused detection and response plan with simple rules, containment steps, and key resilience controls.
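
The "simple rules" in the third bullet amount to matching telemetry events against containment actions. The sketch below is entirely hypothetical — the event names and actions are invented — but it shows the detection-and-response shape the course asks you to design:

```python
# Hypothetical telemetry events and containment actions for illustration.
RULES = [
    {"event": "mic_access_while_screen_off", "action": "revoke_permission"},
    {"event": "model_file_hash_mismatch", "action": "quarantine_app"},
]

def respond(event: str) -> str:
    """Map an observed telemetry event to its containment step."""
    for rule in RULES:
        if rule["event"] == event:
            return rule["action"]
    return "log_only"  # unmatched events are recorded for later triage

print(respond("model_file_hash_mismatch"))  # quarantine_app
print(respond("battery_low"))               # log_only
```

Resilience controls then sit behind the actions: quarantining an app, for instance, should be reversible and logged so false positives can be triaged.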

Skills you'll gain

Category: Incident Response
Category: Mobile Security
Category: AI Security
Category: Threat Modeling
Category: Information Privacy
Category: Prompt Engineering
Category: Exploit development
Category: Artificial Intelligence
Category: Endpoint Security
Category: Mobile Development Tools
Category: Deep Learning
Category: Hardening
Category: Security Controls
Category: Threat Detection

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructors

Reza Moradinezhad
Coursera
6 Courses · 4,041 learners
Starweaver
Coursera
514 Courses · 932,190 learners
Ritesh Vajariya
Coursera
23 Courses · 11,897 learners

Offered by

Coursera

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."

Frequently asked questions