Coursera
AI Security: Security in the Age of Artificial Intelligence Specialization

Gain next-level skills with Coursera Plus for $199 (regularly $399). Save now.

Coursera

AI Security: Security in the Age of Artificial Intelligence Specialization

Build Secure AI Systems End-to-End. Learn to identify, prevent, and respond to AI-specific threats across the entire ML lifecycle.

Reza Moradinezhad
Starweaver
Ritesh Vajariya

Instructors: Reza Moradinezhad

Included with Coursera Plus

Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace
Get in-depth knowledge of a subject
Intermediate level

Recommended experience

4 weeks to complete
at 10 hours a week
Flexible schedule
Learn at your own pace

What you'll learn

  • Secure AI systems using static analysis, threat modeling, and vulnerability assessment techniques

  • Implement production security controls including monitoring, incident response, and patch management

  • Conduct red-teaming exercises and build resilient defenses against AI-specific attack vectors

Details to know

Shareable certificate

Add to your LinkedIn profile

Taught in English
Recently updated!

December 2025

See how employees at top companies are mastering in-demand skills

 logos of Petrobras, TATA, Danone, Capgemini, P&G and L'Oreal

Advance your subject-matter expertise

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from Coursera

Specialization - 13 course series

What you'll learn

  • Configure Bandit, Semgrep, PyLint to detect AI vulnerabilities: insecure model deserialization, hardcoded secrets, unsafe system calls in ML code.

  • Apply static analysis to fix AI vulnerabilities (pickle exploits, input validation, dependencies); create custom rules for AI security patterns.

  • Implement pip-audit, Safety, Snyk for dependency scanning; assess AI libraries for vulnerabilities, license compliance, and supply chain security.

Skills you'll gain

Category: Dependency Analysis
Category: AI Security
Category: Vulnerability Scanning
Category: DevSecOps
Category: Analysis
Category: Vulnerability Assessments
Category: PyTorch (Machine Learning Library)
Category: MLOps (Machine Learning Operations)
Category: AI Personalization
Category: Continuous Integration
Category: Application Security
Category: Open Source Technology
Category: Threat Modeling
Category: Secure Coding
Category: Supply Chain
Category: Program Implementation

What you'll learn

  • Analyze and evaluate AI inference threat models, identifying attack vectors and vulnerabilities in machine learning systems.

  • Design and implement comprehensive security test cases for AI systems including unit tests, integration tests, and adversarial robustness testing.

  • Integrate AI security testing into CI/CD pipelines for continuous security validation and monitoring of production deployments.

Skills you'll gain

Category: AI Security
Category: Threat Modeling
Category: Security Testing
Category: System Monitoring
Category: Unit Testing
Category: Scripting
Category: Prompt Engineering
Category: Threat Detection
Category: MITRE ATT&CK Framework
Category: Continuous Integration
Category: Secure Coding
Category: Integration Testing
Category: DevSecOps
Category: Application Security
Category: Test Case
Category: CI/CD
Category: MLOps (Machine Learning Operations)
Category: DevOps
Category: Continuous Monitoring

What you'll learn

  • Analyze inference bottlenecks to identify optimization opportunities in production ML systems.

  • Implement model pruning techniques to reduce computational complexity while maintaining acceptable accuracy.

  • Apply quantization methods and benchmark trade-offs for secure and efficient model deployment.

Skills you'll gain

Category: Model Deployment
Category: Network Performance Management
Category: Benchmarking
Category: Convolutional Neural Networks
Category: Process Optimization
Category: Project Performance
Category: Keras (Neural Network Library)
Category: Model Evaluation
Category: Cloud Deployment
Category: Network Model

What you'll learn

  • Apply infrastructure hardening in ML environments using secure setup, IAM controls, patching, and container scans to protect data.

  • Secure ML CI/CD workflows through automated dependency scanning, build validation, and code signing to prevent supply chain risks.

  • Design resilient ML pipelines by integrating rollback, drift monitoring, and adaptive recovery to maintain reliability and system trust.

Skills you'll gain

Category: AI Security
Category: CI/CD
Category: Resilience
Category: Model Evaluation
Category: DevSecOps
Category: Engineering
Category: Identity and Access Management
Category: Containerization
Category: Security Controls
Category: Vulnerability Scanning
Category: Responsible AI
Category: Vulnerability Assessments
Category: Hardening
Category: Infrastructure Security
Category: Threat Modeling
Category: Compliance Management
Category: AI Personalization
Category: Model Deployment
Category: Continuous Monitoring
Category: MLOps (Machine Learning Operations)

What you'll learn

  • Execute secure deployment strategies (blue/green, canary, shadow) with traffic controls, health gates, and rollback plans.

  • Implement model registry governance (versioning, lineage, stage transitions, approvals) to enforce provenance and promote-to-prod workflows.

  • Design monitoring triggering runbooks; secure updates via signing + CI/CD policy for auditable releases and controlled rollback.

Skills you'll gain

Category: AI Security
Category: Model Deployment
Category: DevOps
Category: CI/CD
Category: Cloud Deployment
Category: Data-Driven Decision-Making
Category: MLOps (Machine Learning Operations)
Category: Artificial Intelligence and Machine Learning (AI/ML)

What you'll learn

  • Analyze and identify a range of security vulnerabilities in complex AI models, including evasion, data poisoning, and model extraction attacks.

  • Apply defense mechanisms like adversarial training and differential privacy to protect AI systems from known threats.

  • Evaluate the effectiveness of security measures by designing and executing simulated adversarial attacks to test the resilience of defended AI model.

Skills you'll gain

Category: Analysis
Category: Threat Modeling
Category: Design
Category: Model Evaluation
Category: Data Validation
Category: Security Strategy
Category: Generative Adversarial Networks (GANs)
Category: Security Engineering
Category: Vulnerability Assessments
Category: Responsible AI
Category: Data Integrity
Category: Cyber Threat Hunting
Category: Security Testing
Category: Information Privacy
Category: AI Security

What you'll learn

  • Analyze real-world AI security, privacy, and access control risks to understand how these manifest in their own organizations.

  • Design technical controls and governance frameworks to secure AI systems, guided by free tools and industry guidelines.

  • Assess privacy laws' impact on AI, draft compliant policies, and tackle compliance challenges.

Skills you'll gain

Category: Security Awareness
Category: Identity and Access Management
Category: Risk Management Framework
Category: Data Governance
Category: Data Loss Prevention
Category: Information Privacy
Category: Responsible AI
Category: Data Security
Category: Generative AI
Category: Security Controls
Category: Threat Modeling
Category: Governance
Category: Cyber Security Policies
Category: AI Security
Category: Incident Response
Category: Personally Identifiable Information

What you'll learn

  • Design red-teaming scenarios to identify vulnerabilities and attack vectors in large language models using structured adversarial testing.

  • Implement content-safety filters to detect and mitigate harmful outputs while maintaining model performance and user experience.

  • Evaluate and enhance LLM resilience by analyzing adversarial inputs and developing defense strategies to strengthen overall AI system security.

Skills you'll gain

Category: Large Language Modeling
Category: Security Testing
Category: AI Security
Category: System Implementation
Category: Vulnerability Assessments
Category: Continuous Monitoring
Category: Vulnerability Scanning
Category: Security Controls
Category: AI Personalization
Category: Threat Modeling
Category: Security Strategy
Category: Penetration Testing
Category: LLM Application
Category: Responsible AI
Category: Prompt Engineering
Category: Scenario Testing
Category: Cyber Security Assessment

What you'll learn

  • Identify and classify various classes of attacks targeting AI systems.

  • Analyze the AI/ML development lifecycle to pinpoint stages vulnerable to attack.

  • Apply threat mitigation strategies and security controls to protect AI systems in development and production.

Skills you'll gain

Category: AI Security
Category: Application Lifecycle Management
Category: Threat Detection
Category: Model Deployment
Category: MITRE ATT&CK Framework
Category: Threat Modeling
Category: Data Security
Category: Vulnerability Assessments
Category: Application Security
Category: Responsible AI
Category: Security Engineering
Category: Security Controls
Category: Cybersecurity
Category: MLOps (Machine Learning Operations)
Category: Artificial Intelligence and Machine Learning (AI/ML)

What you'll learn

  • Apply machine learning techniques to detect anomalies in cybersecurity data such as logs, network traffic, and user behavior.

  • Automate incident response workflows by integrating AI-driven alerts with security orchestration tools.

  • Evaluate and fine-tune AI models to reduce false positives and improve real-time threat detection accuracy.

Skills you'll gain

Category: Anomaly Detection
Category: Scalability
Category: Microsoft Azure
Category: Time Series Analysis and Forecasting
Category: Generative AI
Category: Query Languages
Category: Process Optimization
Category: Site Reliability Engineering
Category: Data Integration
Category: User Feedback
Category: Data Analysis
Category: Application Performance Management

What you'll learn

  • Apply systematic patching strategies to AI models, ML frameworks, and dependencies while maintaining service availability and model performance.

  • Conduct blameless post-mortems for AI incidents using structured frameworks to identify root causes, document lessons learned, and prevent recurrence

  • Set up monitoring, alerts, and recovery to detect and resolve model drift, performance drops, and failures early.

Skills you'll gain

Category: Model Deployment
Category: Automation
Category: Disaster Recovery
Category: Site Reliability Engineering
Category: Dashboard
Category: Sprint Retrospectives
Category: Dependency Analysis
Category: DevOps
Category: Incident Management
Category: Patch Management
Category: Continuous Monitoring
Category: System Monitoring
Category: MLOps (Machine Learning Operations)
Category: Problem Management
Category: Artificial Intelligence
Category: AI Security
Category: Vulnerability Assessments

What you'll learn

  • Explain the fundamentals of deploying AI models on mobile applications, including their unique performance, privacy, and security considerations.

  • Analyze threats to mobile AI models like reverse engineering, adversarial attacks, and privacy leaks and their effect on reliability and trust.

  • Design a layered defense strategy for securing mobile AI applications by integrating encryption, obfuscation, and continuous telemetry monitoring.

Skills you'll gain

Category: Continuous Monitoring
Category: Encryption
Category: AI Security
Category: Information Privacy
Category: Security Requirements Analysis
Category: Security Management
Category: Program Implementation
Category: Application Security
Category: Apple iOS
Category: Mobile Development
Category: Mobile Security
Category: Threat Management
Category: System Monitoring
Category: Threat Modeling
Category: Model Deployment

What you'll learn

  • Analyze how AI features like sensors, models, and agents make phones attack surfaces and enable deepfake-based scams.

  • Evaluate technical attack paths—zero-permission inference and multi-layer agent attacks—using real research cases.

  • Design a mobile-focused detection and response plan with simple rules, containment steps, and key resilience controls.

Skills you'll gain

Category: Mobile Security
Category: Incident Response
Category: AI Security
Category: Information Privacy
Category: Security Controls
Category: Endpoint Security
Category: Mobile Development Tools
Category: Exploit development
Category: Prompt Engineering
Category: Threat Detection
Category: Threat Modeling
Category: Deep Learning
Category: Artificial Intelligence
Category: Hardening

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV. Share it on social media and in your performance review.

Instructors

Reza Moradinezhad
Coursera
6 Courses4,050 learners
Starweaver
Coursera
514 Courses933,743 learners
Ritesh Vajariya
Coursera
23 Courses11,985 learners

Offered by

Coursera

Why people choose Coursera for their career

Felipe M.
Learner since 2018
"To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."

Frequently asked questions