ISTQB Certified Tester AI Testing (CT-AI) Practice Exam


About ISTQB Certified Tester AI Testing (CT-AI) Exam

The ISTQB® Certified Tester AI Testing (CT-AI) is a globally recognised specialist certification that equips testing professionals with the knowledge and practical skills required to test AI-based systems and to leverage artificial intelligence as a tool within the broader software testing process. The CT-AI certification addresses both dimensions of the AI-testing relationship: how to test AI-based systems effectively, and how AI technologies can be applied to enhance and optimise conventional testing activities. By achieving this certification, professionals demonstrate the ability to contribute meaningfully to the quality assurance of AI-powered products while staying ahead of the rapidly evolving technology landscape.


Who should take the exam?

The CT-AI certification is designed for professionals involved in testing AI-based systems or those who wish to apply AI techniques within their testing practice. It is suitable for a broad range of roles across software development and quality assurance:


  • Software Testers and Test Analysts seeking to develop expertise in AI system testing
  • Data Analysts working at the intersection of data quality and AI model validation
  • Test Engineers and Test Consultants responsible for designing and executing AI test strategies
  • Test Managers and Test Leads overseeing quality assurance for AI-based products
  • User Acceptance Testers involved in validating AI-driven features and behaviours
  • Software Developers who want to understand testing principles for AI systems they build
  • Project Managers, Quality Managers, and Business Analysts requiring foundational AI testing knowledge
  • IT Directors, Operations Team Members, and Management Consultants advising on AI quality and governance


Candidates must hold the ISTQB® Certified Tester Foundation Level (CTFL) certificate prior to sitting the CT-AI examination.


Exam Overview


Exam Details

  • Certification Title: Certified Tester AI Testing (CT-AI)
  • Issuing Body: ISTQB® — International Software Testing Qualifications Board
  • Certification Stream: Specialist Stream
  • Syllabus Version: v1.0
  • Number of Questions: 40 multiple-choice questions
  • Total Points: 47 points
  • Passing Score: 31 points (approximately 66%)
  • Exam Duration: 60 minutes (+25% for non-native speakers)
  • Prerequisite: ISTQB® Certified Tester Foundation Level (CTFL)
  • Delivery Format: Classroom | Virtual | E-Learning | Self-Study



Exam Prerequisites

Candidates must hold the ISTQB® Certified Tester Foundation Level (CTFL) certification as a mandatory prerequisite before sitting the CT-AI examination. This requirement ensures that all candidates possess a solid understanding of core software testing principles before engaging with the specialist AI testing domain.
Holders of the CT-AI certification are eligible to continue their professional development across any of the following ISTQB® certification streams:

  • Core Stream — Advanced and Expert Level certifications in testing management, test analysis, and technical testing
  • Agile Stream — certifications addressing agile and DevOps testing practices
  • Specialist Stream — additional domain-specific certifications, including security, performance, and model-based testing


Course Outline

The CT-AI syllabus is structured across eleven comprehensive knowledge domains, spanning foundational AI concepts, machine learning principles, AI-specific testing challenges, specialized test techniques, and the application of AI within the testing process itself:

Domain 1 - Introduction to AI

  • Definition of AI and the AI Effect — foundational concepts and the evolving nature of artificial intelligence
  • Narrow, General, and Super AI — distinctions between AI capability tiers and their implications for testing
  • AI-Based vs Conventional Systems — key differences that affect testing approaches and strategies
  • AI Technologies — overview of core AI techniques including machine learning, natural language processing, and computer vision
  • AI Development Frameworks — commonly used frameworks and their relevance to test planning
  • Hardware for AI-Based Systems — understanding hardware dependencies and their impact on system behaviour
  • AI as a Service (AIaaS) — considerations for testing cloud-hosted AI components and third-party AI services
  • Pre-Trained Models — testing implications when incorporating pre-trained models into AI-based systems
  • Standards, Regulations and AI — applicable regulatory frameworks and standards governing AI system quality


Domain 2 - Quality Characteristics for AI-Based Systems

  • Flexibility and Adaptability — assessing a system's capacity to perform effectively across varied inputs and contexts
  • Autonomy — evaluating systems that operate and make decisions independently
  • Evolution — testing systems that learn and change behaviour over time
  • Bias — identifying and mitigating algorithmic, sample, and training data bias
  • Ethics — ensuring AI systems operate in accordance with ethical principles and do not cause harm
  • Side Effects and Reward Hacking — detecting unintended behaviours arising from optimization objectives
  • Transparency, Interpretability, and Explainability — verifying that AI decisions can be understood and justified
  • Safety and AI — testing for safe system behaviour, particularly in high-stakes operational environments


Domain 3 - Machine Learning (ML) — Overview

  • Forms of ML — supervised, unsupervised, reinforcement, and semi-supervised learning
  • ML Workflow — end-to-end process from data collection and model training through to evaluation and deployment
  • Selecting a Form of ML — criteria and considerations for choosing the appropriate ML approach
  • Factors Involved in ML Algorithm Selection — trade-offs relating to accuracy, complexity, and interpretability
  • Overfitting and Underfitting — identifying and addressing model generalization failures
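
The overfitting/underfitting distinction in the last bullet can be made concrete with a toy sketch. The models and data below are purely illustrative (not from the syllabus): a "memorizer" that looks up training points achieves zero training error but fails on validation data (overfitting), while a constant-mean predictor does poorly on both (underfitting).

```python
# Toy illustration of overfitting vs. underfitting.
# All data and model names here are hypothetical examples.

def mae(pairs, model):
    """Mean absolute error of `model` over (x, y) pairs."""
    return sum(abs(model(x) - y) for x, y in pairs) / len(pairs)

train = [(1, 1.0), (2, 2.1), (3, 2.9), (4, 4.2)]
val = [(5, 5.0), (6, 6.1)]

# Overfit model: memorizes the training points, guesses 0.0 elsewhere.
lookup = dict(train)
memorizer = lambda x: lookup.get(x, 0.0)

# Underfit model: always predicts the training mean, ignoring x entirely.
mean_y = sum(y for _, y in train) / len(train)
constant = lambda x: mean_y

# Reasonable model: captures the underlying linear trend y ≈ x.
linear = lambda x: float(x)

for name, model in [("memorizer", memorizer), ("constant", constant), ("linear", linear)]:
    print(name, round(mae(train, model), 2), round(mae(val, model), 2))
```

The memorizer's training error is exactly zero while its validation error is the largest of the three, which is the characteristic signature of a model that has failed to generalize.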

Domain 4 - ML Data

  • Data Preparation as Part of the ML Workflow — data cleaning, transformation, and feature engineering
  • Training, Validation, and Test Datasets in the ML Workflow — dataset design and management strategies
  • Dataset Quality Issues — detecting and resolving problems including missing data, imbalance, and noise
  • Data Quality and Its Effect on the ML Model — understanding how data characteristics drive model performance
  • Data Labelling for Supervised Learning — quality considerations for manual and automated labelling processes
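
The training/validation/test split mentioned above can be sketched in a few lines. The 70/15/15 fractions and fixed seed below are illustrative choices, not CT-AI requirements:

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle and split data into train/validation/test subsets.

    Fractions and seed are illustrative; the remainder forms the test set.
    """
    items = list(data)
    random.Random(seed).shuffle(items)   # fixed seed keeps the split reproducible
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))   # 70 15 15
```

Keeping the test set untouched until final evaluation is what makes its performance estimate trustworthy; reusing it during tuning leaks information into the model.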


Domain 5 - ML Functional Performance Metrics

  • Confusion Matrix — interpreting classification outcomes using true/false positive and negative results
  • ML Functional Performance Metrics for Classification, Regression, and Clustering — precision, recall, F1, RMSE, and cluster quality measures
  • Limitations of ML Functional Performance Metrics — understanding where metrics can mislead or be insufficient
  • Selecting ML Functional Performance Metrics — matching metrics to business objectives and model type
  • Benchmark Suites for ML Performance — using standardized benchmarks to evaluate and compare model quality
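
The confusion-matrix metrics in this domain follow directly from the four cell counts. A minimal sketch, using made-up labels for a binary classifier:

```python
def confusion(actual, predicted, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    tn = sum(a != positive and p != positive for a, p in zip(actual, predicted))
    return tp, fp, fn, tn

# Illustrative labels: 4 actual positives, 6 actual negatives.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

tp, fp, fn, tn = confusion(actual, predicted)
precision = tp / (tp + fp)                           # 3 / 4 = 0.75
recall    = tp / (tp + fn)                           # 3 / 4 = 0.75
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean = 0.75
```

This also illustrates the "limitations" bullet: on heavily imbalanced data a classifier that always predicts the majority class can score high accuracy while precision and recall for the minority class collapse.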


Domain 6 - ML Neural Networks and Testing

  • Neural Networks — architecture, layers, activation functions, and testing considerations
  • Coverage Measures for Neural Networks — neuron coverage, layer coverage, and other deep learning-specific adequacy criteria
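
Neuron coverage, one of the adequacy criteria named above, is typically defined as the fraction of neurons driven above an activation threshold by at least one test input. A sketch on a hand-wired toy network (the weights and inputs are invented for illustration):

```python
def relu(x):
    return max(0.0, x)

def forward(weights, inputs):
    """Run a tiny fully connected ReLU net; return every neuron's activation."""
    activations = []
    layer = inputs
    for matrix in weights:   # one weight matrix per layer, one row per neuron
        layer = [relu(sum(w * x for w, x in zip(row, layer))) for row in matrix]
        activations.extend(layer)
    return activations

def neuron_coverage(weights, test_inputs, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one input."""
    n_neurons = sum(len(matrix) for matrix in weights)
    covered = set()
    for inputs in test_inputs:
        for i, activation in enumerate(forward(weights, inputs)):
            if activation > threshold:
                covered.add(i)
    return len(covered) / n_neurons

# Hand-picked weights for a 2-input network: 2 hidden neurons, 1 output neuron.
weights = [
    [[1.0, -1.0], [-1.0, 1.0]],   # hidden layer
    [[1.0, 1.0]],                 # output layer
]
tests = [[1.0, 0.0], [0.0, 1.0]]
print(neuron_coverage(weights, tests))   # 1.0: both inputs together cover all 3 neurons
```

A single input here covers only 2 of the 3 neurons; adding the second input raises coverage to 100%, which is the intuition behind using such measures to guide test input generation.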


Domain 7 - Testing AI-Based Systems — Overview

  • Specification of AI-Based Systems — challenges in defining testable requirements for AI behaviours
  • Test Levels of AI-Based Systems — applying unit, integration, system, and acceptance testing to AI components
  • Test Data for Testing AI-Based Systems — designing and sourcing appropriate test data for AI validation
  • Testing for Automation Bias in AI-Based Systems — detecting over-reliance on automated AI recommendations
  • Documenting an AI-Based Component — capturing model metadata, versioning, and behavioural specifications
  • Testing for Concept Drift — verifying model performance remains valid as real-world data distributions shift
  • Selecting a Test Approach for an ML System — risk-based and model-type-specific approach selection



Domain 8 - Testing AI-Specific Quality Characteristics

  • Challenges Testing Self-Learning Systems — managing evolving behaviour and non-static test oracles
  • Testing Autonomous Self-Learning Systems — validating systems that adapt and act without human intervention
  • Testing for Algorithmic, Sample, and Inappropriate Bias — fairness testing strategies across demographic and contextual dimensions
  • Challenges Testing Probabilistic and Non-Deterministic AI-Based Systems — handling variability and stochastic outputs
  • Challenges Testing Complex AI-Based Systems — emergent behaviour, integration complexity, and black-box limitations
  • Testing Transparency, Interpretability, and Explainability — techniques to evaluate how well a system's reasoning can be understood
  • Test Oracles for AI-Based Systems — defining and deriving expected outcomes for non-deterministic systems
  • Test Objectives and Acceptance Criteria — establishing measurable quality gates for AI-based systems


Domain 9 - Methods and Techniques for Testing AI-Based Systems

  • Adversarial Attacks and Data Poisoning — testing system robustness against malicious inputs and corrupted training data
  • Pairwise Testing — applying combinatorial techniques to manage AI input space complexity
  • A/B Testing — comparing model versions in controlled or live environments to evaluate performance differences
  • Back-to-Back Testing — validating consistency across multiple model implementations or versions
  • Metamorphic Testing (MT) — exploiting known relationships between inputs and outputs to generate test cases without explicit oracles
  • Experience-Based Testing of AI-Based Systems — applying exploratory and domain expertise-driven approaches
  • Selecting Test Techniques for AI-Based Systems — matching techniques to system type, risk profile, and testing objective
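
Metamorphic testing, listed above, can be sketched without a full oracle: instead of knowing the correct output, the tester checks a relation that must hold between source and follow-up test cases. The spam scorer below is a hypothetical system under test; the metamorphic relation is that a bag-of-words score must be invariant under word reordering.

```python
import random

def spam_score(text):
    """Toy bag-of-words scorer standing in for a system under test."""
    spam_words = {"free", "winner", "prize"}
    words = text.lower().split()
    return sum(w in spam_words for w in words) / max(len(words), 1)

def check_order_invariance(text, trials=10, seed=0):
    """Metamorphic relation: shuffling word order must not change the score."""
    baseline = spam_score(text)
    rng = random.Random(seed)
    for _ in range(trials):
        words = text.split()
        rng.shuffle(words)           # follow-up input derived from the source input
        if abs(spam_score(" ".join(words)) - baseline) > 1e-9:
            return False             # relation violated: a defect has been exposed
    return True

assert check_order_invariance("You are a winner claim your free prize now")
```

The technique's appeal for AI-based systems is exactly this: no exact expected output is needed, only a property that any correct output pair must satisfy.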


Domain 10 - Testing Environments for AI-Based Systems

  • Test Environments for AI-Based Systems — infrastructure requirements for hardware, data pipelines, and model dependencies
  • Virtual Test Environments for Testing AI-Based Systems — simulation and emulation strategies for scalable AI testing


Domain 11 - Using AI for Testing

  • AI Technologies for Testing — overview of AI tools and techniques applicable to software testing workflows
  • Using AI to Analyze Defect Reports — automated classification, clustering, and prioritization of defect information
  • Using AI for Test Case Generation — leveraging AI to produce test cases from requirements, models, or historical data
  • Using AI for the Optimization of Regression Test Suites — AI-driven test selection and prioritization to reduce execution time
  • Using AI for Defect Prediction — applying predictive models to identify high-risk areas before testing begins
  • Using AI for Testing User Interfaces — visual AI techniques for automated GUI validation and anomaly detection
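
Regression-suite optimization of the kind described above often reduces to ranking tests by signals such as historical failure rate and execution cost. A minimal sketch with invented test names and history data:

```python
def prioritize(tests, history):
    """Order tests by historical failure rate (descending), then runtime (ascending).

    `tests` is a list of (name, runtime_seconds); `history` maps a test name
    to (failures, runs). All names and figures here are illustrative.
    """
    def key(test):
        name, runtime = test
        failures, runs = history.get(name, (0, 1))   # unseen tests rank last
        return (-(failures / runs), runtime)
    return [name for name, _ in sorted(tests, key=key)]

tests = [("test_login", 2.0), ("test_search", 5.0), ("test_checkout", 1.0)]
history = {"test_login": (1, 10), "test_search": (4, 10), "test_checkout": (1, 10)}
print(prioritize(tests, history))   # ['test_search', 'test_checkout', 'test_login']
```

ML-based approaches extend this idea by predicting failure probability from richer features (changed files, code ownership, test age) rather than a raw historical rate, but the prioritization step remains the same.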
