
Human-AI Comparative Evaluation of Test Case Generation in Scenario-Oriented Software Testing

Speakers: Rawan Habarneh

Track: Track 5: Emerging Trends of AI/ML


Abstract

This research evaluates the effectiveness of AI-generated test cases (using GPT-4) against test cases constructed with conventional manual approaches in scenario-driven software testing. Manual test cases were developed by applying established black-box testing methods, while GPT-4 generated test cases through structured prompts. Three scenarios of increasing difficulty (easy, moderate, and complex) were used to conduct the evaluation under equivalent conditions. The study compared the two approaches on defect detection capability, test coverage, execution efficiency, and scenario relevance. The results indicate that AI-generated test cases provide better coverage, are faster to produce, and more effectively detect edge-case faults, particularly in the complex scenario. Manual testing was found to be stronger in contextual reasoning and in safety-critical interpretation. Overall, this research concludes that AI-generated testing complements manual testing methods rather than replacing them. The results support a hybrid testing approach for modern software testing and quality assurance.
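The abstract states that GPT-4 produced test cases through structured prompts but does not publish the prompts themselves. The sketch below illustrates what such a prompt template might look like; the field names, wording, and example scenario are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch of a structured prompt for scenario-driven test-case
# generation. The template fields (scenario, difficulty, requested output
# format) are assumptions, not the prompts used in the study.

def build_test_case_prompt(scenario: str, difficulty: str) -> str:
    """Compose a structured prompt asking an LLM for scenario-driven test cases."""
    return (
        "You are a software test engineer.\n"
        f"Scenario ({difficulty}): {scenario}\n"
        "Generate test cases covering normal flows, boundary values, and edge cases.\n"
        "For each test case, provide: ID, preconditions, steps, test data, "
        "and expected result."
    )

prompt = build_test_case_prompt(
    scenario="A user logs in with email and password; the account locks after 3 failures.",
    difficulty="moderate",
)
print(prompt)
```

In a study like this, the resulting string would be sent to the GPT-4 API and the returned test cases compared against the manually written ones on the four criteria the abstract lists.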

Speakers

Rawan Habarneh
Student
Zarqa University

Details

Type
Online
Model
OFFLINE
Language
EN
Timezone
UTC+8