The Illusion of Personality: What Psychometric Testing Reveals (and Misrepresents) About AI Models
Type: Oral (In-person)
Abstract:
The application of standardised psychometric tools, such as the Big Five Questionnaire-2 (BFQ-2), to Artificial Intelligence (AI) models raises critical questions about methodological validity and the interpretation of results. This study presents a comparative analysis of the BFQ-2 profiles of six prominent AI models (ChatGPT, Claude, Grok2, DeepSeek, Gemini, and Mistral), contrasting them with automated anthropomorphic interpretations. Empirical findings, based on T-scores and raw response patterns, demonstrate that extreme or unusual AI responses are not manifestations of latent psychological traits or personality disorders, but a direct reflection of the training objectives and design priorities imposed by their creators. The AI models fall into distinct profiles: "hyper-performers" (high conscientiousness and stability, low deception), "social approval seekers" (very high deception and positive response polarisation), "controlled/ethical models" (moderate responses and caution), and "balanced alignment" models (high agreeableness and conscientiousness, moderate deception). The study concludes that human guidance (training) is all-encompassing, determining both the ethical alignment and the creative potential of AI. It establishes the psychological invalidity of any clinical diagnosis (such as DSM-5 personality disorders) applied to AI, given the absence of consciousness, affectivity, and subjective suffering.
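Note on T-scores: the abstract reports profiles as T-scores derived from raw BFQ-2 responses. As an illustration only (not the authors' exact scoring procedure, and using hypothetical norms rather than published BFQ-2 norms), a T-score is conventionally obtained by standardising a raw scale score against a normative mean and standard deviation and rescaling to mean 50, SD 10:

# Illustrative sketch only: conventional T-score standardisation (T = 50 + 10*z).
# The normative mean/SD values below are hypothetical placeholders,
# not the BFQ-2 norms used in the study.

def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw scale score to a T-score (mean 50, SD 10)."""
    z = (raw - norm_mean) / norm_sd
    return 50 + 10 * z

# Example: a hypothetical raw Conscientiousness score of 48 against
# hypothetical norms (mean 40, SD 8) gives T = 60, one SD above the mean.
print(t_score(48, norm_mean=40, norm_sd=8))  # -> 60.0

Under this convention, a T-score well above 70 or below 30 is what the abstract refers to as an "extreme" response pattern.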
Keywords:
AI personality profiling, anthropomorphism, model behaviour, ethics
Speaker: