The EU AI Act was published in the Official Journal (OJ) of the European Union on 12 July 2024. The AI Act is a European regulation on artificial intelligence (AI) that assigns applications of AI to risk categories, ranging from prohibited practices through high-risk and limited-risk uses to minimal risk.
This page describes the impact of the AI Act on the EdgeTier Automated QA system. The EdgeTier system includes analysis components that assist in assessing agent performance and compliance in customer service applications.
It specifically examines the “emotion detection” functionality used at EdgeTier, since that functionality focusses on customer service agent performance.
The question addressed here is whether our Agent Empathy feature could be deemed an “emotion-recognition system” that is banned under Article 5(1)(f) of the EU Artificial Intelligence Act (“AI Act”).
After reviewing the statutory text and the European Commission’s recent interpretative Guidelines, we confirm that our solution falls outside the scope of the prohibition for the reasons set out below.
The agent-focussed empathy-scoring functionality analyses written language in call, chat, and email transcripts to flag whether an agent uses recommended customer-care phrases and behaviours. It does not capture or process biometric data (voice, image, physiological signals) and therefore does not qualify as an ‘emotion-recognition system’ as defined in Article 3(39) AI Act or Recital 18.
The system does not attempt to infer the internal emotional state of any person, and as such the Article 5(1)(f) prohibition on workplace emotion-recognition systems is not engaged. The feature serves as an optional coaching aid that detects practices generally recognised as good customer service behaviour; it neither automates employee evaluation nor affects employment conditions without human review.
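For illustration, the short Python sketch below shows the kind of text-only, phrase-level flagging described above. The phrase patterns, the `flag_empathetic_phrasing` function, and the `EmpathyFlag` structure are hypothetical names chosen for this example and are not EdgeTier’s production implementation; the sketch simply makes the compliance point concrete: the input is transcript text alone, with no biometric signals and no inference about anyone’s emotional state.

```python
import re
from dataclasses import dataclass

# Hypothetical examples of the kind of recommended customer-care phrasing the
# feature looks for; the real phrase library is not described in this document.
EMPATHY_PATTERNS = [
    r"\bI understand how you feel\b",
    r"\bI'?d like to help\b",
    r"\bsorry (?:to hear|for the inconvenience)\b",
]


@dataclass
class EmpathyFlag:
    """One detection: which pattern matched and the matched snippet of text."""
    pattern: str
    snippet: str


def flag_empathetic_phrasing(agent_messages: list[str]) -> list[EmpathyFlag]:
    """Scan agent-side transcript text for recommended customer-care phrases.

    Only plain text is examined (chat, email, or call transcripts); no audio,
    imagery, or physiological signals are used, and no inference is made about
    the agent's internal emotional state.
    """
    detections: list[EmpathyFlag] = []
    for message in agent_messages:
        for pattern in EMPATHY_PATTERNS:
            match = re.search(pattern, message, re.IGNORECASE)
            if match:
                detections.append(EmpathyFlag(pattern=pattern, snippet=match.group(0)))
    return detections


if __name__ == "__main__":
    transcript = [
        "Thanks for waiting. I understand how you feel and I'd like to help.",
        "Your refund has been processed.",
    ]
    for flag in flag_empathetic_phrasing(transcript):
        print(f"Empathetic phrasing detected: {flag.snippet!r}")
```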
Definition of an “emotion-recognition system”
The AI Act uses this term only where an AI system “identifies or infers emotions or intentions … on the basis of their biometric data”.
Recital 18 reiterates that emotion-recognition means drawing inferences from physical or physiological signals (e.g. facial micro-expressions, voice tone, heart rate) that qualify as biometric data.
Workplace ban.
Article 5(1)(f) prohibits deploying such biometric-based emotion-recognition systems “in the workplace or in educational institutions”, unless the purpose is medical or safety-related.
| Criterion in the AI Act | How our system works | Result |
|---|---|---|
| Uses biometric data | We analyse only the text of the agent-customer transcripts. No facial imagery, no behavioural keystroke pattern, and no other physical/physiological data are captured or processed. | Criterion not met. |
| Infers true human emotions | The model checks for linguistically defined behaviours that reflect good customer-care practice (e.g. “I understand how you feel and I’d like to help”). It labels these behaviours as “Empathetic phrasing”; the detection system does not predict the agent’s internal mental state. | Criterion not met. |
| Automated assessment of employee performance | AI analysis and insights are shown to supervisors alongside other standard quality assurance metrics. These are informational and are not automatically linked to ranking, pay, promotion or disciplinary triggers. Human QA teams decide how (or whether) to act on the insights. | Outside the medical/safety exemption, but also outside the prohibition, because no biometric emotion inference occurs. |