In March 2026, one thing is clear: artificial voice has become nearly indistinguishable from human voice over the phone. The global call center AI market, estimated at $3.98 billion in 2025, is expected to reach $4.89 billion in 2026 according to Precedence Research, with a trajectory toward $30.69 billion by 2035 (a 22.66% CAGR). This explosive growth raises a fundamental question for call centers: how do you automate without deceiving customers?
The Rise of AI Voice Agents: Where Do We Stand?
AI voice agents crossed a decisive technological threshold in late 2025. End-to-end latency dropped below 300 milliseconds, matching human reaction times. Two technical approaches coexist:
Native Audio Processing
Models like GPT-4o Realtime (OpenAI) and Gemini 2.0 Flash (Google) process audio directly, bypassing the traditional STT → LLM → TTS chain (Speech-to-Text → language model → Text-to-Speech). They natively perceive tone, intonation, and pace, eliminating conversion overhead and drastically reducing latency.
Optimized Modular Stack
For companies preferring control over each component, the modular pipeline has accelerated significantly. Engines like Cartesia Sonic-3 achieve voice generation latency of 90 ms, combined with ultra-fast inference engines like Groq. This approach offers greater control over voice and conversational logic.
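The appeal of the modular stack comes down to simple latency arithmetic: the per-stage delays must sum to less than the ~300 ms human-reaction threshold mentioned above. A minimal sketch of that budget, where the ~90 ms TTS figure is the Cartesia Sonic-3 number cited here and every other stage figure is an illustrative assumption, not a vendor benchmark:

```python
# Hypothetical latency budget for a modular STT -> LLM -> TTS pipeline.
# Only the ~90 ms TTS figure comes from the text (Cartesia Sonic-3);
# the other stage values are illustrative assumptions.

def pipeline_latency_ms(stages: dict[str, float]) -> float:
    """Sum per-stage latencies to estimate end-to-end response time."""
    return sum(stages.values())

modular = {
    "stt_streaming": 90.0,    # assumed streaming speech-to-text
    "llm_inference": 70.0,    # assumed fast inference (Groq-class)
    "tts_generation": 90.0,   # Cartesia Sonic-3 figure from the text
    "network_overhead": 30.0, # assumed round-trip margin
}

total = pipeline_latency_ms(modular)
print(f"Estimated end-to-end: {total:.0f} ms")  # 280 ms
assert total < 300, "budget exceeds the ~300 ms human-reaction threshold"
```

The useful takeaway is not the exact numbers but the constraint: each component you add or slow down eats into a fixed conversational budget, which is why sub-100 ms engines at every stage matter.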
Emotionally Intelligent Voices
The latest generation of speech synthesis includes fine emotional control: sighs, laughter, expressive pauses, and tone adaptation based on the speaker's detected prosody. High-quality French voices now match English voice standards, making francophone deployments fully viable.
Concrete Use Cases in Call Centers
AI voice agent adoption in call centers is structured around several scenarios:
AI Receptionist
Operating 24/7, the voice agent handles incoming calls, answers frequent questions, books appointments, and qualifies requests before transferring to a human. Gartner has predicted that conversational AI could cut contact center agent labor costs by $80 billion by 2026.
Automated Outbound Lead Qualification
Voice agents can now conduct structured pre-qualification conversations: verifying interest, collecting key information, booking appointments. The time savings for sales teams are considerable, but the practice raises questions of transparency toward prospects.
CRM Integration and Personalization
Through protocols like the Model Context Protocol (MCP), voice agents access CRM data in real time (customer history, orders, tickets). The agent can personalize responses based on customer context without human intervention. For more on technical integration, see our API documentation.
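What this integration looks like in practice is a tool the voice agent can call mid-conversation. The sketch below mirrors the shape MCP tools take (a name, a description, and a JSON parameter schema), but `fetch_customer`, the CRM fields, and the phone-number keys are all hypothetical, not a real MCP SDK or CRM API:

```python
# Illustrative CRM lookup exposed as a tool to a voice agent.
# The schema follows the MCP-style tool shape (name, description,
# JSON parameters); the function and data fields are hypothetical.
import json

CRM = {  # stand-in for a real CRM backend
    "+33612345678": {"name": "Marie Dupont", "open_tickets": 1,
                     "last_order": "2026-02-14"},
}

def fetch_customer(phone: str) -> str:
    """Return caller context as JSON so the agent can personalize replies."""
    record = CRM.get(phone)
    if record is None:
        return json.dumps({"found": False})
    return json.dumps({"found": True, **record})

TOOL_SCHEMA = {
    "name": "fetch_customer",
    "description": "Look up caller context in the CRM by phone number",
    "parameters": {"type": "object",
                   "properties": {"phone": {"type": "string"}},
                   "required": ["phone"]},
}

print(fetch_customer("+33612345678"))
```

The agent receives the JSON result as tool output and can immediately reference the caller's open ticket or last order, with no human pulling up the record.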
The Flip Side: Vocal Deepfakes and Abuse Risks
The same technology that makes AI voice agents effective also fuels automated robocalls and vocal deepfake scams. France's CNIL defines deepfakes as audio, photo, or video content created or modified using AI techniques, whose realism makes them increasingly difficult to distinguish from authentic content.
Concrete Risks for Call Centers
- Voice identity theft: a fraudster can clone an advisor's or manager's voice to obtain confidential information.
- Fake customer scams: synthetic voices can simulate legitimate customers to extract personal data.
- Trust damage: if customers discover they're speaking to an AI without being informed, the trust relationship is permanently damaged.
According to CNIL, creating a montage using a person's image or voice without consent is punishable by one year imprisonment and a €15,000 fine (Article 226-8 of the French Penal Code).
Regulatory Framework: AI Act and Transparency Obligations
The European AI Act (Regulation 2024/1689) imposes specific obligations regarding deepfakes and AI systems interacting with humans:
Disclosure Requirement
Any AI system directly interacting with natural persons must be designed and developed so that the persons concerned are informed they are interacting with an AI system. Concretely, for a call center using a voice agent, this means:
- Clearly announcing at the start of the call that the interlocutor is an AI.
- Allowing transfer to a human at any time.
- Not deliberately attempting to deceive about the nature of the interlocutor.
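The three obligations above can be encoded directly in the agent's call-opening logic. A minimal sketch, where the disclosure wording and the escalation trigger phrases are illustrative placeholders to be adapted per deployment:

```python
# Sketch of an AI-Act-aware call opening. The disclosure text and the
# trigger phrases are illustrative placeholders, not legal wording.

DISCLOSURE = ("Hello, you are speaking with an automated AI assistant. "
              "Say 'agent' at any time to reach a human advisor.")

ESCALATION_TRIGGERS = {"agent", "human", "advisor", "representative"}

def open_call() -> str:
    """Every call begins with the AI disclosure (transparency obligation)."""
    return DISCLOSURE

def wants_human(utterance: str) -> bool:
    """Allow transfer to a human at any point in the conversation."""
    words = utterance.lower().split()
    return any(trigger in words for trigger in ESCALATION_TRIGGERS)

assert wants_human("please connect me to a human")
assert not wants_human("what are your opening hours")
```

The point is architectural: disclosure and human handoff should be hard-coded invariants of the call flow, not behaviors left to the language model's discretion.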
Deepfake Obligations
The AI Act requires that anyone generating synthetic audio content (deepfake) disclose that it was created or manipulated by AI. In the call center context, this reinforces transparency obligations around synthetic voice use.
Potential Penalties
Non-compliance can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher. Call centers must adapt their practices now.
How Call Centers Can Reconcile AI and Authenticity
Several best practices are emerging for responsible adoption:
1. Systematic Transparency
Inform the customer from the first second that they are speaking to an AI agent. Far from deterring customers, this honesty strengthens trust: studies show customers prefer knowing who they are talking to, even when it is a high-performing AI.
2. Seamless Human Escalation
Implement an immediate escalation mechanism to a human advisor, triggered either by the customer or automatically when the AI detects a complex situation or frustration.
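The escalation decision described above combines three signals: an explicit customer request, a detected frustration level, and repeated failure to resolve the intent. A sketch of that decision, assuming the platform exposes a per-turn frustration score in [0, 1] (e.g. from prosody analysis); the threshold and counter values are illustrative tuning choices:

```python
# Escalation decision sketch. Assumes the voice platform provides a
# frustration score per turn; threshold values are illustrative.
from dataclasses import dataclass

@dataclass
class TurnSignals:
    customer_asked_for_human: bool
    frustration_score: float   # assumed output of an emotion model, 0..1
    failed_intents: int        # consecutive turns the AI failed to resolve

def should_escalate(s: TurnSignals,
                    frustration_threshold: float = 0.7,
                    max_failed_intents: int = 2) -> bool:
    """Escalate on explicit request, detected frustration, or repeated failure."""
    return (s.customer_asked_for_human
            or s.frustration_score >= frustration_threshold
            or s.failed_intents >= max_failed_intents)

assert should_escalate(TurnSignals(True, 0.1, 0))    # explicit request
assert should_escalate(TurnSignals(False, 0.85, 0))  # frustration detected
assert not should_escalate(TurnSignals(False, 0.2, 1))
```

Keeping this as deterministic logic outside the model means the escalation guarantee holds even when the AI itself misjudges the conversation.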
3. Monitoring and Detection
Deploy vocal deepfake detection tools to protect call centers against impersonation attacks. Voice identity verification technologies are advancing alongside synthesis tools.
4. Governance and Training
Train teams on vocal deepfake-specific risks, establish an AI voice usage charter, and document processes for GDPR and AI Act compliance.
5. Regular Synthetic Voice Audits
Periodically verify that voices used by AI agents comply with individual rights (no unconsented cloning) and company quality standards.
What CIOs Need to Anticipate
For CIOs, integrating AI voice involves critical infrastructure choices:
- Network latency: real-time voice agents require network latency below 100 ms. The choice between cloud and on-premises hosting directly impacts conversational quality.
- Voice data security: voice recordings are biometric data under GDPR. Their storage and processing require enhanced protection measures.
- Interoperability: prioritize solutions compatible with open standards (SIP, WebRTC, MCP) to avoid vendor lock-in.
- Hidden costs: beyond per-minute AI call costs, factor in training, human supervision, regulatory compliance, and incident management costs.
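On the hidden-costs point, a back-of-envelope model helps make the comparison with pure per-minute pricing honest. Every figure below is an illustrative assumption; substitute your own vendor pricing and staffing numbers:

```python
# Back-of-envelope monthly cost model. All figures are illustrative
# assumptions, not vendor pricing.

def monthly_cost(minutes: int,
                 ai_rate_per_min: float,
                 supervision_hours: float, supervisor_hourly: float,
                 fixed_overheads: float) -> float:
    """AI usage + human supervision + compliance/training overheads."""
    return (minutes * ai_rate_per_min
            + supervision_hours * supervisor_hourly
            + fixed_overheads)

cost = monthly_cost(
    minutes=50_000,          # assumed monthly call volume
    ai_rate_per_min=0.08,    # assumed per-minute AI price
    supervision_hours=160,   # assumed human-in-the-loop coverage
    supervisor_hourly=25.0,
    fixed_overheads=2_000.0, # compliance, training, incident response
)
print(f"Estimated monthly cost: {cost:,.0f}")  # 10,000
```

In this illustrative scenario, supervision and overheads cost as much as the AI minutes themselves, which is precisely the trap the per-minute headline rate hides.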