
OpenAI's Realtime Voice Models Move the Interface From Chat to Live Work
OpenAI's new realtime voice models bring reasoning, translation, and streaming transcription into production voice agents.
The next interface fight is not about who can make a chatbot sound pleasant. It is about who can keep a live spoken task moving while the software thinks, calls tools, and recovers from mistakes.
OpenAI announced three realtime audio models for developers on May 7, 2026: GPT-Realtime-2 for live voice reasoning and tool use, GPT-Realtime-Translate for multilingual voice experiences, and GPT-Realtime-Whisper for streaming transcription. The company said GPT-Realtime-2 expands context for agentic workflows, supports parallel tool calls, improves recovery behavior, and lets developers tune reasoning effort. OpenAI also published pricing and availability through the Realtime API.
Voice AI is becoming an operational interface. Customer support, travel changes, field service, healthcare intake, sales qualification, accessibility, and multilingual support all depend on low-latency interaction. The model cannot simply answer. It has to listen continuously, manage interruptions, disclose tool activity, preserve domain terms, and hand off safely when the task leaves its authority.
The architecture in one picture
```mermaid
graph TD
    A[Live speech] --> B[Realtime model]
    B --> C[Intent and context]
    C --> D[Tool calls]
    C --> E[Translation]
    C --> F[Streaming transcript]
    D --> G[Task completion]
    E --> H[Cross-language conversation]
    F --> I[Notes and records]
    G --> J[Human review when needed]
```
The operational scorecard
| Model | Primary role | Operational test |
|---|---|---|
| GPT-Realtime-2 | Voice reasoning and tool use | Can it complete tasks under interruption? |
| GPT-Realtime-Translate | Live multilingual voice | Can it preserve meaning at conversational speed? |
| GPT-Realtime-Whisper | Streaming speech to text | Can it support captions, notes, and downstream workflow? |
| Agents SDK guardrails | Developer safety layer | Can teams enforce domain-specific policy? |
Voice is harder than chat
A chat interface gives users time to read, edit, scroll, and retry. Voice removes much of that safety margin. The user may be driving, walking, helping a customer, or speaking in a second language. Latency becomes part of trust. Silence feels like failure. A wrong answer can be harder to inspect. That is why realtime voice models need conversational pacing, not only speech synthesis.
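One concrete consequence of that missing safety margin is barge-in: the user starts speaking while the agent is mid-response, and the agent must cancel its own speech and re-plan rather than talk over the user. A minimal sketch of that turn-taking logic, with all names illustrative rather than drawn from any real SDK:

```python
# Minimal sketch of barge-in handling for a voice agent: user audio that
# arrives while the agent is speaking cancels the in-flight response and
# triggers a re-plan from the new utterance. Illustrative names throughout.
from dataclasses import dataclass, field


@dataclass
class TurnManager:
    """Tracks whether the agent is speaking and handles interruptions."""
    speaking: bool = False
    events: list = field(default_factory=list)

    def start_response(self, text: str) -> None:
        self.speaking = True
        self.events.append(("agent_speaks", text))

    def on_user_audio(self, transcript: str) -> str:
        # Barge-in: cancel the current response instead of talking over
        # the user, then hand the new intent back to the planner.
        if self.speaking:
            self.speaking = False
            self.events.append(("cancelled", transcript))
            return "replan"
        self.events.append(("user_turn", transcript))
        return "respond"


tm = TurnManager()
tm.start_response("Your flight leaves at...")
action = tm.on_user_audio("wait, I meant Tuesday")
print(action)  # -> replan
```

The point of the sketch is the state transition, not the audio plumbing: a production agent would hang this logic off voice-activity detection and cancel the TTS stream mid-buffer.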
For this story, the practical reading is specific: voice AI is becoming an operational interface, for the reasons above. That reading matters because executives are trying to distinguish a durable operating shift from a short news cycle. The headline creates attention, but the deployment path decides value.
The strongest organizations will avoid treating the announcement as a mandate. They will identify the exact workflow affected, define what data enters the system, decide which tools the AI can call, and set a review standard before the pilot expands. That discipline is not bureaucracy. It is what lets teams move quickly without losing the ability to explain the result.
There is also a talent question. AI does not remove the need for expert operators. It changes where their time goes. Analysts, engineers, support leads, security reviewers, and compliance teams spend less time on repetitive drafting or search and more time on judgment, exception handling, measurement, and system improvement. Teams that ignore that shift will either over-automate or underuse the technology.
The economic question is equally direct. A capability is valuable only when it changes a constraint. The constraint might be response time, remediation backlog, language coverage, compute availability, compliance evidence, or policy uncertainty. If a deployment does not name the constraint, it will be difficult to defend later. If it does name the constraint, the team can measure before and after with less room for vague success claims.
The important feature is tool transparency
OpenAI describes preambles and audible tool transparency, where the agent can signal that it is checking a calendar or looking something up. That sounds small, but it solves a real interaction problem. People tolerate short delays when they understand the reason. Enterprise users also need to know when an AI system is consulting a tool, using customer data, or waiting on an external system.
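The disclosure pattern is simple to enforce in code: speak a preamble, write an audit record, and only then run the tool. A hedged sketch, where `speak`, the tool registry, and the preamble table are all illustrative stand-ins rather than any real API:

```python
# Sketch of audible tool transparency: before each tool call the agent emits
# a short spoken preamble so the user understands the pause, and the call is
# logged for compliance review. All names here are illustrative stand-ins.
def speak(line: str, spoken: list) -> None:
    spoken.append(line)  # stand-in for streaming TTS output


def call_tool_with_preamble(name, args, tools, preambles, spoken, audit_log):
    # 1. Disclose the tool use in the conversation itself.
    speak(preambles.get(name, f"One moment while I use {name}."), spoken)
    # 2. Record it for compliance review.
    audit_log.append({"tool": name, "args": args})
    # 3. Only then run the tool.
    return tools[name](**args)


tools = {"check_calendar": lambda day: f"{day}: 2 open slots"}
preambles = {"check_calendar": "Let me check the calendar."}
spoken, audit_log = [], []
result = call_tool_with_preamble(
    "check_calendar", {"day": "Tuesday"}, tools, preambles, spoken, audit_log
)
```

Ordering is the design choice: disclosure and logging happen before execution, so even a tool call that fails or times out leaves both a spoken signal and an audit record.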
Why translation changes the market
Realtime translation turns voice AI from a domestic support feature into a global operations layer. A multilingual support center can let customers speak naturally while agents or automated systems receive live translated context. Education, creator platforms, travel, medical intake, and cross-border sales all become easier to serve. The hard part will be preserving terminology, privacy, consent, and escalation paths when conversations cross languages.
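One common mitigation for the terminology problem is to mask protected terms before the translation hop and restore them afterwards, so a product name or case number is never "translated" away. A minimal sketch under that assumption, where `machine_translate` is a placeholder for a real translation call:

```python
# Sketch of terminology preservation across a translation hop: glossary terms
# are swapped for placeholder tokens before translation and restored after.
# `machine_translate` is a stand-in; here it just uppercases the text.
GLOSSARY = {"Anlagenummer", "RotorMax 3000"}  # must survive verbatim


def protect_terms(text: str):
    mapping = {}
    # Replace longer terms first so one term cannot clobber another.
    for i, term in enumerate(sorted(GLOSSARY, key=len, reverse=True)):
        token = f"__TERM{i}__"
        if term in text:
            text = text.replace(term, token)
            mapping[token] = term
    return text, mapping


def restore_terms(text: str, mapping: dict) -> str:
    for token, term in mapping.items():
        text = text.replace(token, term)
    return text


def machine_translate(text: str) -> str:
    return text.upper()  # stand-in for the actual translation model


src = "Bitte die Anlagenummer für den RotorMax 3000 nennen."
masked, mapping = protect_terms(src)
out = restore_terms(machine_translate(masked), mapping)
```

In a live voice pipeline the same mask/restore pair would wrap each translated segment, with the glossary maintained per customer or per domain.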
Where transcription becomes infrastructure
Streaming transcription is not only captions. It is the input layer for summaries, compliance records, CRM updates, coaching, meeting notes, and search. If transcription happens while the conversation is still live, software can suggest next actions before the call ends. That changes the economics of support and sales because follow-up work can be compressed into the conversation itself.
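The "suggest next actions before the call ends" idea reduces to watching the live transcript as deltas arrive and firing rules mid-stream. A hedged sketch, with an illustrative delta format and rule table:

```python
# Sketch of streaming transcription as an input layer: transcript deltas
# arrive while the call is live, and simple phrase rules surface suggested
# next actions before the call ends. Delta format and rules are illustrative.
def process_stream(deltas, rules):
    transcript, suggestions = [], []
    for delta in deltas:  # each delta is a partial transcript chunk
        transcript.append(delta)
        text = " ".join(transcript).lower()
        for phrase, action in rules.items():
            if phrase in text and action not in suggestions:
                suggestions.append(action)  # surfaced mid-call, not after
    return " ".join(transcript), suggestions


rules = {
    "cancel my plan": "open_retention_flow",
    "invoice": "attach_latest_invoice",
}
final, actions = process_stream(
    ["Hi, I need to", "cancel my plan", "and get a copy of my invoice"], rules
)
```

A production system would swap the phrase rules for a classifier over the rolling transcript, but the compression of follow-up work into the call itself works the same way.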
What buyers should measure
The useful metrics are task completion, fallback rate, latency, interruption recovery, terminology retention, handoff quality, and customer satisfaction after a failure. A voice model that sounds natural but cannot recover from changed intent will create frustration. A voice model that can call tools but hides what it is doing will create compliance concerns. Production voice agents need both conversational grace and operational accountability.
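Each of those metrics is just a ratio over call records, which is what makes them usable as a before/after baseline. A minimal sketch with an illustrative record schema:

```python
# Sketch of an evaluation pass over call records, computing three of the
# operational metrics named above. The record schema is illustrative; the
# point is that each metric is a ratio you can baseline and re-measure.
calls = [
    {"completed": True,  "fell_back": False, "interruptions": 2, "recovered": 2},
    {"completed": False, "fell_back": True,  "interruptions": 1, "recovered": 0},
    {"completed": True,  "fell_back": False, "interruptions": 0, "recovered": 0},
    {"completed": True,  "fell_back": False, "interruptions": 3, "recovered": 3},
]


def scorecard(calls):
    n = len(calls)
    total_int = sum(c["interruptions"] for c in calls)
    return {
        "task_completion": sum(c["completed"] for c in calls) / n,
        "fallback_rate": sum(c["fell_back"] for c in calls) / n,
        # Share of interruptions the agent recovered from without a restart.
        "interruption_recovery": (
            sum(c["recovered"] for c in calls) / total_int if total_int else 1.0
        ),
    }


metrics = scorecard(calls)
```

Terminology retention and handoff quality need labeled samples rather than counters, but they slot into the same scorecard shape.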
The next competitive battleground
The voice market will not be won by model quality alone. It will be won by the vendor that packages model latency, pricing, guardrails, telephony integration, agent frameworks, observability, and data residency into a deployable system. OpenAI's release points in that direction. It gives developers raw capability, but the real contest will be which platforms help companies turn voice into a reliable work surface.
The operating question
The operational question for buyers is not whether the announcement is impressive. It is whether the capability can be connected to a workflow with a named owner, a measurable baseline, a review path, and a failure procedure. AI programs fail when they stop at access. They work when a team can describe what changed, what evidence was collected, which humans remained accountable, and what happens when the system is wrong.
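That readiness test can be made mechanical: a deployment passes only when every required element is named. A sketch with illustrative field names:

```python
# Sketch of the readiness test above as a checklist: a deployment "passes"
# only when owner, baseline, review path, and failure procedure are all
# named. Field names are illustrative, not a standard schema.
REQUIRED = ("owner", "baseline_metric", "review_path", "failure_procedure")


def deployment_gaps(plan: dict) -> list:
    """Return the required fields this plan leaves empty or missing."""
    return [f for f in REQUIRED if not plan.get(f)]


plan = {
    "owner": "support-ops",
    "baseline_metric": "median handle time",
    "review_path": "weekly QA sample",
    "failure_procedure": "",  # not yet defined -> flagged as a gap
}
gaps = deployment_gaps(plan)
```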
The procurement reality
Procurement teams are now asking harder questions because the first wave of generative AI spending created mixed results. Usage grew quickly, but measurable return did not always follow. The next round of budgets will favor systems that reduce cycle time, error rates, rework, backlog, support cost, or compliance overhead. A vendor story that cannot connect capability to those metrics will be treated as an experiment rather than a platform.
The architecture lesson
Most successful deployments will use layered architecture. The model handles reasoning and language. The workflow layer handles permissions, tool access, state, and retries. The policy layer handles what the system is allowed to do. The observability layer records inputs, outputs, tool calls, and decisions. The human layer reviews exceptions and owns judgment. Removing any layer makes the system faster in a demo and weaker in production.
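The layering can be seen in a few lines of control flow: policy decides what is allowed, observability records everything, and anything blocked or failed falls through to a human queue. A hedged sketch in which every layer interface is illustrative:

```python
# Sketch of the layered split described above: the policy layer gates tool
# calls, the observability layer records every attempt, and disallowed or
# failed actions land in a human review queue. Interfaces are illustrative.
ALLOWED_TOOLS = {"lookup_order", "send_receipt"}  # policy layer


def run_action(tool, args, tools, audit, human_queue):
    record = {"tool": tool, "args": args}
    audit.append(record)                 # observability: log every attempt
    if tool not in ALLOWED_TOOLS:        # policy check before execution
        record["outcome"] = "blocked"
        human_queue.append(record)       # human layer owns exceptions
        return None
    try:
        record["outcome"] = "ok"
        return tools[tool](**args)       # workflow layer executes the call
    except Exception:
        record["outcome"] = "error"
        human_queue.append(record)
        return None


tools = {"lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"}}
audit, human_queue = [], []
ok = run_action("lookup_order", {"order_id": "A17"}, tools, audit, human_queue)
blocked = run_action("issue_refund", {"order_id": "A17"}, tools, audit, human_queue)
```

Note what removing a layer would do here: dropping the policy set makes `issue_refund` executable, and dropping the audit list makes the blocked call invisible, which is exactly the demo-fast, production-weak trade described above.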
The market implication
The market is shifting from model access to system ownership. A buyer can already reach powerful models through several providers. What remains scarce is a reliable operating model for using those models inside regulated, high-value, or failure-sensitive work. That is why distribution, governance, support, integration, and evidence are becoming as important as raw benchmark gains.
The competitive response
Competitors will respond in predictable ways. Large platforms will bundle the capability into existing suites. Specialist vendors will argue that domain-specific evaluation and workflow depth beat general models. Cloud providers will package infrastructure and management controls. Consulting firms will turn the story into transformation programs. Buyers should expect rapid feature imitation and slower proof of durable value.
The implementation trap
The common implementation trap is choosing the most visible workflow instead of the most measurable one. Executive attention gravitates toward dramatic examples, but reliable gains often start in narrower work: triage, routing, summarization with citations, draft generation with review, test creation, document comparison, alert enrichment, and support follow-up. Those workflows have clear inputs and outputs, which makes evaluation possible.
The governance burden
Every useful AI system creates a governance burden because it changes who knows what, who can do what, and who is responsible for the result. The burden is manageable when teams define authority clearly. It becomes dangerous when a model borrows human credentials, touches sensitive data without classification, or creates records that no one reviews. Governance should be built into the workflow rather than bolted on after adoption spreads.
The next six months
The next six months will separate announcement value from production value. Watch customer evidence, not only vendor claims. Watch whether teams expand usage after the first pilot. Watch whether legal and security teams become blockers or partners. Watch whether the system survives messy exceptions, not only scripted demos. Durable adoption will look less like magic and more like better operating discipline.
The source trail
This article is based on public reporting and primary material available on May 12, 2026. Vendor claims are treated as claims unless they have been independently verified in production by customers, auditors, regulators, or public technical evidence.
- OpenAI product announcement: https://openai.com/index/advancing-voice-intelligence-with-new-models-in-the-api/
- OpenAI product releases page: https://openai.com/news/product-releases/
- Times of India coverage: https://timesofindia.indiatimes.com/technology/tech-news/openai-launches-gpt-realtime-voice-suite-for-developers/articleshow/130946298.cms
The careful reading matters because several of these stories involve reported deals, phased rollouts, forward-looking product claims, or government policy processes. Those categories can change as contracts are signed, products reach users, and evidence becomes public.
Analysis by Sudeep Devkota, Editorial Analyst at ShShell Research. Published May 12, 2026.