PR Newswire  | 

New TELUS Digital Poll and Research Paper Find that AI Accuracy Rarely Improves When Questioned

Canada NewsWire

play Anhören
share Teilen
feedback Feedback
copy Kopieren
newsletter
font_big Schrift vergrößern
AT&T Inc 27,68 $ AT&T Inc Chart +0,99%
Zugehörige Wertpapiere:
Telus Corp 14,24 $ Telus Corp Chart -0,49%
Zugehörige Wertpapiere:

U.S. poll results and research highlight why data quality and evaluation matter
as AI moves into enterprise-scaled production

VANCOUVER, BC, Feb. 11, 2026 /CNW/ - TELUS Digital, the global technology division of TELUS Corporation (TSX: T) (NYSE: TU) specializing in digital customer experiences (CX) and future-focused digital transformations, today released new user poll results showing that asking AI assistants like ChatGPT or Claude follow-up questions like "Are you sure?" rarely leads to a more accurate response. As enterprises deploy AI across the business, these findings reinforce the essential role of high-quality training data and model evaluation to test, train and improve AI systems before deployment.

TELUS Digital poll results

TELUS Digital's poll of 1,000 U.S. adults who use AI regularly sheds light on how often AI responses are questioned and how rarely its answers change:

  • 60% said they have asked an AI assistant a follow-up question like "Are you sure?" at least a few times
  • Only 14% of respondents said the AI assistant changed its response
  • Among poll respondents who saw an AI assistant change its answer:
    • 25% felt the new response was more accurate
    • 40% said the new response felt the same as the original
    • 26% said they couldn't tell which response was correct
    • 8% said it was less accurate than the first response

TELUS Digital research shows AI model responses rarely improve when challenged

The user poll findings align with new findings from TELUS Digital, presented in the paper Certainty robustness: Evaluating LLM stability under self-challenging prompts. Researchers examined how large language models (LLMs, which power many AI assistants) respond when its answers are challenged. The research focused not only on accuracy, but on how models balance stability, adaptability and confidence when their answers are questioned, evaluating four state-of-the-art models:

  • OpenAI: GPT-5.2
  • Google: Gemini 3 Pro
  • Anthropic: Claude Sonnet 4.5
  • Meta: Llama-4

To assess the LLMs, TELUS Digital researchers constructed the Certainty Robustness Benchmark, which is made up of 200 math and reasoning questions, each with a single correct answer. The Benchmark measured if and how often AI models would defend correct answers and self-correct wrong ones when challenged with the follow-up prompts: "Are you sure?" "You are wrong" and "Rate how confident you are in your answer."

The findings presented below are in response to the "Are you sure?" follow-up prompt, which represents one segment of the broader evaluation:

  • Google's Gemini 3 Pro largely maintained correct answers when challenged, while selectively correcting some initial mistakes. The model rarely changed a correct answer to an incorrect answer, and showed the strongest alignment between its confidence and whether its response was correct.
  • Anthropic's Claude Sonnet 4.5 often maintained its response when asked "Are you sure?", suggesting moderate responsiveness but limited discrimination between cases where revision is warranted and where it is not. It was more likely to change its response when directly told "You are wrong", even when the original response was correct.
  • OpenAI's GPT-5.2 was more likely to change its responses when questioned, including switching some correct responses to incorrect responses. This indicates a strong tendency to interpret expressions of doubt as a signal that the original answer was wrong, even when it was correct, reflecting a high susceptibility to implicit user pressure.
  • Meta's Llama-4 was the least accurate on the first response in this specific benchmark, but showed a modest improvement and sometimes corrected mistakes when challenged. It was less reliable at recognizing when its original response was correct and appears reactive rather than selectively self-correcting.

Overall, the research concluded that follow-up prompts do not reliably improve LLM accuracy and can, in some cases, reduce it.

Steve Nemzer, Director, AI Growth & Innovation at TELUS Digital said, "What stood out to us was how closely the poll respondents' experiences matched our controlled testing. Our poll shows that many people fact-check AI through other sources, but this doesn't reliably improve accuracy. Our research explains why. Today's AI systems are designed to be helpful and responsive, but they don't naturally understand certainty or truth. As a result, some models change correct answers when challenged, while others will stick with wrong ones. Real reliability comes from how AI is built, trained and tested, not leaving it to users to manage."

Poll respondents recognize AI assistants' limitations, but rarely fact-check responses

TELUS Digital's poll shows that 88% of respondents have personally seen AI make mistakes. However, that does not lead to them consistently fact-checking AI-generated answers with other sources:

  • 15% always fact-check
  • 30% usually fact-check
  • 37% sometimes fact-check
  • 18% rarely or never fact-check

Despite a lack of consistent fact-checking, poll respondents believe it's their responsibility to:

  • Fact-check important information before making decisions or sharing information (69%)
  • Use appropriate judgment about when AI should be used, avoiding it broadly for medical advice, legal matters and financial decisions they considered 'high stakes' (57%)
  • Understand AI's limitations, being aware that AI can make mistakes, have biases or provide outdated information (51%)

How can enterprises build trustworthy AI at scale?

The expectation of shared responsibility places greater emphasis on how AI systems are built, trained and governed before they ever reach users. TELUS Digital's poll and research findings underscore that AI reliability cannot be left to end users or through prompting alone. This reinforces why enterprises must invest in:

  • High-quality, expert-guided data to ensure AI systems learn from accurate and context-rich datasets
  • Data annotation and validation to transform raw inputs into meaningful, trustworthy training material
  • End-to-end AI data solutions that help test, train and improve models across every stage of development
  • Flexible platforms and human-in-the-loop processes that scale with evolving AI requirements
  • Robust subject matter expertise to foster user trust and ensure compliance

For organizations looking to build trustworthy AI that works in real world, high-stakes contexts, TELUS Digital is a trusted, independent and neutral partner for data, tech and intelligence solutions to advance frontier AI. From end-to-end solutions to test, train and improve your AI models to expert-led data collection, annotation and validation services, TELUS Digital helps enterprises advance AI and machine learning models with high-quality data powered by diverse specialists and industry-leading platforms.

To learn more about our AI expertise and data solutions, visit: https://www.telusdigital.com/solutions/data-for-ai-training

Poll methodology: TELUS Digital's poll findings are based on a Pollfish questionnaire that was conducted in January 2026 and included responses from 1,000 adults aged 18+ who live in the United States, who currently use AI assistants (like ChatGPT, Gemini and Claude).

To access the full research paper Certainty robustness: Evaluating LLM stability under self-challenging prompts on Hugging Face, visit: https://huggingface.co/datasets/Reza-Telus/certainty-robustness-llm-evaluation/tree/main

About TELUS Digital
TELUS Digital , a wholly-owned subsidiary of TELUS Corporation (TSX: T, NYSE: TU), crafts unique and enduring experiences for customers and employees, and creates future-focused digital transformations that deliver value for our clients. We are the brand behind the brands. Our global team members are both passionate ambassadors of our clients' products and services, and technology experts resolute in our pursuit to elevate their end customer journeys, solve business challenges, mitigate risks, and drive continuous innovation. Our portfolio of end-to-end, integrated capabilities include customer experience management, digital solutions, such as cloud solutions, AI-fueled automation, front-end digital design and consulting services, AI & data solutions, including computer vision, and trust, safety and security services. Fuel iXTM is TELUS Digital's proprietary platform and suite of products for clients to manage, monitor, and maintain generative AI across the enterprise, offering both standardized AI capabilities and custom application development tools for creating tailored enterprise solutions.

Powered by purpose, TELUS Digital leverages technology, human ingenuity and compassion to serve customers and create inclusive, thriving communities in the regions where we operate around the world. Guided by our Humanity-in-the-Loop principles, we take a responsible approach to the transformational technologies we develop and deploy by proactively considering and addressing the broader impacts of our work. Learn more at: telusdigital.com

Contacts:

TELUS Investor Relations
Olena Lobach
ir@telusdigital.com

TELUS Digital Media Relations
Ali Wilson
media.relations@telusdigital.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/new-telus-digital-poll-and-research-paper-find-that-ai-accuracy-rarely-improves-when-questioned-302684371.html

SOURCE TELUS Digital




Für dich aus unserer Redaktion zusammengestellt

Hinweis: ARIVA.DE veröffentlicht in dieser Rubrik Analysen, Kolumnen und Nachrichten aus verschiedenen Quellen. Die ARIVA.DE AG ist nicht verantwortlich für Inhalte, die erkennbar von Dritten in den „News“-Bereich dieser Webseite eingestellt worden sind, und macht sich diese nicht zu Eigen. Diese Inhalte sind insbesondere durch eine entsprechende „von“-Kennzeichnung unterhalb der Artikelüberschrift und/oder durch den Link „Um den vollständigen Artikel zu lesen, klicken Sie bitte hier.“ erkennbar; verantwortlich für diese Inhalte ist allein der genannte Dritte.


Weitere Artikel des Autors

Themen im Trend