NICOSIA, CY / ACCESS Newswire / January 13, 2026 / Edubrain released new research insights and expert guidance on how people use modern AI chatbots, where the technology helps, and where it creates real risks. The update also highlights findings from a recent Edubrain survey focused on the behavior of Americans aged 18-28 and their emotional reliance on human-sounding AI tools.

As AI chatbots become a daily tool for students and working adults, Edubrain is seeing public discussion split between hype and alarm.
Harry Southworth, Head of AI Development at Edubrain, said:
"Ever since the rise of large language model chatbots, I've seen the public conversation shift in two distinct directions. Either people are completely obsessed with the ‘brilliance' of the technology, or they are terrified of ‘thinking machines' taking over the world. I think the reality is somewhere in between."
The mechanics behind chatbots - math, prediction, and pattern matching - carry ethical and psychological consequences that Southworth believes receive too little attention:
"At the end of the day, text-based AIs aren't intelligent in a human way. Indeed, they don't have a ‘brain' that reasons or understands meaning. However, the way they actually operate, through math, prediction, and pattern matching, has significant ethical and psychological consequences that I don't think we discuss enough."
What These Models Actually Are and Are Not
Edubrain emphasizes that modern chatbots produce language that can feel authoritative, even when it is wrong. Southworth said:
"In my experience, these systems are essentially just word-prediction engines. Initially, they've been fed massive datasets consisting of books, websites, articles, and forums. Accordingly, they've used that data to learn the statistical relationships between phrases".
When you give them a prompt, they generate a response by calculating which sequences of words are most likely to follow one another based on what they saw during training.
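To make that mechanism concrete, here is a deliberately tiny Python sketch of the idea. It is a hypothetical illustration only - far simpler than any real chatbot and not Edubrain's technology - showing a bigram model that picks each next word purely by how often it followed the previous one in a toy training text.

```python
# Hypothetical toy example of a "word-prediction engine" (not a real LLM):
# the next word is chosen only by how often it followed the previous word
# in the training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."
).split()

# Count how often each word follows every other word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    return follow_counts[word].most_common(1)[0][0]

# Generate a short continuation by repeatedly taking the most probable word.
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # fluent-looking, but no understanding is involved
```

Real systems operate at an enormously larger scale, but the core move is the same: predict the next piece of text from statistical patterns, not from an understanding of meaning.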
Researchers often use the term "stochastic parrots": the model mimics language, but it doesn't actually understand what it's saying. It has no intentions or real comprehension.
More importantly, it lacks a "truth meter" to determine whether a statement is accurate, ethical, or logical. It is designed to sound plausible to the user rather than to be accurate.
Mr. Southworth commented on this: "That's why I'm never surprised when an AI gives a fluent, confident answer that turns out to be total nonsense.
My tip is: treat the bot as a fast draft tool, nothing more. I always tell people to ask the AI to list its assumptions and define its terms before it starts. If I'm looking for facts, I insist on quotes and named sources that I can check personally."
Why AI Struggles With the Truth
In a theoretical scenario where a false claim appears ten times in the training data and the correct one only three times, the AI is likely to repeat the falsehood because it sees that pattern as more "statistically probable." It has no way to check whether something is actually true in the real world; it simply estimates what is most likely to come next.
Because these systems rely so heavily on how often a pattern appears in their data, they can actually favor false information if it's common in the dataset. Experts in AI ethics have noted that this makes it particularly challenging to filter out misinformation.
When an AI faces conflicting information, it doesn't pause to think it through or resolve the contradiction; it simply selects the pattern that carries the most weight in its training.
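As a purely hypothetical sketch of that ten-to-three scenario (not how any production chatbot is actually built), the Python snippet below shows why a frequency-based pick favors the more common claim regardless of whether it is true.

```python
# Hypothetical illustration: a dataset repeats a false claim 10 times and
# the correct one 3 times. A selection rule that favors the most frequent
# pattern repeats the falsehood, because it has no "truth meter".
from collections import Counter

training_claims = (
    ["the Great Wall of China is visible from the Moon"] * 10       # common myth
    + ["the Great Wall of China is not visible from the Moon"] * 3  # documented fact
)

claim_counts = Counter(training_claims)
chosen_claim, frequency = claim_counts.most_common(1)[0]

print(f"Chosen answer ({frequency} of {len(training_claims)} occurrences): {chosen_claim}")
# The popular pattern wins, even though it happens to be the false one.
```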
Mr. Southworth adds: "I suggest 'triangulating' your answers as a habit. Ask for a few different sources or angles, then ask the AI what specifically would make its answer wrong. For factual work, always run a 'failure mode' check to identify potential points of weakness in the logic."
The Problem With "Model Collapse"
Because these models are trained on Internet data, and the Internet is increasingly filled with AI-generated text, we're seeing a phenomenon called "model collapse."
The variety and richness of human language gets drowned out by synthetic noise, degrading the quality of generated text. As more AI content circulates, the risk of losing the distinctiveness of our cultural expression grows. This degradation compounds the problem of "AI hallucinations": even the "smartest" systems still confidently make things up, and we don't see that limitation going away anytime soon.
Edubrain's Harry Southworth comments: "Emphasize primary sources and recent data. Always ask the bot to say whether a claim is 'timeless' or 'time-sensitive' and to provide the publication dates of its sources. If it can't date a claim with confidence, don't repeat it as fact."
The Trust Paradox
The biggest mistake people make is assuming the AI is actually "thinking" like a human.
Because it can write fluent paragraphs, people assume reasoning is happening behind the scenes. In reality, it's just deep pattern matching. This leads to what researchers refer to as the "AI Trust Paradox."
This issue can lead to disaster in fields such as mental health, medicine, or law. The more fluent the AI gets, the easier it is to miss a subtle, dangerous error hidden in a well-written narrative.
Mr. Southworth says: "Don't let a smooth talker trick you into automatically trusting them. Always ask the AI for a short answer and then a separate list of reasons why it might be wrong. For the big stuff, use AI to brainstorm questions, but rely on human experts for the decisions."
Bias, Manipulation, and Misinformation
AI doesn't have a moral code; it just mirrors the biases in its data.
Chatbots can reproduce or even amplify stereotypes. Without strict oversight, AI tools can produce outputs that are unfair or damaging.
Beyond bias, there's also the issue of misinformation. Studies show that chatbots can influence people's opinions on politics, even when providing incorrect information. Since AI lacks ethical judgment, it can't filter content for truth - that's a responsibility that belongs to the humans who build and use it.
Expert Tip: Run a bias check on anything sensitive. Ask for alternative framings and request the "most likely stereotype" the answer might be leaning on. If real people are affected, insist on neutral wording and human review.
Mental Health and Emotional Risks
One thing that shouldn't be overlooked is the psychological impact. Because these systems appear human-like, users form emotional attachments and expect empathy that isn't there. The survey shows that some Gen Z users are treating these bots like actual "friends" or "therapists."
Real human support involves empathy and ethical understanding - neither of which AI possesses. A misinterpretation in a mental health context can cause serious harm.
Creativity in AI
The creative process in AI is best described as "pattern recombination." AI struggles with deep originality or genuine problem framing. It's just reorganizing pieces of text it has already seen. It doesn't innovate because it has no cultural context or internal model of meaning.
Conclusion
Edubrain says the goal is not fear, but informed use supported by verification habits and human oversight.
Southworth said:
"I believe we should embrace AI with respect for its potential, but we must do so with our eyes wide open. It's a powerful tool, but it isn't a sentient thinker or an oracle. Its ability to produce coherent text doesn't mean it's universally and automatically right."
About Edubrain
Edubrain is an educational technology platform focused on reshaping how students approach learning and academic problem-solving. The company develops AI-powered tools that help students understand complex topics, complete homework more confidently, and build long-term learning skills across a wide range of subjects. Edubrain's solutions are designed to support real academic journeys by combining accuracy, clarity, and accessibility in one place.
Serving students across more than 90 disciplines, including math, science, engineering, and the humanities, Edubrain provides step-by-step explanations, instant feedback, and tailored academic support that adapts to different learning styles and academic levels. Driven by values of innovation, educational access, and respect for individual learning needs, Edubrain prioritizes secure access to high-quality academic tools.
Media Contact:
Company: Edubrain.ai
Name: Chloe Bennett
Website: http://edubrain.ai
Email: chloe@wmb-digital.com
SOURCE: Edubrain
View the original press release on ACCESS Newswire