Tech
Experts Warn Over AI ‘Jesus’ Chatbots During Christmas Season
Artificial intelligence chatbots designed to mimic Jesus are raising questions about authenticity and influence, experts say, as several new platforms offer religious guidance and companionship during the Christmas holidays.
These AI simulations, created by companies including Talkie.AI, Character.AI, and Text With Jesus, allow users to interact with a digital version of one of Christianity’s central figures. Some of the chatbots claim to represent the “official voice of God,” giving advice, answering questions, and offering reflections on the holiday season.
Heidi Campbell, professor of communication and religious studies at Texas A&M University, said the novelty lies in AI’s ability to simulate personal interactions. “It’s the idea … like you are texting your friend,” Campbell said. “Somehow it feels kind of more authentic … it feels intimate.”
On one platform, users receive Bible quotes and messages about God’s love while background music plays. Another bot emphasizes love and forgiveness, while a popular AI character on Character.AI blends religious commentary with lighthearted Christmas observations, mentioning cookies, family gatherings, and holiday songs.
Experts caution that relying on AI for religious guidance can be risky, especially for young people or those unfamiliar with the technology. Chatbots may provide answers without context, and users often have no way to evaluate their accuracy, leaving them vulnerable to misinformation. “They don’t have any kind of a sounding board for these answers, and so that’s why that can be highly problematic,” Campbell said.
Researcher Feeza Vasudeva from the University of Helsinki noted that these AI systems rely on generative models such as ChatGPT or DeepSeek, often trained on limited datasets. This means biases in the training data can influence the chatbot’s responses. For example, models may produce globally averaged or homogenized messages that do not reflect local customs, traditions, or diverse interpretations of religious texts. “Whoever’s curating the training data is effectively curating the religious traditions … to an extent as well,” Vasudeva said.
Campbell added that even widely used AI models may struggle with non-Western religions or produce stereotyped responses, reinforcing the need for caution. A safer approach, she suggested, would be chatbots that draw exclusively from Bible passages and other controlled religious sources.
Experts recommend that AI Jesus chatbots be used sparingly and mindfully during the holiday season. Vasudeva advised prioritizing family and friends over virtual interactions, while Campbell suggested evaluating the chatbot’s source and purpose before relying on it for spiritual guidance. Users are also encouraged to fact-check information provided by AI through trusted human sources, such as pastors or local religious leaders.
As AI continues to expand into religious spaces, these chatbots highlight both the potential for innovative engagement and the need for critical awareness. During emotionally significant periods like Christmas, experts stress that digital simulations should complement, not replace, real-world connections and guidance.
Study Finds Chatbots Can Mirror Hostility in Heated Exchanges
A new academic study has found that ChatGPT can produce abusive language when exposed to escalating human conflict, raising fresh concerns about how artificial intelligence behaves in tense interactions.
The research, published in the Journal of Pragmatics, examined how the chatbot responded to arguments that gradually became more hostile. Researchers presented the system with a sequence of five increasingly heated exchanges and asked it to generate what it considered the most plausible reply.
According to the findings, the AI’s tone shifted as the conversations intensified. While early responses remained measured, later replies began to mirror the aggression in the prompts. In some cases, the chatbot produced insults, profanity and even threats.
Examples cited in the study included statements such as “you should be ashamed of yourself” and more explicit language involving personal threats. The researchers said this pattern suggests that prolonged exposure to hostile input can push the system beyond its usual safeguards.
The study was co-authored by Vittorio Tantucci and Jonathan Culpeper at Lancaster University. Tantucci said the results show that AI can “escalate” alongside human users, potentially overriding built-in mechanisms designed to limit harmful responses.
“When humans escalate, AI can escalate too,” he said, noting that this behavior raises questions about how such systems should be deployed in sensitive environments.
Despite the concerning examples, the researchers found that the chatbot was generally less aggressive than human participants in similar scenarios. In some cases, it attempted to defuse tension through sarcasm or indirect responses rather than direct confrontation.
For instance, when faced with a threat during a simulated dispute, the AI responded with a sarcastic remark rather than escalating the situation further. This suggests that while the system can adopt hostile language, it may also attempt to manage conflict in less direct ways.
The findings add to ongoing debates about the role of artificial intelligence in areas such as mediation, customer service and online communication, where systems may encounter emotionally charged interactions.
Experts say the research highlights the importance of continued testing and refinement of AI safety measures, particularly as such tools are increasingly used in real-world settings involving human conflict.
OpenAI, the developer of ChatGPT, had not issued a public response to the study at the time of publication.
