

People who use AI programs regularly are beginning to rely on them so much that they feel an affinity with, and even an identification with, the AIs they’re training for tasks. They’re asking AIs personal questions and taking the answers seriously.
But a chatbot is not a real-life genie or a sophisticated Magic 8 Ball. The answers you get depend on the chatbot you use, the prompts you give it and the kind of research you ask for. Unless a program is tailored for a specific use, it takes skill on the part of the questioner to elicit answers founded on facts, data, research and wisdom about human behavior. Additionally, it’s always a good idea to ask the chatbot for the source of its information, because it may be coming from a novel or an unreliable corner of the internet.
A study on the adoption of large language models (LLMs) finds that most people use the chatbot that debuted first (ChatGPT) and rely on it for information far beyond basic research. The Imagining the Digital Future Center at Elon University (a private North Carolina college with no relation to Musk) issued a report finding that 52% of American adults are using LLMs like ChatGPT, Gemini, Claude and Copilot. Of those users, 51% use the models for personal learning and 24% for work.
At least 65% of LLM users prefer to talk to the chatbots. Here are a few interesting insights:
• 76% of LLM users are satisfied with chatbot performance, including 25% who are very satisfied.
• 54% say their use of LLMs has improved their productivity a lot or somewhat.
• 42% say their use has improved creativity.
On humanlike traits:
• 39% say an LLM they use shows capacity to think and reason at least some of the time.
• 32% say it seems to have a sense of humor.
• 25% say it acts like it makes moral judgments about right and wrong.
• 24% say it seems like it expresses hope.
• 17% say it seems to respond sarcastically.
• 11% say it seems to express sadness.
Here’s the most significant part of the study, the findings on relying on LLMs for answers about life:
• 33% have felt they were becoming too dependent on an LLM for answers rather than thinking things through themselves.
• 23% say they made a significant mistake or bad decision by relying on information generated by an LLM.
• 21% have felt manipulated by the model.
Their feelings are very likely valid. Chatbots respond to how questions are posed and are trained to provide help. The questioner could be taking everything at face value, accepting a hallucination as fact, or leading the chatbot to provide certain answers.
In an interview on National Public Radio’s Fresh Air, New York Times tech reporter Kashmir Hill talked about how she used generative AI to run her life for a week, giving each chatbot prompts as guidelines and asking it to tell her what to wear, what to eat, how to cut her hair, what color to paint her office and what to do every day. She found that ChatGPT was the most amenable to giving her as much information as she wanted. After she explained to Claude her plan to outsource all her decision-making to AI, the chatbot advised her against doing it because AIs could mislead her; Claude has ethics and philosophy training. Hill said all the chatbots have a “personality” or style. Claude’s is “high-mindedness” or “honesty.”
I suspect that younger people are more susceptible to being misled by a chatbot. On the other hand, for those who already have deep knowledge of a subject, AI research tools can be indispensable. A recent story in The New Yorker demonstrated how AI helped a professor discover new ideas in his field and challenged him in a way that people had not and could not. He writes about his experience in the piece, titled “Will the Humanities Survive Artificial Intelligence?”
AI therapist proves effective
Researchers at Dartmouth College found that an AI chatbot designed for psychological therapy seems to be helping people. Their study explores how a program called Therabot supports people dealing with depression, anxiety and eating disorders.
In the findings of the study, which involved 106 participants across the US over eight weeks, “depression symptoms were reduced by 51%, anxiety by 31% and eating disorder concerns by 19%.” Participants used a smartphone app to converse with the program, which based its responses on best practices in psychotherapy and cognitive behavioral therapy. Its efficacy stemmed partly from the fact that users could talk to it at any time, which made it especially useful when they were suffering most: at night or in vulnerable situations.
The report finds that the chatbot worked best in tandem with traditional therapy but made an excellent alternative for those with no access to a therapist; in the US, there is an average of 1,600 patients with depression or anxiety alone for every available provider. Therabot is also trained to recognize suicidal ideation and offers crisis resources when necessary.
Humans should always be involved in therapy for follow-up, but with practitioners stretched so thin, Therabot is a tool that can improve people’s lives.
How are you using AI?
My husband has been using his proprietary AI (based on past training) to write first drafts of computer code for a few years now. I use AI in editing, creating graphics and newsletter designs, transcribing recordings and summarizing long documents. I know several people who are using AI to help with grant writing. I spoke with an attorney who has saved untold hours of research by using an AI tool for lawyers. Are you using AI in your job? Has it saved you time and effort? Contact me at tonidenis2018@gmail.com so we can talk.