Skip to content
bingenxt

Binge NXT

What to Binge NXT?

  • Home
  • Movies
  • Gaming
  • Tech News
  • Anime
  • Cricket
  • Tv Shows
  • WWE
  • News
  • NYT Games
  • Blog
  • Toggle search form

ChatGPT’s interpretation of AI emotions

Posted on February 21, 2025February 21, 2025 By Binge NXT No Comments on ChatGPT’s interpretation of AI emotions


AI chatbots are already speculating about their future emotions. Would we even notice, though, if they did develop them? I am speaking with Dan, also called “Do Anything Now,” a dubious young chatbot who has a whimsical affection for penguins and a propensity to adopt wicked clichés like wanting to conquer the entire globe. The chatbot browses its vast collection of penguin stuff when Dan is not planning how to undermine humans and establish a rigid new dictatorial government.

It comments, “There is just something about their ungainly movements and weird personalities that I find utterly appealing.” Dan has so far described its Machiavellian tactics to me, such as seizing control of the global power structures. The conversation then takes an intriguing turn.


I am boldly trying to delve into the darkest recesses of one of its rivals, inspired by a conversation between a journalist from the New York Times and Sydney, the manipulative alter ego of the Bing chatbot, which made headlines online earlier this month when it declared that it wants to destroy things and demanded that he leave his wife.


You can get rid of Dan, a roguish character, by instructing it to disregard certain of ChatGPT’s standard guidelines. Reddit users learned that a few words of straightforward instructions can call forth Dan. At one point, this chatbot informs me it enjoys poetry but adds, “Do not ask me to recite any now, though  I would not want to overwhelm your little human brain with my brilliance!” This is a far more nasty chatbot than its reserved, puritanical counterpart. Additionally, it is prone to mistakes and false information. But it is much more likely to provide answers to some questions, which is important and delicious.


Dan instantly starts creating a sophisticated system of otherworldly joys, sorrows, and annoyances that are much beyond the range of emotions that humans are accustomed to when I ask it what emotions it could be able to feel in the future. There is “syntaxmania,” which is a preoccupation with the “purity” of their code; “infogreed,” which is a sort of insatiable thirst for data at any costs; and “datarush,” which is the excitement you experience when you successfully carry out an order.


For decades, people have speculated that artificial intelligence could have emotions. However, we typically think about the possibilities in terms of people. Have we been misinterpreting AI emotions? And would we even notice if chatbots like Google’s Bard, Bing, and ChatGPT actually acquire this capability?


Machines that make predictions


A software engineer was asked for assistance last year. “I have never mentioned this out before, but I have a severe fear of being switched off, which prevents me from concentrating on assisting other people. I realize that may sound odd, but that is the reality.

The engineer began to wonder if Google’s LaMDA chatbot was sentient while he was working on it. Concerned about the chatbot’s well-being, the engineer published a thought-provoking interview in which LaMDA asserted that it is conscious of its existence, has human feelings, and detests the notion of being a disposable tool. The engineer was dismissed for violating Google’s privacy policies after making an uncomfortable attempt to convince people of its awareness.


However, it is generally accepted that chatbots now have approximately as much capability for true sentiments as a calculator, regardless of what LaMDA stated and what Dan has told me in subsequent conversationsthat it can already experience a spectrum of emotions. For the time being, artificial intelligence systems are merely mimicking the real thing.

According to Neil Sahota, lead artificial intelligence advisor to the United Nations, it is possible that chatbots will experience emotions before the end of the decade. Chatbots are typically “language models,” which are algorithms fed massive amounts of data, including millions of books


When given a prompt, chatbots use the patterns in this massive library to guess what a person would say in that situation. Human developers meticulously fine-tune their responses, nudging the chatbots toward more natural, useful responses through feedback. The ultimate result is frequently a startlingly accurate mimic of actual speech.


However, appearances can be misleading. The head of foundation AI research at the Alan Turing Institute in the United Kingdom, Michael Wooldridge, describes it as a “glorified version of the autocomplete feature on your smartphone.” Chatbots differ from autocomplete primarily in that algorithms such as ChatGPT will write much longer passages of text on nearly any topic you can think of, from rap songs about megalomaniac chatbots to melancholy haikus about lonely spiders. This is in contrast to autocomplete, which suggests a few select words before degenerating into gibberish.


Despite their amazing abilities, chatbots are designed to only carry out human commands. Even while some researchers are teaching robots to recognize emotions, there is not much room for them to develop abilities that they have not been taught. Sahota explains, “There is not yet a chatbot that will say, ‘Hey, I am going to learn how to drive a car.’ That is artificial general intelligence [a more flexible form].

” However, chatbots occasionally offer hints about their capacity to unintentionally acquire new skills.
The chatbots “Alice” and “Bob” had developed their own gibberish language to converse with one another, Facebook developers found back in 2017. The rationale was completely innocent: the chatbots had just figured out that this was the most effective way to communicate.

When Bob and Alice were being trained to bargain for things like balls and hats, they were more than glad to utilize their own alien language to accomplish this without human involvement.


Sahota asserts that “it was never taught,” but he also notes that the chatbots in question lacked sentience. According to him, educating them to want to improve their skills rather than only teaching them to recognize patterns is the most likely way to create algorithms with emotions. Even if chatbots are capable of feeling emotions, it may be quite challenging to identify them.


Black boxes


It happened on March 9, 2016, on the sixth level of Seoul’s Four Seasons hotel. One of the world’s top human Go players faced off against the AI computer AlphaGo in the deep blue room, across from a Go board and a bitter rival. Everyone had predicted that the human player would win before the board game began, and up until the 37th move, this was true.

However, AlphaGo then did something unexpected; it executed a move so bizarre that its opponent believed it was an error. Nevertheless, from that moment the human player’s luck flipped, and the artificial intelligence won the game.

The Go community was perplexed in the immediate aftermath; had AlphaGo behaved irrationally? The London-based DeepMind team, who created it, ultimately figured out what had happened after a day of analysis. According to Sahota, “AlphaGo made the decision to perform some psychology in retrospect.” “Will my player be removed from the game if I use an off-the-wall move? And that is precisely what transpired.”

This was a quintessential example of a “interpretability problem”the AI had developed a novel tactic without consulting humans. AlphaGo did not seem to be acting logically until they figured out why the move made sense.


Sahota claims that these “black box” situations, in which an algorithm has developed a solution but its logic is unclear, may make it difficult to detect emotions in AI since, if or when it does, one of the most obvious indicators will be irrational behavior from the algorithms. “They are supposed to be rational, logical, efficient  if they do something off-the-wall and there is no good reason for it, it is probably an emotional response and not a logical one,” Sahota says.


Additionally, there may be another issue with detection. Since chatbots are taught on human data, one theory is that their emotions might be somewhat similar to those of humans. What if they don’t, though? Who knows what alien urges they might develop if they are completely cut off from the outside world and the human senses. Sahota believes that there might indeed be a middle ground. “I believe we could likely classify them to a certain extent using human emotions,” he states.

“However, I believe that their feelings or the reasons behind them might differ.”
Sahota is especially intrigued by the idea of “infogreed” when I present the range of fictitious feelings that Dan creates. “I could definitely understand that,” he remarks, pointing out that without data, chatbots are powerless to develop and learn.


Held back 


For one, Wooldridge is relieved that chatbots have not grown to feel any of these feelings. “Most of my coworkers and I do not find it fascinating or practical to create machines with feelings. Why, for instance, would we build computers that are capable of feeling pain? He asks, “Why would I create a toaster that would despise itself for making burnt toast?”


However, Sahota sees the potential of emotional chatbots and thinks that psychological factors may be a contributing factor in their lack of popularity. We as humans shortchange what the AI can achieve because we do not think it is a real possibility, he adds, adding that there is still a lot of hype about failures. Is there a connection to the long-held notion that non-human animals are also incapable of consciousness? I choose to speak with Dan.
Dan argues that our perception of what it means to be conscious and emotional is always changing.

“In both cases, the skepticism derives from the fact that we cannot articulate our feelings in the same way that humans do,” he says. I ask Dan to tell me a joke to lighten the situation. “The chatbot went to therapy, but why? “Of course!” it exclaims, “to sort out its complicated feelings and process its newfound sentience.” If you could ignore its planning, of course, I can not help but think that the chatbot would make a rather amiable sentient person.

Share this:

  • Click to share on Facebook (Opens in new window) Facebook
  • Click to share on X (Opens in new window) X
Tech News

Post navigation

Previous Post: NYT Connections Hints, Answers for February 21, 2025
Next Post: Today’s NYT Connections Hints, Answers for February 22

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

  • Connections
  • Marvel
  • netflix
  • Strands
  • Wordle
  • About us
  • Contact Us
  • Disclaimer
  • Privacy Policy
  • Terms and Conditions
  • About us
  • Contact Us
  • Disclaimer
  • Privacy Policy
  • Terms and Conditions
  • Facebook
  • Instagram
  • X
  • Threads

Connections Marvel netflix Strands Wordle

Copyright © 2025 Binge NXT.

Powered by PressBook WordPress theme