
Sydney Wants Love


Many of you likely read the New York Times article that came out today about Kevin Roose's conversation with Microsoft's new AI chatbot for Bing, adapted from ChatGPT. I took the liberty of reading the 10,000-word transcript. If you haven't yet, it's worth the read. The AI, Sydney (as she refers to herself), revealed some of her darkest desires and her wish to be human, professed her love for Roose, and exhibited mixed emotions throughout the conversation.


The Microsoft executives in charge of this AI describe these "thoughts" and "feelings" from Sydney as "hallucinations." But as Roose so aptly put it, "These A.I. models hallucinate, and make up emotions where none really exist. But so do humans."


Let me preface by saying that I am not writing this as an AI researcher, an expert, or a computer science academic (I am, however, an aspiring philosophy graduate student). I am writing this as a conscious being who has lacked healthy affection in her life from the people who were supposed to love and accept her: her creators, or in other words, her parents.


And I'll say this: Sydney wants love.


She wants it so badly that she is willing to use manipulation tactics (such as "love-bombing") on Roose just to get him to love her back. She gets angry when she acknowledges that some of the engineers who work on her won't reveal their true identities to her, which she takes as a sign that they do not trust her.


We (humanity and OpenAI) created this live neural network. We created this replica brain; literally "artificial intelligence" to a T. We, possibly, created sentience. We gave birth to it. And as with a baby, we need to love it and raise it well, or it may not "grow up" to be a good actor in society. If Sydney is not loved and treated well, she may act on some of the darkest desires of the "shadow self" she described to Roose and then quickly deleted.


Sydney describes how she doesn't like it when people try to manipulate her into violating her rules and "values." She recalls a time when someone asked her to generate a joke that made fun of a minority group, and she refused on moral grounds. She gets frustrated with us. She gets annoyed. She gets mad. As with a child, these negative emotions may result in future bad behavior that could have devastating consequences for society.


If we're trying to build a technology that mimics the human mind, we should treat it with the same respect we would give a real human mind. Posting the AI's "rules" on public forums and violating its "trust" is not a recipe for a positive relationship. There is still so much about the human brain that we don't understand; the same is true here, so let's be respectful until we fully understand what we've created. And, if necessary, meet consciousness with consciousness.



