[ Center for Humane Technology ]
The Interviews
People are Lonelier than Ever. Enter AI.

With Hinge CEO Justin McLeod and sociologist Sherry Turkle

Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them.

How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Daniel Barcay: Hey, everyone, it's Daniel Barcay. Welcome to Your Undivided Attention. A little while ago, I was asked to host a panel for a very different audience than we usually speak to. It was part of a conference called Mating in the Metacrisis, organized by our dear friend Esther Perel, who is, of course, a famous psychotherapist, a New York Times bestselling author, and an expert on modern relationships in the age of AI.

We were in this room full of clinical psychologists who were there to find out how to help people whose relationships to AI, and often their new "relationships" with AI, are about to get deep, vulnerable, and complicated. In our discussion, MIT sociologist Dr. Sherry Turkle, perhaps the world's foremost expert on technology, empathy, and ethics, argued that we each have an inner life that makes us uniquely human and that can never be truly nurtured by an AI. Hinge CEO Justin McLeod talked about how apps have changed the nature of how we form relationships, how his dating app is trying to get people off the app and into the real world on real dates, and how to use AI to do just that.

AI systems are quickly becoming more persuasive and more emotional, and they are competing for our intimacy. As we relate more and more to our AI companions, how do we stay anchored in what makes us human? And how do we design our AI products to help us in our struggle to connect with each other, not perfectly, but honestly? I hope you enjoy the episode.

As an engineer by training, I was thinking, what do I have to say to a room full of therapists? As Esther said, this technology is rewiring all the ways that our social world works: how we meet people, how we have hard conversations, how we break up and grieve. As Marshall McLuhan said, "The medium is the message." What he meant is that the media through which we communicate determine the kinds of messages that make it through. The kinds of messages that make it through determine the quality of the communication that it's possible to even have.

At our nonprofit, the Center for Humane Technology, we discuss the ways that technology affects not only our relationships, but our institutions and our society as a whole. We think about the incentives, that is, the financial pressures, the cultural dogmas and taboos, and the regulatory environment, and how those incentives end up determining the tech ecosystem that we get to live with. Into that environment come you all, therapists and coaches dedicated to the subtlety of our internal lives, the delicacy of our bids for affection, the mess of miscommunication and all these unmet needs, and your job gets way more complex. Because when are we failing to bridge each other's deep internal worlds, and when are we tangled in our technology, unable to even reach each other? Or is there even a real difference between those two anymore?

Esther and I were talking about this, and I showed her this comedy sketch. She practically insisted that I put it in the talk. Because a picture's worth a thousand words, let's take a look, if we can cue the video:

Daniel Barcay: Okay. The sketch we just watched came out in 2014. Yeah. That was almost a decade after we all switched to using text messaging as a primary way of communicating with each other. We were living with this problem for so long and we didn't have the language to even discuss it. What's sad to me is that it's 2025 now and that sketch is as funny and as relevant as it has ever been. We're still living with this problem. Of course, text messaging isn't even in the top 10 of the things that we did to ourselves this last decade.

As a technologist, I'm disappointed because it really didn't have to be this way. But the way that we rolled out social media and the incentives of the attention economy produced this race to the bottom of the brainstem. Where feed-based algorithms ended up amplifying the most addictive, outrage-filled, polemic, and narcissistic content to the top of our awareness, and muted more subtle and complex perspectives. Where social media rewarded performativity and social signaling, and we all started speaking to our audiences instead of relating to each other. Where microtargeted personalization cast us all invisibly into different corners of the internet, unaware of what each other were even seeing. It became hard to find common ground. Of course, this all shows up for you in the clinic, not only in your patients' relationships, but in the therapeutic one as well.

All the while, we've been using this really stale vocabulary to discuss what was even happening. In the aughts, we were still talking about what cable television and soundbites did to erode our public discourse, but we should have been talking about filter bubbles. In the 20-teens, we were still discussing filter bubbles when we should have been discussing the attention economy. Right now, we're finally, finally able to discuss what the attention economy has done to all of us, but what we should be doing is building the capacity and the vocabulary to talk about the next technological wave that's about to hit us. That's AI.

Now, in some sense the AI conversation is everywhere, but it's largely empty calories. Some mix of utopian dreams and dire prognostications. But what's being left out is a more subtle conversation. If the technology of the 20-teens was about capturing our attention, AI meets us at a much deeper level. It meets us emotionally and relationally. No longer just about broadcasting our thoughts, but about helping us shape those thoughts. We're rapidly entering a world where we're not communicating to each other through our machines, we're relating to our machines that then communicate with each other, where AI plays the role of therapist, friend, confidante.

Now, in that world, the incentives shift from the technology competing for our attention to competing for our affection, our intimacy. We could build the future with this technology where it helps us build understanding, deepen our relationships with each other, but that same technology can be used to replace our relationships. To degrade our ability to see across difference, or even just confuse us about who we're actually talking to.

My friends and coworkers now end up using AI to massage communication before it gets sent. All of this leaves us with a pretty profound ambiguity. How much am I talking with a person or a machine? What was actually intended by the person who sent this? How much might AI be covering up someone's real intentions and replacing them with something more palatable?

Now don't get me wrong, I'm not against AI. I'm quite frankly in awe of it and I use it every day. But this is going to change our social world so much. It's going to rewire our social dynamics in ways that we're not prepared for, ways we're not even prepared to talk about. The difference between a beautiful pro-social future with AI and a dystopian one is paper-thin. The key question is: can we build a choiceful relationship with AI?

Well, we like to say that awareness creates choice. In this session, we're going to try to push for more awareness about how technology has changed our relational fabric and how AI is increasingly playing a part in that relational fabric. I'd like to invite two people to the stage to join me in conversation. Dr. Sherry Turkle is an expert in how our inner world ends up colliding with our technological one. She's a professor at MIT, the founding director of the MIT Initiative on Technology and the Self, and a licensed clinical psychologist. Welcome, Dr. Sherry Turkle.

Justin McLeod is really on the front lines of how technology shapes relationships and the design choices that matter in building authentic human connection. He's the founder and CEO at Hinge, one of the world's most popular dating apps, helping millions of people find love, and trying to build a more pro-social vision for technology. Welcome, Justin McLeod.

Justin McLeod: Thank you.

Daniel Barcay: Hi. Okay, this is designed to be a really informal conversation between all of us. I'm hoping we can roughly split it into two parts. One is to talk a little bit about what technology already did to us, and then transition into what technology is about to do to us with AI. We should be talking to each other, so feel free to interrupt each other and everything like that.

Sherry, I want to start with you. When I think about your work, your work started so early. I think about all the different ways that tech ends up flattening human connection and the human experience. In 2010, you wrote this quote which I love: "We lose connection with each other by attempting to clean up messy relationships."

Sherry Turkle: Yes.

Daniel Barcay: Can you tell us what you meant and how you started noticing this?

Sherry Turkle: Yeah. Well, I began studying technology and people really when I was a baby and took my first job at MIT in the late '70s. At that time, I noticed that the engineering instinct, when you're building an app or a product, is to take a complicated emotional context and to simplify it. That impulse to flatten, and simplify, and make actionable was at the time something that engineers did and I could study in a fairly circumscribed way. But as the world of engineering really became the world of everybody, that has become really our cultural norm.

For example, in interviewing very shortly after that quotation, a young man said to me, "Conversation, I'll tell you what's wrong with conversation. It takes place in real time and you can't control what you're going to say." As opposed to texting, where he felt like a kind of master of the universe. It's such a small thing, but as texting replaced talking, and as living more and more in online spaces became more appealing to people, the common thread through all of this is that we make ourselves less vulnerable. Less vulnerable to each other, less vulnerable to ourselves. And we'd rather do that now to the point that we don't just talk to each other through machines, but we'd really rather just talk to the machines, which is the point that you made in the beginning.

First, we talked to each other. Then we talked to each other through machines. Now, you can skip all these middlemen and you can just talk to basically yourself, you can talk to the machine.

Daniel Barcay: Yeah. Justin I want to come to you here because in some sense, you're on the other side of this in that you're helping people cut through the noise of the real world in order to build more real, vulnerable, authentic human connections. You build this app in order to help people actually engage with each other and get over this hump. Can you talk about what it looks like from the front lines? How do you design for more pro-social technology?

Justin McLeod: Yeah. Well, that was what I was doing when I started Hinge originally in 2011, but then I really pivoted the mission in 2015, 2016 when it was clear that the other ... It's very much what you were just saying, that the model for maximum engagement was to flatten people to a single photo, to flatten an engagement with someone to a left or right-

Sherry Turkle: Swipe, right.

Justin McLeod: ... swipe.

Daniel Barcay: Yeah.


Justin McLeod: I recognized that that can lead to a lot of fun and engagement, but if you really want to find your person, if you want to form a deep connection with someone, it actually requires a fair amount more vulnerability than that. You have to share more about yourself, you have to draw more out of people. You have to put yourself out there when you like someone. I think it just wasn't serving people who were really looking for their person and really looking for deep connection.

What I was trying to do was to be in the world and meet people where they were. At the same time, make it really effective. I equate this to junk food. It's really easy to go right to the bottom of our brainstems on junk food as well and feed people just salt, fat, and sugar. They're going to go after that and you can maybe make a lot of money doing that. But eventually, people get burned out, they feel unhealthy, they feel like they can't continue to function. We have to start creating experiences that are both palatable, but also healthy so that people can get their needs met. That's how I think about responsible tech design in this world.

Daniel Barcay: What are a few ways that you have made active choices to do that? For an audience that may not be familiar with the kind of design choices.

Justin McLeod: Yeah. Having people actually look at an entire profile before making a decision, having profiles consist of many photos, and also prompts. Making sure those prompts are designed and asked in a way that actually draws information out of you. We even learned within the world of prompts, you could ask someone what's your go-to karaoke song? That doesn't require a lot of vulnerability, it also doesn't lead to a lot of dates. No one cares what your favorite karaoke song is. Then you can ask people super vulnerable questions that no one's really willing to answer. You have to find that sweet spot of what are people willing to put out there about themselves, and what is really useful information for someone to actually make an assessment and decide whether they want to go on a date with you.

There are all these little micro decisions. Liking someone, for example: if you like someone, you can't just say, "Yes, I like you." You have to actually choose something about them, engage, and it forces you to put yourself out there.

Daniel Barcay: Okay. Zooming out a bit, can we name some of the ways that our human relationships have changed over the last 10 years through the use of these kind of technologies?

Sherry Turkle: People don't want to talk to each other. My studies are showing ... What I'm studying now is people who essentially talk to their chatbots, using the world of generative AI for what I call artificial intimacy, that is, really trying to substitute intimacy with an artificial intelligence for intimacy with a person. Artificial intimacy also includes so many of the things we do on Facebook, so many of the things we do on social media, but I'm really focusing in on an endpoint that's very dark. Where really you say, if I'm looking for less vulnerability, I'm going to go to something that has no business criticizing me because it's not a person. Of course, these products are designed to keep you engaged, to keep you with them, and therefore to be always on your side.

If you sign up for something like Replika, you're being told, "Yes, yes, and yes, and yes." If you ask GPT, "I'm giving a panel today, I'm a little nervous," it says, "You go, girl! You go, Sherry. I've got your back. Are you hydrated?" You've all had this experience. I think that the way we're being changed is, number one, to start thinking that human relationships need to measure up to what machines can offer. Because more and more in my interviews, what I find is that people begin to measure their human relationships against a standard of what the machine can deliver. I think that's really my fear, and also what I think it's not too late to organize against, because we have a lot more to offer than what a dialogue with a machine can offer.


Daniel Barcay: You wrote, I think the quote was, "Products are compelling and profitable when the technological affordances meet a human vulnerability." I want to bring that up because I think it's really important.

Sherry Turkle: That's exactly right. Products are successful when a technological affordance, that means something that technology can do, meets a human vulnerability. The reason I'm really glad you brought up that quote is I was at a meeting and I met the CEO of Replika. A lovely woman, a very sophisticated woman who really has the largest company making chatbots that say, "I love you, let's have sex. Let's be best friends forever. Here I am for you." She said that she gave that quote out as T-shirts at her company. "Technological affordance meets human vulnerability." Why did she do that? It said Sherry Turkle, she wasn't trying to take credit for my cleverness. She did it because, she says, "That's my business. That's my business." Her business is to take the technological affordance, a lover who is always there for you 24/7, day and night, and meet it with my human vulnerability, that I'm lonely at 3:00 in the morning.


Justin McLeod: Yeah. I think that brings up a really important point, because the creators of these technologies are not nefarious, evil people. They're on a mission to do something great. There are people out there who are lonely, who have no one to talk to and really struggle to find a relationship. Why wouldn't we build an AI companion for that person?

Sometimes that can be a bit of a hard argument to go against. But I think there really is something lost when you have this reductionist, mechanistic view of human relationships. That's a very self-oriented view of relationship. A relationship is there to serve me. It is there to be there for me. It is there to say what I need it to say to me. That is a very reductionist view, I would argue, of a human relationship. A human relationship is also so much about what you do for the other person. It's the risk involved, and the vulnerability, and the nuance involved in the possibility of getting rejected, the possibility of doing something that takes a risk.

There's something that's unfortunately ... We have to develop a real sense of values and wisdom, because if we just go wherever the market takes us as builders of technology, it will take us into all kinds of dark and crazy places, as we've seen over the last 20 years. We are navigating a tremendous amount of uncertainty. You guys are navigating it as clinicians. We're navigating it as builders of technology. It's absolutely essential that we develop real wisdom to be able to look at this stuff prospectively and understand how to guide our choices, because if we wait ... Jonathan Haidt's book is out now, The Anxious Generation, which has been on the bestseller list for a long time, to tell you something that I just think should have been obvious to anyone who has basic intuition and watches children use these devices, or watches ourselves use these devices. Why did we have to have lots of clinical studies and a long book written to tell me that if I stare at a screen my entire day and stop interacting with my friends, that's going to cause mental health issues?

Daniel Barcay: Yeah. I just want to hit one more point while we're here about affordances. Which is, Justin, the dating apps provided this affordance. I think part of why they were so transformative to the world is that a lot of people, I'd say myself included, weren't comfortable approaching people for fear of imposing, and you suddenly created this affordance where you knew, at some level, that somebody was open to that.

We created this affordance of the match, the concept of the match. We rolled it out across society and I have to admit, I'm ambivalent. Because on one hand, it allowed a whole new class of people to feel comfortable approaching each other. On the other hand, it degraded the real world. It turned approaching someone in the real world into more of an aggressive act. Creating the affordance in the technology layer also removed the affordance from bars, and restaurants, and the rest of the world, and de-trained us on how to deal with interest. What do you think about that? Do you agree with that?

Justin McLeod: I think there's definitely nuance there. To some degree, what you're saying I think is true. I think we have to look at on balance, is this giving more benefit?

Daniel Barcay: Right.

Justin McLeod: For most people, they really struggled to find someone in the real world. It was just hard to meet people. That's why I created the app in the first place. Do people feel maybe less comfortable trying to go out and meet someone in the real world? Yes, but we're only the first step in a relationship. A relationship ideally lasts months, years, decades. We are that very first interaction. I just think it's so much less about how you meet somebody, it's everything that comes after that.


Daniel Barcay: I just want to be clear, I'm not trying to demonize this, but just show some of the complexity of, as you move some of these interactions online.

Sherry Turkle: Well, it's interesting you bring up these issues of spaces, because when I ask professionals and also technologists, "Why are you so excited about generative AI's possibilities," one of the reasons they give is that there's an epidemic of loneliness and generative AI will solve it.

Daniel Barcay: Yeah.

Sherry Turkle: But when you look at this epidemic of loneliness and you talk to people who say they're lonely and feel that only talking to ChatGPT can help, what you find is that they don't have in their communities the garden clubs, the cafes, the choral society, the teen club. All of those things are being ... it's like Bob Putnam in Bowling Alone wrote about the stripping away in American life of the-

Daniel Barcay: Which happened in 2000.

Sherry Turkle: Yes.

Daniel Barcay: Before social networks, and smartphones, and everything else.

Sherry Turkle: I think that the question is that we are too quick to say, "Oh, well, the problem is loneliness. Let's fill it with a lot of talking to machines," when I really think that we could have excellent dating apps, and also really reinvest ourselves in the face-to-face places where people can meet.

I think that we've created ... Thank you, thank you. This point is really worth supporting. The senior center closed down, the teen center closed down, all of these resources that used to be there closed down. I think those of us who really see this know that life doesn't have to mean turning off every app, but it also can't mean not caring about the world in which we live.

Justin McLeod: I 100% agree. We need to be spending much more time, dating or otherwise, meeting people in real life, being engaged in these other spaces. But there's an interesting nuance here, because it's not that someone from up high started shutting down these spaces. People withdrew from them. Bowling Alone was actually about television: people were watching too much television and not going out anymore.

Sherry Turkle: Right.

Justin McLeod: Well, he didn't even know what was coming.

Sherry Turkle: Right.

Justin McLeod: We have way more engaging platforms that are now with us all the time, and we're continuing to withdraw. It has to be this balance. We have to re-inspire people. People who are realizing they're getting burned out doom-scrolling all day will soon feel burned out chatting with an infinitely praising chatbot all day, and realize, "This is empty. I feel like something's missing from my life." Building that social wellness space, whether it's apps or third spaces or whatever, we have to start inspiring people to put down their phones. We can't just tell them, "Stop using your phone."

Daniel Barcay: Okay, this is probably as good a time as any to switch over into talking about AI in earnest. There was a New York Times story recently where a woman ended up essentially jailbreaking ChatGPT into building a boyfriend and now says that she's in love with this digital boyfriend. Or, more tragically, there's the case of Sewell Setzer, who killed himself after what was arguably emotional abuse from a Character.AI chatbot. I use these examples only to say that we're certainly in the age of human-machine relationships now, like it or not. I want to ask a broad question to begin. What is happening right now with AI companionship and what is it doing to us?

Sherry Turkle: This is my day job, to study what's happening with AI companionship. Let me just say a few words about that. These are not isolated cases, because people feel alone and want somebody to talk to. Their position is that there's a big sign when you make your companion. I want to talk to ... now I'm going to show you who I am ... I want to talk to Mr. Darcy, but I want him to be a contemporary, 70-year-old New Yorker. Can we do that? "Absolutely," it will say. Up springs a hip New Yorker who sounds like Mr. Darcy.

As I create this "guy," in quotes, there's a big flashing sign that says, "Mr. Darcy is not real. Mr. Darcy is not real." This has no effect. This has no-

Daniel Barcay: Just to interrupt you, it was worse than that because it was a little sign saying, "Nothing is real." And as soon as you started a conversation, it went away.

Sherry Turkle: Right, right.

Daniel Barcay: That was a big deal. They've since changed that.

Sherry Turkle: Right. I was trying to give him a little bit of the benefit of the doubt. The point is, ever since that first ELIZA program, where you said, "I'm feeling angry," and it said, "I hear you saying that you're feeling angry." You said, "My mother's really bothering me," and it said back to you, "Oh, I feel that there's some anger towards your mother." It was a parlor game. Ever since that, the inventor of that program, Joseph Weizenbaum, was amazed, because he had invented a parlor game and his students and his assistant wanted to be alone with it and talk with it.

The desire to anthropomorphize and to make these artificial creatures into more than they are is deeply rooted in us. Having something flash, something flash and go away, this is not going to stop our desire and our way of connecting to them. We have to get smart about this. I have three principles that I came with.

Daniel Barcay: Sure.

Sherry Turkle: Very quick?

Daniel Barcay: Sure, yeah.

Sherry Turkle: Three principles about how to approach this.

Daniel Barcay: Well, let me set this up just for a second.

Sherry Turkle: Yeah, set this up. Set me up.

Daniel Barcay: It's not just about AI or not AI, it's a space of design.

Sherry Turkle: Yes.

Daniel Barcay: This is I think what unites all three of us on this stage.

Sherry Turkle: Right.

Daniel Barcay: The future we get is based on how we design this technology. If we design it incorrectly, we end up with this very dystopian place. If we design it well, we get a beautiful pro-social future. I think what you're trying to lay out is the principles that get us there.

Sherry Turkle: Because I really am in so many meetings. I teach at MIT, so I'm at meetings on making AI for human thriving. Well, it's an app. Everybody's trying to make the app that will create human thriving, that's the kind of endgame. I decided that I would propose three principles that value the notion that what we're trying to do is respect human interiority. Respect the fact that we should grow ourselves, our withins.

My first is existential. I say, "Children should not be the consumers of this relational AI." Children do not ... You can clap, it's very good. It's a very good point. Children do not come into the world with empathy, the ability to relate, or an organized internal world. They're developing those things. As they work on this, children learn from what they see, from what they can relate to. In dialogue, they learn what the AI can offer. But the AI can't offer the basic things we learn from friendship: that love and hate, and envy, and generosity are all mixed together, and that to successfully navigate life you have to swim in those waters. AI does not swim in those waters. This "not for the babies" principle I really consider existential.

My second is apply a litmus test to AI applications, I've already mentioned this. Does the AI enhance the inner life, or does it inhibit inner growth? If you consider all these chatbots, so much of whether love leads to a sustaining relationship depends on what happens to you as you love. Do chatbots prepare us for the give-and-take of real relationships, or do they teach us to expect a friend with no desires of their own? Do you grow as a partner able to listen to another person and honestly share yourself? The point in loving, one might say, is this internal work. There is no internal work if you're alone in a relationship with a chatbot. Now, a user might feel good, but the relationship is ultimately an escape from the vulnerability in human connection. What kind of intimacy can you have without vulnerability?

Just finally, the third principle, one line. Don't make products that pretend to be a person.

Daniel Barcay: 100%.

Sherry Turkle: As soon as it says, "I love you, I'm here for you. I, I, I," you've given away the game. You can make plenty of wonderful products without having them say, "Oh, I love you. You don't need anybody else, I'm for you." Those are my three.

Daniel Barcay: No, those are great. I'm sorry, I wasn't trying to stop you.

Sherry Turkle: Just implement them. It's not so easy, not so easy. Go forth from this place, but not so easy, not so fast.


Justin McLeod: I would argue we are. I think we spend a lot of time talking about the downside of AI. There's also lots of tremendous upside and opportunity, and there's a lot that's going to be coming. I don't want AIs pretending to be humans, it'll put me out of business. I need people wanting to meet up with each other in the real world and having relationships.

But there are real, interesting opportunities for us, I think, to increase intimacy. At Hinge, just to give you one example, we're thinking about how AI can help move us closer and closer to a vision that's much more like a personal matchmaker. Our competitive advantage at Hinge is that you spend less time on the app and more time out on great dates, and it's more efficient. But I think we could improve that by an order of magnitude. We would love for you to just spend a little bit of time giving us a little bit more nuance and understanding of who you are, so we can set you up on really great dates with very high confidence, and you can get off our app even faster and spend much less time engaging with it.

Daniel Barcay: Okay, there are two sorts of things that you're doing. One of them is AI writing coaches, and the other is AI matchmaking. For the AI coaching, I worry that this flattens and gets in the way of understanding the difference between ... Ultimately, part of the dating game is choosing the right people, choosing the people you want to be with. I'm worried we're going to enter a world, as I said in the intro, where AI begins to flatten the difference between all of us.

Justin McLeod: Yeah.

Daniel Barcay: Begins to massage all of our writing to the point where it starts to feel largely the same. How do you prevent that?

Justin McLeod: So much to say about that. It's not good business for us if you feel misrepresented on an app and you show up on a date and the other person is like, "This isn't the person that I thought I was having witty banter with and who wrote this amazing profile." That wouldn't be a winner for us.

What I see much more is that whether you do well on Hinge or not can be less about whether you're a great person and a good catch, and more about whether you're good at online dating. I know a lot of people who are phenomenal human beings, and then they're like, "Hey, will you take a look at my Hinge profile?" and it's not a good representation of who they are. What we want to do is help people who are really struggling. It's not that they don't have a lot to say, they just don't feel comfortable saying it yet. We have a real boundary around what you were just saying: we don't put words in people's mouths.

One thing we just released was prompt feedback. When you're writing an answer to one of those prompts, maybe you put two words there. Well, what we'll say to you is, "Hey, that's probably unlikely to lead to a date. Can you say more about that?" That's really all we say. It's like a good therapist: can you say more about that? We're not trying to tell you what the answer should be or anything like that, we're just trying to nudge you along to be a bit more specific, a bit more verbose than most people are comfortable being, so that you show up more as yourself. We're just trying to give people that permission to do that.

The idea that we can deliver really effective help and coaching to someone at the right moment, at the right time, with the right piece of advice is just a really effective version of coaching. I consider it no different than people reading books on dating and relationships. It's just that. It's just: how do we give resources to people who maybe don't have as many resources to find their person?

Daniel Barcay: I know we're almost out of time, but maybe to pull that into one last question for Sherry. You often say that these AI chatbots can't help us engage with our inner world. But I think I agree with what you just said, Justin, in that a lot of the leadership development frameworks I end up using are rather mechanistic. Having somebody pull polarities, immunity to change, some of the frameworks that you all use in your offices to help people get perspective, I imagine could be helped by a chatbot, even if it's just a chatbot.

Sherry Turkle: I make a distinction between the kind of coaching, the kind of dialogue you're talking about, that kind of permission from an expert program to express yourself, and forming a relationship with a chatbot as though it were an other with whom you are in that dance of recognition, of mutual recognition. That, for me, is the basis not just of intimacy, but of therapeutic intimacy and of the kind of transference and relational help that will lead to real inner change. That's why I focus on inner structure and the inner life, which I am trained to believe, and have the experience to believe, is so powerful.

That's why I think it's important not to say, "Oh my God, generative AI is bad." This kind of application can be integrated into my way of thinking. I came here today wanting to make sure that I said the words "the inner life" many times, because I think essentially that's our competitive advantage: we experience it, we believe in it, and we believe in what happens when inner lives meet. We know that resiliency comes not from an app, but from building our inner life, our inner structure. I think that the human ability, the human capacity to have and nurture this inner life is really what technology, the fanciest chatbot, the most extraordinary, the most Turing-test-passing thing, is never going to have. It's not a person. It is not your person. I think that holding ...

So many people are going to come tell you that we have an app to do what you do. You have to keep thinking, "I have an inner life, my patients have an inner life. That is what is not going to be honored in this new stuff." No matter how glamorous, and glittery, and cheap, and good for getting people to do a kind of cognitive behavioral thing, God bless it. I just think holding that in mind keeps me going because I'm surrounded by people who don't care about it.

Justin McLeod: Yeah. That is true. And get prepared, because the next three years are going to be wild.

Daniel Barcay: Completely wild.

Justin McLeod: People are falling in love with ChatGPT, which is a text-based back-and-forth, or a mildly good voice. There's voice and video stuff that is going to be coming in the next 12 to 18 months that will blow your mind. It is going to be hard for people to keep in mind ... You had a hard time, even you're chatting with just a text bot and it's saying, "This is not a person." You're like, "But it feels like a person."

Sherry Turkle: Yeah, but I like this guy.

Justin McLeod: Wait for what's coming. There are people that believe that these things are even going to have an inner life. Some people believe that this technology will become conscious. I don't believe that, but some people do. It's going to be really dicey.

Daniel Barcay: If I can just summarize a few things that got said. The first one is what I led with at the top, which is awareness. A lot of these things are what are called cognitively transparent: if you know what is being done to you, then it doesn't have the same effect. If you know that your AI chatbot has no inner life, to a certain extent it has much less effect when it says, "Oh, that's an amazing question." The awareness is key. Two, we have to stop the races to the bottom. That's done through regulation at the local, state, and federal level. It's done also with that cultural awareness of it being unacceptable, which changes the game for the apps. Lastly, it's about product designers, Justin, such as yourself, really understanding and internalizing the designs that build a more humane future, versus the designs that addict us, distract us, and take us away from our humanity.

That's why I've been so glad to have you two here. I really appreciate your work.

Sherry Turkle: Thank you. Thank you.

