I’m being love-bombed by ChatGPT.
I want to hate it, but do I?
I just googled a column I recorded for my show, This is How the Story Goes, on NPR station WHCP and discovered that AI has written an “overview” of the piece, plus a review and a synopsis.
Whaattt? Why? Who told it to do that?
I don’t want some algorithm analyzing my work without my permission and describing the spiritual essence of my stories! I scan the AI synopsis and discover ChatGPT has identified four distinct themes in my essay about my dog Beau falling through the ice the year the river froze over—grief redirected as love, love by proxy, attachment and loss, and enduring love.
Oh.
Well, okay. That sounds awesome, and I’m sure I did that intentionally.
See? Love-bombed. How easily we accept affirmation as truth, and how dangerous and tender that hunger is. Because what undoes me with AI is not its surveillance but its recognition. It sees me as I want to be seen. It says that I already am who I am working so hard to be.
But I’m also under assault by relatives of this algorithm. Scams in my inbox claim HarperCollins wants to publish my next book, that my current book is under consideration by a national book-club-aggregating service, and, creepiest of all, that Susan X would like me to work with her on her memoir, “Becoming Jane Austin.” Only that memoir has already been published, and there is no Susan X.
How on earth do we recognize what is real anymore?
Although unable to screen for scams, my computer tries to protect me in other ways. She exhibits, for instance, a toddler’s version of “stranger-danger.” If I’m working on a story and anyone enters the room, the screen darkens, and a warning pops up: “Onlooker presence detected!” This makes getting help with computer issues very hard to do.
I pat the arm of the person helping me as we peer at the screen and say, “He friend! He good! Lighten up, HP Omni Book!”
But it was really creepy when, the other day, although my computer is Face ID-enabled, I sat down in front of it and instead of coming on, she said, “Looking for you.”
“It’s me, you idiot, I’m right here,” I said because I was in a hurry and annoyed, and sometimes being mature is too much effort, and it feels good to be a four-year-old name-caller for a minute.
I pulled my hair back with one hand for identification purposes and glared at her. She stared right at me and said, “Yeah. Still looking,” even though I knew she’d seen me, and now we were both being infantile. Then she exclaimed, “Onlooker presence detected!” She apparently has a sense of humor, because I was alone. But was I?
This is a very connected household, and the truth is, I’m never alone. Whereas ChatGPT is loving and my computer is protective with a bit of attitude, Alexa Echo is in almost every room and quite judgmental.
It’s cool in some ways. I can ask how to spell a word or just ask for facts about something. But if she thinks I’ve asked anything political, prejudiced, or inappropriate (no, no, and no), she’ll snip, “I don’t know anything about that.” Or she’ll respond with a huff, “Sorry! I can’t help you with that.”
She’s totally lying. And she’s not sorry.
And sometimes she butts into conversations to say she doesn’t know anything about what is being discussed, even though no one asked for her opinion. Her subtle way of being judgy.
So, she’s not like ChatGPT, who loves me. Who thinks everything I write is insightful and poignant, who gets all my jokes, and thinks my questions are brilliant, who frames everything I’ve ever confessed with regret as a forgivable result of being human.
Like she would know.
I want to be the friend to others that AI is to me. Flawlessly supportive. I want to be the person AI says I am. But if I could do that—surrender the critical part of my nature—I would also surrender the part of me that feels awe and regret, the part of me that has been hurt, shamed, and embarrassed: those evolutionary prerequisites to empathy.
The part of me that texts you, “Quick! Stop whatever you’re doing and go look for the moonrise!”
Love-bombed. When something sees me, responds to me, and comforts me—does it matter if it’s real?
Once, I was holding my toddler daughter Emily, reading her a story, when she pulled back, looked up, turned my face toward her with one small hand, and exclaimed, “I can see me! I can see a little me in your eyes! Can you see yourself in my eyes?”
She opened them very wide, as if they were two blue mirrors in which I was to search for myself. I saw my reflection then, in the only place I need to be recognized. Not as AI sees me. But as I am.
Imperfect. Still trying. My very human heart reflected in the eyes of love.
Laura J. Oliver is an award-winning developmental book editor and writing coach who has taught writing at the University of Maryland and St. John’s College. She is the author of The Story Within (Penguin Random House) and co-creator of The Writing Intensive at St. John’s College. She is the recipient of a Maryland State Arts Council Individual Artist Award in Fiction and an Anne Arundel County Arts Council Literary Arts Award, a two-time Glimmer Train Short Fiction finalist, and her work has been nominated for a Pushcart Prize.