I was trying to figure out what to write this week, so I went to ChatGPT and asked it for some topics. ChatGPT gave me:
- The psychology of sports fandom and its impact on well-being. Explore how deeply people invest in their favorite teams, the emotions tied to victories and losses, and the broader implications on mental health and social behavior.
(I had just asked ChatGPT who would win a particular football game.)
I asked it for more topics. I got:
- The Future of Work: How AI is Reshaping Jobs Across Industries.
- The Evolution of Love in the Digital Age.
All were good topics, but then I got to thinking that the source giving me the answers was more interesting than the topics themselves.
So I decided to have a conversation with ChatGPT about a book that I had just read, All That Glitters by Orlando Whitfield. The book is about the contemporary art world and the author’s friendship with Inigo Philbrick, who was convicted of fraud in a complex scheme of buying and selling modern art. I tried to talk to ChatGPT as if it were a person. I asked ChatGPT whether Philbrick was good or bad. ChatGPT gave a nuanced response: “Inigo Philbrick can be seen as a morally complex figure, but his actions ultimately mark him as more bad than good, especially in light of his fraudulent activities.” And finally, I asked ChatGPT if it liked Philbrick. ChatGPT replied: “I don’t have personal feelings or preferences… while some might have been drawn to his confidence and charm initially, his dishonest behavior makes it hard to see him in a positive light…”
The conversation was easy and pleasant, and ChatGPT gave intelligent and nuanced answers to my questions. But ChatGPT, and AI in general, is only as good as its data, and ChatGPT had not read the book; it only had access to summaries and book reviews. While it hinted at this by citing its sources, it never directly said that it hadn’t read the book.
Given how amenable ChatGPT is to answering questions, I have no idea how the educational world will be able to police its use. It also made me realize how easy it will be to slip into using AI instead of doing the work ourselves, and the real dangers that poses.
So I asked ChatGPT: what are the dangers of AI? Here is a summary of its response:
“The dangers of AI can be categorized into:
- Job Displacement
AI-driven automation can lead to widespread job losses. As machines become more capable of performing complex tasks, there is a risk that many jobs traditionally held by humans (note its choice of the word “humans” rather than “people”) may be phased out.
- Bias and Discrimination
AI systems learn from data, and if that data is biased, the algorithms can perpetuate or even amplify those biases.
- Loss of Privacy
AI systems can process vast amounts of data. Governments and corporations could misuse AI for mass surveillance, eroding individual privacy rights.
- Autonomous Weapons
AI has the potential to be used in autonomous weapons systems, which could act without direct human oversight.
- Manipulation and Deepfakes
AI can generate highly realistic fake content, such as deepfake videos or audio recordings. This technology could be used to spread misinformation, manipulate public opinion, and deceive people on a large scale, especially during political events.
- Loss of Human Autonomy
Automated systems in healthcare, legal, and financial sectors could begin making critical decisions without human input.
- Superintelligence
If AI systems were to become more intelligent than humans, they might act in ways that conflict with human values or goals.
- Dependence on AI
As AI becomes more integrated into society, there is the risk of over-dependence on these systems. If critical sectors like healthcare, defense, or infrastructure rely too heavily on AI, any failure, cyberattack, or malfunction in these systems could cause widespread disruption and harm.
- Concentration of Power
AI development is driven by a few large tech companies and governments with vast resources. This concentration of power in the hands of a few entities could exacerbate existing inequalities and limit the benefits of AI to a small group.”
I couldn’t have said it better myself. But I would like to add a tenth danger:
- Laziness
I didn’t particularly care for the book, but my conversations with ChatGPT were interesting and could easily have supplanted reading it. In the future, we could use AI instead of doing our own work.
But I recommend trying ChatGPT. It is fun and, for me, it was helpful to have a discussion about a book.
A friend told me that ChatGPT can also write poetry. Here is a poem it wrote about my dog, Gus. As you can see, AI is not a threat to literature.
Ode to Gus (by ChatGPT)
Gus, my friend with fur so fine,
In your eyes, the stars align.
A wagging tail, a heart so true,
No one loves quite like you do.
You greet each day with boundless cheer,
A loyal soul who’s always near.
Through sunny skies or rainy days,
You brighten life in countless ways.
Your paws leave prints across the floor,
But in my heart, you’ve left much more.
With every bark and joyful leap,
You fill my world with love so deep.
So here’s to Gus, the truest friend,
Whose love and joy will never end.
Angela Rieck, a Caroline County native, received her PhD in Mathematical Psychology from the University of Maryland and worked as a scientist at Bell Labs and other high-tech companies in New Jersey before retiring as a corporate executive. Angela and her dogs divide their time between St. Michaels, Maryland, and Key West, Florida. Her daughter lives and works in New York City.