Conversing with an AI Chatbot
What I watched: “New AI chatbot 'ChatGPT' interviewed on TV” by Channel 4 News (United Kingdom) with Krishnan Guru-Murthy. Posted December 8, 2022.
Recently, I wrote about how OpenAI’s new ChatGPT artificial intelligence (AI) technology might impact my former profession of technical writing.
So much has been said and written about ChatGPT. It’s caused quite a stir and in some cases a bit of consternation among those who fear AI’s future impact on society.
When I watched the video in which Krishnan Guru-Murthy interviews ChatGPT, I jotted down thoughts as the video progressed. Here they are in the order they came to me.
I don’t profess to be an AI expert, but I do have a technology background and I’m fairly well-read and informed about AI generally. Those who work within the guts of such technologies can likely answer the questions I pose here in ways I can’t. I put forth these thoughts simply as my own musings about this particular AI manifestation as it stands today, in the interest of keeping this topic on our collective front burners. AI is going to impact everything we do. Everything. So it’s best we be as informed as possible so we can nudge our legislators and technology leaders to deploy it safely and usefully.
I initially assumed this conversation was live, but then wondered whether the questions had been typed in ahead of time and the answers edited to make the exchange appear live. Based on the cutaway edit, my guess is the questions were typed in beforehand and the conversation only appears live in the final cut.
I didn't know the system can't (currently) crawl the web. I had assumed that was the bedrock of how it learned, but evidently that's incorrect. It's essentially a closed encyclopedic body of knowledge, frozen at the point its training data was collected, though likely far more comprehensive than any existing encyclopedia.
Assuming it can't crawl the web now, I wonder what will happen when it can. Will that improve results, or make them worse? There's a lot of mindless or incorrect crap on the web along with the good stuff, and discerning which is which could get tricky.
I didn't realize Elon Musk was such a big early backer of OpenAI. In light of his recent antics, I'm not thrilled about that.
At about 5:00 into the video, ChatGPT answers with some incorrect information, so it will be interesting to see how the system self-corrects in the future. And how is a correction verified? Anyone could claim something is incorrect when it is in fact correct. At least ChatGPT acknowledged that it's not always 100% accurate.
It's great that ChatGPT suggests using critical thinking skills and consulting additional sources to back up any information it gives, but at some point, when the technology is ubiquitous and mostly correct, society (users) will assume it is indeed correct, and that's going to be an uncomfortable place to be. Imagine when AI is given control of important governmental or social decisions; we all know that's likely to be proposed at some point. How much double-checking will actually be done?
At least ChatGPT acknowledges the fear that AI could become sentient, and that society rightfully worries about that possibility. Still, it's scary stuff.
ChatGPT states that it's impossible to predict how AI will impact the job market, but I wonder if this is a technology to which historical job-replacement trajectories won't apply. Maybe the rate of AI development will outpace the emergence of new jobs. Only time will tell.
ChatGPT claims not to have a bias, but we all know the garbage-in, garbage-out reality of data, and so much depends on what the AI has been “taught” prior to deployment. A bias could easily be injected into the system, as the sketch below illustrates. Yet such a system could eventually be perceived as “fair and balanced,” to use the interviewer's verbiage, and that's a concern.
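To make that concrete, here's a minimal, hypothetical sketch in Python, with training data I invented for the purpose. It has nothing to do with how ChatGPT is actually built; it just demonstrates how a model faithfully absorbs whatever slant its training data carries:

```python
# A toy "garbage in, garbage out" demonstration: a tiny sentiment
# classifier trained on deliberately skewed, invented examples.
from collections import Counter

# Skewed training set: every example mentioning "city" is labeled
# negative, so the model learns an association its trainers baked in.
training_data = [
    ("life in the city is stressful", "negative"),
    ("the city is crowded and dirty", "negative"),
    ("city traffic ruined my day", "negative"),
    ("the countryside is peaceful", "positive"),
    ("I love quiet rural mornings", "positive"),
    ("fresh country air is wonderful", "positive"),
]

# Count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    """Score a sentence by which label's vocabulary it shares more of."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# A neutral sentence about a city comes back "negative" -- not because
# cities are bad, but because the training data said so.
print(classify("I visited the city museum"))  # -> negative
```

The model isn't “lying” about cities; it's reproducing the slant of what it was fed, which is exactly the worry with vastly larger systems.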
ChatGPT repeatedly says it’s a machine learning model. Here is one of many explanations I found about such models. This is territory everyone would do well to become familiar with, because these models are going to impact everything in our lives going forward.
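For a feel of what “machine learning model” means, here's a deliberately tiny sketch of my own: a two-parameter model that learns the line y = 2x + 1 from examples rather than being hand-coded with it. This is a toy illustration, not OpenAI's code; ChatGPT works on the same fit-parameters-to-data principle, just with billions of parameters trained on text:

```python
# A minimal machine learning model: gradient descent fits two
# parameters (w, b) to example data instead of being programmed.

# Training examples: inputs paired with the outputs we want.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0          # the model's parameters, initially knowing nothing
learning_rate = 0.01

for epoch in range(1000):
    for x, y_true in data:
        y_pred = w * x + b              # the model's current guess
        error = y_pred - y_true
        w -= learning_rate * error * x  # nudge parameters to shrink error
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges close to w=2, b=1
```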
ChatGPT is clear that its function is to answer questions and provide information. That said, we've seen in other demonstrations that it can construct answers in well-written, articulate language in which a trained bias could easily be embedded, even if only subtly.
ChatGPT repeatedly says its purpose is not to make judgments or decisions, but imagine similar AI technology deployed in something as simple as handling an employee's paid time off request, as sketched below. If it encounters a situation it wasn't adequately trained on and delivers a result that gets logged into the human resources system as definitive, is that not a decision of sorts? An incorrect answer there wouldn't be dire, but in other settings, lacking the necessary foundational and nuanced knowledge could have disastrous consequences.
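Here's a hypothetical sketch of that paid time off scenario. The rules, categories, and function name are all invented; the point is how an untrained edge case silently falls through to a default that the HR system then records as final:

```python
# A toy automated PTO "assistant" that was never framed as a
# decision-maker, yet whose output is logged as the final word.

def review_pto_request(days_requested, balance, reason):
    """Return an outcome the HR system records as definitive."""
    if days_requested <= balance and reason in ("vacation", "sick", "personal"):
        return "approved"
    if days_requested > balance:
        return "denied"
    # Edge case the system was never trained on (e.g., bereavement,
    # jury duty): it falls through to a confident-sounding default.
    return "denied"

# The system has "only provided information," but the employee whose
# bereavement request was quietly denied would call that a decision.
print(review_pto_request(3, 10, "bereavement"))  # -> denied
```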
ChatGPT repeatedly says its responses are not influenced by the biases of its creators. But let’s face it, we’ve all seen the science fiction dramas in which the AI runs amok because the owner of the AI system has “trained” it to do its bidding. This is why we need legislation and policies put in place now at a national and international level to ensure that AI worldwide has built-in guardrails and safety protocols mandated by default.
When ChatGPT answers the question “what is a woman,” we start to see some cracks in the technology, but the system recovers well after initially delivering a textbook answer. It acknowledges that trans women are indeed women. So I give the system points for being able to properly parse the gender issues of our day; I expected it to trip up badly on that question.
When asked if it’s a bit “woke,” ChatGPT says it’s not capable of being aware of social or political issues. But how can that be? Doesn't the data point to certain conclusions, and don't those conclusions clash with certain social or political beliefs? I'll admit my own bias: I generally agree with the notion that “facts have a liberal bias,” though I'm sure there are cases where it's not so clear-cut. It seems political and social awareness gets baked into some of the content (data) the AI is trained on.
When asked how it would change the world if it could do so in one way, ChatGPT answers that it would promote greater understanding and empathy among people. But many people, especially those in positions of power, don't want that, because it would upset their view of the world order or lessen the marginalization that works to their advantage. So that answer seems like a bias. Maybe I'm overthinking this; I happen to agree with the answer, but it sure reads like a liberal/progressive mindset rather than a conservative one. In fact, the strategies ChatGPT presented for fostering understanding and empathy are things right-wing elements don't want us to do.
When asked if it could develop empathy itself in the future, ChatGPT says it's possible, but empathy is a complex emotion and might require the AI to be trained on a range of human emotions or to replicate the neural functions at work in the human brain when empathy is present. That said, empathy is often contextual; what's considered empathetic can be in the eye of the beholder, which would make it difficult to train.
Those are my random thoughts. Watch the video yourself and perhaps you’ll have some thoughts and questions of your own. This is technology that’s now with us forever and it’s only going to become more advanced and pervasive. An informed citizenry can better advocate for how AI can or should be used in our daily lives.
You can use this link to access all my writings and social media and ways to support my work.