By James Maroney
State Sen., D-14
Artificial intelligence, or AI, is like a superhero with a dual identity. On one hand, it possesses incredible powers to process vast amounts of data, make predictions and even learn and adapt on its own. It’s like having a team of brilliant, tireless analysts at your beck and call, ready to crunch numbers and uncover insights at lightning speed.
But with great power comes great responsibility, and the dangers of AI are also very real. Like a rogue vigilante, it can cause unintended harm if it’s not properly trained or monitored. Let’s not forget the classic fear of machines taking over the world, which may sound like a sci-fi cliché but is still a legitimate concern. As with any superhero, it’s important to keep a close eye on AI and ensure it’s using its powers for good.
I cannot take credit for writing the above paragraphs. On second thought, maybe that is debatable. They were written by ChatGPT after I gave it the prompt to “write a witty and engaging paragraph on the powers and dangers of AI.”
AI has been all over the news recently, whether it is ChatGPT, GPT-4, DALL-E or any of the other various AI tools that are popping up. If you haven’t heard of ChatGPT yet, it is a tool made by OpenAI that uses artificial intelligence to respond to user prompts. It can be used to write papers, emails, articles or any of a number of other tasks.
While the work product from ChatGPT may be impressive, AI is still in its nascent stages and presents us with both tremendous promise and perils. You present the topic and the chatbot can answer your questions, write copy and even draft your emails. Recently, we saw in South Windsor that a fake newsletter was created using ChatGPT and then circulated. We have also seen fake images being created by AI and disseminated via social media.
While we can envision the many ways that AI will be able to help make us more efficient and improve our lives, we can also see real dangers. The truth is we don’t know what is coming next. At a public hearing on SB 1103, a bill I authored regarding the state’s use of artificial intelligence, an expert testified that regenerative AI (AI that will write itself) is about 10 to 20 years away. Unfortunately, government often regulates in the wrong order: we allow technologies to develop and embed themselves in society before regulating them. In many cases it may be too late, as the horse has already left the barn.
I believe it is vital that we start to regulate the state government’s use of AI. We need to require impact assessments before implementing AI in decision-making processes and ensure that there are no disparate impacts. We have already seen that AI can affect us all. Hiring algorithms have been shown to discriminate based on age. Some algorithms have assigned higher loan interest rates based on race. And many government algorithms in other states, ranging from the provision of SNAP benefits to decisions about when to investigate reported incidents of child abuse, have been shown to discriminate based on income.
SB 1103 would require assessments ahead of the implementation of AI in specific high-risk instances. It would create policies and procedures to govern the state’s use of AI, and it would create a task force to work toward creating a Connecticut AI bill of rights for our residents. While there is tremendous potential for the use of AI to create significant efficiencies, we need to be certain we aren’t causing unintended harms before fully embracing the use of AI in state government.