Keeping Us and Our Kids Informed About AI

Photo of a rabbit that the computer has identified as a mollusk

“I have to interrupt – look at this!” my older kid said, shoving my phone in my face. “It thinks Hoppity is a moth!”

“Uh-huh, that is funny,” I replied, trying to summon some enthusiasm through my sickness-addled state.

This was just the latest – but far from the only – time the phone’s artificial intelligence (AI) has misidentified our pet rabbit. It has also identified him as a cat, a bulldog, and a guinea pig. This particular quirk of the phone has provided a natural, relatively easy way to talk about this new technology.

While I tried to go back to taking a nap, I overheard my husband explaining that generative AI (the type used in ChatGPT and image-generating software) works by predicting what is likely to come next in text or what is likely to appear in an image. These AI programs don’t have eyes or ears, so they have no idea if they gave a person six fingers or made up a fact out of whole cloth. That, he explained, is why you can’t trust these programs to provide accurate information: they have no real understanding of what makes sense in a final image or text, only whether it fits their general patterns.
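
If you want to make that idea concrete for an older kid (or yourself), here is a toy sketch in Python of what “predict the next word” means. It is not remotely how ChatGPT is actually built – real systems use enormous neural networks trained on huge amounts of text – but the basic move is the same: pick a statistically likely next word, with no step anywhere that checks whether the result is true. The little word table and its probabilities are made up for the example.

```python
import random

# Made-up table of which words tend to follow which other words.
# A real model learns billions of these statistical relationships from text;
# this toy version just hard-codes a few.
next_word_probs = {
    "the": {"rabbit": 0.4, "moth": 0.3, "cat": 0.3},
    "rabbit": {"hopped": 0.7, "flew": 0.3},
    "moth": {"flew": 0.8, "hopped": 0.2},
    "cat": {"napped": 0.9, "flew": 0.1},
}

def generate(start_word, max_words=4):
    words = [start_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Pick the next word by how often it tends to follow the current one.
        # Nothing here ever asks "is this sentence actually true?"
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # might print "the rabbit hopped" or "the moth flew"
```

The program’s only job is to produce something that looks like the text it has seen before – which is exactly why it can cheerfully tell you a rabbit is a moth.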

With AI becoming so much more prominent in our lives, it’s important that we and our kids understand it and its issues. So here are some things to consider talking to your kids about. You may be familiar with some of these points, but I hope others are relatively new to you. This is not to say that AI – even generative AI – should never be used. But we should be thoughtful about how we use it.

  1. There are many different types of AI! While the type that most people are using and seeing is called generative AI, different types are being used to look for new materials to make solar panels, identify patterns in cancer rates, analyze medical images, and categorize galaxies. A lot of people say “AI this” and “AI that” when they really mean generative AI – and even then, only specific uses of it. Not all AI is created equal.
  2. AI uses a lot of environmental resources. Most AI (no matter what type) requires a lot of computing power, which in turn uses a lot of water and electricity. Unfortunately, it’s very difficult to find out how much, because the big tech companies aren’t willing to provide that information. However, we can get some sense of it from the federally owned supercomputers that use AI for scientific research. Summit at the Department of Energy’s Oak Ridge National Laboratory (for a time the most powerful computer in the world) gets roughly 2,000 times more processing done per unit of energy than supercomputers did in 2004, yet its total energy use is still far higher, because it does vastly more computing overall. How much energy and water you’re willing to spend to accomplish a task will vary from person to person. Some people may think no one should use AI, some think it should be reserved for scientific research, some think it should be used to make certain tasks easier, and some think it should be used regularly.
  3. Generative AI cannot judge its own accuracy – you have to fact-check everything it provides. Generative AI has a tendency to be like the dad in Calvin and Hobbes: if it can’t come up with an answer, it will just make one up and state it confidently. Except Calvin’s dad knows he’s making up nonsense – the program doesn’t. This has come up in everything from a lawyer who used ChatGPT and got fabricated court cases to Google’s AI implying that fusion energy can produce commercial electricity. (That one was a personal experience, and the claim is very, very wrong.)
  4. Using generative AI can deny the user the chance to build needed skills. In my professional field, science communication, there has been a lot of discussion about using AI to find or summarize scientific papers. Listening to a marketing person talk about all of the tasks AI could do, I kept thinking “but those are all of the ways you practice and learn that skill!” You learn to understand research by reading the papers. You learn to make connections between different areas of science by searching for papers and summarizing them. Those are foundational skills. If you don’t learn them, you can’t build on them to learn more advanced skills. In history or English class, you’re not really there to know the details of some battle or have a full understanding of Ethan Frome. You’re there to learn how to learn and make an argument. You can’t learn that if you’re relying on AI. This may not be a relevant point if you’re using AI for a one-time thing, like making a vacation agenda, but it’s essential to consider for any academic or professional use.
  5. Generative AI can limit creativity. The science fiction author Ted Chiang (who wrote the gorgeous short story that the movie Arrival is based on) has a terrific essay in the New Yorker about why generative AI will never truly produce art. His argument is that creating art – even bad art – requires making lots and lots of choices. Every one of those choices contributes to the unique final product. Even the most derivative story draws on the author’s experiences and culture. When your input is limited to writing a brief prompt, the choices made are inherently limited and narrow. Also, if you use AI to do things that a writer or artist would otherwise do, it reduces the number of people who can make a living from such work. That diminishes the overall creativity in the world.
  6. Generative AI uses other people’s art and writing as fodder for its training set. Computer scientists “train” AI by feeding it huge amounts of data. When they use AI for categorization or identification – like identifying types of galaxies or cancerous tumors – they feed it lots and lots of images of that thing. The AI then performs a certain task and the trainers provide it with feedback about the accuracy of its responses. The AI “learns” by gradually adjusting its responses to match the feedback (there’s a tiny sketch of that loop just after this list). Generative AI uses massive amounts of text or images as its training data. But unlike in most scientific research, the people who created that text or those images didn’t consent to them being used as training data. Some people argue that what AI is doing with the data is similar to what people do, while others think the ethics are shady.
  7. Using AI can limit relationship-building. Another point that Chiang makes is that when we put effort into a piece of communication, it is an act of care for the person we are communicating with. When we write an email to someone, we want them to put the effort into reading it – so we should put the effort into writing it. When we fake that effort by relying on generative AI, we’re not being authentic. Horror and erotica author Chuck Tingle has a different but related angle – that AI inherently lacks the bigger context of the relationship between creator and audience. That relationship is a vibrant, ever-changing context that art is created and experienced in. Even using AI for purposes other than creation can limit our relationship-building. Humans have never been able to hold all of the information they need to know in their own heads. We’ve evolved to rely on each other’s minds. We’ve already minimized this somewhat through tools like Google. It provides us access to endless information, but it also means that we’re not asking friends and family for their knowledge. In my opinion, AI makes this even worse. It puts the things that we rely on others for in a machine’s hands, weakening those interdependent ties. I know when I ask my kids, “Hey, what’s another word for…?” it shows them that I value their knowledge and competence. Similarly, using AI to learn or practice new languages – like Duolingo keeps pushing me to do at least four times a session – means that we’re not practicing with other people.
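
For the curious, the “train it with feedback” loop described in point 6 can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical example – real systems adjust millions or billions of numbers using techniques like gradient descent, and the ear-length data below is invented – but it shows the basic cycle: guess, get told whether the guess was right, nudge the model, repeat.

```python
# Invented data: (ear length in cm, label) where 1 = rabbit and 0 = guinea pig.
examples = [(9.0, 1), (8.5, 1), (10.0, 1), (3.0, 0), (2.5, 0), (4.0, 0)]

threshold = 0.0        # the single "parameter" the training loop adjusts
learning_rate = 0.1

for _ in range(100):                      # go over the examples many times
    for ear_length, label in examples:
        guess = 1 if ear_length > threshold else 0
        error = label - guess             # feedback: -1, 0, or +1
        # Nudge the threshold in the direction that would have made the guess better.
        threshold -= learning_rate * error

print(f"learned cutoff: about {threshold:.1f} cm")
print("An animal with 9 cm ears gets labeled:",
      "rabbit" if 9 > threshold else "guinea pig")
```

Generative AI runs the same basic guess-and-adjust loop at a vastly larger scale; the difference point 6 is highlighting is where all those training examples come from.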

There are a lot of different ways to use generative AI, and some of these points will matter more than others depending on how you use it. “Writing” a book with generative AI is very different from using it to identify parts of your writing that are hard for others to understand. Screwing around making funny images is different from using it to replace an artist you would otherwise pay.

In an increasing number of cases, it’s almost impossible to avoid AI because it’s being integrated into so many different programs. But knowing the advantages, disadvantages, and issues with AI can make you and your kids informed users (or non-users) of it. And help you to know that a rabbit is a rabbit, not a moth.
