The world is constantly changing, and when change occurs, people need to adapt. That is what schools across the globe are doing regarding their A.I. policies. There is a constant back and forth about the ethics of A.I. and whether or not it should be allowed in the learning environment.
Schools have been trying to adapt to the new A.I. era in many ways. One way HHS has been trying to evolve in this new age of technology is by offering a short A.I. unit at the start of the year in their English classes. Students learn how A.I. can be addictive and start out as just a tool used to help, but students become far too reliant on it, making it into a crutch.
Eli Smock, a student at the high school, commented, “I think that A.I. use is good in moderation…it can be helpful when it’s used for studying or correcting grammar or word choice, but when you use it too much it makes it so that the work isn’t even yours.”
Schools have also been using A.I. checkers, but they can be controversial because of their tendency to be unreliable. Teachers have resorted to other means such as one called white-texting in which they add white text to the instructions of an assignment that would be ignored by any students just reading the assignment but it is seen by A.I. if the text were copy-pasted, causing the A.I. to include details about something totally unrelated. Once teachers see this unrelated information in an essay’s content, they know it was likely the work of artificial intelligence. Another, more reliable way teachers can check is by checking the document’s history to see if the text was added over time or just copy-pasted from somewhere else.
A group at MIT decided to do some research into how generative A.I. affects our critical thinking abilities. They created three groups and had them answer SAT questions while monitoring their neural activity. The first group had access to ChatGPT when answering their questions while the second only had access to Google. The third group had nothing and had to rely solely on their brains to come up with their answers. The researchers found that the first group of ChatGPT users had the lowest neural activity by far among the groups.
While the MIT study has yet to been peer reviewed, Nataliya Kosmyna, one of the authors of the paper, explained, “What really motivated me to put it out now before waiting for full peer review is in 6-8 months, [I fear] there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely detrimental,” she added to hammer home the idea that A.I. is the most dangerous for the development of young children.
Another question that the use of A.I. raises is whether or not it is ethical. A.I. is often trained on sources all across the internet, giving it a tendency to use people’s work without their consent, especially with A.I. generated images. This begs the question of who really owns the A.I.’s output and if the original creators should receive compensation for their work being used.
Additionally, A.I. is incredibly bad for the environment because of the amount of energy it uses. Language models, especially the more advanced ones such as ChatGPT, use large amounts of energy to function, leaving a huge carbon footprint. According to the Environmental and Energy Study Institute, a large data center can use up to five million gallons of water in a day. For reference, that is about the same amount that Hingham would use. Nik Romania added, “I don’t like generative A.I. because it drains a lot of water, pollutes the air more, and uses a lot of electricity.”
Graeme Baker highlighted the importance of companies doing their part to reduce environmental damage, noting, “I think we’ve seen this before with major companies trying to place the burden of usage of their product onto the consumer. This was a big thing that people did with carbon footprint, and it’s really not successful…It’s mostly in the hands of larger corporations to be resolving those issues.” While the companies are still largely at fault for the environmental issues caused by A.I., refusing to use A.I. pressures companies into scrapping their A.I. programs to avoid losing large sums of money.
Because of these factors, there is a growing online movement against artificial intelligence. People are refusing to use it and even going so far as to call those who use ChatGPT and similar models “artificially intelligent,” noting how they always seem to go to A.I. first instead of thinking for themselves.
Student opinions on the matter vary slightly, but mostly have the same ideas. Kaia Johnson, a freshman, stated, “I don’t think people should use A.I. for too much because it inhibits learning ability and that’s the whole purpose of school. It can also be really wrong and biased.”
Will Monti, a sophomore, generally agreed with her but had a slightly more restrictive view, suggesting, “I think they should not let kids use it to get answers, but if they really don’t understand something and there’s no one available to help them, I think that they should be able to understand something.”
Overall, the Hingham Public School’s student guide sums it up pretty well in their opening statement of, “Understanding and making use of technological tools are important elements in preparing students for the future. Generative AI and other technologies can enhance a well-rounded education but are not substitutes for critical thinking, analysis, and writing skills, and the use of generative AI systems may actually prevent students from developing these important abilities. In addition, the information provided by generative AI systems can be factually incorrect or biased.”
While A.I. may seem enticing, in truth it can easily become a crutch that students rely on too heavily that also has the potential to steal from writers and artists and cause environmental harm, so it is best to think before using ChatGPT on that essay.
Sources:
https://time.com/7295195/ai-chatgpt-google-learning-school/
https://www.eesi.org/articles/view/data-centers-and-water-consumption