The rise of Artificial Intelligence has affected the world in unimaginable ways, whether through its economic impact, drawing in 1.6 trillion dollars in investment, or its social impact, shaping the everyday decisions of its users. As the technology continues to develop, its influence will only grow. Most recently, this rapid growth can be seen in its adoption by major United States government agencies, including the Department of Defense. Although implementing AI promises greater efficiency for the government, the journey to this point has been anything but smooth, marked by disagreements between pioneering AI companies and the US government.
In theory, the use of AI in government agencies could bring many benefits, streamlining systems and improving decision making. To implement this technology, the government must strike a deal with one of the major private players in the ever-evolving AI race. The most prominent candidates to emerge in recent weeks have been Anthropic, the company behind the AI chatbot Claude, and OpenAI, the company behind Claude’s competitor, ChatGPT.
Anthropic’s first official partnership with the US government came in November 2024, the first such agreement for any company in the industry. Since then, Anthropic has worked with the United States on many projects, including the creation of custom cybersecurity AI models for US government agencies. This work led to a two-year, 200 million dollar deal awarded by the US government in July of 2025 for Anthropic to provide “frontier AI capabilities” to the Department of Defense, in hopes that the government would gain unrestricted access to Anthropic’s technologies.
However, this relationship has recently turned into a feud: this past February, Anthropic CEO Dario Amodei raised a moral objection to how the company’s products were being used, specifically by the Department of War. As Amodei stated, Anthropic refused to let its product be used in two distinct cases. The first was domestic mass surveillance of the US population, and the second was the development of fully autonomous weapons. His justification was that these use cases were not in citizens’ best interest and that the technology is too unreliable. Although, according to multiple sources, the US government did not plan to pursue either use, the Department of War viewed the refusal as a breach of contract, and Anthropic lost its 200 million dollar deal.
With the US government cutting ties with Anthropic over its ethical safeguards, the government now had to find a new company willing to step in. This is when OpenAI entered the picture.
Shortly after the collapse of the deal between Anthropic and the US government, OpenAI CEO Sam Altman proposed a deal of his own. OpenAI has not disclosed whether there will be any restrictions on the US government’s use of its model ChatGPT; at the time of writing, the deal is still in its early stages and the details have not yet been released.
Overall, the US government’s search for an AI company willing to meet its needs has certainly been complex and will continue to develop in the coming weeks and months. It also raises the question of how ethical boundaries will continue to play a role in the rapidly expanding field of AI. Although innovation is important and should be encouraged wherever possible, ethical boundaries should be established in order to protect humanity from collapse.
Spencer Farish, a sophomore at Hingham High School, when asked what the boundaries of Artificial Intelligence should be, states, “I believe that AI should be permitted to parse through existing government data that respective agencies already have permission to look through, but I don’t think it should extend to collecting new data on the general population. As anyone who has used AI knows, it makes mistakes, and therefore I believe it should never be given the ability to perform any operations. It is up to the government agents to utilize AI as any other tool at their disposal to inform and speed up their decisions, but not to replace them.”
Orlando Vittorini, a sophomore at Hingham High School, adds, “Although many private Artificial Intelligence companies claim to be unbiased, outside studies and my personal research have proven otherwise. Their bias along with the absence of emotions concludes that in the world of today, AI should not be the sole contributor to decision making.” Despite AI’s rapid improvement, it should not be trusted to replace the decision makers whose choices impact countless lives.