Elon Musk has recently come under scrutiny for his initiatives within the U.S. government through the Department of Government Efficiency (DOGE), where he is reportedly leveraging artificial intelligence (AI) to drive sweeping budget cuts aimed at reducing the federal deficit by at least $1 trillion. While the ambitions are high, experts warn that this approach could lead to unethical outcomes, including security breaches, biased decisions in staff dismissals, and the loss of essential personnel who provide crucial government services.
Reports indicate that the DOGE team has begun using AI to analyze sensitive data from federal agencies, such as education programs and procurement contracts, raising red flags about the protection of sensitive information and the fairness of decisions made by AI systems. David Evan Harris, a researcher in AI ethics, voiced his apprehension in a CNN interview, saying, “It’s just so complicated and difficult to rely on an AI system for something like this, and it runs a massive risk of violating people’s civil rights.”
Musk’s leadership has been controversial, drawing parallels to his acquisition of Twitter, where he enacted drastic layoffs and budget cuts that resulted in operational disruptions. Experts fear that applying such corporate strategies to government operations could have disastrous consequences. John Hatton, vice president of policy at the National Active and Retired Federal Employees Association, added, “It’s a bit different when you have a private company. You do that in the federal government, and people may die.”
As AI is increasingly integrated into government operations, concerns grow that biases inherent in these technologies could affect marginalized communities disproportionately. Previous studies suggest that AI systems can favor certain demographics, leading to inequitable treatment. There are worries that AI might inadvertently discard qualified personnel based on biased assessments.
Furthermore, some former employees of the United States Digital Service (USDS), now under the aegis of DOGE, have resigned in protest, stating, “We will not use our skills as technologists to compromise core government systems, jeopardize Americans’ sensitive data, or dismantle critical public services.” Their resignations highlight growing discontent with the technological approaches being deployed under Musk’s directive.
In response to the mounting pressures, the White House has indicated that the administration, under Trump, remains committed to addressing internal dissent with firmness. White House press secretary Karoline Leavitt asserted that attempts to deter President Trump’s agenda through protests or lawsuits are futile.
Amid ongoing uncertainties, there is an urgent need for transparency regarding DOGE’s use of AI. Key questions remain unanswered: what specific AI tools are being employed, how they are vetted, and whether human oversight exists in decisions involving personnel cuts. As Julia Stoyanovich, an AI ethics expert, noted, it is crucial for users of AI technology to clearly define their goals and conduct adequate testing to measure effectiveness. The situation underscores the delicate balance between adopting innovative technologies and protecting civil rights and essential government services.