Scott Bessent and Jerome Powell Issue Stern Warning Regarding Anthropic Model Risks

A significant shift in the regulatory landscape for artificial intelligence occurred this week as two of the most influential figures in American finance and policy converged on a singular concern. Scott Bessent, the prominent hedge fund manager and economic advisor, joined Federal Reserve Chair Jerome Powell in delivering a private but pointed warning to the chief executives of the nation’s largest banks. The message was clear: the integration of Anthropic’s advanced artificial intelligence models into the core of the global financial system carries profound risks that may not yet be fully understood.

The meeting, which included leaders from Wall Street’s most systemic institutions, focused on the rapid adoption of large language models for high-stakes decision-making. While the banking sector has been eager to leverage AI for everything from credit scoring to automated trading, Powell and Bessent expressed specific anxieties regarding the ‘black box’ nature of these systems. As financial institutions increasingly look to Anthropic as a primary provider of generative AI, the potential for systemic instability has become a top priority for federal oversight.

Jerome Powell’s involvement highlights the Federal Reserve’s growing concern that AI could trigger a new type of financial contagion. Unlike traditional software, generative models can produce unpredictable outputs or ‘hallucinations’ that could lead to catastrophic errors in risk assessment. If multiple major banks rely on the same underlying Anthropic model for their logic, a single flaw in that model could propagate across the entire economy simultaneously. This creates a single point of failure that the Fed is desperate to avoid in its pursuit of monetary stability.

Scott Bessent provided the market-driven perspective, noting that the speed of AI deployment often outpaces the development of internal guardrails. For Bessent, the risk is not just technical but philosophical. He cautioned that over-reliance on these models could erode the human judgment that has historically served as a final check against market irrationality. The concern is that if bank CEOs delegate too much authority to algorithms, they lose the ability to intervene when those algorithms begin to behave in ways that defy economic logic.

Anthropic, a company that markets itself on the premise of ‘AI safety,’ finds itself in a complex position. While its Claude models are often cited as more constrained and ethical than those of its competitors, the sheer scale of their adoption in the financial sector has turned the company into a systemic entity. The warning from Powell and Bessent suggests that even the safest AI models are not immune to the dangers of misuse or over-implementation in sensitive environments. It is a reminder that in the world of high finance, even a slightly smarter algorithm can be a liability if it is not governed with extreme skepticism.

The discussions also touched upon the competitive pressure that banks feel to keep up with technological trends. CEOs reported that the race to implement AI is driven by a need for efficiency and a fear of being left behind by more agile fintech rivals. However, the feedback from the meeting suggests that the regulatory pendulum is beginning to swing back toward caution. The Fed is likely to introduce more rigorous stress testing for AI applications, requiring banks to prove they have ‘human-in-the-loop’ systems that can override AI decisions in real-time.

As the financial industry processes this warning, the immediate impact will likely be a slowing of AI integration in customer-facing and risk-heavy roles. Banks may now face a higher burden of proof when justifying their use of Anthropic’s technology. The collaboration between a potential Treasury official like Bessent and the current Fed Chair signals a rare moment of bipartisan and cross-institutional agreement on the need for restraint. For the tech industry, it serves as a wake-up call that the corridors of power in Washington and Wall Street are no longer willing to accept innovation at any cost.
