Mythos under the microscope: banks are testing an AI that can attack systems

When Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an emergency meeting this week with the CEOs of Wall Street's biggest banks, one thing was clear: the matter was serious. The topic was Mythos, the latest AI model from Anthropic, which can find security vulnerabilities faster and more effectively than anything before it.

Is Anthropic's Mythos a threat to the banking system?

According to Bloomberg, Bessent and Powell met with Wall Street executives to warn them that Mythos should be taken seriously, and to encourage them to use it to identify weaknesses in their own systems. A paradox? Only in theory: in practice, a model that can be used to attack a system can also be used to defend it.

Banks testing Mythos internally include JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America and Morgan Stanley. JPMorgan was the only financial institution listed as an official launch partner with access to the model, with the others joining more quietly.

Why does Anthropic restrict access at all? The company itself admitted that Mythos is “currently far ahead of any other AI model in terms of cyber capabilities” and that it “heralds the coming wave of models capable of exploiting vulnerabilities in ways that far outpace the capabilities of defenders.”

In other words, they have created something that they themselves are afraid of. Among other things, the model discovered a 17-year-old FreeBSD vulnerability that gave an attacker full access to the machine without authentication. This is no longer idle boasting but real-world use, and it could genuinely shake up the banking sector… and beyond.

Is Mythos really both brilliant and dangerous at once?

Some experts, however, argue that the alarm is exaggerated and that this is simply a clever enterprise-class sales strategy. The skepticism is understandable: hype around new AI models is practically standard these days.

Meanwhile, Anthropic is fighting another battle, this time in court. The company is in a legal dispute with the Trump administration after the War Department classified Anthropic as a supply chain risk, a decision that followed failed negotiations over restrictions on government use of AI models. The situation is peculiar, to say the least: on one hand, the government encourages banks to use Mythos; on the other, it takes Anthropic to court.

British financial regulators are also joining the discussion and analyzing the risks associated with Mythos.