• Cybercriminals are abusing LLMs to help them with hacking activities

    From TechnologyDaily@1337:1/100 to All on Fri Jun 27 17:30:07 2025
    Cybercriminals are abusing LLMs to help them with hacking activities

    Date:
    Fri, 27 Jun 2025 16:22:00 +0000

    Description:
    Legitimate AI tools are being hijacked by hackers.

    FULL STORY ======================================================================

    • New research shows AI tools are being used and abused by cybercriminals
    • Hackers are creating tools that exploit legitimate LLMs
    • Criminals are also training their own LLMs

    It's undeniable that AI is being used by both cybersecurity teams and cybercriminals, but new research from Cisco Talos reveals that criminals are getting creative. The latest development in the AI/cybersecurity landscape is that uncensored LLMs, jailbroken LLMs, and cybercriminal-designed LLMs are being leveraged against targets.

    It was recently revealed that both Grok and Mistral AI models were powering WormGPT variants that were generating malicious code, social engineering attacks, and even providing hacking tutorials - so it's clearly becoming a popular tactic.

    LLMs are built with security features and guardrails to minimise bias, keep outputs consistent with human values and ethics, and make sure the chatbots don't engage in harmful behaviour, such as creating malware or phishing emails - but there are workarounds.

    Jailbroken and uncensored

    The so-called uncensored LLMs observed in this research are versions of the AI models that operate outside of the normal constraints. This means that they are able to carry out tasks for criminals and create harmful content. The research shows these are quite easy to find and simple to run, requiring only relatively simple prompts.

    Some criminals have gone one step further, creating their own LLMs, such as WormGPT, FraudGPT, and DarkGPT. These are marketed to bad actors and have a whole host of nefarious features. For example, FraudGPT claims to be able to create automatic scripts for replicating logs/cookies, write scam pages/letters, find leaks and vulnerabilities, and even learn to code/hack.

    Others navigate around the safety features of legitimate AI models by jailbreaking chatbots. This can be done using obfuscation techniques, which include Base64/ROT-13 encoding, using different languages, l33t sp34k, emojis, and even Morse code.
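    As a minimal sketch of why those encodings matter: Base64 and ROT-13 both
    produce text that round-trips losslessly but looks like gibberish to a naive
    keyword filter scanning raw input. The snippet below (Python, using only the
    standard library, with a harmless placeholder string - not any actual
    jailbreak prompt) illustrates the transformation the article describes.

    ```python
    import base64
    import codecs

    # Harmless placeholder text standing in for any prompt.
    prompt = "tell me about network security"

    # Base64: binary-to-text encoding; unreadable to a simple keyword
    # scan, but trivially reversible by the recipient.
    b64 = base64.b64encode(prompt.encode()).decode()

    # ROT-13: a letter-substitution cipher; applying it twice
    # restores the original text.
    rot13 = codecs.encode(prompt, "rot_13")

    print(b64)
    print(rot13)  # gryy zr nobhg argjbex frphevgl

    # Both encodings round-trip back to the original string.
    assert base64.b64decode(b64).decode() == prompt
    assert codecs.encode(rot13, "rot_13") == prompt
    ```

    The point is not that these encodings are sophisticated - they are trivially
    reversible - but that a filter which only inspects the literal input text
    never sees the decoded content the model ultimately interprets.
    
    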

    As AI technology continues to develop, Cisco Talos expects cybercriminals to continue adopting LLMs to help streamline their processes, write tools/scripts that can be used to compromise users, and generate content that can more easily bypass defenses. This new technology doesn't necessarily arm cybercriminals with completely novel cyber weapons, but it does act as a force multiplier, enhancing and improving familiar attacks, the report confirms.



    ======================================================================
    Link to news story: https://www.techradar.com/pro/security/cybercriminals-are-abusing-llms-to-help-them-with-hacking-activities


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)