    From TechnologyDaily@1337:1/100 to All on Mon Feb 19 17:00:06 2024
    As OpenAI's Sora blows us away with AI-generated videos, the information age
    is over; let the disinformation age begin

    Date:
    Mon, 19 Feb 2024 16:49:46 +0000

    Description:
    Sora is a phenomenal example of modern AI's capabilities, but the seedy underbelly of the internet has me worried already...

    FULL STORY ======================================================================

    AI video generation isn't a new thing, but with the arrival of OpenAI's Sora text-to-video tool, it's never been easier to make your own fake news.

    The photorealistic capabilities of Sora have taken many of us by surprise. While we've seen AI-generated video clips before, there's a degree of accuracy and realism in these incredible Sora video clips that makes me a little nervous, to say the least. It's undoubtedly very impressive, but it's not a good sign that my first reaction to that video of playing puppies was immediate concern. It's a bit unsettling that the possible harbinger of truth's destruction has arrived in the form of golden retriever puppies.

    (Image credit: OpenAI)

    Our lovely editor-in-chief Lance Ulanoff penned an article earlier this year discussing how AI will make it impossible to determine truth from fiction in 2024, but he was mostly talking about image-generation software at the time. Soon, anyone will be able to get their hands on a simple and easy-to-use tool for producing entire video clips. Combined with the existing power of voice-deepfake artificial intelligence (AI) software, the potential for politically motivated video impersonation is greater than ever.

    'Fake news!' cried the AI-generated Trump avatar

    Now, I don't want to simply sit here and fearmonger endlessly about the dangers of AI. Sora isn't widely available just yet (it's currently invite-only), and I genuinely do believe that AI has plenty of use cases that could improve human lives; implementations in medical and scientific professions could eliminate some of the busywork faced by doctors and researchers, making it easier to cut through the chaff and get to the important stuff.

    Unfortunately, just as with Adobe Photoshop before it, Sora and other generative AI tools will be used for nefarious purposes. Trying to deny this is trying to deny human nature. We already saw Joe Biden's voice hijacked for robocall scams; how long will it be before ersatz videos of political figures start to flood social media? It only takes one person with malicious intent for an AI tool to become dangerous.

    (Image credit: Shutterstock)

    Sora, much like OpenAI's flagship AI product ChatGPT, probably won't be the tool used to produce this fakery. Sora and ChatGPT both have a large number of safety guardrails in place to prevent them from being used to produce content that goes against OpenAI's user guidelines. For example, prompts that request explicit sexual content or the likeness of others will be rejected. OpenAI, in its defense, does state that it intends to continue engaging policymakers, educators, and artists around the world to understand their concerns.

    However, there are ways to circumvent these guardrails (I tested this myself, and the results were occasionally hilarious), and OpenAI's transparent approach to AI development means that Sora imitators will surely pop up everywhere in the not-too-distant future. These knock-offs (much like chatbots based on ChatGPT) won't necessarily have the same safety and security features.

    Robocop, meet Robocriminal

    AI tools are already being used for a lot of dodgy stuff online. Some of it is relatively harmless; if you want to have a steamy R-rated conversation with an AI pretending to be your favorite anime character, I'm not here to judge you (well, maybe I am a little, but at least it's not a crime). Elsewhere, though, bots are being used to scam vulnerable internet users, spread misinformation online, and scrape social media platforms for people's personal data.

    The power of something like Sora could make this even worse, allowing for even more sophisticated fakery. It's not just about what the AI can make; remember, it's what a talented video editor can do with the raw footage provided by a tool like Sora. A bit of tweaking here, a filter there, and suddenly we could have grainy phone-camera footage of a prominent politician beating up a homeless person in an alley. Don't even get me started on how high-quality AI video generation is practically guaranteed to disproportionately affect women, thanks to the recent online trend of AI-powered revenge porn.

    The worst part? It's only going to get harder to tell the fakes from reality. Despite what some AI proponents might tell you, there's currently no reliable way to definitively confirm whether footage is AI-generated.

    OpenAI CEO Sam Altman has previously come under fire for misuse of ChatGPT by malign third-party groups. (Image credit: JASON REDMOND/AFP via Getty Images)

    Software for this does exist, but it doesn't have a great track record. When Scribbr tested several AI detection tools, it found that the paid software with the highest success rate (Winston AI) was only correct 84% of the time. The most accurate free AI detector (Sapling) offered just 68% accuracy. This software might improve as time goes on, but the incredibly fast development of generative AI could outpace it, and there's always the risk of false positives.

    Sure, many AI-produced videos and images can be readily identified as such by a seasoned internet user, but the average voter isn't so eagle-eyed, and the telltale signs - usually weird morphing around human digits and limbs, or unrealistic camera movements - are only going to fade as the technology improves. Sora represents an enormous leap forward, and I'm frankly a bit concerned about what the next big jump will look like.

    The era of disinformation

    When we discuss AI deepfakes and scams, we're often doing so on quite a macro scale: AI influencing upcoming elections, an AI-generated CFO stealing 25 million dollars, and AI art winning a photography contest are all prime examples. But while the idea of secret AI senators and chief executives does worry me, it's on the small scale where lives will truly be ruined.

    If you've ever sent a nude photo, then congrats: your jealous ex can now skim your social media for more material and turn that into a full-blown sex tape. Accused of a crime, but you've got a video recording that exonerates you? Tough; the court's AI-detection software returned a false positive, and now you're being hit with an extra felony count for producing fake evidence. Individuals stand to lose the most in the face of emergent new technologies like Sora; I don't care about big corporations losing money because of a hallucinating chatbot.

    We've been living in a time where the sum total of human knowledge is almost entirely accessible from the little rectangles in our pockets, but AI threatens to poison the well. It's nothing new; this isn't the first threat to facts that the internet has faced, and it won't be the last, but it very well could be the most devastating to date.

    Calling it quits

    Of course, you can say 'same sh*t, different day' about all this, and you wouldn't be wrong. Scams and disinformation aren't new, and the targets of technologically augmented deception haven't changed: it's mostly the very young and the very old, those who haven't learned enough about tech yet or aren't able to keep up with its relentless march.

    I hate this argument, though. It's defeatist, and it fails to acknowledge the sheer power and scalability that AI tools can put in the hands of scammers. Snail-mail fraud has been around for decades, but let's be honest: it takes a lot more time and effort than ordering a bot to write and send thirty thousand phishing emails. The rise of AI has enabled online phishing scams to become faster, easier, and larger in scale than ever before.

    (Image credit: Shutterstock)

    Before I wrap this up, I do want to make one thing clear, because my social inboxes invariably get clogged up with angry AI lovers whenever I write an article like this: I'm not blaming AI for this. I'm not even blaming the people who make it. OpenAI seems to be taking a more cautious and transparent approach to deep-learning technology than I'd expect from many of the world's biggest corporations. What I want is for people to be fully aware of the dangers, because text-to-video AI is simply the latest trick in the bad actors' playbook.

    If you'd like to try out Sora for yourself (hopefully for more wholesome purposes), you can make an OpenAI account today by following this link, but bear in mind that unless you have an invite, the software isn't available just yet. OpenAI is treading carefully this time around, with the first wave of testers comprised mainly of red-teamers who are stress-testing the tool to eliminate bugs and vulnerabilities. There isn't an official release date just yet, but it likely won't be far off if OpenAI's previous releases are anything to go on.



    ======================================================================
    Link to news story: https://www.techradar.com/computing/artificial-intelligence/as-openais-sora-blows-us-away-with-ai-generated-videos-the-information-age-is-over-let-the-disinformation-age-begin


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)