• What the Coalition for Secure AI means for the AI industry

    From TechnologyDaily@1337:1/100 to All on Mon Sep 30 15:30:06 2024

    Date:
    Mon, 30 Sep 2024 14:22:17 +0000

    Description:
    An expert insight from Seismic's CEO: why the CoSAI is a step in the right direction.

    FULL STORY ======================================================================

    AI industry leaders have joined forces to form the Coalition for Secure AI (CoSAI), signaling a significant step towards stronger regulation. Emerging
    as a response to the fragmented landscape of AI security, where governments lack expertise and developers lack a practical, unified framework, these leaders seek to set the standard on AI security regulations without
    hindering the technology's development and innovation.

    The global AI market is projected to grow at a CAGR of 37.3% from 2023 to 2030, reaching a value of $1,433.3 billion by 2030. As it continues to reshape industries, many will argue that regulation and security protocols are long overdue.

    The question is: how significant will this coalition be for the AI industry? And what do the critics have to say about corporations making their own rules?

    The Coalition for Secure AI

    Gartner forecasts that global spending on information security and risk management products and services will reach $173.2 billion in 2023, reflecting the growing awareness of the importance of safeguarding AI systems and data.

    CoSAI is a newly formed industry coalition that aims to create comprehensive security measures for AI, addressing both current and future risks. It brings together key players, including Google, Microsoft, OpenAI, Amazon,
    Anthropic, NVIDIA, IBM, and Intel, to collaborate on developing standards and best practices for AI security.

    The coalition seeks to address a wide range of security concerns throughout the AI lifecycle, including building, integrating, deploying, and operating
    AI systems. Specifically, it'll focus on mitigating potential risks such as model theft, data poisoning, prompt injection, scaled abuse, and inference attacks, while also striving to develop comprehensive security measures to tackle both traditional and AI-specific risks.

    Why the government is taking a passive approach

    Usually, the idea of governments letting industry leaders write regulations for their own products would be absurd. But when a technology is as revolutionary and evolves as rapidly as AI, the discussion changes.

    Suddenly, the big players are the only ones suited to write the rules because they're the ones creating the technology; they're the ones who know what it is, what the dangers are, and how it's most likely to evolve.

    Until the government catches up, of course.

    But that will take time, and the risk of letting AI continue to spread and evolve without a central, unified framework is too high. Some regulations are better than none, and although compliance will be voluntary, companies that choose to ignore these regulations may face retroactive consequences from their government in the future.

    What the critics are saying

    While this coalition means that AI leaders have voluntarily taken the lead in self-regulation, the project also faces significant criticism. Firstly,
    there's the obvious worry that these industry giants seek to control the pace of AI development and isolate the industry from new competition by creating regulations that favor themselves. Secondly, the coalition's limited membership of mostly large tech corporations raises concerns about whether it can truly address the full spectrum of AI security challenges.

    Finally, despite being a coalition focused on security, many have questioned the lack of participation from prominent cybersecurity companies such as Palo Alto Networks or Fortinet.

    These are valid points, but the thing is…

    You can't keep the AI genie in the bottle

    Despite calls to pause AI's growth by prominent players, including Elon Musk and Apple co-founder Steve Wozniak (notably absent from CoSAI's membership list), most people recognize that the genie is out of the bottle.

    In many ways, that leaves companies in a similar situation to what happened with GDPR: longstanding concerns around data privacy came to a boiling point, but companies lacked a concrete framework to tell them what to do about it. Then, they rallied around a unified framework. The coalition is the start of that process. Self-regulatory efforts like this are key to helping
    governments and regulators who can't keep up with the pace of the technology. It might not be ideal that the technology's creators and developers spearhead it, but what other choice is there? Hopefully, CoSAI will act as a catalyst for more concrete measures to ensure AI is being used ethically and safely.

    In the meantime, this will likely be useful for everyone, including the enablement software industry. Even if the vast majority of AI use in enablement software is significantly lower risk than what the coalition was created to address, it's still valuable for customers to have answers and assurances.

    Final verdict on CoSAI

    Nobody, including the founders of CoSAI, is saying this coalition is the perfect solution for AI regulation in the long term. In an ideal world, the government would have already established and enforced comprehensive security and ethics regulations for AI. And they will, given time. However, companies need a unified framework now.

    So, the coalition becomes the placeholder until the government is ready to take the reins. That's a good thing. It's a positive sign for all industries, moving the needle toward ensuring responsible and controlled use of this revolutionary technology. While warranted criticism remains, it's important to remember that this is just the first step, and everyone will monitor and scrutinize the guidelines and regulations that emerge from this coalition.
    For now, it's a step in the right direction.


    This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/what-the-coalition-for-secure-ai-means-for-the-ai-industry


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)