• Mitigating AI-related risks: soft approach, hard approach or something in the middle?

    From TechnologyDaily@1337:1/100 to All on Tue Apr 15 10:00:08 2025
    Mitigating AI-related risks: soft approach, hard approach or something in the middle?

    Date:
    Tue, 15 Apr 2025 08:50:44 +0000

    Description:
    As AI technology continues to evolve, the differences between regulatory approaches are becoming more apparent.

    FULL STORY ======================================================================

    The riskiness of AI development stems from the fact that modern AI tools
    push ethical boundaries under existing legal frameworks that weren't made
    to fit them. However, the way regulators choose to proceed in light of
    this varies greatly between countries and regions.

    The recent AI Action Summit in Paris highlighted these regulatory
    differences. Notably, its final statement focused on inclusiveness and
    openness in AI development, while only broadly mentioning safety and
    trustworthiness and without emphasizing specific AI-related risks, such
    as security threats or existential dangers. Drafted by 60 nations, the
    statement was signed by neither the US nor the UK, which shows how little
    consensus there is in this space.

    How different regulators tackle AI risks

    Countries and regions differ in how they regulate AI development and
    deployment. Nonetheless, most fit somewhere between two poles, with the
    United States at one extreme and the European Union at the other.

    The US way: innovate first, regulate later

    The United States has no federal-level acts regulating AI in particular;
    instead it relies on voluntary guidelines and market-based solutions. Key
    pieces of legislation include the National AI Initiative Act, which aims
    to coordinate federal AI research; the Federal Aviation Administration
    Reauthorization Act; and the National Institute of Standards and
    Technology's (NIST) voluntary risk management framework.

    In October 2023, President Biden issued an Executive Order on Safe,
    Secure and Trustworthy Artificial Intelligence, establishing standards
    for critical infrastructure, enhancing AI-driven cybersecurity and
    regulating federally funded AI projects. However, the US regulatory
    landscape remains fluid and subject to political shifts. In January 2025,
    President Trump revoked Biden's executive order, signaling a potential
    pivot towards promoting innovation and away from regulation.

    Criticisms of the US's approach include its fragmented nature, which
    leads to a complex web of rules, a lack of enforceable standards and gaps
    in privacy protection. However, the country's laissez-faire approach to
    regulating AI may very well change in the future. In 2024 alone, state
    legislators introduced almost 700 pieces of AI legislation, and there
    have been multiple hearings on topics such as AI in governance and AI and
    intellectual property. This shows that the US government doesn't shy away
    from regulation but is looking for ways to implement it without
    compromising too much on innovation.

    The EU way: damage-prevention approach

    The European Union has taken a very different approach. In August 2024, the European Parliament and Council introduced the Artificial Intelligence Act
    (AI Act), widely regarded as the most comprehensive piece of AI
    regulation to date. Employing a risk-based approach, the act imposes the
    most stringent rules on high-risk AI systems, such as those used in
    healthcare and critical infrastructure, while low-risk applications face
    only minimal oversight. Certain applications, such as government-run
    social scoring systems, are explicitly prohibited.

    Similar to the GDPR, the act mandates compliance not only within the
    EU's borders but also from any provider, distributor or user of AI
    systems operating in the EU or offering AI solutions to its market, even
    if the system was developed outside it. This may pose challenges for US
    and other non-EU providers of integrated products. Criticisms of the
    bloc's approach include its alleged failure to set a gold standard for
    human rights, its excessive complexity and lack of clarity, and its
    highly exacting technical requirements at a time when the EU is seeking
    to bolster its competitiveness.

    Regulatory middle ground

    The United Kingdom has adopted a lightweight framework that sits
    somewhere between the EU and the US. The framework is based on core
    principles such as safety, fairness and transparency. Existing
    regulators, such as the Information Commissioner's Office, are empowered
    to implement these principles within their respective domains.

    In November 2023, the UK founded the AI Safety Institute (AISI), which
    evolved from the Frontier AI Taskforce. AISI is tasked with evaluating
    the safety of advanced AI models, collaborating with major AI developers
    to conduct safety tests and promoting international standards. The UK
    government has also published an AI Opportunities Action Plan, outlining
    measures to invest in AI foundations, drive cross-economy adoption of AI
    and foster homegrown AI systems. Criticisms of the UK's approach to AI
    regulation include limited enforcement capabilities (all eyes, no hands),
    a lack of coordination across sectoral legislation and the absence of a
    central regulatory authority.

    Other major countries have also found their own place somewhere on the US-EU spectrum. Canada has introduced a risk-based approach with the proposed Artificial Intelligence and Data Act (AIDA), designed to balance innovation with safety and ethical considerations. Japan has emphasized a human-centric approach to AI, publishing guidelines to promote trustworthy development.

    In China, AI regulation is tightly controlled by the state, with recent
    laws requiring generative AI models to align with socialist values and
    undergo security assessments. Australia, meanwhile, has released an AI
    ethics framework and is exploring updates to its privacy laws to address
    emerging challenges.

    Establishing international cooperation

    As AI technology continues to evolve, the differences between regulatory
    approaches will become even more apparent. Whatever approach individual
    countries take to data privacy, copyright protection and other aspects,
    a more coherent global consensus on key AI-related risks is badly needed.
    International cooperation is crucial in establishing baseline standards
    that both address key risks and foster innovation.

    Currently, global organizations such as the Organisation for Economic
    Co-operation and Development (OECD), the United Nations and several
    others are working to establish international standards and ethical
    guidelines for AI. The path forward requires everyone in the industry to
    find common ground. And with innovation moving at light speed, the time
    to discuss and agree is now.





    ======================================================================
    Link to news story: https://www.techradar.com/pro/mitigating-ai-related-risks-soft-approach-hard-approach-or-something-in-the-middle


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)