Artificial intelligence has gone from a subject confined to computer science textbooks to a mainstream phenomenon. It has given us entertaining, engaging advances like chatbots and celebrity voice reproductions. However, AI also poses a threat to society as we know it, with potential disruptions to social norms, industries, and even the fortunes of tech companies. Some experts believe that AI could eventually surpass human intelligence.
A recent survey conducted by the Pew Research Center found that a majority of Americans, 52% to be precise, are more concerned than excited about the increased use of artificial intelligence. Privacy and the question of whether humans will remain in control of AI technologies top the list of concerns.
Governments around the world are now grappling with how to harness the transformative power of AI while mitigating its negative effects. Countries like Brazil and Japan have taken steps to clarify existing laws to protect data, privacy, and copyright. Israel and Japan have focused on providing guidelines for AI classification and use. Other countries, like the United Arab Emirates, have created working groups to establish best practices around AI and are seeking public input on regulations.
Despite calls for international cooperation in regulation and inspection from industry leaders like OpenAI, concrete laws regarding AI regulation are scarce. However, progress is being made in some countries. Let’s take a closer look at how different nations are addressing the challenges and questions surrounding AI.
In Brazil, a draft AI law has been proposed that outlines users' rights when interacting with AI systems. AI providers must disclose information about their AI products, and users have the right to know when they are interacting with an AI. They also have the right to an explanation of how an AI reached a specific decision or recommendation. AI developers must conduct risk assessments before bringing products to market, harmful AI systems are prohibited, and liability for any damage caused by an AI system rests with its developers.
China has also published draft regulations for generative AI and is seeking public input on the rules. These regulations emphasize that developers are responsible for the output created by their AI and impose restrictions on training data sourcing. AI services must generate true and accurate content. China has set ambitious goals for its tech and AI industries and aims to achieve world-leading levels by 2030.
The European Parliament has approved the AI Act, which categorizes AI systems into different risk levels. Systems posing unacceptable risk are banned, high-risk AI requires approval before going to market, and limited-risk AI must be labeled so users know they are dealing with AI. The act still needs approval from the European Council.
Israel has published a draft policy that seeks to regulate AI by prioritizing responsible innovation. The policy emphasizes respect for the rule of law, fundamental rights, human dignity, and privacy. It encourages self-regulation and a soft approach to government intervention, urging sector-specific regulators to intervene when needed.
Italy briefly banned ChatGPT due to concerns over user data collection but has since allocated funds to support workers at risk of being displaced by automation. The country aims to retrain workers and provide digital skills training to unemployed individuals.
Japan has adopted a soft-law approach, avoiding prescriptive regulations for AI use. Instead, the country is closely monitoring AI developments so as not to stifle innovation. AI developers in Japan currently rely on adjacent laws, such as those related to data protection, as guidelines.
Regulating AI is a complex and evolving process. Various countries are taking different approaches, each with its own set of challenges and considerations. As AI continues to advance, finding a balance between reaping its benefits and mitigating risks remains a crucial task for governments worldwide.