AI has been a hot topic since the end of 2022. Ever since OpenAI put ChatGPT out into the wild, generative AI and the future it could shape have been discussed all over the world. Where investors see a new way of making money, people of a more traditional bent have a different perspective: they see a computer growing so smart that it will eventually end its creators and rule over them.
These tend to be the loudest voices in the room. When generative artificial intelligence is discussed, most people recognize that the technology is just a tool and doesn't have a mind of its own. Users need to "do good" with it. And since "good" varies from person to person, democratic governments need to step in and set the rules.
How Countries Are Regulating AI:
Much is being said about generative AI's ability to create content, with applications that can produce marketing materials and translate podcasters' voices into multiple languages. But with huge promise comes enormous danger: the spread of false information at scale, particularly in the form of deepfakes; the use of authors' work without credit; job losses driven by automation; and more.
Because of these potential drawbacks, governments around the world are working to develop artificial intelligence legislation that protects safety and fair use while still fostering innovation.
USA:
So far, the US government has allowed tech businesses to devise their own AI safeguards. Lawmakers argue, however, that regulation is required, and in 2023 they held numerous talks with prominent AI businesses, from OpenAI to Nvidia. Lawmakers have also proposed issuing licenses and certifications for high-risk AI models where appropriate.
Meanwhile, the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission, and the Federal Trade Commission have stated that existing laws already cover many AI uses.
United Kingdom:
The United Kingdom, home to DeepMind and Synthesia, wants to avoid heavy-handed laws that might hinder innovation. According to Reuters, regulation would aim to ensure safety, fairness, accountability, and transparency.
The government wants to play a leading role in regulating AI. It plans to convene its inaugural AI Safety Summit on November 1 and 2. "The UK has long been home to the transformative technologies of the future," stated UK Prime Minister Rishi Sunak. "To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead."
Unlike US regulators, who have discussed creating an independent body to license high-risk AI models, the UK intends to hand responsibility for governing AI to its existing regulators for human rights, health and safety, and competition rather than establish a new body dedicated to the technology. Some academics have questioned the idea of an independent regulator, noting that standing up a new body would take time.
European Union (EU):
Since 2021, the EU has been working toward enactment of the AI Act, which would be the first comprehensive AI law in the Western world. The bloc has a track record of regulating the tech industry more tightly than other jurisdictions do.
The act proposes classifying AI systems by risk. Higher-risk systems, such as AI hiring tools and exam-scoring software, would face more stringent compliance standards, such as data validation and documentation, than lower-risk ones. The proposal also prohibits intrusive and discriminatory uses of AI, such as real-time remote biometric identification in public spaces and predictive policing systems based on profiling.
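To make the tiered approach concrete, here is a minimal sketch, in Python, of how a compliance team might encode the Act's risk categories internally. The tier names follow the draft Act's structure, but the example systems and the obligations attached to each tier are illustrative assumptions, not text from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers modeled on the draft EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance: data validation, documentation, oversight"
    LIMITED = "transparency duties, e.g. disclosing AI-generated content"
    MINIMAL = "no extra obligations"

# Hypothetical mapping of example systems to tiers, for illustration only.
EXAMPLE_SYSTEMS = {
    "real-time public biometric identification": RiskTier.UNACCEPTABLE,
    "predictive policing based on profiling": RiskTier.UNACCEPTABLE,
    "AI hiring tool": RiskTier.HIGH,
    "exam scoring software": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The point of the tiered design is exactly what this toy mapping shows: obligations scale with risk, so a spam filter and a hiring tool are not regulated the same way.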
Thierry Breton, the European Commissioner for the Internal Market, said last month that the EU is building an "AI Pact" to help companies prepare for the AI Act's implementation, and that entrepreneurs, not only Big Tech, must be included in conversations about AI governance. Startups have already complained that the EU's planned rules are overly restrictive.
China:
As with most technologies, China has strict AI policies: generative AI providers must undergo a security assessment, and AI tools must uphold "core socialist values." The restrictions do not apply to generative AI developed solely for use outside the country. Chinese tech businesses such as Tencent, Baidu, and Huawei have research and development centers in Silicon Valley.
China's largest internet companies, including Alibaba, JD.com, and Baidu, have introduced AI chatbots to compete with OpenAI's ChatGPT. According to CNBC, China has already stated that it wants to be the world leader in AI by 2030 and has laid out plans to commercialize AI in fields ranging from smart cities to military applications.
Japan:
Japan is leaning toward lighter restrictions on the use of artificial intelligence. This week, Prime Minister Fumio Kishida pledged that "the next economic package will include funds for AI research in small and medium-sized businesses," which might help Japan regain its technological edge.
The government has also stated that "utilizing copyrighted photos to train AI models does not infringe copyright rules," implying that generative AI providers can use copyrighted work without obtaining permission from the images' owners. "In Japan, works for information analysis can be used regardless of the method, whether for non-profit purposes, for profit, for acts other than reproduction, or for content obtained from illegal sites," stated a member of the Constitutional Democratic Party of Japan in the House of Representatives.
However, the same spokesperson noted that using images against the will of their copyright owners is problematic from a rights-protection standpoint and emphasized the need for "new regulations to protect copyright holders."
Brazil:
Lawmakers in Brazil have begun drafting AI legislation that would require vendors of AI systems to submit a risk assessment before rolling a product out to the public.
Regardless of an AI system's risk rating, regulators propose that people affected by it have the right to an explanation of a decision or recommendation within 15 days of requesting one. Academic experts caution that it is genuinely hard to explain why an AI system produces a given output.
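How hard is hard? Even the common stopgap, post-hoc "explanations" such as permutation importance, only says which inputs mattered, not why the model decided as it did. The sketch below, a toy Python example using scikit-learn on synthetic data, is purely illustrative of that gap.

```python
# A minimal sketch of a post-hoc "explanation": permutation importance
# on a toy model. It ranks input features by how much shuffling them
# hurts accuracy -- a far cry from the human-readable justification a
# 15-day right-to-explanation seems to promise.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

A ranked list of feature scores is useful to an auditor, but it is a long way from the plain-language account of a decision that Brazil's proposal envisions.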
Israel:
Israel, which has been in talks with Israeli and US-based tech companies, has proposed regulation centered on "responsible innovation" that safeguards the public while simultaneously boosting the country's high-tech industry.
The government has stated, if vaguely, that its AI policy will rely on "soft" regulatory measures and on the adoption of ethical principles similar to those accepted globally. Regulation will be implemented "in the coming years."
Italy:
In March, Italy became the first Western jurisdiction to ban ChatGPT, citing unlawful data collection. Since then, the government has earmarked 30 million euros (about $33 million) to help unemployed people, and workers in roles at risk of automation, improve their digital skills.
Why AI Should Not Be Regulated:
Much has been written about tech billionaires asking for AI to be regulated. One thing needs to be made clear: that is just PR. And if rules do come, they want them written on their terms. Still, they and their supporters have made some excellent points over the past few months. Here are the best of them.
1. Stopping Progress and New Ideas:
One could argue that rules will slow AI progress and breakthroughs, and that companies can only stay competitive in the international market if they are free to experiment and learn. We have yet to see solid proof that this is true, though; it would require showing that unrestrained innovation benefits society as a whole, and it's not all about making money. The EU may well lag behind China and the US in minting new stars and billionaires. Is that so bad? Europeans still have social safety nets, free health care, maternity leave, and six weeks of vacation every year. If rules mean a multimillionaire can't become a billionaire, then so be it.
The point about domestic, rather than international, competitiveness is much more relevant to the current conversation: rules can make it harder for new companies to start by imposing high costs, standards, or requirements on developers or users, which strengthens incumbents. The EU already saw this happen when the GDPR was implemented. Regulations will need to leave room for small businesses to experiment, something already being discussed at the EU level. And because AI's power grows with scale, small SMEs can only do limited damage.
2. Complex and Difficult Implementation:
Rules about world-changing tools often have to be broad and open-ended to be useful at all. That can make them hard to follow and enforce consistently from place to place, especially since the field has no settled norms; after all, what are risk and ethics if not grounded in culture?
Because of this, balancing international standards against national sovereignty is a very touchy issue. AI works across borders, and regulating it requires countries to cooperate and coordinate, which differing legal systems and cultural differences make hard. Or so the argument goes.
However, only a few people are asking for a single worldwide rule. Whatever advocates of a "New START" for AI may say, AI is not the atomic bomb; like everything else, it will be governed by its own patchwork of local rules. The most we can ask is that everyone agrees on what risks the technology poses and works together to fill the gaps between and within regional rules.
3. Overregulation and Unintended Consequences:
We also know that rules rarely keep up with how quickly technology changes, and AI is changing fast: new techniques and uses for it appear every day. We need to stay agile enough to deal with new problems, risks, and opportunities as they come up. Governments find it challenging to keep pace with new technologies and apply rules to them, but that has never stopped anyone, and the world is still standing.
At the same time, governments need to make sure that businesses unrelated to AI don't get caught up in ill-fitting rules, which could have perverse results. We wouldn't want, say, environmental cleanup to stall because a carbon capture system uses something like generative AI to suggest areas that need attention.
Avoiding excessive red tape and paperwork is good, but that doesn't mean doing nothing. The EU's risk-based approach is an excellent way to deal with these problems: the risk definitions are clear enough to cover everyone in the field, yet they can be amended as artificial intelligence changes.
On the whole, controlling AI does plenty of good and little harm.
Why AI Should Be Regulated:
There are many reasons to regulate generative AI, especially when it comes to protecting people who are vulnerable or poorly covered by existing law. It's easy not to care about large-scale, automated discrimination if you've never been discriminated against.
1. Ensuring Ethical AI Use:
To begin, current digital laws clearly need to be adapted to apply to AI technology. That starts with protecting users' privacy (and their information). AI companies working with data-hungry models should invest in solid cybersecurity… and give up some revenue, because user data shouldn't be sold to other companies. Left unregulated, American companies seem to get this wrong naturally and willfully.
As the AI Act envisions, it is also essential for tech companies to ensure that systems handling sensitive matters have no bias or discrimination built in. That means (a) making sure none is added on purpose and (b) removing any latent biases so they don't get replicated at scale. This is non-negotiable, and if regulators need crash tests to verify it, so be it.
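What might such a crash test look like? One of the simplest audit metrics is the demographic parity gap: the difference in positive-outcome rates between groups. The sketch below is a minimal illustration in plain NumPy; the predictions, groups, and 10% threshold are all hypothetical.

```python
# A minimal bias "crash test" sketch: demographic parity difference.
# Hypothetical hiring-model outputs; a real audit would use far more
# metrics (equalized odds, calibration, ...) and real evaluation data.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = model says "hire"
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()
rate_b = predictions[group == "B"].mean()
gap = abs(rate_a - rate_b)

print(f"group A hire rate: {rate_a:.0%}, group B: {rate_b:.0%}, gap: {gap:.0%}")
if gap > 0.10:  # illustrative threshold a regulator might set
    print("FAIL: disparity exceeds threshold; investigate before deployment")
```

A regulator-mandated version would of course be far more rigorous, but the principle is the same: measurable disparity, measured before deployment.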
More generally, rules can help users, developers, and everyone else with a stake in generative AI trust one another, stay transparent, and remain accountable. If everyone discloses where AI outputs come from, what they're used for, and what their limits are, we can all make better decisions and trust the decisions of others. Society as a whole needs this.
2. Protecting Human Rights and Safety:
Regulations must go beyond the "basics" to protect everyone from AI's many safety risks, most of which will be caused by people. Bad actors can use generative AI to spread false information or make deepfakes. These things are straightforward to do, and companies seem unable to stop them, mainly because they don't want to tag material made by AI. While our teenage daughters ask why we didn't act sooner, the next election may rest on new rules being put in place.
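Tagging AI-made material is not technically hard; standards such as C2PA already define signed provenance manifests. As a rough illustration of the underlying idea, the sketch below attaches a simple disclosure record to generated content using only Python's standard library; the record fields are assumptions for illustration, not any standard's schema.

```python
# A minimal sketch of labeling AI-generated content with a provenance
# record. Real systems (e.g. C2PA) use signed, tamper-evident manifests;
# this toy version just binds a disclosure label to a content hash.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> str:
    """Return a JSON provenance label for a piece of generated content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,               # e.g. model name and version
        "ai_generated": True,                 # the disclosure itself
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(make_provenance_record(b"An AI-written campaign ad...", "example-model-v1"))
```

The obstacle, in other words, is incentive rather than engineering, which is precisely where regulation has leverage.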
Additionally, we should not allow people to physically hurt others using generative AI; models have reportedly been able to describe the best way to make a dirty bomb. Again, if a company can't curb this as best it can, I see no reason to keep letting it operate in its current form.
All of this without even mentioning AI-driven war or autonomous weapons, which should never be made available. But that worst-case scenario is so awful that we often use it to hide AI's many other problems: don't worry about data privacy when the Terminator is just around the corner. Don't let the doomers take your attention away from the dull but essential truth: without solid AI rules that deal with these issues, society may die of a thousand cuts instead of one single weaponized blow.
This is why we need businesses to commit to building systems that are ethical and aligned with human values. That is easier said than done, but having a goal is an excellent place to begin.
3. Reducing the Social and Economic Impact:
Some critical issues are not fully addressed by the AI Act (or any other proposed rule). They will need closer attention in the coming years, but by their nature they are hard to regulate without overreaching. That doesn't make regulation any less necessary.
First, there needs to be fair pay for people whose data is used to train algorithms that will make a great deal of money for a few. If we don't do this, we'll keep making the same mistakes and widening the income gap. It will be hard, because few legal precedents match what's happening now.
It will also be essential to deal with job loss and unemployment caused by generative AI. AI is expected to change most jobs, and as work becomes more automated, unemployment often rises. A report from BanklessTimes.com says that AI could displace 800 million jobs by 2030, about 30% of the world's workforce.
"AI could also shift job roles and create new ones by automating some aspects of work while allowing humans to focus on more creative or value-adding tasks," optimists will say. But for others, automation has meant decades of hopelessness. We need a legal plan for jobs that will be automated or replaced by AI (retraining schemes, UBI, and the like).
Finally, it will be essential to protect the world's markets from AI-powered monopolies. Because of network effects, it is almost impossible to catch up to a big internet company without comparable data or computing power, and antitrust rules have barely changed in decades. This can't go on. Regulation here won't make companies less competitive; leaving things as they are could be worse for business.
Final Thoughts:
The game of rules and regulations has only just begun. Going forward, governments will have to work together to set up broad frameworks and to promote information sharing and cross-disciplinary collaboration.
These frameworks will need to evolve and interoperate to keep up with the latest developments in AI. Regular reviews and updates are essential, and sandbox settings are a good place to try new things.
Lastly, whether any rules are followed will come down to how the public is involved and how decisions are made. We need to include a wide range of stakeholders in discussions about regulation and bring the public into choices about AI policy. Getting the message across that this is being done for people, not to them, will help governments push back against pressure from tech companies.
The path to regulation is long; as of now, no major foundation LLM meets the requirements of the draft EU AI Act. In the meantime, China's rules focus on controlling content rather than risk, which makes it even harder for people to say what they want.
The game of rules and regulations has just begun. But now that we’ve started, everything is different.