
Can the world’s de facto tech regulator really keep AI in check?


Artificial intelligence is creeping into nearly every aspect of our lives. AI-powered software triages hospital patients to determine who gets what treatment, assesses whether an asylum seeker is lying or telling the truth on their application, and even turns out the occasional attempt at comedy. More recently, tools of this kind have helped killer robots select targets in the war in Ukraine. AI systems have been shown time and time again to be systematically biased, and those debates are only becoming more pressing as AI takes on an ever bigger role in our lives.

In typical technology fashion, AI-driven tools are advancing far faster than the laws that could theoretically govern them. But the European Union, the world’s de facto technology watchdog, is working to catch up, and plans to finalize its landmark AI Act this year.

The use of AI in policing and surveillance technology is one of the key sticking points in the ongoing negotiations. Software used by law enforcement and border protection agencies increasingly relies on tools like facial recognition and social media scrapers, which accumulate data on huge numbers of people and use that information to help decide, for instance, whether someone may cross a border or how long they must remain in detention.

The EU’s draft regulation is premised on the idea that such systems can pose serious risks to people’s rights and well-being. That is especially true when they are built by private companies that like to keep their code under tight wraps.

The AI Act aims to establish a framework for assessing the relative risks of different types of AI systems, sorting them into four tiers: unacceptable-risk systems, which are banned outright; high-risk tools, such as welfare-benefit systems and surveillance software; limited-risk systems, like chatbots; and minimal-risk systems, such as email spam filters.

But it has some surprising omissions. Dutch MEP Kim van Sparrentak, who represents the Greens, quickly noticed that the European Council had introduced a carve-out that would let law enforcement and immigration agencies keep using these tools on a broad scale. In early December, more than 160 civil society organizations released a statement expressing concern that the law fails to address the use of AI at borders, where it disproportionately affects people already on the margins of society, such as refugees and asylum seekers.

“The risk is that we create a world that keeps drinking the AI Kool-Aid without the right systems in place to prevent AI from causing [harm] to our fundamental rights,” said Van Sparrentak.

The AI Act may also face enforcement challenges. The regulation applies primarily to the companies and other entities that develop and design AI systems, not to the public bodies and other institutions that use them. A facial recognition system, for example, may have very different implications depending on whether it is deployed in a consumer context (say, tagging faces on Instagram) or at a border crossing to scan people’s faces as they enter a country.

“Even though the level of risk seems different in these two situations, the AI Act is aimed primarily at the developers of AI systems, with little due diligence on how the systems are actually used,” Karolina Iwanska, digital civic space advisor at the European Center for Not-for-Profit Law in The Hague, told me.

There has been much debate about how the proposed regulation will or won’t protect people’s rights, but that is only part of the picture. According to Michael Veale, a professor at University College London who specializes in digital rights, “the AI Act needs to be understood for what it is: a legislative and market instrument.” The reason the European Commission is acting here, Veale says, is that member states have been enacting their own national laws on AI, creating barriers to trade within the internal market. “The concern is that there will be different rules in each member state, making it impossible to trade AI systems across the bloc,” he said.

The EU’s effort to develop rules on AI aims to create a “harmonized market” for trading AI systems. “That, above all else, is the basic logic of the AI Act,” Veale told me.

Under the current draft of the law, high-risk tools include AI used in education, employment, or law enforcement. For high-risk AI, the law sets requirements for how new technologies are designed, labeled, and documented. For all other systems (those deemed not to be high risk), the law prohibits member states from regulating them at all. “This allows low-risk systems to move and trade freely within the bloc,” he said.

But Veale believes that goal rests on a simplification. “The idea that we ‘trade AI systems’ is a legislative fiction that ignores many of the practical realities of how AI business models work,” he said. “It’s not ‘let’s build the best human rights protections in the world.’ It’s ‘let’s remove trade barriers in the technology industry.’”

The regulation does not establish an independent body to review or evaluate these technologies. Instead, companies are expected to report on themselves truthfully. A quick look at Silicon Valley gives many people reason to believe that won’t work. Under the current draft, “you don’t even have to have a third-party private body check the documentation,” Veale said. “You just self-certify against the law’s standards and pinky-promise that you did it right.”

Iwanska was equally concerned about the certification requirements, especially for tools in the high-risk category. The regulation requires providers to develop risk management systems and to ensure that their training data is relevant, representative, and free of bias, which is the Achilles’ heel of such tools. There is a decade of research on this topic, from Latanya Sweeney’s seminal 2013 work on racism in Google’s search algorithm to the present day, when ChatGPT, the newest AI-powered chatbot, can slip into casual racism by weighing the value of people’s lives according to their ethnicity. AI tends to mirror our society: trained on our unjust realities and unrepresentative data samples, it harms some people far more than others.

Experts also worry that the regulation underestimates how complex these technologies are and how difficult it will be to change them once they are up and running. “There is an assumption that the systems can be fixed,” says Iwanska. Systemic biases, for example, are not so easily engineered away. It’s one thing to prevent biases from being coded into systems, or to ensure that systems are built on data that is representative of society and unbiased. But AI will always reflect its creators, and those creators are mostly wealthy white men.

Iwanska also says the drafters are paying only lip service to the real need for transparency and accountability around these tools. As it stands, the AI Act requires technology providers to disclose a system’s intended purpose, its developer, contact details, and a certificate number. But “there is nothing about the substance of how the system works, what criteria it uses, and so on. That’s a major shortcoming that we feel undermines public scrutiny,” she said.

The self-certification model is borrowed from other areas of European regulation, but few of them matter to society as much as AI governance does. Veale, too, was concerned about the pitfalls of this approach. “The rules deal with human oversight, bias, accuracy, and other fundamental rights issues,” he said. “These things are not only self-certified by companies with an interest in lightening their own burden, they are being worked out and elaborated right now, even before the law is passed, in an ongoing, completely closed, and antidemocratic process.”

Of course, the law is still under negotiation, so it’s impossible to know for sure how it will change public-sector use of AI. “The legislative process is still underway, so we won’t have a definitive answer for several months,” Iwanska said, and she’s still not sure what that process will produce. “[We] can expect the proposal to change significantly,” she said. “But it is not yet clear in which direction it will go: whether it will be improved or watered down.”

Alex Engler, a fellow in governance studies at the Brookings Institution, believes that where Europe leads, the world will follow. The European Union is a powerful consumer market of 450 million people, and in recent years it has managed to rein in Big Tech partly through regulatory moves, so Engler is confident that the EU’s AI Act will change how the makers of such systems operate worldwide. He already sees a Europe-wide backlash against AI-powered surveillance systems, one he expects EU market-wide regulation to reinforce. The European Data Protection Supervisor, for instance, has welcomed plans, as part of the proposed media freedom law, to ban military-grade spyware of the kind that has been used to monitor politicians and journalists. And in November 2022, the Italian data protection authority banned the use of facial recognition systems and other intrusive biometric analytics until the end of 2023, or until a law covering their use is adopted.

The EU’s legislation is part of a broader movement to draw boundaries around the development and use of AI systems. In the United States, the White House Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights after a year-long consultation with the public, experts, and industry. That followed the draft Algorithmic Accountability Act, introduced in Congress in March 2022. And in July 2022, the American Data Privacy and Protection Act moved out of committee with rare bipartisan support.

But Americans shouldn’t expect anything to change soon, Engler says, especially with a new Congress convening this year. “There is no evidence that something like the Algorithmic Accountability Act is gaining momentum, and there is a lot of skepticism about data and privacy protection laws,” he added.

One reason is the challenge of cutting through the complex morass that AI legislation presents. This is a global problem. “I don’t think you can write down a single set of rules that applies to all algorithms,” Engler said. “Can AI be regulated? If you’re hoping there will be a single law that solves the problem, no.” What has to happen instead, he said, is in a way harder and less flashy: “a government-wide change to improve our understanding of technology.”

Despite both the political and technical challenges policymakers have had to grapple with to reach consensus on the regulation, Dutch MEP Van Sparrentak believes it will be worth the effort. “Most importantly, when AI is used on them, people will no longer be left standing empty-handed in front of a computer,” she said. “They will be able to understand why a system has made certain decisions about their lives, and there will be transparency about it.”


