Response to:
AI technology has been advancing more quickly than the regulations governing it, creating a murky ethical landscape for businesses and individuals. Without clear laws guiding AI design in the US, companies have historically relied solely on their own ethical standards and judgments of right and wrong when developing AI models.
However, this is about to change as the European Union finalizes its AI Act, which lays down guidelines and rules for virtual assistants, biometric recognition, and autonomous systems. The AI Act codifies a set of principles put forward by the European Commission to ensure “ethical AI”.
Specifically, the AI Act imposes requirements and restrictions on the use of artificial intelligence (AI), and particularly on machine learning (ML). Its provisions include a right to an explanation of decisions made by AI systems, mandatory human oversight for certain AI models, and prior testing and auditing of a model’s accuracy and bias.
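To make the testing-and-audit requirement concrete, here is a minimal sketch of what a pre-deployment check of a model’s accuracy and group bias could look like. It assumes a scikit-learn logistic regression on synthetic data; the sensitive “group” column, the accuracy bar, and the parity-gap threshold are hypothetical placeholders for illustration, not values taken from the AI Act.

```python
# Hypothetical pre-deployment audit: accuracy, a simple group-fairness gap,
# and a per-feature explanation of one decision. Data and thresholds are
# illustrative placeholders, not requirements from the AI Act.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic tabular data: features, a binary label, and a binary
# sensitive attribute (e.g., an applicant demographic group).
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, g_train, g_test, y_train, y_test = train_test_split(
    X, group, y, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# 1. Accuracy check: does the model meet a minimum performance bar?
accuracy = accuracy_score(y_test, pred)

# 2. Bias check: demographic parity gap -- the difference in positive-
#    prediction rates between groups. A large gap flags disparate impact.
parity_gap = abs(pred[g_test == 0].mean() - pred[g_test == 1].mean())

# 3. Explanation check: for a linear model, coefficient * feature value gives
#    a simple per-feature rationale for an individual decision.
contributions = model.coef_[0] * X_test[0]

print(f"accuracy = {accuracy:.3f}, demographic parity gap = {parity_gap:.3f}")
print(f"feature contributions for first test case: {np.round(contributions, 3)}")
if accuracy < 0.8 or parity_gap > 0.05:
    print("Audit flag: review model before deployment.")
```

In practice, checks like these would be run against real evaluation data and thresholds set by a compliance team, and logged so that audit results can be produced on request.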
The AI Act is being watched closely in the US, as its comprehensive rules may pave the way for similar legislation there. The European Union is already advising the US government on how to regulate its own AI market, suggesting that the US look not only to the EU’s rules but also to those recently adopted in Japan and China.
It’s clear the US will eventually need to develop a regulatory system of its own, and US businesses should prepare now to stay ahead of the curve before AI regulation is enacted. Understanding the EU’s AI Act is a good starting point, as its comprehensive rules give a sense of the areas that future US regulations might focus on. For US businesses, now is the time to get familiar with these rules, so that when regulations arrive, they are ready to comply and positioned to take advantage of the burgeoning AI market.