Can the US effectively regulate AI use?

While the US is still solidifying domestic AI legislation, global bodies such as the EU are finding common ground.



Artificial intelligence, once a new frontier associated with science fiction and futurism, is rapidly becoming commonplace.

Although some tech innovators may tout the benefits of AI, many global representatives across the political spectrum are reticent.


Countries with adversarial relationships, such as China and the U.S., are even hosting joint summits to tackle the issue head-on. Concerns range from global threats to national security to more domestic issues such as cyberbullying in schools.

The Biden administration’s proactive efforts have included televised congressional hearings, a white paper on AI use called the "AI Bill of Rights," and a bipartisan task force.

However, countries may still struggle to track and penalize AI misuse, especially abuse, within their own borders.

Although federal prosecutions have led to sentences for criminals who created generative AI pornography, many victims of deepfake material are left to seek their own path to justice while legislation solidifies at the state and federal levels.

However, reaching common ground and passing actual enforceable laws is possible, as shown by the European Union’s vote on AI.

The EU recently passed a landmark act outlining best practices for AI development and use, along with financial penalties for those who violate its policies.

The EU also created a Compliance Checker to help developers determine the level of risk an AI program could pose before it enters the EU market. Risk levels range from minimal, covering applications such as video games and spam filters, to limited, which carries transparency obligations such as informing users when they are interacting with a chatbot.

High-risk uses include “automated processing of personal data to assess various aspects of a person’s life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement,” according to the EU, while prohibited practices include “social scoring” and “compiling facial recognition databases.”

Reaching a global consensus on AI use may be unrealistic, but communities should continue to voice their concerns and questions so that AI’s current blind spots come into sharper focus.