
AI, the Lawless Guest in Our Pockets

How regulations will shape the industry in the near future 

I’m sure you’ve heard of Artificial Intelligence. In fact, I’m sure you’re sick of hearing about it at this point. While I’ll personally never get my fill of AI chatter, I understand where you may be coming from.  

AI is everywhere – the digital assistants in our homes, the navigation systems in our cars routing us to our next destination, and even our phones, which unlock via facial recognition so we can use even more AI-enabled systems.

It’s hard to believe that, while AI has existed in some form since the 1950s, just a decade ago the average person considered the technology to exist solely within the realm of science fiction. As with most emerging technologies, development and adoption have preceded regulation, which is only now catching up. And that’s a good thing: regulation is the key to keeping consumers’ data, health and finances secure.

Currently, there are no comprehensive federal laws for AI in the United States. In this void, individual states have begun proposing bills to regulate the use of AI within their borders, which has led to widespread confusion across the country. A state-by-state approach is likely to prove contradictory for developers and difficult to enforce, meaning we may see nationwide regulation proposed in the near future.

Understanding what these regulations will look like will be critically important for each industry, as they will impact which AI systems can be deployed, how they can be used and more. While domestic regulations are still uncertain, the United States may follow the European Union’s current proposal of risk-based regulations.  

The EU Commission is proposing a risk-based approach to establish the first-ever legal framework for AI applications. The proposal aims not only to protect the fundamental rights of citizens and their data, but also to give AI developers clear obligations and regulations for AI development and deployment. The risk-based approach has four tiers: Minimal Risk, Limited Risk, High Risk and Prohibited. Most marketing applications will likely fall within Minimal or Limited Risk, although certain tools or applications could fall within the High Risk category. Risk designation is an ongoing process with several considerations at play, so you may need to examine your company’s specific tools and use cases to determine how these regulations apply.

  1. Minimal Risk 
    Minimal or no-risk applications comprise the majority of AI deployments. Applications within this tier will not be regulated.  

Ex.: AI-enabled video games, recommendations from streaming services, plug-ins like ad blockers or email spam filters.

  2. Limited Risk
    Limited-risk applications typically carry transparency obligations for developers and/or content producers.

Ex.: Chatbots or deepfakes, which require disclosure that the end user is chatting with an AI bot, or that the video they’re seeing is not actually of a real person.

  3. High Risk
    High-risk applications are the most varied and fall into the most complex category within the risk-based approach. This tier will be highly regulated, requiring adequate risk assessment and data governance practices, robust and secure data, clear and adequate information for the user, and activity logging, among other obligations.

Ex.: Applications considered to have the potential for a significant impact on health, safety or finances. High-risk applications span critical infrastructure, education, legal processes, employment and law enforcement (just to name a few).  

  4. Prohibited
    Prohibited, or “unacceptable risk,” applications are a small group of AI applications that will not be allowed within the EU.

Ex.: Social scoring, remote biometric identification within publicly accessible spaces (except by law enforcement under certain conditions), deployments that may promote or encourage dangerous or violent behavior.  

This tiered approach allows for advancement within the AI space without placing unnecessary hurdles in front of benign applications, all while ensuring the safety of users of riskier applications.

At the end of the day, designation will need to be specific to each application of AI, as uses within a single industry will vary greatly. Developers and users alike should ask: Where do your AI applications fall? Will you need to meet transparency obligations, or focus on building a robust data governance system to meet regulatory standards?
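To make that self-assessment concrete, here is a minimal Python sketch of how a team might triage an application against the four tiers. It is purely illustrative: the tier names come from the EU proposal, but the three boolean flags (`uses_social_scoring`, `affects_health_safety_or_finances`, `interacts_with_humans`) are simplified stand-ins for the actual legal criteria, and any real designation would require proper legal review.

```python
# Illustrative only: a simplified triage of an AI application into the
# EU proposal's four risk tiers. A real designation depends on legal
# analysis of the specific tool and use case, not three boolean flags.
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    uses_social_scoring: bool                # e.g., scoring citizens' behavior
    affects_health_safety_or_finances: bool  # e.g., hiring, credit, infrastructure
    interacts_with_humans: bool              # e.g., chatbots, deepfakes

def risk_tier(app: AIApplication) -> str:
    """Map an application to a tier, checking the strictest criteria first."""
    if app.uses_social_scoring:
        return "Prohibited"
    if app.affects_health_safety_or_finances:
        return "High Risk"       # risk assessment, data governance, logging, etc.
    if app.interacts_with_humans:
        return "Limited Risk"    # transparency obligations apply
    return "Minimal Risk"        # no new obligations under the proposal

# Example: a customer-service chatbot lands in Limited Risk, so
# disclosure that users are talking to an AI would be required.
print(risk_tier(AIApplication("support chatbot", False, False, True)))
```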



about the author

Matt Kaupa
Matt leads Luquire’s analytics practice, delivering insights and marketing performance analysis for our clients. He’s worn many hats throughout his career, working on digital analytics implementation, data visualization, lead generation forecasting and data mining. Joining us most recently from Publicis Groupe, he brings experience with brands such as UnitedHealthcare, Pella, Bread Financial, Ameriprise Financial and HealthPartners.
