With data analytics now an essential aspect of corporate America, the growth of Big Data to incorporate machine learning and artificial intelligence (AI) has left policymakers even more flummoxed. An event today on Capitol Hill explored the implications of this crossover and what rules, if any, regulators and lawmakers should pursue to tame its impact.

Much of the concern raised by activists and the Federal Trade Commission (FTC) so far has focused on bias and discrimination. How does bias enter these systems, and how can it be prevented? As Chris Calabrese, VP for policy at the activist group Center for Democracy and Technology (CDT), explained at a panel discussion hosted by the Internet Caucus Advisory Committee, algorithms are not inherently good or bad, but are merely a “recipe” or formula “for achieving a particular result.” Bias is something that “can creep in” from either “the ingredients” or the goal.

Leighanne Levensaler, Senior Vice President of Corporate Strategy at Workday, a provider of enterprise cloud applications for financial management, human capital management, and analytics, responded that those inherent issues make it important to keep human “cooks” involved in the recipe.
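To make the “recipe” metaphor concrete, here is a minimal, hypothetical Python sketch of bias creeping in through the “ingredients”: a screening rule fit to historically skewed hiring decisions simply reproduces the skew. The data, rule, and threshold are all invented for illustration.

```python
# Hypothetical illustration: bias entering through the "ingredients."
# Historical hiring decisions (the training data) favored group A
# regardless of skill; a rule fit to that history reproduces the skew.
history = [
    # (group, skill_score, hired)
    ("A", 55, True), ("A", 60, True), ("A", 48, True),
    ("B", 80, True), ("B", 72, False), ("B", 65, False),
]

def hire_rate(group):
    """Fraction of past applicants from this group who were hired."""
    decisions = [hired for g, _, hired in history if g == group]
    return sum(decisions) / len(decisions)

def screen(group, skill_score):
    # The naive goal ("hire people like those we hired before") bakes
    # the historical bias in; skill_score is never even consulted.
    return hire_rate(group) > 0.5

print(screen("A", 50))  # True:  low skill, historically favored group
print(screen("B", 85))  # False: high skill, historically disfavored group
```

Neither the ingredients nor the goal is malicious in itself, which is exactly why, per Levensaler, a human cook needs to notice what the recipe is actually producing.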

The FTC’s 2016 Big Data report, according to Neil Chilson, the agency’s Acting Chief Technologist, identified two potential injuries from Big Data analytics: inaccuracy and too much accuracy. But are companies getting the results they actually want, and are the algorithms correct? Chilson suggested that companies already “have a lot of incentives” to get these decisions and algorithms right.

Calabrese suggested that, since "algorithms are being used constantly across society," incentives for their proper use need to be considered for the people who create them. What if an algorithm does not match the user's expectations of what it can do?

Despite proposals at the FTC for “algorithmic transparency,” said Joshua New, policy analyst at the Center for Data Innovation, we’ve seen “no consensus on what it means.” It could mean complete openness and open sourcing, or it could mean holding companies accountable for both their algorithms and the outcomes they produce.

Transparency, according to Chilson, is one way to “make sure consumers are making informed choices,” but the overall goal of the FTC remains to prevent unfair and deceptive acts or practices. Algorithms are more accurate than ever, but “some can explain themselves and some can’t.” Chilson, however, pushed back on requirements for complete transparency: if the FTC required disclosure of a new and innovative algorithm, it would reduce the incentive to develop others, which would not benefit consumers.

CDT’s goals for algorithms are “auditability and explainability,” such that a consumer can understand what they can do to succeed when dealing with an algorithm (like how to improve a credit score or Google page ranking) and can interact with the algorithm in a meaningful way. Calabrese insisted that companies need to “prove that algorithms are working the way they are supposed to work,” which will allow regulators like the FTC “to hold companies’ feet to the fire.”
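What might explainability look like in practice? Below is a minimal, hypothetical sketch of an additive credit-scoring model: because each factor's contribution to the score is separable, the system can tell a consumer exactly which input is holding their score down. The factors, weights, and base score are invented for illustration.

```python
# Hypothetical additive scoring model: explainable because each factor's
# contribution to the final score can be reported separately.
WEIGHTS = {
    "on_time_payment_rate": 400,  # fraction of payments made on time
    "credit_utilization": -300,   # fraction of available credit in use
    "account_age_years": 10,      # average age of open accounts
}
BASE_SCORE = 300  # invented starting point

def score_with_explanation(applicant):
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    total = BASE_SCORE + sum(contributions.values())
    # The explanation: which factor is dragging the score down the most.
    worst = min(contributions, key=contributions.get)
    advice = f"Biggest drag on your score: {worst} ({contributions[worst]:+.0f} points)"
    return total, advice

applicant = {"on_time_payment_rate": 0.9, "credit_utilization": 0.8, "account_age_years": 4}
total, advice = score_with_explanation(applicant)
print(round(total), "|", advice)
# 460 | Biggest drag on your score: credit_utilization (-240 points)
```

A consumer given that explanation has meaningful recourse, such as paying down balances; an opaque model could not offer the same, which is the distinction Calabrese was drawing.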

Dean Garfield, president of the Information Technology Industry Council (ITIC), emphasized that this is all “an active area of research and should remain so.” Artificial intelligence is experiential in nature, “more like the way a toddler thinks” than an adult. “How can we explain the implications of an algorithm and its outcomes? What frameworks do we need to put in place to deal with potential wrongs from AI, and who is responsible if it does go wrong?” Garfield said that “we need to recognize that we are still in the learning phase,” rather than rushing to create rules.

More and more decisions will be made autonomously in the future.

Not all algorithms are created equal, Levensaler pointed out. Big Data finds or produces patterns and AI can offer recommendations, but it is all just more input into “better and more informed decisions.”

Calabrese responded that “we want to improve on the human decision making.” However, Chilson worried that we sometimes “falsely assume” that AI means smart, including morally, when systems can only be that (well, or at all) if they are programmed and taught well.

The FTC’s focus, for now, is on “specific harms we want to stop” and “what laws can or can't already work” to address those harms, and then what changes might be needed. In conclusion, Chilson stressed the need to look at all existing policy tools before “launching something and calling it AI legislation.”
