In a post early last year, I raised the issue of privacy rights in the use of big data. After attending the AI (artificial intelligence) Summit in New York City in December, I believe it is necessary to extend that call to the wider spectrum of technology under the AI banner, including machine learning. There is no question that increased computing power, reduced costs, and improved developer skills have made machine learning programs more affordable and powerful. As discussed at the conference, the various facets of AI technology have reached far beyond financial services and fraud detection into numerous aspects of our lives, including product marketing, health care, and public safety.
In May 2018, the White House announced the creation of the Select Committee on Artificial Intelligence. The main mission of the committee is "to improve the coordination of Federal efforts related to AI to ensure continued U.S. leadership in this field." It will operate under the National Science and Technology Council and will comprise senior research and development officials from key government agencies. The White House's Office of Science and Technology Policy will oversee the committee.
Soon after, Congress established the National Security Commission on Artificial Intelligence in Title II, Section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019. While the commission is independent, it operates within the executive branch. Composed of 15 members appointed by Congress and the Secretaries of Defense and Commerce, including representatives from Silicon Valley, academia, and NASA, the commission is charged with reviewing "advances in artificial intelligence, related machine learning developments, and associated technologies." It is also charged with examining technologies that keep the United States competitive and with considering the legal and ethical risks.
While the United States wants to retain its leadership position in AI, it cannot overlook AI's privacy and ethical implications. A national privacy advocacy group, the Electronic Privacy Information Center (EPIC), has been lobbying hard to ensure that both the Select Committee on Artificial Intelligence and the National Security Commission on Artificial Intelligence obtain public input. EPIC has asked these groups to adopt the 12 Universal Guidelines for Artificial Intelligence released in October 2018 at the International Conference of Data Protection and Privacy Commissioners in Brussels.
These guidelines, which I will discuss in more detail in a future post, are based on existing regulatory guidelines in the United States and Europe concerning data protection, human rights doctrine, and general ethical principles. They stipulate that any AI system with the potential to affect an individual's rights should be accountable and transparent, and that humans should retain control over such systems.
As the strict privacy and data protection elements of the European Union's General Data Protection Regulation take hold in Europe and spread to other parts of the world, I believe that privacy and ethical concerns will come under a brighter spotlight and that AI will be a major topic of discussion in 2019. What do you think?
By David Lott, a payments risk expert in the Retail Payments Risk Forum at the Atlanta Fed