AI and Business Ethics – A Matter of Trust?

Artificial Intelligence (AI) algorithms already drive critical real-world business processes and decisions: setting prices on consumer products, selecting candidates for job interviews, judging who gets a loan, or aggregating and synthesizing data for corporate reporting and ESG ratings. It is the IT sector and research communities, not governments, that set the pace, including when it comes to defining the basis for (self-)regulation. The most immediate concern about AI may be its inherent bias.

I talked to Dr Clara Neppel, Senior Director Europe at the Institute of Electrical and Electronics Engineers (IEEE) in Vienna, about the ethical and regulatory aspects of AI technology with a view to corporate reporting. Representing more than 423,000 members in over 160 countries, the IEEE is the world’s largest technical professional organisation dedicated to advancing technology for the benefit of humanity.

ML: The tech sector and research communities are setting the pace for AI technology. How can we make sure that its use will drive sustainable development based on shared human values rather than the opposite?

CN: The increased public interest in intelligent technologies opens a window of opportunity to reinforce the critical, self-reflective discourse and action within techno-scientific communities. While AI needs to serve the pragmatic functions of law and engineering, it must also reflect the European cultural values we wish to embed in the cities, systems, and devices our society uses today.

Those involved in technology development and deployment should engage to ensure a broader understanding of the impacts its adoption will have on individuals and society. We therefore urge building permanent bridges of dialogue with cross-sector experts who bring a holistic and cultural perspective, including not only technical experts but also economists, sociologists, philosophers, spiritual leaders, artists, and the public at large.

Such self-reflection and cross-fertilization would make the dynamics of technology creation and evolution more stable and predictable. It would also help ensure that the rapid evolution of technical systems does not develop into a threat to social cohesion, to the rights of individuals, and to democracy and the rule of law as its practical expression.

ML: Considering the fast pace and transformative potential of AI technology, who is taking ownership of providing a regulatory framework at the global level?

CN: There is definitely an emerging demand for ethical and legal frameworks, standards, and policy in the field of Autonomous and Intelligent Systems (A/IS). There are already initiatives from various institutions, from the EU, OECD, and UN to national governments and standardization bodies. We believe that the European Union’s High-Level Expert Group on Artificial Intelligence is a step in the right direction, and we are proud to be represented by Raja Chatila, Chair of the IEEE Global Initiative on Ethics of A/IS. Practical actions and outcomes could include bottom-up standards, professional guidelines, and codes of ethics that prioritize societal and ethical considerations over mere functionality and speed to market.

ML: How can we make sure that AI algorithms only source accurate data for materiality analyses and corporate reports? Do we need “deep learning” capabilities to assess the validity of data?

CN: What makes data “valid”? The answer is complicated and almost always tied to an understanding of something messier and uglier than the world of spreadsheets, tables, and columns: the real world. Is data about a specific mining project “valid”? We have to go into the mine to measure and audit things – that is often the only way to find out.

It may, and should, be possible to train algorithms to detect “bad data”. But that would have to be done by deducing flaws from the context of the data, such as inconsistencies or formatting errors, rather than by checking against “valid” information from the real world the data comes from. This is a serious problem. The IEEE therefore supports proper regulation for the collection of, access to, and control over personal data, similar to the European Union’s General Data Protection Regulation. We also urge the development of initiatives toward new models of data handling and exchange that rely on distributed communities of trust rather than centralized control.
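To make the idea concrete, here is a minimal sketch of what such context-based detection might look like. This is an illustration only, not an IEEE method; the field names (year, unit, scope1_emissions) and thresholds are hypothetical. Note that the checks flag only internal inconsistencies and formatting anomalies, never ground truth:

```python
# Minimal sketch of context-based "bad data" detection.
# Field names (year, unit, scope1_emissions) are hypothetical examples;
# real ESG schemas and plausibility thresholds would differ.

def find_suspect_records(records):
    """Flag records whose context (format, internal consistency) looks wrong.

    This cannot verify values against the real world; it only spots
    contextual red flags such as inconsistencies and malformed fields.
    """
    problems = []
    # Consistency check across the data set: mixed units are a red flag.
    units_seen = {r.get("unit") for r in records}
    if len(units_seen) > 1:
        problems.append(("dataset", f"mixed units: {units_seen}"))
    for i, r in enumerate(records):
        # Formatting check: emissions should be a non-negative number.
        value = r.get("scope1_emissions")
        if not isinstance(value, (int, float)) or value < 0:
            problems.append((i, "malformed or negative emissions value"))
        # Plausibility check: a year outside the reporting window is suspect.
        if not 2000 <= r.get("year", 0) <= 2025:
            problems.append((i, "implausible reporting year"))
    return problems

if __name__ == "__main__":
    sample = [
        {"year": 2017, "unit": "tCO2e", "scope1_emissions": 1200.5},
        {"year": 1917, "unit": "tCO2e", "scope1_emissions": -3.0},
    ]
    for where, reason in find_suspect_records(sample):
        print(f"record {where}: {reason}")
```

A record that passes every such check can still be wrong about the real mine; the checks narrow the search for bad data, they do not replace the audit.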

ML: How can we make sure that ESG data providers and rating agencies deploy AI technology without bias and black-box mentality?

CN: This would require general regulation and standards for the collection and use of data. One of our standards initiatives, on Algorithmic Bias Considerations, provides developers of algorithms for autonomous or intelligent systems with protocols to avoid negative bias in their code. Bias can include subjective or incorrect interpretations of data, such as mistaking correlation for causation. The standard will also include benchmarking procedures and criteria for selecting validation data sets, for establishing and communicating the application boundaries for which the algorithm has been designed, and for guarding against unintended consequences. In the meantime, I am confident that ESG data providers will take pride in being fully transparent about the origin and weighting of the data they offer.
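By way of illustration only (a rough sketch, not the protocol of the IEEE standard itself), one common benchmarking check compares an algorithm’s positive-outcome rates across groups in a validation data set, a “demographic parity” style test:

```python
# Rough sketch of a bias benchmark: compare positive-outcome rates per group.
# This illustrates the general idea of validating a model for bias, not the
# specific protocols of the Algorithmic Bias Considerations standard.

from collections import defaultdict

def outcome_rates_by_group(predictions, groups):
    """Return the share of positive predictions for each group label."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(rates):
    """Largest difference in positive rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Hypothetical loan-approval predictions on a validation set.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
    rates = outcome_rates_by_group(preds, groups)
    print(rates, "gap:", round(parity_gap(rates), 2))
    # A large gap does not prove unfairness, but it flags the model
    # and its training data for human review.
```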

ML: How do you envision the further development of AI technology 10 years from now in terms of opportunities and risks?

CN: AI technologies have enormous potential to serve people and even help solve global challenges. We have to make sure that they advance humanity as a whole. Progress in intelligent technologies should not further increase inequality and social tensions through a hyper-concentration of wealth and power among a tiny class of privileged people, further marginalizing and disenfranchising ‘the rest of the world’. In general, the intelligence of technical systems should expand our space of autonomy and self-determination, not diminish it.

This interview was first published by the Center for Corporate Reporting in “The Reporting Times” (#13) on October 1, 2018.


Dr Clara Neppel is the Senior Director of the IEEE European office in Vienna, where she is responsible for the growth of IEEE’s operations and presence in Europe, focusing on the needs of industry, academia, and government. She holds a Ph.D. in Computer Science from the Technical University of Munich and a Master’s in Intellectual Property Law and Management from the University of Strasbourg.

