With successfully trained AI technology, a company essentially creates a new worker. But where to begin?

In my last post, I discussed the potential impacts artificial intelligence (AI) technologies are having on our society today. I also outlined just what makes AI technologies “responsible.” In this post, I will get a little more specific as to how businesses can develop and deploy AI technologies on a foundation of responsibility.

Ethical considerations around responsible AI

As I said in my last post, any business looking to capitalize on the potential of AI should also acknowledge the impact the technology is likely to have on people and society as a whole.

For businesses, this means changing the way they view AI: not as systems that are merely programmed, but as systems that learn. AI built as a fixed program is useful only for a finite set of tasks, while learning-based AI has a much wider repertoire.

Raising AI requires addressing many of the same challenges faced in human education and growth, including:

  • Fostering an understanding of right and wrong.
  • Imparting knowledge without bias.
  • Building self-reliance while emphasizing the importance of collaborating and communicating with others.

By taking on the responsibility of “raising” AI, companies can create portfolios of AI systems with varied skills. Once AI systems are trained, these skills can be redirected throughout the workforce as needed, and remain available to the company as long as it needs them.

For this reason, a company’s AI needs to be aligned with the company’s core values and ethical principles. In doing so, the company builds trust with its consumers and with society.

To develop and use AI in a responsible way, businesses should take several factors into consideration, including:

  • Bias, drift and other unintended consequences
  • Growth vs. fixed mindset
  • Trust and transparency
  • Privacy
  • Diversity

The responsible AI imperative

To help businesses integrate these factors into AI design from the beginning, Accenture has developed a practical approach to responsible AI.

This approach addresses the imperative to:

  • Design—architect and deploy AI with trust (e.g., privacy, transparency and security) built in by design, including building systems that produce “explainable” AI.
  • Monitor—audit the performance of AI against key value-driven metrics, with respect to algorithmic accountability, bias and cybersecurity.
  • Reskill—democratize AI learning across an enterprise’s stakeholders, emphasize augmentation over replacement, and reskill the workforce displaced by robots (more on this in my next post).
  • Govern—create the right framework to allow AI to flourish, anchored to industry and society’s shared values, ethical guardrails and accountability frameworks.
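To make the monitoring point a bit more concrete, here is a minimal, purely illustrative sketch of one value-driven metric an audit might track — the demographic parity gap, i.e., the largest difference in positive-outcome rates between groups. The function name and example data are my own assumptions for illustration, not part of Accenture’s approach:

```python
# Illustrative only: a minimal bias check comparing a model's
# positive-outcome rates across demographic groups.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rate between groups.

    outcomes: list of 0/1 model decisions (1 = positive outcome)
    groups:   list of group labels, parallel to outcomes
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + outcome, total + 1)
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

# Example: the model approves 3 of 4 applicants in group "a"
# but only 1 of 4 in group "b".
gap = demographic_parity_gap(
    outcomes=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(gap)  # 0.5 — a large gap that should trigger a closer audit
```

In practice a monitoring pipeline would compute a metric like this on a schedule and alert when the gap drifts past an agreed threshold.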

How can businesses meet the responsible AI imperative?

It is essential that businesses view responsible AI as a collective effort. Business and government leaders should proactively address the critical issues AI raises by inventing new models and approaches built on the principles of responsible AI.

To meet the responsible AI imperative, Accenture encourages businesses to:

  • Emphasize education and training.
  • Reinvigorate a company’s code of ethics.
  • Help create adaptive, self-improving regulation and standards to keep pace with technological change.
  • Establish sound security practices.
  • Integrate human intelligence with machine intelligence by reconstructing work to take advantage of the respective strengths of each.

Responsible AI is a collective effort.

In my next post, I will take a look at the impact responsible AI is likely to have on the workforce.

Until then, I encourage you to access our Accenture Technology Vision 2018 report. Please also take a look at The Responsible AI Imperative by my colleague Deb Santiago.

