How Asian companies ensure ethical AI use
Data protection, transparency, UN guidance key to harnessing tech for 'social good'
Bayani S. Cruz 30 Aug 2024

Asian companies are ensuring the ethical use of artificial intelligence (AI) by protecting sensitive data, maintaining transparency and sticking to United Nations-level guidance and standards. However, there is still work to be done to help organizations adopt responsible and ethical AI practices, and to identify the enablers or assets that can help institutions, non-governmental organizations (NGOs) and small and medium-sized enterprises (SMEs) navigate ethical AI practices.

In general, the ethical use of AI is essential to ensure that these powerful technologies are developed and deployed in ways that benefit society as a whole, rather than causing harm or exacerbating existing problems.

The ethical use of AI in Asia was discussed on August 27 in a McKinsey & Co podcast titled “How Asia is harnessing AI for social good”. Hosted by Angela Buensuceso, a McKinsey communications specialist, the podcast also featured Ahmed El Saeed, UN Global Pulse’s regional head for Asia-Pacific; Ankit Bisht, a McKinsey partner and the leader of its digital practice; and Jennifer Echevarria, Globe Telecom’s vice-president of enterprise data and strategic services.

Globe Telecom, a leading telecoms provider in the Philippines, has established a governance framework for the use of AI in the workplace to ensure, Echevarria stresses, that it addresses real-world challenges.

“For data-driven organizations such as Globe, it’s very important that we protect sensitive data,” Echevarria adds. “It’s a constant balancing act. At the end of the day, it starts with us, and how we are using AI in our lives and in our work. We, of course, safeguard sensitive data because at the heart of privacy for us are the values of trust and respect. You realize that it takes years to build trust, but only minutes to lose it.”

At the regional level in Asia, the UN guidelines, El Saeed notes, have been developed to give practical steps for countries that wish to establish more robust frameworks for AI development or deployment when it comes to ethics and the responsible use of AI.

“Essentially, the one thing that I would like to highlight is that it is important to make sure that AI models introduce the concept of being inclusive by design,” El Saeed points out. “Making sure that the training data is responsive and reflects the realities of different groups is very important.”

Globe Telecom, Echevarria says, is committed to ensuring transparency in how it uses AI models. “We regularly review, test and evaluate our models to make sure that there are no potential biases. Then hopefully, in everything that we do, we are accountable to ourselves, our company, and our customers.”

When using AI in the workplace, it is also important, El Saeed shares, to ensure that every step, from design and development to deployment, embeds important principles, such as transparency, fairness and equity, human-centricity, and accountability.

“Those can only be introduced in a consistent way through stakeholder interaction and open communication with the different stakeholders and groups,” he says. “It is very important to have transparent communication. We also need to assure citizens of the transparency of the development of AI, and how it can respond to everyone’s needs.”

For his part, Bisht says that many countries today are building different assets and enablers to allow for ethical and responsible AI deployment, whether these involve principles, laws, regulations or frameworks coming from industry institutions, or voluntary commitments, declarations and standards put in place by leading private sector and other institutions.

He cites Singapore as an example of a country that has a strong framework for AI governance.

“The one thing that we need to think about," Bisht adds, "is how to ensure that there is inclusion in how we help all kinds of organizations to adopt responsible and ethical AI practices, all the way from larger institutions to smaller NGOs and SMEs that need to be able to navigate the relatively complex setup today.”

This would involve identifying the enablers or assets that can help institutions, NGOs and SMEs navigate ethical AI practices, whether incident databases used as tools to prevent biases and malicious use, or measures to promote security, robustness, transparency of development practices and explainability.

It is also important to have the resources to train frontline users and to create change management that is easy to understand in local languages, is equitable and adapts to the moving target of fast-evolving AI.

“Doing this well is going to be critical because there are two sides of this equation,” Bisht states. “Unless we’re able to navigate the risks – through these kinds of enablers that we have talked about – to manage ethical and responsible use, we will not be able to get to the scale that we need to impact the lives of millions and millions of people in so many different ways and achieve the SDGs [UN Sustainable Development Goals].”
