
    2022.10.04 ESG

    [NC] AI Ethics Framework | Transparency

    NC dreams of a future where the world is connected with joy. That future includes practicing safe, transparent, and unbiased technology ethics in pursuit of righteous joy. To this end, NC fully understands the importance of AI technology development and of its ethical responsibility, and it actively manages its technology. In particular, it has established the [NC] AI Ethics Framework so that AI technology can develop sustainably, grounded in ‘human values’.

    The three core values of the [NC] AI Ethics Framework are ‘Data Privacy’, ‘Unbiased’, and ‘Transparency’. NC works to protect users’ data and to prevent social biases from creeping into its AI technology during development, and it aims to build ‘explainable AI’. To raise social awareness of AI ethics, it is also carrying out multiple projects, including discussions with world-renowned scholars, joint research with external partners, and research support. In this article, we explain why NC selected ‘Transparency’ as a core value and what it is doing in relation to it.


    AI Pursuing the Value of Transparency

    AI technology is now widely used in job interviews. Applicants, however, may wonder whether the AI focuses more on behaviors such as their gaze and voice than on the content of their answers. Fundamentally, this concern arises because AI works like a ‘black box’: even when we can explain which data characteristics an AI model was trained on, users have no access to the ‘ground truth’*.

    The AI technology that NC aims to develop should be easy to understand, and its decision-making process and results should be explainable. To this end, NC is actively working to improve the explainability of its AI systems and to share its technology, including by disclosing papers and core technologies that describe how its AI operates.

    *Ground truth: the actual answer or state of the problem that a machine learning model is trying to predict
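
    One common way to make a ‘black box’ model more explainable is to measure how strongly each input feature actually drives its decisions. The sketch below is purely illustrative and is not NC’s method: it trains a simple classifier on synthetic data with hypothetical interview-screening features, then uses permutation importance to show which feature the model really relies on.

        # Minimal sketch (not NC's implementation): permutation importance as one
        # simple way to explain which input features drive a classifier's decisions.
        # Feature names and data below are hypothetical.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Hypothetical interview-screening features.
        feature_names = ["answer_relevance", "gaze_stability", "voice_pitch"]
        X = rng.normal(size=(500, 3))
        # Synthetic ground truth: only answer relevance actually matters here.
        y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

        model = LogisticRegression().fit(X, y)
        baseline = model.score(X, y)

        # Shuffle one feature at a time and measure the accuracy drop it causes.
        for i, name in enumerate(feature_names):
            X_perm = X.copy()
            X_perm[:, i] = rng.permutation(X_perm[:, i])
            drop = baseline - model.score(X_perm, y)
            print(f"{name}: accuracy drop {drop:.3f}")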

    1. Sharing of operation principles and core technologies of AI models

    NC not only shares AI-related information and technology, but also works to make its AI models’ decision-making processes and results more explainable and easier for users to understand. In 2021, it released more than 10 AI research papers and open-source codebases to help users better understand how its AI models reach their decisions.

    In particular, NC’s NLP Center operates NLP Hub (https://nlphub.ncsoft.com), where users can try out its ongoing research projects, although it has not been able to disclose all of its technologies there. It plans to gradually share the internal research projects and data that can be disclosed, so that more people can view and use them, and it also plans to build a platform where users can gather. The following are examples of AI models disclosed by NC:

    Multi-Agent Reinforcement Learning Invades MMORPG: ‘Lineage Clone Wars’ (GDC 2022)

    The talk introduced ‘self-play learning’, a technology that lets AI agents learn by playing against themselves. The team built an AI that can respond flexibly to diverse variations, so that battles involving a few dozen players can be carried out. This was the world’s first case of advancing the technology to the level of commercialization. (NC introduced new MMORPG content based on multi-agent reinforcement learning for the first time in the world.)
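
    As a rough illustration of the self-play idea (this toy sketch is an assumption on my part, not NC’s training code), the loop below improves a policy for a simple symmetric game by repeatedly playing against frozen snapshots of its own past selves drawn from an opponent pool, which is the same basic structure used to train agents that must cope with diverse opponents.

        # Toy self-play sketch: a policy for rock-paper-scissors learns against
        # frozen snapshots of itself kept in an opponent pool. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)

        # payoff[a, b] = reward of playing action a against action b.
        PAYOFF = np.array([[0, -1, 1],
                           [1, 0, -1],
                           [-1, 1, 0]], dtype=float)

        def softmax(x):
            z = np.exp(x - x.max())
            return z / z.sum()

        logits = np.zeros(3)                       # current (learning) policy parameters
        opponent_pool = [softmax(logits).copy()]   # pool of frozen past policies

        lr = 0.1
        for step in range(2000):
            policy = softmax(logits)
            opponent = opponent_pool[rng.integers(len(opponent_pool))]

            # Expected payoff of each action against the sampled opponent.
            action_values = PAYOFF @ opponent

            # Policy-gradient-style update toward higher-value actions.
            logits += lr * policy * (action_values - policy @ action_values)

            # Periodically freeze a snapshot into the pool, so later training
            # must hold up against a diverse set of past selves.
            if step % 200 == 0:
                opponent_pool.append(policy.copy())

        print("learned mixed strategy:", np.round(softmax(logits), 3))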

    “Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis”, Interspeech 2022.

    The research proposed adversarial multi-task learning to disentangle timbre and pitch while modeling lyric and note information simultaneously. Through GAN-based training, the model learns to predict not only the mel-spectrograms but also auxiliary characteristics such as timbre and pitch. As a result, a single model can produce highly natural singing-voice synthesis as well as the singing voices of multiple singers. (Introduction to three papers published at Interspeech 2022)
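
    The sketch below is only a conceptual outline of such an objective, not the paper’s actual model or code: a generator predicts mel-spectrogram frames from lyric/note features while auxiliary heads predict timbre and pitch, and a small discriminator contributes an adversarial (GAN) term. All layer sizes, field shapes, and loss weights are illustrative assumptions.

        # Conceptual sketch of an adversarial multi-task objective (not the paper's code).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        FEAT, MEL, TIMBRE, PITCH = 64, 80, 16, 1   # illustrative dimensions

        class Generator(nn.Module):
            def __init__(self):
                super().__init__()
                self.backbone = nn.Sequential(nn.Linear(FEAT, 128), nn.ReLU())
                self.mel_head = nn.Linear(128, MEL)        # main task: mel-spectrogram frame
                self.timbre_head = nn.Linear(128, TIMBRE)  # auxiliary task: timbre embedding
                self.pitch_head = nn.Linear(128, PITCH)    # auxiliary task: pitch (e.g., log-F0)

            def forward(self, x):
                h = self.backbone(x)
                return self.mel_head(h), self.timbre_head(h), self.pitch_head(h)

        gen = Generator()
        disc = nn.Sequential(nn.Linear(MEL, 64), nn.ReLU(), nn.Linear(64, 1))  # real/fake per frame

        # Dummy batch standing in for lyric/note features and ground-truth targets.
        x = torch.randn(8, FEAT)
        mel_true, timbre_true, pitch_true = torch.randn(8, MEL), torch.randn(8, TIMBRE), torch.randn(8, PITCH)

        mel_pred, timbre_pred, pitch_pred = gen(x)

        # Multi-task reconstruction losses plus an adversarial term on the predicted mel.
        recon_loss = F.l1_loss(mel_pred, mel_true)
        aux_loss = F.l1_loss(timbre_pred, timbre_true) + F.l1_loss(pitch_pred, pitch_true)
        adv_loss = F.binary_cross_entropy_with_logits(disc(mel_pred), torch.ones(8, 1))  # fool the discriminator
        gen_loss = recon_loss + 0.5 * aux_loss + 0.1 * adv_loss  # weights are illustrative
        gen_loss.backward()
        print(f"generator loss: {gen_loss.item():.3f}")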

    2. Disclosure of an AI conversation dataset that can be interpreted and explained

    NC has also released the ‘FoCus dataset (For Customized conversation dataset)’, an AI conversation dataset that can be interpreted and explained. Users can see where the AI’s training data came from and how the data were collected and processed. Through this, users can confirm the sources on which the AI bases its decisions and understand the AI model more completely.
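
    To make the idea concrete, the hypothetical record below shows the general shape of a grounded, customized conversation entry, in which each machine response is labeled with the knowledge sentences and persona sentences it relied on. The field names and content are illustrative assumptions rather than the actual FoCus schema; the real structure is described in the paper and the released data.

        # Illustrative sketch only: a hypothetical grounded-conversation record showing
        # how a machine response can be traced back to the knowledge and persona it used.
        example_entry = {
            "persona": [
                "I love visiting historic landmarks.",
                "I am planning a trip to Korea.",
            ],
            "knowledge_source": "publicly available encyclopedia article (license-checked)",
            "knowledge": [
                "Bulguksa is a Buddhist temple in Gyeongju, South Korea.",
                "It was added to the UNESCO World Heritage List in 1995.",
            ],
            "dialogue": [
                {
                    "human": "Can you tell me about this temple?",
                    "machine": "Bulguksa is a Buddhist temple in Gyeongju; since you love "
                               "historic landmarks, you may enjoy that it is a UNESCO World Heritage site.",
                    # Grounding labels make the response interpretable: which facts and
                    # which persona sentences the model relied on.
                    "used_knowledge": [0, 1],
                    "used_persona": [0],
                }
            ],
        }

        def explain_turn(entry, turn_idx=0):
            """Print the sources a machine response was grounded on."""
            turn = entry["dialogue"][turn_idx]
            print("Response:", turn["machine"])
            print("Grounded on knowledge:", [entry["knowledge"][i] for i in turn["used_knowledge"]])
            print("Grounded on persona:", [entry["persona"][i] for i in turn["used_persona"]])

        explain_turn(example_entry)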

    The strength of this dataset is that it can support conversational technology that performs at the level of an ultra-large language model without having to use such a model. Ultra-large models require an enormous number of parameters, strong learning capacity, and extensive training data. Collecting that data takes great effort, and because a single training run can cost up to tens of billions of KRW, only large companies can afford them; as a result, the research gap between large and small companies keeps widening. If conversational technology that matches ultra-large language models can be realized without them, the cost and effort of data collection and training can be greatly reduced.

    The FoCus dataset was introduced through a joint research project between NC and Korea University. The joint research team published and presented the paper at **AAAI 2022, one of the world’s leading conferences on artificial intelligence, and is running workshops to share research results and shared tasks. The team was also invited to ***COLING 2022, to be held in Gyeongju in October, to give a lecture and a presentation on the paper. Although the dataset has not yet been applied directly to commercial services, it is regarded as a pioneering resource, since unethical expressions and personal information were removed so that it can be used safely. NC released the data at a time when new conversation technologies are being proposed in the NLP field out of concern over cost and the environment, and it plans to keep participating actively in academic discussions and technology development.

    **AAAI: Association for the Advancement of Artificial Intelligence

    ***COLING 2022: The International Conference on Computational Linguistics

    3. Establishment of AI behavioral pattern analysis system

    NC is developing AI log analysis and visualization tools for the behavioral assessment and pattern analysis of AI systems, to make sure a system works as its designers intended. With these tools, the behaviors and decisions of an AI system can be explained and interpreted. More specifically, the AI’s behavioral patterns can be visualized as graphs and analyzed efficiently. In addition, to confirm whether a certain behavior was intentional or unintentional (e.g., a bug), the interim results of the decision-making process are saved and reviewed visually. This lets us examine in detail why an AI acted abnormally and respond to it.

    Visualization tools for behavioral assessment and pattern analysis of AI systems
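
    The sketch below illustrates the general idea behind such tools rather than NC’s actual implementation: each decision’s interim results are logged, behavior patterns are aggregated (a real tool would render these as graphs), and suspicious decisions, such as an agent attacking at very low health, can be pulled up together with the evidence of why they happened. All field names and thresholds are hypothetical.

        # Minimal behavior-logging sketch (not NC's tooling): save the interim results
        # of each decision so unusual behavior can later be reviewed and aggregated.
        import json
        from collections import Counter

        decision_log = []

        def log_decision(agent_id, observation, candidate_scores, chosen_action):
            """Record the inputs and interim scores behind one decision."""
            decision_log.append({
                "agent": agent_id,
                "observation": observation,
                "candidate_scores": candidate_scores,  # interim results kept for later review
                "action": chosen_action,
            })

        # Hypothetical game-agent decisions.
        log_decision("npc_01", {"hp": 0.9, "enemies": 2}, {"attack": 0.7, "retreat": 0.2}, "attack")
        log_decision("npc_01", {"hp": 0.1, "enemies": 3}, {"attack": 0.6, "retreat": 0.5}, "attack")

        # Aggregate behavior patterns (a real tool would draw these as graphs).
        print("action frequencies:", Counter(d["action"] for d in decision_log))

        # Flag decisions worth reviewing: attacking at very low health may be a bug
        # rather than intended behavior, and the saved interim scores show why it happened.
        for d in decision_log:
            if d["action"] == "attack" and d["observation"]["hp"] < 0.2:
                print("review:", json.dumps(d))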

    For an AI that Can Be Trusted with Peace of Mind

    To address the perception that AI is an ‘incomprehensible, unstable being’, its transparency must be enhanced. If AI causes issues that are hard to resolve because we do not understand its decision-making process, we cannot use AI in our lives with peace of mind. NC believes that ethical values should also give people ‘peace of mind’, and in this sense ‘transparency’ is an important means to that end. In particular, the tools that analyze AI behavior to enhance transparency exist to identify user grievances caused by AI’s behavior and to uncover their causes. Since developing and improving AI behavioral analysis tools can prevent harm caused by AI, they can be considered ethical acts in a broader sense. They also kill two birds with one stone, since they help address issues that arise in the course of AI development. NC will do its best to develop ways to enhance AI’s transparency so that people can use AI in their daily lives with greater peace of mind.