
    2024.04.18 Press Release

    NCSOFT Presents AI Research Papers at the World’s Largest Technical Conference, ‘IEEE ICASSP 2024’

    ICASSP, now in its 49th year, is a world-renowned technical conference on acoustics, speech, and signal processing.

    NCSOFT presents four papers on AI applications and multimodal research, demonstrating its AI technology capabilities on the global stage.

    PANGYO, Korea (April 18, 2024) – NCSOFT today announced that it presented research papers at one of the most renowned international technical AI conferences, the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2024.

    ICASSP is an international conference held annually in a different country each year. Marking its 49th year, ICASSP 2024 is being held at COEX in Seoul, Korea. More than 4,000 researchers from around the world are attending the conference to present work on signal processing and multimodal generative AI technologies.

    At this year’s conference, NCSOFT published four papers, covering the generation of visually dehallucinative instructions in multimodal language models, improvements to the robustness of feature extraction models used in face recognition, the challenge of barge-in scenarios, and a facial encoder module for a pre-trained multi-speaker model.

    The papers presented by NCSOFT address practical applications of AI technology as well as the potential of multimodal language models that understand and learn from various data types, including text, images, video, and audio.

    Based on these research findings, NCSOFT plans to advance its AI technology for understanding multimodal data across text, images, and audio, and to apply it to game development.

    More information on NCSOFT’s four papers published at the ICASSP 2024 can be found on NCSOFT's official blog.