The 4th IEEE Workshop on
Artificial Intelligence for Art Creation


Taipei
July 18-22, 2022

Call for Papers


Artificial Intelligence (AI) has already fueled many academic fields as well as industries. In the area of art creation, AI has demonstrated great potential and gained increasing popularity: people are greatly impressed by AI painting, composing, writing, and design, and the emerging technology of the metaverse provides even more opportunities for AI Art. AI has not only exhibited a certain degree of creativity, but has also helped to uncover the principles and mechanisms of creativity and imagination from the perspectives of neuroscience, cognitive science, and psychology.

This is the 4th AIART workshop, held in conjunction with ICME 2022, a flagship international conference in the field of multimedia. AIART 2022 aims to bring forward cutting-edge technologies and the most recent advances in AI Art, which enable the creation, analysis, understanding, and rendering of artworks, digitally or physically. The theme topic of AIART 2022 is Affective Computing for AI Art, and we plan to invite five keynote speakers to present their insightful perspectives on AI Art.

We will also edit and publish a book, tentatively titled Artificial Intelligence for Art Creation, Understanding, and Protection, based mainly on significantly extended versions of the papers accepted by AIART 2022. The book is to appear in the book series Multimedia Computing, Communication, and Intelligence published by the Taylor & Francis Group.

We sincerely invite high-quality papers presenting or addressing issues related to AI Art, including but not limited to the following topics:

  • Affective computing for AI Art
  • Theory and practice of AI creativity
  • Neuroscience, cognitive science and psychology for AI Art
  • AI Art for metaverse
  • AI for painting generation
  • AI for music/sound synthesis, composition, matching, and digital instrument design
  • AI for poem composition and synthesis
  • AI for typography and graphic design
  • AI for fashion, makeup, and virtual humans
  • AI for style transfer of artworks
  • AI for aesthetics understanding, analysis, assessment and prediction
  • Authentication and copyright issues of AI artworks

Paper Submission

Authors should prepare their manuscripts according to the ICME Guide for Authors, available in the Author Information and Submission Instructions: http://2022.ieeeicme.org/author-info.html.


Important Dates


Submissions due: April 9, 2022
Workshop date: July 22, 2022

Keynotes

Keynote 1

Speaker: Shiguang Shan
Title: Face Video-based Affective Computing
Time: 8:35 – 9:15, July 22, 2022

Abstract:

While face recognition has been successfully applied in extensive real-world applications, similar and extended technologies are expected to provide an ever deeper understanding of human users in terms of psychology and even physiology. In this talk, I will introduce some recent progress that my team has made in this direction. The technologies I will present include 1) video-based heart rate estimation based on deep learning, 2) AU detection with self-supervised learning on large-scale unlabeled videos, and 3) consumer camera-based eye gaze estimation leveraging self-supervised eye and gaze feature disentangling. Finally, I will end my talk with a discussion of future directions.

Biography:

Prof. Shiguang Shan received his Ph.D. degree in computer science from the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), Beijing, China, in 2004. He has been a full Professor at the institute since 2010 and is now the deputy director of the CAS Key Lab of Intelligent Information Processing. His research interests cover computer vision, pattern recognition, and machine learning. He has published more than 300 papers, with more than 28,000 Google Scholar citations. He has served as an Area Chair for many international conferences, including CVPR, ICCV, AAAI, IJCAI, ACCV, ICPR, and FG, and is or has been an Associate Editor of several journals, including IEEE T-IP, Neurocomputing, CVIU, and PRL. He received China's State Natural Science Award in 2015 and China's State S&T Progress Award in 2005 for his research work. He became an IEEE Fellow in 2021.

Keynote 2

Speaker: Steve DiPaola
Title: Cognitive AI Modelling Approaches for Use in Art, Creativity, Social Good and Health Applications
Time: 10:25 – 11:00, July 22, 2022

Abstract:

Taking a cognitive science modelling approach (i.e., human creativity within the context of the aesthetic goal mechanisms of the brain, such as sensory perception), DiPaola will demonstrate the work of his lab and its custom-built, top-down AI models of human creativity, empathy, and expression. DiPaola will describe how his models are being used in new forms of computational art, health, and social good systems, while also being used to understand and evaluate how the creative and behavioral mind works. Bringing the human more into AI systems also implies real-time sensing of the human condition, including affective tools that monitor brainwave, heart, and other physiological and movement functions. Combined with models of emotion and empathy, AI creativity systems can bring benefits to health, social good, and ethics.

Biography:

Steve DiPaola is past Director of the Cognitive Science Program at Simon Fraser University (SFU) and director of the iVizLab (ivizlab.org), a PhD-based lab on Artificial Intelligence (AI) that draws on human cognition theories of creativity, empathy, and expression. He came to SFU from Stanford University, where some of his creative AI systems were used in generative game creation, including the best-selling game of that year, "The Sims." As both a scientist (with over 100 papers on AI and cognition) and an artist (with one of the first one-person shows in AI art to travel the world), DiPaola has written code for his AI "creative on its own" artworks, which have been shown in major galleries and museums, including the Whitney, MoMA, the Smithsonian, the Tate, and galleries in NYC, London, and LA.

Keynote 3

Speaker: Si Liu
Title: Multimodal Analysis
Time: 13:00 – 13:35, July 22, 2022

Abstract:

I will introduce my latest work in the following four areas. 1) Human-object interaction (HOI) detection and segmentation. HOI segmentation is a new task that aims to predict the relationships between humans and the surrounding entities, expressed in the form of pixel-level masks. 2) Image and video segmentation, and referring image and video segmentation. The latter means that, given a referring expression and an image/video, the model needs to predict the pixel-level masks of the entities described by the expression. 3) Active perception. The model needs to navigate and locate remote objects according to language instructions. 4) Artificial intelligence + art. This topic includes automatic video background music generation, language-guided image editing, and detail-preserving makeup transfer.

Biography:

Si Liu is currently a full professor and doctoral supervisor at Beihang University. She is a recipient of the National Science Fund for Excellent Young Scholars. Her research interests include cross-modal multimedia intelligent analysis and classical computer vision tasks. She has published more than 60 CCF-A papers, which have been cited over 8,000 times on Google Scholar. She has won the Best Paper Awards of ACM MM 2021 and 2013, the Best Video Award of IJCAI 2021, and the Best Demo Award of ACM MM 2012. She serves as a director and deputy secretary of the China Society of Image and Graphics. She is currently an Associate Editor of IEEE TMM and IEEE TCSVT, and has repeatedly served as an area chair of top conferences including ICCV, CVPR, ECCV, and ACM MM.

Keynote 4

Speaker: Gus Xia
Title: A Hierarchical Representation Learning Approach for Source Separation, Transcription, and Music Generation
Time: 14:30 – 15:05, July 22, 2022

Abstract:

With interpretable music representation learning, music source separation problems are closely connected with transcription problems, and transcription problems can be transformed into music arrangement problems. In particular, Gus will discuss two recently developed models. The first uses pitch-timbre disentanglement to achieve source separation, transcription, and synthesis. The second uses cross-modal chord-texture disentanglement to solve audio-to-symbolic piano arrangement. In the end, Gus will share his vision of a unified hierarchical representation-learning framework that bridges music understanding and generation.

Biography:

Gus is an Assistant Professor of Computer Science at NYU Shanghai. He received his Ph.D. from the Machine Learning Department at Carnegie Mellon University in 2016 and was a Neukom Fellow at Dartmouth from 2016 to 2017. Gus is also a professional di and xiao (Chinese transverse and vertical flute) player. He has played as a soloist with the NYU Shanghai Jazz Ensemble, the Pitt Carpathian Ensemble, and the Chinese Music Institute of Peking University, where he also served as president and assistant conductor.

Keynote 5

Speaker: Simon Colton
Title: Computational Creativity in the Age of Generative Deep Learning
Time: 15:50 – 16:25, July 22, 2022

Abstract:

In the field of Computational Creativity, we study how to hand over creative responsibilities to AI systems in art, design, and science projects. We consider how to evaluate progress towards truly creative AI systems and position this within broader philosophical contexts. Recently, deep learning approaches have revolutionised the generation of artefacts such as visual artworks, texts, and musical compositions, with recent models such as GPT-3 and DALL-E 2 producing incredibly sophisticated results. Many computational creativity projects using other AI techniques and bespoke approaches have been radically superseded by the straightforward application of such pre-trained neural models. In this talk, I consider how computational creativity can contribute to and guide the development of generative deep learning techniques, and how the field can benefit by moving from artefact generation to automatic ideation.

Biography:

Simon Colton is a Professor of Computational Creativity, AI and Games at the School of Electronic Engineering and Computer Science of Queen Mary University of London in the UK, and Professor of Computational Creativity at SensiLab, in the Faculty of Information Technology, Monash University, Australia. He has been an AI researcher for 25 years and has written or co-authored nearly 250 publications. He has held an EPSRC leadership fellowship and a European Research Area Chair, and has led research projects totalling more than £15m. He is well known for developing and experimenting with creative AI systems such as HR for mathematical invention, The Painting Fool automated artist, The WhatIf Machine for fictional ideation, and the Wevva and ArtDoneQuick iOS apps for casual creation of videogames and visual art. He has also made substantial contributions to the philosophy of computational creativity; his latest thinking is that AI systems should create artworks in order to communicate their own experiences and help people understand what it is like to be an AI system, using a framework called the Machine Condition.

Technical Program Committee (Tentative)


  • Ajay Kapur, California Institute of the Arts, USA
  • Alexander Lerch, Georgia Institute of Technology, USA
  • Alexander Pantelyat, Johns Hopkins University, USA
  • Bahareh Nakisa, Deakin University, Australia
  • Baoqiang Han, China Conservatory of Music, China
  • Baoyang Chen, Central Academy of Fine Arts, China
  • Beici Liang, Tencent Music Entertainment Group, China
  • Bing Li, King Abdullah University of Science and Technology, Saudi Arabia
  • Björn W. Schuller, Imperial College London, UK
  • Bob Sturm, KTH Royal Institute of Technology, Sweden
  • Changsheng Xu, Institute of Automation, Chinese Academy of Sciences, China
  • Dongmei Jiang, Northwestern Polytechnical University, China
  • Haifeng Li, Harbin Institute of Technology, China
  • Haipeng Mi, Tsinghua University, China
  • Hongxun Yao, Harbin Institute of Technology, China
  • Jesse Engel, Google, USA
  • Jia Jia, Tsinghua University, China
  • Jianyu Fan, Microsoft, Canada
  • John See, Multimedia University, Malaysia
  • Juan Huang, Johns Hopkins University, USA
  • Junping Zhang, Fudan University, China
  • Kejun Zhang, Zhejiang University, China
  • Ke Lv, University of Chinese Academy of Sciences, China
  • Kenneth Fields, Central Conservatory of Music, China
  • Lai-Kuan Wong, Multimedia University, Malaysia
  • Lamtharn Hanoi Hantrakul, ByteDance, USA
  • Lei Xie, Northwestern Polytechnical University, China
  • Lin Gan, Tianjin University, China
  • Long Ye, Communication University of China, China
  • Maosong Sun, Tsinghua University, China
  • Mei Han, Ping An Technology Art Institute, USA
  • Mengjie Qi, China Conservatory of Music, China
  • Ming Zhang, Nanjing Art College, China
  • Mohammad Naim Rastgoo, Queensland University of Technology, Australia
  • Nick Bryan-Kinns, Queen Mary University of London, UK
  • Nina Kraus, Northwestern University, USA
  • Pengtao Xie, University of California, San Diego, USA
  • Philippe Pasquier, Simon Fraser University, Canada
  • Qin Jin, Renmin University, China
  • Rebecca Fiebrink, University of London, UK
  • Rick Taube, University of Illinois at Urbana-Champaign, USA
  • Roger Dannenberg, Carnegie Mellon University, USA
  • Rongfeng Li, Beijing University of Posts and Telecommunications, China
  • Rui Wang, Institute of Information Engineering, Chinese Academy of Sciences, China
  • Ruihua Song, Renmin University, China
  • Shangfei Wang, University of Science and Technology of China, China
  • Shasha Mao, Xidian University, China
  • Shiguang Shan, Institute of Computing Technology, Chinese Academy of Sciences, China
  • Shiqi Wang, City University of Hong Kong, China
  • Si Liu, Beihang University, China
  • Simon Lui, Tencent Music Entertainment Group, China
  • Tiange Zhou, NetEase Cloud Music, China
  • Weiming Dong, Institute of Automation, Chinese Academy of Sciences, China
  • Wei-Ta Chu, National Chung Cheng University, Taiwan
  • Wei Li, Fudan University, China
  • Weiwei Zhang, Dalian Maritime University, China
  • Wei Zhong, Communication University of China, China
  • Xi Shao, Nanjing University of Posts and Telecommunications, China
  • Xiaojing Liang, NetEase Cloud Music, China
  • Xiaopeng Hong, Harbin Institute of Technology, China
  • Xiaoyan Sun, University of Science and Technology of China, China
  • Xiaoying Zhang, China Rehabilitation Research Center, China
  • Xihong Wu, Peking University, China
  • Xinfeng Zhang, University of Chinese Academy of Sciences, China
  • Xu Tan, Microsoft Research Asia, China
  • Yanchao Bi, Beijing Normal University, China
  • Yi Qin, Shanghai Conservatory of Music, China
  • Ying-Qing Xu, Tsinghua University, China
  • Yirui Wu, Hohai University, China
  • Yuanchun Xu, Xiaoice, China
  • Zhiyao Duan, University of Rochester, USA

Organizing Team


Luntian Mou

Beijing University of Technology

Beijing, China

ltmou@bjut.edu.cn


Dr. Luntian Mou is an Associate Professor with the Beijing Institute of Artificial Intelligence (BIAI), Faculty of Information Technology, Beijing University of Technology. He was a Visiting Scholar with the University of California, Irvine, from 2019 to 2020, and a Postdoctoral Fellow at Peking University from 2012 to 2014. He initiated the IEEE Workshop on Artificial Intelligence for Art Creation (AIART) at MIPR 2019. His current research interests include multimodal machine learning, personal health navigation, affective computing, multimedia computing, intelligent transportation, and artificial intelligence. He has a research background in multimedia security, copy detection, and video fingerprinting. He also serves as a Co-Chair of the System Subgroup in both the AVS Workgroup and the IEEE 1857 Workgroup. He is a Member of IEEE (SA, SPS), ACM, CCF, CAAI, CSIG, and MPEG China.

Feng Gao

Peking University

Beijing, China

gaof@pku.edu.cn


Dr. Feng Gao is an Assistant Professor with the School of Arts, Peking University. He has long conducted research in the interdisciplinary field of AI and art, especially AI painting, and he co-initiated the international AIART workshop. Currently, he is also enthusiastic about virtual humans. He has demonstrated his AI painting system, called Daozi, in several workshops, where it has drawn much attention.

Zijin Li

Central Conservatory of Music

Beijing, China

lzijin@ccom.edu.cn


Dr. Zijin Li is a Professor with the Department of AI Music and Music Information Technology, Central Conservatory of Music. She was a Visiting Scholar with McGill University. Her current research interests include music acoustics, music creativity, new musical instrument design, and innovation theory of music technology. She is a guest editor of the Frontiers research topic Human-Centred Computer Audition: Sound, Music, and Healthcare and of the Journal of Cognitive Computation and Systems (JCCS) special issue on Perception and Cognition in Music Technology. She has served as a committee chair of New Interfaces for Musical Expression (NIME 2021), the IEEE MIPR AI Art Workshop (AIART), the China Sound and Music Technology Conference (CSMT), the China AI Music Development Symposium, and the China Musical Instrument Symposium. She has served as a judge for the New Music Device Invention Award of the international "Danny Award," the International Electronic Music Competition (IEMC), and the NCDA Awards.

Jiaying Liu

Peking University

Beijing, China

liujiaying@pku.edu.cn


Dr. Jiaying Liu is currently an Associate Professor with the Wangxuan Institute of Computer Technology, Peking University. She received the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She has authored over 100 technical articles in refereed journals and proceedings and holds 43 granted patents. Her current research interests include multimedia signal processing, compression, and computer vision. Dr. Liu is a Senior Member of IEEE, CSIG, and CCF. She was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008, and a Visiting Researcher with Microsoft Research Asia in 2015, supported by the Star Track Young Faculties Award. She has served as a member of the Membership Services Committee of the IEEE Signal Processing Society; a member of the Multimedia Systems & Applications Technical Committee (MSA TC) and the Visual Signal Processing and Communications Technical Committee (VSPC TC) of the IEEE Circuits and Systems Society; and a member of the Image, Video, and Multimedia (IVM) Technical Committee of APSIPA. She received the IEEE ICME 2020 Best Paper Award and the IEEE MMSP 2015 Top 10% Paper Award. She has also served as an Associate Editor of IEEE Transactions on Image Processing and Elsevier JVCI, the Technical Program Chair of IEEE VCIP 2019 and ACM ICMR 2021, the Publicity Chair of IEEE ICME 2020 and ICIP 2019, and an Area Chair of CVPR 2021, ECCV 2020, and ICCV 2019. She was an APSIPA Distinguished Lecturer (2016-2017).

Wen-Huang Cheng

National Chiao Tung University

Taiwan

whcheng@nctu.edu.tw


Dr. Wen-Huang Cheng is a Professor with the Institute of Electronics, National Chiao Tung University (NCTU), Taiwan, where he is the Founding Director of the Artificial Intelligence and Multimedia Laboratory (AIMMLab). His current research interests include multimedia, artificial intelligence, computer vision, machine learning, social media, and financial technology. He was a co-organizer of the 2018 International Workshop on AI Aesthetics in Art and Media, held in conjunction with ACCV 2018.

Ling Fan

Tezign.com

Tongji University Design Artificial Intelligence Lab

Shanghai, China

lfan@tongji.edu.cn


Dr. Ling Fan is a scholar and entrepreneur bridging machine intelligence and creativity. He is the founding chair and a professor of the Tongji University Design Artificial Intelligence Lab. He previously held teaching positions at the University of California, Berkeley, and the China Central Academy of Fine Arts. Dr. Fan co-founded Tezign.com, a leading technology start-up with the mission of building digital infrastructure for creative content. Tezign is backed by top VCs such as Sequoia Capital and Hearst Ventures. Dr. Fan is a World Economic Forum Young Global Leader, an Aspen Institute China Fellow, and a Youth Committee member of the Future Forum. He is also a member of the IEEE Global Council for Extended Intelligence.
Dr. Fan received his doctoral degree from Harvard University and his master's degree from Princeton University. He recently published From Universality of Computation to the Universality of Imagination, a book on how machine intelligence will influence human creativity.