Sounding out extra-normal AI voice
Non-normative musical engagements with normative AI voice and speech technologies, at AIMC 2024
This paper presents an exploratory, research-through-design-inspired engagement with pre-trained SpeechBrain, OpenAI, and Coqui TTS voice and speech models, probing their musical affordances within an experimental vocal practice.
This paper discusses the subversion of the normative function of speech recognition and speech synthesis models to provoke nonsensical AI mediation of human vocality. Emerging from a research-through-design-inspired process, we uncover the generative potential of non-normative use of normative AI voice and speech models and contribute insights into how research-through-design can inform artistic working processes with AI models. How do AI mediations reshape our understandings of human vocality? How can artistic perspectives and practice guide the uncovering of knowledge when working with technology?
The paper is available at: https://aimc2024.pubpub.org/pub/extranormal-aivoice/release/1
Acknowledgements
This work was supported by the Wallenberg AI, Autonomous Systems and Software Program – Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation.