Imagine being able to decode the improvisational code of any musician playing any instrument after just a few notes. And imagine being able to create infinite alternative tracks that improvise along, free of any problem of scale, harmony, pitch, or BPM. And imagine all of this in REAL TIME.
Imagine jumping on stage without any pattern, track, or stem and, as an electronic musician, being able to improvise alongside any other musician. And imagine something unimaginable: while they play any instrument, electronic or acoustic, your machine creates in real time an infinite number of tracks that improvise along with them, regardless of pitch, harmony, or BPM. A real-time orchestra at your fingertips, always ready for you to conduct.
Imagine being able to tear down the obstacles that, at the current state of the art, make an improvised duet between an acoustic musician and an electronic musician impossible… the electronic musician has always needed patterns, loops, tracks, stems, grids, and predetermined scales, which have always made electronic live sets rather cold and not very versatile… until now…
Imagine sitting in a studio and starting to play a melody… and imagine creating in real time, from this first melody, infinite other melodies in your own style, infinite different arrangements to choose from for your production files. Imagine boosting your production workflow to unimaginable speed and efficiency, reaching territories that your creativity and productive capacity could only have dreamed of. DON’T IMAGINE THIS ANYMORE, BECAUSE IT ALREADY EXISTS, AND IT’S CALLED A-MINT.
A-MINT (Artificial Musical Intelligence) was developed by researchers Francesco Riganti Fulginei and Antonino Laudani, following the idea and vision of the eclectic artist Alex Braga: an artificial intelligence able to understand, in real time, the improvising code of any musician on stage and create an infinite number of scores, playing an improvisation along with the musician, always in tune, always in tempo.

There are some key points in this approach that make our project different from the state of the art in AI systems designed to generate music. A-MINT learns in real time by listening to the human performer, without any prior musical knowledge. Like a musician with natural talent, A-MINT listens to her teacher and, after a few notes, she (A-MINT is female, as the conceptual mother of a new way of making music) starts to predict her own improvisation, creating music that has never existed before. The aim of A-MINT is to constantly generate new music, since she has been created with a good ear and a not very long memory. In this way she will play differently even if the human performer repeats the same musical composition, making each performance unique.

How is all this possible? Unlike most experiments in AI and music, which use machine learning or deep learning, A-MINT uses a high-performing, low-level learning algorithm expressly tailored to the training of our AI and specifically coded by our team.
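A-MINT's actual algorithm is proprietary and is not described here, but the two behaviors the text names — learning online from whatever the performer plays, and deliberately forgetting so that repeated input still yields fresh output — can be illustrated with a deliberately simple sketch. The class below (entirely hypothetical, not A-MINT's method) keeps only the last few notes it hears in a short memory, builds note-to-note transition counts from them on the fly, and samples a new phrase from those transitions; because it can only emit pitches the performer has actually played, its output stays "in tune" in a trivial sense.

```python
import random
from collections import defaultdict, deque


class OnlineImproviser:
    """Toy sketch of real-time learning with short memory.

    Not A-MINT's algorithm: a first-order transition model rebuilt
    from a sliding window of recently heard notes (MIDI numbers).
    """

    def __init__(self, memory=16, seed=None):
        # Short memory: once the deque is full, the oldest notes
        # fall away, so the model keeps drifting with the performer.
        self.heard = deque(maxlen=memory)
        self.rng = random.Random(seed)

    def listen(self, note):
        """Ingest one note as it arrives from the performer."""
        self.heard.append(note)

    def _transitions(self):
        """Count which notes followed which, inside the memory window."""
        table = defaultdict(list)
        notes = list(self.heard)
        for prev, nxt in zip(notes, notes[1:]):
            table[prev].append(nxt)
        return table

    def improvise(self, length=8):
        """Sample a short phrase continuing from the last heard note."""
        if not self.heard:
            return []
        table = self._transitions()
        note = self.heard[-1]
        phrase = []
        for _ in range(length):
            # Fall back to any heard note if this one has no successor yet.
            choices = table.get(note) or list(self.heard)
            note = self.rng.choice(choices)
            phrase.append(note)
        return phrase
```

Because the memory window is small, feeding the same melody in twice after other material has passed through produces a different transition table, and therefore a different phrase — a crude analogue of the "each performance is unique" property the text describes.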