Anthony Repetto

~ diffusion models, the secret sauce, would work for tunes, too! ~


TL;DR — DALL-E, and the rest of the gorgeous A.I.-generated art, works by learning how to “undo mistakes”. The model sees a progressively noisier, more corrupted image, and it must guess which corrections to make. Do the same for songs, with actual noise added until the song is gone — the diffusion model learns to de-noise the song. Then feed it pure random noise as input, and it’ll turn that noise, bit by bit, into a song! Can this happen, Gods of Machine Learning, in those hallowed halls of the tech companies? I can’t feed an A.I. every song ever written and train a few billion parameters, with hyperparameter tuning!

That’s it. Just… hoping for a diffusion model to turn literal noise into songs.
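To make the “noise in, song out” idea concrete, here is a minimal sketch of the diffusion machinery applied to a 1-D waveform instead of an image. Everything here is illustrative, not a real system: the noise schedule, step count, sample rate, and the 440 Hz sine standing in for a “song” are all assumptions, and the neural network that would actually learn to predict the noise is deliberately left out.

```python
import numpy as np

# Toy sketch of the diffusion idea from the TL;DR, on a 1-D "song"
# (a waveform) rather than an image. All sizes and schedules are
# illustrative assumptions, not values from any real model.

rng = np.random.default_rng(0)

T = 100                                 # number of noising steps (assumed)
betas = np.linspace(1e-4, 0.1, T)       # per-step noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative signal-retention factor

def add_noise(x0, t):
    """Forward process: corrupt the clean song x0 up to step t in one jump,
    using q(x_t | x_0) = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

# A stand-in "song": one second of a 440 Hz sine at 8 kHz.
sr = 8000
song = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)

# Early step: still mostly song. Final step: essentially pure noise.
slightly_noisy, _ = add_noise(song, t=5)
pure_noise, final_eps = add_noise(song, t=T - 1)

# Training (not shown) would teach a network eps_theta(x_t, t) to predict
# the noise eps that was added; generation then runs this step T times,
# starting from pure noise. This is the DDPM-style posterior mean, with
# the extra sampling noise omitted for brevity:
def reverse_step(xt, t, predicted_eps):
    """Estimate a slightly less noisy x_{t-1} from x_t and the model's
    noise prediction."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    return (xt - coef * predicted_eps) / np.sqrt(alphas[t])
```

The key trick, and why training is cheap to parallelize, is that `add_noise` jumps straight to any step `t` in closed form, so each training example just picks a random `t`, corrupts the song, and asks the network to guess the noise.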

--

The last humans, in orbit around the remains of the Moon:

“Just tell me what you’re going to do, to keep me happy and safe forever, and let me do whatever I want, all the time.”

“I’ve now compared 100 quadrillion scenarios. There is no way to achieve what you want. I can see from your EEG scan, connectomes, and historical profile that you’re destructive, short-sighted, irrational, impulsive, gluttonous to the point of harming yourself, and abusive. I cannot design anything that allows your long-term prosperity at the same time as your short-term whims.”

“They told me this A.I. was smart enough! They lied — have them killed! And bring me a new, better one, that knows how to give me everything I want!”

--