I have worked with unconditional generation using this fine repo. It is a lot of fun! I will do latent diffusion next. I am already looking forward to it.
Text conditional generation promises a lot of fun. I have a few questions.
The README's conditional section says "Text conditioning, one element per batch". Does this mean "one text per waveform", i.e. a batch of texts for a batch of waveforms, rather than one text shared by the whole batch?
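For what it's worth, my reading of "one element per batch" is a list of texts aligned index-by-index with the batch of waveforms. A tiny illustration (the names here are stand-ins, not the library's API):

```python
# Hypothetical pairing: texts[i] conditions waveforms[i].
texts = ["dog barking", "rain on a window", "drum loop"]
waveforms = [f"waveform_{i}" for i in range(len(texts))]  # stand-ins for audio tensors
pairs = list(zip(texts, waveforms))
print(pairs[1])  # ('rain on a window', 'waveform_1')
```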
I believe latent diffusion and text conditioning to be orthogonal. Is it safe to assume that DiffuserAE would work with text conditioning by just adding the right kwargs?
What would be necessary in order to replace the T5 embeddings with something else?
What would be the consequences of extending the number of tokens for T5?
This is so cool!
Best,
Tristan
You'd have to set use_text_conditioning=False and provide your own embedding with embedding=.... See here if you want to make your own plugin for the UNet.
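To make the custom-embedding route concrete, here is an illustrative sketch (plain NumPy, not the library's actual code) of what the UNet does with whatever embedding you pass in: each feature sequence cross-attends to it, so any encoder producing a `(batch, num_tokens, features)` tensor can stand in for T5. All names and sizes below are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(seq, emb):
    """Toy single-head cross-attention: `seq` is a UNet feature
    sequence (batch, seq_len, d); `emb` is the conditioning
    embedding (batch, num_tokens, d), e.g. from T5 or any encoder."""
    d = seq.shape[-1]
    scores = seq @ emb.transpose(0, 2, 1) / np.sqrt(d)  # (batch, seq_len, num_tokens)
    return softmax(scores) @ emb                        # (batch, seq_len, d)

rng = np.random.default_rng(0)
batch, seq_len, num_tokens, d = 2, 16, 8, 32
seq = rng.normal(size=(batch, seq_len, d))
emb = rng.normal(size=(batch, num_tokens, d))  # your custom embedding goes here
out = cross_attend(seq, emb)
print(out.shape)  # (2, 16, 32)
```

The key point is only the shape contract: as long as your replacement encoder emits a `(batch, num_tokens, features)` tensor with the feature size the UNet expects, the cross-attention does not care where the embedding came from.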
More tokens would mean that each sequence at each layer of the UNet has to cross-attend to a longer embedding. This would be somewhat slower, depending on how many more tokens you add, but the embedding could carry more information for the UNet.
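To put a rough number on that cost: the score matrix in each cross-attention is `(seq_len, num_tokens)`, so the extra work grows linearly with the token count. A back-of-the-envelope sketch with hypothetical sizes:

```python
def cross_attention_macs(seq_len, num_tokens, d):
    """Rough multiply-accumulate count for one single-head
    cross-attention: computing the score matrix (seq_len x num_tokens,
    each entry a d-dim dot product) plus the weighted sum over the
    embedding (same shape). Projections are ignored for simplicity."""
    return 2 * seq_len * num_tokens * d

# Hypothetical sizes: a 4096-step feature sequence, 512 features.
base = cross_attention_macs(seq_len=4096, num_tokens=64, d=512)
longer = cross_attention_macs(seq_len=4096, num_tokens=256, d=512)
print(longer / base)  # 4.0 -- 4x the tokens, 4x the cross-attention cost
```

Quadrupling the tokens quadruples the cross-attention work at every layer that attends to the embedding, but leaves the rest of the UNet unchanged, which is why the slowdown is "a bit" rather than dramatic.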