When I looked into each parameter's gradient, I found that all attention blocks in the guidance encoder except the last one get NaN gradients. When running with plain torch.distributed instead of accelerate, this leads to an error. After looking into the encoder's code, I found that many attention blocks are initialized, saved, and loaded, but only the last layer is actually used in the forward pass.
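For anyone wanting to reproduce this, here is a minimal diagnostic sketch (assuming the module instance is called `guidance_encoder`; substitute the actual encoder object from this repo) that prints which parameters end up with missing or NaN gradients after a backward pass:

```python
import torch

# Run once after loss.backward(). `guidance_encoder` is a placeholder name
# for the guidance encoder module described above.
for name, param in guidance_encoder.named_parameters():
    if param.grad is None:
        # The parameter never appeared in the forward graph (unused layer).
        print(f"no grad (unused in forward): {name}")
    elif torch.isnan(param.grad).any():
        print(f"NaN grad: {name}")
```

As a workaround when using plain torch.distributed, wrapping the model with `DistributedDataParallel(..., find_unused_parameters=True)` should avoid the DDP error caused by parameters that never receive gradients, though the proper fix is to either use all the attention blocks in the forward pass or not initialize the unused ones at all.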