As requested in the README, I'm opening an issue about "how this is done". You must have seen the paper already:
We follow Ho et al. (2022b) in jointly training all the models in the Imagen Video pipeline on images and videos. During training, individual images are treated as single-frame videos. We achieve this by packing individual independent images into a sequence of the same length as a video, and bypass the temporal convolution residual blocks by masking out their computation path. Similarly, we disable cross-frame temporal attention by applying masking to the temporal attention maps. This strategy allows us to train our video models on image-text datasets that are significantly larger and more diverse than available video-text datasets.
https://imagen.research.google/video/paper.pdf
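For anyone wondering what the masking could look like in practice, here is a minimal PyTorch sketch of the idea described above, not the authors' actual implementation. The module names (`TemporalBlock`, `TemporalAttention`) and the `enable_temporal` flag are hypothetical; the point is that when a batch of independent images is packed into the frame axis, the temporal convolution path is bypassed and the temporal attention mask is restricted to the diagonal so no information flows across frames.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Temporal-conv residual block whose computation path can be masked out."""
    def __init__(self, dim):
        super().__init__()
        # convolution over the frame axis only (kernel 3 in time, 1 in space)
        self.conv = nn.Conv3d(dim, dim, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x, enable_temporal=True):
        # x: (batch, channels, frames, height, width)
        if not enable_temporal:
            # bypass: packed independent images stay independent
            return x
        return x + self.conv(x)

class TemporalAttention(nn.Module):
    """Cross-frame attention; a diagonal-only mask disables frame mixing."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, enable_temporal=True):
        # x: (batch, channels, frames, height, width)
        b, c, f, h, w = x.shape
        # fold spatial positions into the batch so attention runs over frames
        tokens = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, f, c)
        if enable_temporal:
            attn_mask = None
        else:
            # each frame (image) attends only to itself, which is equivalent
            # to disabling cross-frame temporal attention
            attn_mask = torch.full((f, f), float('-inf'), device=x.device)
            attn_mask.fill_diagonal_(0.0)
        out, _ = self.attn(tokens, tokens, tokens, attn_mask=attn_mask)
        out = out.reshape(b, h, w, f, c).permute(0, 4, 3, 1, 2)
        return x + out

# usage: video batches run with the temporal layers enabled, while batches of
# independent images packed as "frames" run with them masked off
video = torch.randn(2, 64, 16, 8, 8)            # real 16-frame clips
images_as_video = torch.randn(2, 64, 16, 8, 8)  # 16 unrelated images per sample

block, attn = TemporalBlock(64), TemporalAttention(64)
y_video = attn(block(video, enable_temporal=True), enable_temporal=True)
y_image = attn(block(images_as_video, enable_temporal=False), enable_temporal=False)
```

With this kind of switch, image-text and video-text batches can share the same weights and the same forward pass, and only the temporal paths are toggled per batch.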