Hi,
Thank you for this work. What if I want the attention to be on the background that needs to change, rather than on the object? For example, suppose domains A and B both contain horse images: when translating from A to B, I want to keep the same horse but have the background change. How can I do that? Thank you in advance.
I have the same question. In other words, how can the Attention Network output a mask (attention map) that keeps focusing on the foreground object in an unsupervised setup? According to the paper, the architectures of the Generators and the Attention Networks are almost identical except for the final activation function: when the final activation is a sigmoid with a single output channel, the network's output is the attention map. I don't understand how that works.
Moreover, Figure 7 in the paper shows that the Attention Network can already focus on the foreground object early in training, which is amazing. During early training, the only losses are the adversarial loss and the cycle-consistency loss, so there is no label information guiding the Attention Network to focus on the foreground object.
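As I understand it, the attention map is just a per-pixel soft mask used to blend the generator output with the input. Here is a minimal NumPy sketch of that blending (my own illustration, not the repository's code; the function names are made up). Inverting the mask in the blend is also one plausible way to get the background-translation behavior asked about above:

```python
import numpy as np

def blend_foreground(x, fake, attn):
    # Standard attention-guided translation: use the generator's output
    # where the attention mask is high, keep the input pixels elsewhere.
    # attn comes from a 1-channel sigmoid head, so values lie in (0, 1).
    return attn * fake + (1.0 - attn) * x

def blend_background(x, fake, attn):
    # Hypothetical variant: invert the mask so the background is
    # translated while the attended object is kept from the input.
    return (1.0 - attn) * fake + attn * x

# Toy example: a 1x4 "image", with attention firing on the first two
# pixels (a hard 0/1 mask here purely for readability).
x    = np.array([10.0, 10.0, 10.0, 10.0])  # input image
fake = np.array([ 0.0,  0.0,  0.0,  0.0])  # generator output
attn = np.array([ 1.0,  1.0,  0.0,  0.0])  # attention mask

print(blend_foreground(x, fake, attn))  # [ 0.  0. 10. 10.]
print(blend_background(x, fake, attn))  # [10. 10.  0.  0.]
```

Because the blend is differentiable, the adversarial and cycle-consistency losses alone can push the mask toward whichever region most needs to change to fool the discriminator, which may be why Figure 7 shows foreground focus emerging without labels.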