
If I want to make attention in background. #27

Open
deep0learning opened this issue Jul 19, 2019 · 1 comment

Comments

@deep0learning

Hi,
Thank you for this task. If I want to make attention in the background that I need to change rather than the object. For example, in domain A and B, we have horse images, when translating from domain A - B, I want to keep the same horse in domain B but the background will be changed as domain A. How I can do that? Thank you in advance.


jian3xiao commented Mar 15, 2020

I have the same question. In other words, how does the Attention Network learn to output a mask (attention map) that keeps its eye on the foreground object in an unsupervised setup? According to the paper, the architectures of the Generators and the Attention Networks are almost the same except for the final activation function: when the final activation is a sigmoid with a single output channel, the network's output is the attention map. I don't understand how that works.
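As I understand the final-layer description, the attention head is just a projection down to one channel followed by a sigmoid, so every spatial location gets a soft foreground weight in [0, 1]. A minimal sketch of that idea (the weight vector `w` and the 1x1-conv-as-dot-product formulation are my assumptions, not the paper's exact layer):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_head(features, w):
    """Sketch of the attention network's final layer: a 1x1 convolution
    down to a single channel followed by a sigmoid, producing a per-pixel
    attention map in [0, 1].

    features: (H, W, C) feature map; w: (C,) hypothetical 1x1-conv weights.
    Returns an attention map of shape (H, W, 1).
    """
    logits = features @ w            # 1x1 conv == per-pixel dot product over channels
    return sigmoid(logits)[..., None]

feats = np.random.randn(4, 4, 8)
w = np.random.randn(8)
att = attention_head(feats, w)       # shape (4, 4, 1), values in [0, 1]
```

So architecturally nothing forces the map onto the foreground; presumably the adversarial and cycle losses make foreground-focused masks the easiest way to fool the discriminator, which is exactly the part I would like to understand better.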

Moreover, Figure 7 in the paper shows that the Attention Network already focuses on the foreground object early in training. That is amazing! During early training the only losses are the adversarial loss and the cycle-consistency loss, so there is no label information guiding the Attention Network toward the foreground object.

I am looking forward to discussing this with you and the author. @deep0learning @AlamiMejjati
