First of all, thanks for such a great and clean report :)
Per-class accuracy, original code (which can reach the metrics stated in the paper) vs. TF2DeepFloorPlan:

| Room type | Original code | TF2DeepFloorPlan |
| --- | --- | --- |
| 0 | 0.9637 | 0.9600 |
| 1 | 0.5979 | 0.1541 |
| 2 | 0.7952 | 0.6380 |
| 3 | 0.8953 | 0.6605 |
| 4 | 0.7489 | 0.7862 |
| 5 | 0.7112 | 0.4969 |
| 6 | 0.8018 | 0.2339 |
| 9 | 0.7740 | 0.6621 |
| 10 | 0.9872 | 0.9565 |
I have been unable to figure out the reason. In particular, the poor boundary class detection leads the postprocessing step to fill all rooms with a single color. I have tried training from scratch with the exact hyper-parameters stated in the paper, and I have also tried your pre-trained model; neither reaches the paper's metrics.
If you have evaluation code, or anything else that could help reach the paper's metrics, please let me know.
Help is much appreciated. Thanks!
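For reference, per-class accuracy figures of the kind listed above are usually computed as the fraction of each class's ground-truth pixels that the prediction labels correctly. A minimal sketch (the function name is my own, not from either repo; it assumes integer label maps of equal shape):

```python
import numpy as np

def per_class_accuracy(pred, gt, num_classes):
    """For each class c, the fraction of ground-truth pixels of class c
    that the prediction also labels c. Classes absent from gt are skipped."""
    accs = {}
    for c in range(num_classes):
        mask = (gt == c)
        total = mask.sum()
        if total == 0:
            continue  # class not present in ground truth
        accs[c] = float((pred[mask] == c).sum()) / float(total)
    return accs
```

Comparing the output of such a function on both models' predictions over the same test set would at least rule out differences in the evaluation itself.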
@rammyram
Boundary class detection is the core signal that guides room type classification, so if it performs poorly, degraded room detection is expected.
In the paper, the authors also used another dataset (from a Japanese real-estate company, available exclusively to academia) that I cannot download; that could be the main reason. My model is trained only on their preprocessed TFRecords.
Otherwise, you can try annotating more data to reduce class imbalance, since some classes may have too few examples. Please see zlzeng/DeepFloorplan#17. I have also updated step 2 of How-to-run in README.md to remind users to be cautious about this difference.
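Besides annotating more data, a common way to counteract class imbalance in segmentation is to weight the per-class loss by inverse pixel frequency, e.g. median-frequency balancing. A minimal sketch (names are my own; this is not code from either repo):

```python
import numpy as np

def median_frequency_weights(label_map, num_classes):
    """Median-frequency balancing: weight_c = median_freq / freq_c.
    Rare classes get weights > 1, frequent classes weights < 1;
    classes absent from the data get weight 0."""
    counts = np.bincount(label_map.ravel(), minlength=num_classes).astype(float)
    freqs = counts / counts.sum()
    present = freqs > 0
    weights = np.zeros(num_classes)
    weights[present] = np.median(freqs[present]) / freqs[present]
    return weights
```

Such weights would then be applied per pixel inside the cross-entropy loss, so under-represented room classes contribute proportionally more to the gradient.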
And most of the time you cannot fully reproduce the performance reported by academic groups or research companies, since you don't necessarily know exactly how they made their measurements.
This discussion was converted from issue #8 on June 15, 2022 11:30.