I want to use your pre-trained student model in my experiments. I cannot find your training dataset, and the test data file cannot be read normally with h5py. I have tried the dataset you mentioned in the README, but the student model expects 224×224 inputs, while PubFig83 images are 100×100, so I don't know how your model handles the resizing. Would it be convenient for you to publish your PubFig65 dataset and give more details about your student model (e.g., the labels), or to explain how you transform images from 100×100 to 224×224?
What we did in our experiments was to upsample all images to 224×224 (because we need to fit them into VGG-Face). This should be fairly simple to do when you load the images. If you use keras.preprocessing.image, here is a code snippet you can use to specify the resolution you want.
from keras.preprocessing import image

# Resolution expected by the VGG-Face-based student model
target_size = (224, 224)

# load_img resizes the image to target_size as it is loaded
img = image.load_img(img_path, target_size=target_size)
x = image.img_to_array(img)  # float array of shape (224, 224, 3)
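In case it helps, here is a minimal sketch of how the resized array could then be passed to the student model. The file names student_model.h5 and example_face.jpg are placeholders, and any extra preprocessing (e.g., mean subtraction) the released model may expect is not shown here.

import numpy as np
from keras.preprocessing import image
from keras.models import load_model

# Placeholder paths -- substitute your own copies of the files
student_model = load_model("student_model.h5")
img_path = "example_face.jpg"

# Resize on load so the input matches the 224x224 VGG-Face resolution
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)

# Add a batch dimension before calling predict
x = np.expand_dims(x, axis=0)
preds = student_model.predict(x)
print(preds.shape)  # (1, number_of_identities)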