
Crash while trying to upscale <16px per dimension images #33

Open
MikiP98 opened this issue Sep 27, 2024 · 0 comments

Comments


MikiP98 commented Sep 27, 2024

Tested with 4x4 and 8x8 images.
After trying to upscale the 4x4 image by a factor of 2, the program crashed with:

ValueError: could not broadcast input array from shape (4,4,3) into shape (15,4,3)

on line:

new_img[0:pad_size, pad_size:-pad_size, :] = np.flip(image[0:pad_size, :, :], axis=0) #top

in file:

"{...}\site-packages\RealESRGAN\utils.py"
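The failure can be reproduced outside RealESRGAN with a few lines of NumPy. This is a minimal sketch, assuming (from the shapes in the traceback) that the library's reflection padding uses a pad size of 15: `image[0:pad_size]` can only yield as many rows as the image has (4), while the target slice of the padded array expects 15, so the assignment cannot broadcast.

```python
import numpy as np

# Hypothetical pad_size of 15, inferred from the traceback shapes.
pad_size = 15
image = np.zeros((4, 4, 3), dtype=np.uint8)  # a 4x4 RGB image

h, w = image.shape[:2]
new_img = np.zeros((h + 2 * pad_size, w + 2 * pad_size, 3), dtype=np.uint8)

try:
    # image[0:pad_size] has only 4 rows, but the destination slice has 15,
    # so this raises the ValueError from the report.
    new_img[0:pad_size, pad_size:-pad_size, :] = np.flip(image[0:pad_size, :, :], axis=0)
except ValueError as e:
    print(e)  # could not broadcast input array from shape (4,4,3) into shape (15,4,3)
```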

Image file:
16-bit-P3-blue

Test code:

# coding=utf-8
import PIL.Image
import torch

from RealESRGAN import RealESRGAN


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

image = PIL.Image.open("../../input/16-bit-P3-blue.png")

model = RealESRGAN(device, scale=2)
model.load_weights(f'weights/RealESRGAN_x{2}.pth')
image = model.predict(image)

As a temporary workaround I'm upscaling all input images so that the smaller dimension is at least 16px, using PIL scaling.
I haven't tested this much, so I don't know if this workaround is foolproof. As far as I can tell, it may require a minimum size of 8 or 32 for a scaling factor of 4.
I will return to this if I ever test it more.
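The workaround above can be sketched as a small pre-processing helper. This is only an illustration of the described approach, not code from the report: `ensure_min_size` and the 16px threshold are assumptions (the report notes that 8 or 32 might be needed for scale=4), and nearest-neighbor resampling is chosen so the upscaler still sees the original pixel values.

```python
import PIL.Image

MIN_SIDE = 16  # assumed minimum; per the report, 8 or 32 may be needed for scale=4

def ensure_min_size(image: PIL.Image.Image, min_side: int = MIN_SIDE) -> PIL.Image.Image:
    """Upscale an image so its smaller dimension is at least min_side.

    Uses an integer factor and nearest-neighbor resampling so the original
    pixels are replicated rather than blended.
    """
    w, h = image.size
    if min(w, h) >= min_side:
        return image
    factor = -(-min_side // min(w, h))  # ceiling division to an integer factor
    return image.resize((w * factor, h * factor), PIL.Image.NEAREST)
```

This would be called on the image before `model.predict(image)` in the test code above.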
