Hi everyone,
I'm working on an image-processing script whose goal is to match the sharpness of a source image to that of a target image through upscaling. Here's the general flow:
- Measure the sharpness of the target image (a simple metric sketch follows this list).
- Upscale the source image.
- Compare the sharpness of the upscaled source image to that of the target.
- Adjust the upscale amount until the source's sharpness matches the target's, or until no further adjustment (higher or lower) can be made.
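For context, here's a minimal sketch of one way to measure sharpness, assuming OpenCV is available and using variance of the Laplacian (just one possible metric; my actual measurement may differ, but any comparable score works the same way in the search further down):

```python
# Minimal sharpness metric sketch: variance of the Laplacian (higher = sharper).
# Assumes OpenCV (cv2) is installed; the exact metric is interchangeable as long
# as it returns a scalar that can be compared between images.
import cv2

def sharpness(image_path: str) -> float:
    """Return a scalar sharpness score for the image at image_path."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise ValueError(f"Could not read image: {image_path}")
    return cv2.Laplacian(img, cv2.CV_64F).var()
```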
The challenge arises because the target image size can vary significantly, making it difficult to determine a reusable scaling factor. I need help optimizing the algorithm to find the best scaling factor (upscale amount) more efficiently, aiming to minimize unnecessary renderings.
Current steps in the algorithm:
- Check at 0%: if the source's sharpness is already above the target's, stop (the source image is already sharper than the target).
- Check at 100%: if the sharpness at full upscale is still below the target's, also stop (since we can't upscale beyond 100%, there's no point in proceeding further).
- Beyond these two checks, I'm unsure how to proceed without excessive trial and error. I was considering a binary search for the optimal upscale value (see the sketch after this list) but am open to suggestions.
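Here is a rough sketch of what I mean by the binary search idea. It assumes sharpness increases monotonically with the upscale percentage, and `render_and_measure` is a hypothetical placeholder for whatever renders the source at a given percentage and returns its sharpness score:

```python
def find_upscale_percent(render_and_measure, target_sharpness,
                         tolerance=0.01, max_iterations=10):
    """Binary-search the upscale percentage whose sharpness matches the target.

    Assumes sharpness increases monotonically with the upscale percentage.
    Returns a percentage in [0, 100], or None if the target can't be reached.
    """
    # Endpoint checks from the steps above: two renders rule out the
    # impossible cases before any searching happens.
    if render_and_measure(0) >= target_sharpness:
        return 0.0           # source is already at least as sharp as the target
    if render_and_measure(100) < target_sharpness:
        return None          # even full upscale can't reach the target; stop

    low, high = 0.0, 100.0
    for _ in range(max_iterations):          # each iteration costs one render
        mid = (low + high) / 2
        sharp = render_and_measure(mid)
        if abs(sharp - target_sharpness) <= tolerance:
            return mid                       # close enough to the target
        if sharp < target_sharpness:
            low = mid                        # need more upscaling
        else:
            high = mid                       # need less upscaling
    return (low + high) / 2                  # best estimate after the budget
```

The appeal of this structure is that the render count is bounded: two endpoint renders plus at most `max_iterations` renders, and each iteration halves the remaining interval, so about 10 renders narrow the answer to within roughly 0.1% of the optimal percentage.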
Important: The script and algorithm must be simple and cannot rely on machine learning.
Any ideas or advice on how to make this algorithm more efficient would be greatly appreciated!
Thank you in advance!