I’m trying to reduce the file size of photo thumbnails (100x75 px) generated with SixLabors ImageSharp, using the JpegEncoder. However, the images barely vary in file size regardless of the quality level used.
In my legacy System.Drawing code, when I used an ImageCodecInfo encoder and an EncoderParameter for Imaging.Encoder.Quality set to 30, I’d receive nice low-quality images — about 2k. Perfect for thumbnails.
With ImageSharp, no matter what I set SixLabors.ImageSharp.Formats.Jpeg.JpegEncoder.Quality to, the images are always about 24k. Quality accepts 0-100, and the visible quality drops accordingly, but the file size barely dips, even when the JPEGs are comically compressed. Always about 24k.
Can anyone explain why this is? Why does an image at .Quality 5 get written at about the same size as one at 90, despite dramatic visible compression? Are there other properties I need to set on this encoder? Should I be using a different format for better results?
image.Mutate(x => x.Resize(width, height));
var encoder = new SixLabors.ImageSharp.Formats.Jpeg.JpegEncoder
{
    Quality = 30 // 0-100; 30 matched my System.Drawing code.
};
image.Save(thumbnailPath, encoder);
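One way to see where the bytes are going is to encode the same image at several qualities and compare the stream lengths, then check what metadata the image is carrying. A minimal sketch (using a blank in-memory image as a stand-in for the loaded photo; in practice you'd use `Image.Load(sourcePath)`):

```csharp
using System;
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.PixelFormats;

// Stand-in for the loaded photo; in practice: using var image = Image.Load(sourcePath);
using var image = new Image<Rgba32>(100, 75);

foreach (int quality in new[] { 90, 30, 5 })
{
    using var ms = new MemoryStream();
    image.Save(ms, new JpegEncoder { Quality = quality });
    Console.WriteLine($"Quality {quality}: {ms.Length} bytes");
}

// If the sizes barely move, the fixed cost is likely metadata, not pixels:
Console.WriteLine($"EXIF tags: {image.Metadata.ExifProfile?.Values.Count ?? 0}");
Console.WriteLine($"XMP present: {image.Metadata.XmpProfile != null}");
```

The quality setting only affects the compressed pixel data, so a large metadata payload shows up as a fixed overhead at every quality level.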
EDIT: My legacy code may have "extracted" the thumbnail image from the source image via System.Drawing.GetThumbnailImage, then saved that. Those thumbs may be very small to begin with. Sounds like this is different from scaling a new thumbnail from the source image? Legacy code:
Image theThumb = theImage.GetThumbnailImage(width, height, myCallback, IntPtr.Zero);
//set up parameter for compression quality
EncoderParameters encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 30);
//codec to use
ImageCodecInfo encoder = ImageCodecInfo.GetImageEncoders()[1]; //"image/jpeg"
theThumb.Save(thumbnailPath, encoder, encoderParameters);
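As an aside, `GetImageEncoders()[1]` happens to hit the JPEG codec on typical Windows installs, but the array order isn't guaranteed; looking the codec up by MIME type is safer. A sketch (System.Drawing.Common only runs on Windows under .NET 6+):

```csharp
using System;
using System.Drawing.Imaging;
using System.Linq;

// Safer than GetImageEncoders()[1]: find the JPEG codec by its MIME type.
ImageCodecInfo jpegEncoder = ImageCodecInfo.GetImageEncoders()
    .FirstOrDefault(c => c.MimeType == "image/jpeg")
    ?? throw new InvalidOperationException("No JPEG encoder registered.");
```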
...Seems like the MS documentation says about the same -- extracted thumbnails are different from scaling the source image:
If the Image contains an embedded thumbnail image, this method retrieves the embedded thumbnail and scales it to the requested size. If the Image does not contain an embedded thumbnail image, this method creates a thumbnail image by scaling the main image.
(I'm new to .NET 6 though, so I'm not sure what "Platform extensions" are or whether this will work in a .NET Core library. It looks like they may be part of the "Windows Compatibility Pack" NuGet package, but that's for Windows-only deployments. Since I'm deploying on a Mac, System.Drawing.GetThumbnailImage may not work -- thus working with ImageSharp.)
EDIT: The ImageSharp code is scaling a new image from the source, and it appears to retain EXIF data. My legacy code thumbs (extracted from the source) don't contain EXIF data. This may be the source of the bloat, especially if there is a thumbnail embedded in the EXIF data itself.
I'll try finding a method or library to scrub the EXIF prior to saving a new scaled thumbnail, and see if that is smaller.
Thanks!
It was the XMP metadata. Resizing a source image down to a thumbnail carries its metadata collections along; you must remove them prior to saving. Alternatively, extract the actual thumbnail embedded in the metadata rather than creating a new one.
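For reference, clearing the profiles before saving looks roughly like this (profile property names are from ImageSharp 2.x; the blank in-memory image and the stream stand in for `Image.Load(sourcePath)` and `thumbnailPath` to keep the sketch self-contained):

```csharp
using System.IO;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;

// Stand-in for the loaded photo; in practice: using var image = Image.Load(sourcePath);
using var image = new Image<Rgba32>(400, 300);
image.Mutate(x => x.Resize(100, 75));

// Drop the metadata the resize carried over from the source.
image.Metadata.XmpProfile = null;   // the XMP block was the culprit here
image.Metadata.ExifProfile = null;  // may also contain an embedded thumbnail
image.Metadata.IptcProfile = null;
image.Metadata.IccProfile = null;

using var output = new MemoryStream(); // or: File.OpenWrite(thumbnailPath)
image.Save(output, new JpegEncoder { Quality = 30 });
```

With the profiles nulled out, the encoded size finally tracks the Quality setting, since the JPEG contains only the compressed pixel data.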
Related question on the EXIF removal:
How do I clear an Image's EXIF data with ImageSharp?
Related question on extracting thumbnails: