I have a method for compressing an image to a certain file size, so I end up with an NSData of roughly the desired number of bytes. But when I convert that NSData to a UIImage and then the UIImage back into NSData, the NSData grows to roughly 4x its original size.
UIImage *image = [UIImage imageWithData:imageData];
NSData *newData = UIImageJPEGRepresentation(image,1.0f);
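For example, logging both lengths (imageData being the compressed data my method returns, newData the re-encoded data from the two lines above) is how I see the growth:

NSLog(@"before: %lu bytes, after: %lu bytes",
      (unsigned long)[imageData length], (unsigned long)[newData length]);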
Why does this happen? I just want to keep control over the UIImage file size (for storing it in a database).
I've posted my code below.
Thanks in advance.
+ (NSData *)dataOfCompressingImage:(UIImage *)pImage toMaxSize:(CGSize)pSize toFileSizeKB:(NSInteger)pFileSizeKB {
    CGSize aSize;
    if (pSize.width == 0 || pSize.height == 0) {
        aSize = pImage.size; // no target size given, keep the original dimensions
    } else {
        aSize = pSize;
    }

    UIImage *currentImage = [self imageWithImage:pImage scaledToSize:aSize];
    NSData *imageData = UIImageJPEGRepresentation(currentImage, 1.0f);
    NSInteger currentLength = [imageData length];

    double factor = 1.0;
    double adjustment = 1.0 / sqrt(2.0); // or use 0.8 or whatever you want

    // Keep lowering the JPEG quality until the data fits the requested size.
    while (currentLength >= (pFileSizeKB * 1024)) {
        factor *= adjustment;
        NSLog(@"factor: %f", factor);
        @autoreleasepool {
            imageData = UIImageJPEGRepresentation(currentImage, factor);
        }
        if (currentLength == [imageData length]) {
            break; // exit while: reached maximum compression level
        } else {
            currentLength = [imageData length];
        }
    }
    return imageData;
}
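The method above calls imageWithImage:scaledToSize:, which isn't shown in the post. A minimal sketch of what such a helper typically looks like (assuming a plain redraw into a bitmap context; the implementation here is my assumption, not the asker's actual code):

+ (UIImage *)imageWithImage:(UIImage *)pImage scaledToSize:(CGSize)pSize {
    // Redraw the image into a context of the requested size.
    // Scale 1.0 keeps the pixel dimensions equal to pSize.
    UIGraphicsBeginImageContextWithOptions(pSize, NO, 1.0);
    [pImage drawInRect:CGRectMake(0, 0, pSize.width, pSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}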
It is happening because when you call UIImageJPEGRepresentation you are using a quality factor of 1.0, which causes the frequency components of the JPEG to be stored with an excessively high resolution.
Think of it this way: if you had an array of floats, you could store them as an array of doubles if you wanted. Those doubles would take up twice as much memory because of the larger data type, but they would not really give you any more resolution, because of the limited resolution of the starting data.
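In practice, if you just want to keep control of the stored size, persist the NSData your compression method returns instead of re-encoding the UIImage; if you really must decode and re-encode, pass a quality below 1.0. A rough sketch (the 0.7 quality is only an illustrative value, not something the question prescribes):

// Best option: store the already-compressed bytes as-is.
// There is no second encode, so the size stays under control.
NSData *dataToStore = imageData;

// If a re-encode is unavoidable, avoid quality 1.0 so the round-trip
// does not inflate the file.
UIImage *image = [UIImage imageWithData:imageData];
NSData *newData = UIImageJPEGRepresentation(image, 0.7f); // 0.7 chosen only as an example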