Is there any guidance on how to choose a compression quality for uploaded images? Obviously transmission size is an issue here, but at the same time you wouldn't want to compromise recognition quality.
In my tests, I find it hard to see a measurable difference in OCR results, even between 10% quality and 100%. The results do differ at times (in terms of which blocks are interpreted as text, and which parts come out as gibberish in either case), and lower quality seems to cause more desperate (and futile) attempts to decode certain hard-to-decipher elements, but no actual mistakes are made.
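When comparing results like this, it helps to quantify the difference instead of eyeballing it. A minimal sketch using Python's standard `difflib`; the two strings are made-up stand-ins for OCR output at low and high JPEG quality, not real results:

```python
# Sketch: quantify how much two OCR outputs differ.
# low_q / high_q are hypothetical results at quality=10 and quality=100.
import difflib

def ocr_similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical output."""
    return difflib.SequenceMatcher(None, a, b).ratio()

low_q = "The quick brown fox jumps over the 1azy dog."    # hypothetical q=10 result
high_q = "The quick brown fox jumps over the lazy dog."   # hypothetical q=100 result

print(f"similarity: {ocr_similarity(low_q, high_q):.3f}")
```

Running the same document through OCR at several quality settings and tracking this ratio gives a concrete number to compare, rather than a subjective impression.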
Is there a more definite answer from the professionals out there?
asked 20 Jan '13, 20:50
I would answer that there is no general recommendation, since the optimal compression will depend on:
Basically, you should find the optimal quality by experimenting with the most typical images you are going to recognize, and of course using the same compression library you will use later in production. That means that if you are building both iPhone and Android applications, you should run two separate sets of experiments, and never rely on a single experiment done on the desktop.
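Such an experiment can be sketched in a few lines. This example, assuming the Pillow imaging library and using a synthetic page as a stand-in for a typical scan, measures how compressed size varies with the JPEG quality setting; the shape of the curve will differ with your real images and encoder:

```python
# Sketch: measure compressed size at several JPEG quality settings.
# Assumes Pillow is installed; the synthetic "page" stands in for a real scan.
from io import BytesIO
from PIL import Image

def jpeg_sizes(img, qualities=(10, 30, 50, 75, 90, 100)):
    """Return {quality: compressed byte size} for the given image."""
    sizes = {}
    for q in qualities:
        buf = BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=q)
        sizes[q] = buf.tell()
    return sizes

# Synthetic stand-in for a scanned page: white background with black lines.
page = Image.new("L", (800, 1000), 255)
for y in range(50, 950, 40):
    for x in range(50, 750):
        page.putpixel((x, y), 0)

for q, n in jpeg_sizes(page).items():
    print(f"quality={q:3d}: {n} bytes")
```

Pairing these size measurements with OCR accuracy on the same images shows where the size savings stop being free, which is exactly the trade-off the question is about.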
answered 20 Jan '13, 21:38
Andrey Isaev ♦♦