We are evaluating the Cloud SDK Service with mixed results so far.
We suspect that the varying recognition quality is related to the fact that the characters on the document are not printed on a solid white background, but on a two-color background with security features that become visible under certain lighting.
To the eye, however, the characters are easily readable, and many OCR scans do recognize most of them.
What pre-processing could we do before sending images to the Cloud Service? For example, would it make sense to use a library like Pixastic to improve contrast and brightness before uploading an image?
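To illustrate the kind of client-side pre-processing I have in mind, here is a minimal sketch (not using Pixastic, just plain canvas pixel data) that boosts contrast and brightness before upload. The function name, parameter values, and the `upload` callback are my own placeholders, not anything from ABBYY:

```javascript
// Sketch: adjust contrast/brightness of RGBA pixel data client-side
// before sending the image to the OCR service. The pixel math is plain
// JavaScript; the canvas wiring is shown in the comment below.
function enhanceForOcr(pixels, contrast, brightness) {
  // contrast: multiplier around the midpoint 128, e.g. 1.4
  // brightness: additive offset in [-255, 255], e.g. 20
  const out = new Uint8ClampedArray(pixels.length); // clamps to 0..255
  for (let i = 0; i < pixels.length; i += 4) {
    for (let c = 0; c < 3; c++) { // R, G, B channels
      out[i + c] = (pixels[i + c] - 128) * contrast + 128 + brightness;
    }
    out[i + 3] = pixels[i + 3]; // keep alpha unchanged
  }
  return out;
}

// In the browser (hypothetical usage):
//   const ctx = canvas.getContext('2d');
//   ctx.drawImage(img, 0, 0);
//   const d = ctx.getImageData(0, 0, canvas.width, canvas.height);
//   d.data.set(enhanceForOcr(d.data, 1.4, 20));
//   ctx.putImageData(d, 0, 0);
//   canvas.toBlob(blob => upload(blob), 'image/png');
```

Whether this actually helps recognition on a two-color security background is exactly my question; it might even hurt if the Cloud Service's own pre-processing expects the raw capture.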
Does ABBYY provide any suggestions on how to increase reliability in that scenario?
I realize there is the Imaging SDK for iOS and Android, but that is not an option for us since we are targeting a web app.
The Imaging SDK documentation reads: "ABBYY Mobile Imaging SDK provides developers with intelligent tools that can analyze photographs of documents captured with mobile devices to determine if they are suitable for optical character recognition (OCR) or should be retaken. It also offers powerful image processing functions to enhance visual quality of photographed documents for better viewing and reading."
The product tour reads: "These Pre-processing features allow developers to perform: Deskewing, Automatic Page Orientation detection, Perspective Correction, Texture removal, Resolution correction"
I assume this pre-processing is all done automatically when using the Cloud Service. Is that correct?
Thanks a lot,
Stefan