I am uploading 2 images:

source 1: http://diffusion.loto-quebec.com/Res_prod/Quebec49/2013_08/Quebec49_2013_08_17_det_AN.gif source 2: http://diffusion.loto-quebec.com/Res_prod/Quebec49/2013_08/Quebec49_2013_08_07_det_AN.gif

They are almost identical... but the structure of the responses we get back is vastly different, so we can't write a reusable script to extract the values from the response!

Why are the responses so different?

asked 21 Aug '13, 14:19

Philsmy

The issue occurs because ABBYY Cloud OCR SDK is not well suited for reproducing the layout of images like these; it is primarily intended for documents. We recommend either using field-level recognition (specify the exact coordinates of the text fragments you want to recognize) or exporting the result to an XML file and processing the recognized text together with its coordinates as needed.
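For illustration, here is a minimal sketch (not official sample code) of both suggestions against the Cloud OCR SDK REST API, written in Python with the `requests` library. The application ID, password, field coordinates, and the element names inside the result XML are placeholders/assumptions; adjust them to your own account and images.

```python
import time
import xml.etree.ElementTree as ET

import requests

APP_ID = "your-application-id"   # placeholder
APP_PASSWORD = "your-password"   # placeholder
BASE = "http://cloud.ocrsdk.com"


def wait_for_result(submit_response):
    """Poll getTaskStatus until the task completes, then return the result bytes."""
    task_id = ET.fromstring(submit_response.content).find("task").get("id")
    while True:
        r = requests.get(BASE + "/getTaskStatus",
                         params={"taskId": task_id},
                         auth=(APP_ID, APP_PASSWORD))
        task = ET.fromstring(r.content).find("task")
        if task.get("status") == "Completed":
            return requests.get(task.get("resultUrl")).content
        time.sleep(2)


# Approach 1: field-level recognition -- one known region per call.
# region is "left,top,right,bottom" in pixels; these values are made up.
with open("Quebec49_2013_08_17_det_AN.gif", "rb") as f:
    resp = requests.post(BASE + "/processTextField",
                         params={"region": "100,200,400,240", "language": "English"},
                         data=f, auth=(APP_ID, APP_PASSWORD))
field_xml = ET.fromstring(wait_for_result(resp))
print(field_xml.findtext(".//{*}value"))  # result element name is assumed

# Approach 2: full-page recognition exported to XML, then select text by position.
with open("Quebec49_2013_08_17_det_AN.gif", "rb") as f:
    resp = requests.post(BASE + "/processImage",
                         params={"exportFormat": "xml", "language": "English"},
                         data=f, auth=(APP_ID, APP_PASSWORD))
page_xml = ET.fromstring(wait_for_result(resp))

# The XML export carries per-character coordinates (charParams l/t/r/b), so the
# script can pick out values by where they sit on the image instead of relying
# on whatever reading order the layout analysis produces.
for char in page_xml.iterfind(".//{*}charParams"):
    left, top = int(char.get("l")), int(char.get("t"))
    if 100 <= left <= 400 and 200 <= top <= 240:  # same made-up region
        print(char.text or "", end="")
print()
```

Either way, the extracted values are keyed to fixed coordinates on the image rather than to the reading order the SDK happens to choose, which is what makes the script reusable across draws.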

answered 12 Sep '13, 17:24

Anastasia Galimova ♦♦
