Businesses, governments, and researchers continue to struggle with extracting usable data from PDF files, despite AI advances. These digital documents contain valuable information for everything from scientific research to government records, but their rigid formats make extraction difficult.
"PDFs are a creature of a time when print layout was a big influence on publishing software," Derek Willis, a lecturer in Data and Computational Journalism at the University of Maryland, told ArsTechnica. This print-oriented design means many PDFs are essentially "pictures of information" requiring optical character recognition (OCR) technology.
Traditional OCR systems have existed since the 1970s but struggle with complex layouts and poor-quality scans. New AI language models from companies like Google and Mistral now attempt to process documents more holistically, with varying success. "Right now, the clear leader is Google's Gemini 2.0 Flash Pro Experimental," Willis notes, while Mistral's recent OCR solution "performed poorly" in tests.
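To make the "pictures of information" point concrete, here is a minimal sketch of the traditional OCR route Willis describes, using the open-source pdf2image and pytesseract libraries (the file name `scanned_report.pdf` is a placeholder). It also hints at why this approach struggles with complex layouts: plain OCR returns a flat stream of text, discarding tables and multi-column structure.

```python
# Minimal sketch of a traditional OCR pipeline: rasterize each PDF page to an
# image, then run it through Tesseract. Assumes the pdf2image and pytesseract
# packages (plus the poppler and tesseract binaries) are installed;
# "scanned_report.pdf" is a hypothetical input file.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("scanned_report.pdf", dpi=300)  # each page becomes a PIL image

for number, page_image in enumerate(pages, start=1):
    text = pytesseract.image_to_string(page_image)  # flat text; tables and layout are lost
    print(f"--- page {number} ---")
    print(text)
```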
[Read more of this story](https://it.slashdot.org/story/25/03/11/1726218/why-extracting-data-from-pdfs-remains-a-nightmare-for-data-experts?utm_source=atom1.0moreanon&utm_medium=feed) at Slashdot.