Table information extraction method, apparatus, device, and storage medium

Document No.: 8325    Publication date: 2021-09-17

1. A method for extracting table information, the method comprising:

acquiring an image and a text;

determining a vertical dividing line in the image according to a gray distribution of the image in a horizontal direction, wherein the gray distribution at the position of the vertical dividing line in the image satisfies a condition;

determining a horizontal dividing line in the image according to a starting line of a line-feed text in the text, wherein the horizontal dividing line is positioned above the starting line in the image;

according to the vertical dividing line and the horizontal dividing line, dividing the image to obtain at least one cell area;

and acquiring table information according to information included in the at least one cell area.

2. The method of claim 1, wherein determining a vertical partition line in the image according to a gray scale distribution of the image in a horizontal direction comprises:

counting the gray value of each pixel point in the image to obtain a statistical result, wherein the statistical result is used for indicating the gray distribution of the image in the horizontal direction;

determining a text boundary in the image according to the statistical result, wherein the change rate of the gray distribution of the position of the text boundary in the image is greater than a first threshold value;

determining a vertical segmentation line in the image based on the text boundaries, the vertical segmentation line being located between adjacent text boundaries in the image.

3. The method according to claim 2, wherein the counting the gray scale value of each pixel point in the image to obtain a statistical result comprises:

binarizing the gray value of each pixel point in the image to obtain gray data, wherein the gray data comprises the binarized gray value of each pixel point in the image;

and summing the binary gray values in the gray data according to columns to obtain the statistical result.

4. The method of claim 2, wherein determining a text boundary in the image based on the statistics comprises:

acquiring difference data according to the statistical result, wherein the difference data is used for indicating the gray value difference between adjacent positions of the image in the horizontal direction;

determining a text boundary in the image according to the difference data, wherein the gray value difference between the position of the text boundary in the image and the adjacent position is larger than the first threshold value.

5. The method of claim 1, wherein the text comprises at least one line of text, and wherein the method further comprises, prior to determining a horizontal segmentation line in the image based on a starting line of a line feed text in the text:

respectively acquiring semantic similarity between the at least one line of text and a first line text, wherein the first line text is the first line of text below a header text;

according to the semantic similarity corresponding to each line of text, determining a starting line of the line feed text in the at least one line of text, wherein the semantic similarity between the starting line and the first line of text is greater than a second threshold value.

6. The method according to claim 5, wherein the obtaining semantic similarity between the at least one line of text and the first line of text respectively comprises:

respectively carrying out entity recognition on the at least one line text to obtain entity information of each line text, wherein the entity information comprises an entity label corresponding to each word in one line text;

respectively extracting the characteristics of the entity information of the at least one line of text to obtain a characteristic vector of each line of text;

and acquiring semantic similarity between each line of text and the first line of text according to the similarity of the feature vectors between each line of text and the first line of text.

7. The method according to claim 6, wherein said performing feature extraction on the entity information of the at least one line of text respectively to obtain a feature vector of each line of text comprises:

determining a high-frequency line in the entity information of the at least one line of text according to the entity information of each line of text, wherein the high-frequency line is the entity information of one line of text that occurs most frequently among the entity information of the at least one line of text;

and respectively comparing the entity information of the at least one line of text with the high-frequency line to obtain a first part of the feature vector of each line of text, wherein the first part is used for indicating the consistency degree between the entity information of the corresponding line of text and the high-frequency line.

8. The method according to claim 6, wherein said performing feature extraction on the entity information of the at least one line of text respectively to obtain a feature vector of each line of text comprises:

determining high-frequency words in the entity information of the at least one line of text according to the entity information of each line of text, wherein the high-frequency words are entity labels whose word frequency ranks within a preset number among the entity labels in the entity information of the at least one line of text;

and respectively comparing the entity labels in the entity information of the at least one line of text with the high-frequency words to obtain a second part of the feature vector of each line of text, wherein the second part is used for indicating whether each entity label in the entity information of the corresponding line of text is the high-frequency word.

9. The method of claim 1, wherein prior to said segmenting said image according to said vertical segmentation line and said horizontal segmentation line, said method further comprises:

identifying header text in the text;

identifying a table tail text in the text;

determining a table area in the image according to the header text and the table tail text;

the segmenting the image according to the vertical segmentation line and the horizontal segmentation line comprises:

and segmenting the table area in the image according to the vertical segmentation line and the horizontal segmentation line.

10. The method of claim 9, wherein the identifying header text in the text comprises:

searching in the text by using a keyword according to the sequence from top to bottom;

and taking the first line text matched with the keyword as the header text.

11. The method of claim 9, wherein the identifying the table tail text in the text comprises:

searching in the text by using a regular expression according to the sequence from bottom to top;

and taking the first line text matched with the regular expression as the table tail text.

12. The method of claim 1, wherein the obtaining of the image and text comprises at least one of:

converting a page of a Portable Document Format (PDF) file into the image, and extracting text stored in the page of the PDF file; or

acquiring an image, and performing character recognition on the image to obtain the text.

13. An apparatus for extracting table information, the apparatus comprising:

the acquisition module is used for acquiring images and texts;

the determining module is used for determining a vertical dividing line in the image according to the gray level distribution of the image in the horizontal direction, and the gray level distribution of the position of the vertical dividing line in the image meets a condition;

the determining module is further configured to determine a horizontal dividing line in the image according to a starting line of a line-feed text in the text, where the horizontal dividing line is located above the starting line in the image;

the segmentation module is used for segmenting the image according to the vertical segmentation line and the horizontal segmentation line to obtain at least one cell area;

the obtaining module is further configured to obtain table information according to information included in the at least one cell area.

14. A computing device, characterized in that the computing device comprises a processor for executing instructions causing the computing device to perform the method of any of claims 1 to 12.

15. A computer-readable storage medium having stored therein at least one instruction that is read by a processor to cause a computing device to perform the method of any of claims 1-12.

Background

At present, various tables such as statements, delivery lists, and financial reports are often carried in Portable Document Format (PDF) files or pictures and circulated over networks. However, the storage structure of a PDF file or picture does not include a table object itself; therefore, how to extract table information from a PDF file or picture has become an urgent problem to be solved.

In a related table extraction process, the computing device first parses the PDF file to obtain the coordinates of each table line from the storage structure of the PDF file. The computing device then locates each cell in the PDF file based on the coordinates of each table line, thereby extracting the table information.

This method requires the table to be extracted to have complete table lines; once the table has no table lines, or its table lines are incomplete, the method cannot be used to extract the table information.

Disclosure of Invention

The embodiments of the present application provide a method, an apparatus, a device, and a storage medium for extracting table information, which can automatically extract a table without table lines from a Portable Document Format (PDF) file. The technical solution is as follows:

In a first aspect, a method for extracting table information is provided, in which an image and a text are acquired; a vertical dividing line in the image is determined according to the gray distribution of the image in the horizontal direction, wherein the gray distribution at the position of the vertical dividing line in the image satisfies a condition; a horizontal dividing line in the image is determined according to a starting line of a line-feed text in the text, wherein the horizontal dividing line is positioned above the starting line in the image; the image is divided according to the vertical dividing line and the horizontal dividing line to obtain at least one cell area; and table information is acquired according to the information included in the at least one cell area.

With this method, the vertical dividing line is located in the image using an image processing technique, so that the device can automatically find the position of the vertical dividing line; likewise, the horizontal dividing line is located in the image using NLP technology, so that the device can automatically find the position of the horizontal dividing line. The image is divided using the vertical dividing line and the horizontal dividing line, so the cell areas are segmented accurately and the table information contained in the image can be extracted according to the cell areas. With this method, even if the table contained in a source file such as a PDF file or picture is an irregular table, for example a table with no table lines, incomplete table lines, or line-feed text in its cells, the table information can still be extracted accurately without manual operation by the user, so the whole process is fully automated and the extraction efficiency of the table information is greatly improved.

Optionally, the determining a vertical dividing line in the image according to the gray scale distribution of the image in the horizontal direction includes: counting the gray value of each pixel point in the image to obtain a statistical result, wherein the statistical result is used for indicating the gray distribution of the image in the horizontal direction; determining a text boundary in the image according to the statistical result, wherein the change rate of the gray distribution of the position of the text boundary in the image is greater than a first threshold value; determining a vertical segmentation line in the image based on the text boundaries, the vertical segmentation line being located between adjacent text boundaries in the image.

Through this optional manner, on the one hand, even if no border line exists between the columns of the table, the gray distribution in the horizontal direction is counted and the text boundaries of the columns are detected according to the change rate of the gray distribution, so the vertical dividing line is located more accurately and each column of the table can be extracted accurately after division by the vertical dividing line. On the other hand, the position of the vertical dividing line is determined automatically, without requiring the user to perform the cumbersome operation of inputting the column boundary positions, so the extraction efficiency of the table information is greatly improved.

Optionally, the counting the gray value of each pixel point in the image to obtain a statistical result includes: binarizing the gray value of each pixel point in the image to obtain gray data, wherein the gray data comprises the binarized gray value of each pixel point in the image;

and summing the binary gray values in the gray data according to columns to obtain the statistical result.

Optionally, the determining a text boundary in the image according to the statistical result includes:

acquiring difference data according to the statistical result, wherein the difference data is used for indicating the gray value difference between adjacent positions of the image in the horizontal direction;

determining a text boundary in the image according to the difference data, wherein the gray value difference between the position of the text boundary in the image and the adjacent position is larger than the first threshold value.

Optionally, the text includes at least one line of text, and before determining a horizontal dividing line in the image according to a starting line of a line feed text in the text, the method further includes:

respectively acquiring semantic similarity between the at least one line of text and a first line text, wherein the first line text is the first line of text below a header text;

according to the semantic similarity corresponding to each line of text, determining a starting line of the line feed text in the at least one line of text, wherein the semantic similarity between the starting line and the first line of text is greater than a second threshold value.

Through this optional manner, on the one hand, even if no border line exists between the rows of the table and line-feed text appears in cells, the starting line of the line-feed text is accurately identified according to the semantic similarity between it and the first line text, the horizontal dividing line is located more accurately according to the starting line of the line-feed text, and each row of the table can be extracted accurately after division by the horizontal dividing line. On the other hand, the user does not need to perform the cumbersome operation of inputting split point positions; instead, the position of the horizontal dividing line is determined automatically, so the extraction efficiency of the table information is greatly improved.

Optionally, the respectively obtaining semantic similarities between the at least one line of text and the first line of text includes:

respectively carrying out entity recognition on the at least one line text to obtain entity information of each line text, wherein the entity information comprises an entity label corresponding to each word in one line text;

respectively extracting the characteristics of the entity information of the at least one line of text to obtain a characteristic vector of each line of text;

and acquiring semantic similarity between each line of text and the first line of text according to the similarity of the feature vectors between each line of text and the first line of text.

Optionally, the performing feature extraction on the entity information of the at least one line of text respectively to obtain a feature vector of each line of text includes:

determining a high-frequency line in the entity information of the at least one line of text according to the entity information of each line of text, wherein the high-frequency line is the entity information of one line of text that occurs most frequently among the entity information of the at least one line of text;

and respectively comparing the entity information of the at least one line of text with the high-frequency line to obtain a first part of the feature vector of each line of text, wherein the first part is used for indicating the consistency degree between the entity information of the corresponding line of text and the high-frequency line.

Optionally, the performing feature extraction on the entity information of the at least one line of text respectively to obtain a feature vector of each line of text includes:

determining high-frequency words in the entity information of the at least one line of text according to the entity information of each line of text, wherein the high-frequency words are entity labels whose word frequency ranks within a preset number among the entity labels in the entity information of the at least one line of text;

and respectively comparing the entity labels in the entity information of the at least one line of text with the high-frequency words to obtain a second part of the feature vector of each line of text, wherein the second part is used for indicating whether each entity label in the entity information of the corresponding line of text is the high-frequency word.

Optionally, before the segmenting the image according to the vertical segmentation line and the horizontal segmentation line, the method further comprises:

identifying header text in the text;

identifying a table tail text in the text;

determining a table area in the image according to the header text and the table tail text;

the segmenting the image according to the vertical segmentation line and the horizontal segmentation line comprises:

and segmenting the table area in the image according to the vertical segmentation line and the horizontal segmentation line.

Optionally, the identifying the header text in the text includes:

searching in the text by using a keyword according to the sequence from top to bottom;

and taking the first line text matched with the keyword as the header text.

Optionally, the identifying the table tail text in the text includes:

searching in the text by using a regular expression according to the sequence from bottom to top;

and taking the first line text matched with the regular expression as the table tail text.

Optionally, the acquiring the image and the text comprises at least one of:

converting a page of a Portable Document Format (PDF) file into the image, and extracting text stored in the page of the PDF file; or

acquiring an image, and performing character recognition on the image to obtain the text.

In a second aspect, a table information extraction device is provided, where the table information extraction device has a function of extracting table information in the first aspect or any one of the options of the first aspect. The extraction device for the table information comprises at least one module, and the at least one module is used for realizing the extraction method for the table information provided by the first aspect or any one of the optional modes of the first aspect. For specific details of the table information extracting apparatus provided in the second aspect, reference may be made to the first aspect or any optional manner of the first aspect, and details are not described here.

In a third aspect, a computing device is provided, where the computing device includes a processor configured to execute instructions to cause the computing device to perform the method for extracting table information provided in the first aspect or any one of the alternatives of the first aspect. For specific details of the computing device provided by the third aspect, reference may be made to the first aspect or any optional manner of the first aspect, and details are not described here.

In a fourth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the instruction is read by a processor to enable a computing device to execute the method for extracting table information provided in the first aspect or any one of the optional manners of the first aspect.

In a fifth aspect, a computer program product is provided, which, when run on a computing device, causes the computing device to execute the method for extracting table information provided in the first aspect or any one of the alternatives of the first aspect.

In a sixth aspect, a chip is provided, which, when running on a computing device, causes the computing device to execute the method for extracting table information provided in the first aspect or any one of the alternatives of the first aspect.

Drawings

FIG. 1 is a schematic diagram of a table with complete table lines provided in an embodiment of the present application;

FIG. 2 is a schematic diagram of a table without table lines provided by an embodiment of the present application;

FIG. 3 is a diagram of a system architecture 100 according to an embodiment of the present application;

FIG. 4 is a diagram of a system architecture 200 according to an embodiment of the present application;

FIG. 5 is a flowchart of a method 300 for extracting table information according to an embodiment of the present application;

FIG. 6 is a schematic diagram of vertical dividing line positioning provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of vertical dividing line positioning provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of horizontal dividing line positioning provided by an embodiment of the present application;

FIG. 9 is a schematic diagram of horizontal dividing line positioning provided by an embodiment of the present application;

FIG. 10 is a schematic diagram of horizontal dividing line positioning and image segmentation provided by an embodiment of the present application;

FIG. 11 is a flowchart of a method 400 for extracting table information according to an embodiment of the present application;

FIG. 12 is a schematic structural diagram of an apparatus 500 for extracting table information according to an embodiment of the present application;

FIG. 13 is a schematic structural diagram of a computing device 600 according to an embodiment of the present application.

Detailed Description

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution. It will be further understood that, although the following description uses the terms first, second, etc. to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first threshold may be referred to as a second threshold, and similarly, a second threshold may be referred to as a first threshold, without departing from the scope of various described examples. Both the first threshold and the second threshold may be thresholds, and in some cases, may be separate and distinct thresholds.

The term "at least one" in this application means one or more, and the term "plurality" in this application means two or more, for example, the plurality of second messages means two or more second messages. The terms "system" and "network" are often used interchangeably herein.

It should also be understood that the term "if" may be interpreted to mean "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined [a stated condition or event]" or "if [a stated condition or event] is detected" may be interpreted to mean "upon determining [the stated condition or event]", "in response to determining [the stated condition or event]", "upon detecting [the stated condition or event]", or "in response to detecting [the stated condition or event]", depending on the context.

The following exemplarily introduces an application scenario of the present application.

The method for extracting table information provided by the embodiments of the present application can be applied to a scenario of extracting table information from a Portable Document Format (PDF) file. For example, the method is applied to a scenario of extracting a bank statement from a PDF file. The application scenario is briefly described below.

Unstructured text information extraction technology automatically mines massive text data through algorithms, rules, and the like, and extracts useful information for structured storage, which can greatly improve the efficiency with which people obtain information. The PDF format is a common electronic document format, mainly used for document presentation and printing. The storage structure of a PDF file comprises four types of objects: images, lines, boxes, and characters, and these objects are displayed at designated positions of the document according to stored horizontal and vertical coordinates.

The PDF table extraction technique is mainly directed at tables with complete table lines. A table with complete table lines is shown, for example, in FIG. 1. For such a table, the table can be inferred by parsing the line objects in the PDF storage structure, or the table lines and table region can be located by image processing techniques.

However, current schemes are not accurate enough when extracting a table without table lines or with incomplete table lines, and in particular cannot handle the case in which the content of a cell wraps onto multiple lines. Extracting table information from a table without table lines in a PDF file is especially difficult. This is because data storage in a PDF file is unstructured: for text data, a PDF file stores only characters and the positions of characters; it does not store words or sentences, nor does it have any concept of a table. What appears to the human eye as a table is, in its underlying storage, only a combination of characters and lines. Without table lines, it is difficult for a computer to determine which region is the table content, where to split horizontally, and where to split vertically. Moreover, when there are no table lines and the content wraps, the wrapped content interferes with the extraction of cell content, and information is easily missed.

In view of the above, the present application provides a table information extraction method that can solve the problem of line-feed text in a table without table lines; such a table is shown, for example, in FIG. 2. By applying the method provided in the embodiments of the present application, the vertical dividing lines of a table without table lines in a PDF file can be identified automatically, the multi-line line-feed text in such a table can be identified automatically (that is, horizontal dividing line identification), and a table without table lines in a PDF file can be extracted fully automatically end to end. In particular, for data sources that are mainly table-structured, such as statements, delivery lists, and financial reports, the method provided by the present application can effectively improve information acquisition efficiency.

The following embodiments mainly revolve around these points:

(I) How to realize fully automatic, end-to-end extraction of a table without table lines in a PDF file. For example, table extraction is fully automated based on an image processing technique and a Natural Language Processing (NLP) technique.

(II) How to realize automatic identification of the vertical dividing lines of a table without table lines in a PDF file. For example, a method for positioning the vertical dividing lines is proposed based on the gray distribution of the image.

(III) How to realize automatic identification of the horizontal dividing lines of a table without table lines in a PDF file. For example, a method for identifying multi-line line-feed text in a table is provided using a text similarity algorithm, thereby solving the automatic positioning of the horizontal dividing lines of the table.

For ease of understanding, some concepts related to terms referred to in the embodiments of the present application will be described below.

Entity recognition: identifying entities with specific meanings in text, mainly including person names, place names, organization names, proper nouns, and the like.

Regular expression: a logical formula for operating on character strings, in which a "regular string" is composed of predefined specific characters and combinations of those characters; the "regular string" expresses filtering logic to be applied to character strings.

Gray scale: a representation of an image in black and white, in which different brightness levels have different gray values; the gray value range is generally 0 to 255, with white being 255 and black being 0. The three-dimensional Red, Green and Blue (RGB) data of a color image can be converted into two-dimensional gray data through a formula.
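
As an illustration only, a minimal sketch of one commonly used conversion (the ITU-R BT.601 luminance weighting); the embodiments do not prescribe a particular formula:

    import numpy as np

    def rgb_to_gray(rgb):
        """Convert an H x W x 3 RGB array into an H x W gray matrix (values 0-255)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # Commonly used luminance weighting; other weightings are equally possible.
        return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)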

The system architecture provided by the embodiments of the present application is described below.

Referring to FIG. 3, the present embodiment provides a system architecture 100. The system architecture 100 is illustrative of the hardware environment for the method 300 described below. The system architecture 100 includes: a terminal 101 and an information extraction platform 110. The terminal 101 is connected to the information extraction platform 110 through a wireless network or a wired network.

The terminal 101 may be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and a laptop computer. The terminal 101 has installed and runs an application program that supports extraction of table information. The application program may be a client application or a browser application. For example, the application program may be a word processing application, a social application, a PDF application, and the like. Illustratively, the terminal 101 is a terminal used by a user, and a user account is logged in to the application running in the terminal 101.

The information extraction platform 110 may be at least one of a computing device, a plurality of computing devices, a cloud computing platform, and a virtualization center. The information extraction platform 110 is used to provide background services for applications that support extraction functionality of form information. Alternatively, the information extraction platform 110 and the terminal 101 may cooperate in the extraction process of the form information. For example, the terminal 101 is used to provide a source file (such as a PDF file or an image), the information extraction platform 110 is used to extract table information from the source file, and the terminal 101 is used to display the table information.

Optionally, the information extraction platform 110 includes: a computing device 1101 and a database 1102. The computing device 1101 is configured to provide background services related to extraction of form information, such as receiving a source file sent by the terminal 101, and sending the extracted form information to the terminal 101. The database 1102 may be used to cache large amounts of table information or configurations required to extract table information, and the database 1102 may provide stored data to the computing device 1101 when needed.

Terminal 101 may generally refer to one of a plurality of terminals, or a set of a plurality of terminals; computing device 1101 may broadly refer to one of multiple computing devices, or a collection of multiple computing devices; database 1102 may generally refer to one of a plurality of databases, or a collection of databases. It should be understood that if the terminal 101, computing device 1101, or database 1102 is a collection of multiple devices, although not shown in fig. 3, the system 100 described above may also include other terminals, other computing devices, or other databases. The present embodiment does not limit the number and types of each device in the system 100.

Referring to FIG. 4, the present embodiment provides another system architecture 200. The system architecture 200 is an illustration of the software architecture of the method 300 described below. The system architecture 200 includes a plurality of functional modules, each of which is a software module. Optionally, each functional module is implemented using the Python language. The system architecture 200 includes: a PDF file 201, an image extraction module 202, a text extraction module 203, a vertical dividing line positioning module 204, a table area positioning module 205, a horizontal dividing line positioning module 206, and a table extraction module 207. The functions of the modules are as follows:

PDF file 201: a file in PDF format containing a table without ruled lines.

An image extraction module 202, configured to convert a page of a PDF file into an image.

The text extraction module 203 is configured to extract texts in a storage structure in the PDF file, where the texts are mainly character object data.

The vertical dividing line positioning module 204 is configured to determine the vertical dividing line positions of the table according to the gray distribution of the image in the horizontal direction.

The table area positioning module 205 is configured to find a first line text and a last line text of the table by using an information retrieval technology, and position the table area.

The horizontal dividing line positioning module 206 is configured to identify multi-line line-feed text in the table area using a text similarity algorithm and then find the positions of the horizontal dividing lines.

The table extraction module 207 is configured to obtain complete table structure data after division by the vertical dividing lines and the horizontal dividing lines, and to extract the text data in each cell.

The system architecture 100 and the system architecture 200 are introduced above, and a method flow for extracting table information based on the system architecture provided above is exemplarily described below by way of a first embodiment.

Embodiment One

Referring to FIG. 5, FIG. 5 is a flowchart of a method 300 for extracting table information according to an embodiment of the present application. The method 300 includes S301 to S308.

S301, the computing device acquires a source file.

The format of the source file covers a variety of cases. Optionally, the source file is a PDF file; for example, the source file is a table in PDF format, such as a bank statement in PDF format. Optionally, the source file is an image; for example, the source file is a screenshot of a table. The source file includes the table to be extracted, and this table may be an irregular table. An irregular table includes at least one of a table without table lines, a table with incomplete table lines, and a table containing line-feed text. For example, referring to FIG. 2, FIG. 2 shows a page of a PDF file including an irregular table that has no table lines and whose cells contain line-feed text; this embodiment describes how to extract the table information of such an irregular table from the PDF file.

S302, the computing device obtains images and texts according to the source file.

The computing device extracts images and text from the source file to extract form information using the images and text.

The implementation of extracting images and text may differ for different types of source files, as illustrated by case one and case two below.

In case one, the source file is a PDF file.

How to extract an image from a PDF file includes a variety of implementations. For example, the computing device converts a page of the PDF file into an image, thereby obtaining the image. Optionally, the computing device converts each page of the PDF file into an image; for example, if the PDF file includes N pages, the N pages may be converted into N images. Alternatively, the computing device determines the page where the table is located and converts that page into an image. Optionally, the size of the converted image is the same as the size of the page of the PDF file; for example, for each page of the PDF file, the computing device converts it into an image of equal size. Alternatively, the size of the converted image is smaller than the size of the page of the PDF file. N is a positive integer.

How to extract text from a PDF file includes a number of implementations. For example, the computing device extracts the text stored in a page of the PDF file. Specifically, the computing device reads the PDF file, parses the underlying storage structure of the PDF file, and obtains character object data C from the storage structure. The character object data C includes each character and the coordinates of the character in the PDF page.
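
For illustration, a minimal sketch of case one; the use of the PyMuPDF library (imported as fitz) and the file names are assumptions, since the embodiment does not name a specific parsing library:

    import fitz  # PyMuPDF; an assumed choice of PDF parsing library

    def extract_page_image_and_chars(pdf_path, page_index=0):
        """Convert one PDF page into an image file and read its character/word objects."""
        doc = fitz.open(pdf_path)
        page = doc[page_index]
        # Render the page to an image of the same size as the page.
        image_path = "page_%d.png" % page_index
        page.get_pixmap().save(image_path)
        # Each entry: (x0, y0, x1, y1, text, block_no, line_no, word_no),
        # i.e. text fragments together with their coordinates on the page.
        char_objects = page.get_text("words")
        doc.close()
        return image_path, char_objects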

In case two, the source file is an image.

After acquiring the image, the computing device performs character recognition on the image to obtain the text. The character recognition is performed, for example, by an optical character recognition (OCR) method.
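
A minimal sketch of case two, assuming the pytesseract wrapper around the Tesseract OCR engine is available; the embodiment only requires that some character recognition method be applied:

    from PIL import Image
    import pytesseract  # assumed OCR engine; any character recognition method may be substituted

    def ocr_image(image_path):
        """Recognize the characters contained in a table image."""
        return pytesseract.image_to_string(Image.open(image_path))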

S303, the computing device determines a vertical dividing line in the image according to the gray distribution of the image in the horizontal direction.

In an irregular table, border lines between columns are often missing entirely or in part; for example, in the table shown in FIG. 2, no border lines are used to separate the columns. In S303, even if the table lacks column border lines, the vertical dividing line can be positioned automatically using an image processing technique, and the table information of such a table can be extracted efficiently using the vertical dividing line.

How to locate the vertical dividing lines using image processing techniques includes a variety of implementations. For example, the computing device determines the gray distribution of each position in the image in the horizontal direction, finds the positions where the gray distribution satisfies a condition as the positions of the vertical dividing lines in the image, and thereby searches out the vertical dividing lines automatically. This is exemplified below by S3031 to S3034.

S3031, the computing device reads the gray value of each pixel point in the image.

The gray values of the pixels in the image may be referred to as gray image data. Optionally, the gray value of each pixel point in the image is stored in the computing device in a form of a gray matrix. Each element of the gray matrix is a gray value of one pixel point. In some of the following examples, the grayscale matrix is denoted as grayscale matrix G.

S3032, the computing device performs statistics on the gray value of each pixel point in the image to obtain a statistical result.

The statistical result indicates the gray distribution of the image in the horizontal direction. For example, the statistical result includes a plurality of values, each value corresponding to one vertical line in the image, and the variation among the values reflects the distribution of the gray values of the image in the horizontal direction. Each value in the statistical result represents the overall gray distribution of the pixel points on one vertical line: the smaller the value, the more blank the vertical line is; the larger the value, the more text characters or lines appear on the vertical line. For example, referring to FIG. 6, FIG. 6 is a schematic diagram of a statistical result indicating the gray distribution in the horizontal direction according to an embodiment of the present application; FIG. 6 shows the statistical result in the form of a curve, where the fluctuation of the curve reflects the distribution of gray values in the horizontal direction of the image. Comparing the image in the upper part of FIG. 6 with the curve in the lower part, it can be seen that the peaks of the curve correspond to the positions where text appears in the table, the valleys correspond to the blank spaces between different columns of text, and the positions where the change rate of the curve is large, that is, where a peak jumps to a valley or a valley jumps to a peak, are exactly the column boundaries.

How to use the gray values to compute the gray distribution statistics includes various implementations, which are exemplified by steps a and b below.

Step a: the computing device binarizes the gray value of each pixel point in the image to obtain gray data.

The binarization is to map the gray value from 0 to 255 into two values. The gray data includes a binary gray value of each pixel in the image, and the binary gray value is, for example, 0 or 1.

How the binarization is performed includes a variety of implementations. Optionally, the computing device binarizes in an inverted manner, so that the gray value of a blank position in the image is converted to 0 and the gray value of a position where text appears is converted to 1. Specifically, for a pixel point in the image, the computing device determines whether its gray value is greater than a gray value threshold; if so, 0 is used as the binarized gray value of the pixel point, and if its gray value is less than or equal to the gray value threshold, 1 is used as the binarized gray value. In this way, the gray data is binarized with the gray values inverted. Taking a gray value threshold of 254 as an example, the computing device reads the gray matrix G, converts the gray values in G that are greater than 254 into 0, and converts the gray values less than or equal to 254 into 1, so as to obtain inverted 0-1 binarized gray data. In some examples below, the inverted 0-1 binarized gray data is denoted as gray data G'.

Step b: the computing device sums the binarized gray values in the gray data by columns to obtain the statistical result.

For example, the computing device adds up the binarized gray values on each vertical line in the image, so that the binarized gray values of the pixel points on the same vertical line are mapped to one sum, and the sums corresponding to the vertical lines together form the statistical result, thereby realizing the statistics of the gray distribution in the horizontal direction. In other words, the computing device uses the column sums of the gray matrix as the gray distribution statistics in the horizontal direction. For example, the computing device adds up the binarized gray values of each column in the gray data G' to obtain the statistical result G'x.
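
A minimal sketch of steps a and b, assuming the gray matrix G is held as a NumPy array and using the example gray value threshold of 254:

    import numpy as np

    def horizontal_gray_statistics(gray_matrix, threshold=254):
        """Inverted binarization (step a) followed by column-wise summation (step b)."""
        # Blank background (gray value above the threshold) becomes 0, text becomes 1.
        binarized = np.where(gray_matrix > threshold, 0, 1)   # gray data G'
        # Column sums indicate the gray distribution in the horizontal direction.
        return binarized.sum(axis=0)                          # statistical result G'x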

S3033, the computing device determines the text boundaries in the image according to the statistical result.

Since the statistical result indicates the gray distribution of the image in the horizontal direction, the computing device can compute the rate of change of the gray distribution at each position in the image using the gray distribution in the horizontal direction. The computing device may compare the rate of change of the gray distribution at each position with a first threshold and determine the positions where the rate of change is greater than the first threshold as the positions of text boundaries in the image. The text boundaries are, for example, the text boundaries of a column, that is, the left and right boundaries of a column of text.

How to detect text boundaries using the rate of change of the gray distribution includes various implementations. This is exemplified below by steps a and b.

Step a: the computing device acquires difference data according to the statistical result, where the difference data indicates the gray value difference between adjacent positions of the image in the horizontal direction.

Optionally, the computing device calculates the rate of change of the gray distribution as the difference in gray values between adjacent positions. The gray value difference is, for example, the absolute value of the difference between the gray values. For example, the computing device subtracts every two adjacent values in the horizontal direction and takes the absolute value of the difference to obtain the difference data. For example, after the computing device counts the gray distribution in the horizontal direction to obtain the statistical result G'x, it may subtract adjacent values in G'x and take the absolute value of each difference to obtain the difference data D.

Step b: the computing device determines the text boundaries in the image according to the difference data.

Optionally, the computing device filters out the positions of the text boundaries using a threshold. For example, the computing device determines, from the difference data, whether the gray value difference between each position in the image and its adjacent position is greater than the first threshold, and if so, uses that position as the position of a text boundary. Taking the first threshold as a threshold d', the computing device screens out the values in the difference data D that are larger than d', and the abscissa of each such value is used as the position of a text boundary. The threshold d' may be a parameter set according to the requirements of different files.

It should be understood that the above describes the process of determining one text boundary as an example; the table may include multiple columns of text, and the image may include multiple text boundaries. For example, a plurality of values may be screened out from the difference data D, and the abscissas X' = {x1, x2, …, xi, …} where these values are located are each used as the position of one text boundary, where xi represents the location of one text boundary.
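
A minimal sketch of steps a and b, continuing the NumPy-based example above; the first threshold d' is assumed to be supplied by the caller:

    import numpy as np

    def text_boundaries(gx, d_threshold):
        """Find the column positions where the gray distribution changes sharply."""
        diff = np.abs(np.diff(gx))               # difference data D between adjacent columns
        # Columns whose gray value difference exceeds the first threshold d'
        # are taken as the text boundary positions X' = {x1, x2, ..., xi, ...}.
        return np.where(diff > d_threshold)[0]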

S3034, the computing device determines a vertical segmentation line in the image according to the text boundary, wherein the vertical segmentation line is positioned between adjacent text boundaries in the image.

The computing device may take a position between adjacent text boundaries as the position of the vertical dividing line. Optionally, the computing device determines the middle position between adjacent text boundaries as the position of the vertical dividing line; in other words, the computing device uses the middle value of a pair of text boundaries, for example (x1 + x2)/2, as the position of a vertical dividing line. For example, after the computing device obtains the positions X' = {x1, x2, …, xi, …} of the multiple text boundaries, it may take the midpoints of adjacent pairs of text boundaries as the locations of the final vertical dividing lines. In some examples below, the positions of the vertical dividing lines are denoted as X.
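
A minimal sketch of S3034; pairing the boundaries into left/right edges of consecutive text columns is an assumption made for illustration:

    def vertical_dividing_lines(boundaries):
        """Take the midpoint of each pair of adjacent text boundaries as a vertical dividing line position."""
        return [(boundaries[i] + boundaries[i + 1]) // 2
                for i in range(0, len(boundaries) - 1, 2)]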

Exemplarily, referring to FIG. 6, FIG. 6 shows a schematic diagram of vertical dividing line positioning, where the vertical dotted lines in the upper half of FIG. 6 are the located vertical dividing line positions, the lower half is the statistical result of the horizontal gray distribution of the PDF image, and the vertical dotted lines in the lower half are the column text boundaries after threshold filtering. Each vertical dividing line is located in the middle of two column text boundaries.

With reference to FIG. 7, FIG. 7 shows an architecture diagram of the vertical dividing line positioning module. The input parameters of the vertical dividing line positioning module include the image, and the vertical dividing line positioning sub-module is configured to perform the steps of gray distribution statistics, adjacent gray difference calculation, threshold screening, vertical dividing line position calculation, and the like. The output parameters of the vertical dividing line positioning sub-module include the positions of the vertical dividing lines. The gray distribution statistics step specifically comprises reading the gray values of the image, binarizing and inverting the gray values, computing the gray distribution in the horizontal direction, and the like.

By executing S303, on the one hand, even if there is no border line between the columns of the table, the gray distribution in the horizontal direction is counted and the text boundaries of the columns are detected according to its change rate, so the vertical dividing lines are located more accurately and each column of the table can be extracted accurately after division by the vertical dividing lines. On the other hand, the positions of the vertical dividing lines are determined automatically, without the cumbersome operation of the user inputting the column boundary positions, so the extraction efficiency of the table information is greatly improved.

S304, the computing device determines a table area in the image.

The computing device may automatically locate the table region, as illustrated below by steps I through III.

Step I: the computing device extracts at least one line of text from the text according to the text and the vertical dividing lines.

Taking a PDF file as an example, the computing device divides a page of the PDF file into columns according to the vertical dividing lines and into rows with each text row as a unit, thereby dividing the entire page. For example, the computing device performs column segmentation on the character object data C according to the positions X of the vertical dividing lines to obtain segmented line texts S', where each line text si' in S' is a line text with a preset symbol inserted at the position of each vertical dividing line. The preset symbol is used to identify the position of the vertical dividing line and is, for example, a separator.
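
A minimal sketch of step I for one line of text; the layout of the character objects (a list of dictionaries with an x coordinate and a character) and the separator symbol are assumptions made for illustration:

    def split_line_by_columns(line_chars, split_positions, sep="|"):
        """Build a segmented line text si' with the preset separator inserted at each vertical dividing line position."""
        line_chars = sorted(line_chars, key=lambda c: c["x"])   # e.g. [{"x": 12.0, "char": "A"}, ...]
        splits = iter(sorted(split_positions))
        next_split = next(splits, None)
        out = []
        for c in line_chars:
            # Emit a separator each time a vertical dividing line position is crossed.
            while next_split is not None and c["x"] > next_split:
                out.append(sep)
                next_split = next(splits, None)
            out.append(c["char"])
        # Separators for any remaining dividing lines to the right of the text (empty cells).
        while next_split is not None:
            out.append(sep)
            next_split = next(splits, None)
        return "".join(out)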

Step II: the computing device identifies the header text and the table tail text in the at least one line of text.

The header text is, for example, the first line text of the table. The table tail text is, for example, the last line text of the table, which is also referred to as the tail line text.

How the header text is recognized includes a variety of implementations. For example, the computing device identifies the header text in the at least one line of text using an information retrieval technique. In one possible implementation, the computing device searches the text with keywords in top-to-bottom order and takes the first line text that matches a keyword as the header text. For example, after obtaining the line texts S', the computing device searches each line text in S' from top to bottom using the keywords and uses the first matched line text as the header text of the table to be extracted.

The keyword may be a header keyword; for example, the keywords include entity labels such as Date, Value, Transaction detail, and Amount. How to obtain the keywords includes various implementations. Optionally, the computing device obtains the headers of sample tables in advance, performs statistics on them to obtain a statistical result, and determines the keywords according to the statistical result: for example, each word in the headers of the sample tables is extracted, each word is converted into a corresponding entity label, the obtained entity labels are clustered, and the keywords are obtained according to the clustering result. Optionally, the keywords are preconfigured, and the computing device may obtain the keywords from configuration information. Optionally, different keywords may be set for different files.

How to identify the table tail text includes a number of implementations. For example, the computing device identifies the table tail text in the at least one line of text using an information retrieval technique and a regular expression. In one possible implementation, the computing device searches the text with a regular expression in bottom-to-top order and takes the first line text that matches the regular expression as the table tail text. For example, after obtaining the line texts S', the computing device searches each line text in S' from bottom to top using the regular expression and uses the first matched line text as the table tail text of the table to be extracted.

The regular expression is, for example, a tail-row regular expression. Optionally, the computing device obtains the tail rows of sample tables in advance, performs statistics on them to obtain a statistical result, and determines the regular expression according to the statistical result: for example, each word in the tail rows of the sample tables is extracted, each word is converted into a corresponding entity label, the obtained entity labels are clustered, and the regular expression is obtained according to the clustering result. Optionally, the regular expression is preconfigured, and the computing device may obtain the regular expression from configuration information. Optionally, different regular expressions may be set for different files.

Step III: the computing device determines the table area in the image according to the header text and the table tail text.

the computing device may screen out all line texts in the middle of the header text and the footer text to obtain a line text S in the table region. For example, referring to fig. 8, two lines in the black box are respectively the head text and the tail text matched to obtain the upper and lower horizontal dotted lines, and locate the table area.

By executing S304, the computing device can automatically locate the table area, which eliminates the step of manually inputting the position of the table area and improves efficiency.

S305, the computing device identifies a starting line of the line feed text in the text.

An irregular table often has no border lines between rows or is missing some of them. For example, referring to fig. 2, the table shown in fig. 2 has no border lines separating its rows. Through S305, even if the table lacks row border lines, the start line of the line-feed text is automatically recognized by the NLP technique, the position of the start line is used to automatically locate the horizontal dividing line, and the horizontal dividing line is then used to efficiently extract the table information of the table lacking row border lines.

The line-feed text includes at least one line of text; for example, the line-feed text is multi-line text that includes a plurality of lines. Line-feed text is, for example, text that is semantically similar but split across different lines, such as a single sentence. How to identify the start line of the line-feed text by using NLP includes various implementations; for example, the computing device may identify the start line by using line-text similarity calculation, as illustrated below by S3051 to S3053.

S3051, the computing device obtains at least one line of text.

For example, the computing device determines the at least one line of text included in the table region based on the position of the vertical dividing line, the position of the header text, and the position of the tail text.

S3052, the computing device respectively obtains the semantic similarity between each of the at least one line text and the first line text.

The first line text is the first line of text below the header text. For example, after obtaining the line texts S in the table region, the computing device temporarily removes the header line from S and takes the line immediately below the header as the first line text, so as to perform the subsequent operations with the first line text.

How to calculate the semantic similarity between a line text and the first line text includes various implementations, which are exemplified by steps A to C below.

Step A, the computing device respectively performs entity recognition on the at least one line of text to obtain the entity information of each line of text.

The entity information comprises an entity label corresponding to each word in a line text. The entity tag is used to identify the type of entity, for example, the entity tag includes at least one of a date, an amount, a person name, a place name, and an organization.

In one possible implementation, for any line of text in the at least one line of text, the computing device identifies an entity corresponding to a word contained in the line of text, and replaces the word contained in the line of text with an entity tag to obtain entity information. For example, for a line text si in the line text S of the table area, the computing device identifies an entity in the line text si, and replaces an original word or phrase in the line text si with an entity tag to obtain entity information ti.
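A minimal sketch of step A, with regex-based recognizers standing in for a real named-entity-recognition model; the tag names and patterns below are illustrative assumptions.

```python
import re

# Illustrative recognizers; a production system would use a trained NER model instead.
ENTITY_PATTERNS = [
    ("DATE",   re.compile(r"\b\d{4}-\d{2}-\d{2}\b")),
    ("AMOUNT", re.compile(r"\b\d[\d,]*\.\d{2}\b")),
]

def to_entity_info(line: str) -> tuple[str, ...]:
    """Replace each recognized word in the line with its entity tag; keep other words as-is."""
    tagged = []
    for token in line.split():
        tag = next((name for name, pattern in ENTITY_PATTERNS if pattern.search(token)), None)
        tagged.append(tag if tag else token)
    return tuple(tagged)
```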

Step B, the computing device respectively performs feature extraction on the entity information of the at least one line of text to obtain the feature vector of each line of text.

The computing device may convert the entity information into a feature vector by performing feature extraction. For example, after obtaining the entity information T of all the line texts in the table region, the computing device converts each line ti in T into a feature vector vi.

How to compute the feature vectors includes a variety of implementations. In one possible implementation, the computing device computes the feature vectors based on the high-frequency line and the high-frequency words. For example, the feature vector includes two parts whose values are determined by comparison with the high-frequency line and with the high-frequency words, respectively. To distinguish the different parts of the feature vector, the part determined by comparison with the high-frequency line is referred to as the first part, and the part determined by comparison with the high-frequency words is referred to as the second part. It should be understood that "first" and "second" are only used to distinguish one part from the other and do not imply an order. For example, the length of the feature vector vi is 2N and the vector consists of two parts: bits 1 to N are the first part and are determined by comparing the line text ti with the high-frequency line t', while bits N+1 to 2N are the second part and are determined by comparing each word in the line text ti with the high-frequency words. As another example, the first part may instead occupy the latter half of the feature vector and the second part the former half. Optionally, the two parts of the feature vector are of equal length, that is, the length of the first part equals the length of the second part. Optionally, the total length of the feature vector is determined by the number of high-frequency words; for example, the total length is 2 times the number of high-frequency words, so if the number of high-frequency words is N, the length of the feature vector is set to 2N.

How to determine the high-frequency line includes a variety of implementations. Optionally, the computing device counts the high-frequency line from the line texts in which words have been replaced by entity tags (that is, the entity information above). Specifically, the computing device determines the high-frequency line in the entity information of the at least one line of text based on the entity information of each line of text. For example, the computing device counts, over all lines T = {t1, t2, ..., ti, ...}, the number of occurrences of identical lines and finds the highest-frequency line t'.

The high-frequency line is the entity information of one line of text, namely the line that appears most frequently among the entity information of the at least one line of text; for example, the high-frequency line t' is the most frequent element of T = {t1, t2, ..., ti, ...}.
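Under the assumption that each line's entity information is stored as a tuple of entity tags (as in the sketch above), the high-frequency line can be found with a simple counter:

```python
from collections import Counter

def find_high_frequency_line(entity_lines: list[tuple[str, ...]]) -> tuple[str, ...]:
    """Return the entity-information line t' that occurs most often in T."""
    return Counter(entity_lines).most_common(1)[0][0]
```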

How to compute the feature vectors using the high frequency lines includes a variety of implementations. Optionally, the computing device compares the entity information of at least one line of text with the high frequency line, respectively, to obtain a first part of the feature vector of each line of text.

The first part is used for indicating the degree of consistency between the entity information of the corresponding line text and the high-frequency line. For example, if the entity information of the line text is consistent with the high-frequency line, each bit in the first part takes the first value; if the entity information of the line text is inconsistent with the high-frequency line, each bit in the first part takes the second value. Alternatively, the more consistent the entity information of the line text is with the high-frequency line (for example, the greater the vector similarity), the larger the value of each bit in the first part. Optionally, the first value is 1 and the second value is 0. For example, after obtaining the line text ti, the computing device compares ti with the high-frequency line t'; if ti is consistent with t', bits 1 to N of the feature vector vi are all set to 1, and conversely, if ti is inconsistent with t', bits 1 to N of vi are all set to 0. In this way, the feature obtained by comparing the entity information of the line text with the high-frequency line is represented by N identical bits of 1 or 0 in the feature vector.

How to determine the high-frequency words includes a number of implementations. Optionally, the computing device counts the high-frequency words from the line texts in which words have been replaced by entity tags (that is, the entity information above). Specifically, the computing device determines the high-frequency words in the entity information of the at least one line of text according to the entity information of each line of text. The high-frequency words are entity labels whose word frequency ranks within a preset number of top positions among the entity labels of the entity information of the at least one line of text. For example, the computing device counts the word frequencies of all words in T and outputs the high-frequency words W' = {w1, w2, w3, ..., wn} whose word frequencies rank in the top N, where N is an example of the number of high-frequency words (the preset number) and may be a parameter set according to the requirements of different files, for example, N = 4.
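A sketch of the word-frequency statistics, again assuming tuple-of-tags entity information; the parameter n corresponds to the preset number N above.

```python
from collections import Counter

def find_high_frequency_words(entity_lines: list[tuple[str, ...]], n: int = 4) -> list[str]:
    """Return the n entity tags with the highest word frequency over all lines (the set W')."""
    counts = Counter(tag for line in entity_lines for tag in line)
    return [tag for tag, _ in counts.most_common(n)]
```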

How to compute feature vectors using high frequency words includes a number of implementations. Optionally, the computing device compares the entity labels in the entity information of at least one line of text with the high-frequency words, respectively, to obtain a second part of the feature vector of each line of text.

The second part is used for indicating whether each entity label in the entity information of the corresponding line of text is a high-frequency word. For example, for an entity label in the entity information of the line text, if the entity label is a high-frequency word, the value of the bit corresponding to that entity label in the second part is the first value; if the entity label is not a high-frequency word, the value of the corresponding bit is the second value. For example, when calculating bits N+1 to 2N of the feature vector, the computing device determines, for each high-frequency word in W', whether it appears among the words of the line text ti; the corresponding bit of the feature vector is set to 1 if the high-frequency word appears in ti, and set to 0 otherwise.
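Putting the two parts together, a minimal sketch of the 2N-bit feature vector described above (first part: agreement with the high-frequency line; second part: presence of each high-frequency word):

```python
def build_feature_vector(line: tuple[str, ...],
                         high_freq_line: tuple[str, ...],
                         high_freq_words: list[str]) -> list[int]:
    """Build the 2N-bit vector: bits 1..N encode consistency with the high-frequency line,
    bits N+1..2N encode, for each high-frequency word, whether it occurs in the line."""
    n = len(high_freq_words)
    first_part = [1] * n if line == high_freq_line else [0] * n
    second_part = [1 if word in line else 0 for word in high_freq_words]
    return first_part + second_part
```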

Step C, the computing device obtains the semantic similarity between each line text and the first line text according to the feature vector similarity between that line text and the first line text.

After the computing device obtains the feature vector of each line text and the feature vector of the first line text, the similarity between two line texts can be obtained by calculating the feature vector similarity between their two feature vectors. Optionally, the feature vector similarity is the cosine similarity. Optionally, the computing device takes the feature vector similarity between a line text and the first line text as the semantic similarity between that line text and the first line text. For example, the computing device calculates the cosine similarity S = {s11, s12, ..., s1i, ...} between the feature vector v1 of the first line text and the feature vectors V = {v1, v2, ..., vi, ...} of all line texts, and uses the cosine similarity S as the semantic similarity between the first line text and all the line texts.
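A small sketch of the cosine similarity on these integer feature vectors:

```python
import math

def cosine_similarity(u: list[int], v: list[int]) -> float:
    """Cosine similarity between two feature vectors; returns 0.0 if either vector is all zeros."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0
```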

S3053, the computing device determines the starting line of the line-feed text in the at least one line text according to the semantic similarity corresponding to each line text, where the semantic similarity between the starting line and the first line text is greater than the second threshold.

After obtaining the semantic similarity of each line text in the at least one line text, the computing device may determine whether the semantic similarity of each line text is greater than the second threshold; if the semantic similarity of any line text is greater than the second threshold, that line text is determined to be a starting line of the line-feed text. The second threshold may be set in advance, for example as a parameter set according to the requirements of different files. For example, the computing device obtains the cosine similarity S = {s11, s12, ..., s1i, ...} and screens out the lines whose similarity is larger than a threshold s' as the starting lines of the multi-line line-feed text in the table, where the threshold s' is an example of the second threshold.
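A sketch of the threshold screening; the default threshold value below is an assumption, standing in for the configurable second threshold s'.

```python
def find_start_lines(similarities: list[float], threshold: float = 0.8) -> list[int]:
    """Return the indices of lines whose similarity to the first line text exceeds the threshold s'.
    The default value 0.8 is illustrative; s' is set per file in practice."""
    return [i for i, s in enumerate(similarities) if s > threshold]
```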

S306, the computing device determines a horizontal dividing line in the image according to the starting line of the line-feed text in the text.

After the computing device identifies the start line of the line-feed text, it may use the identified start line as a boundary to obtain the position of the horizontal dividing line. The horizontal dividing line is located above the start line of the line-feed text in the image; for example, the horizontal dividing line is located at the upper boundary of the start line of the line-feed text in the image. In some examples below, the position of the horizontal dividing line is denoted as position Y.
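Assuming the layout extraction provides a bounding box per line text (a hypothetical structure with a y_top field), the positions Y can be read off the start lines directly:

```python
def horizontal_line_positions(line_boxes: list[dict], start_line_indices: list[int]) -> list[float]:
    """Place a horizontal dividing line at the upper boundary (y_top) of each start line.
    line_boxes is an assumed per-line layout structure, e.g. {"y_top": ..., "y_bottom": ...}."""
    return [line_boxes[i]["y_top"] for i in start_line_indices]
```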

Combining the steps involved in S305 and S306, fig. 9 shows a schematic structural diagram of the horizontal dividing line positioning module. The input of the horizontal dividing line positioning module includes the text in the table region, and the module includes a line-text similarity calculation submodule, a threshold screening submodule, a horizontal dividing line positioning submodule, a line-feed cell content merging submodule, and the like.

For example, referring to fig. 10, fig. 10 is a schematic diagram of horizontal dividing line positioning. The vertical dotted lines in the left half of fig. 10 are the result of vertical dividing line positioning, the horizontal dotted lines are the result of horizontal dividing line positioning, and the lines marked by arrows are the starting lines of the multi-line line-feed text screened out by the threshold.

By executing S305 and S306, on the one hand, even if there are no border lines between the rows of the table and line-feed text occurs in cells, the start line of the line-feed text is accurately identified according to its semantic similarity to the first line text, so the horizontal dividing line is located more accurately according to the start line of the line-feed text, and after division with the horizontal dividing lines, each row of the table can be extracted accurately. On the other hand, the user does not need to perform the complicated operation of manually inputting the positions of the dividing lines; instead, the positions of the horizontal dividing lines are determined automatically, which greatly improves the efficiency of extracting the table information.

S307, the computing device divides the image according to the vertical dividing lines and the horizontal dividing lines to obtain at least one cell region of the table.

The computing device performs vertical division on the image according to the vertical dividing lines and horizontal division according to the horizontal dividing lines, thereby dividing the image into at least one cell region. The computing device may perform the division within the table region obtained in S304.

For example, the computing device segments the complete table by using the positions X of the vertical dividing lines obtained in S303 and the positions Y of the horizontal dividing lines obtained in S306, and merges the multiple lines of the line-feed text in each cell. Referring to fig. 10, the right half of fig. 10 is the final complete table structure obtained after horizontal dividing line division and cell content merging.
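A sketch of the grid division, assuming xs and ys are the sorted vertical and horizontal dividing positions with the outer borders of the table region included at both ends:

```python
def split_into_cells(xs: list[float], ys: list[float]) -> list[list[tuple[float, float, float, float]]]:
    """Split the table region into rows of cell rectangles (x0, y0, x1, y1) from the
    vertical positions X and horizontal positions Y, both assumed sorted and border-inclusive."""
    return [[(x0, y0, x1, y1) for x0, x1 in zip(xs, xs[1:])] for y0, y1 in zip(ys, ys[1:])]
```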

S308, the computing device obtains the table information according to the information included in the at least one cell region.

The computing device may take the information included in each cell region as the content of a cell in the table. The information included in a cell region is, for example, the text of the image in that cell region. In addition, if the information in a cell region consists of multiple lines of line-feed text, the computing device merges those lines and takes the merged content as the content of the cell. In this manner, by splitting with the dividing lines and merging the line-feed text, the computing device obtains a regular table. Optionally, the computing device exports the regular table into spreadsheet software (e.g., Excel) for storage. For example, referring to table 1 below, table 1 is an illustration of table information. By performing the present embodiment, the table information shown in table 1 can be extracted from fig. 2.
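A sketch of the final merging and export step; pandas and the file name are illustrative choices (the embodiment only requires exporting to spreadsheet software), and cell_texts[r][c] is assumed to hold the line texts falling into cell (r, c).

```python
import pandas as pd

def export_table(cell_texts: list[list[list[str]]], path: str = "table.xlsx") -> None:
    """Merge the line-feed text of each cell into a single value and export the table;
    the first row is assumed to be the header row."""
    merged = [[" ".join(lines) for lines in row] for row in cell_texts]
    frame = pd.DataFrame(merged[1:], columns=merged[0])
    frame.to_excel(path, index=False)  # writing .xlsx requires the openpyxl package
```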

TABLE 1

With reference to fig. 11, fig. 11 is a flowchart of a method 400 for extracting table information from a PDF file according to an embodiment of the present application, where the method 400 includes the following steps S401 to S413, for example.

S401, reading the gray value of each pixel point in the page of the PDF file.

S402, binarizing and inverting the gray value to obtain binary data of 0-1.

And S403, counting the gray scale of the image in the horizontal direction.

S404, calculating the differences between adjacent gray-scale values and screening the differences through a threshold to obtain the text boundaries of the columns.

S405, calculating the midpoint between the text boundaries of two adjacent columns to obtain the positions of the vertical dividing lines.

S406, searching the head and tail rows of the table by using the information retrieval and the regular expression, and positioning a table text area.

S407, performing entity recognition on each line of text, and replacing words in the text with entity labels.

And S408, counting the line frequency to obtain 1 high-frequency line.

And S409, counting word frequencies to obtain the N high-frequency words.

And S410, converting each line of text into a feature vector.

S411, calculating cosine similarity between the vector of the first line of text and the vector of each line of text.

S412, cosine similarity is screened through a threshold value, and a starting line of a table multi-line feed text is obtained.

And S413, segmenting according to the horizontal segmentation line and the vertical segmentation line, and combining the line feed texts in the cells to obtain a final complete table.
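The image-side half of this flow (S401 to S405) can be sketched as follows; the binarization cutoff, the threshold, and the assumption that the detected boundaries alternate between left and right edges of text columns are illustrative simplifications.

```python
import numpy as np

def vertical_line_positions(gray: np.ndarray, threshold: float) -> list[float]:
    """Rough sketch of S401-S405: binarize and invert the gray image, sum each column,
    difference adjacent sums, threshold the differences to get column text boundaries,
    and place a vertical dividing line midway between adjacent text columns."""
    binary = (gray < 128).astype(int)          # invert: dark (text) pixels become 1
    column_sums = binary.sum(axis=0)           # horizontal gray-scale statistics
    diffs = np.abs(np.diff(column_sums))       # differences between adjacent columns
    edges = np.flatnonzero(diffs > threshold)  # candidate text boundaries
    # Assume edges alternate left edge, right edge, left edge, ...; a dividing line is
    # placed midway between the right edge of one column and the left edge of the next.
    rights, lefts = edges[1::2], edges[2::2]
    return [(r + l) / 2 for r, l in zip(rights, lefts)]
```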

It should be understood that this embodiment is described only by taking a single computing device executing S301 to S308 as an example; in some embodiments, S301 to S308 may be executed by multiple devices in cooperation. Optionally, the method 300 is executed by the information extraction platform 110 in the system architecture 100. For example, the source file in the method 300 may be a PDF file stored in the database 1102. Optionally, S301 and S302 of the method 300 may be executed in the computing device 1101, or may be executed in advance by other functional modules before the computing device 1101; that is, the source file received or obtained from the database 1102 is preprocessed, for example by the image extraction and text extraction of S302, to obtain the image and the text, which are used as the input of the computing device 1101, and the computing device 1101 then executes S303 to S308. Optionally, the method 300 is performed by the terminal 101 and the information extraction platform 110 in cooperation. In some embodiments, the information extraction platform 110 undertakes the primary processing and the terminal 101 undertakes the secondary processing; in other embodiments, the information extraction platform 110 undertakes the secondary processing and the terminal 101 undertakes the primary processing; alternatively, the information extraction platform 110 or the terminal 101 may each undertake the processing separately.

This embodiment provides a method for automatically extracting the information of tables without table lines. The vertical dividing lines are located in the image by using an image processing technique, so that the device can automatically find the positions of the vertical dividing lines; the horizontal dividing lines are located in the image by using the NLP technique, so that the device can automatically find the positions of the horizontal dividing lines. The image is divided by using the vertical dividing lines and the horizontal dividing lines so as to accurately segment the cell regions, and the table information contained in the image is then extracted from the cell regions. By this method, even if the table contained in a source file such as a PDF file or a picture is an irregular table, for example a table with no table lines, incomplete table lines, or line-feed text in its cells, the table information can be extracted accurately without manual operation by the user, achieving full automation and greatly improving the efficiency of table information extraction.

The method for extracting the table information according to the embodiments of the present application is described above, and the apparatus for extracting the table information according to the embodiments of the present application is described below; it should be understood that the apparatus for extracting the table information has any function of the computing device in the method described above.

Fig. 12 is a schematic structural diagram of an apparatus 500 for extracting table information according to an embodiment of the present application, and as shown in fig. 12, the apparatus 500 includes: an obtaining module 501, configured to execute S301 and S302; a determining module 502 for performing S303, S304, S305, or S306; a segmentation module 503 for performing S307; the obtaining module 501 is further configured to execute S308.

It should be understood that the apparatus 500 corresponds to the computing device in the foregoing method embodiment, and each module and the other operations and/or functions in the apparatus 500 are respectively for implementing various steps and methods implemented by the computing device in the method embodiment, and specific details may be referred to the foregoing method embodiment, and are not described herein again for brevity.

It should be understood that the apparatus 500 only exemplifies the division of the above functional modules when extracting the table information, and in practical applications, the above function allocation may be performed by different functional modules according to needs, that is, the internal structure of the apparatus 500 is divided into different functional modules to perform all or part of the above described functions. In addition, the apparatus 500 and the method 300 or the method 400 belong to the same concept, and specific implementation processes thereof are described in method embodiments and are not described herein again.

Corresponding to the method embodiment and the virtual device embodiment provided by the present application, an embodiment of the present application further provides a computing device 600, and a hardware structure of the computing device 600 is described below.

The computing device 600 corresponds to the computing device in the foregoing method embodiments and is configured to implement the steps and methods implemented by the computing device in those embodiments; for details of how the computing device 600 extracts the table information, refer to the foregoing method embodiments, which are not repeated here for brevity. The steps of the method 300 or the method 400 above may be completed by hardware integrated logic circuits in the processor of the computing device 600 or by instructions in the form of software. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable or erasable programmable read-only memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware. To avoid repetition, details are not described here.

The computing apparatus 600 corresponds to the apparatus 500 in the virtual device embodiment described above, and each functional module in the apparatus 500 is implemented by software of the computing apparatus 600. In other words, the apparatus 500 includes functional modules that are generated by a processor of the computing device 600 after reading program code stored in a memory.

Referring to fig. 13, fig. 13 shows a schematic structural diagram of a computing device 600 provided in an exemplary embodiment of the present application, for example, the computing device 600 may be a host computer, a server, a personal computer, or the like. The computing device 600 may be implemented by a generic bus architecture.

Computing device 600 may be any device involved in the description of method embodiments in whole or in part. The computing device 600 includes at least one processor 601, a communication bus 602, memory 603, and at least one communication interface 604.

The processor 601 may be a general-purpose Central Processing Unit (CPU), a Network Processor (NP), a microprocessor, or one or more integrated circuits such as an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof for implementing the disclosed aspects. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.

A communication bus 602 is used to transfer information between the above components. The communication bus 602 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.

Memory 603 may be a read-only Memory (ROM) or other type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only Memory (EEPROM), a compact disc read-only Memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), a magnetic disc storage medium, or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of instructions or data structures and which can be accessed by a computer, but is not limited to such. The memory 603 may be separate and coupled to the processor 601 through a communication bus 602. The memory 603 may also be integrated with the processor 601.

The communication interface 604 uses any transceiver or the like for communicating with other devices or communication networks. The communication interface 604 includes a wired communication interface, and may also include a wireless communication interface. The wired communication interface may be an ethernet interface, for example. The ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communication interface may be a Wireless Local Area Network (WLAN) interface, a cellular network communication interface, or a combination thereof.

In particular implementations, processor 601 may include one or more CPUs such as CPU0 and CPU1 shown in fig. 13 as one embodiment.

In particular implementations, computing device 600 may include multiple processors, such as processor 601 and processor 605 shown in FIG. 13, as one embodiment. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).

In particular implementations, computing device 600 may also include an output device 606 and an input device 607, as one embodiment. The output device 606 is in communication with the processor 601 and may display information in a variety of ways. For example, the output device 606 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 607 is in communication with the processor 601 and may receive user input in a variety of ways. For example, the input device 607 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.

In some embodiments, the memory 603 is used to store program code 610 for performing aspects of the present application, and the processor 601 may execute the program code 610 stored in the memory 603. That is, the computing device 600 may implement the table information extraction method provided by the method embodiment through the processor 601 and the program code 610 in the memory 603.

The computing device 600 of the embodiment of the present application may correspond to the computing device in the above-described method embodiments, and the processor 601, the communication interface 604, and the like in the computing device 600 may implement the functions of, and/or the various steps and methods implemented by, the computing device in those method embodiments. For brevity, no further description is provided here.

It is to be understood that the obtaining module 501 in the apparatus 500 corresponds to the communication interface 604 in the computing device 600; the determination module 502 and the segmentation module 503 in the apparatus 500 may correspond to the processor 601 in the computing device 600.

Those of ordinary skill in the art will appreciate that the various method steps and elements described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both, and that the steps and elements of the various embodiments have been described above generally in terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the unit is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer program instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer program instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, or magnetic tapes), optical media (e.g., digital versatile discs (DVDs)), or semiconductor media (e.g., solid-state drives), among others.

It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

The above description is intended only to be an alternative embodiment of the present application, and not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
