Method, apparatus, device and storage medium for outputting information
1. A method for outputting information, comprising:
acquiring a standard annotation and an annotation to be evaluated for a target image;
comparing semantic regions in the standard annotation and the annotation to be evaluated, and determining the number of correctly annotated pixels in each semantic region;
determining an annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated and the number of correctly annotated pixels in each semantic region;
and determining and outputting evaluation information according to the annotation accuracy.
2. The method of claim 1, wherein the comparing semantic regions in the standard annotation and the annotation to be evaluated to determine the number of correctly annotated pixels in each semantic region comprises:
determining a first semantic matrix sequence and a second semantic matrix sequence according to the semantic regions in the standard annotation and in the annotation to be evaluated, respectively;
and comparing the first semantic matrix sequence with the second semantic matrix sequence to determine the number of correctly annotated pixels in each semantic region.
3. The method of claim 2, wherein the determining a first semantic matrix sequence and a second semantic matrix sequence according to the semantic regions in the standard annotation and in the annotation to be evaluated respectively comprises:
determining a first mask value sequence and a first pixel value matrix according to each semantic region in the standard annotation;
determining the first semantic matrix sequence according to the first mask value sequence and the first pixel value matrix;
determining a second mask value sequence and a second pixel value matrix according to each semantic region in the annotation to be evaluated;
and determining the second semantic matrix sequence according to the second mask value sequence and the second pixel value matrix.
4. The method of claim 1, wherein the determining the annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated and the number of correctly annotated pixels in each semantic region comprises:
determining a first pixel count of each semantic region in the standard annotation and a second pixel count of each semantic region in the annotation to be evaluated;
determining the annotation accuracy of each semantic region in the annotation to be evaluated according to the number of correctly annotated pixels in the semantic region, the first pixel count and the second pixel count;
and determining the annotation accuracy of the annotation to be evaluated according to the annotation accuracy of each semantic region.
5. The method according to claim 4, wherein the determining the annotation accuracy of each semantic region in the annotation to be evaluated according to the number of correctly annotated pixels in the semantic region, the first pixel count and the second pixel count comprises:
determining a maximum pixel count of each semantic region according to the first pixel count and the second pixel count;
and determining the annotation accuracy of each semantic region according to the number of correctly annotated pixels in the semantic region and the maximum pixel count of the semantic region.
6. The method of claim 4, wherein the determining the first pixel count of each semantic region in the standard annotation and the second pixel count of each semantic region in the annotation to be evaluated comprises:
in response to determining that pixel overlap exists between at least two semantic regions, determining an upper-lower layer relationship of the at least two semantic regions according to the height of each pixel in the at least two semantic regions;
and determining the first pixel count of each semantic region in the standard annotation and the second pixel count of each semantic region in the annotation to be evaluated according to the upper-lower layer relationship and the pixel regions covered by the at least two semantic regions.
7. An apparatus for outputting information, comprising:
an annotation acquisition unit configured to acquire a standard annotation and an annotation to be evaluated for a target image;
an annotation comparison unit configured to compare semantic regions in the standard annotation and the annotation to be evaluated and determine the number of correctly annotated pixels in each semantic region;
an accuracy calculation unit configured to determine the annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated and the number of correctly annotated pixels in each semantic region;
and an information output unit configured to determine and output evaluation information according to the annotation accuracy.
8. The apparatus of claim 7, wherein the annotation comparison unit is further configured to:
determining a first semantic matrix sequence and a second semantic matrix sequence according to the semantic regions in the standard annotation and in the annotation to be evaluated, respectively;
and comparing the first semantic matrix sequence with the second semantic matrix sequence to determine the number of correctly annotated pixels in each semantic region.
9. The apparatus of claim 8, wherein the annotation comparison unit is further configured to:
determining a first mask value sequence and a first pixel value matrix according to each semantic region in the standard annotation;
determining the first semantic matrix sequence according to the first mask value sequence and the first pixel value matrix;
determining a second mask value sequence and a second pixel value matrix according to each semantic region in the annotation to be evaluated;
and determining the second semantic matrix sequence according to the second mask value sequence and the second pixel value matrix.
10. The apparatus of claim 7, wherein the accuracy calculation unit is further configured to:
determining a first pixel count of each semantic region in the standard annotation and a second pixel count of each semantic region in the annotation to be evaluated;
determining the annotation accuracy of each semantic region in the annotation to be evaluated according to the number of correctly annotated pixels in the semantic region, the first pixel count and the second pixel count;
and determining the annotation accuracy of the annotation to be evaluated according to the annotation accuracy of each semantic region.
11. The apparatus of claim 10, wherein the accuracy calculation unit is further configured to:
determining a maximum pixel count of each semantic region according to the first pixel count and the second pixel count;
and determining the annotation accuracy of each semantic region according to the number of correctly annotated pixels in the semantic region and the maximum pixel count of the semantic region.
12. The apparatus of claim 10, wherein the accuracy calculation unit is further configured to:
in response to determining that pixel overlap exists between at least two semantic regions, determining an upper-lower layer relationship of the at least two semantic regions according to the height of each pixel in the at least two semantic regions;
and determining the first pixel count of each semantic region in the standard annotation and the second pixel count of each semantic region in the annotation to be evaluated according to the upper-lower layer relationship and the pixel regions covered by the at least two semantic regions.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
Background
In image-related machine learning tasks, images often need to be annotated, for example by noting the classes of objects in an image, framing objects of interest, or marking regions of interest. At present, most image annotation still depends on manual work: a picture to be annotated is displayed on an annotation platform, and an annotator completes the annotation with mouse clicks, keyboard input, and the like.
However, existing annotation platforms serve only as annotation tools and cannot give an accurate evaluation of the annotation result.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for outputting information.
According to a first aspect, there is provided a method for outputting information, comprising: acquiring a standard annotation and an annotation to be evaluated for a target image; comparing semantic regions in the standard annotation and the annotation to be evaluated, and determining the number of correctly annotated pixels in each semantic region; determining an annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated and the number of correctly annotated pixels in each semantic region; and determining and outputting evaluation information according to the annotation accuracy.
According to a second aspect, there is provided an apparatus for outputting information, comprising: an annotation acquisition unit configured to acquire a standard annotation and an annotation to be evaluated for a target image; an annotation comparison unit configured to compare semantic regions in the standard annotation and the annotation to be evaluated and determine the number of correctly annotated pixels in each semantic region; an accuracy calculation unit configured to determine the annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated and the number of correctly annotated pixels in each semantic region; and an information output unit configured to determine and output evaluation information according to the annotation accuracy.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described in the first aspect.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the first aspect.
According to the technology of the present disclosure, the accuracy of an annotation result can be evaluated, so that evaluation information is obtained quickly and the efficiency of evaluating annotation results is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for outputting information according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of a method for outputting information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present disclosure;
FIG. 6 is a block diagram of an electronic device for implementing a method for outputting information according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the disclosed method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an image processing application, an image labeling application, and the like, can be installed on the terminal devices 101, 102, and 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, e-book readers, car computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not particularly limited herein.
The server 105 may be a server providing various services, such as a background server evaluating the annotation results provided on the terminal devices 101, 102, 103. The background server may feed back the evaluation result to the terminal devices 101, 102, 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
It should be noted that the method for outputting information provided by the embodiment of the present disclosure may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105. Accordingly, the means for outputting information may be provided in the terminal devices 101, 102, 103, or in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present disclosure is shown. The method for outputting information of the embodiment comprises the following steps:
Step 201, acquiring a standard annotation and an annotation to be evaluated for a target image.
In this embodiment, an executing entity of the method for outputting information may acquire a standard annotation and an annotation to be evaluated for a target image. Here, the target image may be a face image, or an image containing obstacles acquired while a vehicle is driving. The target image may contain different objects, such as obstacles or facial features. The standard annotation may be regarded as the reference answer for the target image, that is, an annotation whose accuracy is 100%. The standard annotation may be produced by an experienced annotator or by a well-performing annotation algorithm. The annotation to be evaluated may be an annotation made by an annotator, or an annotation produced for the target image by an algorithm to be optimized. Both the standard annotation and the annotation to be evaluated may include a plurality of annotation boxes, and different annotation boxes may correspond to different semantic regions.
Step 202, comparing semantic regions in the standard annotation and the annotation to be evaluated, and determining the number of correctly annotated pixels in each semantic region.
In this embodiment, after obtaining the standard annotation and the annotation to be evaluated, the executing entity may compare each semantic region in the standard annotation with each semantic region in the annotation to be evaluated to determine the number of correctly annotated pixels in each semantic region. Specifically, the executing entity may compare each semantic region in the annotation to be evaluated with each semantic region in the standard annotation, determine the number of identically annotated pixels for each pair of regions, and take the maximum of these numbers as the number of correctly annotated pixels in the semantic region. Alternatively, the executing entity may compare the pixels occupied by each semantic region in the annotation to be evaluated with the pixels occupied by the region with the same semantics in the standard annotation, and take the number of overlapping pixels as the number of correctly annotated pixels in the semantic region.
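The overlap-based variant can be illustrated with a minimal Python sketch, assuming each annotation is given as an integer label map over the image; the function name and representation are illustrative assumptions of this description, not part of the claimed method:

```python
import numpy as np

def correct_pixels_per_region(standard, evaluated):
    """Count, per semantic class, the pixels labeled with that class
    in both the standard annotation and the annotation to be evaluated.

    Both inputs are H x W integer arrays whose values are class ids.
    """
    classes = np.union1d(np.unique(standard), np.unique(evaluated))
    return {int(cls): int(np.sum((standard == cls) & (evaluated == cls)))
            for cls in classes}

# Toy example: class 0 = background, class 1 = face.
standard = np.array([[0, 1, 1],
                     [0, 1, 1],
                     [0, 0, 0]])
evaluated = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [0, 0, 0]])
print(correct_pixels_per_region(standard, evaluated))  # {0: 5, 1: 3}
```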
Step 203, determining the annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated and the number of correctly annotated pixels in each semantic region.
After determining the number of correctly annotated pixels in each semantic region of the annotation to be evaluated, the executing entity may determine the number of pixels in each semantic region by combining the standard annotation and the annotation to be evaluated, determine the annotation accuracy of each semantic region according to the number of correctly annotated pixels and the determined pixel count of the region, and then weight or average the annotation accuracies of the semantic regions to obtain the annotation accuracy of the annotation to be evaluated.
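The combination step could look as follows; this is a sketch that leaves the choice between weighting and plain averaging to the caller, and the region names and weights are purely illustrative:

```python
def overall_accuracy(per_region_accuracy, weights=None):
    """Combine per-region annotation accuracies into a single value.

    per_region_accuracy: dict mapping semantic region -> accuracy in [0, 1].
    weights: optional dict mapping semantic region -> weight summing to 1;
    if omitted, the plain average of the region accuracies is returned.
    """
    if weights is None:
        return sum(per_region_accuracy.values()) / len(per_region_accuracy)
    return sum(weights[region] * acc
               for region, acc in per_region_accuracy.items())

# E.g. weight the face region more heavily than the nose region.
print(overall_accuracy({"face": 0.95, "nose": 0.80}))  # 0.875
print(overall_accuracy({"face": 0.95, "nose": 0.80},
                       {"face": 0.7, "nose": 0.3}))    # ~0.905
```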
Step 204, determining and outputting evaluation information according to the annotation accuracy.
After calculating the annotation accuracy of the annotation to be evaluated, the executing entity may generate evaluation information based on the annotation accuracy, for example in combination with the source of the annotation to be evaluated. For example, if the annotation to be evaluated was completed by annotator A, the executing entity may generate the evaluation information "the accuracy of the current annotation by annotator A is XX". The executing entity may output the generated evaluation information for viewing by a user.
With continued reference to fig. 3, a schematic diagram of one application scenario of the method for outputting information according to the present disclosure is shown. In the application scenario of fig. 3, the annotation algorithm to be optimized in the terminal 301 takes its annotation result for a face image as the annotation to be evaluated and sends it to the terminal 302. An annotator using the terminal 302 annotates the same face image, and the annotation result serves as the standard annotation, which the terminal 302 returns to the terminal 301. The terminal 301 performs the processing of steps 202 to 204 on the annotation to be evaluated and the standard annotation to obtain evaluation information for the annotation algorithm to be optimized, and a technician using the terminal 301 can optimize the annotation algorithm according to this evaluation information. The optimized annotation algorithm can then continue annotating images, thereby improving the accuracy of image annotation.
In another application scenario, the annotation result of a newly recruited annotator for a face image is taken as the annotation to be evaluated, and an annotator with rich annotation experience annotates the same face image to produce the standard annotation. The terminal performs the processing of steps 202 to 204 on the annotation to be evaluated and the standard annotation to obtain evaluation information for the new annotator. This evaluation information can serve as an assessment of the new annotator: outstanding new annotators can be selected according to the assessment results, and having the selected annotators carry out image annotation improves the accuracy of image annotation.
The method for outputting information provided by the above embodiment of the present disclosure can evaluate the annotation result of an image, so that evaluation information is obtained quickly and the efficiency of evaluating annotation results is improved.
With continued reference to fig. 4, a flow 400 of another embodiment of a method for outputting information in accordance with the present disclosure is shown. As shown in fig. 4, the method of the present embodiment may include the following steps:
step 401, acquiring a standard annotation and an annotation to be evaluated for a target image.
Step 402, determining a first semantic matrix sequence and a second semantic matrix sequence according to the semantic regions in the standard annotation and in the annotation to be evaluated, respectively.
In this embodiment, the executing entity may first determine a first semantic matrix sequence corresponding to the standard annotation and a second semantic matrix sequence corresponding to the annotation to be evaluated by analyzing the semantic regions in each of the two annotations. Here, the first semantic matrix sequence and the second semantic matrix sequence may each include a plurality of semantic matrices, and each semantic matrix may include a plurality of values, the number of values being the same as the number of pixels of the target image. Each value in a semantic matrix represents the semantics of the corresponding pixel. The executing entity may determine the semantics of each pixel in the standard annotation and in the annotation to be evaluated and express those semantics numerically, thereby obtaining the first semantic matrix sequence and the second semantic matrix sequence.
In some optional implementations of this embodiment, the executing entity may determine the first semantic matrix sequence through steps 4021 to 4022 and the second semantic matrix sequence through steps 4023 to 4024, as sketched in the code after these steps:
Step 4021, determining a first mask value sequence and a first pixel value matrix according to each semantic region in the standard annotation.
In this implementation, the executing entity may first determine the number of semantic regions from the semantic regions in the standard annotation and construct the first mask value sequence accordingly. For example, if the requirement for annotating a face picture specifies that two semantic regions need to be annotated, namely the regions whose semantics are face and nose, the mask value sequence corresponding to the picture includes (1, 0) and (0, 1): the first vector in the sequence corresponds to the semantics face, and the second corresponds to the semantics nose.
The executing entity may also determine the first pixel value matrix from the pixel values of each semantic region in the standard annotation. Specifically, the executing entity may express numerically the region to which each pixel of the standard annotation belongs, the number of values per pixel being the same as the number of semantics.
Step 4022, determining the first semantic matrix sequence according to the first mask value sequence and the first pixel value matrix.
The executing entity may multiply the first mask value sequence with the first pixel value matrix to obtain a plurality of matrices. Each resulting matrix is recorded as a semantic matrix, and the semantic matrices together form the first semantic matrix sequence.
Step 4023, determining a second mask value sequence and a second pixel value matrix according to each semantic region in the annotation to be evaluated.
Step 4024, determining the second semantic matrix sequence according to the second mask value sequence and the second pixel value matrix.
It can be understood that the second mask value sequence and the second pixel value matrix are determined in the same way as the first mask value sequence and the first pixel value matrix, which is not repeated here.
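A minimal sketch of steps 4021 to 4024 follows, under the assumption that the mask value sequence consists of the one-hot vectors from the face/nose example and that the pixel value matrix stores, for each pixel, a one-hot vector over the semantics; multiplying each mask vector into that matrix then isolates one semantic region per matrix:

```python
import numpy as np

def semantic_matrix_sequence(label_map, num_semantics):
    """Build one semantic matrix per semantic region from a label map.

    label_map: H x W integer array; value s means the pixel belongs
    to semantics s. Returns a list of H x W 0/1 matrices.
    """
    # Pixel value matrix: H x W x S one-hot encoding of the label map.
    pixel_values = np.eye(num_semantics, dtype=np.int64)[label_map]
    # Mask value sequence: one one-hot vector per semantics,
    # e.g. (1, 0) for face and (0, 1) for nose.
    mask_values = np.eye(num_semantics, dtype=np.int64)
    # Each product keeps only the pixels of one semantic region.
    return [pixel_values @ mask for mask in mask_values]

# Semantics 0 = face, semantics 1 = nose.
label_map = np.array([[0, 0],
                      [0, 1]])
face_matrix, nose_matrix = semantic_matrix_sequence(label_map, 2)
# face_matrix == [[1, 1], [1, 0]], nose_matrix == [[0, 0], [0, 1]]
```

Applying the same function to the label map of the annotation to be evaluated would yield the second semantic matrix sequence.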
Step 403, comparing the first semantic matrix sequence with the second semantic matrix sequence, and determining the number of correctly annotated pixels in each semantic region.
The executing entity may compare each first semantic matrix in the first semantic matrix sequence with the second semantic matrix of the same semantics in the second semantic matrix sequence. Specifically, the executing entity may check, pixel by pixel, the attribute information of each pixel under each semantics. The attribute information may include information describing the pixel, such as sky or ground. The executing entity compares the attribute information item by item: if any item differs, the pixel is regarded as incorrectly annotated; if every item is the same, the pixel is regarded as correctly annotated. By counting the correctly annotated pixels, the executing entity can determine the number of correctly annotated pixels in each semantic region.
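Given the two semantic matrix sequences from the sketch above, the per-region counting could reduce to an element-wise comparison; richer per-pixel attribute information, if present, would be checked analogously (again an illustrative assumption, not the definitive implementation):

```python
import numpy as np

def correct_counts(first_sequence, second_sequence):
    """Number of correctly annotated pixels per semantic region.

    A pixel counts as correct for a region when the standard semantic
    matrix and the evaluated semantic matrix both mark it with 1.
    """
    return [int(np.sum((first == 1) & (second == 1)))
            for first, second in zip(first_sequence, second_sequence)]
```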
Step 404, determining a first pixel count of each semantic region in the standard annotation and a second pixel count of each semantic region in the annotation to be evaluated.
The executing entity may perform pixel statistics on each semantic region in the standard annotation and each semantic region in the annotation to be evaluated to determine the pixel count of each semantic region. The pixel count of each semantic region in the standard annotation is recorded as the first pixel count, and the pixel count of each semantic region in the annotation to be evaluated is recorded as the second pixel count.
Step 405, determining the annotation accuracy of each semantic region in the annotation to be evaluated according to the number of correctly annotated pixels in the semantic region, the first pixel count and the second pixel count.
The executing entity may determine a pixel count for each semantic region based on the first pixel count and the second pixel count, and then compare the number of correctly annotated pixels in the semantic region against the determined count to obtain the annotation accuracy of the region.
In some optional implementations of this embodiment, the executing entity may determine the annotation accuracy of each semantic region through steps 4051 to 4052:
Step 4051, determining a maximum pixel count of each semantic region according to the first pixel count and the second pixel count.
Step 4052, determining the annotation accuracy of each semantic region according to the number of correctly annotated pixels in the semantic region and the maximum pixel count of the region.
In this implementation, the executing entity may take the larger of the first pixel count and the second pixel count as the maximum pixel count of the semantic region. Alternatively, the executing entity may take the union of the pixel region corresponding to the first pixel count and the pixel region corresponding to the second pixel count, and use the pixel count of the resulting region as the maximum pixel count of the semantic region. The ratio of the number of correctly annotated pixels in the semantic region to the maximum pixel count of the region then gives the annotation accuracy of the region.
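Both denominator options can be expressed directly over region masks; a sketch assuming the masks are boolean arrays and that the region is non-empty in at least one of the two annotations:

```python
import numpy as np

def region_accuracy(standard_mask, evaluated_mask):
    """Annotation accuracy of one semantic region under both options.

    standard_mask / evaluated_mask: H x W boolean masks of the region
    in the standard annotation and the annotation to be evaluated.
    """
    correct = np.sum(standard_mask & evaluated_mask)
    # Option 1: the larger of the two pixel counts.
    by_max = correct / max(np.sum(standard_mask), np.sum(evaluated_mask))
    # Option 2: the pixel count of the union of the two pixel regions.
    by_union = correct / np.sum(standard_mask | evaluated_mask)
    return float(by_max), float(by_union)
```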
In some optional implementations of this embodiment, if at least two semantic regions overlap in the standard annotation or in the annotation to be evaluated, the executing entity may determine an upper-lower layer relationship of the at least two semantic regions according to the height of each pixel in those regions. The layer relationship may indicate an upper semantic region, a middle semantic region, a lower semantic region, and so on; if many semantic regions overlap, the executing entity may use different values to represent the top-to-bottom order of the semantic regions. After determining the layer relationship, the executing entity may determine, according to the layer relationship and the pixel regions covered by the at least two regions, the first pixel count of each semantic region in the standard annotation and the second pixel count of each semantic region in the annotation to be evaluated. Specifically, the executing entity may first determine the pixel count of the uppermost semantic region, then remove from each lower semantic region the pixels overlapping the regions above it to obtain that region's pixel count, and proceed layer by layer until the pixel count of every semantic region is obtained.
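The layer-by-layer counting could be sketched as follows, assuming the stacking order has already been derived from the pixel heights and is given from the uppermost layer downward (the mask representation is an assumption for illustration):

```python
import numpy as np

def pixel_counts_with_overlap(region_masks, top_down_order):
    """Pixel count per semantic region after resolving overlaps.

    region_masks: dict mapping region id -> H x W boolean mask
    (masks may overlap). top_down_order: region ids listed from the
    uppermost layer to the lowest, as derived from the pixel heights.
    """
    covered = np.zeros_like(region_masks[top_down_order[0]], dtype=bool)
    counts = {}
    for region in top_down_order:
        # A lower region keeps only the pixels not covered from above.
        visible = region_masks[region] & ~covered
        counts[region] = int(np.sum(visible))
        covered |= region_masks[region]
    return counts
```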
Step 406, determining the annotation accuracy of the annotation to be evaluated according to the annotation accuracy of each semantic region.
The executing entity may weight or average the annotation accuracies of the semantic regions and take the resulting value as the annotation accuracy of the annotation to be evaluated.
Step 407, determining and outputting evaluation information according to the annotation accuracy.
The method for outputting information provided by this embodiment of the present disclosure compares the annotation to be evaluated with the standard annotation pixel by pixel, thereby improving the accuracy of the evaluation information.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of this embodiment includes: an annotation acquisition unit 501, an annotation comparison unit 502, an accuracy calculation unit 503, and an information output unit 504.
The annotation acquisition unit 501 is configured to acquire a standard annotation and an annotation to be evaluated for a target image.
The annotation comparison unit 502 is configured to compare semantic regions in the standard annotation and the annotation to be evaluated and determine the number of correctly annotated pixels in each semantic region.
The accuracy calculation unit 503 is configured to determine the annotation accuracy of the annotation to be evaluated according to the standard annotation, the annotation to be evaluated, and the number of correctly annotated pixels in each semantic region.
The information output unit 504 is configured to determine and output evaluation information according to the annotation accuracy.
In some optional implementations of this embodiment, the annotation comparison unit 502 may be further configured to: determine a first semantic matrix sequence and a second semantic matrix sequence according to the semantic regions in the standard annotation and in the annotation to be evaluated, respectively; and compare the first semantic matrix sequence with the second semantic matrix sequence to determine the number of correctly annotated pixels in each semantic region.
In some optional implementations of this embodiment, the annotation comparison unit 502 may be further configured to: determine a first mask value sequence and a first pixel value matrix according to each semantic region in the standard annotation; determine the first semantic matrix sequence according to the first mask value sequence and the first pixel value matrix; determine a second mask value sequence and a second pixel value matrix according to each semantic region in the annotation to be evaluated; and determine the second semantic matrix sequence according to the second mask value sequence and the second pixel value matrix.
In some optional implementations of this embodiment, the accuracy calculation unit 503 may be further configured to: determine a first pixel count of each semantic region in the standard annotation and a second pixel count of each semantic region in the annotation to be evaluated; determine the annotation accuracy of each semantic region in the annotation to be evaluated according to the number of correctly annotated pixels in the semantic region, the first pixel count and the second pixel count; and determine the annotation accuracy of the annotation to be evaluated according to the annotation accuracy of each semantic region.
In some optional implementations of this embodiment, the accuracy calculation unit 503 may be further configured to: determine a maximum pixel count of each semantic region according to the first pixel count and the second pixel count; and determine the annotation accuracy of each semantic region according to the number of correctly annotated pixels in the semantic region and the maximum pixel count of the region.
In some optional implementations of this embodiment, the accuracy calculation unit 503 may be further configured to: in response to determining that pixel overlap exists between at least two semantic regions, determine an upper-lower layer relationship of the at least two semantic regions according to the height of each pixel in those regions; and determine the first pixel count of each semantic region in the standard annotation and the second pixel count of each semantic region in the annotation to be evaluated according to the layer relationship and the pixel regions covered by the at least two regions.
It should be understood that the units 501 to 504 described in the apparatus 500 for outputting information correspond respectively to the steps of the method described with reference to fig. 2. Thus, the operations and features described above for the method for outputting information apply equally to the apparatus 500 and the units included therein, and are not repeated here.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
Fig. 6 shows a block diagram of an electronic device 600 that performs a method for outputting information according to an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a processor 601 that may perform various suitable actions and processes in accordance with a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a memory 608 into a random access memory (RAM) 603. The RAM 603 may also store various programs and data necessary for the operation of the electronic device 600. The processor 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a memory 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 601 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The processor 601 performs the various methods and processes described above, such as the method for outputting information. For example, in some embodiments, the method for outputting information may be implemented as a computer software program tangibly embodied in a machine-readable storage medium, such as the memory 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the processor 601, one or more steps of the method for outputting information described above may be performed. Alternatively, in other embodiments, the processor 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for outputting information.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages, and may be packaged as a computer program product. The program code or computer program product may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable storage medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and virtual private server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.