Method, apparatus and device for automated image testing
1. A method for automated testing of images, comprising:
acquiring a client image and a server image;
resizing the client image and/or the server image, and converting pixels of the images to obtain a target server image and a target client image;
and comparing the target server image and the target client image for consistency.
2. The method of claim 1, wherein resizing the client image and/or the server image comprises:
in a case where a preset cropping range exists in the process of generating the server image, cropping the client image according to the preset cropping range.
3. The method of claim 1, wherein resizing the client image and/or the server image comprises:
acquiring picture presentation directions of the server image and the client image;
and in a case where the picture presentation direction of the client image is inconsistent with the picture presentation direction of the server image, adjusting the picture presentation direction of the client image and/or the server image so that the picture presentation directions of the server image and the client image are consistent.
4. The method according to claim 3, further comprising, after keeping the picture presentation directions of the server image and the client image consistent:
acquiring a size of the server image and a size of the client image;
and in a case where the size of the server image is inconsistent with the size of the client image, adjusting the size of the client image and/or the server image so that the sizes of the server image and the client image are consistent.
5. The method according to claim 3, wherein keeping the picture presentation directions of the server image and the client image consistent further comprises:
acquiring a preset cropping size of the target server image and a preset cropping size of the target client image;
and cropping the server image and the client image according to the preset cropping size of the target server image and the preset cropping size of the target client image.
6. The method of claim 1, wherein resizing the client image and/or the server image comprises:
acquiring coordinate information of the client image or the server image;
selecting a pixel point in the client image or the server image as a first reference point;
determining, according to the coordinate information, a plurality of second reference points at a second fixed interval in a horizontal coordinate area corresponding to the first reference point;
determining, according to the coordinate information, a plurality of interval positions at a first fixed interval in vertical coordinate areas corresponding to the first reference point and the second reference points, respectively;
identifying a black area of the client image or the server image according to the interval positions;
and removing the black area from the client image or the server image to obtain an adjusted client image or an adjusted server image.
7. The method of claim 6, wherein identifying the black area of the client image or the server image according to the interval positions comprises:
acquiring a gray value of a pixel point at each interval position;
in a case where the gray value at an interval position is smaller than a preset gray value, identifying that interval position as black;
and in a case where the number of interval positions identified as black is greater than a preset number, identifying the vertical coordinate area corresponding to the reference point of those interval positions as a black area.
8. The method of claim 1, wherein comparing the target server image and the target client image for consistency comprises:
calculating gray values of pixel points at corresponding positions in the target server image and the target client image;
in a case where a difference between the gray values of the pixel points at a corresponding position is greater than a first preset value, determining that the pixel points at that position do not belong to the same image;
and in a case where the number of corresponding positions whose pixel points do not belong to the same image is less than or equal to a second preset value, identifying the client image and the server image as the same image.
9. An apparatus for automated testing of images, comprising a processor and a memory having program instructions stored thereon, wherein the processor is configured to, when executing the program instructions, perform the method for automated testing of images according to any one of claims 1 to 8.
10. A device comprising the apparatus for automated testing of images according to claim 9.
Background
With the development of software testing technology, automated testing has become increasingly popular as an important means of improving testing efficiency and ensuring test coverage.
For relatively subjective functional tests such as image uploading and downloading, however, the degree of automation remains low: such tests are still performed manually, functions that check page images must be judged by visual inspection, and the test results are strongly influenced by human factors.
In the prior art, image consistency is compared by obtaining a first interface screenshot of the server interface presented when a target test instruction in a test script is run, and a second interface screenshot generated by a screenshot instruction for the client interface received during manual testing of the application under test. The structural features of the first interface screenshot and the second interface screenshot are then compared to determine picture consistency. In this prior art, the comparison reference is produced by manual screenshots, which introduces a strong dependence on manual operation, and the comparison judges whether the picture or character structures displayed in the screenshots match, so the object actually compared is the interface screenshot rather than the picture itself.
In the process of implementing the embodiments of the present disclosure, it was found that the related art has at least the following problems:
the existing picture consistency comparison depends heavily on manual operation, and the judgment criterion is based on structural information of the display interface in which the pictures are located rather than on the pictures themselves.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key/critical elements or to delineate the scope of such embodiments; rather, it serves as a prelude to the more detailed description that is presented later.
The embodiments of the present disclosure provide a method, an apparatus and a device for automated image testing, aiming to solve the technical problems that the existing picture consistency comparison depends heavily on manual operation and that the judgment criterion is based on structural information of the display interface in which a picture is located rather than on the picture itself.
In some embodiments, the method comprises:
acquiring a client image and a server image;
resizing the client image and/or the server image, and converting pixels of the images to obtain a target server image and a target client image;
and comparing the target server image and the target client image for consistency.
Optionally, resizing the client image and/or the server image includes:
in a case where a preset cropping range exists in the process of generating the server image, cropping the client image according to the preset cropping range.
Optionally, resizing the client image and/or the server image includes:
acquiring picture presentation directions of the server image and the client image;
and in a case where the picture presentation direction of the client image is inconsistent with the picture presentation direction of the server image, adjusting the picture presentation direction of the client image and/or the server image so that the picture presentation directions of the server image and the client image are consistent.
Optionally, after keeping the picture presentation directions of the server image and the client image consistent, the method further includes:
acquiring a size of the server image and a size of the client image;
and in a case where the size of the server image is inconsistent with the size of the client image, adjusting the size of the client image and/or the server image so that the sizes of the server image and the client image are consistent.
Optionally, keeping the picture presentation directions of the server image and the client image consistent further includes:
acquiring a preset cropping size of the target server image and a preset cropping size of the target client image;
and cropping the server image and the client image according to the preset cropping size of the target server image and the preset cropping size of the target client image.
Optionally, resizing the client image and/or the server image includes:
acquiring coordinate information of the client image or the server image;
selecting a pixel point in the client image or the server image as a first reference point;
determining, according to the coordinate information, a plurality of second reference points at a second fixed interval in a horizontal coordinate area corresponding to the first reference point;
determining, according to the coordinate information, a plurality of interval positions at a first fixed interval in vertical coordinate areas corresponding to the first reference point and the second reference points, respectively;
identifying a black area of the client image or the server image according to the interval positions;
and removing the black area from the client image or the server image to obtain an adjusted client image or an adjusted server image.
Optionally, identifying the black area of the client image or the server image according to the interval positions includes:
acquiring a gray value of a pixel point at each interval position;
in a case where the gray value at an interval position is smaller than a preset gray value, identifying that interval position as black;
and in a case where the number of interval positions identified as black is greater than a preset number, identifying the vertical coordinate area corresponding to the reference point of those interval positions as a black area.
In some embodiments, the apparatus comprises:
a processor and a memory storing program instructions, wherein the processor is configured to perform the method for automated image testing described above when executing the program instructions.
In some embodiments, the device comprises:
the apparatus for automated image testing described above.
The method, apparatus and device for automated image testing provided by the embodiments of the present disclosure can achieve the following technical effects:
the acquired client image and server image are adjusted, pixel conversion is performed on the adjusted images to obtain target images, and the consistency of the client image and the server image is determined by comparing the target images. This removes the dependence on manually captured screenshots in automated picture testing, uses the picture itself as the basis for the consistency judgment, and improves the accuracy of picture consistency judgment.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which elements having the same reference numerals denote like elements, and wherein:
FIG. 1 is a schematic flow chart diagram of a method for automated testing of images provided by an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an apparatus for automated image testing provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of another apparatus for automated image testing provided by an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the present disclosure described herein may be made. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
The term "plurality" means two or more unless otherwise specified.
In the embodiments of the present disclosure, the character "/" indicates that the preceding and following objects are in an "or" relationship. For example, "A/B" represents: A or B.
The term "and/or" describes an association between objects and indicates that three relationships may exist. For example, "A and/or B" represents: A, or B, or A and B.
With reference to FIG. 1, an embodiment of the present disclosure provides a method for automated image testing, including:
S01: acquiring a client image and a server image.
In this technical solution, the picture consistency comparison concerns picture uploading or downloading. Acquiring the client image and the server image is not limited to images described as belonging to the client and the server; they may also be described as a first display interface and a second display interface, or in other ways. The technical solution of the present application is not limited in this respect, as long as two images that can be compared are represented. For example, when comparing uploaded pictures for consistency, the picture on the client may be used as the first display interface and the picture on the server as the second display interface, where the picture on the client/server may be a picture stored on, or captured by, the client/server. When comparing downloaded pictures for consistency, the picture on the server may be used as the first display interface and the picture on the client as the second display interface.
S02: resizing the client image and/or the server image, and converting pixels of the images to obtain a target server image and a target client image.
In this technical solution, resizing the client image and/or the server image may be understood as adjusting the size of the client image and the size of the server image to be consistent. In the process of uploading or downloading a picture, the client image and the server image may be regarded as an original image and an image to be compared, respectively, and either of them may serve as the original image or the image to be compared depending on the actual application scenario. Taking the process of uploading a picture as an example, the client image may be used as the original image and the server image as the image to be compared; alternatively, the client image may be used as the image to be compared and the server image as the original image.
In this technical solution, converting pixels of the images may be understood as performing pixel conversion on the client image and the server image, whether adjusted or not, and the pixel conversion may be converting both images into grayscale images.
S03: comparing the target server image and the target client image for consistency.
In this technical solution, the consistency of the target server image and the target client image may be compared by comparing the gray values of the pixel points of the target server image and the target client image.
With the method for automated image testing described above, the acquired client image and server image are adjusted, pixel conversion is performed on the adjusted images to obtain target images, and the consistency of the client image and the server image is determined by comparing the target images. This removes the dependence on manually captured screenshots in automated picture testing, uses the picture itself as the basis for the consistency judgment, and improves the accuracy of picture consistency judgment.
Optionally, resizing the client image and/or the server image includes: in a case where a preset cropping range exists in the process of generating the server image, cropping the client image according to the preset cropping range.
In practical applications, if a preset cropping range exists in the picture uploading process, the client image is cropped according to the preset cropping range to obtain the adjusted client image, and the server image, which was already cropped according to the preset cropping range, is regarded as the adjusted server image. The preset cropping range may be, for example, the cropping range used when uploading an avatar, or the cropping range used when uploading an electronic copy of an identity document photo; the present application does not limit it, as long as it indicates that a preset cropping range exists on the path from the client to the server or from the server to the client.
In this way, by adjusting the acquired client image and/or server image and acquiring the preset cropping range, the image size can be adjusted, which reduces the amount of computation and provides a basis for subsequent operations; the dependence on manually captured screenshots in automated picture testing is removed, the picture itself is used as the basis for the consistency judgment, and the accuracy of picture consistency judgment is improved.
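As an illustration only, the cropping according to a preset cropping range might be sketched in Python as follows; the (x, y, width, height) format of the range, the function name crop_to_preset_range and the file names are assumptions made for the example, not part of the disclosed method.

```python
import cv2


def crop_to_preset_range(image, preset_range):
    """Crop the image to the rectangle given as (x, y, width, height)."""
    x, y, w, h = preset_range
    return image[y:y + h, x:x + w]


# Hypothetical avatar-upload scenario: the server keeps only a 200x200 square
# starting at (40, 40) of the originally uploaded picture.
client_image = cv2.imread("client_upload.png")    # illustrative file name
server_image = cv2.imread("server_copy.png")      # already cropped on the server side
adjusted_client = crop_to_preset_range(client_image, (40, 40, 200, 200))
```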
Optionally, resizing the client image and/or the server image includes:
acquiring picture presentation directions of the server image and the client image;
and in a case where the picture presentation direction of the client image is inconsistent with the picture presentation direction of the server image, adjusting the picture presentation direction of the client image and/or the server image so that the picture presentation directions of the server image and the client image are consistent.
In some optional embodiments, after obtaining the picture presentation directions of the server image and the client image, the method includes:
taking the picture presentation direction of the server image as the reference, rotating the client image in a case where the presentation direction of the client image is inconsistent with that of the server image, so that the presentation directions of the two images become consistent.
In some optional embodiments, after obtaining the picture presentation directions of the server image and the client image, the method includes:
taking the picture presentation direction of the client image as the reference, rotating the server image in a case where the presentation direction of the server image is inconsistent with that of the client image, so that the presentation directions of the two images become consistent.
In some optional embodiments, after obtaining the picture presentation directions of the server image and the client image, the method may include:
acquiring a preset picture presentation direction;
and in a case where the presentation direction of the server image and the presentation direction of the client image are inconsistent with the preset picture presentation direction, rotating the server image and/or the client image to the preset picture presentation direction.
In this way, by adjusting the client image and/or the server image, a better basis is provided for subsequent operations; the dependence on manually captured screenshots in automated picture testing is removed, the picture itself is used as the basis for the consistency judgment, and the accuracy of picture consistency judgment is improved.
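As an illustration only, the direction alignment might be sketched as follows; deriving the presentation direction from the width-to-height ratio and rotating only the client image are assumptions made for the example.

```python
import cv2


def presentation_direction(image):
    """Return 'landscape' or 'portrait' based on the image shape."""
    height, width = image.shape[:2]
    return "landscape" if width >= height else "portrait"


def align_direction(client_image, server_image):
    """Rotate the client image by 90 degrees if its direction differs from the server image."""
    if presentation_direction(client_image) != presentation_direction(server_image):
        client_image = cv2.rotate(client_image, cv2.ROTATE_90_CLOCKWISE)
    return client_image, server_image
```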
Optionally, after the foregoing rotation makes the picture presentation directions of the server image and the client image consistent, the method may include:
acquiring a size of the server image and a size of the client image;
and in a case where the size of the server image is inconsistent with the size of the client image, adjusting the size of the client image and/or the server image so that the sizes of the server image and the client image are consistent.
In some optional embodiments, in a case where the size of the server image is inconsistent with the size of the client image, adjusting the size of the client image and/or the server image may include:
taking the size of the client image as the reference, adjusting the size of the server image to be consistent with the size of the client image.
In some optional embodiments, in a case where the size of the server image is inconsistent with the size of the client image, adjusting the size of the client image and/or the server image may include:
taking the size of the server image as the reference, adjusting the size of the client image to be consistent with the size of the server image.
In some optional embodiments, in a case where the size of the server image is inconsistent with the size of the client image, adjusting the size of the client image and/or the server image may include:
acquiring a preset image size;
and in a case where the size of the server image and the size of the client image are inconsistent with the preset image size, adjusting the size of the client image and the size of the server image according to the preset image size.
In practical applications, the preset image size may be obtained by averaging the size of the client image and the size of the server image, or according to the greatest common divisor/least common multiple of the size of the client image and the size of the server image.
In this way, by adjusting the client image and/or the server image, a better basis is provided for subsequent operations; the dependence on manually captured screenshots in automated picture testing is removed, the picture itself is used as the basis for the consistency judgment, and the accuracy of picture consistency judgment is improved.
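As an illustration only, the size alignment might be sketched as follows, assuming the server image is resized to the size of the client image; resizing both images to a preset size works the same way.

```python
import cv2


def align_size(client_image, server_image):
    """Resize the server image so both images share the client image's size."""
    client_h, client_w = client_image.shape[:2]
    server_h, server_w = server_image.shape[:2]
    if (server_h, server_w) != (client_h, client_w):
        # cv2.resize expects the target size as (width, height).
        server_image = cv2.resize(server_image, (client_w, client_h))
    return client_image, server_image
```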
Optionally, keeping the picture presentation directions of the server image and the client image consistent, as described above, may further include:
acquiring a preset cropping size of the target server image and a preset cropping size of the target client image;
and cropping the server image and the client image according to the preset cropping size of the target server image and the preset cropping size of the target client image.
In some optional embodiments, cropping according to the preset cropping size of the target server image and the preset cropping size of the target client image may remove a strip of fixed height from the top and/or bottom of the target image, or a strip of fixed width from one or both sides of the target image.
In practical applications, taking the removal of a strip of fixed height from the top of the target client image as an example, the fixed height may be one tenth, one fifth or another fraction of the height of the client image; that is, the removed strip extends downward from the upper edge of the target client image by one tenth, one fifth or another fraction of the client image height. The fixed height may also be an absolute value, such as one centimeter, five millimeters or another value. The technical solution of the present application does not limit the fixed height, as long as it can express the size of the strip to be removed from the target image.
In this way, by adjusting the client image and/or the server image and acquiring the preset cropping size, the image size can be adjusted, which reduces the amount of computation and provides a basis for subsequent operations; the dependence on manually captured screenshots in automated picture testing is removed, the picture itself is used as the basis for the consistency judgment, and the accuracy of picture consistency judgment is improved.
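As an illustration only, removing a strip of fixed height from the top of an image might be sketched as follows; the fraction of one tenth and the function name crop_top are assumptions made for the example, and cropping the bottom or the sides follows the same slicing pattern.

```python
def crop_top(image, fraction=0.1):
    """Remove the top `fraction` of the image height from a NumPy image array."""
    height = image.shape[0]
    offset = int(height * fraction)
    return image[offset:, :]


# Usage: apply the preset cropping size to both images before comparison.
# target_client = crop_top(client_image, 0.1)
# target_server = crop_top(server_image, 0.1)
```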
Optionally, resizing the client image and/or the server image includes:
acquiring coordinate information of the client image or the server image;
selecting a pixel point in the client image or the server image as a first reference point;
determining, according to the coordinate information, a plurality of second reference points at a second fixed interval in a horizontal coordinate area corresponding to the first reference point;
determining, according to the coordinate information, a plurality of interval positions at a first fixed interval in vertical coordinate areas corresponding to the first reference point and the second reference points, respectively;
identifying a black area of the client image or the server image according to the interval positions;
and removing the black area from the client image or the server image to obtain an adjusted client image or an adjusted server image.
Optionally, identifying the black area of the client image or the server image according to the interval positions includes:
acquiring a gray value of a pixel point at each interval position;
in a case where the gray value at an interval position is smaller than a preset gray value, identifying that interval position as black;
and in a case where the number of interval positions identified as black is greater than a preset number, identifying the vertical coordinate area corresponding to the reference point of those interval positions as a black area.
In this embodiment, the first fixed interval may be a coordinate interval of 50, 30, 80 or another value; a coordinate interval of 50 is usually selected as the first fixed interval.
In this embodiment, the second fixed interval may be a coordinate interval of 1, 2, 5 or another value; a coordinate interval of 2 is usually selected as the second fixed interval.
In this embodiment, identifying an interval position as black when its gray value is smaller than the preset gray value may mean identifying the interval position as black when its gray value is smaller than 20, 25, 30 or another value; 25 is usually selected as the preset gray value.
In this technical solution, a vertical coordinate area corresponding to a reference point is identified as a black area when the number of its interval positions identified as black is greater than a preset number. When the vertical coordinate area corresponds to the image height, the preset number may be expressed as image height/50/1.4; alternatively, when the vertical coordinate area corresponds to the image width, the preset number may be expressed as image width/50/2.
In some optional embodiments, removing the black area from the client image or the server image to obtain the adjusted client image or the adjusted server image includes: removing the black area from the client image or the server image, and joining the adjacent parts of the cropped image to form the adjusted client image or the adjusted server image.
In practical applications, it should be understood that the horizontal coordinate and the vertical coordinate may be interchanged. Taking the first reference point as the upper-left vertex of the image as an example, the horizontal coordinate area corresponding to the upper-left vertex may run from the upper-left vertex toward the lower-left vertex, and similarly the vertical coordinate area corresponding to the upper-left vertex may run from the upper-left vertex toward the upper-right vertex. Taking the first reference point as the lower-left vertex of the image as an example, the horizontal coordinate area corresponding to the lower-left vertex may run from the lower-left vertex toward the lower-right vertex, and similarly the vertical coordinate area corresponding to the lower-left vertex may run from the lower-left vertex toward the upper-left vertex.
In practical applications, after the black area of the client image or the server image is removed, the original image is divided into several images of different sizes because the black area has been cut out. Joining the adjacent images to form the adjusted client image or the adjusted server image may be understood as stitching the adjacent images together. Taking the cropped client image as an example, suppose the identified black areas are as follows: one reference point identified as belonging to a black area has coordinates (1, 7), and the vertical area identified as black for this reference point is the area corresponding to the X-axis coordinate 1; another reference point identified as belonging to a black area has coordinates (2, 3), and the vertical area identified as black for this reference point is the area corresponding to the Y-axis coordinate 3. After these black areas are removed, the original client image is divided into four images of different sizes, and the four images are stitched together according to their adjacency to obtain the adjusted client image. It should be understood that the divided images may be of the same or different sizes, which is determined by the vertical coordinate areas of the reference points corresponding to the interval positions identified as black areas.
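As an illustration only, the sampled black-column detection and removal might be sketched as follows, using the typical values given above (first fixed interval 50, second fixed interval 2, preset gray value 25, preset number of image height/50/1.4); extending each sampled column to its whole sampling step is a simplifying assumption made for the example.

```python
import cv2
import numpy as np


def remove_black_columns(image, first_interval=50, second_interval=2, gray_threshold=25):
    """Remove columns identified as black by interval sampling, then stitch the rest."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    height, width = gray.shape
    preset_number = height / 50 / 1.4               # preset number from the description

    black_columns = []
    for x in range(0, width, second_interval):
        samples = gray[::first_interval, x]         # interval positions in this column
        dark_count = int(np.sum(samples < gray_threshold))
        if dark_count > preset_number:
            # Simplifying assumption: treat the whole sampling step as black, so a
            # black band sampled every `second_interval` pixels is removed completely.
            black_columns.extend(range(x, min(x + second_interval, width)))

    # Deleting the black columns also stitches the remaining adjacent parts together.
    return np.delete(image, black_columns, axis=1)
```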
In some optional embodiments, resizing the client image and/or the server image may include:
determining a black area boundary of the client image or the server image;
and removing the black area outside the black area boundary of the client image or the server image to obtain the adjusted client image or the adjusted server image.
In some optional embodiments, determining the black area boundary of the client image or the server image may include: determining a right black area boundary of the client image or the server image in the positive X-axis direction from the X-axis coordinate of the first reference point; determining a left black area boundary in the negative X-axis direction from the X-axis coordinate of the first reference point; determining an upper black area boundary in the positive Y-axis direction from the Y-axis coordinate of the first reference point; and determining a lower black area boundary in the negative Y-axis direction from the Y-axis coordinate of the first reference point.
in practical applications, determining a certain black area boundary corresponding to the first reference point of the client image or the server image may be that a vertical coordinate area identified as a black area appears for the first time in a direction corresponding to the black area boundary to be determined, and one or more vertical coordinates adjacent to the vertical coordinate area in the direction are identified as a black coordinate area. Taking the coordinates corresponding to the first reference point of the server as (20,50) as an example, in the case that the boundary of the black area to be determined is the right side boundary, according to the above selection manner, the points determined and selected are point one, point two, point three, point four, and point five as examples. If the judgment condition of the black boundary is determined that one adjacent longitudinal coordinate is identified as the black area, under the condition that the point I and the point II are both identified as the black areas, the longitudinal coordinate corresponding to the point I is determined as the right boundary of the black area of the server side image; or if the judgment condition of the black boundary is determined that one adjacent longitudinal coordinate is identified as the black area, and under the condition that the point I, the point II, the point IV and the point V are all identified as the black areas, the longitudinal coordinate corresponding to the point I is determined as the right boundary of the black area of the server side image; if the judgment condition of the black boundary is determined that two adjacent longitudinal coordinates are identified as black areas, under the condition that the point I, the point III, the point IV and the point V are all identified as the black areas, the longitudinal coordinate corresponding to the point III is determined as the right boundary of the black area of the server side image;
in practical applications, it may be determined that a certain black area boundary corresponding to the first reference point of the client image or the server image is an adjacent previous vertical coordinate identified as a black area, and a next vertical coordinate area identified as a non-black area. The pixel points corresponding to the four vertexes of the client image or the server image can be respectively used as first reference points, and each first reference point is respectively used for confirming a black area boundary corresponding to the first reference point. Or selecting pixel points corresponding to two opposite vertexes of the client image or the server image as first reference points, wherein each first reference point is respectively used for confirming two black area boundaries corresponding to the first reference points. Here, taking the coordinate position of the pixel point corresponding to the vertex at the lower left corner of the client image as (0,0), the first fixed interval value 50, and the second fixed interval value 1 as examples, calculating whether the vertical coordinate area corresponding to the first reference point (0,0) is identified as a black area, and selecting the second reference points (0,1), (0,2), and so on, or (1,0), (2,0), and so on, at the second fixed interval. When the ordinate region corresponding to (0,0) is recognized as the black region, it is calculated whether or not the ordinate regions corresponding to the second reference points (0,1) and (0,2) are recognized as the black region. And if the ordinate area corresponding to (0,1) is identified as black and the ordinate area corresponding to (0,2) is identified as non-black, determining that the area represented by the Y-axis ordinate 1 corresponding to the second reference point (0,1) is a black area boundary. It should be understood that the method of determining the black region boundary by (0,1) and (0,2) is the same as the manner of determining (1,0) and (2, 0).
In this way, by adjusting the client image and/or the server image, the image size can be better adjusted, which reduces the amount of computation and provides a basis for subsequent operations; the dependence on manually captured screenshots in automated picture testing is removed, the picture itself is used as the basis for the consistency judgment, and the accuracy of picture consistency judgment is improved.
Optionally, converting the pixels of the images to obtain the target server image and the target client image may be converting the client image and the server image, whether adjusted or not, into grayscale images.
In some optional embodiments, converting the client image and the server image, whether adjusted or not, into grayscale images may be done by defining the server image and the client image as grayscale images.
In practical applications, a grayscale image can be obtained by defining the color space of the server image and the client image as gray, which can be expressed by the conversion code COLOR_BGR2GRAY.
In some optional embodiments, converting the client image and the server image, whether adjusted or not, into grayscale images may be a grayscale conversion based on the RGB values of the server image and the client image. (RGB is an industry color standard in which various colors are obtained by varying and superimposing the three color channels red (R), green (G) and blue (B); RGB denotes the colors of the red, green and blue channels.)
In some optional embodiments, the grayscale conversion based on the RGB values of the server image and the client image may use an averaging method, which averages the RGB values of the three channels at each pixel position and can be expressed by the following formula:
I(x,y) = 1/3*I_R(x,y) + 1/3*I_G(x,y) + 1/3*I_B(x,y);
where I_R(x,y), I_G(x,y) and I_B(x,y) denote the values of the three channels at one pixel position, respectively.
In some optional embodiments, the grayscale conversion based on the RGB values of the server image and the client image may use a weighted-average method, which can be expressed by the following formula:
I(x,y) = 0.3*I_R(x,y) + 0.59*I_G(x,y) + 0.11*I_B(x,y);
where 0.3, 0.59 and 0.11 are weights tuned to the human brightness perception system and are widely used standard parameters.
Optionally, comparing the target server image and the target client image for consistency includes:
calculating gray values of pixel points at corresponding positions in the target server image and the target client image;
in a case where the difference between the gray values of the pixel points at a corresponding position is greater than a first preset value, determining that the pixel points at that position do not belong to the same image;
and in a case where the number of corresponding positions whose pixel points do not belong to the same image is less than or equal to a second preset value, identifying the client image and the server image as the same image.
In practical applications, the first preset value may be 10, 15, 20 or another value; 10 is usually selected as the first preset value. The second preset value may be 20%, 25%, 30% or another value; 20% is usually selected as the second preset value.
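As an illustration only, the comparison might be sketched as follows, using the typical values given above (first preset value 10, second preset value 20%) and treating the second preset value as a proportion of pixel positions, as in the practical example.

```python
import numpy as np


def images_consistent(target_client_gray, target_server_gray,
                      first_preset=10, second_preset=0.2):
    """Return True if the two grayscale images of equal size are judged to be the same image."""
    # Use a signed type so the subtraction of uint8 values does not wrap around.
    diff = np.abs(target_client_gray.astype(np.int16)
                  - target_server_gray.astype(np.int16))
    mismatched = np.sum(diff > first_preset)        # positions not belonging to the same image
    return mismatched / diff.size <= second_preset
```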
In this way, by adjusting the client image and/or the server image, performing pixel conversion on the adjusted images to obtain the target images, and comparing the target images to determine the consistency of the client image and the server image, the dependence on manually captured screenshots in automated picture testing is removed, the picture itself is used as the basis for the consistency judgment, and the accuracy of picture consistency judgment is improved.
As shown in FIG. 2, an apparatus for automated image testing according to an embodiment of the present disclosure includes an image acquisition module 21, an image adjustment and conversion module 22, and an image comparison module 23. The image acquisition module 21 is configured to acquire a client image and a server image; the image adjustment and conversion module 22 is configured to resize the client image and/or the server image and convert pixels of the images to obtain a target server image and a target client image; and the image comparison module 23 is configured to compare the target server image and the target client image for consistency.
With the apparatus for automated image testing provided by the embodiment of the present disclosure, the acquired client image and server image are adjusted, pixel conversion is performed on the adjusted images to obtain target images, and the consistency of the client image and the server image is determined by comparing the target images. This removes the dependence on manually captured screenshots in automated picture testing, uses the picture itself as the basis for the consistency judgment, and improves the accuracy of picture consistency judgment.
Optionally, the image adjustment and conversion module 22 is configured to crop the client image according to a preset cropping range in a case where the preset cropping range exists in the process of generating the server image.
Optionally, the image adjustment and conversion module 22 includes a picture presentation direction acquisition sub-module and an image rotation sub-module;
the picture presentation direction acquisition sub-module is configured to acquire the picture presentation directions of the server image and the client image;
and the image rotation sub-module is configured to, in a case where the picture presentation direction of the client image is inconsistent with the picture presentation direction of the server image, adjust the picture presentation direction of the client image and/or the server image so that the picture presentation directions of the server image and the client image are consistent.
Optionally, the image adjustment and conversion module 22 further includes an image size acquisition sub-module and a size adjustment sub-module;
the image size acquisition sub-module is configured to acquire the size of the server image and the size of the client image;
and the size adjustment sub-module is configured to, in a case where the size of the server image is inconsistent with the size of the client image, adjust the size of the client image and/or the server image so that the sizes of the server image and the client image are consistent.
Optionally, the image adjustment and conversion module 22 further includes a preset cropping size acquisition sub-module and an image cropping sub-module;
the preset cropping size acquisition sub-module is configured to acquire the preset cropping size of the target server image and the preset cropping size of the target client image;
and the image cropping sub-module is configured to crop the server image and the client image according to the preset cropping size of the target server image and the preset cropping size of the target client image.
Optionally, the image adjustment and conversion module 22 includes a coordinate information acquisition sub-module, a first reference point selection sub-module, a second reference point selection sub-module, an interval position selection sub-module, an image black area identification sub-module, and a black area cropping sub-module;
the coordinate information acquisition sub-module is configured to acquire coordinate information of the client image or the server image;
the first reference point selection sub-module is configured to select a pixel point in the client image or the server image as a first reference point;
the second reference point selection sub-module is configured to determine, according to the coordinate information, a plurality of second reference points at a second fixed interval in the horizontal coordinate area corresponding to the first reference point;
the interval position selection sub-module is configured to determine, according to the coordinate information, a plurality of interval positions at a first fixed interval in the vertical coordinate areas corresponding to the first reference point and the second reference points, respectively;
the image black area identification sub-module is configured to identify a black area of the client image or the server image according to the interval positions;
and the black area cropping sub-module is configured to remove the black area from the client image or the server image to obtain the adjusted client image or the adjusted server image.
Optionally, the image black area identification sub-module is further configured to acquire a gray value of a pixel point at each of the aforementioned interval positions; identify an interval position as black in a case where its gray value is smaller than a preset gray value; and identify the vertical coordinate area corresponding to the reference point of the interval positions as a black area in a case where the number of interval positions identified as black is greater than a preset number.
Optionally, the image comparison module 23 includes a gray value calculation sub-module, a pixel point identification sub-module, and a same-image identification sub-module;
the gray value calculation sub-module is configured to calculate the gray values of pixel points at corresponding positions in the target server image and the target client image;
the pixel point identification sub-module is configured to determine that the pixel points at a corresponding position do not belong to the same image in a case where the difference between their gray values is greater than a first preset value;
and the same-image identification sub-module is configured to identify the client image and the server image as the same image in a case where the proportion of corresponding positions whose pixel points do not belong to the same image is less than or equal to a second preset value.
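As an illustration only, the module structure of FIG. 2 might be sketched as follows, wiring together the steps sketched in the preceding sections; it assumes the helper functions align_direction, align_size, to_gray_opencv and images_consistent from those earlier sketches are in scope, and the class and method names are assumptions made for the example rather than a disclosed API.

```python
import cv2


class ImageAcquisitionModule:
    """Corresponds to the image acquisition module 21."""
    def acquire(self, client_path, server_path):
        return cv2.imread(client_path), cv2.imread(server_path)


class ImageAdjustAndConvertModule:
    """Corresponds to the image adjustment and conversion module 22."""
    def adjust_and_convert(self, client_image, server_image):
        client_image, server_image = align_direction(client_image, server_image)
        client_image, server_image = align_size(client_image, server_image)
        return to_gray_opencv(server_image), to_gray_opencv(client_image)


class ImageComparisonModule:
    """Corresponds to the image comparison module 23."""
    def compare(self, target_server_gray, target_client_gray):
        return images_consistent(target_client_gray, target_server_gray)
```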
As shown in FIG. 3, an apparatus for automated image testing according to an embodiment of the present disclosure includes a processor (processor) 100 and a memory (memory) 101. Optionally, the apparatus may further include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102 and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may invoke logic instructions in the memory 101 to perform the method for automated image testing of the above-described embodiments.
In addition, the logic instructions in the memory 101 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium.
The memory 101, which is a computer-readable storage medium, may be used for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing, i.e., implements the method for image automation testing in the above-described embodiments, by executing program instructions/modules stored in the memory 101.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
The embodiments of the present disclosure provide a device (such as a computer or a mobile phone) comprising the above apparatus for automated image testing.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for image automation testing.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for image automation testing.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product stored in a storage medium, including one or more instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code; it may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same or similar parts of the respective embodiments may be referred to one another. For the methods, products, and the like disclosed in the embodiments, where they correspond to the method sections disclosed herein, reference may be made to the description of those method sections.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.