Color sequence display control method and device based on deep learning

Document No. 8657 · Published 2021-09-17

1. A color sequential display control method based on deep learning is characterized by comprising the following steps:

determining a driving algorithm matched with the single-frame image by adopting a deep learning method for the input single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display;

calculating to obtain ideal backlight distribution of the single-frame image in each field by adopting the driving algorithm;

calculating the simulated backlight distribution and the transmittance of the single-frame image in each field according to the ideal backlight distribution and by combining the light diffusion characteristic of the color-sequential display;

and calculating the image of each field according to the simulated backlight distribution and the transmittance.

2. The method according to claim 1, wherein the determining, for the input single-frame image, the driving algorithm matching with the single-frame image by using a deep learning method based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display comprises:

inputting the single-frame image into a pre-trained image classification model to obtain a driving algorithm which is output by the image classification model and matched with the single-frame image;

the image classification model is obtained by training with a training image as a training sample and a driving algorithm matched with the training image as a sample label;

and selecting a driving algorithm consistent with the refresh rate of the color-sequential display from the driving algorithms output by the image classification model as a driving algorithm matched with the image.

3. The method of claim 2, wherein the training method of the image classification model comprises:

acquiring a training image set;

applying a preset driving algorithm to a training image, calculating the color separation degree of the training image under each type of driving algorithm, and marking the driving algorithm with the lowest color separation degree as a matching driving algorithm of the image;

inputting a training image into an image classification model to obtain a driving algorithm corresponding to the training image output by the image classification model;

and updating the parameters of the image classification model with the training objective that the driving algorithm output for the training image approaches the matching driving algorithm marked for that training image.

4. The method of claim 3, wherein the method for calculating the degree of color separation of the training image under the driving algorithm comprises:

acquiring images of all fields of the training images under a driving algorithm, and combining the images of all fields into a simulation display image of the training images;

calculating visual saliency VS of each region in the analog display image, and determining the region with the VS value larger than a preset threshold value in the analog display image as a dominant visual saliency DVS region;

calculating pixel-by-pixel color differences between a DVS region of the simulated display image and a corresponding region of the training image;

and summing the color differences of all pixels in the DVS area to obtain a total color difference value, and dividing the total color difference value by the number of the pixels in the DVS area to obtain a value of the color separation degree.

5. The method according to claim 1, wherein the determining, for the input single-frame image, the driving algorithm matching with the single-frame image by using a deep learning method based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display comprises:

and determining a driving algorithm matched with the single-frame image by adopting a deep learning method for the input single-frame image based on the image characteristics of the whole area contained in the single-frame image and the refresh rate of the color-sequential display.

6. The method according to claim 1, wherein the determining, for the input single-frame image, the driving algorithm matching with the single-frame image by using a deep learning method based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display comprises:

dividing an input single-frame image into at least two regions;

and for each area, determining a driving algorithm matched with the area by adopting a deep learning method based on the image characteristics of the area and combining the refresh rate of the color-sequential display.

7. The method of claim 1, wherein the driving algorithm comprises a Stencil method, an Edge-Stencil method, a Stencil-FSC method, an LPD method, a Stencil-LPD method, an RGB method, and/or a GPDK method.

8. The method of claim 3, wherein the image classification model is a depth residual network model.

9. The method according to claim 4, wherein the preset threshold is 0.5.

10. A color-sequential display control apparatus based on deep learning, comprising:

a driving algorithm matching module configured to: for an input single-frame image, determine a driving algorithm matched with the single-frame image by a deep learning method based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display;

an ideal backlight calculation module configured to: calculate, by the driving algorithm, the ideal backlight distribution of the single-frame image in each field;

a simulated backlight and compensation module configured to: calculate the simulated backlight distribution and the transmittance of the single-frame image in each field according to the ideal backlight distribution and in combination with the light diffusion characteristic of the color-sequential display;

a field image calculation module configured to: calculate the image of each field according to the simulated backlight distribution and the transmittance.

Background

A conventional liquid crystal display (LCD) is composed of a backlight panel, liquid crystals, color filters, etc.; because of the color filters, the display loses about 2/3 of its luminous efficacy. To improve the light efficiency of the liquid crystal display, save power to the greatest extent, and at the same time increase its resolution, the field-sequential-color display has emerged.

Referring to fig. 1, a field-sequential-color (FSC) display uses no color filter; instead it rapidly flashes sub-frames of the three primary colors red, green and blue in sequence within the same frame, forming a color image by time-sequential color mixing. The resolution of a color-sequential display is three times that of a conventional liquid crystal display with the same panel size. Owing to its high resolution, low power consumption and environmental friendliness, the color-sequential display is widely applied in smartphones, tablet computers, desktop monitors, televisions, video projectors, VR, AR and similar devices.

However, the video shown by a display is essentially a time sequence of frame images, and in a color-sequential display each frame is in turn composed of rapidly flashed red, green and blue sub-frame images. Referring to fig. 2, during saccades and smooth pursuit of the image, when there is relative velocity between the viewer's eyeball and the viewed image, the images of the different time slots cannot overlap well on the retina owing to the persistence-of-vision effect, so red, green and blue color stripes appear at edges and image quality degrades. This is called the color break-up (CBU) phenomenon of a color-sequential display.

To solve the color separation phenomenon, the most direct method is to raise the refresh rate of the display above 540 Hz, but a high-refresh-rate display requires a correspondingly fast liquid crystal response, which is difficult to realize. Researchers have therefore proposed a series of methods that change the color-field presentation rather than simply increasing the refresh rate, such as the 240 Hz Stencil method, the 180 Hz Stencil method and the 240 Hz Edge-Stencil method proposed by a Taiwan research group, and the Local Primary Desaturation (LPD) method and a four-field LPD optimization algorithm proposed by Philips.

These optimized driving algorithms suppress color separation to some extent, and because their specific optimization strategies differ, each has strengths and weaknesses when driving different image content. However, when displaying images, current color-sequential displays generally apply a single driving algorithm to every frame of a video; a single driving algorithm can hardly adapt to images of all content types, and so it is difficult to effectively reduce the color-separation degree of every frame.

Disclosure of Invention

In view of the above, the present application provides a color sequential display control method and apparatus based on deep learning, so as to minimize the degree of color separation of an image.

In order to achieve the above purpose, the present application provides the following technical solutions:

a color sequential display control method based on deep learning comprises the following steps:

determining a driving algorithm matched with the single-frame image by adopting a deep learning method for the input single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display;

calculating to obtain ideal backlight distribution of the single-frame image in each field by adopting the driving algorithm;

calculating the simulated backlight distribution and the transmittance of the single-frame image in each field according to the ideal backlight distribution and by combining the light diffusion characteristic of the color-sequential display;

and calculating the image of each field according to the simulated backlight distribution and the transmittance.

Preferably, the determining, for the input single-frame image, a driving algorithm matching with the single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display includes:

inputting the single-frame image into a pre-trained image classification model to obtain a driving algorithm which is output by the image classification model and matched with the single-frame image;

the image classification model is obtained by training with a training image as a training sample and a driving algorithm matched with the training image as a sample label;

and selecting a driving algorithm consistent with the refresh rate of the color-sequential display from the driving algorithms output by the image classification model as a driving algorithm matched with the image.

Preferably, the training method of the image classification model comprises the following steps:

acquiring a training image set;

applying a preset driving algorithm to a training image, calculating the color separation degree of the training image under each type of driving algorithm, and marking the driving algorithm with the lowest color separation degree as a matching driving algorithm of the image;

inputting a training image into an image classification model to obtain a driving algorithm corresponding to the training image output by the image classification model;

and updating the parameters of the image classification model with the training objective that the driving algorithm output for the training image approaches the matching driving algorithm marked for that training image.

Preferably, the method for calculating the degree of color separation of the training image under the driving algorithm includes:

acquiring images of all fields of the training images under a driving algorithm, and combining the images of all fields into a simulation display image of the training images;

calculating visual saliency VS of each region in the analog display image, and determining the region with the VS value larger than a preset threshold value in the analog display image as a dominant visual saliency DVS region;

calculating pixel-by-pixel color differences between a DVS region of the simulated display image and a corresponding region of the training image;

and summing the color differences of all pixels in the DVS area to obtain a total color difference value, and dividing the total color difference value by the number of the pixels in the DVS area to obtain a value of the color separation degree.
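The four-step metric above can be sketched in Python. This is a minimal illustration, not the application's implementation: the function name is invented, Euclidean RGB distance stands in for the unspecified color-difference formula (in practice a CIE delta-E would be typical), and saliency is simplified to a per-pixel score rather than per-region values.

```python
# Hedged sketch of the color-separation-degree metric: average the
# per-pixel color difference between the simulated display image and the
# original, restricted to dominant-visual-saliency (DVS) pixels.
# Images are flat lists of (R, G, B) tuples; `vs` is a saliency score
# per pixel; `threshold` follows the 0.5 default mentioned in the text.

def color_separation_degree(simulated, original, vs, threshold=0.5):
    total, count = 0.0, 0
    for (r1, g1, b1), (r2, g2, b2), s in zip(simulated, original, vs):
        if s > threshold:  # pixel lies in the DVS region
            # Euclidean distance in RGB as a stand-in color difference.
            total += ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5
            count += 1
    # Total color difference divided by the number of DVS pixels.
    return total / count if count else 0.0

sim  = [(10, 0, 0), (0, 10, 0), (5, 5, 5)]
orig = [(10, 0, 0), (0, 13, 4), (5, 5, 5)]
vs   = [0.9, 0.8, 0.1]   # third pixel is below threshold, so excluded
print(color_separation_degree(sim, orig, vs))  # -> 2.5
```

A lower value means the simulated color-sequential output matches the original more closely in the regions the viewer is most likely to look at.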

Preferably, the determining, for the input single-frame image, a driving algorithm matching with the single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display includes:

determining a driving algorithm matched with the single-frame image based on the image characteristics of the whole area contained in the single-frame image and the refresh rate of the color sequential display for the input single-frame image;

or, for an input single frame image, dividing the single frame image into at least two regions;

for each region, determining a driving algorithm matched with the region based on the image characteristics of the region and the refresh rate of the color-sequential display.

Preferably, the driving algorithm comprises a Stencil method, an Edge-Stencil method, a Stencil-FSC method, an LPD method, a Stencil-LPD method, an RGB method and/or a GPDK method.

Preferably, the image classification model is a depth residual error network model.
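As a minimal numeric illustration of what makes such a model "residual", the sketch below (pure Python, toy zero weights, not the trained model of this application) shows a block computing F(x) + x: because the skip connection makes the identity mapping trivial to represent, very deep stacks of such blocks remain trainable.

```python
# Toy residual block: two linear layers with a ReLU form F(x), and the
# block outputs F(x) + x via the skip connection. Weights are illustrative.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, w, b):
    # w is an out x in weight matrix, b a bias vector.
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(w, b)]

def residual_block(x, w1, b1, w2, b2):
    f = linear(relu(linear(x, w1, b1)), w2, b2)   # F(x): two layers
    return [fi + xi for fi, xi in zip(f, x)]      # F(x) + x  (skip path)

# With all-zero weights the block reduces to the identity mapping.
x = [1.0, -2.0, 3.0]
zeros = [[0.0] * 3 for _ in range(3)]
print(residual_block(x, zeros, [0.0] * 3, zeros, [0.0] * 3))  # -> [1.0, -2.0, 3.0]
```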

Preferably, the preset threshold is 0.5.

Based on the color-sequential display control method based on the deep learning, the application also provides a color-sequential display control device based on the deep learning, which comprises the following steps:

a drive algorithm matching module to: determining a driving algorithm matched with the single-frame image by adopting a deep learning method for the input single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display;

an ideal backlight calculation module to: calculating to obtain ideal backlight distribution of the single-frame image in each field by adopting the driving algorithm;

the analog backlight and compensation module is used for: calculating the simulated backlight distribution and the transmittance of the single-frame image in each field according to the ideal backlight distribution and by combining the light diffusion characteristic of the color-sequential display;

a field image calculation module to: and calculating the image of each field according to the simulated backlight distribution and the transmittance.

According to the technical scheme, the driving algorithm matched with the single-frame image is determined for the input single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display; and calculating the image of each field of the single-frame image in the color sequential display by adopting the driving algorithm and combining the specific image characteristics of the single-frame image.

According to the method and the device, for each frame of image in the video, the matched driving algorithm is determined one by one according to the specific image characteristics contained in the image, so that the color separation phenomenon of the image in the color-sequential display is inhibited, the color separation degree of the image is reduced, each frame of image can obtain a good color separation inhibiting effect in the color-sequential display, the image quality is improved, and a good visual effect is provided for a user.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.

FIG. 1 illustrates a temporal color mixing schematic of a color sequential display;

fig. 2 illustrates a color separation phenomenon diagram of a color sequential display;

FIG. 3 illustrates a color separation simulation diagram when an image is driven and displayed by the LPD method;

FIG. 4 illustrates a new three primary color range diagram after image desaturation;

FIG. 5 is a diagram illustrating a color separation simulation of an image with rich colors and high saturation by using an LPD method and an RGB rendering method;

FIG. 6 illustrates a new three primary color range diagram after partial desaturation;

FIG. 7 illustrates a color separation simulation of the 240Hz Stencil algorithm in a high local contrast image;

FIG. 8 illustrates a color separation simulation diagram for the 240Hz Edge-Stencil algorithm in an image with uniform Edge color;

fig. 9 is a schematic diagram of a color sequential display control method based on deep learning according to an embodiment of the present application;

fig. 10 is another schematic diagram of a color-sequential display control method based on deep learning according to an embodiment of the present application;

FIG. 11 illustrates an original image of the earth;

FIG. 12 illustrates an ideal dimming state diagram for each field after blocking an original image of the earth;

FIG. 13 illustrates a real backlight simulation under each field;

FIG. 14 illustrates a graph of transmittance at each field;

fig. 15 illustrates images of fields finally output;

FIG. 16 illustrates a color separation simulation of an original image of the earth;

FIG. 17 illustrates a depth residual network model diagram;

FIG. 18 illustrates a residual structure diagram;

FIG. 19 illustrates 6 original pictures;

FIG. 20 illustrates a simulation of a 240Hz color-sequential display using different drive algorithms for different frames of image content;

FIG. 21 illustrates a color separation simulation employing the same drive algorithm for each frame of image content;

fig. 22 is another schematic diagram of a color-sequential display control method based on deep learning according to an embodiment of the present application;

FIG. 23 illustrates a schematic diagram of partitioning an original image of the earth;

FIG. 24 is a schematic diagram illustrating the selection of a matching drive algorithm for each region according to an embodiment of the present application;

FIG. 25 illustrates an ideal dimming state diagram for each field using different driving algorithms for the original regions of the earth;

FIG. 26 illustrates a real backlight simulation of fields using different driving algorithms for the original regions of the earth;

FIG. 27 illustrates a graph of the transmittance of fields using different driving algorithms for the original regions of the earth;

FIG. 28 illustrates the final output of the images of the fields using different driving algorithms for the original regions of the earth;

FIG. 29 illustrates a color separation simulation using different driving algorithms for the original regions of the earth;

FIG. 30 illustrates a color separation simulation using the Stencil method for an original global region of the earth;

fig. 31 is a schematic diagram of a color sequential display control device based on deep learning according to an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

As noted in the background art, current color-sequential displays commonly apply a single driving algorithm to every frame of a video when displaying images.

The applicant finds that existing driving algorithms for color-sequential displays have advantages and disadvantages in terms of suppressing color separation effects for images of different contents.

For example, the LPD method is a color display control method based on a local-primary-color desaturation algorithm (its basic principle is to control the backlight driving signal to desaturate the primary colors so that, on the premise that the original image or video color is completely or nearly completely reproduced, the color difference between the three adjacent fields in a frame is reduced as much as possible), thereby avoiding the color-separation phenomenon of the traditional red, green and blue primaries in a color-sequential display.

In LPD image processing, reducing the color gamut makes the color difference between the new three primaries small, so the human eye cannot perceive the color separation. In general this works for an image with low saturation, or an image lacking any two of the three primaries: as shown in fig. 3, the color break-up (CBU) value before desaturation is 23.05 and after desaturation 7.60, and fig. 4 shows the color-gamut range before and after desaturation. For an image with rich, highly saturated colors, however, as shown in fig. 5 and fig. 6, the three primaries change little during desaturation, the color difference between the new primaries remains large, the contents of the three fields are essentially unchanged from before desaturation, and color separation is barely suppressed: in fig. 5 the CBU before desaturation is 15.48 and after desaturation 11.23, and fig. 6 shows the corresponding color-gamut range.
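The desaturation principle can be sketched numerically. The blending rule and the factor `alpha` below are illustrative assumptions, not the LPD algorithm's actual computation: pulling each field's primary toward the frame's average color shrinks the color difference between consecutive fields, which is what drives visible color break-up.

```python
# Hedged sketch of the local-primary-desaturation (LPD) idea: blend each
# field's primary color toward the frame's mean color by `alpha`.

def desaturate_primaries(primaries, alpha):
    avg = [sum(c[i] for c in primaries) / len(primaries) for i in range(3)]
    return [[(1 - alpha) * c[i] + alpha * avg[i] for i in range(3)]
            for c in primaries]

def field_difference(primaries):
    # Sum of Euclidean color distances between successive fields.
    return sum(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for p, q in zip(primaries, primaries[1:]))

rgb = [[255, 0, 0], [0, 255, 0], [0, 0, 255]]   # fully saturated primaries
new = desaturate_primaries(rgb, 0.5)
print(new[0])                                    # -> [170.0, 42.5, 42.5]
print(field_difference(new) < field_difference(rgb))  # -> True
```

For an image whose content is already near the frame average (low saturation) the blended primaries land close together and CBU drops sharply; for a highly saturated image the same blend moves the primaries little, matching the limitation described above.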

For another example, the 240 Hz Stencil-FSC method is a four-color-field color-sequential driving algorithm; its basic principle is to display the details of the image with the three red, green and blue color fields on top of a first color field showing the average value of the picture (i.e., the "stencil"). The Stencil-FSC method reduces the color and brightness of the red, green and blue fields, so the color separation phenomenon can be markedly suppressed.

However, the Stencil-FSC method determines the backlight by averaging each dimming block. Referring to fig. 7, for images with high local contrast, that is, images containing both dark and bright portions, averaging over the backlight area greatly reduces the backlight intensity of the dark portion, so a large amount of edge information remains in the R, G and B fields and a significant color-separation phenomenon persists, with a CBU value of 10.39.

For the picture shown in fig. 7, the edge color distribution is uniform and the edges are blue, so the picture is better suited to the 240 Hz Edge-Stencil algorithm. The Edge-Stencil algorithm assumes that the parts of an image prone to color separation are concentrated in its edge regions; it displays the image edges in the first color field and complements the RGB information in the other three fields. As shown in fig. 8, compared with the conventional global Stencil algorithm, the image processed by the Edge-Stencil method shows clearly suppressed color separation, with a CBU value of 7.63.

Based on the above analysis, the embodiment of the application discloses a color sequential display control method based on deep learning, which adopts different driving algorithms for different images to achieve the purpose of minimizing the degree of color separation. As shown in fig. 9, a color sequential display control method based on deep learning disclosed in an embodiment of the present application may include the following steps:

step S101, determining a driving algorithm for the input single frame image.

Specifically, for an input single-frame image, a driving algorithm matched with the single-frame image is determined by adopting a deep learning method based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display.

Existing color-sequential displays mainly have refresh rates of 240 Hz, 180 Hz and 120 Hz, and different refresh rates correspond to different color-sequential driving algorithms. Therefore, when driving and displaying an image, the refresh rate of the color-sequential display must be determined first.

According to the refresh rate of the color-sequential display, several candidate color-sequential driving algorithms can be determined, and then according to the specific image characteristics of the input single-frame image, a matched driving algorithm is selected from the several candidate color-sequential driving algorithms to drive the single-frame image, so that the color separation degree of the image is reduced to the minimum.
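The selection logic of step S101 can be sketched as follows. The candidate table and the model's ranked output below are hypothetical illustrations: the source does not fix the exact mapping between refresh rates and algorithms, and the classifier here is reduced to a precomputed ranking.

```python
# Hedged sketch: restrict the deep-learning model's preferred algorithms
# to those available at the display's refresh rate. The table contents
# are invented for illustration only.

CANDIDATES = {
    240: ["Stencil", "Edge-Stencil", "Stencil-FSC", "Stencil-LPD"],
    180: ["RGB", "LPD"],
    120: ["GPDK"],
}

def match_driving_algorithm(classifier_ranking, refresh_rate):
    """Pick the highest-ranked algorithm (as output by the image
    classification model) that exists at this refresh rate."""
    candidates = CANDIDATES[refresh_rate]
    for algo in classifier_ranking:
        if algo in candidates:
            return algo
    raise ValueError("no candidate algorithm for this refresh rate")

# Hypothetical ranked output of the classification model for one frame:
ranking = ["LPD", "Edge-Stencil", "Stencil"]
print(match_driving_algorithm(ranking, 240))  # -> Edge-Stencil
```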

And step S102, calculating to obtain ideal backlight distribution.

Specifically, according to the driving algorithm determined in step S101, the ideal backlight distribution of the single frame image in each field is calculated respectively. Wherein the ideal backlight distribution describes the brightness level of the backlight dimming module in the color sequential display.
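One common way to obtain such a per-block backlight level can be sketched as below. Taking each dimming block's maximum gray level is an assumption for illustration; as noted later in the text, some algorithms (e.g. Stencil-FSC) instead use the block average.

```python
# Hedged sketch: divide one field image into dimming blocks and set each
# block's "ideal" backlight to the block maximum, so later liquid crystal
# compensation never needs a transmittance above 1.

def ideal_backlight(field, block_h, block_w):
    rows, cols = len(field), len(field[0])
    bl = []
    for r0 in range(0, rows, block_h):
        bl.append([max(field[r][c]
                       for r in range(r0, min(r0 + block_h, rows))
                       for c in range(c0, min(c0 + block_w, cols)))
                   for c0 in range(0, cols, block_w)])
    return bl

field = [[10, 20, 30, 40],
         [50, 60, 70, 80]]
print(ideal_backlight(field, 2, 2))  # -> [[60, 80]]
```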

And step S103, calculating to obtain the simulated backlight distribution and the transmittance.

Specifically, first, based on the ideal backlight distribution calculated in step S102, the simulated backlight distribution of the single-frame image in each field is calculated, respectively, in combination with the light diffusion characteristic of the color-sequential display.

Since the dynamic backlight technique reduces the brightness of some areas of the image, resulting in image distortion, gray scale compensation is required for the image. Therefore, after calculating the simulated backlight distribution, it is necessary to further calculate the transmittance, specifically, the transmittance of each field in the liquid crystal display is calculated from the simulated backlight distribution.

In step S104, an image of each field is calculated.

Specifically, from the simulated backlight distribution and the transmittance calculated in step S103, images of the respective fields are calculated.

The fields are sequentially flashed at a predetermined frequency in the color sequential display, and an original color image of the frame image is formed.

According to the color-sequential display control method based on deep learning provided by the embodiment of the application, for each frame of image in a video, according to the specific image characteristics contained in the image content, the matched driving algorithms are determined one by one, so that the color separation phenomenon generated when the image is displayed in the color-sequential display is inhibited, the color separation degree of the image is reduced, each frame of image can obtain a good color separation inhibition effect in the color-sequential display, the image quality is improved, and a good visual effect is provided for a user. For the sake of clarity, the following examples are given in detail.

In the above embodiment, there may be various implementation manners for determining the driving algorithm of the input single-frame image and calculating the ideal backlight distribution according to the driving algorithm. For example, for an input single-frame image, the whole area of the input single-frame image is used as a single object, a matched driving algorithm is determined, and then the ideal backlight distribution of the whole area of the single-frame image is calculated according to the driving algorithm; or, for the input single frame image, partitioning the input single frame image, then respectively determining a matched driving algorithm for each region, then calculating the ideal backlight distribution of the region through the driving algorithm of each region, and finally combining the ideal backlight distribution of the whole single frame image. These two implementations are described in detail below.

Referring to fig. 10 based on the technical solutions disclosed in the embodiments of the present application, in an alternative embodiment, the method for controlling a color-sequential display based on deep learning disclosed in the present application may include the following steps:

step S201, for an input single frame image, a matching driving algorithm is determined based on the entire region of the single frame image.

Specifically, firstly, determining several candidate color sequence type driving algorithms according to the refresh rate of the color sequence type display; then, according to the specific image characteristics of the whole area contained in the input single-frame image, a deep learning method is adopted, a driving algorithm matched with the single-frame image is determined from the candidate color sequence driving algorithms, and the single-frame image is driven, so that the color separation degree of the single-frame image is reduced to the minimum.

Step S202, calculating to obtain the ideal backlight distribution of the whole area of the single-frame image.

Specifically, according to the driving algorithm determined in step S201, the ideal backlight distribution of the entire area of the single frame image in each field is calculated respectively. Wherein the ideal backlight distribution describes the brightness level of the backlight dimming module in the color sequential display.

Step S203, calculating the simulated backlight distribution and transmittance of the entire region of the single frame image.

In order to reduce the computational complexity, in an alternative embodiment, the simulated backlight distribution may be obtained by simulating the real backlight intensity distribution with a Discrete Fourier Transform (DFT) and a Gaussian low-pass filter (GLPF).

Specifically, based on the ideal backlight distribution of the entire area of the single-frame image calculated in step S202, and in combination with the light-diffusion characteristic of the color-sequential display, the simulated backlight distribution of the entire area in each field is calculated.

Optionally, the process of calculating the simulated backlight distribution may include: calculating the light diffusion function produced on the liquid crystal panel by all LEDs under the ideal backlight distribution; alternatively, measuring the light diffusion intensity of all LEDs on the liquid crystal panel under the ideal backlight distribution. A simplified form of the light diffusion function is the Gaussian low-pass filter:

H(u, v) = \exp\left( -D^2(u, v) / (2 D_0^2) \right)    (1)

where D(u, v) is the distance from (u, v) to the origin of the Fourier transform, D_0 is the cut-off frequency, and (u, v) denotes a position in the frequency domain. D_0 correlates directly with backlight diffusion: a smaller D_0 admits only lower-frequency content and produces a more blurred backlight image. The backlight intensity distribution under an arbitrary point spread function can therefore be simulated by controlling D_0.
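The DFT-and-GLPF pipeline above can be sketched as follows; the function and parameter names (`simulate_backlight`, `d0`) are illustrative, not from the source.

```python
import numpy as np

def simulate_backlight(ideal_bl, d0):
    """Blur an ideal backlight map with a Gaussian low-pass filter
    applied in the frequency domain (DFT + GLPF of equation (1)).

    ideal_bl : 2-D array of per-zone backlight intensities, upsampled
               to panel resolution; d0 is the cut-off frequency D_0.
    """
    h, w = ideal_bl.shape
    # Frequency-domain coordinates centred on the DC component.
    u = np.arange(h) - h // 2
    v = np.arange(w) - w // 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2          # D^2(u, v)
    glpf = np.exp(-d2 / (2.0 * d0 ** 2))            # H(u, v), equation (1)

    spectrum = np.fft.fftshift(np.fft.fft2(ideal_bl))
    blurred = np.fft.ifft2(np.fft.ifftshift(spectrum * glpf))
    return np.real(blurred)
```

As the prose notes, a smaller `d0` keeps less high-frequency content and so yields a more blurred backlight image.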

Since the dynamic backlight technique reduces the brightness of some areas of the image, resulting in image distortion, gray scale compensation is required for the image. Therefore, after calculating the simulated backlight distribution, further calculation of the transmittance is required.

Specifically, the transmittance of each field in the liquid crystal display is calculated from the simulated backlight distribution. The transmittance may be calculated as:

T_i = \hat{I}_i \cdot \hat{BL} / BL_i, \quad i \in \{R, G, B\}    (2)

where \hat{I}_i and I_i represent the image brightness (the target brightness, and the brightness I_i = T_i \cdot BL_i actually reproduced); \hat{BL} and BL_i indicate the intensity of a conventional full-on backlight and of the blurred backlight image when the local color backlight dimming technique is used. Then, for each liquid crystal pixel, T_min is taken as the minimum of the transmittance values T_R, T_G and T_B to generate the liquid crystal signal of the first field. The new liquid crystal signals T'_R, T'_G and T'_B of the R, G and B fields are determined by equation (3):

T'_i = T_i - T_{min}, \quad i \in \{R, G, B\}    (3)
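The compensation of equations (2) and (3) can be sketched as follows, assuming image and backlight values are normalized to [0, 1] and clipping the transmittance to the physically realisable range; all names are illustrative.

```python
import numpy as np

def field_signals(img_rgb, sim_bl_rgb, bl_full=1.0):
    """Transmittance compensation sketch for equations (2)-(3).

    img_rgb    : H x W x 3 target image brightness in [0, 1]
    sim_bl_rgb : H x W x 3 simulated (blurred) backlight per channel
    bl_full    : intensity of the conventional full-on backlight
    """
    eps = 1e-6
    # Equation (2): compensate brightness lost to dimming, clipped to
    # the liquid crystal's realisable transmittance range [0, 1].
    t = np.clip(img_rgb * bl_full / (sim_bl_rgb + eps), 0.0, 1.0)
    # First field: minimum of T_R, T_G, T_B at every pixel.
    t_min = t.min(axis=2)
    # Equation (3): residual signals for the R, G and B fields.
    t_prime = t - t_min[..., None]
    return t_min, t_prime
```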

And step S204, calculating to obtain the image of each field of the single-frame image.

Specifically, from the simulated backlight distribution and the transmittance calculated in step S203, images of the respective fields are calculated.

The fields are sequentially flashed at a predetermined frequency in the color sequential display, and an original color image of the frame image is formed.

The following describes the above steps S201 to S204 in detail, taking a specific image as an example.

Assuming the refresh rate of the color-sequential display is 240Hz and a single-frame image as shown in fig. 11 is input, according to step S201 the driving algorithm matching this frame is determined, in combination with its image characteristics, to be the 240Hz Stencil method. After determining the algorithm applicable to the image, the images of the four fields corresponding to the single-frame image are determined according to the 240Hz Stencil driving algorithm.

According to the 240Hz Stencil driving algorithm, the input original image is first partitioned to obtain the ideal backlight distribution, also called the ideal dimming state diagram. Optionally, the image shown in fig. 11 is partitioned with a partition size of 9 × 16. Then a traditional dimming method, such as the maximum value method, the average method or the square root method, is used to calculate the backlight signal, finally yielding the ideal dimming state diagram of the partitioned image.
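The partition-and-dim step can be sketched as follows; the 9 × 16 grid matches the example above, and the square-root rule shown (geometric mean of zone maximum and average) is one common variant, an assumption rather than the source's exact definition.

```python
import numpy as np

def zone_backlight(gray, rows=9, cols=16, method="max"):
    """Ideal per-zone backlight signal after partitioning an image
    into a rows x cols grid, using a traditional dimming rule."""
    h, w = gray.shape
    bl = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            zone = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            if method == "max":        # maximum value method
                bl[r, c] = zone.max()
            elif method == "avg":      # average method
                bl[r, c] = zone.mean()
            else:                      # square root method (assumed form)
                bl[r, c] = np.sqrt(zone.max() * zone.mean())
    return bl
```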

Fig. 12 shows the ideal dimming state diagrams of fig. 11 under the 240Hz Stencil driving algorithm: in sequence, the first-field, second-field, third-field and fourth-field backlight distributions. The small squares in each color field in fig. 12 represent the backlight of each area after image partitioning. It can be seen that the brightness of each square area varies with the content of the corresponding image area, thereby achieving the purpose of dimming.

Then, the real backlight distribution of light emitted from the color sequential display LED array according to the ideal dimming state diagram needs to be simulated, i.e. the simulated backlight distribution is calculated, so as to perform the gray scale compensation subsequently.

For example, the real backlight distribution is obtained by calculating the light spread function that all LEDs collectively produce across the entire liquid crystal panel. Specifically, DFT and GLPF are employed to simulate the real backlight intensity distribution, as shown in fig. 13. The four pictures in fig. 13 are the simulated real backlights of the first, second, third and fourth fields, respectively. After the sharply gridded backlight of fig. 12 passes through the point-spread-function simulation of equation (1), the simulated backlight distribution of fig. 13 becomes a blurred backlight; the method thus well reproduces how the light of the backlight LEDs diffuses onto the liquid crystal panel.

Since the dynamic backlight technique reduces the brightness of some regions of an image and thereby causes image distortion, gray-scale compensation, i.e., calculation of the image transmittance, is required. As shown in fig. 14, the four pictures represent the transmittances of the first, second, third and fourth fields, respectively; the transmittances corresponding to different backlight fields differ.

After obtaining the backlight intensity distribution by DFT and GLPF, the liquid crystal transmittance values of R, G and B sub-frames are compensated using equations (2), (3).

After the simulated backlight distribution and the transmittance are calculated by the above method, the final output of the single-frame image is computed. Specifically, the three primary-color simulated backlight signals (BL_R, BL_G, BL_B) are combined with the minimum transmittance signal T_min to display, in the first field, a high-brightness image carrying coarse color information. Similarly, combining BL_R with T'_R, BL_G with T'_G, and BL_B with T'_B generates the three further primary-color field images.
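The field synthesis just described can be sketched as follows, assuming per-channel simulated backlight planes and the T_min / T' signals of equations (2) and (3) are already available; names are illustrative.

```python
import numpy as np

def compose_fields(bl_rgb, t_min, t_prime):
    """Combine backlight and transmittance signals into four field
    images: field 1 pairs the three primary backlights with T_min;
    fields 2-4 pair BL_R/T'_R, BL_G/T'_G, BL_B/T'_B.

    bl_rgb  : H x W x 3 simulated backlight (R, G, B planes)
    t_min   : H x W minimum transmittance, t_prime : H x W x 3 residuals
    Returns a list of four H x W x 3 displayed-field images.
    """
    fields = [bl_rgb * t_min[..., None]]   # first field: T_min on all colors
    for i in range(3):                     # then one field per primary
        f = np.zeros_like(bl_rgb)
        f[..., i] = bl_rgb[..., i] * t_prime[..., i]
        fields.append(f)
    return fields
```

The temporal sum of the four fields approximates the target frame, which is what the sequential flashing at 240Hz relies on.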

As shown in fig. 15, the four primary-color images are sequentially displayed at a frequency of 240Hz, generating a vivid color image. The four pictures in fig. 15 show the images obtained by combining the simulated backlight distribution and transmittance of the first, second, third and fourth fields, respectively; flashing the four field images in sequence on the 240Hz color-sequential display forms the original color image. As can be seen from fig. 15, the image energy is concentrated mainly in the first field, which reduces the intensity of the red, green and blue fields and finally suppresses color separation. As shown in fig. 16, the image composed of the four fields is finally subjected to simulation verification of the degree of color separation.

When fig. 11 is driven by the different driving algorithms, the CBU values are as shown in table 1. As can be seen from table 1, the Stencil method has a comparatively good color-separation suppression effect.

TABLE 1 comparison of color separation values

In the color-sequential display control method based on deep learning provided by this embodiment of the application, the entire area of the input single-frame image is taken as the object, and a matched driving algorithm is determined from its image characteristics in combination with the refresh rate of the color-sequential display; the image of each field in the liquid crystal display is then calculated according to the display method of the color-sequential display, finally generating the original color image of the frame. Matching a driving algorithm to each frame image yields a better color-separation suppression effect than indiscriminately applying a single driving algorithm to every frame.

Step S201 of the above embodiment may be implemented in various ways to determine the driving algorithm matching the input single-frame image. Based on this, in an alternative embodiment, the process in step S201 of determining, for an input single-frame image, a matching driving algorithm based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display may include:

and inputting the single-frame image into a pre-trained image classification model to obtain a driving algorithm which is output by the image classification model and matched with the single-frame image.

The image classification model is obtained by training with a training image as a training sample and a driving algorithm matched with the training image as a sample label; and selecting a driving algorithm consistent with the refresh rate of the color-sequential display from the driving algorithms output by the image classification model as a driving algorithm matched with the single-frame image.

The image classification model can adopt any Convolutional Neural Network (CNN). A CNN is constructed by imitating the biological visual perception mechanism and supports both supervised and unsupervised learning. Convolution-kernel parameter sharing in the hidden layers and the sparsity of inter-layer connections allow a CNN to learn grid-like topology features with a small amount of computation, with stable results and no additional feature engineering requirements on the data.

Optionally, the convolutional neural network may be a deep residual network (ResNet). ResNet is a deep network built from multiple residual blocks; it largely solves the degradation problem of deep networks and allows deeper networks to be trained. Meanwhile, Batch Normalization is used instead of the dropout function to alleviate vanishing or exploding gradients.

The structure of the deep residual network model is described and analyzed below, taking the ResNet_50 model as an example.

The deep residual network model may be divided into 8 building layers, where one building layer may contain one or more network layers and one or more building blocks (e.g., ResNet building blocks). Specifically, as shown in fig. 17, the first building layer 11 includes one general convolutional layer and a max-pooling layer; the second building layer 12 includes 3 residual modules; the third building layer 13 includes a down-sampling residual module and 3 residual modules; the fourth building layer 14 includes a down-sampling residual module and 5 residual modules; the fifth building layer 15 includes a down-sampling residual module and 2 residual modules; the sixth building layer 16 comprises an average pooling layer; the seventh building layer 17 comprises a fully connected layer; the eighth building layer 18 comprises a Softmax layer.

Referring to fig. 18, the main branch of the ResNet_50 residual structure (in both the residual module and the down-sampling residual module) has three convolutional layers: the first is a 1 × 1 convolutional layer used to reduce the channel dimension; the second is a 3 × 3 convolutional layer (with stride 2 in the down-sampling module, halving the height and width of the feature matrix); the third is a 1 × 1 convolutional layer used to restore (expand) the channel dimension. The down-sampling residual module additionally has a 1 × 1 convolutional layer on the shortcut branch, whose number of convolution kernels equals that of the third layer on the main branch, so that the outputs of the two branches can be added.
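A quick way to check the stage layout described above is to trace the feature-map size through the build layers; the 224 × 224 input and the 6 output classes (one per candidate driving algorithm) are illustrative assumptions.

```python
def resnet50_shapes(size=224, n_classes=6):
    """Trace (stage name, spatial size, channels) through the 8 build
    layers of ResNet_50 as described above."""
    shapes = []
    size = size // 2 // 2            # layer 1: 7x7 conv s2 + 3x3 max-pool s2
    shapes.append(("conv+pool", size, 64))
    shapes.append(("res_x3", size, 256))         # layer 2: 3 residual modules
    for name, ch in (("res_x4", 512),            # layer 3: 1 downsample + 3
                     ("res_x6", 1024),           # layer 4: 1 downsample + 5
                     ("res_x3", 2048)):          # layer 5: 1 downsample + 2
        size //= 2                               # stride-2 down-sampling module
        shapes.append((name, size, ch))
    shapes.append(("avg_pool", 1, 2048))         # layer 6: average pooling
    shapes.append(("fc+softmax", 1, n_classes))  # layers 7-8: FC + Softmax
    return shapes
```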

Before the image classification model is applied, the image classification model needs to be trained so that the image classification model can output a driving algorithm matched with a single frame image according to the input single frame image. Based on this, in an alternative embodiment, the method for training the image classification model may include:

1) a training image set is acquired.

Specifically, a certain number of images are selected as training images; for example, 10000 images are randomly selected, and the training set is constructed from the selected images.

2) And applying a preset driving algorithm to the training image, calculating the color separation degree of the training image under each type of driving algorithm, and marking the driving algorithm with the lowest color separation degree as the matching driving algorithm of the image.

For a color-sequential display with a refresh rate of 240Hz, a four-color-field driving scheme is adopted; the driving algorithms mainly include the 240Hz Stencil method, the 240Hz Edge-Stencil method, the 240Hz four-field LPD method, the traditional RGBK method, and the like. For a 180Hz color-sequential display, a three-color-field driving scheme is adopted; the driving algorithms mainly include the traditional RGB three-field rendering method, the 180Hz Stencil method and the 180Hz three-field LPD method. For a 120Hz color-sequential display, a two-color-field driving scheme is adopted; the driving algorithms mainly include the 120Hz Stencil method, the 120Hz Stencil-LPD method, and the like.

For the above driving algorithms of the color-sequential display, the inventors of the present application found through investigation that: the Stencil method suits images with low contrast; the LPD method suits low-saturation images; the Edge-Stencil method suits images with uniform edge colors; and for images whose content contains only one of the three primary colors (red, green or blue) and needs no dimming, the traditional RGB rendering method applies.

The driving algorithms selectable in this step are not limited to those disclosed above; any color-sequential display driving algorithm available at the given refresh rate may be used.

In an alternative embodiment, taking a color sequential display with a refresh rate of 240Hz as an example, the following 6 driving algorithms can be applied to the training image:

a)240Hz Edge method (Global Dimming)

b)240Hz GPDK method (Global Dimming)

c) 240Hz Global Stencil-FSC method (Global Dimming)

d)240Hz LPDK method (Local Dimming)

e)240Hz RGBK method (Global Dimming)

f) 240Hz Stencil-FSC method (Local Dimming)

And respectively applying the 6 different driving algorithms to each training image in the training set, calculating the color separation degree of the training images under each driving algorithm, and marking the driving algorithm with the lowest color separation degree as the matching driving algorithm of the training images.
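The labelling loop can be sketched as follows; `cbu_of` is an assumed callable wrapping steps S202 to S204 plus the color-separation metric, and all names are illustrative.

```python
def label_training_images(images, algorithms, cbu_of):
    """Labelling step 2): apply every candidate driving algorithm to
    each training image and keep the one with the lowest color
    separation (CBU value) as the sample label."""
    labels = []
    for img in images:
        cbus = [cbu_of(img, algo) for algo in algorithms]
        labels.append(algorithms[cbus.index(min(cbus))])
    return labels
```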

3) And inputting the training image into the image classification model to obtain a driving algorithm corresponding to the training image output by the image classification model.

For example, the training images in the training set are input to an image classification model, such as the ResNet_50 model, for training.

In the image classification model, the training images pass through a deep network composed of convolutional layers (Conv), batch normalization layers (Batch_Norm), pooling layers (Pool), fully connected layers (FC) and the like, and then through a Softmax classifier, which outputs the driving algorithm corresponding to the image.

4) Update the parameters of the image classification model, with the training target of making the driving algorithm output for the training image approach the matching driving algorithm in the training image's label.

For example, the difference between the driving algorithm output by the image classification model for a training image and the matching driving algorithm in the training image's label can be measured by a loss function (such as cross-entropy), and the parameters of the image classification model are updated with the Adam optimizer, taking as the training target that the output driving algorithm approaches the labeled matching algorithm.
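A minimal sketch of this training target, using a softmax classifier head with a cross-entropy loss; plain gradient descent stands in for the Adam optimizer here, and all shapes and names are illustrative.

```python
import numpy as np

def train_step(feats, label_idx, w, b, lr=0.1):
    """One update of a softmax head: cross-entropy loss between the
    predicted driving-algorithm distribution and the labelled
    matching algorithm, followed by a gradient step on (w, b)."""
    logits = feats @ w + b
    z = np.exp(logits - logits.max())     # numerically stable softmax
    probs = z / z.sum()
    loss = -np.log(probs[label_idx])      # cross-entropy for one sample
    grad = probs.copy()
    grad[label_idx] -= 1.0                # d loss / d logits
    w -= lr * np.outer(feats, grad)       # in-place parameter update
    b -= lr * grad
    return loss
```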

Through the training process, the image classification model can finally output a driving algorithm matched with the image aiming at the input image, and the matching process depends on evaluating the color separation degree of the image under a specific driving algorithm. The degree of color separation of the image can be evaluated by subjective color separation evaluation, and can also be evaluated by a visual saliency model.

Based on this, in an alternative embodiment, the method for calculating the color separation degree of the training image under the driving algorithm may include:

1) and acquiring images of all fields of the training image under a driving algorithm, and combining the images of all fields into a simulation display image of the training image.

Specifically, according to the adopted driving algorithm, the images of each field of the training image under that algorithm are obtained through the foregoing steps S202 to S204, and these field images are then combined into the simulated display image of the training image.

2) And calculating the Visual Saliency VS of each area in the analog display image, and determining the area with the VS value larger than a preset threshold value in the analog display image as a Dominant Visual Saliency (DVS) area.

The visual saliency theory determines how strongly image content attracts attention using information such as brightness, color and direction in the image, and has been very successful in the field of full-reference image quality assessment (FR-IQA). In an alternative embodiment, a Graph-Based Visual Saliency (GBVS) method may be employed to calculate the VS value of the simulated display image. After the VS value of the simulated display image is calculated, the area whose VS value is greater than the preset threshold is determined as the DVS area. In an alternative embodiment, 0.5 may be selected as the threshold. Generally, the DVS region contains almost all of the severe color-separation fringes.

3) The color difference between a DVS region of the analog display image and a corresponding region of the training image is calculated on a pixel-by-pixel basis.

Specifically, based on the DVS region calculated in step 2), the color difference of the corresponding regions of the training image and its simulated display image is calculated pixel by pixel. The color difference is the Euclidean distance between two pixels in a color space.

4) And summing the color differences of all pixels in the DVS area to obtain a total color difference value, and dividing the total color difference value by the number of the pixels in the DVS area to obtain a value of color separation degree, namely a CBU value.

Specifically, the color differences calculated in step 3) are summed to obtain the total color difference value, which is then normalized by the number of pixels in the DVS area, finally yielding the value representing the degree of color separation.
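Steps 2) to 4) can be sketched as follows; the visual saliency map `vs_map` is assumed to be precomputed (e.g. by GBVS), which is outside this sketch, and the names are illustrative.

```python
import numpy as np

def cbu_value(original, simulated, vs_map, threshold=0.5):
    """CBU metric: average per-pixel color difference over the DVS
    region.

    original, simulated : H x W x 3 arrays in the same color space
    vs_map              : H x W visual saliency values in [0, 1]
    """
    dvs = vs_map > threshold                     # step 2): DVS region
    if not dvs.any():
        return 0.0
    # Step 3): Euclidean distance in color space, pixel by pixel.
    diff = np.linalg.norm(original - simulated, axis=2)
    # Step 4): total color difference normalized by the DVS pixel count.
    return float(diff[dvs].sum() / dvs.sum())
```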

By the method for calculating the color separation degree of the training image under the driving algorithm, the color separation degree of the image can be represented by an objective numerical value, so that the image classification model adjusts the parameters of the image classification model according to the objective numerical value.

In the following, for a specific example, the color separation degree of the image under different driving algorithms is evaluated by using the above-mentioned calculation method of the color separation degree.

On one hand, by using the color sequential display control method based on deep learning provided by the embodiment of the present application, 6 driving algorithms matched with 6 images in fig. 19 are respectively determined to drive and display the 6 images, and finally, a synthesized color image is shown in fig. 20.

On the other hand, in the prior art, the 6 images in fig. 19 are all driven and displayed by the same driving algorithm (e.g., Edge method), and the finally synthesized color image is shown in fig. 21.

As can be seen from the figure, when the same driving algorithm is used to drive and display the 6 images in fig. 19, the color separation phenomenon of the area within the dashed-line frame is more obvious. Further, in combination with the above-described method of calculating the degree of color separation, the degree of color separation values in fig. 20 and 21 can be calculated as shown in table 2, wherein the smaller the numerical value, the lower the degree of color separation.

According to the color-sequential display control method based on deep learning, a matched driving algorithm is determined for each frame of a video, frame by frame, according to the image characteristics of the entire area of that frame. This suppresses the color separation the image would exhibit on the color-sequential display, reduces its degree of color separation, gives every frame a good color-separation suppression effect, improves image quality, and provides a good visual effect for the user.

TABLE 2 color separation numerical comparison

The above embodiment described in detail how, for an input single-frame image, a matching driving algorithm is determined based on the entire area of the image, and how the image of each field is calculated according to the specific application of that driving algorithm in the color-sequential display. For some images, however, the image characteristics differ greatly between regions; refining the driving algorithm down to the individual regions contained in the image helps suppress the color separation of each region.

Next, a color-sequential display control method with each area in an input single-frame image as an independent object will be described.

Based on this, referring to fig. 22, a method for controlling a color-sequential display based on deep learning according to an embodiment of the present application may include the following steps:

step S301, the input single frame image is partitioned, and for each sub-area, a driving algorithm matched with the sub-area is determined.

Specifically, the input single-frame image is divided into at least two regions; for each region, a driving algorithm matched with the region is determined by a deep learning method based on the image characteristics of the region, in combination with the refresh rate of the color-sequential display.
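The per-region matching of step S301 can be sketched as follows; `classify` stands in for the trained image classification model, and the 3 × 4 grid matches the later example. All names are illustrative.

```python
import numpy as np

def match_region_algorithms(image, classify, rows=3, cols=4):
    """Split the frame into a rows x cols grid and ask the trained
    classifier for a driving algorithm per sub-region.  Sub-regions
    are numbered 1, 2, ... row by row, as in fig. 23."""
    image = np.asarray(image)
    h, w = image.shape[:2]
    matched = {}
    region_no = 1
    for r in range(rows):
        for c in range(cols):
            sub = image[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            matched[region_no] = classify(sub)
            region_no += 1
    return matched
```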

Step S302, calculating to obtain the ideal backlight distribution of each sub-area, and combining the ideal backlight distribution of the whole area of the single-frame image.

Specifically, according to the driving algorithm determined for each region in the single frame image in step S301, the ideal backlight distribution of each region in each field of the single frame image is calculated respectively, and finally, the ideal backlight distributions of the entire region in each field of the single frame image are combined.

Step S303, calculating to obtain the simulated backlight distribution and the transmittance of the whole area of the single-frame image.

Specifically, based on the ideal backlight distribution of the entire area of the single-frame image calculated in step S302, the simulated backlight distribution of the entire area in each field is calculated in combination with the light diffusion characteristic of the color-sequential display.

And step S304, calculating to obtain the image of each field of the single-frame image.

Specifically, from the simulated backlight distribution and the transmittance calculated in step S303, images of the respective fields are calculated.

The fields are sequentially flashed at a predetermined frequency in the color sequential display, and an original color image of the frame image is formed.

The color-sequential display control method based on deep learning provided by the embodiment of the application adopts the driving algorithm matched with different areas in the same image in a self-adaptive manner, so that the color separation degree of any area in the image is reduced to the minimum.

The following describes steps S301 to S304 in detail with respect to a color-sequential display having a refresh rate of 240Hz, taking a specific image as an example.

1) The image is partitioned.

For the input original image shown in fig. 11, the original image is first divided into regions, and the size of the regions is determined according to the number of mini-LED beads of the color sequential display, which may be 3 × 4, 9 × 16, 27 × 48, and the like. In an alternative embodiment, a partition size of 3 x 4 is used, and the positions of the sub-regions and the region numbers are shown in fig. 23.

2) The driving algorithm for each sub-region is determined.

The partitioned image is input into the image classification model trained in the foregoing embodiment, which matches an optimal driving algorithm to each sub-region. As shown in fig. 24, the driving algorithm matched with sub-regions 2, 3, 9, 10, 11 and 12 is the Edge method, the driving algorithm matched with sub-regions 6 and 7 is the Global_Stencil method, and the driving algorithm matched with sub-regions 1, 4, 5 and 8 is the RGBK algorithm.

3) An ideal backlight distribution, a simulated backlight distribution and a transmittance for each sub-region are calculated according to the selected drive algorithm.

Firstly, since different driving algorithms compute the backlight differently, the ideal backlight is calculated separately for the corresponding sub-regions according to their respective driving algorithms.

As shown in fig. 25, for sub-regions 2, 3, 9, 10, 11 and 12 using the Edge algorithm, since the edges of an image are where color separation is most likely, the image edges are first extracted with a Sobel operator, and the extracted edge information is used as the ideal backlight distribution. For sub-regions 6 and 7 using the Global_Stencil algorithm, most of the content of the corresponding sub-region images is displayed in the first field, the remaining RGB information is displayed in the other three fields, and the dimmed first-field image serves as the ideal backlight distribution. For sub-regions 1, 4, 5 and 8 using the RGBK algorithm, since the information in these sub-regions is black, their backlight is simply turned off, which serves as the ideal backlight distribution.
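The Sobel-based ideal backlight for the Edge sub-regions can be sketched as follows; normalizing the gradient magnitude to [0, 1] and the edge-replicating border are assumptions, not details from the source.

```python
import numpy as np

def edge_ideal_backlight(gray):
    """Extract image edges with a Sobel operator and use the edge
    magnitude as the ideal backlight distribution of a sub-region."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal Sobel
    ky = kx.T                                                    # vertical Sobel
    h, w = gray.shape
    p = np.pad(gray, 1, mode="edge")          # replicate borders
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                        # explicit 3x3 correlation
        for j in range(3):
            patch = p[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)                    # gradient magnitude
    return mag / mag.max() if mag.max() > 0 else mag
```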

Next, after obtaining the ideal backlight distribution, the DFT and GLPF are used to calculate the simulated backlight distribution, which is shown in fig. 26.

Finally, the transmittance thereof is calculated using the formulas (1) to (3). The transmittances of the four fields are shown in fig. 27.

4) And synthesizing the target image.

After the simulated backlight distribution and the transmittance are calculated by the above method, the final output of the single-frame image is computed. The three primary-color simulated backlight signals (BL_R, BL_G, BL_B) are combined with the minimum transmittance signal T_min to display, in the first field, a high-brightness image carrying coarse color information. Similarly, combining BL_R with T'_R, BL_G with T'_G, and BL_B with T'_B generates the three further primary-color field images. The four primary-color images are sequentially displayed at a frequency of 240Hz, generating a vivid color image, as shown in fig. 28.

The specific steps of applying different driving algorithms to different sub-regions of an image have now been described: the image is partitioned, and a matched driving algorithm is determined for each area to obtain a good display effect. On the one hand, as shown in fig. 29, for a specific sub-region the color separation is significantly suppressed by the content-adaptive driving algorithm, with a CBU value of 10.21. On the other hand, as shown in fig. 30, when only one algorithm (here the Stencil method) is applied to the whole picture, the edge colors are not uniform and distinct red, green and blue color-separation stripes appear, with a CBU value of 13.85.

For other modified embodiments of the color-sequential display control method based on deep learning provided in the foregoing embodiments, please refer to the foregoing embodiments in this specification, and details are not repeated herein.

The above embodiments of the present application adaptively apply different driving algorithms to different regions of the same image, which is essentially a completely new form of Local Dimming. This content-adaptive local dimming algorithm minimizes the degree of color separation in any region of the image, which is the most significant advantage of the deep-learning-based color-sequential display control method over the conventional local dimming algorithm; it also inherits the inherent advantages of the conventional local dimming algorithm on a color-sequential display, such as high dynamic range, triple light efficiency and triple resolution.

The color-sequential display control method based on deep learning provided by the embodiment of the application can be directly applied to mini-LED backlights. As the size of mini-LEDs continues to shrink, the number of local dimming partitions grows and the local dimming accuracy increases; combined with the method provided here, the image quality is greatly improved compared with the traditional local dimming algorithm. In addition, owing to the inherent triple-resolution characteristic of the color-sequential display, its space-bandwidth product is also tripled, so it can be applied to devices such as VR and AR, easing the difficulty of improving the resolution of VR and AR devices.

The above embodiments of the present application achieve a better display effect by partitioning the image and performing local dimming on each region; however, partitioned driving is not blindly chosen in all cases. Whether to partition the image, i.e., whether to choose local dimming or Global Dimming, can be decided according to the actual application requirements; in particular, a trade-off is made between energy saving and driving cost. For a color-sequential display with higher energy-saving requirements, a local dimming algorithm dims the local backlight and saves backlight power; for a color-sequential display with stricter driving-cost requirements, a global dimming algorithm can be adopted, which keeps the backlight fully on, costs less to drive, but consumes relatively more power.

Based on the color-sequential display control method based on the deep learning, the application also provides a color-sequential display control device, and the color-sequential display control device described below and the color-sequential display control method based on the deep learning can be referred to correspondingly.

Referring to fig. 31, a color sequential display control apparatus according to an embodiment of the present application may include:

a driving algorithm matching module 21, configured to: determine, for an input single-frame image, a driving algorithm matched with the single-frame image based on the image characteristics of the single-frame image and the refresh rate of the color-sequential display;

an ideal backlight calculation module 22, configured to: calculate the ideal backlight distribution of the single-frame image in each field using the driving algorithm;

a simulated backlight and compensation module 23, configured to: calculate the simulated backlight distribution and the transmittance of the single-frame image in each field from the ideal backlight distribution, in combination with the light diffusion characteristic of the color-sequential display; and

a field image calculation module 24, configured to: calculate the image of each field from the simulated backlight distribution and the transmittance.
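The four modules form one pipeline per frame, which can be sketched as below. This is a minimal NumPy illustration under stated assumptions: the feature rule in `select_algorithm` stands in for the trained image classification model, and the nearest-neighbour upsample plus 3x3 box blur in `_diffuse` stands in for the display's measured light-diffusion characteristic; the class and method names are hypothetical.

```python
import numpy as np

class ColorSequentialController:
    """Sketch of the four-module pipeline (modules 21-24)."""

    def __init__(self, refresh_rate=180, n_blocks=(8, 8)):
        self.refresh_rate = refresh_rate
        self.n_blocks = n_blocks

    def select_algorithm(self, frame):
        # Module 21: choose a driving algorithm from image features.
        # Placeholder rule in lieu of the trained classifier.
        saturation = frame.max(axis=2) - frame.min(axis=2)
        return "rgb" if saturation.mean() > 0.3 else "rgbw"

    def ideal_backlight(self, frame):
        # Module 22: per-partition ideal backlight for each R/G/B field.
        rows, cols = self.n_blocks
        h, w, _ = frame.shape
        fields = np.zeros((3, rows, cols))
        for ch in range(3):
            for r in range(rows):
                for c in range(cols):
                    block = frame[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols, ch]
                    fields[ch, r, c] = block.max()
        return fields

    def simulate_and_compensate(self, frame, ideal):
        # Module 23: diffuse the LED grid to get the simulated backlight,
        # then divide it out to get the LC transmittance (compensation).
        h, w, _ = frame.shape
        simulated = np.stack([self._diffuse(f, (h, w)) for f in ideal])
        transmittance = np.clip(frame.transpose(2, 0, 1)
                                / np.maximum(simulated, 1e-6), 0.0, 1.0)
        return simulated, transmittance

    def field_images(self, simulated, transmittance):
        # Module 24: displayed field image = backlight * transmittance.
        return simulated * transmittance

    @staticmethod
    def _diffuse(grid, shape):
        # Nearest-neighbour upsample, then a 3x3 box blur as a stand-in
        # for the real light-spread function of the backlight.
        up = np.kron(grid, np.ones((shape[0] // grid.shape[0],
                                    shape[1] // grid.shape[1])))
        padded = np.pad(up, 1, mode="edge")
        return sum(padded[i:i + shape[0], j:j + shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
```

Running the stages in order reproduces the module chain 21 → 22 → 23 → 24 for one input frame.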

In summary:

the embodiments of the present application drive the display with a color-sequential driving algorithm and replace the color filter with temporal color mixing, so that the luminous efficiency of the display is three times that of the original display and its power consumption is reduced; at the same time, a color-sequential LCD needs only one third of the pixels of a conventional LCD to reach the same resolution, i.e. the resolution can be tripled on an LCD panel of the same size.

On this basis, the deep-learning-based color-sequential display control method provided by the embodiments of the present application uses a deep learning method to determine, for each frame of a video, a driving algorithm matched to the specific image characteristics of that frame's content. This suppresses the color separation (color breakup) that arises when the image is shown on a color-sequential display and reduces its severity, so that every frame obtains a good color separation suppression effect, improving image quality and providing a good visual experience for the user.
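Per claim 2, the classifier's output is additionally filtered by the panel's refresh rate before a driving algorithm is selected. A minimal sketch of that selection step follows; the algorithm names, their field counts, the 60 Hz base frame rate, and the softmax-score dictionary are all illustrative assumptions, not values from the embodiment.

```python
def match_driving_algorithm(scores, refresh_rate):
    """Pick the best-scoring driving algorithm whose field count is
    feasible at the given panel refresh rate (sketch).

    `scores` maps algorithm name -> classifier confidence, standing in
    for the output of the trained image classification model.
    """
    # Fields per video frame required by each candidate (assumed).
    fields = {"rgb": 3, "rgbw": 4, "stencil": 5}
    base_frame_rate = 60  # assumed video frame rate, in Hz
    # Keep only algorithms the display can refresh fast enough for.
    feasible = {a: s for a, s in scores.items()
                if fields[a] * base_frame_rate <= refresh_rate}
    if not feasible:
        return "rgb"  # fall back to the plain three-field drive
    return max(feasible, key=feasible.get)
```

At 240 Hz a four-field algorithm is still feasible, while at 180 Hz only the three-field drive fits, regardless of the classifier's preference.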

Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, the embodiments may be combined as needed, and the same and similar parts may be referred to each other.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
