Image processing method, image processing device, electronic equipment and computer readable medium

Document No.: 9201 · Publication date: 2021-09-17

1. An image processing method, comprising:

acquiring a target image;

obtaining the gray value of each pixel point in the target image;

obtaining, according to the gray value of each pixel point in the target image, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation;

determining the target brightness level of the target image according to the target gray value, the gray mean value and the gray standard deviation;

and carrying out color enhancement processing on the target image according to the target brightness level.

2. The method of claim 1, wherein determining the target brightness level of the target image from the target grayscale value, the grayscale mean, and the grayscale standard deviation comprises:

if the target gray value is less than or equal to a first gray threshold, the gray mean value is less than a first mean threshold, and the gray standard deviation is less than a first standard deviation threshold, determining that the target brightness level of the target image is a first brightness level.

3. The method of claim 2, wherein determining the target brightness level of the target image according to the target gray value, the gray mean value, and the gray standard deviation further comprises:

if the target gray value is greater than or equal to a second gray threshold, the gray mean value is greater than a second mean threshold, and the gray standard deviation is less than the first standard deviation threshold, determining that the target brightness level of the target image is a second brightness level;

wherein the second gray threshold is greater than the first gray threshold, and the second mean threshold is greater than the first mean threshold.

4. The method of claim 3, wherein determining the target brightness level of the target image from the target grayscale value, the grayscale mean, and the grayscale standard deviation further comprises:

if the target gray value is less than or equal to the first gray threshold, and the gray mean value is greater than or equal to the first mean threshold or the gray standard deviation is greater than or equal to the first standard deviation threshold, determining that the target brightness level of the target image is a third brightness level; or,

if the target gray value is greater than or equal to the second gray threshold, and the gray mean value is less than or equal to the second mean threshold or the gray standard deviation is greater than or equal to the first standard deviation threshold, determining that the target brightness level of the target image is a third brightness level; or,

and if the target gray value is greater than the first gray threshold and less than the second gray threshold, determining that the target brightness level of the target image is a third brightness level.

5. The method of claim 4, wherein color enhancing the target image according to the target brightness level comprises:

if the target brightness level is a first brightness level, performing color enhancement processing on the target image by using a first neural network model, wherein the first neural network model is obtained by training with a first image training set comprising the first brightness level;

if the target brightness level is a second brightness level, performing color enhancement processing on the target image by using a second neural network model, wherein the second neural network model is obtained by training with a second image training set comprising the second brightness level;

and if the target brightness level is a third brightness level, performing color enhancement processing on the target image by using a third neural network model, wherein the third neural network model is obtained by training with a third image training set comprising the third brightness level.

6. The method of claim 4, wherein acquiring a target image comprises:

acquiring a video to be processed;

obtaining a target video according to the video to be processed;

determining the target image from the target video.

7. The method of claim 6, wherein color enhancing the target image according to the target brightness level comprises:

if the target brightness level of the target image in the target video comprises a first brightness level and a third brightness level, performing color enhancement processing on a video frame in the target video by using a fourth neural network model, wherein the fourth neural network model is obtained by training with a fourth image training set comprising the first brightness level and the third brightness level;

if the target brightness level of the target image in the target video is the second brightness level and the third brightness level, performing color enhancement processing on a video frame in the target video by using a fifth neural network model, wherein the fifth neural network model is obtained by training with a fifth image training set comprising the second brightness level and the third brightness level;

if the target brightness level of the target image in the target video is the first brightness level and the second brightness level, performing color enhancement processing on a video frame in the target video by using a sixth neural network model, wherein the sixth neural network model is obtained by training with a sixth image training set comprising the first brightness level and the second brightness level;

and if the target brightness level of the target image in the target video is the first brightness level, the second brightness level and the third brightness level, performing color enhancement processing on a video frame in the target video by using a seventh neural network model, wherein the seventh neural network model is obtained by training with a seventh image training set comprising the first brightness level, the second brightness level and the third brightness level.

8. The method of claim 6, wherein obtaining a target video from the video to be processed comprises:

detecting a shot-change flag (a marker of an abrupt shot change) in the video to be processed;

and segmenting the video to be processed according to the shot-change flag to obtain at least one target video.

9. The method of claim 6, wherein determining the target image from the target video comprises:

and sampling the video frames of the target video according to a preset sampling frequency to obtain at least one target image.

10. The method of claim 1, wherein obtaining, according to the gray value of each pixel point in the target image, the target gray value that is the gray value shared by the largest number of pixel points in the target image, the gray mean value, and the gray standard deviation comprises:

carrying out gray level statistics on the gray level of each pixel point in the target image to obtain a gray level histogram of the target image;

determining the gray value shared by the largest number of pixel points in the target image according to the peak value of the gray histogram, and taking the gray value as the target gray value of the target image;

and determining the gray mean value and the gray standard deviation of the target image according to the gray value of each pixel point in the target image and the width value and the height value of the target image.

11. An image processing apparatus characterized by comprising:

an image acquisition module configured to acquire a target image;

the pixel gray scale module is configured to obtain the gray scale value of each pixel point in the target image;

the gray scale statistics module is configured to obtain, according to the gray value of each pixel point in the target image, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation;

a brightness level module configured to determine a target brightness level of the target image according to the target gray value, the gray mean value, and the gray standard deviation;

and the color enhancement module is configured to perform color enhancement processing on the target image according to the target brightness level.

12. An electronic device, comprising:

at least one processor;

storage means for storing at least one program;

wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1-10.

13. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-10.

Background

At present, schemes for intelligent image color enhancement often require complex operations on the image, for example, image color enhancement based on generative adversarial networks. However, when such a complex scheme is applied to a video, the very large number of image frames sharply increases the amount of computation; this places higher requirements on hardware devices, reduces processing speed, and cannot meet the image processing demands of current video application scenarios.

Therefore, a new image processing method, apparatus, electronic device and computer readable medium are needed.

It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.

Disclosure of Invention

The embodiments of the present disclosure provide an image processing method, an image processing apparatus, an electronic device, and a computer readable medium, so as to overcome, at least to some extent, the low computational efficiency caused in the related art by adopting high-complexity algorithms to guarantee color enhancement quality.

Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.

The embodiment of the present disclosure provides an image processing method, including: acquiring a target image; obtaining the gray value of each pixel point in the target image; obtaining, according to the gray value of each pixel point in the target image, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation; determining the target brightness level of the target image according to the target gray value, the gray mean value and the gray standard deviation; and carrying out color enhancement processing on the target image according to the target brightness level.

An embodiment of the present disclosure provides an image processing apparatus, including: an image acquisition module configured to acquire a target image; a pixel gray scale module configured to obtain the gray value of each pixel point in the target image; a gray scale statistics module configured to obtain, according to the gray value of each pixel point in the target image, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation; a brightness level module configured to determine a target brightness level of the target image according to the target gray value, the gray mean value, and the gray standard deviation; and a color enhancement module configured to perform color enhancement processing on the target image according to the target brightness level.

In some exemplary embodiments of the present disclosure, the brightness level module includes a first brightness level unit configured to determine the target brightness level of the target image as a first brightness level if the target gray-scale value is less than or equal to a first gray-scale threshold value, the gray-scale mean value is less than a first mean value threshold value, and the gray-scale standard deviation is less than a first standard deviation threshold value.

In some exemplary embodiments of the present disclosure, the brightness level module further includes a second brightness level unit, which may be configured to determine that the target brightness level of the target image is a second brightness level if the target gray value is greater than or equal to a second gray threshold, the gray mean value is greater than a second mean threshold, and the gray standard deviation is less than the first standard deviation threshold; wherein the second gray threshold is greater than the first gray threshold, and the second mean threshold is greater than the first mean threshold.

In some exemplary embodiments of the present disclosure, the brightness level module further includes a third brightness level unit configured to determine that the target brightness level of the target image is a third brightness level if the target gray value is less than or equal to the first gray threshold, and the gray mean value is greater than or equal to the first mean threshold or the gray standard deviation is greater than or equal to the first standard deviation threshold; or if the target gray value is greater than or equal to the second gray threshold, and the gray mean value is less than or equal to the second mean threshold or the gray standard deviation is greater than or equal to the first standard deviation threshold; or if the target gray value is greater than the first gray threshold and less than the second gray threshold.

In some exemplary embodiments of the present disclosure, the color enhancement module includes a first enhancement unit, a second enhancement unit, and a third enhancement unit. The first enhancement unit is configured to perform color enhancement processing on the target image by using a first neural network model if the target brightness level is a first brightness level, wherein the first neural network model is obtained by training with a first image training set including the first brightness level. The second enhancement unit is configured to perform color enhancement processing on the target image by using a second neural network model if the target brightness level is a second brightness level, wherein the second neural network model is obtained by training with a second image training set including the second brightness level. The third enhancement unit is configured to perform color enhancement processing on the target image by using a third neural network model if the target brightness level is a third brightness level, wherein the third neural network model is obtained by training with a third image training set including the third brightness level.

In some exemplary embodiments of the present disclosure, the image acquisition module includes a video acquisition unit, a target video unit, and a target image unit. The video acquisition unit is configured to acquire a video to be processed. The target video unit is configured to obtain a target video from the video to be processed; the target image unit is configured to determine the target image from the target video.

In some exemplary embodiments of the present disclosure, the color enhancement module includes a fourth enhancement unit, a fifth enhancement unit, a sixth enhancement unit, and a seventh enhancement unit. The fourth enhancement unit is configured to perform color enhancement processing on a video frame in the target video by using a fourth neural network model if the target brightness level of the target image in the target video includes a first brightness level and a third brightness level, wherein the fourth neural network model is obtained by training with a fourth image training set including the first brightness level and the third brightness level. The fifth enhancement unit is configured to perform color enhancement processing on a video frame in the target video by using a fifth neural network model if the target brightness level of the target image in the target video is the second brightness level and the third brightness level, wherein the fifth neural network model is obtained by training with a fifth image training set including the second brightness level and the third brightness level. The sixth enhancement unit is configured to perform color enhancement processing on a video frame in the target video by using a sixth neural network model if the target brightness level of the target image in the target video is the first brightness level and the second brightness level, wherein the sixth neural network model is obtained by training with a sixth image training set including the first brightness level and the second brightness level. The seventh enhancement unit is configured to perform color enhancement processing on a video frame in the target video by using a seventh neural network model if the target brightness level of the target image in the target video is the first brightness level, the second brightness level and the third brightness level, wherein the seventh neural network model is obtained by training with a seventh image training set including the first brightness level, the second brightness level and the third brightness level.

In some exemplary embodiments of the present disclosure, the target video unit includes a shot detection subunit and a target video subunit. The shot detection subunit is configured to detect a shot-change flag in the video to be processed. The target video subunit is configured to segment the video to be processed according to the shot-change flag to obtain at least one target video.

In some exemplary embodiments of the present disclosure, the target image unit is configured to sample the video frames of the target video at a preset sampling frequency to obtain at least one target image.

In some exemplary embodiments of the present disclosure, the gray scale statistics module includes a histogram unit, a first statistics unit, and a second statistics unit. The histogram unit is configured to perform gray level statistics on the gray value of each pixel point in the target image to obtain a gray histogram of the target image. The first statistics unit is configured to determine the gray value shared by the largest number of pixel points in the target image according to the peak value of the gray histogram and use it as the target gray value of the target image. The second statistics unit is configured to determine the gray mean value and the gray standard deviation of the target image according to the gray value of each pixel point in the target image and the width and height of the target image.

An embodiment of the present disclosure provides an electronic device, including: at least one processor; storage means for storing at least one program which, when executed by the at least one processor, causes the at least one processor to implement the image processing method as described in the above embodiments.

The embodiments of the present disclosure propose a computer-readable medium on which a computer program is stored, which when executed by a processor implements the image processing method as described in the embodiments above.

According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the image processing method or the method provided in the various alternative implementations of the image processing method.

In the technical solutions provided in some embodiments of the present disclosure, the target gray value, the gray mean value, and the gray standard deviation obtained from the gray value of each pixel point in the target image comprehensively represent the brightness of the target image from different dimensions, so that the obtained target brightness level is highly accurate. Further, when color enhancement processing is performed on the target image according to the target brightness level, the brightness of the target image can be adjusted in a targeted manner, avoiding images that are too dark or overexposed. Meanwhile, since only the target gray value, the gray mean value and the gray standard deviation need to be calculated, the computational complexity can be greatly reduced while the color enhancement quality is guaranteed, improving computational efficiency.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:

fig. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or apparatus of an embodiment of the present disclosure may be applied;

FIG. 2 schematically shows a flow diagram of an image processing method according to an embodiment of the present disclosure;

FIG. 3 is a flowchart in an exemplary embodiment based on step S240 of FIG. 2;

FIG. 4 is a flowchart in an exemplary embodiment based on step S250 of FIG. 2;

FIG. 5 is a flowchart in an exemplary embodiment based on step S210 of FIG. 2;

FIG. 6 is a flowchart in an exemplary embodiment based on step S250 of FIG. 2;

FIG. 7 is a flowchart in an exemplary embodiment based on step S520 of FIG. 5;

FIG. 8 is a flowchart in an exemplary embodiment based on step S230 of FIG. 2;

FIG. 9 schematically shows a flow diagram of an image processing method according to an embodiment of the present disclosure;

FIG. 10 schematically illustrates an architecture diagram of a machine learning model according to an exemplary embodiment of the present disclosure;

FIG. 11 schematically illustrates an architecture diagram of a machine learning model according to an exemplary embodiment of the present disclosure;

fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;

FIG. 13 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.

Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.

The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in at least one hardware module or integrated circuit, or across different networks and/or processor devices and/or microcontroller devices.

The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.

Cloud technology refers to a hosting technology that unifies a series of resources, such as hardware, software and networks, in a wide area network or a local area network to realize the computation, storage, processing and sharing of data.

Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, image websites and other web portals, require large amounts of computing and storage resources. With the rapid development of the internet industry, each item may come to have its own identification mark that must be transmitted to a background system for logical processing; data at different levels are processed separately, and all kinds of industry data require strong backend support, which can only be realized through cloud computing.

An artificial intelligence cloud service is also commonly referred to as AIaaS (AI as a Service). It is a service mode of an artificial intelligence platform: the AIaaS platform splits several types of common AI services and provides independent or packaged services in the cloud. This service mode is similar to an AI-themed app store: all developers can access one or more of the platform's artificial intelligence services through an API (application programming interface), and some qualified developers can also use the AI framework and AI infrastructure provided by the platform to deploy, operate and maintain their own dedicated cloud artificial intelligence services.

With the diversification of the internet, real-time data streams and connected devices, and the growing demands of search services, social networks, mobile commerce, open collaboration and the like, cloud computing has developed rapidly. Unlike earlier parallel and distributed computing, the emergence of cloud computing will, at the conceptual level, drive revolutionary change in the entire internet model and in enterprise management models.

Fig. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or apparatus of an embodiment of the present disclosure may be applied.

As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, portable computers, desktop computers, wearable devices, virtual reality devices, smart homes, smart cameras, and so forth.

The server 105 may be a server providing various services: an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application. For example, the terminal device 103 (which may also be the terminal device 101 or 102) uploads the target image to the server 105. The server 105 may acquire the target image; obtain the gray value of each pixel point in the target image; obtain, according to the gray value of each pixel point in the target image, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation; determine the target brightness level of the target image according to the target gray value, the gray mean value and the gray standard deviation; and perform color enhancement processing on the target image according to the target brightness level. The color enhancement result is fed back to the terminal device 103, which may then display the color-enhanced target image on a screen or further process it, thereby achieving fast image processing while ensuring color enhancement quality.

For another example, the server 105 may acquire a video to be processed; obtain a target video from the video to be processed; determine the target image from the target video; obtain the gray value of each pixel point in the target image; obtain, according to the gray value of each pixel point in the target image, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation; determine the target brightness level of the target image according to the target gray value, the gray mean value and the gray standard deviation; and perform color enhancement processing on the target image according to the target brightness level. The color enhancement result is fed back to the terminal device 101 (which may also be the terminal device 102 or 103), so that the user can browse the color-enhanced video on the terminal device 101, improving the viewing experience.

Fig. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure. The method provided by the embodiment of the present disclosure may be executed by any electronic device with computing capability, for example, the server 105 and/or the terminal devices 102 and 103 in the embodiment of fig. 1 described above. In the following embodiments, the server 105 is taken as the execution subject by way of example, but the present disclosure is not limited thereto.

As shown in fig. 2, an image processing method provided by an embodiment of the present disclosure may include the following steps.

In step S210, a target image is acquired.

In the disclosed embodiments, the target image may be obtained, for example, by sampling video frames from a video.

In step S220, the gray value of each pixel point in the target image is obtained.

In the disclosed embodiment, the gray value is a measure of how light or dark a pixel appears in a black-and-white image. In a grayscale image, the gray levels generally run from the darkest black to the brightest white: the logarithmic relationship between white and black is divided into several levels, called "gray levels", typically ranging from 0 to 255, with 255 representing white and 0 representing black.
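As a concrete illustration (a minimal sketch under assumed tooling, not part of the original disclosure), obtaining the per-pixel gray values of step S220 might look as follows with OpenCV; the file name is hypothetical:

```python
import cv2

# Minimal sketch of step S220 (assumes OpenCV; "frame.png" is hypothetical).
image = cv2.imread("frame.png")                 # three-channel color image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # single-channel gray values
print(gray.dtype, gray.min(), gray.max())       # uint8, values in 0..255
```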

In step S230, a target gray value that is the gray value shared by the largest number of pixel points in the target image, a gray mean value, and a gray standard deviation are obtained according to the gray value of each pixel point in the target image.

In the embodiment of the present disclosure, the target gray value is the gray value that, after counting the gray values of all pixel points in the target image, is shared by the largest number of pixel points. The gray mean value is obtained by averaging the gray values of all pixel points in the target image and may be calculated, for example, according to the following equation:

$$\mathrm{ave} = \frac{1}{\mathrm{width} \times \mathrm{height}} \sum_{i=0}^{\mathrm{width}-1} \sum_{j=0}^{\mathrm{height}-1} \mathrm{Gray}(i, j) \qquad (1)$$

wherein ave is the gray mean value, Gray(i, j) is the gray value of the pixel point with coordinates (i, j) in the target image, width is the width of the target image, and height is its height, with $0 \le i < \mathrm{width}$ and $0 \le j < \mathrm{height}$.

The gray standard deviation is obtained by performing a standard deviation operation on the gray values of the pixel points in the target image and may be calculated, for example, according to the following formula:

$$\mathrm{std} = \sqrt{\frac{1}{\mathrm{width} \times \mathrm{height}} \sum_{i=0}^{\mathrm{width}-1} \sum_{j=0}^{\mathrm{height}-1} \left( \mathrm{Gray}(i, j) - \mathrm{ave} \right)^{2}} \qquad (2)$$

where std is the gray standard deviation and the remaining symbols are as in formula (1).
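As an illustration only (a sketch assuming NumPy; the disclosure does not prescribe an implementation), the three statistics of step S230 can be computed together with the gray histogram:

```python
import numpy as np

def gray_statistics(gray):
    """Sketch of step S230 for an 8-bit grayscale image: the histogram-peak
    (target) gray value, the gray mean of formula (1), and the gray standard
    deviation of formula (2)."""
    hist = np.bincount(gray.ravel(), minlength=256)  # gray histogram
    target_gray = int(np.argmax(hist))               # gray value of most pixels
    ave = float(gray.mean())                         # formula (1)
    std = float(gray.std())                          # formula (2)
    return target_gray, ave, std
```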

In step S240, a target brightness level of the target image is determined according to the target gray value, the gray mean, and the gray standard deviation.

In the embodiment of the disclosure, the target gray value, the gray mean value and the gray standard deviation can be analyzed, and the target brightness level is determined according to the analysis result.

For example, the target gray value, the gray mean value and the gray standard deviation may be matched against a preset mapping table from value ranges to brightness levels, and the matching brightness level is determined as the target brightness level.

In step S250, color enhancement processing is performed on the target image according to the target brightness level.

In the embodiment of the disclosure, the specific scheme of color enhancement of the target image can be determined according to the target brightness level, and the color enhancement processing can be performed on the target image according to the specific scheme of color enhancement. For example, different ways of color enhancement processing schemes may be set according to the difference in target brightness levels.

In an exemplary embodiment, the color enhancement processing may be performed on the target image according to a trained machine learning model, and the color enhancement result for the target image may be determined according to the output of the machine learning model. The machine learning model may be, for example, a convolutional neural network, although the present disclosure is not limited thereto.

According to the image processing method provided by the embodiment of the disclosure, the target gray value, the gray mean value and the gray standard deviation obtained from the gray value of each pixel point in the target image comprehensively represent the brightness of the target image from different dimensions, so that the obtained target brightness level is highly accurate. Further, when color enhancement processing is performed on the target image according to the target brightness level, the brightness of the target image can be adjusted in a targeted manner, avoiding images that are too dark or overexposed. Meanwhile, since only the target gray value, the gray mean value and the gray standard deviation need to be calculated, the computational complexity can be greatly reduced while the color enhancement quality is guaranteed, improving computational efficiency.

Fig. 3 is a flowchart in an exemplary embodiment based on step S240 of fig. 2.

As shown in fig. 3, step S240 in the above-mentioned fig. 2 embodiment may further include the following steps.

In step S310, if the target gray value is less than or equal to the first gray threshold, the gray mean value is less than the first mean threshold, and the gray standard deviation is less than the first standard deviation threshold, the target brightness level of the target image is determined to be the first brightness level.

In the embodiment of the present disclosure, the first gray threshold, the first mean threshold, and the first standard deviation threshold may be chosen empirically; in an exemplary embodiment, the first gray threshold may be, for example, 50, the first mean threshold 60, and the first standard deviation threshold 40, although the present disclosure is not limited thereto. The first brightness level may be a level of low brightness. When the target gray value, the gray mean value, and the gray standard deviation satisfy the determination conditions in step S310, the overall brightness of the target image may be considered low. The gray standard deviation reflects the spread of the image's pixel values: the smaller the standard deviation, the more closely the pixel values cluster around the mean; the larger the standard deviation, the wider the distribution. A small gray standard deviation therefore indicates that few bright regions appear in the target image. Thus, if the conditions of step S310 are satisfied, the target image is determined to have low brightness and to be at the first brightness level.

In an exemplary embodiment, step S240 in the above-described embodiment of fig. 2 may further include the following steps.

In step S320, if the target gray value is greater than or equal to the second gray threshold, the gray mean value is greater than the second mean threshold, and the gray standard deviation is less than the first standard deviation threshold, it is determined that the target brightness level of the target image is the second brightness level; the second gray threshold is greater than the first gray threshold, and the second mean threshold is greater than the first mean threshold.

In the embodiment of the disclosure, the second gray threshold and the second mean threshold may also be chosen empirically. In an exemplary embodiment, the second gray threshold may be, for example, 200, and the second mean threshold may be, for example, 180, but the disclosure is not limited thereto. The second brightness level may be an over-bright level. When the target gray value, the gray mean value, and the gray standard deviation satisfy the determination conditions in step S320, the overall brightness of the target image may be considered too high, i.e., the image is overexposed.

In an exemplary embodiment, step S240 in the above-described embodiment of fig. 2 may further include the following steps.

In step S330, if the target gray value is less than or equal to the first gray threshold, and the gray mean value is greater than or equal to the first mean threshold or the gray standard deviation is greater than or equal to the first standard deviation threshold, the target brightness level of the target image is determined to be a third brightness level; or,

in step S340, if the target gray value is greater than or equal to the second gray threshold, and the gray mean value is less than or equal to the second mean threshold or the gray standard deviation is greater than or equal to the first standard deviation threshold, the target brightness level of the target image is determined to be the third brightness level; or,

in step S350, if the target gray value is greater than the first gray threshold and less than the second gray threshold, the target brightness level of the target image is determined to be the third brightness level.

In the disclosed embodiment, the third brightness level may be a level of medium brightness. The set of conditions in this exemplary embodiment is the complement of the conditions in steps S310 and S320: when neither the condition of step S310 nor that of step S320 is satisfied, the overall brightness of the target image may be considered to be at a medium level.

In this embodiment, by performing accurate analysis and determination on three dimensions of the target gray value, the gray mean value, and the gray standard deviation, the target brightness level of the target image can be accurately obtained through analysis according to specific values of the target gray value, the gray mean value, and the gray standard deviation.
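To make the three-way decision concrete, here is a minimal sketch (not from the disclosure; the default thresholds 50/60/40 and 200/180 are only the example values mentioned above):

```python
def brightness_level(target_gray, ave, std,
                     t1=50, m1=60, s1=40, t2=200, m2=180):
    """Sketch of steps S310-S350: returns 1 (low brightness), 2 (over-bright),
    or 3 (medium). Threshold defaults are illustrative example values."""
    if target_gray <= t1 and ave < m1 and std < s1:
        return 1   # step S310: dark image
    if target_gray >= t2 and ave > m2 and std < s1:
        return 2   # step S320: overexposed image
    return 3       # steps S330-S350: complement of the above
```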

Fig. 4 is a flowchart in an exemplary embodiment based on step S250 of fig. 2.

As shown in fig. 4, step S250 in the above-mentioned embodiment of fig. 2 may further include the following steps. In this embodiment, the color enhancement processing may be performed on the target image according to a trained machine learning model, and the color enhancement result for the target image may be determined according to the output of the machine learning model. The machine learning model may include a first neural network model, a second neural network model, and a third neural network model. FIG. 10 schematically illustrates an architecture diagram of a machine learning model according to an exemplary embodiment of the present disclosure. As shown in fig. 10, the machine learning model may include a first neural network model 1010, a second neural network model 1020, a third neural network model 1030, and a switching module 1040. The network structure adopted by the first neural network model 1010 may be an image generation network, such as a Super-Resolution Convolutional Neural Network (SRCNN) or an image style transfer network such as CycleGAN (Cycle-Consistent Generative Adversarial Network); the second neural network model 1020 and the third neural network model 1030 may use the same structure as the first neural network model 1010 or other image generation models. The network structures of the three models may differ or may be identical; their main difference lies in the training data: even with identical structures, training on image sets distinguished by scene (here, by brightness level) yields models adapted to different scenes. The switching module 1040 selects the correspondingly configured neural network model for processing according to the target brightness level. For example, the switching module 1040 is provided with a determining unit, which performs the processes of steps S220 to S240 and/or steps S310 to S350, and may further include a selecting unit to select different neural network models according to the determination result of the determining unit. The machine learning model in this embodiment may be implemented, for example, within an artificial intelligence cloud service framework.
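A hypothetical sketch of how switching module 1040 might dispatch (reusing gray_statistics() and brightness_level() from the earlier sketches; the model objects and their callable interface are assumptions, not the disclosed implementation):

```python
import cv2

class SwitchingModule:
    """Illustrative switching module: a determining unit computes the target
    brightness level and a selecting unit picks the matching model."""
    def __init__(self, models):
        self.models = models  # e.g. {1: model_1010, 2: model_1020, 3: model_1030}

    def __call__(self, image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        level = brightness_level(*gray_statistics(gray))  # steps S220-S240
        return self.models[level](image)                  # color-enhanced image
```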

In step S410, if the target brightness level is the first brightness level, the color enhancement process is performed on the target image by using the first neural network model 1010, wherein the first neural network model 1010 is obtained by training with the first image training set including the first brightness level.

In the disclosed embodiment, the first neural network model 1010 may be selected, for example, according to the switching module 1040, to perform color enhancement processing on the target image.

In an exemplary embodiment, if the target brightness level is the first brightness level, the brightness and contrast may be specifically adjusted via the lightness (L) channel of the LAB color space to recover more detail, after which the chromaticity is adjusted to achieve the color enhancement of the target image.

In step S420, if the target brightness level is the second brightness level, the color enhancement processing is performed on the target image by using the second neural network model 1020, wherein the second neural network model 1020 is obtained by training using the second image training set including the second brightness level.

In the disclosed embodiment, the second neural network model 1020 may be selected, for example, according to the switching module 1040, to perform color enhancement processing on the target image.

In an exemplary embodiment, if the target brightness level is the second brightness level, the brightness and contrast may likewise be adjusted via the lightness (L) channel of the LAB color space to recover more detail, after which the chromaticity is adjusted to achieve the color enhancement of the target image.

In step S430, if the target brightness level is the third brightness level, the color enhancement processing is performed on the target image by using the third neural network model 1030, where the third neural network model 1030 is obtained by training with the third image training set including the third brightness level.

In the disclosed embodiment, the third neural network model 1030 can be selected for color enhancement processing of the target image, for example, according to the switching module 1040.

In an exemplary embodiment, when the target brightness level is the third brightness level, the color enhancement processing of the target image may also be implemented by enhancing brightness, contrast, and saturation.

In the embodiment, the target images with different brightness degrees are processed by adopting different color enhancement modes, so that the brightness of the target images can be adaptively and effectively adjusted.

Fig. 5 is a flowchart in an exemplary embodiment based on step S210 of fig. 2.

As shown in fig. 5, step S210 in the above-mentioned embodiment of fig. 2 may further include the following steps.

In step S510, a video to be processed is acquired.

In step S520, a target video is obtained from the video to be processed.

In the embodiment of the present disclosure, scene cuts may be detected, and the video to be processed may be segmented according to the detection result to obtain the target video. For example, a frame-difference image between each video frame and its preceding frame may be obtained, the mean of the current frame difference may be compared with the means of the preceding frame differences, and whether a frame is a cut frame is determined from the magnitude and the change of these means. The video to be processed is then cut at the cut frames to obtain the target video.
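One plausible realization of this frame-difference test (a sketch under assumed parameters; the threshold ratio and history length are not from the disclosure):

```python
import cv2
import numpy as np

def detect_cuts(frames, ratio=3.0, history=10):
    """Flag frame i as a cut when its mean absolute difference from frame
    i-1 jumps well above the running average of recent frame differences."""
    cuts, recent, prev = [], [], None
    for i, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = float(np.mean(cv2.absdiff(gray, prev)))
            if recent and diff > ratio * (sum(recent) / len(recent)):
                cuts.append(i)                    # abrupt shot change here
            recent = (recent + [diff])[-history:] # keep a short history
        prev = gray
    return cuts
```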

In step S530, a target image is determined from the target video.

In embodiments of the present disclosure, each frame in the target video may be taken as a target image.

In an exemplary embodiment, the video frames of the target video may be sampled at a preset sampling frequency to obtain at least one target image.
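A short sketch of such sampling (assuming OpenCV; the step of 30 frames is an illustrative sampling frequency, not a value from the disclosure):

```python
import cv2

def sample_targets(video_path, step=30):
    """Take every `step`-th frame of the target video as a target image."""
    cap = cv2.VideoCapture(video_path)
    targets, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            targets.append(frame)
        idx += 1
    cap.release()
    return targets
```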

Fig. 6 is a flowchart in an exemplary embodiment based on step S250 of fig. 2.

As shown in fig. 6, step S250 in the above-mentioned embodiment of fig. 2 may further include the following steps. In this embodiment, the machine learning model may further include a fourth neural network model, a fifth neural network model, a sixth neural network model, and a seventh neural network model. FIG. 11 schematically illustrates an architecture diagram of a machine learning model according to an exemplary embodiment of the present disclosure. As shown in fig. 11, the machine learning model may include a fourth neural network model 1110, a fifth neural network model 1120, a sixth neural network model 1130, a seventh neural network model 1140, and a switching module 1150. The network structure adopted by the fourth neural network model 1110 may be an image generation network, such as a super-resolution model or an image style transfer network. The network structures of the fourth neural network model 1110, the fifth neural network model 1120, the sixth neural network model 1130, and the seventh neural network model 1140 may differ or may be identical; as before, their main difference lies in the training data: even with identical structures, training on image sets distinguished by scene yields models adapted to different scenes. The switching module 1150 selects the neural network model corresponding to the target brightness levels for processing. For example, the switching module 1150 is provided with a determining unit, which executes the processes of steps S220 to S240 and/or steps S310 to S350 and/or steps S510 to S530, and may further include a selecting unit to select different neural network models according to the determination result of the determining unit. The machine learning model in this embodiment may be implemented, for example, within an artificial intelligence cloud service framework.

In step S610, if the target brightness level of the target image in the target video includes the first brightness level and the third brightness level, the color enhancement processing is performed on the target image in the target video by using a fourth neural network model 1110, where the fourth neural network model 1110 is obtained by training using a fourth image training set including the first brightness level and the third brightness level.

In the embodiment of the present disclosure, the fourth neural network model 1110 may be selected, for example, according to the switching module 1150, to perform color enhancement processing on the target image.

In the embodiment of the present disclosure, since the target video contains two brightness levels, the first and the third, the target images in the target video are processed by a model capable of handling both levels. Target images of different brightness levels can thus be processed adaptively, so that every video frame in the color-enhanced target video has good color and brightness, avoiding frames that are too dark or overexposed.

In step S620, if the target brightness levels of the target images in the target video are the second brightness level and the third brightness level, the color enhancement processing is performed on the target images in the target video by using a fifth neural network model 1120, wherein the fifth neural network model 1120 is obtained by training with a fifth image training set including the second brightness level and the third brightness level.

In the embodiment of the present disclosure, the fifth neural network model 1120 may be selected, for example, according to the switching module 1150, to perform color enhancement processing on the target image.

In step S630, if the target brightness level of the target image in the target video is the first brightness level and the second brightness level, the color enhancement processing is performed on the target image in the target video by using a sixth neural network model 1130, wherein the sixth neural network model is obtained by training using a sixth image training set including the first brightness level and the second brightness level.

In the disclosed embodiment, a sixth neural network model 1130 may be selected, for example, according to the switching module 1150, to perform color enhancement processing on the target image.

In step S640, if the target brightness levels of the target images in the target video are the first brightness level, the second brightness level, and the third brightness level, the seventh neural network model 1140 is used to perform color enhancement processing on the target images in the target video, where the seventh neural network model 1140 is obtained by training with a seventh image training set including the first brightness level, the second brightness level, and the third brightness level.

In the disclosed embodiment, a seventh neural network model 1140 may be selected, for example, according to the switching module 1150, to perform color enhancement processing on the target image.

In this embodiment, when the video frames of target videos are processed in batches, the target brightness levels of the target images in each target video are first obtained, and the video frames of target videos with different brightness levels are processed in different color enhancement modes, so that the brightness of the target images can be adjusted adaptively and effectively.

Fig. 7 is a flowchart in an exemplary embodiment based on step S520 of fig. 5.

As shown in fig. 7, step S520 in the above-mentioned fig. 5 embodiment may further include the following steps.

In step S710, a shot-change flag in the video to be processed is detected.

In the embodiment of the disclosure, frame-difference images of adjacent frames in the video to be processed may be calculated, a judgment may be made according to the mean values of the frame-difference images, and the shot-change flag is determined according to the judgment result. In an exemplary embodiment, before calculating the frame-difference images, the video may be split into frames, and each three-channel color frame converted into a single-channel grayscale image; the grayscale image is then downsampled, which avoids the excessive computation of running the algorithm on high-resolution images and improves efficiency. The image may further be preprocessed with a filtering algorithm (such as a Gabor filter), which effectively extracts image content and mitigates interference from abrupt changes in light intensity.
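A sketch of this preprocessing chain (the scale factor and Gabor parameters below are assumptions for illustration, not values from the disclosure):

```python
import cv2

def preprocess(frame, scale=0.25):
    """Grayscale conversion, downsampling, and Gabor filtering before the
    frame-difference computation of step S710."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # 3 channels -> 1
    small = cv2.resize(gray, None, fx=scale, fy=scale)    # cut computation
    gabor = cv2.getGaborKernel((9, 9), sigma=2.0, theta=0.0,
                               lambd=8.0, gamma=0.5)      # content extraction
    return cv2.filter2D(small, -1, gabor)
```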

In step S720, the video to be processed is segmented according to the shot-change flag to obtain at least one target video.

In this embodiment, by detecting the shot-change flag and cutting the video to be processed accordingly, the video can be divided at scene cuts to obtain the target videos. When color enhancement is then performed with a target video as the unit, video frames of similar scenes can be enhanced together, improving the suitability of the color enhancement method selected for each target video.

Fig. 8 is a flowchart in an exemplary embodiment based on step S230 of fig. 2.

As shown in fig. 8, step S230 in the above-mentioned embodiment of fig. 2 may further include the following steps.

In step S810, gray level statistics are performed on the gray values of the pixel points in the target image to obtain a gray histogram of the target image.

In the embodiment of the present disclosure, the gray histogram is a function of the gray levels that describes the gray level distribution in an image: for each gray value, it counts the number of pixels in the image having that value, and thus reflects how frequently each gray level occurs in the image.

In step S820, the gray value with the largest number of pixels in the target image is determined according to the peak value of the gray histogram, and is used as the target gray value of the target image.

In the embodiment of the present disclosure, the peak of the gray histogram identifies the gray value that occurs most often in the image, and the gray value corresponding to this peak may be determined as the target gray value.
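A minimal sketch of steps S810 and S820, assuming the target image is an 8-bit single-channel NumPy array:

    import numpy as np

    def target_gray_value(gray_image):
        # 256-bin gray histogram: hist[v] is the number of pixels whose
        # gray value equals v.
        hist = np.bincount(gray_image.ravel(), minlength=256)
        # The histogram peak is the gray value with the most pixels.
        return int(np.argmax(hist))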

In step S830, the gray mean and the gray standard deviation of the target image are determined according to the gray value of each pixel in the target image, the width value and the height value of the target image.

In the embodiment of the present disclosure, the gray level mean and the gray level standard deviation can be obtained through the formulas (1) and (2), for example, and are not described herein again.
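As a minimal sketch of step S830, assuming that formulas (1) and (2) are the usual definitions of the mean and the standard deviation taken over all width × height pixels of the image:

    import numpy as np

    def gray_mean_and_std(gray_image):
        height, width = gray_image.shape
        n = width * height
        mean = float(gray_image.sum()) / n  # assumed form of formula (1)
        std = float(np.sqrt(((gray_image - mean) ** 2).sum() / n))  # assumed form of formula (2)
        return mean, std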

Fig. 9 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.

As shown in fig. 9, the image processing method provided by the present embodiment includes the following steps.

In step S905, a video to be processed is acquired.

In step S910, the shot change flag in the video to be processed is detected and obtained.

In step S915, the video to be processed is segmented according to the shot change flag to obtain at least one target video.

In step S920, a video frame of the target video is sampled according to a preset sampling frequency, so as to obtain at least one target image.

In step S925, the gray value of each pixel in the target image is obtained.

In step S930, the target gray value, the gray mean, and the gray standard deviation of the target image are obtained according to the gray value of each pixel in the target image.

In steps S935 to S955, the target brightness level of the target image is determined according to the target gray value, the gray mean, and the gray standard deviation (a brief sketch of this decision logic follows step S955). Specifically:

in step S935, if the target gray value is greater than 50 and less than 200, it is determined that the target brightness level of the target image is the third brightness level.

In step S940, if the target gray value is greater than 0 and less than or equal to 50, the gray mean is less than 60, and the gray standard deviation is less than 40, it is determined that the target brightness level of the target image is the first brightness level.

In step S945, if the target gray value is greater than 0 and less than or equal to 50, and the gray mean is greater than or equal to 60 or the gray standard deviation is greater than or equal to 40, it is determined that the target brightness level of the target image is the third brightness level.

In step S950, if the target gray value is greater than or equal to 200 and less than or equal to 255, the gray mean is greater than 180, and the gray standard deviation is less than 40, it is determined that the target brightness level of the target image is the second brightness level.

In step S955, if the target gray value is greater than or equal to 200 and less than or equal to 255, and the gray mean is less than or equal to 180 or the gray standard deviation is greater than or equal to 40, it is determined that the target brightness level of the target image is the third brightness level.
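A minimal sketch mirroring the decision logic of steps S935 to S955, with the concrete thresholds given above; the brightness levels are encoded as the integers 1, 2, and 3 for illustration only.

    def brightness_level(target_gray, mean, std):
        if 50 < target_gray < 200:
            return 3
        if 0 < target_gray <= 50:
            return 1 if (mean < 60 and std < 40) else 3
        if 200 <= target_gray <= 255:
            return 2 if (mean > 180 and std < 40) else 3
        # target_gray == 0 is not covered by steps S935-S955;
        # it is treated here, as an assumption, as the third level.
        return 3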

In steps S960 to S990, color enhancement processing is performed on the video frames in the target video according to the target brightness levels of the target images in the target video. The machine learning models in this embodiment may be implemented, for example, on an artificial intelligence cloud service framework. Specifically (a brief sketch of the model-switching logic follows step S990):

in step S960, if the target brightness level of the target image in the target video is the first brightness level, the video frame in the target video is color-enhanced by using the first neural network model.

In step S965, if the target brightness level of the target image in the target video is the second brightness level, the video frame in the target video is color-enhanced by using the second neural network model.

In step S970, if the target brightness level of the target image in the target video is the third brightness level, the color enhancement processing is performed on the video frame in the target video by using the third neural network model.

In step S975, if the target brightness level of the target image in the target video is the first brightness level and the third brightness level, the video frame in the target video is color-enhanced by using the fourth neural network model.

In step S980, if the target brightness level of the target image in the target video is the second brightness level and the third brightness level, the color enhancement processing is performed on the video frame in the target video by using the fifth neural network model.

In step S985, if the target brightness level of the target image in the target video is the first brightness level and the second brightness level, the color enhancement processing is performed on the video frame in the target video by using the sixth neural network model.

In step S990, if the target brightness level of the target image in the target video is the first brightness level, the second brightness level, and the third brightness level, the seventh neural network model is used to perform color enhancement processing on the video frame in the target video.
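A minimal sketch of this switching logic: the set of target brightness levels present in one target video selects one of the seven models. MODELS is a hypothetical mapping whose string values stand in for the trained neural network models of steps S960 to S990.

    MODELS = {
        frozenset({1}): "first model",
        frozenset({2}): "second model",
        frozenset({3}): "third model",
        frozenset({1, 3}): "fourth model",
        frozenset({2, 3}): "fifth model",
        frozenset({1, 2}): "sixth model",
        frozenset({1, 2, 3}): "seventh model",
    }

    def select_model(levels):
        """levels: the brightness levels of the target images sampled
        from one target video."""
        return MODELS[frozenset(levels)]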

In this embodiment, by detecting the shot change flag of the video to be processed and segmenting the video according to it, the video to be processed can be divided at scene cuts to obtain the target videos. When color enhancement is performed on a target video according to the target brightness levels of its target images, video frames with similar scenes are enhanced together, which improves the applicability of the selected color enhancement method during the color enhancement processing. Meanwhile, when the target brightness level of each target image is determined, the three dimensions of the target gray value, the gray standard deviation, and the gray mean of the target image are all considered, so that the obtained target brightness level has high accuracy. Since only the target gray value, the gray mean, and the gray standard deviation need to be calculated, the computational complexity can be greatly reduced while the color enhancement quality is ensured, improving the operation efficiency.

The following describes embodiments of the apparatus of the present disclosure, which may be used to perform the above-mentioned image processing method of the present disclosure. For details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the image processing method described above in the present disclosure.

Fig. 12 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.

Referring to fig. 12, an image processing apparatus 1200 according to an embodiment of the present disclosure may include: image acquisition module 1210, pixel grayscale module 1220, grayscale statistics module 1230, brightness level module 1240, and color enhancement module 1250.

The image acquisition module 1210 may be configured to acquire a target image.

The pixel grayscale module 1220 can be configured to obtain the grayscale values of the pixels in the target image.

The gray level statistics module 1230 may be configured to obtain a target gray level value, a gray level mean value, and a gray level standard deviation with the largest number of pixel points in the target image according to the gray level value of each pixel point in the target image.

The brightness level module 1240 may be configured to determine a target brightness level for the target image based on the target gray value, the gray mean, and the gray standard deviation.

Color enhancement module 1250 may be configured to color enhance a target image according to a target brightness level.

The image processing device provided by the embodiment of the present disclosure can characterize the brightness of the target image comprehensively, from different dimensions, through the target gray value, the gray mean, and the gray standard deviation obtained from the gray values of the pixel points in the target image, so that the obtained target brightness level has high accuracy. Further, when color enhancement processing is performed on the target image according to the target brightness level, the brightness of the target image can be adjusted in a targeted manner, avoiding images that are too dark or overexposed. Meanwhile, since only the target gray value, the gray mean, and the gray standard deviation need to be calculated, the computational complexity can be greatly reduced while the color enhancement quality is ensured, improving the operation efficiency.

In an exemplary embodiment, the brightness level module 1240 may include a first brightness level unit, which may be configured to determine the target brightness level of the target image as the first brightness level if the target gray scale value is less than or equal to the first gray scale threshold value, the gray scale mean value is less than the first mean value threshold value, and the gray scale standard deviation is less than the first standard deviation threshold value.

In an exemplary embodiment, the brightness level module 1240 may further include a second brightness level unit, and may be configured to determine that the target brightness level of the target image is the second brightness level if the target gray value is greater than or equal to the second gray threshold, the gray mean value is greater than the second mean threshold, and the gray standard deviation is less than the first standard deviation threshold; the second gray threshold is larger than the first gray threshold, and the second average threshold is larger than the first average threshold.

In an exemplary embodiment, the brightness level module 1240 may further include a third brightness level unit, which may be configured to determine that the target brightness level of the target image is the third brightness level if the target gray-level value is less than or equal to the first gray-level threshold, the gray-level mean value is greater than or equal to the first mean value threshold, or the gray-level standard deviation is greater than or equal to the first standard deviation threshold; or if the target gray value is greater than or equal to the second gray threshold, the gray mean value is less than or equal to the second mean threshold, or the gray standard deviation is greater than or equal to the first standard deviation threshold, determining that the target brightness level of the target image is a third brightness level; or if the target gray value is greater than the first gray threshold and less than the second gray threshold, determining that the target brightness level of the target image is the third brightness level.

In an exemplary embodiment, the color enhancement module 1250 may include a first enhancement unit, a second enhancement unit, and a third enhancement unit. The first enhancement unit may be configured to perform color enhancement processing on the target image by using a first neural network model if the target brightness level is a first brightness level, where the first neural network model is obtained by training using a first image training set including the first brightness level. The second enhancement unit may be configured to perform color enhancement processing on the target image using a second neural network model if the target brightness level is a second brightness level, wherein the second neural network model is obtained by training using a second image training set including the second brightness level. The third enhancement unit may be configured to perform color enhancement processing on the target image using a third neural network model if the target brightness level is a third brightness level, where the third neural network model is obtained by training using a third image training set including the third brightness level.

In an exemplary embodiment, the image acquisition module 1210 may include a video acquisition unit, a target video unit, and a target image unit. Wherein the video acquisition unit is configurable to acquire a video to be processed. The target video unit may be configured to obtain a target video from the video to be processed; the target image unit may be configured to determine a target image from the target video.

In an exemplary embodiment, the color enhancement module 1250 may include a fourth enhancement unit, a fifth enhancement unit, a sixth enhancement unit, and a seventh enhancement unit. The fourth enhancement unit may be configured to perform color enhancement processing on the video frames in the target video by using a fourth neural network model if the target brightness levels of the target images in the target video include the first brightness level and the third brightness level, where the fourth neural network model is obtained by training using a fourth image training set including the first brightness level and the third brightness level. The fifth enhancement unit may be configured to perform color enhancement processing on the video frames in the target video by using a fifth neural network model if the target brightness levels are the second brightness level and the third brightness level, where the fifth neural network model is obtained by training using a fifth image training set including the second brightness level and the third brightness level. The sixth enhancement unit may be configured to perform color enhancement processing on the video frames in the target video using a sixth neural network model if the target brightness levels are the first brightness level and the second brightness level, where the sixth neural network model is obtained by training using a sixth image training set including the first brightness level and the second brightness level. The seventh enhancement unit may be configured to perform color enhancement processing on the video frames in the target video using a seventh neural network model if the target brightness levels are the first brightness level, the second brightness level, and the third brightness level, where the seventh neural network model is obtained by training using a seventh image training set including the first brightness level, the second brightness level, and the third brightness level.

In an exemplary embodiment, the target video unit may include a shot detection subunit and a target video subunit. The shot detection subunit may be configured to detect and obtain the shot change flag in the video to be processed. The target video subunit may be configured to segment the video to be processed according to the shot change flag to obtain at least one target video.

In an exemplary embodiment, the target image unit may be configured to sample video frames of the target video at a preset sampling frequency to obtain at least one target image.

In an exemplary embodiment, the gray statistics module 1230 may include a histogram unit, a first statistics unit, and a second statistics unit. The histogram unit can be configured to perform gray level statistics on the gray level of each pixel point in the target image to obtain a gray level histogram of the target image. The first statistic unit can be configured to determine a gray value with the largest number of pixel points in the target image according to the peak value of the gray histogram, and the gray value is used as the target gray value of the target image. The second statistical unit can be configured to determine a gray mean value and a gray standard deviation of the target image according to the gray value of each pixel point in the target image, the width value and the height value of the target image.

FIG. 13 shows a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. It should be noted that the electronic device 1300 shown in fig. 13 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.

As shown in fig. 13, the electronic device 1300 includes a Central Processing Unit (CPU) 1301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1302 or a program loaded from a storage portion 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data necessary for system operation are also stored. The CPU 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.

The following components are connected to the I/O interface 1305: an input portion 1306 including a keyboard, a mouse, and the like; an output portion 1307 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage portion 1308 including a hard disk and the like; and a communication section 1309 including a network interface card such as a LAN card or a modem. The communication section 1309 performs communication processing via a network such as the Internet. A drive 1310 is also connected to the I/O interface 1305 as needed. A removable medium 1311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1310 as needed, so that a computer program read out therefrom is installed into the storage portion 1308 as needed.

In particular, the processes described above with reference to the flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication section 1309 and/or installed from the removable medium 1311. When executed by the Central Processing Unit (CPU) 1301, the computer program performs the various functions defined in the system of the present application.

It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having at least one wire, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises at least one executable instruction for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The modules and/or units and/or sub-units described in the embodiments of the present disclosure may be implemented by software or by hardware, and the described modules and/or units and/or sub-units may also be disposed in a processor. The names of these modules and/or units and/or sub-units do not, in some cases, constitute a limitation on the modules and/or units and/or sub-units themselves.

As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments. For example, the electronic device may implement the steps shown in fig. 2, 3, 4, 5, 6, 7, 8, or 9.

It should be noted that although in the above detailed description several modules or units or sub-units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units or sub-units described above may be embodied in one module or unit or sub-unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units or sub-units.

Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present disclosure.

Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
