Image processing method, device and equipment
1. An image processing method comprising:
acquiring a first image containing a target object, the first image being acquired in a first illumination state;
acquiring a second image containing the target object, the second image being acquired in a second illumination state;
determining a mask image for the target object from the first image and the second image; and
synthesizing the mask image with the first image to obtain a synthesized image for the target object.
2. The method of claim 1, wherein the first image comprises a foreground image of the target object and the second image comprises a background image of the target object.
3. The method of claim 2, wherein the first illumination state comprises illuminating the target object from a front side of the target object, and the second illumination state comprises illuminating a background of the target object while making the image of the target object darker.
4. The method of claim 1, wherein determining the mask image from the first image and the second image comprises:
extracting a first predetermined region in the first image;
extracting a second predetermined region in the second image; and
synthesizing the first predetermined region and the second predetermined region to obtain the mask image.
5. The method of claim 1, wherein determining the mask image from the first image and the second image comprises:
extracting a first predetermined region in the first image;
extracting a second predetermined region in the second image;
synthesizing the first predetermined region and the second predetermined region to obtain a first mask image;
performing binarization processing on the first mask image;
determining a region of interest (ROI) corresponding to the target object in the binarized first mask image; and
intersecting the first mask image with the ROI to obtain the mask image.
6. The method of claim 4 or 5, wherein the first predetermined region comprises a region corresponding to a shadow cast by the target object, and the second predetermined region comprises a region corresponding to the target object.
7. An image processing apparatus comprising:
a first acquisition module for acquiring a first image containing a target object, the first image being acquired under a first illumination state;
a second acquisition module for acquiring a second image containing the target object, the second image being acquired under a second illumination state;
a determination module for determining a mask image for the target object from the first image and the second image; and
a synthesis module for synthesizing the mask image with the first image to obtain a synthesized image for the target object.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
9. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
11. An image processing apparatus comprising:
a plurality of illumination devices for illuminating a target object from a plurality of directions, respectively;
an image acquisition device for acquiring an image of the target object; and
an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the at least one processor is configured to:
controlling at least one first illumination device of the plurality of illumination devices to be lit for a first period of time to achieve a first illumination state,
controlling at least one second illumination device of the plurality of illumination devices to be lit for a second period of time to achieve a second illumination state,
controlling the image acquisition device to acquire a first image of the target object in the first illumination state;
controlling the image acquisition device to acquire a second image of the target object in the second illumination state;
determining a mask image corresponding to the target object according to the first image and the second image; and
synthesizing the mask image with the first image to obtain a synthesized image for the target object.
12. The apparatus of claim 11, wherein the plurality of illumination devices comprises four illumination devices located above, below, to the left of, and to the right of the target object.
13. The apparatus of claim 12, wherein the plurality of illumination devices further comprises four illumination devices positioned at the rear, front left, front right, and front top of the target object.
14. The apparatus of claim 11, further comprising:
a turntable for placing the target object and rotating under control of the at least one processor.
15. The apparatus of claim 14, wherein the turntable comprises a translucent support plate.
Background
In the field of image processing, when a target object needs to be extracted from an image (for example, when a matting process is performed on the image to obtain a region containing only the target object), the depth information of the target object in the image often cannot be accurately acquired, which degrades the overall matting effect.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an image processing device, an electronic device, a non-transitory computer-readable storage medium storing computer instructions, and a computer program product.
According to a first aspect of the present disclosure, there is provided an image processing method including: acquiring a first image containing a target object, the first image being acquired in a first illumination state; acquiring a second image containing the target object, the second image being acquired in a second illumination state; determining a mask image for the target object from the first image and the second image; and synthesizing the mask image with the first image to obtain a synthesized image for the target object.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising: a first acquisition module for acquiring a first image containing a target object, the first image being acquired under a first illumination state; a second acquisition module for acquiring a second image containing the target object, the second image being acquired under a second illumination state; a determination module for determining a mask image for the target object from the first image and the second image; and a synthesis module for synthesizing the mask image with the first image to obtain a synthesized image for the target object.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above method.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the above method.
According to a fifth aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described method.
According to a sixth aspect of the present disclosure, there is provided an image processing apparatus comprising: a plurality of illumination devices for illuminating a target object from a plurality of directions, respectively; an image acquisition device for acquiring an image of the target object; and an electronic device comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the at least one processor is configured to: control at least one first illumination device of the plurality of illumination devices to be lit for a first period of time to achieve a first illumination state; control at least one second illumination device of the plurality of illumination devices to be lit for a second period of time to achieve a second illumination state; control the image acquisition device to acquire a first image of the target object in the first illumination state; control the image acquisition device to acquire a second image of the target object in the second illumination state; determine a mask image corresponding to the target object according to the first image and the second image; and synthesize the mask image with the first image to obtain a synthesized image for the target object.
According to embodiments of the present disclosure, by integrally controlling the operation of the illumination system and the image acquisition device, a good composite image for the target object, that is, a matting result for the target object, can be obtained.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
FIG. 1 shows a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a method of determining a mask image according to an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a method of determining a mask image according to another embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a first image containing a target object acquired in a first illumination state, in accordance with an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a second image containing a target object acquired in a second illumination state, in accordance with an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of an initial mask image for a target object, in accordance with an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a final mask image for a target object, according to an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a composite image for a target object, according to an embodiment of the present disclosure;
FIG. 10 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
FIG. 11 illustrates a block diagram of an example electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
FIG. 1 shows a schematic diagram of an image processing apparatus 100 according to an embodiment of the present disclosure.
As shown in fig. 1, the image processing apparatus 100 includes an illumination system 120, an image capture device 130, and an electronic device 140.
The illumination system 120 is used to illuminate the target object 170 from a plurality of viewing angles and may include a plurality of lighting devices 121 and 122 arranged in a plurality of directions. Although fig. 1 illustrates the lighting system 120 as including two lighting devices, embodiments of the present disclosure are not limited thereto, and the lighting system 120 may include more lighting devices. For example, the lighting system 120 may include four lighting devices located above, below, to the left of, and to the right of the target object. As another example, the lighting system 120 may include eight lighting devices located above, below, left, right, rear, front left, front right, and front top of the target object.
The image capture device 130 is used to capture an image of the target object 170. For example, the image capture device 130 may include a camera for photographing the target object. Further, in one embodiment, the image capture device 130 may be supported by a support device 150 so that images of the target object can be flexibly captured from a plurality of angles. The support device 150 may comprise, for example, a tripod.
The electronic device 140 may be connected to the illumination system 120 and the image capture device 130 in a wired or wireless manner to control the illumination system 120 and the image capture device 130. In some embodiments, the electronic device 140 may include at least one of a smartphone, a tablet PC, a desktop PC, a notebook PC, or a wearable device. The wearable device may include at least one of an accessory-type device (e.g., a watch, ring, bracelet, anklet, necklace, glasses, contact lens, or head-mounted device (HMD)), a device integrated with fabric or clothing (e.g., an electronic garment), a body-attached device (e.g., a skin pad or tattoo), or an implantable circuit.
The electronic device 140 controls at least one first lighting device in the lighting system 120 to be lit for a first period of time to achieve a first illumination state. For example, the first illumination state may include a state in which the target object 170 is illuminated from its front side. In this case, the electronic device 140 may control the lighting devices positioned at the left, right, front left, front right, and front top of the target object 170 to be lit, thereby illuminating the target object 170 from the front. According to one embodiment, if the shadow of the target object 170 needs to be enhanced, the brightness of the lighting device located below the target object 170 may also be turned down, and conversely, turned up. According to another embodiment, if the contour of the target object 170 needs to be enhanced, the brightness of the lighting device located behind the target object 170 may also be turned up, and conversely, turned down.
The electronic device 140 controls at least one second lighting device in the lighting system 120 to be lit for a second period of time to achieve a second illumination state. For example, the second illumination state may include a state in which the background of the target object 170 is illuminated and the image of the target object is made darker. In this case, the electronic device 140 may control the lighting devices located behind and below the target object 170 to be turned on, and turn off the lighting devices located in the other directions, thereby illuminating the background of the target object 170 while making the image of the target object darker.
The electronic device 140 controls the image capture device 130 to capture a first image of the target object 170 in the first illumination state. The first image may include a foreground image of the target object 170. The electronic device 140 then controls the image capture device 130 to capture a second image of the target object 170 in the second illumination state. The second image may include a background image of the target object 170.
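By way of non-limiting illustration, this capture sequence may be sketched in Python as follows. The `lights` and `camera` objects and their `turn_on`/`turn_off`/`capture` methods are hypothetical stand-ins; the actual hardware interface of the illumination system 120 and the image capture device 130 is not specified by the present disclosure.

```python
# Hypothetical light groups; names mirror the directions described above.
FRONT_LIGHTS = ["left", "right", "front_left", "front_right", "front_top"]
BACK_LIGHTS = ["rear", "below"]

def capture_image_pair(lights, camera):
    """Capture the foreground (first) and background (second) images."""
    # First illumination state: illuminate the target object from the front.
    lights.turn_on(FRONT_LIGHTS)
    first_image = camera.capture()
    lights.turn_off(FRONT_LIGHTS)

    # Second illumination state: illuminate the background so that the
    # target object appears as a dark silhouette.
    lights.turn_on(BACK_LIGHTS)
    second_image = camera.capture()
    lights.turn_off(BACK_LIGHTS)

    return first_image, second_image
```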
The electronic device 140 determines a mask image corresponding to the target object 170 from the first and second images captured by the image capture device 130. Here, the mask image is used to extract a specific region in an image, for example, a region including only the target object, thereby achieving a matting effect for the target object. Then, the electronic device 140 obtains a composite image for the target object 170 by compositing the mask image with the first image captured by the image capture device 130. For example, the electronic device 140 may obtain a matting result for the target object 170, i.e., an image that includes only the target object 170.
According to one embodiment, the image processing apparatus 100 may further include a turntable 160. The turntable 160 is used to place the target object 170 and is rotated under the control of the electronic device 140, thereby enabling the image capture device 130 to photograph the target object 170 at an arbitrary angle. Further, the turntable 160 may include a translucent support plate so that light emitted from the lighting device located below the target object 170 can pass through the plate and be projected onto the target object 170.
According to one embodiment, the image processing apparatus 100 may further include a cabinet 110 housing the illumination system 120 and the turntable 160.
According to embodiments of the present disclosure, by integrally controlling the operation of the lighting system and the image capture device, the foreground image and the background image of the target object can be captured under appropriate lighting states and angles, so that a good composite image for the target object (i.e., a matting result for the target object) is obtained from the foreground image and the background image. In addition, because the operation of the lighting system and the image capture device is integrally controlled, a better matting effect can be achieved even for hollowed-out target objects.
FIG. 2 shows a flowchart of an image processing method 200 according to an embodiment of the present disclosure.
As shown in fig. 2, in step S210, a first image containing a target object is acquired, the first image being acquired in a first illumination state. According to one embodiment, the first image may comprise a foreground image of the target object. In this case, the first illumination state may include a state in which the target object is illuminated from its front side. For example, the lighting devices positioned at the left, right, front left, front right, and front top of the target object may be controlled to be lit, thereby illuminating the target object from the front. Furthermore, if the shadow of the target object needs to be enhanced, the brightness of the lighting device located below the target object may also be turned down, and conversely, turned up. If the contour of the target object needs to be enhanced, the brightness of the lighting device located behind the target object may also be turned up, and conversely, turned down.
In step S220, a second image containing the target object is acquired, the second image being acquired in a second illumination state. According to one embodiment, the second image may comprise a background image of the target object. In this case, the second illumination state may include a state in which the background of the target object is illuminated and the image of the target object is made darker. For example, the lighting devices located behind and below the target object may be controlled to be turned on, and the lighting devices located in the other directions may be turned off, thereby illuminating the background of the target object while making the image of the target object darker.
In step S230, a mask image for the target object is determined from the first image and the second image. Here, the mask image is used to extract a specific region in the image, for example, a region including only the target object, thereby achieving a matting effect for the target object.
In step S240, the mask image is synthesized with the first image to obtain a synthesized image for the target object. For example, a final matting result for the target object can be obtained by placing the composite image in a solid-color image, e.g., at its center. In one embodiment, the color of the solid-color image may be specified by the user, and the solid-color image has the same size as the first image or the second image.
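As a minimal sketch of step S240, assuming the first image is an H x W x 3 BGR array and the mask is an H x W grayscale image used as a per-pixel alpha, the compositing may look as follows; the white background color is an illustrative choice, not a requirement of the method.

```python
import numpy as np

def composite_on_solid(first_image, mask, bg_color=(255, 255, 255)):
    """Composite the first (foreground) image onto a solid-color canvas
    of the same size, using the grayscale mask as a per-pixel alpha."""
    alpha = mask.astype(np.float32)[..., None] / 255.0
    canvas = np.zeros_like(first_image)
    canvas[:] = bg_color  # solid-color image, same size as the first image
    out = first_image.astype(np.float32) * alpha + canvas * (1.0 - alpha)
    return out.astype(np.uint8)
```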
FIG. 3 shows a flowchart of a method of determining a mask image according to an embodiment of the present disclosure.
In step S331, a first predetermined region is extracted from the first image, which contains the target object and is acquired in the first illumination state. For example, the first predetermined region may be extracted from the first image by image processing such as tone-scale (levels) adjustment. In addition, noise-reduction smoothing may be performed on the extracted first predetermined region. The first predetermined region may include a region corresponding to a shadow cast by the target object; for example, it may be a region including only the shadow cast by the target object.
In step S332, a second predetermined region is extracted from the second image, which contains the target object and is acquired in the second illumination state. For example, the second predetermined region may be extracted from the second image by image processing such as tone-scale adjustment. In addition, noise-reduction smoothing may be performed on the extracted second predetermined region. The second predetermined region may include a region corresponding to the target object; for example, it may be a region including only the target object.
In step S333, the first predetermined region and the second predetermined region are synthesized to obtain a mask image for the target object. For example, the mask image may be obtained by fusing the first predetermined region and the second predetermined region. The mask image may be a grayscale image after tone-scale adjustment, with pixel values ranging from 0 to 255.
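One possible realization of steps S331 through S333 with OpenCV is sketched below, continuing the naming of the earlier sketches (`first_image`, `second_image` are the captured BGR images). The level endpoints (30 and 120), the blur kernel, and the use of a per-pixel maximum for fusion are illustrative assumptions rather than requirements of the method.

```python
import cv2
import numpy as np

def tone_scale(gray, low, high):
    """Simple tone-scale (levels) adjustment: map [low, high] to [0, 255]."""
    g = gray.astype(np.float32)
    return np.clip((g - low) / max(high - low, 1) * 255.0, 0, 255).astype(np.uint8)

gray1 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)

# First predetermined region: the shadow cast by the target object in the
# front-lit first image (dark pixels are mapped to bright mask values).
shadow = tone_scale(255 - gray1, 30, 120)
# Second predetermined region: the dark silhouette of the target object
# in the back-lit second image.
silhouette = tone_scale(255 - gray2, 30, 120)

# Noise-reduction smoothing of each extracted region.
shadow = cv2.GaussianBlur(shadow, (5, 5), 0)
silhouette = cv2.GaussianBlur(silhouette, (5, 5), 0)

# Fuse the two regions into the (first) mask image.
mask = cv2.max(shadow, silhouette)
```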
According to an embodiment of the present disclosure, by performing matting processing on the first image using the obtained mask image, an image including only the target object or substantially only the target object can be obtained.
FIG. 4 shows a flowchart of a method of determining a mask image according to another embodiment of the present disclosure.
Steps S431 and S432 shown in fig. 4 are the same as steps S331 and S332 in fig. 3, and a repetitive description thereof will be omitted for the sake of brevity.
In step S433, the first predetermined region and the second predetermined region are synthesized to obtain a first mask image for the target object. For example, the first mask image may be obtained by fusing the first predetermined region and the second predetermined region. In one example, the first mask image may be a grayscale image after tone-scale adjustment, with pixel values ranging from 0 to 255. Those skilled in the art will appreciate that, depending on system settings, the pixel values of the grayscale image may also span a different range.
In step S434, binarization processing is performed on the first mask image. Binarizing the first mask image means changing every pixel value in the grayscale first mask image that is larger than a specified threshold to 255 and every pixel value equal to or smaller than the threshold to 0, so that only the two pixel values 0 and 255 remain in the first mask image.
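A one-line OpenCV sketch of step S434 follows, where `mask` is the fused first mask image from the previous sketch; the threshold value 128 is illustrative and may be chosen differently (or computed automatically, e.g., with Otsu's method).

```python
import cv2

# Pixels above the threshold become 255, all others become 0.
_, binary_mask = cv2.threshold(mask, 128, 255, cv2.THRESH_BINARY)
```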
In step S435, a region of interest (ROI) corresponding to the target object is determined in the binarized first mask image. According to an embodiment, determining the ROI corresponding to the target object may comprise: performing an opening operation (i.e., erosion followed by dilation) on the first mask image and computing the connected components in the first mask image. Here, a connected component refers to a region in an image composed of adjacent pixels having the same pixel value. Then, image regions that are likely to belong to the target object are determined in the first mask image according to a certain rule. For example, since the target object is generally centrally located or occupies a large proportion of the image, such regions may be determined by filtering out regions that are far away from the center of the image and/or have a very small area. Next, the minimum bounding rectangle surrounding all the remaining connected components is determined as the ROI corresponding to the target object. According to another embodiment, the minimum bounding rectangle may be appropriately expanded (e.g., by a factor of 1.5), and the expanded rectangle taken as the ROI corresponding to the target object.
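The following sketch shows one way to realize step S435 with OpenCV; the kernel size, the minimum-area filter, the center-distance criterion, and the 1.5 expansion factor are all illustrative choices.

```python
import cv2
import numpy as np

def determine_roi(binary_mask, min_area=500, expand=1.5):
    """Return (x1, y1, x2, y2): the expanded minimum bounding rectangle of
    the plausible connected components in the binarized mask."""
    h, w = binary_mask.shape
    # Opening (erosion followed by dilation) removes small speckles.
    kernel = np.ones((5, 5), np.uint8)
    opened = cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, kernel)

    # Connected components with per-component statistics and centroids.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(opened)
    keep = []
    for i in range(1, n):  # label 0 is the background
        cx, cy = centroids[i]
        # Keep components that are large enough and not far from the center.
        if (stats[i, cv2.CC_STAT_AREA] >= min_area
                and abs(cx - w / 2) < 0.4 * w and abs(cy - h / 2) < 0.4 * h):
            keep.append(i)
    if not keep:  # fall back to the whole image
        return 0, 0, w, h

    # Minimum bounding rectangle around all kept components.
    x1 = min(stats[i, cv2.CC_STAT_LEFT] for i in keep)
    y1 = min(stats[i, cv2.CC_STAT_TOP] for i in keep)
    x2 = max(stats[i, cv2.CC_STAT_LEFT] + stats[i, cv2.CC_STAT_WIDTH] for i in keep)
    y2 = max(stats[i, cv2.CC_STAT_TOP] + stats[i, cv2.CC_STAT_HEIGHT] for i in keep)

    # Expand the rectangle about its center, e.g., by a factor of 1.5.
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    hw, hh = (x2 - x1) / 2 * expand, (y2 - y1) / 2 * expand
    return (max(int(cx - hw), 0), max(int(cy - hh), 0),
            min(int(cx + hw), w), min(int(cy + hh), h))
```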
In step S436, the first mask image is intersected with the ROI determined in step S435 to obtain a final mask image for the target object. According to an embodiment, the connected-component calculation may also be performed again on the ROI determined in step S435 to determine, more accurately, the final ROI region that is likely to contain the target object, by filtering out regions that are far from the center of the ROI and/or have a very small area. The first mask image may then be intersected with the final ROI region to obtain the final mask image for the target object.
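Continuing the sketches above, the intersection of step S436 can be taken by rasterizing the ROI rectangle into a mask of the same size and keeping the per-pixel minimum; `determine_roi` is the illustrative helper defined earlier, and `mask` is the first mask image.

```python
import numpy as np

# Keep only the part of the first mask image that lies inside the ROI.
x1, y1, x2, y2 = determine_roi(binary_mask)
roi_mask = np.zeros_like(mask)
roi_mask[y1:y2, x1:x2] = 255
final_mask = np.minimum(mask, roi_mask)  # intersection of mask and ROI
```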
FIG. 5 shows a schematic diagram of a first image containing a target object acquired in a first illumination state according to an embodiment of the present disclosure. For example, the first image may comprise a foreground image of the target object. In this case, the first illumination state may include a state in which the target object is illuminated from its front side. For example, the lighting devices positioned at the left, right, front left, front right, and front top of the target object may be controlled to be lit, thereby illuminating the target object from the front.
FIG. 6 shows a schematic diagram of a second image containing a target object acquired in a second illumination state according to an embodiment of the present disclosure. For example, the second image may include a background image of the target object. In this case, the second illumination state may include a state in which the background of the target object is illuminated and the image of the target object is made darker. For example, the lighting devices located behind and below the target object may be controlled to be turned on, and the lighting devices located in the other directions may be turned off, thereby illuminating the background of the target object while making the image of the target object darker.
FIG. 7 shows a schematic diagram of an initial mask image for a target object, according to an embodiment of the present disclosure. The initial mask image shown in fig. 7 may be the mask image obtained at step S333 of fig. 3 or the first mask image obtained at step S433 of fig. 4. Since the edge portion in the initial mask image may include background elements other than the target object, the initial mask image may be further processed to obtain a final mask image to further filter out the background elements other than the target object.
FIG. 8 shows a schematic diagram of a final mask image for a target object, according to an embodiment of the present disclosure. The final mask image shown in fig. 8 may be the final mask image obtained at step S436 of fig. 4. For example, a connected-component calculation may be performed on the ROI in the initial mask image to determine, more accurately, the final ROI region that is likely to contain the target object, by filtering out regions that are far from the center of the ROI and/or have a very small area. The initial mask image may then be intersected with the final ROI region to obtain the final mask image for the target object.
FIG. 9 shows a schematic diagram of a composite image for a target object, according to an embodiment of the present disclosure. As shown in fig. 9, the final matting result for the target object is obtained by placing the composite image, which includes only the target object, at the center of the solid-color image.
FIG. 10 shows a block diagram of an image processing apparatus 1000 according to an embodiment of the present disclosure.
As shown in fig. 10, the image processing apparatus 1000 includes a first acquisition module 1010, a second acquisition module 1020, a determination module 1030, and a synthesis module 1040.
The first acquisition module 1010 is configured to acquire a first image containing a target object, the first image being acquired in a first illumination state. According to one embodiment, the first image may comprise a foreground image of the target object. In this case, the first illumination state may include a state in which the target object is illuminated from its front side. For example, the lighting devices positioned at the left, right, front left, front right, and front top of the target object may be controlled to be lit, thereby illuminating the target object from the front. Furthermore, if the shadow of the target object needs to be enhanced, the brightness of the lighting device located below the target object may also be turned down, and conversely, turned up. If the contour of the target object needs to be enhanced, the brightness of the lighting device located behind the target object may also be turned up, and conversely, turned down.
The second acquisition module 1020 is configured to acquire a second image containing the target object, the second image being acquired in a second illumination state. According to one embodiment, the second image may comprise a background image of the target object. In this case, the second illumination state may include a state in which the background of the target object is illuminated and the image of the target object is made darker. For example, the lighting devices located behind and below the target object may be controlled to be turned on, and the lighting devices located in the other directions may be turned off, thereby illuminating the background of the target object while making the image of the target object darker.
The determination module 1030 is configured to determine a mask image for the target object from the first image and the second image. Here, the mask image is used to extract a specific region in the image, for example, a region including only the target object, thereby achieving a matting effect for the target object.
The synthesis module 1040 is configured to synthesize the mask image with the first image to obtain a synthesized image for the target object. For example, a final matting result for the target object can be obtained by placing the composite image at the center of a solid-color image.
In some embodiments, the determining module 1030 may include a first sub-module, a second sub-module, and a third sub-module. The first sub-module is used for extracting a first predetermined region in the first image. The second sub-module is for extracting a second predetermined region in the second image. The third sub-module is used for synthesizing the first predetermined area and the second predetermined area to obtain a mask image for the target object.
According to another embodiment, the third sub-module may synthesize the first and second predetermined regions to obtain an initial mask image for the target object. In this case, the determining module 1030 may further include a fourth sub-module, a fifth sub-module, and a sixth sub-module. The fourth sub-module is used for performing binarization processing on the initial mask image. The fifth sub-module is used for determining a region of interest (ROI) corresponding to the target object in the binarized initial mask image. The sixth sub-module is used for intersecting the initial mask image with the ROI to obtain a mask image for the target object.
In the technical solution of the present disclosure, the acquisition, storage, and use of any personal information of users involved comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 11 illustrates a block diagram of an example electronic device 1100 that can be used to implement embodiments of the present disclosure. Electronic device 1100 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the device 1100 comprises a computing unit 1101, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. The RAM 1103 may also store various programs and data necessary for the operation of the device 1100. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to one another by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in the device 1100 are connected to the I/O interface 1105, including: an input unit 1106 such as a keyboard, a mouse, and the like; an output unit 1107 such as various types of displays, speakers, and the like; a storage unit 1108 such as a magnetic disk, an optical disk, and the like; and a communication unit 1109 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 1101 can be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the respective methods and processes described above, such as the image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.