Method, computing device and storage medium for image stitching
1. A method for image stitching, comprising:
acquiring two images to be stitched;
generating two feature representations based on the two images via a trained feature extraction network;
generating, via a trained decoding network, a first confidence indicating whether the two images can be stitched and a first relative position between the two images based on the two feature representations; and
if it is determined that the two images can be stitched based on the first confidence, stitching the two images based on the first relative position.
2. The method of claim 1, wherein generating the first confidence indicating whether the two images can be stitched and the first relative position between the two images comprises:
adding the two feature representations by channel to generate an added feature representation;
generating an intermediate feature representation via a plurality of convolutional layers and downsampling layers based on the added feature representation;
generating, via a first branch neural network, the first confidence indicating whether the two images can be stitched based on the intermediate feature representation; and
generating the first relative position between the two images via a second branch neural network based on the intermediate feature representation.
3. The method of claim 2, further comprising:
acquiring a plurality of stitched images;
for each image of the plurality of images, determining from the image a set of sub-image pairs and a set of relative positions associated with the set of sub-image pairs, there being an overlap between the sub-images of each pair in the set of sub-image pairs; and
training the feature extraction network and the decoding network with the plurality of sets of sub-image pairs as samples and the plurality of sets of relative positions and a predetermined confidence as labels to generate a trained feature extraction network and a trained decoding network.
4. The method of claim 3, wherein a first sub-image in each sub-image pair in the set of sub-image pairs is below a second sub-image, and the first sub-image overlaps the image by 70%-90% in the lateral direction and by at least 10% in the longitudinal direction.
5. The method of claim 4, wherein the first sub-image and the second sub-image overlap by at least 80% in the lateral direction and by 10%-80% in the longitudinal direction.
6. The method of claim 3, wherein training the feature extraction network and the decoding network comprises:
for each of the set of sub-image pairs, performing the steps of:
generating, via the feature extraction network and the decoding network, a second confidence indicating whether the sub-images of the pair can be stitched and a second relative position between the sub-images of the pair based on the sub-image pair;
generating a first error between the predetermined confidence and the second confidence based on a first predetermined loss function;
back-propagating the first branch neural network based on the first error to update the first branch neural network and generate a first intermediate error;
generating a second error between the relative position associated with the pair of sub-images and the second relative position based on a second predetermined loss function;
back-propagating the second branch neural network based on the second error to update the second branch neural network and generate a second intermediate error;
combining the first intermediate error and the second intermediate error to generate a third intermediate error; and
back-propagating the plurality of convolutional and downsampling layers and the feature extraction network based on the third intermediate error to update the plurality of convolutional and downsampling layers and the feature extraction network.
7. The method of claim 1, wherein stitching the two images comprises:
acquiring, for each of the two images, an overlapping portion of the image that overlaps the other of the two images based on the first relative position;
generating a pixel difference map based on the two overlapping portions;
determining a stitching location between the two images based on the pixel difference map; and
stitching the two images based on the stitching location.
8. The method of claim 7, wherein determining the stitching location between the two images comprises:
for each of a plurality of rows included in the pixel difference map, generating a first value for the row based on a plurality of pixel difference values located at the row;
for each of the plurality of rows included in the pixel difference map, generating a second value for the row based on the first value for the row and the first values for a predetermined number of rows above and below the row; and
determining, from among the plurality of rows included in the pixel difference map, a row of which the second value is smallest as the stitching location between the two images.
9. The method of claim 7, wherein stitching the two images comprises:
determining a fusion region between the two images based on the stitching location and a predetermined fusion width;
for each row within the fusion region, performing the following steps:
determining a first transparency for a first image of the two images and a second transparency for a second image of the two images based on a position of the row in the fusion region;
processing a sequence of pixel values in the first image corresponding to the row based on the first transparency to generate a first sequence of pixel values;
processing a sequence of pixel values in the second image corresponding to the row based on the second transparency to generate a second sequence of pixel values; and
generating a sequence of pixel values in the row based on the first sequence of pixel values and the second sequence of pixel values.
10. The method of any one of claims 1-9, wherein each of the two images is an X-ray image.
11. A computing device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
Background
Due to the limited resolution and field of view of X-ray equipment, the entire spine or lower limbs of a patient cannot be imaged in a single exposure. The traditional scheme for producing a full-length X-ray film therefore relies mainly on manual stitching. However, manual stitching is time-consuming and labor-intensive: an experienced doctor needs to use professional software, the stitching process is slow, and the quality of the result is closely tied to the doctor's skill.
Disclosure of Invention
A method, a computing device and a computer storage medium for image stitching are provided, which can improve the efficiency of image stitching.
According to a first aspect of the present disclosure, a method for image stitching is provided. The method comprises the following steps: acquiring two images to be stitched; generating two feature representations based on the two images via the trained feature extraction network; generating, via the trained decoding network, a first confidence indicating whether the two images can be stitched and a first relative position between the two images based on the two feature representations; and if the two images are determined to be capable of being stitched based on the first confidence, stitching the two images based on the first relative position.
According to a second aspect of the present disclosure, a computing device is provided. The computing device includes: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method according to the first aspect.
In a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 is a schematic diagram of an information handling environment 100 according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a method 200 for image stitching according to an embodiment of the present disclosure.
Fig. 3 is a schematic block diagram of a neural network 300 in accordance with an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a method 400 for generating a first confidence indicating whether two images can be stitched and a first relative position between the two images, in accordance with an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a method 500 for training a feature extraction network and a decoding network, in accordance with an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a method 600 for stitching two images according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram of a process 700 of acquisition of a sub-image pair according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of a stitching process 800 of two images according to an embodiment of the disclosure.
FIG. 9 is a block diagram of a computing device used to implement the method for image stitching of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As described above, the conventional stitching scheme relies mainly on manual stitching and is therefore inefficient.
To address, at least in part, one or more of the above issues and other potential issues, example embodiments of the present disclosure propose a scheme for image stitching. In this scheme, a computing device obtains two images to be stitched and generates two feature representations based on the two images via a trained feature extraction network. The computing device generates, via the trained decoding network, a first confidence indicating whether the two images can be stitched and a first relative position between the two images based on the two feature representations. If the computing device determines that the two images can be stitched based on the first confidence, the two images are stitched based on the first relative position. In this way, the efficiency of image stitching can be improved.
Hereinafter, specific examples of the present scheme will be described in more detail with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of an example of an information processing environment 100, according to an embodiment of the present disclosure. The information processing environment 100 may include a computing device 110, two images to be stitched 120-1 and 120-2 (hereinafter collectively referred to as 120), and a stitching result 130.
The computing device 110 includes, for example, but is not limited to, a server computer, a multiprocessor system, a mainframe computer, a distributed computing environment including any of the above systems or devices, and the like. In some embodiments, the computing device 110 may have one or more processing units, including special-purpose processing units such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), and general-purpose processing units such as central processing units (CPUs).
The computing device 110 is configured to: obtain two images 120 to be stitched; generate two feature representations based on the two images 120 via the trained feature extraction network; generate, via the trained decoding network, a first confidence indicating whether the two images can be stitched and a first relative position between the two images based on the two feature representations; and, if the two images are determined to be capable of being stitched based on the first confidence, stitch the two images based on the first relative position. Finally, a stitching result 130 is obtained.
Therefore, the efficiency of image stitching can be improved.
Fig. 2 shows a flow diagram of a method 200 for image stitching according to an embodiment of the present disclosure. For example, the method 200 may be performed by the computing device 110 as shown in FIG. 1. It should be understood that method 200 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the present disclosure is not limited in this respect.
At block 202, the computing device 110 acquires two images 120 to be stitched.
In some embodiments, both images are X-ray images.
At block 204, the computing device 110 generates two feature representations based on the two images 120 via the trained feature extraction network.
For example, the two images are processed separately via the trained feature extraction network to generate the two feature representations: the two images may be processed in parallel using the same trained feature extraction network, they may be processed in sequence using the same trained feature extraction network, or the trained feature extraction network may include two identical feature extraction sub-networks that process the two images respectively and output the two feature representations respectively.
Fig. 3 shows a schematic block diagram of a neural network 300 according to an embodiment of the present disclosure. As shown in fig. 3, two images 310-1 and 310-2 are input to two identical feature extraction networks 321 and 322 included in the feature extraction network 320, respectively, and output two feature representations, respectively.
In some embodiments, the feature extraction network may include a Deep residual network (ResNet), for example including but not limited to ResNet50, res2net50, res2net50_v1b, res2next, res2net101_v1b.
Each generated feature representation may comprise, for example, a feature map having a plurality of channels, for example, a 7 by 7 feature map having 1024 channels.
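As a non-limiting illustration, this shared-backbone arrangement can be sketched in Python with PyTorch/torchvision as follows; the class name FeatureExtractor, the choice of backbone stage, the 3-channel input, and the input size are assumptions made for illustration, not details fixed by this disclosure.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FeatureExtractor(nn.Module):
    """Shared ResNet-style backbone applied to both images to be stitched."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep the convolutional stages up to layer3 (1024 output channels);
        # drop layer4, average pooling and the classifier head. The stage to
        # truncate at, and hence the spatial size of the output feature map,
        # is an illustrative assumption.
        self.body = nn.Sequential(*list(backbone.children())[:-3])

    def forward(self, img_a, img_b):
        # The same weights process both images (in sequence here; they could
        # equally be batched together or run by two identical sub-networks).
        return self.body(img_a), self.body(img_b)

extractor = FeatureExtractor()
img_a = torch.randn(1, 3, 224, 224)        # e.g. the upper image to be stitched
img_b = torch.randn(1, 3, 224, 224)        # e.g. the lower image to be stitched
feat_a, feat_b = extractor(img_a, img_b)   # the two feature representations
```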
Returning to fig. 2, at block 206, the computing device 110 generates, via the trained decoding network, a first confidence indicating whether the two images can be stitched and a first relative position between the two images based on the two feature representations.
The first confidence may be a value within a predetermined range. The predetermined range includes, for example, but is not limited to, 0-1. If the first confidence is greater than a predetermined confidence (e.g., 0.6, 0.7, 0.8, etc.), then it may be determined that the two images are stitchable, otherwise it may be determined that the two images are not stitchable.
The first relative position for example comprises an offset between the two images, for example an offset (x, y) between the upper left corners of the two images.
Referring to fig. 3, two feature representations may be input to the decoding network 330 and output a first confidence 340 and a first relative position 350.
Returning to fig. 2, at block 208, the computing device 110 determines whether the two images can be stitched based on the first confidence level.
If, at block 208, the computing device 110 determines that the two images can be stitched based on the first confidence level, then, at block 210, the two images are stitched based on the first relative position.
Therefore, the confidence indicating whether the two images to be stitched can be stitched and the relative position between them can be determined using the feature extraction network and the decoding network, and the two images are stitched based on the relative position when the confidence indicates that they can be stitched. In addition, costs for the hospital can be reduced.
Fig. 4 shows a flow diagram of a method 400 for generating a first confidence indicating whether two images can be stitched and a first relative position between the two images, according to an embodiment of the disclosure. For example, the method 400 may be performed by the computing device 110 as shown in FIG. 1. It should be understood that method 400 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 402, the computing device 110 adds the two feature representations by channel to generate an added feature representation.
For example, if both feature representations are 7 by 7 feature maps with 1024 channels, then after adding them by channel, the added feature representation is a 7 by 7 feature map with 1024 channels. It should be understood that the number of channels and the feature size are merely illustrative and the scope of the present disclosure is not limited thereto.
At block 404, the computing device 110 generates an intermediate feature representation via the plurality of convolutional layers and downsampling layers based on the added feature representation.
For example, the 7 × 7 feature map with 1024 channels may be reduced to a 7 × 7 feature map with 512 channels through a 1 × 1 convolutional layer, then to a 7 × 7 feature map with 256 channels through a 3 × 3 convolutional layer, and then to a 3 × 3 feature map with 256 channels through a max pooling (maxpool) layer. It should be understood that this is by way of example only.
At block 406, the computing device 110 generates, via the first branch neural network, a first confidence indicating whether the two images can be stitched based on the intermediate feature representation.
For example, the first branch neural network may include a 1 × 1 convolutional layer, a 3 × 3 convolutional layer, a max pooling layer, a 1 × 1 convolutional layer, and an activation layer. The 3 × 3 feature map with 256 channels is passed through the first 1 × 1 convolutional layer to generate a 3 × 3 feature map with 256 channels, then through the 3 × 3 convolutional layer to generate a 3 × 3 feature map with 256 channels, then through the max pooling layer to generate a 1 × 1 feature map with 256 channels, then through the second 1 × 1 convolutional layer to generate a 1 × 1 feature map with 1 channel, and finally through a sigmoid activation layer, which outputs the first confidence indicating whether the two images can be stitched.
At block 408, the computing device 110 generates a first relative position between the two images via the second branch neural network based on the intermediate feature representation.
For example, the second branch neural network may include a 1 × 1 convolutional layer, a 3 × 3 convolutional layer, a max pooling layer, and a 1 × 1 convolutional layer. The 3 × 3 feature map with 256 channels is passed through the first 1 × 1 convolutional layer to generate a 3 × 3 feature map with 256 channels, then through the 3 × 3 convolutional layer to generate a 3 × 3 feature map with 256 channels, then through the max pooling layer to generate a 1 × 1 feature map with 256 channels, and then through the second 1 × 1 convolutional layer to generate a 1 × 1 feature map with 2 channels as the first relative position between the two images.
It should be understood that although fig. 4 illustrates block 406 being performed first and block 408 being performed second, this is by way of example only, and block 408 being performed first and block 406 being performed second, or block 406 and block 408 being performed in parallel, is also possible.
Thus, the two feature representations can be added according to the channels and reduced to the intermediate feature representation, and then processed by the two branch neural networks to determine the first confidence and the first relative position.
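For illustration only, the decoding network of blocks 402-408 can be sketched as the following PyTorch module; the channel counts follow the example above, while the adaptive max pooling (used so the sketch also runs for other feature-map sizes), the class name DecodingNetwork, and the exact layer arrangement are assumptions.

```python
import torch
import torch.nn as nn

class DecodingNetwork(nn.Module):
    """Channel-wise addition, shared trunk, and two output branches."""
    def __init__(self, in_ch=1024):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 512, kernel_size=1),           # e.g. 7x7x1024 -> 7x7x512
            nn.Conv2d(512, 256, kernel_size=3, padding=1),  # -> 7x7x256
            nn.AdaptiveMaxPool2d(3),                        # downsample to 3x3x256
        )
        # First branch: confidence that the two images can be stitched.
        self.confidence_branch = nn.Sequential(
            nn.Conv2d(256, 256, kernel_size=1),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.AdaptiveMaxPool2d(1),                        # 3x3 -> 1x1
            nn.Conv2d(256, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Second branch: relative position (x, y) between the two images.
        self.offset_branch = nn.Sequential(
            nn.Conv2d(256, 256, kernel_size=1),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.AdaptiveMaxPool2d(1),
            nn.Conv2d(256, 2, kernel_size=1),
        )

    def forward(self, feat_a, feat_b):
        added = feat_a + feat_b                  # add the two maps channel by channel
        mid = self.trunk(added)                  # intermediate feature representation
        confidence = self.confidence_branch(mid).flatten(1)   # shape (N, 1)
        offset = self.offset_branch(mid).flatten(1)           # shape (N, 2)
        return confidence, offset
```

The final sigmoid keeps the confidence within the 0-1 range mentioned above, so it can be compared directly against a predetermined threshold such as 0.6, 0.7 or 0.8.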
Fig. 5 shows a flow diagram of a method 500 for training a feature extraction network and a decoding network in accordance with an embodiment of the present disclosure. For example, the method 500 may be performed by the computing device 110 as shown in fig. 1. It should be understood that method 500 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 502, the computing device 110 acquires the stitched plurality of images.
In some embodiments, the stitched plurality of images comprises a stitched plurality of X-ray images, such as a plurality of stitched full length X-ray films.
In some embodiments, the computing device 110 may acquire a plurality of stitched initial X-ray images. The plurality of initial X-ray images may relate to a plurality of different X-ray samples, for example differing in age, body type, disease, and the like. Subsequently, the computing device 110 may perform one or more pre-processing operations on the plurality of initial X-ray images to generate the stitched plurality of X-ray images. The one or more pre-processing operations include stretching, rotating, mirroring, adjusting contrast and exposure, and the like. This can increase the number of images available for training.
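As one possible illustration of such pre-processing, the following torchvision-based sketch applies slight rotation/stretching, mirroring, and brightness/contrast jitter; the specific transforms and parameter ranges are assumptions and are not prescribed by this disclosure.

```python
from torchvision import transforms

# One possible augmentation pipeline for grayscale (PIL "L" mode) X-ray images.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=5, scale=(0.9, 1.1)),   # slight rotation / stretching
    transforms.RandomHorizontalFlip(p=0.5),                 # mirroring
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # exposure / contrast changes
])

# augmented_image = augment(initial_xray_image)  # yields an additional training image
```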
At block 504, the computing device 110 determines, for each image of the plurality of images, a set of pairs of sub-images and a set of relative positions associated with the set of pairs of sub-images from the image, there being a coincidence between each pair of sub-images in the set of pairs of sub-images.
For example, as shown in fig. 7, the first sub-image 720 in each sub-image pair in the set of sub-image pairs may be below the second sub-image 730. The first sub-image 720 may overlap the image 710 by 70%-90% in the lateral direction and by at least 10% in the longitudinal direction. The set of sub-image pairs determined from each image may comprise one or more sub-image pairs.
The first sub-image 720 and the second sub-image 730 may overlap by at least 80% in the lateral direction and by 10%-80% in the longitudinal direction. Thereby, a good degree of overlap between the first sub-image and the second sub-image is achieved.
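The sampling of a sub-image pair and its relative-position label can be illustrated with the following NumPy sketch, under one possible reading of the overlap ranges above; the fractions used, the crop policy, and the function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def sample_sub_image_pair(image, sub_h_frac=0.4, width_frac=0.8, overlap_frac=0.3):
    """Crop an upper and a lower sub-image with a vertical overlap from a
    stitched image (NumPy array) and return the pair with its offset label."""
    h, w = image.shape[:2]
    sub_h = int(h * sub_h_frac)            # height of each sub-image (assumption)
    sub_w = int(w * width_frac)            # e.g. 70%-90% of the image width
    x0 = rng.integers(0, w - sub_w + 1)    # shared horizontal placement
    overlap = int(sub_h * overlap_frac)    # e.g. 10%-80% of the sub-image height

    top_y = rng.integers(0, h - 2 * sub_h + overlap + 1)
    second = image[top_y:top_y + sub_h, x0:x0 + sub_w]      # upper (second) sub-image
    first_y = top_y + sub_h - overlap
    first = image[first_y:first_y + sub_h, x0:x0 + sub_w]   # lower (first) sub-image
    # Relative-position label: (longitudinal, lateral) offset of the lower
    # sub-image's upper-left corner with respect to the upper sub-image's.
    relative_position = (sub_h - overlap, 0)
    return first, second, relative_position
```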
At block 506, the computing device 110 trains the feature extraction network and the decoding network with the plurality of sets of sub-images as samples and the plurality of sets of relative positions and the predetermined confidence as labels to generate a trained feature extraction network and a trained decoding network.
In particular, for each sub-image pair in each of the plurality of sub-image pair sets, computing device 110 may generate, via the feature extraction network and the decoding network, a second confidence indicating whether the sub-image pairs can be stitched, and a second relative position between the sub-image pairs, based on the sub-image pair.
The computing device 110 may generate a first error between the predetermined confidence and the second confidence based on a first predetermined loss function (e.g., binary cross-entropy loss, BCELoss) and back-propagate the first branch neural network based on the first error to update the first branch neural network and generate a first intermediate error. It should be understood that the first intermediate error herein refers to the error obtained after the first error is propagated backward through the first branch neural network. The predetermined confidence may be, for example, 1, indicating that the two images can be stitched. It should be understood that this is by way of example only, and that other values for the predetermined confidence may be used, as the scope of the disclosure is not limited thereto.
The computing device 110 may generate a second error between the relative position associated with the pair of sub-images and the second relative position based on a second predetermined loss function (e.g., smooth L1 loss, SmoothL1Loss), and back-propagate the second branch neural network based on the second error to update the second branch neural network and generate a second intermediate error. It should be understood that the second intermediate error herein refers to the error obtained after the second error propagates backward through the second branch neural network.
Subsequently, the computing device 110 may combine the first intermediate error and the second intermediate error to generate a third intermediate error and backpropagate the plurality of convolutional and downsampling layers and the feature extraction network based on the third intermediate error to update the plurality of convolutional and downsampling layers and the feature extraction network.
In this way, the two errors respectively associated with the confidence and the relative position are back-propagated through the two branch neural networks, combined, and then back-propagated through the plurality of convolutional layers, the downsampling layers, and the feature extraction network, thereby realizing the training of the feature extraction network and the decoding network.
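For illustration, a single training step may be sketched as follows in PyTorch, reusing the illustrative FeatureExtractor and DecodingNetwork modules sketched above; here the two losses are summed and back-propagated jointly, which produces the same gradients for the shared convolutional/downsampling layers and the feature extraction network as combining the two intermediate errors as described above. The optimizer choice and learning rate are assumptions.

```python
import torch
import torch.nn as nn

extractor = FeatureExtractor()            # illustrative backbone sketched above
decoder = DecodingNetwork()               # illustrative decoding network sketched above
confidence_loss_fn = nn.BCELoss()         # first predetermined loss function
position_loss_fn = nn.SmoothL1Loss()      # second predetermined loss function
optimizer = torch.optim.Adam(
    list(extractor.parameters()) + list(decoder.parameters()), lr=1e-4)

def train_step(img_a, img_b, target_confidence, target_offset):
    feat_a, feat_b = extractor(img_a, img_b)
    pred_confidence, pred_offset = decoder(feat_a, feat_b)
    loss_conf = confidence_loss_fn(pred_confidence, target_confidence)  # "first error"
    loss_pos = position_loss_fn(pred_offset, target_offset)             # "second error"
    loss = loss_conf + loss_pos       # combined error reaching the shared layers
    optimizer.zero_grad()
    loss.backward()                   # gradients flow through both branches,
                                      # the trunk and the feature extraction network
    optimizer.step()
    return loss.item()
```

For a positive sample, target_confidence would be a tensor of ones of shape (N, 1) and target_offset the labeled (x, y) offset; for a negative sample, zeros and (0, 0) respectively, matching the labels described below.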
The plurality of sets of sub-image pairs may be considered positive samples. In some embodiments, the computing device 110 may also obtain multiple negative samples, such as two images that cannot be stitched. Subsequently, the computing device 110 trains the feature extraction network and the decoding network based on the obtained plurality of negative samples, another predetermined confidence (e.g., 0, indicating that the two images cannot be stitched), and a predetermined relative position, e.g., (0, 0). It should be understood that the values herein are merely exemplary, and that other values for the predetermined confidence and the predetermined relative position may be used, as the scope of the present disclosure is not limited in this respect. This further improves the accuracy of the confidence and the relative position output by the feature extraction network and the decoding network.
Therefore, the feature extraction network and the decoding network can be trained by acquiring pairs of overlapping sub-images from the stitched images, which improves the accuracy of the confidence and the relative position output by the feature extraction network and the decoding network.
Fig. 6 shows a flow diagram of a method 600 for stitching two images according to an embodiment of the present disclosure. For example, the method 600 may be performed by the computing device 110 as shown in FIG. 1. It should be understood that method 600 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 602, the computing device 110, for each of the two images, acquires a coincident portion of the image that coincides with the other of the two images based on the relative position.
For example, the relative position (positional offset) of the two images is (x, y), i.e., the upper left corner of the second image 820 located below differs from the upper left corner of the first image 810 located above by x rows longitudinally and y columns laterally. For the first image 810 located above, the portion overlapping the second image 820 is the region (e.g., n rows and m columns) enclosed from the x-th row to the last row and from the y-th column to the last column of the first image 810. The portion of the second image located below that overlaps the first image is the region enclosed by the 1st row to the n-th row and the 1st column to the m-th column of the second image.
At block 604, the computing device 110 generates a pixel difference map based on the two coincident portions.
Specifically, the computing device 110 may subtract the pixel values of corresponding pixels in the two overlapping portions and take the absolute value to generate a pixel difference map.
In some embodiments, the two coincident portions may also be smoothed, such as gaussian blur, before generating the pixel difference map.
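The extraction of the two overlapping portions and the pixel difference map of blocks 602-604 can be sketched as follows with NumPy and OpenCV (0-based indexing, images of equal width assumed); the blur kernel size and the function name are illustrative assumptions.

```python
import numpy as np
import cv2

def pixel_difference_map(first_img, second_img, x, y, blur=True):
    """first_img is the upper image, second_img the lower image; (x, y) is the
    row/column offset of the lower image's upper-left corner (0-based here)."""
    upper_overlap = first_img[x:, y:].astype(np.float32)    # rows x.., columns y..
    n, m = upper_overlap.shape[:2]
    lower_overlap = second_img[:n, :m].astype(np.float32)   # first n rows, m columns
    if blur:
        # Optional smoothing before differencing (kernel size is an assumption).
        upper_overlap = cv2.GaussianBlur(upper_overlap, (5, 5), 0)
        lower_overlap = cv2.GaussianBlur(lower_overlap, (5, 5), 0)
    return np.abs(upper_overlap - lower_overlap)
```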
At block 606, the computing device 110 determines a stitching location between the two images based on the pixel difference map.
For example, a row in the pixel difference map where the sum of the pixel differences is minimal may be used as the stitching location.
In some embodiments, for each of the plurality of rows included in the pixel difference map, computing device 110 may generate a first value for the row based on a plurality of pixel difference values located at the row.
For example, a plurality of pixel difference values located in the row are added to generate a first value for the row.
Subsequently, for each of the plurality of rows included in the pixel difference map, computing device 110 may generate a second value for the row based on the first value for the row and the first values for a predetermined number of rows above and below the row.
The predetermined number includes, for example, but is not limited to, 2, 4, 5, 6, etc. For example, the first value of a row is added to the 10 first values of the 5 rows above and the 5 rows below that row to generate the second value of the row. This may be skipped for, e.g., the first 5 rows and the last 5 rows, or the process may be implemented by padding 5 additional rows before the first row and 5 additional rows after the last row.
Next, the computing device 110 may determine, from the plurality of rows included in the pixel difference map, the row having the smallest second value as the stitching location between the two images.
Thus, a stitching location where the pixel difference between the two images is minimal can be found.
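A NumPy sketch of this row-selection procedure is given below; the window of 5 rows above and below and the edge-padding strategy follow the example above, and the function name is an assumption.

```python
import numpy as np

def find_stitching_row(diff_map, k=5):
    """Return the index of the row with the smallest smoothed difference sum."""
    row_sums = diff_map.reshape(diff_map.shape[0], -1).sum(axis=1)      # first values
    padded = np.pad(row_sums, k, mode='edge')                           # handle edge rows
    window = 2 * k + 1
    second_values = np.convolve(padded, np.ones(window), mode='valid')  # windowed sums
    return int(np.argmin(second_values))   # row whose second value is smallest
```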
At block 608, the computing device 110 stitches the two images based on the stitch location.
For example, the pixel values above the stitching location may be taken from the upper image and the pixel values below the stitching location from the lower image.
Therefore, the stitching location can be determined, and the two images stitched, based on the pixel differences between the overlapping portions of the two images.
In some embodiments, the computing device 110 may determine a fusion region between the two images based on the stitching location and the predetermined fusion width. The predetermined fusion width is, for example, but not limited to, 20 or 30 rows of pixels.
For each row within the fusion region, the computing device 110 may determine a first transparency for a first image of the two images and a second transparency for a second image of the two images based on a position of the row in the fusion region.
For example, as shown in fig. 8, the first image 810 is located above and the second image 820 is located below. If the fusion region includes n rows, then for the i-th row the second transparency is i/n × 100% and the first transparency is (1 - i/n) × 100%.
The computing device 110 may process a sequence of pixel values in the first image corresponding to the row based on the first transparency to generate a first sequence of pixel values. For example, as shown in FIG. 8, the row 830 is located j rows above the stitching location 840, and the stitching location 840 corresponds to the (a + x)-th row in the first image 810 and the a-th row in the second image 820; the row 830 therefore corresponds to the (a + x - j)-th row in the first image 810, and the first transparency is multiplied by the sequence of pixel values of the (a + x - j)-th row in the first image 810 to generate the first sequence of pixel values.
The computing device 110 may process a sequence of pixel values in the second image corresponding to the row based on the second transparency to generate a second sequence of pixel values.
As shown in fig. 8, the row 830 corresponds to the (a - j)-th row in the second image 820, and the second transparency is multiplied by the sequence of pixel values of the (a - j)-th row in the second image 820 to generate the second sequence of pixel values.
Subsequently, the computing device 110 may generate a sequence of pixel values in the row based on the first sequence of pixel values and the second sequence of pixel values.
For example, the first sequence of pixel values and the second sequence of pixel values are correspondingly added to generate the sequence of pixel values in the row.
In this way, the two images can be blended smoothly: the closer a row in the fusion region is to one of the two images, the larger the proportion of the blended pixels contributed by that image.
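The row-wise blending in the fusion region can be illustrated by the following NumPy sketch, assuming the two images have already been laterally aligned by the offset y and that the fusion region lies entirely inside both images; variable names and the default fusion width are assumptions, and boundary checks and the final assembly of the stitched image are omitted.

```python
import numpy as np

def blend_fusion_region(first_img, second_img, x, stitch_row, fusion_width=20):
    """first_img is the upper image, second_img the lower image, x the vertical
    offset, and stitch_row the stitching row index within the overlap region."""
    n = fusion_width
    start = stitch_row - n // 2                   # fusion region centred on the stitch row
    blended = np.empty((n,) + first_img.shape[1:], dtype=np.float32)
    for i in range(n):
        alpha_lower = i / n                       # second transparency: i/n
        alpha_upper = 1.0 - alpha_lower           # first transparency: 1 - i/n
        upper_row = first_img[x + start + i].astype(np.float32)   # row of the upper image
        lower_row = second_img[start + i].astype(np.float32)      # row of the lower image
        blended[i] = alpha_upper * upper_row + alpha_lower * lower_row
    return blended
```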
Fig. 9 illustrates a schematic block diagram of an example device 900 that may be used to implement embodiments of the present disclosure. For example, computing device 110 as shown in FIG. 1 may be implemented by device 900. As shown, device 900 includes a Central Processing Unit (CPU) 901 that can perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 902 or loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the random access memory 903, various programs and data required for the operation of the device 900 can also be stored. The central processing unit 901, the read only memory 902 and the random access memory 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the device 900 are connected to the input/output interface 905, including: an input unit 906 such as a keyboard, a mouse, a microphone, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The various processes and methods described above, such as the methods 200, 400 and 600, may be performed by the central processing unit 901. For example, in some embodiments, the methods 200, 400, and 600 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, some or all of the computer program may be loaded and/or installed onto the device 900 via the read only memory 902 and/or the communication unit 909. One or more of the actions of the methods 200, 400 and 600 described above may be performed when the computer program is loaded into the random access memory 903 and executed by the central processing unit 901.
The present disclosure relates to methods, apparatuses, systems, computing devices, computer-readable storage media, and/or computer program products. The computer program product may include computer-readable program instructions for performing various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA) can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.