Voice endpoint detection method and system based on local convolution block attention network
1. The voice endpoint detection method based on the local convolution block attention network is characterized by comprising the following steps:
acquiring spectrogram data of voice data;
extracting N adjacent frames for each frame of data in the spectrogram data by using a locality-sensitive hashing algorithm to obtain frame-level local spectrogram data;
inputting the local spectrogram data into a local convolution block attention network, performing feature extraction through a convolution module, and performing attention operation after each convolution block sequentially through a channel attention module, a frequency spectrum attention module and a time attention module to obtain enhanced data;
and inputting the enhanced data into a classifier, and performing voice/non-voice frame detection to obtain a prediction result.
2. The local convolution block attention network based voice endpoint detection method of claim 1, wherein the obtaining spectrogram data of the voice data comprises:
framing and windowing the voice data;
and carrying out fast Fourier transform on each frame of voice data after windowing to obtain two-dimensional spectrogram data.
3. The local convolution block attention network based voice endpoint detection method according to claim 1, wherein the process of obtaining the frame-level local spectrogram data comprises:
selecting a group of hash function families, and then mapping each frame of frequency spectrum vector into an integer vector;
mapping the integer vector to a certain bit of a hash table to obtain hash table indexes, wherein each hash table index corresponds to a hash bucket;
obtaining a keyword of the frequency spectrum vector in a hash bucket according to the hash value of the integer vector;
the position indexes of the frame frequency spectrum data represented by the integer vector in all data are put into a hash bucket corresponding to the hash table index until all the frame frequency spectrum data indexes are stored;
for each query, obtaining the hash bucket index and the keywords in the bucket, searching whether the keyword exists in the hash bucket, and if so, taking out the position indexes of the frame frequency spectrum data corresponding to all the keywords from the hash bucket;
and taking out the frame frequency spectrum data corresponding to the position indexes, sorting the data by Euclidean distance to the query in ascending order, and taking the N frames of spectrum data with the shortest distance as the local spectrogram input.
4. The local convolution block attention network based voice endpoint detection method of claim 1, wherein the channel attention module comprises:
inputting the local spectrogram data into a convolution block module for feature extraction;
performing maximum pooling and average pooling on the extracted features along the channel dimension, and then passing the obtained maximum channel feature map and average channel feature map through a neural network to obtain an aggregated channel feature map;
and obtaining the attention score of the channel feature map by adopting a sigmoid(·) function, and multiplying the attention score of the channel feature map by the output of the convolution block to obtain the output of the channel attention module.
5. The local convolution block attention network based voice endpoint detection method of claim 1, wherein the spectral attention module comprises:
performing channel dimension compression on the numerical value output by the channel attention module to obtain a first channel compression characteristic diagram;
respectively performing maximum pooling and average pooling on the first channel compression feature map along the frequency dimension, and then passing the obtained maximum spectral feature map and average spectral feature map through a neural network to obtain an aggregated spectral feature map;
and obtaining the attention score of the spectral feature map by adopting a sigmoid(·) function, and multiplying the attention score of the spectral feature map by the output of the channel attention module to obtain the output of the spectral attention module.
6. The local convolution block attention network based voice endpoint detection method of claim 1, wherein the temporal attention module comprises:
performing channel dimension compression on the numerical value output by the frequency spectrum attention module to obtain a second channel compression characteristic diagram;
respectively performing maximum pooling and average pooling on the second channel compression feature map along the time dimension, and then passing the obtained maximum time feature map and average time feature map through a neural network to obtain an aggregated time feature map;
and obtaining the attention score of the time feature map by adopting a sigmoid(·) function, and multiplying the attention score of the time feature map by the output of the frequency spectrum attention module to obtain the output of the time attention module.
7. The local convolution block attention network based voice endpoint detection method of claim 1, wherein after each convolution block attention operation is completed, a residual connection to an initial input of a convolution block is set; judging whether the maximum convolution block number of the network is reached; if yes, obtaining enhanced data; otherwise, the attention operations of the channel attention module, the spectral attention module, and the temporal attention module are iteratively updated.
8. A voice endpoint detection system based on a local convolution block attention network, characterized by comprising:
a spectrogram module configured to: acquiring spectrogram data of voice data;
a local spectrogram module configured to: extracting N adjacent frames for each frame of data in the spectrogram data by using a locality-sensitive hashing algorithm to obtain frame-level local spectrogram data;
a convolution block attention acquisition module configured to: inputting the local spectrogram data into a local convolution block attention network, performing feature extraction through a convolution module, and performing attention operation after each convolution block sequentially through a channel attention module, a frequency spectrum attention module and a time attention module to obtain enhanced data;
a prediction module configured to: and inputting the enhanced data into a classifier, and performing voice/non-voice frame detection to obtain a prediction result.
9. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of the method for voice endpoint detection based on a local convolution block attention network according to any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps in the method for voice endpoint detection based on local convolutional block attention network as claimed in any of claims 1 to 7.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Voice Activity Detection (VAD) is the task of detecting which parts of an utterance contain voice and which parts are noise or silence, retaining only the voice segments. Such a task is usually an important preprocessing stage in fields such as speech recognition and speech enhancement; a good VAD preprocessing system can reduce the computation and latency of the whole model and is the basis of high model performance. However, VAD has the following problems:
1) the characteristics of the speech signal can not be accurately represented by the conventional time domain and frequency domain characteristics under the condition of low signal-to-noise ratio;
2) under the condition of low signal-to-noise ratio, the detection accuracy of the VAD system is greatly influenced by high-intensity noise;
3) in the face of non-stationary noise background, the generalization capability of the VAD system is greatly reduced.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a voice endpoint detection method and system based on a local convolution block attention network, which dynamically selects neighboring frames for each frame of spectrum in a spectrogram through a locality-sensitive hashing algorithm to form a frame-level local spectrogram input; then a local convolution block neural network is used to learn frame-level features directly from the spectrum, and channel attention, frequency spectrum attention and time attention are set after each convolution block to help the model focus on more important information and suppress unnecessary features.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a voice endpoint detection method based on a local rolling block attention network.
The voice endpoint detection method based on the local convolution block attention network comprises the following steps:
acquiring spectrogram data of voice data;
extracting N adjacent frames for each frame of data in the spectrogram data by using a locality-sensitive hashing algorithm to obtain frame-level local spectrogram data;
inputting the local spectrogram data into a local convolution block attention network, performing feature extraction through a convolution module, and performing attention operation after each convolution block sequentially through a channel attention module, a frequency spectrum attention module and a time attention module to obtain enhanced data;
and inputting the enhanced data into a classifier, and performing voice/non-voice frame detection to obtain a prediction result.
Further, the acquiring spectrogram data of the voice data comprises: framing and windowing the voice data; and carrying out fast Fourier transform on each frame of voice data after windowing to obtain two-dimensional spectrogram data.
Further, the process of obtaining the frame-level local spectrogram data includes:
selecting a group of hash function families, and then mapping each frame of frequency spectrum vector into an integer vector;
mapping the integer vector to a certain bit of a hash table to obtain hash table indexes, wherein each hash table index corresponds to a hash bucket;
obtaining a keyword of the frequency spectrum vector in a hash bucket according to the hash value of the integer vector;
the position indexes of the frame frequency spectrum data represented by the integer vector in all data are put into a hash bucket corresponding to the hash table index until all the frame frequency spectrum data indexes are stored;
for each query, obtaining the hash bucket index and the keywords in the bucket, searching whether the keyword exists in the hash bucket, and if so, taking out the position indexes of the frame frequency spectrum data corresponding to all the keywords from the hash bucket;
and taking out the frame frequency spectrum data corresponding to the position indexes, sorting the data by Euclidean distance to the query in ascending order, and taking the N frames of spectrum data with the shortest distance as the local spectrogram input.
Further, the channel attention module includes:
inputting the local spectrogram data into a convolution block module for feature extraction;
performing maximum pooling and average pooling on the output of the convolution block along the channel dimension, and then passing the obtained maximum channel feature map and average channel feature map through a neural network to obtain an aggregated channel feature map;
and obtaining the attention score of the channel feature map by adopting a sigmoid(·) function, and multiplying the attention score of the channel feature map by the output of the convolution block to obtain the output of the channel attention module.
Further, the spectral attention module includes:
performing channel dimension compression on the numerical value output by the channel attention module to obtain a first channel compression characteristic diagram;
respectively performing maximum pooling and average pooling on the first channel compression feature map along the frequency dimension, and then passing the obtained maximum spectral feature map and average spectral feature map through a neural network to obtain an aggregated spectral feature map;
and obtaining the attention score of the spectral feature map by adopting a sigmoid(·) function, and multiplying the attention score of the spectral feature map by the output of the channel attention module to obtain the output of the spectral attention module.
Further, the temporal attention module includes:
performing channel dimension compression on the numerical value output by the frequency spectrum attention module to obtain a second channel compression characteristic diagram;
respectively performing maximum pooling and average pooling on the second channel compression feature map along the time dimension, and then passing the obtained maximum time feature map and average time feature map through a neural network to obtain an aggregated time feature map;
and obtaining the attention score of the time feature map by adopting a sigmoid(·) function, and multiplying the attention score of the time feature map by the output of the frequency spectrum attention module to obtain the output of the time attention module.
Further, after the attention operation of each convolution block is completed, a residual connection to the initial input of the convolution block is set. It is judged whether the maximum number of convolution blocks of the network is reached; if yes, the enhanced data is obtained; otherwise, the attention operations of the channel attention module, the spectral attention module and the temporal attention module are iteratively updated.
A second aspect of the present invention provides a voice endpoint detection system based on a local convolution block attention network.
A voice endpoint detection system based on a local convolution block attention network comprises:
a spectrogram module configured to: acquiring spectrogram data of voice data;
a local spectrogram module configured to: extracting N adjacent frames for each frame of data in the spectrogram data by using a locality-sensitive hashing algorithm to obtain frame-level local spectrogram data;
a convolution block attention module configured to: inputting the local spectrogram data into a local convolution block attention network, performing feature extraction through a convolution module, and performing attention operation after each convolution block sequentially through a channel attention module, a frequency spectrum attention module and a time attention module to obtain enhanced data;
a prediction module configured to: and inputting the enhanced data into a classifier, and performing voice/non-voice frame detection to obtain a prediction result.
A third aspect of the invention provides a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the steps of the method for voice endpoint detection based on a local convolution block attention network as defined in the first aspect above.
A fourth aspect of the invention provides a computer apparatus.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the local convolution block attention network based voice endpoint detection method according to the first aspect when executing the program.
Compared with the prior art, the invention has the beneficial effects that:
according to the voice endpoint detection method based on the local convolution block attention network, original voice is used as input; a spectrogram containing the frequency spectrum information of each frame is generated first; secondly, several neighboring frames are dynamically selected for each frame by using a locality-sensitive hashing algorithm to form a frame-level local spectrogram input; then channel attention, spectrum attention and time attention are computed through the local convolution block attention network, so that while features are extracted, the model focuses on the more relevant channels and spectrum features, suppresses unnecessary features, and finally focuses on the more relevant context frames. Frames with high similarity also obtain very similar representations, which helps improve the model's detection accuracy for voice/non-voice frames at low signal-to-noise ratios.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain, and not to limit, the invention.
FIG. 1 is a flow chart of the voice endpoint detection method based on the local convolution block attention network according to the present invention;
FIG. 2 is a schematic view of a channel attention module of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
It is noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and systems according to various embodiments of the present disclosure. It should be noted that each block in the flowchart or block diagrams may represent a module, a segment, or a portion of code, which may comprise one or more executable instructions for implementing the logical function specified in the respective embodiment. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example one
As shown in fig. 1, the present embodiment provides a voice endpoint detection method based on a local convolution block attention network. The embodiment is illustrated by applying the method to a server; it is understood that the method may also be applied to a terminal, or to a system including a terminal and a server, implemented through interaction between the terminal and the server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited herein. In this embodiment, the method includes the steps of:
S1: acquiring spectrogram data of voice data;
S2: extracting N adjacent frames for each frame of data in the spectrogram data by using a locality-sensitive hashing algorithm to obtain frame-level local spectrogram data;
S3: inputting the local spectrogram data into a local convolution block attention network, performing feature extraction through a convolution module, and performing attention operation after each convolution block sequentially through a channel attention module, a frequency spectrum attention module and a time attention module to obtain enhanced data;
S4: inputting the enhanced data into a classifier, and performing voice/non-voice frame detection to obtain a prediction result.
According to a further technical scheme, the generating of spectrogram data of each piece of voice data comprises:
1.1, framing and windowing the original audio signal;
1.2, carrying out fast Fourier transform on each frame of audio signal after windowing to obtain two-dimensional spectrogram data.
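Steps 1.1-1.2 can be sketched in code as follows (an illustrative numpy sketch rather than the patent's reference implementation; the 16 kHz sampling rate, 25 ms/10 ms framing, Hamming window and 512-point FFT are assumed parameters):

```python
# Hedged sketch of steps 1.1-1.2: framing, windowing, FFT -> spectrogram.
# Frame length, hop size, window type and FFT size are assumptions.
import numpy as np

def spectrogram(signal, frame_len=400, hop=160, n_fft=512):
    """Return FFT magnitudes with shape (n_fft // 2 + 1, num_frames)."""
    window = np.hamming(frame_len)                 # step 1.1: windowing
    num_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(num_frames)])  # step 1.1: framing
    spec = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))  # step 1.2: FFT
    return spec.T  # columns are time frames, rows are frequency bins

sig = np.random.randn(16000)  # one second of dummy audio at 16 kHz
S = spectrogram(sig)
print(S.shape)  # (257, 98)
```

Each column of S is one frame's spectrum vector, matching the axis convention described for the two-dimensional spectrogram data.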
In a further technical scheme, the selecting of N neighboring frames for each frame of spectral data in the spectrogram data by using a locality-sensitive hashing algorithm to generate frame-level local spectrogram data includes:
2.1 randomly selecting k different hash functions and repeating L times to obtain L groups of hash function families, recorded as {g1(·), g2(·), …, gL(·)}; each group contains k hash functions, recorded as gi(·) = (h1(·), h2(·), …, hk(·));
2.2 randomly choosing one hash function family gi(·); each frame spectrum vector is mapped by gi(·) into an integer vector, recorded as (x1, x2, …, xk);
2.3 mapping the integer vector to a bit of the hash table to obtain the hash table index, wherein one hash table index corresponds to one hash bucket. The hash function used is:
index = ((r1x1 + r2x2 + … + rkxk) mod C) mod size
in the formula, ri is a random integer, mod is the remainder operation, C = 2^32 − 5 is a large prime number, and size is the length of the hash table.
2.4 computing the hash value of the integer vector to obtain the keyword fp of the spectrum vector in the hash bucket. The hash function used is:
fp = (r1'x1 + r2'x2 + … + rk'xk) mod C
in the formula, ri' is a random integer.
2.5, the position indexes of the frame spectrum data represented by the integer vector in all data are put into a hash bucket corresponding to the hash table index, and the key word in the hash bucket is fp.
2.6 repeating 2.2-2.5 until all the frame spectrum data indexes are stored;
2.7 for each query, executing steps 2.2-2.4 to obtain the hash bucket index and the keyword fp in the bucket; searching whether the keyword fp exists in the hash bucket, and if so, taking out from the hash bucket the position indexes of the frame spectrum data corresponding to all such keywords, recorded as a set R;
2.8 extracting the frame spectrum data corresponding to the position indexes in R, sorting the data by Euclidean distance to the query in ascending order, and extracting the N frames of spectrum data with the shortest distance as the local spectrogram input x.
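The table construction and query of steps 2.1-2.8 can be sketched as below (a hedged sketch: the random-projection hash functions h(·), the table length size, and k are illustrative assumptions, and a single hash family is used instead of L; only the prime C = 2^32 − 5 is taken from the text):

```python
# Hedged sketch of the LSH neighbor search of steps 2.1-2.8 (one family only).
import numpy as np

rng = np.random.default_rng(0)
C = 2**32 - 5        # large prime from step 2.3
SIZE = 1009          # hash-table length (assumed)
K, W = 4, 4.0        # hash functions per family / bucket width (assumed)

def make_family(dim):
    """g(.) = (h1..hK): random projections quantized to an integer vector."""
    a = rng.standard_normal((K, dim))
    b = rng.uniform(0, W, K)
    return lambda v: np.floor((a @ v + b) / W).astype(np.int64)

def table_index(x, r):    # step 2.3: position in the hash table
    return int(np.dot(r, x) % C) % SIZE

def fingerprint(x, r2):   # step 2.4: keyword fp inside the bucket
    return int(np.dot(r2, x) % C)

def build(frames, g, r, r2):
    tables = {}
    for pos, v in enumerate(frames):              # steps 2.5-2.6
        x = g(v)
        bucket = tables.setdefault(table_index(x, r), {})
        bucket.setdefault(fingerprint(x, r2), []).append(pos)
    return tables

def query(q, frames, tables, g, r, r2, n):
    x = g(q)                                      # step 2.7
    cand = tables.get(table_index(x, r), {}).get(fingerprint(x, r2), [])
    cand = sorted(cand, key=lambda p: np.linalg.norm(frames[p] - q))
    return cand[:n]                               # step 2.8: N nearest

frames = rng.standard_normal((50, 16))  # 50 dummy frame spectrum vectors
g = make_family(16)
r = rng.integers(1, 1000, K)
r2 = rng.integers(1, 1000, K)
tables = build(frames, g, r, r2)
print(query(frames[0], frames, tables, g, r, r2, n=3))
```

A query frame always lands in its own bucket, so its own position index is returned first (distance zero).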
In a further aspect, as shown in fig. 2, the channel attention module includes:
3.1 inputting the local spectrogram data x into a convolution block module for feature extraction, and outputting x0;
3.2 performing maximum pooling and average pooling on x0 along the channel dimension, and passing the resulting channel feature maps Cmax and Cavg through a neural network to obtain an aggregated channel feature map (channel map):
channel map = mlp(Cmax) + mlp(Cavg)
wherein mlp(·) is a neural network.
3.3 obtaining the attention scores of the channel feature map by using the sigmoid(·) function, representing the importance of each channel; applying the scores to x0, channels of higher importance obtain a more prominent representation:
x1 = x0 * sigmoid(channel map)
in the formula, x1 is the output of the channel attention module.
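Steps 3.1-3.3 can be sketched numerically as follows (a minimal numpy sketch; the shared mlp(·) is stood in for by one assumed weight matrix, and the tensor layout (C, H, N) = channels × frequency bins × neighboring frames is an assumption):

```python
# Minimal sketch of the channel attention of steps 3.1-3.3.
# The single weight matrix w standing in for mlp(.) is an assumption.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x0, w):
    c_max = x0.max(axis=(1, 2))          # Cmax: one value per channel
    c_avg = x0.mean(axis=(1, 2))         # Cavg: one value per channel
    channel_map = w @ c_max + w @ c_avg  # shared "mlp" (assumed single layer)
    score = sigmoid(channel_map)         # attention score per channel
    return x0 * score[:, None, None]     # broadcast over frequency and time

rng = np.random.default_rng(1)
x0 = rng.standard_normal((8, 64, 5))     # dummy convolution-block output
w = rng.standard_normal((8, 8))
x1 = channel_attention(x0, w)
print(x1.shape)  # (8, 64, 5)
```

The output keeps the input shape, so the next attention module can consume it directly.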
In a further aspect, the spectrum attention module includes:
3.4 performing channel dimension compression on x1 to obtain a channel compression feature map xcompress;
3.5 performing maximum pooling and average pooling on xcompress along the frequency dimension, and passing the resulting spectrum feature maps Fmax and Favg through a neural network to obtain an aggregated spectrum feature map (frequency map):
frequency map = mlp(Fmax) + mlp(Favg)
3.6 obtaining the attention scores of the spectrum feature map by using the sigmoid(·) function, representing the importance of each frequency component; applying the scores to the output x1 of the channel attention module, the more important frequencies obtain a more prominent representation:
x2 = x1 * sigmoid(frequency map)
in the formula, x2 is the output of the spectrum attention module.
In a further aspect, the time attention module is configured to focus on a more appropriate neighboring frame to obtain final enhancement data, and includes:
3.7 performing channel dimension compression on x2 to obtain a channel compression feature map x'compress;
3.8 performing maximum pooling and average pooling on x'compress along the time dimension, and passing the resulting time feature maps Tmax and Tavg through a neural network to obtain an aggregated time feature map (temporal map):
temporal map = mlp(Tmax) + mlp(Tavg)
3.9 obtaining the attention scores of the time feature map by using the sigmoid(·) function, representing the importance of each neighboring frame; applying the scores to the output x2 of the spectrum attention module, the more important neighboring frames obtain a more prominent representation:
x3 = x2 * sigmoid(temporal map)
in the formula, x3 is the output of the time attention module.
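Steps 3.4-3.9 can be sketched in the same style (a hedged numpy sketch; mean-compression over the channel dimension and single-matrix mlp(·)s are assumptions, with x1 laid out as (C, H, N)):

```python
# Hedged sketch of steps 3.4-3.9: channel-compress, then score each frequency
# bin (spectral attention) and each neighboring frame (temporal attention).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spectral_then_temporal(x1, w_f, w_t):
    xc = x1.mean(axis=0)                      # channel compression -> (H, N)
    f_map = w_f @ xc.max(axis=1) + w_f @ xc.mean(axis=1)   # per-frequency map
    x2 = x1 * sigmoid(f_map)[None, :, None]   # spectrum attention output
    xc2 = x2.mean(axis=0)                     # second compression -> (H, N)
    t_map = w_t @ xc2.max(axis=0) + w_t @ xc2.mean(axis=0) # per-frame map
    x3 = x2 * sigmoid(t_map)[None, None, :]   # time attention output
    return x3

rng = np.random.default_rng(2)
x1 = rng.standard_normal((8, 64, 5))  # dummy channel-attention output
w_f = rng.standard_normal((64, 64))
w_t = rng.standard_normal((5, 5))
x3 = spectral_then_temporal(x1, w_f, w_t)
print(x3.shape)  # (8, 64, 5)
```

The shape is again preserved, so x3 can be residually added to the initial input x as in step 3.10.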
3.10 after the attention operation is completed, set a residual connection with the initial input x of the convolution block in the following way:
x4=x3+x
3.11 repeat 3.1-3.10 until the maximum number of convolutional blocks M of the network is reached.
In a further technical scheme, the classifier is a three-layer neural network; the last layer has one output neuron, representing the probability that the frame is a speech frame.
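A minimal sketch of such a classifier is given below (layer widths, ReLU activations and the flattened 320-dimensional input are assumptions; only the single sigmoid output neuron comes from the text):

```python
# Hedged sketch of the three-layer speech/non-speech classifier.
# Layer widths, ReLU activations, and the input size are assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(feat, params):
    (w1, b1), (w2, b2), (w3, b3) = params
    h = np.maximum(0, w1 @ feat + b1)   # layer 1 (ReLU, assumed)
    h = np.maximum(0, w2 @ h + b2)      # layer 2
    return sigmoid(w3 @ h + b3)[0]      # layer 3: one neuron -> P(speech)

rng = np.random.default_rng(3)
dims = [320, 64, 16, 1]                 # flattened enhanced feature -> ... -> 1
params = [(rng.standard_normal((o, i)) * 0.1, np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
p = classify(rng.standard_normal(320), params)
print(0.0 < p < 1.0)  # probability in (0, 1)
```

Thresholding p (e.g. at 0.5) would yield the final voice/non-voice frame decision.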
In a further technical solution, the x-axis in the two-dimensional spectrogram data in step 1.2 represents a time frame, and the y-axis represents a frequency, that is, each column represents spectral data in the time frame. The spectrogram data can well represent the change of a voice spectrum along with time.
In a further embodiment, the queries in step 2.7 are, in sequence, all of the frame spectrum data.
In a further embodiment, the final result obtained in step 2.8 has shape (H, N), wherein H is the spectral vector dimension; channel dimension expansion is performed at the same time using the unsqueeze(·) function.
In a further embodiment, the convolution block operation in step 3.1 should keep the time dimension unchanged.
In a further embodiment, the outputs of the maximum pooling and the average pooling in step 3.2 should satisfy Cmax, Cavg ∈ R^C, where C represents the channel dimension.
According to a further technical scheme, the channel compression feature maps in steps 3.4 and 3.7 should satisfy xcompress, x'compress ∈ R^(H×T), where T represents the time (frame) dimension.
In a further embodiment, the outputs of the maximum pooling and the average pooling in step 3.5 should satisfy Fmax, Favg ∈ R^H.
further technical solution, step 3The output of the maximum pooling and average pooling in fig. 8 should satisfy:
in a further embodiment, the neural network mlp (-) in steps 3.2, 3.5, and 3.8 is required to satisfy that the output dimension is equal to the initial input dimension of the network.
According to a further technical scheme, in steps 3.3, 3.6 and 3.9, the output of sigmoid(·) must be expanded by the unsqueeze(·) function to the same dimensions as x0, x1 and x2, respectively.
In a further technical solution, the residual connection in step 3.10 serves the following purpose: it effectively prevents the degradation problem caused by an overly deep network.
Example two
The embodiment provides a voice endpoint detection system based on a local convolution block attention network.
A voice endpoint detection system based on a local convolution block attention network comprises:
a spectrogram module configured to: acquiring spectrogram data of voice data;
a local spectrogram module configured to: extracting N adjacent frames for each frame of data in the spectrogram data by using a locality-sensitive hashing algorithm to obtain frame-level local spectrogram data;
a convolution block attention module configured to: inputting the local spectrogram data into a local convolution block attention network, performing feature extraction through a convolution module, and performing attention operation after each convolution block sequentially through a channel attention module, a frequency spectrum attention module and a time attention module to obtain enhanced data;
a prediction module configured to: and inputting the enhanced data into a classifier, and performing voice/non-voice frame detection to obtain a prediction result.
It should be noted here that the spectrogram module, the local spectrogram module, the convolution block attention module and the prediction module correspond to steps S1 to S4 in the first embodiment; the modules are the same as the corresponding steps in the implemented examples and application scenarios, but are not limited to the disclosure of the first embodiment. It should be noted that the modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
EXAMPLE III
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the voice endpoint detection method based on the local convolution block attention network as described in the first embodiment above.
Example four
The present embodiment provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the processor implements the steps in the voice endpoint detection method based on the local convolution block attention network as described in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.