Civil aviation control voice recognition system based on artificial intelligence technology


1. A civil aviation control speech recognition system based on artificial intelligence technology, characterized in that the system comprises:

an audio segmentation module, configured to receive externally input control voice data and to analyze and divide the data into sub-data suitable for processing by the artificial intelligence voice recognition module;

an artificial intelligence voice recognition module, connected with the audio segmentation module and configured to receive the sub-data generated by the audio segmentation module and to complete recognition of the control instruction by analyzing the sub-data one by one;

a voice instruction correction module, connected with the artificial intelligence voice recognition module and configured to receive the recognized control instruction and to correct the instruction in combination with externally received data information;

a manual auditing module, connected with the artificial intelligence voice recognition module and configured to perform manual intervention on control instructions that the system cannot recognize, so that such instructions are audited manually;

a voice intention recognition module, connected with the voice instruction correction module and configured to judge the current control scene according to the content of the control voice and to infer the next possible control intention; and

a control instruction evaluation module, connected with the voice intention recognition module and configured to evaluate and verify the generated control intention result data to determine the correctness of the control instruction.

2. The civil aviation control voice recognition system according to claim 1, wherein the control voice data comprises: voice data recorded manually and voice signal data collected by the tower in real time.

3. The civil aviation control voice recognition system according to claim 1, wherein the control intention comprises: taxiing, entering and leaving stands, frequency handover, and transponder identification.

4. The civil aviation control voice recognition system according to claim 1, wherein the control instruction evaluation comprises: runway incursion prevention check, command safety check, go-around check, misleading command check, and fatigue alarm check.

5. The civil aviation control voice recognition system according to claim 1, wherein the externally received data information comprises: the aircraft type, flight number, secondary code, departure airport, destination airport, departure time, flight status information, and flight altitude.

Background

In recent years the civil aviation industry has developed rapidly, with a large number of aircraft and flights added every year, and the requirements on aviation safety and air traffic control support have become more stringent. Air traffic control in China is still high-intensity mental work that relies mainly on the subjective decisions of controllers, and human error is unavoidable. Statistically, human error accounts for about 80% of aviation accidents and has become an important cause affecting aviation safety. It is therefore necessary to introduce a voice recognition system that transcribes and records the instructions and readbacks of controllers and pilots in real time, so as to reduce misunderstandings, forgotten instructions, and similar situations.

In view of the above, and facing the increasingly serious air traffic congestion and flight delays in today's aviation industry as well as the heavy pressure of control and support work, controllers need to be assisted by computers through intelligent auxiliary means to overcome the human factors that are detrimental to control and safe operation, and the instructions issued by controllers need to be verified with the aid of an intelligent system to guarantee airport safety.

Disclosure of Invention

In order to solve the problems in the prior art, the invention aims to provide an intelligent air traffic control command system, so as to realize automated air traffic control of aircraft and reduce the probability of unsafe events caused by human factors in the existing airspace environment.

The civil aviation control voice recognition system based on artificial intelligence technology according to the invention is characterized by comprising:

The audio segmentation module receives externally input control voice data and analyzes and divides the data into sub-data suitable for processing by the artificial intelligence voice recognition module.

The artificial intelligence voice recognition module, connected with the audio segmentation module, receives the sub-data generated by the audio segmentation module and completes recognition of the control instruction by analyzing the sub-data one by one.

The voice instruction correction module, connected with the artificial intelligence voice recognition module, receives the recognized control instruction and corrects the instruction in combination with externally received data information.

The manual auditing module, connected with the artificial intelligence voice recognition module, performs manual intervention on control instructions that the system cannot recognize, so that such instructions are audited manually.

The voice intention recognition module, connected with the voice instruction correction module, judges the current control scene according to the content of the control voice and infers the next possible control intention.

The control instruction evaluation module, connected with the voice intention recognition module, evaluates and verifies the generated control intention result data to determine the correctness of the control instruction.

The control voice data comprises: voice data recorded manually and voice signal data collected by the tower in real time.

By adopting the above technical scheme, the invention applies artificial intelligence voice recognition technology and voice correction technology to the civil aviation air traffic control command system and performs real-time checks when a control instruction is issued, so that runway incursions can be prevented before the crew acts; the instruction checks include the runway incursion check, the command safety check, the go-around check, the misleading command check, and the like. The invention can accurately recognize and analyze control instructions, quantify the quality of the control process, and improve the quality of air traffic control operation; quantitative evaluation of control quality can therefore be achieved by performing statistical analysis on the control instructions. Through comprehensive analysis of call quality, instruction intention, and conflict resolution instructions, the probability of unsafe events caused by human factors is reduced.

Drawings

FIG. 1 is a schematic structural diagram of the civil aviation control voice recognition system based on artificial intelligence technology.

Detailed Description

The preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.

As shown in FIG. 1, the present invention, namely a civil aviation control speech recognition system based on artificial intelligence technology, includes: an audio segmentation module 1, an artificial intelligence voice recognition module 2, a voice instruction correction module 3, a manual auditing module 4, a voice intention recognition module 5, and a control instruction evaluation module 6.

The audio segmentation module 1 is used for receiving externally input control voice data and for analyzing and dividing the data into sub-data convenient for the artificial intelligence voice recognition module to process. The externally input control voice data comprise voice data recorded manually and voice signal data collected by the tower in real time.

After receiving the above control voice data, the audio segmentation module 1 segments them: it analyzes the input audio and determines the frame length of each audio frame from the number of sampling points per frame and the sampling rate per second. Once the frame length is determined, the audio data are protected and accurately segmented by a correlation algorithm, and audio sub-data that can be analyzed one by one are finally output.
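As a minimal sketch of this framing step, assuming illustrative frame and hop durations (25 ms and 10 ms) that the patent does not disclose, the sub-data can be produced by splitting the input signal into fixed-length frames derived from the sampling rate:

```python
import numpy as np

def frame_audio(samples: np.ndarray, sample_rate: int,
                frame_ms: float = 25.0, hop_ms: float = 10.0) -> np.ndarray:
    """Split a mono audio signal into overlapping fixed-length frames.

    frame_ms / hop_ms are illustrative values only; the patent merely states
    that the frame length is derived from the number of sampling points per
    frame and the sampling rate.
    """
    frame_len = int(sample_rate * frame_ms / 1000)   # samples per frame
    hop_len = int(sample_rate * hop_ms / 1000)       # samples between frame starts
    if len(samples) < frame_len:
        return np.empty((0, frame_len))
    n_frames = 1 + (len(samples) - frame_len) // hop_len
    return np.stack([samples[i * hop_len: i * hop_len + frame_len]
                     for i in range(n_frames)])

# Example: 1 second of audio sampled at 16 kHz -> 98 frames of 400 samples each.
frames = frame_audio(np.zeros(16000), sample_rate=16000)
print(frames.shape)
```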

The artificial intelligence voice recognition module 2 receives the sub-data generated by the audio segmentation module and completes recognition of the control instruction by analyzing and combining the sub-data one by one. Specifically:

The artificial intelligence voice recognition module is the core of the whole voice recognition system and is responsible for recognizing speech and converting it into a character sequence. During recognition, the speech is taken as input data, converted from audio form into a spectrogram, and passed into the engine. The data first pass through a speech feature extraction module composed of several convolutional neural network layers, which extracts audio features at different levels while greatly compressing the data and the number of parameters, improving training efficiency and preventing parameter overfitting. The data then enter a sequence learning module composed of four layers of bidirectional gated recurrent units, which imitates a human memory system and controls how much state information is remembered or forgotten at different moments, so as to learn the language sequence. Finally, the data enter three fully connected layers for classification and decision making, and the output sequence with the highest probability, i.e. the voice recognition result, is obtained through a connectionist temporal classification (CTC) module.
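The patent does not disclose concrete layer sizes, kernel shapes, or the output vocabulary, so the following is only a hedged sketch of the described layout (convolutional feature extraction, four bidirectional GRU layers, three fully connected layers, CTC output) written with PyTorch; all dimensions are assumptions:

```python
import torch
import torch.nn as nn

class ControlASRModel(nn.Module):
    """CNN feature extractor -> 4-layer bidirectional GRU -> 3 FC layers, decoded with CTC."""

    def __init__(self, n_mels: int = 80, vocab_size: int = 1000, hidden: int = 256):
        super().__init__()
        # Convolutional front end: extracts features and compresses time/frequency by 4x.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        cnn_out = 32 * ((n_mels + 3) // 4)           # channels * reduced frequency bins
        # Sequence learning: four bidirectional gated recurrent unit layers.
        self.gru = nn.GRU(cnn_out, hidden, num_layers=4,
                          batch_first=True, bidirectional=True)
        # Three fully connected layers; the last dimension adds the CTC blank symbol.
        self.fc = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, vocab_size + 1),
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 1, n_mels, time)
        x = self.cnn(spectrogram)                     # (batch, 32, n_mels/4, time/4)
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)
        x, _ = self.gru(x)                            # (batch, time/4, 2*hidden)
        return self.fc(x).log_softmax(dim=-1)         # per-frame log-probabilities for CTC

# Example forward pass; training would apply nn.CTCLoss to these log-probabilities.
model = ControlASRModel()
log_probs = model(torch.randn(2, 1, 80, 200))         # -> shape (2, 50, 1001)
print(log_probs.shape)
```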

Because the voice data set is large and training takes a long time, a GPU cluster is used for training, and a preliminary usable voice recognition model can be obtained within about one week. The training time can be further shortened as the number of GPUs increases. The module can also perform incremental training during operation, continuously optimizing the model and improving recognition accuracy.

The voice instruction correction module 3 receives the recognized control instruction and corrects the instruction in combination with externally received data information. Specifically:

After the voice instruction correction module receives the preliminarily processed instruction, it verifies the preliminary recognition result generated by the artificial intelligence voice recognition module against flight-related information acquired from an external system (including the aircraft type, flight number, secondary code, departure airport, destination airport, departure time, flight status information, and flight altitude), by checking whether the flight information in the instruction matches the flight plan and status in the actual system; when the recognized instruction is found to be inconsistent with the actual situation, the recognition result is corrected according to the actual data.
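As an illustrative sketch of this verification step (the field names, the fuzzy-matching rule, and the correction policy below are assumptions, not disclosed in the patent), a recognized callsign can be checked against the active flight plan list and replaced by the closest plan entry when it does not match exactly:

```python
import difflib
from dataclasses import dataclass

@dataclass
class FlightPlan:
    callsign: str        # e.g. "CSN3547"
    squawk: str          # secondary (transponder) code
    altitude_ft: int     # cleared flight altitude

def correct_callsign(recognized: str, active_plans: list[FlightPlan],
                     cutoff: float = 0.6) -> str:
    """Return the recognized callsign, corrected against the active flight plans.

    If the recognized callsign matches no active plan exactly, the closest plan
    callsign above the similarity cutoff is substituted; otherwise the value is
    left unchanged and would be flagged for manual auditing upstream.
    """
    known = [p.callsign for p in active_plans]
    if recognized in known:
        return recognized
    candidates = difflib.get_close_matches(recognized, known, n=1, cutoff=cutoff)
    return candidates[0] if candidates else recognized

plans = [FlightPlan("CSN3547", "4523", 7800), FlightPlan("CCA1331", "2145", 8400)]
print(correct_callsign("CSN3574", plans))   # -> "CSN3547" (digit transposition corrected)
```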

The manual auditing module 4 receives the control instructions that the system cannot recognize and performs manual intervention so that the instructions are audited manually. Specifically:

Control instruction phraseology is very complicated; the standard phraseology has many points that require attention, and each region has different standard requirements, so even a professional controller can hardly follow every rule. This module can handle various non-standard speech habits, including overly colloquial instructions, improper keyword order or wording, and incomplete readbacks that cannot be recognized; it can also analyze the intensity of silent periods in real time to detect the noise intensity. When the artificial intelligence voice recognition module cannot perform the preliminary voice recognition normally, manual recognition is carried out through manual intervention to ensure the accuracy of the voice recognition.
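A minimal sketch of the silence-intensity analysis mentioned above (the frame size and the RMS threshold are assumptions; the patent does not specify how noise intensity is measured): frames whose energy stays high during nominally silent periods can be flagged as noisy and routed to manual auditing:

```python
import numpy as np

def noisy_silence_ratio(samples: np.ndarray, sample_rate: int,
                        frame_ms: float = 20.0, rms_threshold: float = 0.02) -> float:
    """Fraction of frames whose RMS energy exceeds the threshold.

    A high ratio during periods that the recognizer marks as silence suggests
    strong background noise, so the segment can be escalated to manual auditing.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    if n_frames == 0:
        return 0.0
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms > rms_threshold).mean())

# Example: low-level white noise sampled at 8 kHz.
noise = 0.01 * np.random.randn(8000)
print(noisy_silence_ratio(noise, sample_rate=8000))
```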

The voice intention recognition module 5 judges the current control scene according to the content of the control voice and infers the next possible control intention. Specifically:

An aircraft follows a strict flight flow and strict dialogue phraseology while flying in the airspace, so the current control scene can be judged from the content of the control voice and the next possible control intention of the aircraft can be inferred. A scene reasoning subsystem can therefore be established, the core of which is a complete air traffic control scenario library. The control scenario library has a tree structure, and each node is a possible dialogue. For example, in the "modify altitude" scenario, the fixed dialogue format is: "Controller: China Southern 3547, climb to [assigned altitude]. Crew: Climbing to [assigned altitude] and maintaining, China Southern 3547." The "modify altitude" dialogue pattern in this example is relatively fixed and can therefore serve as one node in the tree-structured control scenario library. The current situation of a flight, such as taxiing, entering or leaving the stand, being handed over to another frequency, or identifying on the transponder, can be determined from the wording of the dialogue. Some scenes have several different dialogues depending on the actual situation; these are represented as parallel nodes in the scenario library, and each parallel node is assigned a probability according to statistical results. In use, the control voice recognition result is looked up and scored in the control scenario library, so that not only the current control scene is obtained but the control intention of the next stage can also be predicted.
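A minimal sketch of such a tree-structured scenario library (the node names, keyword matching rule, and probabilities below are illustrative assumptions): each node holds a dialogue pattern, child nodes represent the scenes that may follow, and parallel children carry probabilities derived from statistics:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ScenarioNode:
    name: str                                   # e.g. "modify_altitude"
    keywords: tuple[str, ...]                   # phrases that identify this scene
    probability: float = 1.0                    # prior among parallel siblings
    children: list[ScenarioNode] = field(default_factory=list)

def match_scene(node: ScenarioNode, transcript: str) -> ScenarioNode | None:
    """Depth-first search for the deepest node whose keywords all appear in the transcript."""
    hit = node if all(k in transcript for k in node.keywords) else None
    for child in node.children:
        deeper = match_scene(child, transcript)
        if deeper is not None:
            hit = deeper
    return hit

def predict_next(node: ScenarioNode) -> ScenarioNode | None:
    """Most probable next scene among the parallel child nodes."""
    return max(node.children, key=lambda c: c.probability, default=None)

# Illustrative fragment of a scenario library.
root = ScenarioNode("departure", (), children=[
    ScenarioNode("taxi", ("taxi",), 1.0, children=[
        ScenarioNode("line_up", ("line up",), 0.7),
        ScenarioNode("hold_short", ("hold short",), 0.3),
    ]),
])

scene = match_scene(root, "CSN3547 taxi to holding point runway 02R")
print(scene.name, "->", predict_next(scene).name)   # taxi -> line_up
```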

The control instruction evaluation module 6 evaluates and verifies the generated control intention result data to determine the correctness of the control instruction. Specifically:

The control instruction evaluation module mainly compares the finally generated intention result data with the actual situation to evaluate whether the current control instruction is correct. Human errors caused by deviations in control instructions can be roughly divided into five types, addressed by the runway incursion prevention, command safety check, go-around check, misleading command check, and fatigue alarm functions.
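A minimal sketch of such an evaluation step (the check names follow the five categories above, but the rule bodies and the intention/situation data fields are placeholder assumptions): each check inspects the inferred intention together with the current situation and returns any alert that fires:

```python
from typing import Callable, Optional

# The intention and situation dictionaries are placeholder structures; the patent
# does not specify the actual data fields, so the rule bodies below are assumptions.
Check = Callable[[dict, dict], Optional[str]]

def runway_incursion_check(intention: dict, situation: dict) -> Optional[str]:
    if intention.get("action") == "line_up" and situation.get("runway_occupied"):
        return "runway incursion risk: runway already occupied"
    return None

def go_around_check(intention: dict, situation: dict) -> Optional[str]:
    if intention.get("action") == "land" and not situation.get("runway_clear", True):
        return "go-around advised: runway not clear"
    return None

# Command safety, misleading command, and fatigue alarm checks would follow the same pattern.
CHECKS: list[Check] = [runway_incursion_check, go_around_check]

def evaluate_instruction(intention: dict, situation: dict) -> list[str]:
    """Run every check against the inferred intention and collect the alerts that fire."""
    return [alert for check in CHECKS if (alert := check(intention, situation))]

print(evaluate_instruction({"action": "line_up"}, {"runway_occupied": True}))
# ['runway incursion risk: runway already occupied']
```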

The above embodiments are merely preferred embodiments of the present invention and are not intended to limit the scope of the present invention; various changes may be made to the above embodiments. All simple and equivalent changes and modifications made according to the claims and the description of the present application fall within the scope of the claims of the present patent application. Details not described herein are omitted in order to avoid obscuring the invention.
