Method, device and computer storage medium for detecting log sequence anomalies


1. A method of detecting log sequence anomalies, the method comprising:

collecting raw log sequence data from a data source;

extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence in the log sequence data, to generate a fixed-dimension semantic vector for each layer;

calculating the probability distribution of the log sequence with a SoftMax function according to the semantic vector of the log sequence, and selecting the result corresponding to the maximum conditional probability as the output category.

2. The method according to claim 1, wherein the extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence in the log sequence data to generate a fixed-dimension semantic vector for each layer specifically comprises:

mapping the hierarchical structure of the log sequence to a word layer, a log layer and a log sequence layer, corresponding to words, logs and log sequences respectively;

the word layer represents each word in the log as a word semantic vector WordVec according to the importance of its part of speech and word frequency, the log layer generates a log semantic vector LogVec from the semantic vector WordVec of each word in the log, and the log sequence layer generates a log sequence semantic vector LogSeqVec from each log semantic vector LogVec of the log sequence.

3. The method according to claim 2, wherein the word layer represents each word in the log as a semantic vector WordVec according to the importance of part of speech and word frequency, and specifically comprises:

log sequence preprocessing, which comprises performing word segmentation and nonsense word removal on the log sequence, wherein the word segmentation divides each log in the original log sequence into words or tokens, and the nonsense word removal removes meaningless symbols left after word segmentation;

word embedding, which comprises mapping each word after log sequence preprocessing to a vector: a word $w_{ijk}$ generates a vector $v_{ijk}$ after embedding processing, wherein $w_{ijk}$ represents the kth word of the jth log of the ith log sequence;

calculating importance, including part-of-speech weight calculation and word-frequency weight calculation, wherein the part-of-speech weight calculation marks the part of speech of each word according to a natural language processing library and assigns a corresponding weight $pw_{ijk}$ to each part of speech; the word-frequency weight calculation computes the word-frequency weight of each word according to the term frequency-inverse document frequency method, specifically: the weight of the word $w_{ijk}$ computed by the term frequency-inverse document frequency method is recorded as $tw_{ijk}$, its term frequency as $tf_{ijk}$ and its inverse document frequency as $idf_{ijk}$; the calculation formulas are respectively:

$$tf_{ijk} = \frac{|\{L \in S_i : w_{ijk} \in L\}|}{|S_i|}, \qquad idf_{ijk} = \log \frac{|S|}{|\{S' \in S : w_{ijk} \in S'\}|}, \qquad tw_{ijk} = tf_{ijk} \times idf_{ijk}$$

wherein $|S_i|$ represents the number of logs contained in the ith log sequence; $|\{L \in S_i : w_{ijk} \in L\}|$ represents the number of logs in the ith log sequence that contain $w_{ijk}$; $|S|$ represents the total number of log sequences in the log sequence data S; and $|\{S' \in S : w_{ijk} \in S'\}|$ represents the number of log sequences in S that contain $w_{ijk}$;

generating the word semantic vector WordVec, specifically combining word embedding with the importance calculation to generate a corresponding semantic vector WordVec for the word $w_{ijk}$, the calculation formula being:

$$wv_{ijk} = (\alpha \cdot pw_{ijk} + \beta \cdot tw_{ijk}) \cdot v_{ijk}$$

wherein $wv_{ijk}$ represents the semantic vector WordVec generated for the word $w_{ijk}$, $\alpha$ and $\beta$ are factors mediating the part-of-speech weight $pw_{ijk}$ and the word-frequency weight $tw_{ijk}$, and $\alpha + \beta = 1$.

4. The method according to claim 2, wherein the log layer generates a log semantic vector LogVec according to a semantic vector WordVec of each word of the log, and specifically comprises:

receiving the semantic vector WordVec of each word of the log input by the word layer to form a WordVec sequence $(wv_{ij1}, wv_{ij2}, \ldots, wv_{ijm})$, wherein $wv_{ijm}$ represents the semantic vector WordVec of the mth word of the jth log of the ith log sequence;

extracting semantic features from a WordVec sequence by using a Bi-LSTM model based on an attention mechanism to generate a log semantic vector LogVec, which specifically comprises the following steps:

the state of the hidden layer at time t in the forward LSTM is noted $\overrightarrow{h_{ijt}}$ and the state of the hidden layer at time t in the backward LSTM is noted $\overleftarrow{h_{ijt}}$, both computed from the WordVec sequence; the output of the Bi-LSTM model at time t is obtained by splicing them, expressed as $h_{ijt} = [\overrightarrow{h_{ijt}}; \overleftarrow{h_{ijt}}]$; $h_{ijt}$ is passed through a fully connected network to generate its hidden representation $u_{ijt}$; the similarity of $u_{ijt}$ and a context vector $u_{ij}$ is computed to measure importance, and the importance weight $\alpha_{ijt}$ is obtained using SoftMax normalization; $\alpha_{ijt}$ and $h_{ijt}$ are multiplied and accumulated to obtain the log semantic vector $lv_{ij}$; the calculation formulas are respectively:

$$u_{ijt} = \tanh(W_{ij} h_{ijt} + b_{ij}), \qquad \alpha_{ijt} = \frac{\exp(u_{ijt}^{\top} u_{ij})}{\sum_{t} \exp(u_{ijt}^{\top} u_{ij})}, \qquad lv_{ij} = \sum_{t} \alpha_{ijt} h_{ijt}$$

wherein $W_{ij}$ and $b_{ij}$ are a randomly initialized weight matrix and bias.

5. The method according to claim 2, wherein the log sequence layer generates a log sequence semantic vector LogSeqVec according to each log semantic vector LogVec of the log sequence, and specifically comprises:

receiving each log semantic vector LogVec of the log sequence input by the log layer to form a LogVec sequence $(lv_{i1}, lv_{i2}, \ldots, lv_{in})$, wherein $lv_{in}$ represents the semantic vector LogVec of the nth log of the ith log sequence;

acquiring context information in a LogVec sequence by using a Bi-LSTM model based on an attention mechanism, which specifically comprises the following steps:

$\overrightarrow{h_{it}}$ and $\overleftarrow{h_{it}}$ respectively represent the hidden layer states of the forward LSTM and the backward LSTM at time t, computed from the LogVec sequence; the output of the Bi-LSTM model at time t is obtained by splicing and is expressed as $h_{it} = [\overrightarrow{h_{it}}; \overleftarrow{h_{it}}]$;

automatically learning and adjusting the importance of each log to the semantic expression of the log sequence to generate the log sequence semantic vector LogSeqVec, the specific calculation formulas being:

$$u_{it} = \tanh(W_i h_{it} + b_i), \qquad \alpha_{it} = \frac{\exp(u_{it}^{\top} u_i)}{\sum_{t} \exp(u_{it}^{\top} u_i)}, \qquad sv_i = \sum_{t} \alpha_{it} h_{it}$$

wherein $W_i$ and $b_i$ are a randomly initialized weight matrix and bias, $u_{it}$ is the hidden representation generated from $h_{it}$ by a single-layer fully connected network, $\alpha_{it}$ is the importance weight generated by SoftMax normalization, $u_i$ is a randomly initialized context vector, and $sv_i$ is the log sequence semantic vector LogSeqVec obtained by multiplying and accumulating $\alpha_{it}$ and $h_{it}$.

6. An apparatus for detecting log sequence anomalies, comprising:

a data acquisition module: for collecting raw log sequences from a data source;

a semantic vector generation module: for extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence, and generating a fixed-dimension semantic vector for each layer;

an anomaly detection module: for calculating the probability distribution of the log sequence with a SoftMax function according to the semantic vector of the log sequence, and selecting the result corresponding to the maximum conditional probability as the output category.

7. The apparatus of claim 6, wherein the semantic vector generation module comprises:

word layer: the word layer is used for representing each word in the log as a word semantic vector WordVec according to the importance of its part of speech and word frequency;

a log layer: the log layer is used for generating a log semantic vector LogVec according to the semantic vector WordVec of each word in the log;

log sequence layer: and the log sequence layer is used for generating a log sequence semantic vector LogSeqVec according to each log semantic vector LogVec of the log sequence.

8. An apparatus for detecting log sequence anomalies, comprising:

a processor; and a memory, wherein the memory has stored therein a computer-executable program that, when executed by the processor, performs the method of any of claims 1-5.

9. A computer-readable storage medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the method of any one of claims 1-5.

Background

Modern systems usually generate a large number of system logs at runtime, recording the running information of the system in text form and reflecting the system's important activity states at different key points. Log anomaly detection helps locate anomalies and analyze their causes, reducing troubleshooting time and ensuring normal operation of the system. Logs arranged in order of execution time form a log sequence. An abnormal log sequence may contain no abnormal log: every individual log may be normal, yet an abnormal execution order or an incomplete execution pattern of the sequence still constitutes an anomaly. The logs therefore have context, and log anomalies need to be detected from the perspective of the log sequence rather than of a single log. Log sequence anomalies generally include three types: execution order anomalies, operation anomalies and incompleteness anomalies. Currently, log sequence anomaly detection methods can be roughly divided into three categories: methods based on event count vectors (such as logistic regression, support vector machines, principal component analysis, invariant mining, log clustering, LSTM-AE, etc.), methods based on log key sequences (such as DeepLog, logkey2vec, etc.), and methods based on log semantics (such as LogAnomaly, LogRobust, etc.), among which anomaly detection of log data using log semantics is a current research hotspot. However, these conventional methods have the following three problems.

(1) The existing method needs to use a log analyzer to convert unstructured log data into a structured log template or a log key. However, due to the variety of log formats in different systems, log parsers do not fit all log types. Furthermore, the robustness and accuracy of the log parser can affect the performance of anomaly detection. Worse yet, the use of a log parser can result in the loss of text semantic information.

(2) Event-count-vector-based methods do not consider the execution order between logs, while log-key-sequence-based methods only consider whether the next log is eligible to occur, ignoring the integrity of the log sequence. Neither kind of method understands what the log sequence is actually executing, nor can it detect the three types of anomalies simultaneously.

(3) Existing methods based on log semantics simply employ word embedding techniques to map words into word vectors and then sum these vectors as the semantic representation of the log. Since a log is composed of words, the semantics and context of the words determine the semantics of the log. However, the same word can express different meanings in different logs, and the importance of a word influences the semantic expression of the log. Existing log-semantics-based methods do not consider the influence of word order and word importance on log semantics.

Disclosure of Invention

The invention provides a method, a device and a computer storage medium for detecting log sequence anomalies.

In a first aspect of the present invention, a method for detecting log sequence anomalies is provided, comprising:

collecting raw log sequence data from a data source;

extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence in the log sequence data, to generate a fixed-dimension semantic vector for each layer;

calculating the probability distribution of the log sequence with a SoftMax function according to the semantic vector of the log sequence, and selecting the result corresponding to the maximum conditional probability as the output category.

Further, the extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence in the log sequence data to generate a fixed-dimension semantic vector for each layer specifically comprises:

mapping the hierarchical structure of the log sequence to a word layer, a log layer and a log sequence layer, corresponding to words, logs and log sequences respectively;

the word layer represents each word in the log as a word semantic vector WordVec according to the importance of its part of speech and word frequency, the log layer generates a log semantic vector LogVec from the semantic vector WordVec of each word in the log, and the log sequence layer generates a log sequence semantic vector LogSeqVec from each log semantic vector LogVec of the log sequence.

Further, the word layer represents each word in the log as a semantic vector WordVec according to the importance of the part of speech and the word frequency, and specifically includes:

log sequence preprocessing, which comprises performing word segmentation and nonsense word removal on the log sequence, wherein the word segmentation divides each log in the original log sequence into words or tokens, and the nonsense word removal removes meaningless symbols left after word segmentation;

word embedding, which comprises mapping each word after log sequence preprocessing to a vector: a word $w_{ijk}$ generates a vector $v_{ijk}$ after embedding processing, wherein $w_{ijk}$ represents the kth word of the jth log of the ith log sequence;

calculating importance, including part-of-speech weight calculation and word-frequency weight calculation, wherein the part-of-speech weight calculation marks the part of speech of each word according to a natural language processing library and assigns a corresponding weight $pw_{ijk}$ to each part of speech; the word-frequency weight calculation computes the word-frequency weight of each word according to the term frequency-inverse document frequency method, specifically: the weight of the word $w_{ijk}$ computed by the term frequency-inverse document frequency method is recorded as $tw_{ijk}$, its term frequency as $tf_{ijk}$ and its inverse document frequency as $idf_{ijk}$; the calculation formulas are respectively:

$$tf_{ijk} = \frac{|\{L \in S_i : w_{ijk} \in L\}|}{|S_i|}, \qquad idf_{ijk} = \log \frac{|S|}{|\{S' \in S : w_{ijk} \in S'\}|}, \qquad tw_{ijk} = tf_{ijk} \times idf_{ijk}$$

wherein $|S_i|$ represents the number of logs contained in the ith log sequence; $|\{L \in S_i : w_{ijk} \in L\}|$ represents the number of logs in the ith log sequence that contain $w_{ijk}$; $|S|$ represents the total number of log sequences in the log sequence data S; and $|\{S' \in S : w_{ijk} \in S'\}|$ represents the number of log sequences in S that contain $w_{ijk}$;

generating the word semantic vector WordVec, specifically combining word embedding with the importance calculation to generate a corresponding semantic vector WordVec for the word $w_{ijk}$, the calculation formula being:

$$wv_{ijk} = (\alpha \cdot pw_{ijk} + \beta \cdot tw_{ijk}) \cdot v_{ijk}$$

wherein $wv_{ijk}$ represents the semantic vector WordVec generated for the word $w_{ijk}$, $\alpha$ and $\beta$ are factors mediating the part-of-speech weight $pw_{ijk}$ and the word-frequency weight $tw_{ijk}$, and $\alpha + \beta = 1$.

Further, the log layer generates a log semantic vector LogVec according to the semantic vector WordVec of each word of the log, and specifically includes:

receiving the semantic vector WordVec of each word of the log input by the word layer to form a WordVec sequence $(wv_{ij1}, wv_{ij2}, \ldots, wv_{ijm})$, wherein $wv_{ijm}$ represents the semantic vector WordVec of the mth word of the jth log of the ith log sequence;

extracting semantic features from a WordVec sequence by using a Bi-LSTM model based on an attention mechanism to generate a log semantic vector LogVec, which specifically comprises the following steps:

the state of the hidden layer at time t in the forward LSTM is noted $\overrightarrow{h_{ijt}}$ and the state of the hidden layer at time t in the backward LSTM is noted $\overleftarrow{h_{ijt}}$, both computed from the WordVec sequence; the output of the Bi-LSTM model at time t is obtained by splicing them, expressed as $h_{ijt} = [\overrightarrow{h_{ijt}}; \overleftarrow{h_{ijt}}]$; $h_{ijt}$ is passed through a fully connected network to generate its hidden representation $u_{ijt}$; the similarity of $u_{ijt}$ and a context vector $u_{ij}$ is computed to measure importance, and the importance weight $\alpha_{ijt}$ is obtained using SoftMax normalization; $\alpha_{ijt}$ and $h_{ijt}$ are multiplied and accumulated to obtain the log semantic vector $lv_{ij}$; the calculation formulas are respectively:

$$u_{ijt} = \tanh(W_{ij} h_{ijt} + b_{ij}), \qquad \alpha_{ijt} = \frac{\exp(u_{ijt}^{\top} u_{ij})}{\sum_{t} \exp(u_{ijt}^{\top} u_{ij})}, \qquad lv_{ij} = \sum_{t} \alpha_{ijt} h_{ijt}$$

wherein $W_{ij}$ and $b_{ij}$ are a randomly initialized weight matrix and bias.

Further, the log sequence layer generates a log sequence semantic vector LogSeqVec according to each log semantic vector LogVec of the log sequence, and the method specifically includes:

receiving each log semantic vector LogVec of the log sequence input by the log layer to form a LogVec sequence $(lv_{i1}, lv_{i2}, \ldots, lv_{in})$, wherein $lv_{in}$ represents the semantic vector LogVec of the nth log of the ith log sequence;

acquiring context information in a LogVec sequence by using a Bi-LSTM model based on an attention mechanism, which specifically comprises the following steps:

$\overrightarrow{h_{it}}$ and $\overleftarrow{h_{it}}$ respectively represent the hidden layer states of the forward LSTM and the backward LSTM at time t, computed from the LogVec sequence; the output of the Bi-LSTM model at time t is obtained by splicing and is expressed as $h_{it} = [\overrightarrow{h_{it}}; \overleftarrow{h_{it}}]$;

automatically learning and adjusting the importance of each log to the semantic expression of the log sequence to generate the log sequence semantic vector LogSeqVec, the specific calculation formulas being:

$$u_{it} = \tanh(W_i h_{it} + b_i), \qquad \alpha_{it} = \frac{\exp(u_{it}^{\top} u_i)}{\sum_{t} \exp(u_{it}^{\top} u_i)}, \qquad sv_i = \sum_{t} \alpha_{it} h_{it}$$

wherein $W_i$ and $b_i$ are a randomly initialized weight matrix and bias, $u_{it}$ is the hidden representation generated from $h_{it}$ by a single-layer fully connected network, $\alpha_{it}$ is the importance weight generated by SoftMax normalization, $u_i$ is a randomly initialized context vector, and $sv_i$ is the log sequence semantic vector LogSeqVec obtained by multiplying and accumulating $\alpha_{it}$ and $h_{it}$.

Further, the method for detecting log sequence anomalies further comprises storing the log sequence data in a database.

Further, the method for detecting log sequence anomalies further comprises displaying the output category and, when the output category is abnormal, locating the anomaly according to the time and position at which it occurred.

In a second aspect of the present invention, an apparatus for detecting log sequence anomalies is provided, comprising: a data acquisition module, for collecting raw log sequences from a data source; a semantic vector generation module, for extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence and generating a fixed-dimension semantic vector for each layer; and an anomaly detection module, for calculating the probability distribution of the log sequence with a SoftMax function according to the semantic vector of the log sequence, and selecting the result corresponding to the maximum conditional probability as the output category.

Further, the semantic vector generation module includes: a word layer, for representing each word in the log as a word semantic vector WordVec according to the importance of its part of speech and word frequency; a log layer, for generating a log semantic vector LogVec from the semantic vector WordVec of each word in the log; and a log sequence layer, for generating a log sequence semantic vector LogSeqVec from each log semantic vector LogVec of the log sequence.

In a third aspect of the present invention, an apparatus for detecting log sequence anomalies is provided, comprising: a processor; and a memory, wherein the memory has stored therein a computer-executable program that, when executed by the processor, performs the above-described method of detecting log sequence anomalies.

In a fourth aspect of the present invention, a computer-readable storage medium is provided, on which instructions are stored which, when executed by a processor, cause the processor to perform the above-described method of detecting log sequence anomalies.

The invention provides a method, a device and a computer storage medium for detecting log sequence anomalies. Raw log sequence data are collected from a data source, semantic information of each layer is extracted in turn according to the hierarchical structure of the log sequences in the data, and a fixed-dimension semantic vector is generated for each layer. No log parser is used and log types need not be considered, so the data source can be a system server, an application server, a database, or the like. Because a log sequence is formed by logs in execution order, both the execution order between logs and the integrity of the log sequence are taken into account, and three kinds of log sequence anomaly can be detected simultaneously: execution order anomalies, operation anomalies and incompleteness anomalies. In addition, the word embedding technique fully considers the semantics and context of words in determining log semantics, and part-of-speech weight calculation and word-frequency weight calculation are performed on top of word embedding. The beneficial effects finally achieved are as follows: compared with existing log sequence anomaly detection methods and systems, the method, device and computer storage medium provided by the invention can extract richer semantic features from words, logs and log sequences, so that the device learns a more accurate hierarchical semantic expression, achieves a better detection effect, and further improves the capability of detecting the three different kinds of anomaly.

Drawings

FIG. 1 is a schematic structural diagram of an apparatus for detecting log sequence anomalies according to an embodiment of the present invention;

FIG. 2 is a flowchart illustrating a method for detecting log sequence anomalies according to an embodiment of the present invention;

FIG. 3 is a diagram illustrating a hierarchical structure of a log sequence in an embodiment of the invention;

FIG. 4 is a schematic diagram of a computing device architecture in an embodiment of the invention.

Detailed Description

In order to further describe the technical scheme of the present invention in detail, the present embodiment is implemented on the premise of the technical scheme of the present invention, and detailed implementation modes and specific steps are given.

The embodiments of the invention relate to a method, a device and a computer storage medium for detecting log sequence anomalies; reference is made to FIGS. 1-4. FIG. 2 is a schematic flow chart of the method for detecting log sequence anomalies, which comprises the following specific steps:

s01, data acquisition: raw log sequence data is collected from data sources including, but not limited to, system servers, application servers, databases.

S02, data storage: the collected log sequence data are stored in a specified storage medium, which comprises online and offline parts; alternatively, this step can be skipped without storing the data, proceeding directly to the next step.

S03, original log sequence queue: the logs form log sequences according to their execution order, and the log sequences are arranged into a log sequence queue, ensuring that the system can process multiple log sequences in parallel.

S04, LayerLog detection of log sequence anomalies: semantic information of each layer is extracted in turn according to the hierarchical structure of the log sequences in the log sequence data, generating a fixed-dimension semantic vector for each layer; the probability distribution of the log sequence is then calculated with a SoftMax function according to the semantic vector of the log sequence, and the result corresponding to the maximum conditional probability is selected as the output category.

S05, result display and anomaly locating: the output category (normal or abnormal) is displayed, and the anomaly is located according to its occurrence time and position.

The log sequence is arranged in order of log execution time. When a log sequence is detected, the execution time of each log is retained and recorded; once an anomaly is detected, it can be located according to the execution time and its position in the log sequence.

The hierarchical structure of the log sequence is mapped to a word layer, a log layer and a log sequence layer, corresponding to words, logs and log sequences respectively. The word layer represents each word in the log as a word semantic vector WordVec according to the importance of its part of speech and word frequency, the log layer generates a log semantic vector LogVec from the semantic vector WordVec of each word in the log, and the log sequence layer generates a log sequence semantic vector LogSeqVec from each log semantic vector LogVec of the log sequence.

The specific implementation of S04, detecting log sequence anomalies with LayerLog, is as follows: a log sequence is composed of logs and a log is composed of words, forming a three-layer hierarchical structure in which the semantics of each layer influence the semantic vector expression of the final log sequence. The layers where the words, the logs and the log sequence reside are named the word layer (Word Layer), the log layer (Log Layer) and the log sequence layer (LogSeq Layer), and the hierarchical structure of the log sequence is named the "word-log-log sequence" hierarchy, as shown in FIG. 3.

Suppose the ith log sequence consists of n logs and its jth log consists of m words. $S_i$ denotes the ith log sequence, $L_{ij}$ denotes the jth log of the ith log sequence, and $w_{ijk}$ denotes the kth word of the jth log of the ith log sequence, where $j \in [1, n]$ and $k \in [1, m]$. The composition of the log sequence can then be expressed as $S_i = (L_{i1}, L_{i2}, \ldots, L_{in})$ and $L_{ij} = (w_{ij1}, w_{ij2}, \ldots, w_{ijm})$.

In S04, after the original log sequence is obtained, LayerLog extracts the semantic information of each layer of the log data to generate fixed-dimension semantic vectors, and then judges whether the log sequence is abnormal through the semantic vector of the log sequence. In this embodiment, the semantic vectors corresponding to the word layer, the log layer and the log sequence layer are named WordVec, LogVec and LogSeqVec respectively. More specifically, the semantic vector WordVec of the word $w_{ijk}$ is denoted $wv_{ijk}$, the semantic vector LogVec of the log $L_{ij}$ is denoted $lv_{ij}$, and the semantic vector LogSeqVec of the log sequence $S_i$ is denoted $sv_i$.

At the word layer, LayerLog combines word embedding with the part-of-speech and word-frequency importance calculation and generates a corresponding semantic vector WordVec for each word. All WordVec in a log are then passed to the log layer to form a WordVec sequence, and a corresponding log semantic vector LogVec is generated by an attention-based Bi-LSTM model. The LogVec of each log in the log sequence is then passed to the LogSeq layer to form a LogVec sequence, and the corresponding LogSeqVec is generated with another attention-based Bi-LSTM model. LayerLog judges whether the log sequence is abnormal through the generated LogSeqVec.

The word layer represents each word in the log as a semantic vector WordVec according to the importance of its part of speech and word frequency; specifically, the steps for generating the semantic vector $wv_{ijk}$ of the word $w_{ijk}$ are as follows:

S041, log sequence preprocessing: the raw log sequence data are processed in light of the text features of log sequences, including word segmentation and nonsense word removal. Word segmentation divides each log in the original log sequence into words or tokens; since words in English-format log data are separated by spaces, the space can be used as the separator for segmenting a log. Nonsense word removal removes meaningless symbols left after word segmentation, including but not limited to punctuation marks and separators, which contribute nothing positive to the semantic expression of the log. Unlike common text data (such as news text or comment text), the heterogeneous text data recording a system's running state have unique domain characteristics, so two special processing measures are designed for preprocessing log data: (1) although prepositions (e.g., "from", "to") and quantifiers (e.g., "a", "the") often carry no meaning in natural language understanding, they are retained, because in log analysis all words carry semantic information, only to different degrees; (2) compound words (e.g., "PacketResponder", "addStoredBlock") are deliberately not segmented but treated as special words, because segmenting a compound word loses semantics; the overall semantics of the compound word are preserved by constructing its corresponding WordVec.
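As an illustrative, non-limiting sketch of this preprocessing step (the function name and the symbol-stripping rule are assumptions, not part of the embodiment), the segmentation and nonsense word removal could look as follows in Python:

```python
import re

def preprocess_log(raw_log):
    """Segment a raw log line on spaces and drop meaningless symbols.

    Prepositions and quantifiers are deliberately kept, and compound
    words such as "PacketResponder" are left whole, per the two special
    processing measures described above.
    """
    cleaned = []
    for tok in raw_log.strip().split():      # word segmentation by spaces
        tok = re.sub(r"[^\w]", "", tok)      # strip punctuation/separators
        if tok:                              # discard pure-symbol tokens
            cleaned.append(tok)
    return cleaned

print(preprocess_log("PacketResponder 1 for block blk_38865049064139660 terminating."))
# -> ['PacketResponder', '1', 'for', 'block', 'blk_38865049064139660', 'terminating']
```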

S042, word embedding: each word after log sequence preprocessing is mapped to a vector; the word $w_{ijk}$ (the kth word of the jth log of the ith log sequence) generates a vector $v_{ijk}$ after embedding processing. This embodiment specifically uses Word2Vec, which maps the sparse one-hot coded vector to a dense vector of a given dimension with a single-layer neural network (CBOW or Skip-Gram).
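A minimal sketch of this word-embedding step, assuming the gensim library and its Word2Vec class (the embodiment specifies only Word2Vec with CBOW or Skip-Gram, not a particular implementation); vector_size follows embedding_size = 50 from Table 2:

```python
from gensim.models import Word2Vec

# A toy corpus: each log is the token list produced by preprocessing.
corpus = [
    ["PacketResponder", "1", "for", "block", "terminating"],
    ["Received", "block", "of", "size", "67108864", "from"],
]

# sg=0 selects CBOW, sg=1 would select Skip-Gram (gensim >= 4.0 API assumed).
model = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1, sg=0)

v_ijk = model.wv["block"]   # dense vector v_ijk for the word "block"
print(v_ijk.shape)          # (50,)
```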

S043, importance calculation: the words in log data differ in importance, mainly in two respects. First, parts of speech differ: in a log, content words are usually dominant and function words complementary, that is, content words are generally more important than function words. Second, the same word may be important in some log sequences but less important in others, depending on the word's context and the log sequence it belongs to. Two corresponding methods are therefore used to calculate the importance of a word: part-of-speech weight calculation and word-frequency weight calculation. The part-of-speech weight calculation tags the part of speech of each word using the NLTK (Natural Language Toolkit) natural language processing library developed at the University of Pennsylvania and assigns each part of speech a corresponding weight. Weights are assigned on the principle that content words (verbs, nouns, adjectives and adverbs) influence semantic expression more than function words (conjunctions, qualifiers and prepositions). Words with higher part-of-speech weights are more important and have a greater impact on log semantics. The weights corresponding to the parts of speech after tagging are shown in Table 1, where POS refers to the part of speech, Abbr is its abbreviation, and $pw_{ijk}$ denotes the weight of the word $w_{ijk}$ after part-of-speech tagging; the larger $pw_{ijk}$ is, the more important the word.

Table 1: weights for part-of-speech correspondence

The word-frequency weight calculation computes the word-frequency weight of each word according to the Term Frequency-Inverse Document Frequency method (TF-IDF). For log sequences with the three-layer structure, TF-IDF rests on the assumption that a word is more discriminative and important if it occurs frequently in one log sequence and rarely in other log sequences. Specifically, the weight of the word $w_{ijk}$ computed by TF-IDF is recorded as $tw_{ijk}$, its term frequency as $tf_{ijk}$ and its inverse document frequency as $idf_{ijk}$; the calculation formulas are respectively:

$$tf_{ijk} = \frac{|\{L \in S_i : w_{ijk} \in L\}|}{|S_i|}, \qquad idf_{ijk} = \log \frac{|S|}{|\{S' \in S : w_{ijk} \in S'\}|}, \qquad tw_{ijk} = tf_{ijk} \times idf_{ijk}$$

wherein $|S_i|$ is the number of logs contained in the ith log sequence, $|\{L \in S_i : w_{ijk} \in L\}|$ is the number of logs in the ith log sequence that contain $w_{ijk}$, $|S|$ is the total number of log sequences in the log sequence data S, and $|\{S' \in S : w_{ijk} \in S'\}|$ is the number of log sequences in S that contain $w_{ijk}$. The larger $tw_{ijk}$ is, the more important the word.
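A sketch of this TF-IDF computation under the reconstructed formulas above (the function name and the toy data are illustrative only): tf counts the logs within the ith sequence that contain the word, and idf counts the sequences in S that contain it.

```python
import math

def tfidf_weight(word, i, S):
    """Word-frequency weight tw_ijk of `word` with respect to sequence S[i].

    S is the dataset: a list of log sequences, each a list of logs,
    each log a list of tokens.
    """
    seq = S[i]
    tf = sum(1 for log in seq if word in log) / len(seq)
    n_containing = sum(1 for s in S if any(word in log for log in s))
    idf = math.log(len(S) / n_containing)
    return tf * idf

S = [
    [["Received", "block"], ["PacketResponder", "terminating"]],
    [["Received", "block"], ["Deleting", "block"]],
]
print(tfidf_weight("PacketResponder", 0, S))  # rare across sequences -> weight > 0
print(tfidf_weight("block", 0, S))            # occurs in every sequence -> idf = 0
```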

S043, generating the word semantic vector WordVec: word embedding is combined with the importance calculation to generate a corresponding semantic vector WordVec for the word $w_{ijk}$, the calculation formula being:

$$wv_{ijk} = (\alpha \cdot pw_{ijk} + \beta \cdot tw_{ijk}) \cdot v_{ijk}$$

wherein $wv_{ijk}$ represents the semantic vector WordVec generated for the word $w_{ijk}$, $\alpha$ and $\beta$ are factors mediating the part-of-speech weight $pw_{ijk}$ and the word-frequency weight $tw_{ijk}$, and $\alpha + \beta = 1$.

These steps generate a corresponding semantic vector WordVec for each word; all WordVec in a log are then passed to the log layer to form a WordVec sequence.
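Combining the pieces, a minimal sketch of the WordVec formula $wv_{ijk} = (\alpha \cdot pw_{ijk} + \beta \cdot tw_{ijk}) \cdot v_{ijk}$, with $\alpha = 0.6$ and $\beta = 0.4$ taken from Table 2 (the reconstruction of the formula itself is inferred from the surrounding definitions):

```python
import numpy as np

ALPHA, BETA = 0.6, 0.4   # influence factors from Table 2; alpha + beta = 1

def word_vec(v_ijk, pw_ijk, tw_ijk):
    """WordVec: scale the embedding by the combined importance weight."""
    return (ALPHA * pw_ijk + BETA * tw_ijk) * np.asarray(v_ijk)

wv_ijk = word_vec(np.random.randn(50), pw_ijk=1.0, tw_ijk=0.35)
print(wv_ijk.shape)   # (50,) -- a fixed-dimension word semantic vector
```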

The log layer generates the log semantic vector LogVec from the semantic vector WordVec of each word of the log. The goal of the log layer is to generate a representative semantic vector LogVec for the log; an attention-based Bi-LSTM is introduced to extract semantic features from the WordVec sequence, which can both capture the context information between words and automatically learn and adjust the importance weight of each word for the semantic expression of the log.

LSTM is a variant of the RNN that introduces three gating mechanisms to alleviate the vanishing gradient problem. Bi-LSTM combines a forward LSTM and a backward LSTM, can encode bidirectional sequence information, and is well suited to modeling sequence data. A log consists of words, the WordVec of each word in the log forms a WordVec sequence, and Bi-LSTM captures the bidirectional semantic dependencies of the WordVec sequence well. The specific steps are as follows:

S044, receiving the semantic vector WordVec of each word of the log input by the word layer to form a WordVec sequence $(wv_{ij1}, wv_{ij2}, \ldots, wv_{ijm})$, wherein $wv_{ijm}$ represents the semantic vector WordVec of the mth word of the jth log of the ith log sequence;

S045, extracting semantic features from the WordVec sequence with the attention-based Bi-LSTM model to generate the log semantic vector LogVec, specifically: the state of the hidden layer at time t in the forward LSTM is noted $\overrightarrow{h_{ijt}}$ and the state of the hidden layer at time t in the backward LSTM is noted $\overleftarrow{h_{ijt}}$, both computed from the WordVec sequence; the output of the Bi-LSTM model at time t is obtained by splicing them, expressed as $h_{ijt} = [\overrightarrow{h_{ijt}}; \overleftarrow{h_{ijt}}]$. Not all words contribute equally to the semantic representation of the log; to represent log semantics more accurately, an attention mechanism is introduced to extract the words that are significant for the log's semantic representation. $h_{ijt}$ is passed through a fully connected network to generate its hidden representation $u_{ijt}$; the similarity of $u_{ijt}$ and a context vector $u_{ij}$ is computed to measure importance, and the importance weight $\alpha_{ijt}$ is obtained using SoftMax normalization; $\alpha_{ijt}$ and $h_{ijt}$ are multiplied and accumulated to obtain the log semantic vector $lv_{ij}$. The calculation formulas are respectively:

$$u_{ijt} = \tanh(W_{ij} h_{ijt} + b_{ij}), \qquad \alpha_{ijt} = \frac{\exp(u_{ijt}^{\top} u_{ij})}{\sum_{t} \exp(u_{ijt}^{\top} u_{ij})}, \qquad lv_{ij} = \sum_{t} \alpha_{ijt} h_{ijt}$$

wherein $W_{ij}$ and $b_{ij}$ are a randomly initialized weight matrix and bias.
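The embodiment implements the attention-based Bi-LSTM under TensorFlow 1.13.1; as a framework-free illustration, the sketch below computes only the attention pooling over already-computed Bi-LSTM outputs (the hidden states are random stand-ins), following the three formulas above:

```python
import numpy as np

def attention_pool(H, W, b, u_ctx):
    """Attention pooling: u_t = tanh(W h_t + b), alpha = softmax(u_t . u_ctx).

    H: (T, 2*hidden_size) matrix of spliced forward/backward states h_t.
    Returns the weighted sum of the h_t (LogVec at the log layer).
    """
    U = np.tanh(H @ W + b)                 # hidden representations u_t
    scores = U @ u_ctx                     # similarity with the context vector
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # SoftMax importance weights
    return alpha @ H                       # multiply and accumulate

T, d, a = 12, 100, 50                      # 12 words; 2*hidden_size = 100; atten_size = 50
H = np.random.randn(T, d)                  # stand-in Bi-LSTM outputs over a WordVec sequence
W, b = np.random.randn(d, a), np.zeros(a)  # randomly initialized, as in the text
u_ctx = np.random.randn(a)
lv_ij = attention_pool(H, W, b, u_ctx)
print(lv_ij.shape)                         # (100,) -> the log semantic vector LogVec
```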

The log sequence layer generates the log sequence semantic vector LogSeqVec from each log semantic vector LogVec of the log sequence. Because a log sequence is composed of logs in time order, certain relations exist between the logs, involving their number, type and operation content. These relations are reflected in the semantics of the log sequence, so a semantic representation of the log sequence must be generated from the LogVec sequence: an attention-based Bi-LSTM model is used to acquire the context information in the LogVec sequence and to automatically learn and adjust the importance of each log to the semantic representation of the sequence. Note that the two attention-based Bi-LSTMs of the log layer and the log sequence layer are different in design and cannot share the same parameters, but both are trained together in LayerLog. The specific steps are as follows:

S046, receiving each log semantic vector LogVec of the log sequence input by the log layer to form a LogVec sequence $(lv_{i1}, lv_{i2}, \ldots, lv_{in})$, wherein $lv_{in}$ represents the semantic vector LogVec of the nth log of the ith log sequence;

s047 Bi-LSTM mode based on attention mechanismThe type obtains context information in the LogVec sequence,andrespectively represented as hidden layer states of forward LSTM and backward LSTM at time t,andcan be reduced toAndobtaining the output of the Bi-LSTM model at the time t through a splicing modeIs expressed asThe log has different influences on the semantic expression of the log sequence, so that an attention mechanism is introduced again, the importance degree of the log is automatically learned and adjusted to form more accurate semantic expression of the log sequence, and the specific calculation formula is as follows:

wherein, WiAnd biFor the weight vector and the bias to be randomly initialized,is thatA hidden representation generated over a single-layer fully-connected network,is an importance weight, u, generated by a SoftMax function normalizationiIs a random initialization parameter, sviIs thatAndand multiplying and accumulating to obtain a log sequence semantic vector LogVec.
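As noted above, the log layer and log sequence layer attentions are trained jointly but do not share parameters; a short sketch shows the sequence layer applying the same pooling with its own, separately initialized parameters (attention_pool is repeated here from the previous sketch to keep the snippet self-contained):

```python
import numpy as np

def attention_pool(H, W, b, u_ctx):
    """Same attention pooling as in the previous sketch."""
    U = np.tanh(H @ W + b)
    scores = U @ u_ctx
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ H

n, d, a = 8, 100, 50              # 8 logs; 2*hidden_size = 100; atten_size = 50
H_seq = np.random.randn(n, d)     # stand-in Bi-LSTM outputs over the LogVec sequence

# Separate, independently initialized parameters for the log sequence layer:
W_i, b_i = np.random.randn(d, a), np.zeros(a)
u_i = np.random.randn(a)

sv_i = attention_pool(H_seq, W_i, b_i, u_i)   # log sequence semantic vector LogSeqVec
print(sv_i.shape)                             # (100,)
```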

The semantic vector LogSeqVec of the log sequence is extracted from the output of the log sequence layer; anomaly detection of the log sequence is then treated as a binary classification problem: the probability distribution of the log sequence is calculated with a SoftMax function, and the detection result corresponding to the maximum conditional probability is selected as the output category. The calculation process is:

$$p_i = \mathrm{SoftMax}(w \cdot sv_i + b), \qquad y_i = \arg\max(p_i)$$

wherein $w$ and $b$ are a randomly initialized weight vector and bias, $p_i$ represents the conditional probability distribution of the log sequence $S_i$, and $y_i$ is the detection category of $S_i$.
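A minimal sketch of this classification head (which class index means "abnormal" is an assumption; the embodiment only specifies a SoftMax over the categories with argmax selection):

```python
import numpy as np

def detect(sv_i, w, b):
    """p_i = SoftMax(w sv_i + b); y_i = argmax(p_i). Class 1 = abnormal (assumed)."""
    z = w @ sv_i + b                 # logits for {normal, abnormal}
    p = np.exp(z - z.max())
    p /= p.sum()                     # conditional probability distribution p_i
    return int(np.argmax(p)), p

w = np.random.randn(2, 100)          # randomly initialized weights (2 classes)
b = np.zeros(2)                      # randomly initialized bias
y_i, p_i = detect(np.random.randn(100), w, b)
print(y_i, p_i)                      # detection category and probability distribution
```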

In the following, referring to FIG. 1, the system corresponding to the method shown in FIG. 2 is described. An apparatus 100 for detecting log sequence anomalies according to an embodiment of the present disclosure comprises: a data acquisition module 101, for collecting raw log sequences from a data source; a semantic vector generation module 102, for extracting semantic information of each layer in turn according to the hierarchical structure of the log sequence and generating a fixed-dimension semantic vector for each layer; and an anomaly detection module 103, for calculating the probability distribution of the log sequence with a SoftMax function according to the semantic vector of the log sequence, and selecting the result corresponding to the maximum conditional probability as the output category.

The semantic vector generation module 102 includes: a word layer, for representing each word in the log as a word semantic vector WordVec according to the importance of its part of speech and word frequency; a log layer, for generating a log semantic vector LogVec from the semantic vector WordVec of each word in the log; and a log sequence layer, for generating a log sequence semantic vector LogSeqVec from each log semantic vector LogVec of the log sequence.

For the specific working process of the apparatus 100 for detecting log sequence anomalies, refer to the description of the method above, which is not repeated here.

Furthermore, an apparatus according to an embodiment of the present invention may also be implemented by means of the computing device architecture shown in FIG. 4. As shown in FIG. 4, the architecture comprises a computer system 201, a system bus 203, one or more CPUs 204, input/output components 202, memory 205, and the like. The memory 205 may store various data or files used in computer processing and/or communications as well as program instructions executed by the CPU. The architecture shown in FIG. 4 is merely exemplary, and one or more of its components may be adjusted as needed to implement different devices.

Embodiments of the invention may also be implemented as a computer-readable storage medium. A computer-readable storage medium according to an embodiment has computer-readable instructions stored thereon. The computer readable instructions, when executed by a processor, may perform a method according to embodiments of the invention as described with reference to the above figures.

Based on the method, device and computer storage medium for detecting log sequence anomalies, the embodiment of the invention carries out experimental comparisons on the HDFS and BGL datasets. The details of the two datasets are as follows. (1) HDFS dataset: the HDFS dataset contains 11,175,629 logs generated by Hadoop on 200 Amazon EC2 nodes. Each log in the HDFS dataset contains a "blockID" identifier, so a session window is chosen to divide the log sequences: logs with the same "blockID" identifier are grouped together in chronological order to form a log sequence. From the 11,175,629 logs, a total of 575,061 log sequences were assembled. Normal or abnormal labels for these log sequences have been assigned by experts in the Hadoop field; the number of normal log sequences is 558,223, accounting for about 97.1%, and the number of abnormal log sequences is 16,838, about 2.9%. (2) BGL dataset: the BGL dataset was generated by a Blue Gene/L supercomputer consisting of 128K processors. It contains 4,747,963 logs, of which 348,460 are anomalous. Unlike the HDFS dataset, BGL logs contain no specific identifier, so the BGL dataset is partitioned into log sequences using fixed or sliding windows; the choice of window size and step size affects the length of the log sequences and the judgment of whether a sequence is normal or abnormal. For a divided log sequence, if an abnormal log entry occurs in it, the sequence is regarded as abnormal. The effectiveness of LayerLog for log sequence anomaly detection is measured with precision, recall and F1 score. Precision (P) is the proportion of true log sequence anomalies among all detected anomalies, P = TP/(TP+FP). Recall (R) is the proportion of true log sequence anomalies actually detected among all anomalies, R = TP/(TP+FN). The F1 score is the harmonic mean of precision and recall, F1 = 2 × P × R/(P+R). TP is the number of correctly detected abnormal log sequences, FP the number of normal log sequences erroneously detected as abnormal, and FN the number of abnormal log sequences erroneously detected as normal.
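For reference, the three evaluation metrics as defined above, computed in a short sketch (the counts are made up for illustration only):

```python
def evaluate(tp, fp, fn):
    """Precision, recall and F1 from the TP/FP/FN counts defined above."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

print(evaluate(tp=980, fp=10, fn=20))  # -> (0.98989..., 0.98, 0.98492...)
```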

The embodiment is written in Python 3.5.2, based on the TensorFlow 1.13.1 deep learning framework. The CPU is an i9-9820X, the memory is 48 GB, the graphics card is a 2080 Ti, the solid state drive capacity is 520 GB, and the operating system is Linux Ubuntu 16.04.6 LTS. The parameter settings of the system and their explanations are shown in Table 2 below.

Table 2: system parameter setting

Parameter(s) Value taking Explanation of the invention
α 0.6 Generating impact factors for word vectors
β 0.4 Generating impact factors for word vectors
hidden_size 50 Number of nodes per layer of Bi-LSTM
atten_size 50 Number of nodes per layer of Attention mechanism
embedding_size 50 Dimensionality of semantic vector

LayerLog's performance was verified on the HDFS and BGL datasets and compared against LR, SVM, PCA, IM, LogClustering and LSTM-AE (methods based on event count vectors), DeepLog (a method based on log key sequences), and LogAnomaly (a method based on log semantics). Tables 3 and 4 show the comparison results on the HDFS and BGL datasets.

Table 3: experimental results on HDFS dataset

P R F1
LR 0.98 0.86 0.92
SVM 1.00 0.86 0.93
PCA 1.00 0.65 0.79
IM 0.86 0.82 0.84
LogCluster 1.00 0.46 0.63
LSTM-AE 0.89 0.88 0.88
DeepLog 0.95 0.93 0.94
LogAnomaly 0.96 0.94 0.95
LayerLog 0.99 0.98 0.99

Table 4: experimental results on the BGL dataset

From the comparison results it can be seen that LayerLog outperforms the other methods: its F1 score is 0.99 on the HDFS dataset and 0.98 on the BGL dataset. The methods based on event count vectors cannot achieve high precision and high recall at the same time. For example, on the HDFS dataset the precision of the support vector machine, principal component analysis and LogCluster is high, even reaching 1, while their recall rates are relatively low at 0.86, 0.65 and 0.46 respectively, resulting in lower F1 scores. Moreover, because their scores differ strongly between the two datasets, the robustness of the event-count-vector-based methods is poor; for example, the F1 score of LR is 0.92 on the HDFS dataset but only 0.82 on the BGL dataset. As for LSTM-AE, although its scores do not differ significantly between the two datasets, its F1 score does not exceed 0.9.

Methods based on log key sequences generally perform better than methods based on event count vectors. The F1 scores of DeepLog on both datasets are above 0.9, which illustrates the importance of the log execution order and verifies that DeepLog is more robust in log sequence anomaly detection.

The methods based on log semantics obtain the best results, showing that understanding the execution content of a log sequence from the semantic perspective improves the capability of detecting the three different kinds of anomaly. LayerLog performs better than LogAnomaly, with an F1 score 4 percentage points higher on the HDFS dataset and 2 percentage points higher on the BGL dataset. This confirms that LayerLog extracts richer semantic features from words, logs and log sequences, and can therefore learn a more accurate hierarchical semantic expression and achieve the best accuracy.

Modern systems generate new logs at runtime. Since the trained model learns a fixed semantic pattern of log sequences from the training data, accuracy may drop when detecting new log sequences. Therefore, to evaluate the model's adaptability to new log data, the BGL dataset was tested online: the first 50% of the BGL data in execution-time order were used as the training set and the last 50% as the test set, evaluated without providing any feedback. The results, compared with DeepLog and LogAnomaly, are shown in Table 5.

The results show that LayerLog adapts strongly to new data, with the three evaluation indexes at 0.9944, 0.9187 and 0.9550 respectively. Because both DeepLog and LogAnomaly use log parsers, when the system generates a new log event the log parser cannot work normally, causing a significant drop in log sequence anomaly detection performance. In contrast, LayerLog uses no log parser, avoiding its negative impact. When preprocessing the original log data, only meaningless symbols (such as punctuation and separators) are removed, preserving the semantic information of the log text to the greatest extent. Furthermore, the semantics of words, logs and log sequences are learned automatically in the training phase. Therefore, the LayerLog framework based on hierarchical semantics adapts better to new data and is more suitable for online log sequence anomaly detection.

Table 5: online evaluation results for BGL datasets

P R F1
DeepLog 0.3817 0.9768 0.5489
LogAnomaly 0.8039 0.9319 0.8632
LayerLog 0.9944 0.9187 0.9550

The system log is an important resource for anomaly detection and failure analysis. Based on the three-layer structure of log data, namely the word-log-log sequence hierarchy, a log sequence anomaly detection framework, LayerLog, grounded in the hierarchical semantics of log data is provided. LayerLog efficiently extracts semantic features from each layer without needing a log parser in the preprocessing stage. In addition, LayerLog can simultaneously detect execution order anomalies, operation anomalies and incompleteness anomalies of log sequences end to end. Evaluation on the two public datasets confirms that LayerLog performs better than existing methods.

Compared with existing log sequence anomaly detection methods and devices, the hierarchical-semantics-based method and device for detecting log sequence anomalies can extract richer semantic features from words, logs and log sequences, so that the device learns a more accurate hierarchical semantic expression, achieves the best accuracy, and further improves the capability of detecting the three different kinds of anomaly.

In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process or method.

The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.
