Topic interaction behavior security processing method applied to big data, and topic server
1. A topic interaction behavior security processing method applied to big data, characterized in that the method is applied to a topic server and comprises the following steps:
determining, in a pre-generated scene-type topic interaction log, a corresponding pending behavior state unit for each topic behavior map unit in a current visual behavior record;
for each topic behavior map unit group in the current visual behavior record, wherein a topic behavior map unit group consists of two associated topic behavior map units, determining a pending behavior state unit group corresponding to the topic behavior map unit group through the pending behavior state units corresponding to the topic behavior map units in the group;
determining an optimized operation behavior track between the two pending behavior state units in each pending behavior state unit group through pre-compiled scene-type topic calling information, wherein the scene-type topic calling information summarizes operation behavior track information between different called behavior state units; and
determining, through the optimized operation behavior tracks between the two pending behavior state units in the pending behavior state unit group corresponding to each topic behavior map unit group, a target operation behavior track matching the current visual behavior record in the pre-generated scene-type topic interaction log.
2. The method according to claim 1, wherein the determining, in a pre-generated scene-type topic interaction log, a corresponding pending behavior state unit for each topic behavior map unit in a current visual behavior record comprises:
for each topic behavior map unit in the current visual behavior record: determining, in the pre-generated scene-type topic interaction log, a search interval corresponding to the topic behavior map unit; determining, through the search interval, a pending visualization area corresponding to the topic behavior map unit in the log; determining a behavior state mapping unit to which the topic behavior map unit maps in the pending visualization area; and determining, through the behavior state mapping unit, the pending behavior state unit corresponding to the topic behavior map unit in the log.
3. The method according to claim 2, wherein the determining, in the pre-generated scene-type topic interaction log, a search interval corresponding to the topic behavior map unit comprises:
searching for the topic behavior map unit in the pre-generated scene-type topic interaction log, and determining a target interval that takes the topic behavior map unit as a hotspot unit and a preset number of connecting edges as a constraint; and
determining the target interval as the search interval corresponding to the topic behavior map unit.
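Read as a neighborhood of the hotspot unit bounded by a preset number of connecting edges, the target interval of claim 3 can be sketched as a depth-bounded breadth-first expansion. This is a minimal illustrative sketch, not the claimed implementation; the graph representation and all names are assumptions:

```python
from collections import deque

def search_interval(log_graph, hotspot, max_edges):
    """Breadth-first expansion from the hotspot unit, bounded by a preset
    number of connecting edges (one hypothetical reading of claim 3)."""
    visited = {hotspot: 0}          # unit -> edge count from the hotspot
    queue = deque([hotspot])
    while queue:
        unit = queue.popleft()
        if visited[unit] == max_edges:
            continue                # edge-count constraint reached
        for neighbor in log_graph.get(unit, ()):
            if neighbor not in visited:
                visited[neighbor] = visited[unit] + 1
                queue.append(neighbor)
    return set(visited)             # the target interval: all units reached

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["E"]}
print(search_interval(graph, "A", 2))  # {'A', 'B', 'C', 'D'}
```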
4. The method according to claim 2 or 3, wherein the determining, through the search interval, a pending visualization area corresponding to the topic behavior map unit in the pre-generated scene-type topic interaction log comprises:
determining, in the pre-generated scene-type topic interaction log, a first visualization log paragraph covering the search interval;
searching, among the original topology units of a pre-built hierarchical visual topology, for target terminal topology units meeting a requirement, wherein the requirement is that the visualization interval corresponding to a terminal topology unit overlaps the first visualization log paragraph; the original topology units of the hierarchical visual topology correspond to all visualization intervals of the scene-type topic interaction log, the x-th-layer topology units of the hierarchical visual topology correspond to the x-th-layer visualization intervals of the log, and the visualization intervals of each layer are obtained by classifying the log according to a preset interval classification strategy; and
determining the pending visualization area corresponding to the topic behavior map unit from the visualization intervals corresponding to all target terminal topology units;
correspondingly, the determining the pending visualization area corresponding to the topic behavior map unit from the visualization intervals corresponding to all target terminal topology units comprises:
for each target terminal topology unit, determining at least one template visualization area from the visualization interval corresponding to the target terminal topology unit, wherein a second visualization log paragraph corresponding to the template visualization area overlaps the first visualization log paragraph, the second visualization log paragraph being the visualization log paragraph in the scene-type topic interaction log that covers the template visualization area; and
for each template visualization area, judging whether the template visualization area overlaps the search interval, and if so, determining the template visualization area as a pending visualization area corresponding to the topic behavior map unit;
correspondingly, the determining the template visualization area as a pending visualization area corresponding to the topic behavior map unit further comprises:
acquiring a behavior transmission description corresponding to the topic behavior map unit;
determining a set description of the template visualization area in the pre-generated scene-type topic interaction log; and
judging whether the quantitative difference between the behavior transmission description corresponding to the topic behavior map unit and the set description of the template visualization area is less than or equal to a set quantitative difference, and if so, determining the template visualization area as the pending visualization area corresponding to the topic behavior map unit.
5. The method according to claim 1, wherein the determining a pending behavior state unit group corresponding to the topic behavior map unit group through the pending behavior state units corresponding to the topic behavior map units in the group comprises:
for each topic behavior map unit in the topic behavior map unit group, acquiring all pending behavior state units corresponding to that topic behavior map unit; and
pairing each pending behavior state unit corresponding to one topic behavior map unit in the group with each pending behavior state unit corresponding to the other topic behavior map unit, to obtain the pending behavior state unit groups.
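The pairing step of claim 5, taken literally, is a Cartesian product of the two map units' candidate state units. A minimal sketch under that reading; the function and variable names are illustrative:

```python
from itertools import product

def pending_state_unit_groups(units_a, units_b):
    """Pair every pending behavior state unit of one topic behavior map
    unit with every pending unit of the other (claim 5): a Cartesian
    product of the two candidate sets."""
    return list(product(units_a, units_b))

groups = pending_state_unit_groups(["s1", "s2"], ["t1", "t2", "t3"])
print(len(groups))  # 6 pairs
```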
6. The method according to claim 1, wherein the determining an optimized operation behavior track between two pending behavior state units in a pending behavior state unit group through pre-compiled scene-type topic calling information comprises:
for each pending behavior state unit group, finding a target visualization area in the pre-compiled scene-type topic calling information through the two pending behavior state units in the group; and
determining the optimized operation behavior track between the two pending behavior state units in the group through the found target visualization area;
correspondingly, the finding a target visualization area in the pre-compiled scene-type topic calling information through the two pending behavior state units in the group comprises:
determining a first search assisting unit and a second search assisting unit in the pre-generated scene-type topic interaction log through a first pending behavior state unit and a second pending behavior state unit in the group; wherein the first search assisting unit is the behavior state unit that is located in the pending visualization area where the first pending behavior state unit is located and is closest to the second pending behavior state unit; the second search assisting unit is the behavior state unit that is located in the pending visualization area where the second pending behavior state unit is located and is closest to the first pending behavior state unit; the first pending behavior state unit is the pending behavior state unit of a first topic behavior map unit in the topic behavior map unit group corresponding to the pending behavior state unit group; and the second pending behavior state unit is the pending behavior state unit of a second topic behavior map unit in that topic behavior map unit group, the second topic behavior map unit being associated with the first topic behavior map unit in the current visual behavior record and located after it; and
finding the target visualization area in the pre-compiled scene-type topic calling information through the first search assisting unit and the second search assisting unit;
correspondingly, the finding the target visualization area in the pre-compiled scene-type topic calling information through the first search assisting unit and the second search assisting unit comprises:
determining one of a first indication path and a second indication path as a current visual search indication path, wherein the first indication path is a path description from the first search assisting unit to the second search assisting unit, and the second indication path is a path description from the second search assisting unit to the first search assisting unit;
taking the query trigger state unit of the current visual search indication path as a current behavior state unit, and if the current behavior state unit has not been recorded in the scene-type topic calling information, recording the current behavior state unit and its corresponding current visual search indication path in the scene-type topic calling information;
finding the associated behavior state units of the current behavior state unit in the pre-generated scene-type topic interaction log, and visiting the found associated behavior state units in sequence; if a sequentially visited associated behavior state unit has already been recorded in the scene-type topic calling information, finding, through the visual search indication path corresponding to that associated behavior state unit recorded in the calling information, a visualization area located on the current visual search indication path as the target visualization area; if a sequentially visited associated behavior state unit has not been recorded in the calling information, recording the current visual search indication path corresponding to the associated behavior state unit, the current behavior state unit, and the visualization area of the associated behavior state unit into the calling information; and
if no target visualization area has been determined when the sequential visiting terminates, selecting one behavior state unit from all the associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path, then updating the current visual search indication path to the other of the first indication path and the second indication path, and returning to the operation of taking the query trigger state unit of the current visual search indication path as the current behavior state unit.
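The alternating two-direction traversal with a shared record described in claim 6 resembles a bidirectional breadth-first search: each direction logs the units it has visited, and the first unit found already logged by the opposite direction marks the meeting point. A minimal sketch under that reading; the graph representation and all names are assumptions, not the source's data structures:

```python
from collections import deque

def bidirectional_target(graph, start, goal):
    """Alternate between a forward sweep (from start) and a backward sweep
    (from goal), recording visited units in a shared log; return the first
    unit one direction reaches that the other has already logged."""
    logged = {start: "forward", goal: "backward"}   # the shared record
    frontiers = {"forward": deque([start]), "backward": deque([goal])}
    direction = "forward"
    while frontiers["forward"] or frontiers["backward"]:
        queue = frontiers[direction]
        if queue:
            unit = queue.popleft()
            for neighbor in graph.get(unit, ()):
                if neighbor in logged and logged[neighbor] != direction:
                    return neighbor     # already logged by the other path
                if neighbor not in logged:
                    logged[neighbor] = direction
                    queue.append(neighbor)
        # switch to the other indication path, as in the claim
        direction = "backward" if direction == "forward" else "forward"
    return None                         # the two sweeps never met

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "e"], "e": ["d"]}
print(bidirectional_target(graph, "a", "e"))  # 'c'
```

Caching each visited unit in `logged` is what lets a repeatedly queried region be reused instead of re-analyzed, which is the efficiency argument the claim relies on.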
7. The method according to claim 6, wherein the finding, through the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic calling information, a visualization area located on the current visual search indication path as the target visualization area comprises:
comparing the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic calling information with the current visual search indication path; and
when the comparison result indicates that the visual search indication path corresponding to the associated behavior state unit differs from the current visual search indication path, acquiring a visualization area located on the current visual search indication path from the scene-type topic calling information as the target visualization area.
8. The method according to claim 7, wherein the selecting one behavior state unit from all the associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path comprises:
determining an operation behavior track quantitative evaluation corresponding to each associated behavior state unit, wherein the operation behavior track quantitative evaluation is the sum of a first behavior track quantitative evaluation from the query trigger state unit of the current visual search indication path to the associated behavior state unit and a second behavior track quantitative evaluation from the associated behavior state unit to the query termination state unit of the current visual search indication path;
selecting the associated behavior state unit whose corresponding operation behavior track quantitative evaluation is lowest; and
determining the selected associated behavior state unit as the query trigger state unit of the current visual search indication path.
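The selection rule of claim 8, summing a cost from the trigger unit with a cost to the termination unit and taking the minimum, matches the f = g + h ordering familiar from A*-style search. A one-function sketch; the cost tables are illustrative stand-ins for the two quantitative evaluations:

```python
def pick_next_trigger(candidates, cost_from_trigger, cost_to_termination):
    """Pick the associated unit with the lowest summed track evaluation:
    cost from the current trigger unit (g) plus cost to the termination
    unit (h), per claim 8. All names here are hypothetical."""
    return min(candidates,
               key=lambda u: cost_from_trigger[u] + cost_to_termination[u])

g = {"p": 1.0, "q": 2.0, "r": 1.5}   # first behavior track evaluations
h = {"p": 4.0, "q": 1.0, "r": 2.0}   # second behavior track evaluations
print(pick_next_trigger(["p", "q", "r"], g, h))  # 'q' (2.0 + 1.0 = 3.0)
```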
9. The method according to claim 1, wherein the determining, through the optimized operation behavior tracks between the two pending behavior state units in the pending behavior state unit group corresponding to each topic behavior map unit group, a target operation behavior track matching the current visual behavior record in the pre-generated scene-type topic interaction log comprises:
for each topic behavior map unit group, performing the following: for each pending behavior state unit group corresponding to the topic behavior map unit group, importing the optimized operation behavior track between the two pending behavior state units in the group into a pre-trained AI neural network to obtain at least one importance degree variable used to set the key-content possibility corresponding to the group; and selecting, through the at least one importance degree variable corresponding to each pending behavior state unit group, one pending operation behavior track from the optimized operation behavior tracks between the two pending behavior state units of each group; and
globally fusing all the pending operation behavior tracks according to a preset behavior track splicing strategy to obtain the target operation behavior track.
10. A topic server, comprising a processor, a communication bus, and a memory, wherein the processor and the memory communicate via the communication bus, and the processor reads a computer program from the memory and runs it to perform the method of any one of claims 1-9.
Background
Big data technology improves data processing efficiency, helps establish data-driven thinking, and supports scientific decision making, and it is gradually being applied in various business fields. With the development of the internet, combining big data with interactive topics enables interactive topic tendency analysis, topic user intention analysis, hot topic tracking, and the like.
As topic scale continues to expand, the security of topic interaction behavior draws increasing concern. To implement security analysis of topic interaction behavior, the operation behaviors of different topic users need to be processed in advance, for example by determining the operation behavior tracks of topic users beforehand. However, the related art is inefficient at determining operation behavior tracks, which affects the timeliness of subsequent topic interaction behavior security analysis.
Disclosure of Invention
In view of this, embodiments of the present application provide a topic interaction behavior security processing method applied to big data, and a topic server.
An embodiment of the present application provides a topic interaction behavior security processing method applied to big data, which is applied to a topic server and comprises the following steps:
determining, in a pre-generated scene-type topic interaction log, a corresponding pending behavior state unit for each topic behavior map unit in a current visual behavior record;
for each topic behavior map unit group in the current visual behavior record, wherein a topic behavior map unit group consists of two associated topic behavior map units, determining a pending behavior state unit group corresponding to the topic behavior map unit group through the pending behavior state units corresponding to the topic behavior map units in the group;
determining an optimized operation behavior track between the two pending behavior state units in each pending behavior state unit group through pre-compiled scene-type topic calling information, wherein the scene-type topic calling information summarizes operation behavior track information between different called behavior state units; and
determining, through the optimized operation behavior tracks between the two pending behavior state units in the pending behavior state unit group corresponding to each topic behavior map unit group, a target operation behavior track matching the current visual behavior record in the pre-generated scene-type topic interaction log.
Under an independently implementable design concept, the determining, in a pre-generated scene-type topic interaction log, a corresponding pending behavior state unit for each topic behavior map unit in a current visual behavior record includes:
for each topic behavior map unit in the current visual behavior record: determining, in the pre-generated scene-type topic interaction log, a search interval corresponding to the topic behavior map unit; determining, through the search interval, a pending visualization area corresponding to the topic behavior map unit in the log; determining a behavior state mapping unit to which the topic behavior map unit maps in the pending visualization area; and determining, through the behavior state mapping unit, the pending behavior state unit corresponding to the topic behavior map unit in the log.
Under an independently implementable design concept, the determining, in the pre-generated scene-type topic interaction log, a search interval corresponding to the topic behavior map unit includes:
searching for the topic behavior map unit in the pre-generated scene-type topic interaction log, and determining a target interval that takes the topic behavior map unit as a hotspot unit and a preset number of connecting edges as a constraint; and
determining the target interval as the search interval corresponding to the topic behavior map unit.
Under an independently implementable design concept, the determining, through the search interval, a pending visualization area corresponding to the topic behavior map unit in the pre-generated scene-type topic interaction log includes:
determining, in the pre-generated scene-type topic interaction log, a first visualization log paragraph covering the search interval;
searching, among the original topology units of a pre-built hierarchical visual topology, for target terminal topology units meeting a requirement, wherein the requirement is that the visualization interval corresponding to a terminal topology unit overlaps the first visualization log paragraph; the original topology units of the hierarchical visual topology correspond to all visualization intervals of the scene-type topic interaction log, the x-th-layer topology units of the hierarchical visual topology correspond to the x-th-layer visualization intervals of the log, and the visualization intervals of each layer are obtained by classifying the log according to a preset interval classification strategy; and
determining the pending visualization area corresponding to the topic behavior map unit from the visualization intervals corresponding to all target terminal topology units;
correspondingly, the determining the pending visualization area corresponding to the topic behavior map unit from the visualization intervals corresponding to all target terminal topology units includes:
for each target terminal topology unit, determining at least one template visualization area from the visualization interval corresponding to the target terminal topology unit, wherein a second visualization log paragraph corresponding to the template visualization area overlaps the first visualization log paragraph, the second visualization log paragraph being the visualization log paragraph in the scene-type topic interaction log that covers the template visualization area; and
for each template visualization area, judging whether the template visualization area overlaps the search interval, and if so, determining the template visualization area as a pending visualization area corresponding to the topic behavior map unit;
correspondingly, the determining the template visualization area as a pending visualization area corresponding to the topic behavior map unit further includes:
acquiring a behavior transmission description corresponding to the topic behavior map unit;
determining a set description of the template visualization area in the pre-generated scene-type topic interaction log; and
judging whether the quantitative difference between the behavior transmission description corresponding to the topic behavior map unit and the set description of the template visualization area is less than or equal to a set quantitative difference, and if so, determining the template visualization area as the pending visualization area corresponding to the topic behavior map unit.
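The hierarchical visual topology above, with layer-x topology units covering layer-x visualization intervals and terminal units selected by overlap with the first visualization log paragraph, can be sketched as a recursive descent over an interval hierarchy. This sketch assumes a simple numeric interval per node; the class and function names are illustrative:

```python
class TopologyUnit:
    """One node of the hierarchical visual topology: covers a visualization
    interval [lo, hi] and may have child units at the next layer."""
    def __init__(self, lo, hi, children=()):
        self.lo, self.hi = lo, hi
        self.children = list(children)

def overlapping_terminal_units(node, lo, hi):
    """Descend the pre-built hierarchy and collect terminal (childless)
    units whose interval overlaps [lo, hi], pruning branches whose
    interval misses the query paragraph entirely."""
    if hi < node.lo or lo > node.hi:
        return []                       # no overlap: prune this branch
    if not node.children:
        return [node]                   # a target terminal topology unit
    found = []
    for child in node.children:
        found.extend(overlapping_terminal_units(child, lo, hi))
    return found

root = TopologyUnit(0, 100, [
    TopologyUnit(0, 50, [TopologyUnit(0, 25), TopologyUnit(26, 50)]),
    TopologyUnit(51, 100, [TopologyUnit(51, 75), TopologyUnit(76, 100)]),
])
hits = overlapping_terminal_units(root, 20, 60)
print([(u.lo, u.hi) for u in hits])  # [(0, 25), (26, 50), (51, 75)]
```

The hierarchy buys the usual benefit of interval trees: whole subtrees whose interval misses the first visualization log paragraph are skipped without visiting their terminal units.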
Under an independently implementable design concept, the determining a pending behavior state unit group corresponding to the topic behavior map unit group through the pending behavior state units corresponding to the topic behavior map units in the group includes:
for each topic behavior map unit in the topic behavior map unit group, acquiring all pending behavior state units corresponding to that topic behavior map unit; and
pairing each pending behavior state unit corresponding to one topic behavior map unit in the group with each pending behavior state unit corresponding to the other topic behavior map unit, to obtain the pending behavior state unit groups.
Under an independently implementable design concept, the determining an optimized operation behavior track between two pending behavior state units in a pending behavior state unit group through pre-compiled scene-type topic calling information includes:
for each pending behavior state unit group, finding a target visualization area in the pre-compiled scene-type topic calling information through the two pending behavior state units in the group; and
determining the optimized operation behavior track between the two pending behavior state units in the group through the found target visualization area;
correspondingly, the finding a target visualization area in the pre-compiled scene-type topic calling information through the two pending behavior state units in the group includes:
determining a first search assisting unit and a second search assisting unit in the pre-generated scene-type topic interaction log through a first pending behavior state unit and a second pending behavior state unit in the group; wherein the first search assisting unit is the behavior state unit that is located in the pending visualization area where the first pending behavior state unit is located and is closest to the second pending behavior state unit; the second search assisting unit is the behavior state unit that is located in the pending visualization area where the second pending behavior state unit is located and is closest to the first pending behavior state unit; the first pending behavior state unit is the pending behavior state unit of a first topic behavior map unit in the topic behavior map unit group corresponding to the pending behavior state unit group; and the second pending behavior state unit is the pending behavior state unit of a second topic behavior map unit in that topic behavior map unit group, the second topic behavior map unit being associated with the first topic behavior map unit in the current visual behavior record and located after it; and
finding the target visualization area in the pre-compiled scene-type topic calling information through the first search assisting unit and the second search assisting unit;
correspondingly, the finding the target visualization area in the pre-compiled scene-type topic calling information through the first search assisting unit and the second search assisting unit includes:
determining one of a first indication path and a second indication path as a current visual search indication path, wherein the first indication path is a path description from the first search assisting unit to the second search assisting unit, and the second indication path is a path description from the second search assisting unit to the first search assisting unit;
taking the query trigger state unit of the current visual search indication path as a current behavior state unit, and if the current behavior state unit has not been recorded in the scene-type topic calling information, recording the current behavior state unit and its corresponding current visual search indication path in the scene-type topic calling information;
finding the associated behavior state units of the current behavior state unit in the pre-generated scene-type topic interaction log, and visiting the found associated behavior state units in sequence; if a sequentially visited associated behavior state unit has already been recorded in the scene-type topic calling information, finding, through the visual search indication path corresponding to that associated behavior state unit recorded in the calling information, a visualization area located on the current visual search indication path as the target visualization area; if a sequentially visited associated behavior state unit has not been recorded in the calling information, recording the current visual search indication path corresponding to the associated behavior state unit, the current behavior state unit, and the visualization area of the associated behavior state unit into the calling information; and
if no target visualization area has been determined when the sequential visiting terminates, selecting one behavior state unit from all the associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path, then updating the current visual search indication path to the other of the first indication path and the second indication path, and returning to the operation of taking the query trigger state unit of the current visual search indication path as the current behavior state unit.
Under an independently implementable design concept, the finding, through the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic calling information, a visualization area located on the current visual search indication path as the target visualization area includes:
comparing the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic calling information with the current visual search indication path; and
when the comparison result indicates that the visual search indication path corresponding to the associated behavior state unit differs from the current visual search indication path, acquiring a visualization area located on the current visual search indication path from the scene-type topic calling information as the target visualization area.
Under an independently implementable design concept, the selecting one behavior state unit from all the associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path includes:
determining an operation behavior track quantitative evaluation corresponding to each associated behavior state unit, wherein the operation behavior track quantitative evaluation is the sum of a first behavior track quantitative evaluation from the query trigger state unit of the current visual search indication path to the associated behavior state unit and a second behavior track quantitative evaluation from the associated behavior state unit to the query termination state unit of the current visual search indication path;
selecting the associated behavior state unit whose corresponding operation behavior track quantitative evaluation is lowest; and
determining the selected associated behavior state unit as the query trigger state unit of the current visual search indication path.
Under an independently implementable design concept, the determining, through the optimized operation behavior tracks between the two pending behavior state units in the pending behavior state unit group corresponding to each topic behavior map unit group, a target operation behavior track matching the current visual behavior record in the pre-generated scene-type topic interaction log includes:
for each topic behavior map unit group, performing the following: for each pending behavior state unit group corresponding to the topic behavior map unit group, importing the optimized operation behavior track between the two pending behavior state units in the group into a pre-trained AI neural network to obtain at least one importance degree variable used to set the key-content possibility corresponding to the group; and selecting, through the at least one importance degree variable corresponding to each pending behavior state unit group, one pending operation behavior track from the optimized operation behavior tracks between the two pending behavior state units of each group; and
globally fusing all the pending operation behavior tracks according to a preset behavior track splicing strategy to obtain the target operation behavior track.
The embodiment of the application also provides a topic server, which comprises a processor, a communication bus and a memory; the processor and the memory communicate via the communication bus, and the processor reads the computer program from the memory and runs the computer program to perform the method described above.
The embodiment of the application also provides a readable storage medium for a computer, wherein the readable storage medium stores a computer program, and the computer program realizes the method when running.
Therefore, the embodiment of the application provides a topic interaction behavior security processing method applied to big data and a topic server. When determining an optimized operation behavior track between undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group in the current visual behavior record, the method relies on scene-type topic calling information counted in advance. Because the scene-type topic calling information summarizes operation behavior track information between different behavior state units that have been called, a visual area that is called repeatedly does not need to be analyzed again, so a large amount of repeated analysis processing can be avoided to a certain extent. Further, the target operation behavior track matched with the current visual behavior record is determined through the optimized operation behavior tracks, which reduces the time consumed in determining the operation behavior track as much as possible while ensuring the accuracy and integrity of the obtained target operation behavior track. An analysis basis is thereby provided rapidly and accurately for subsequent interactive behavior security analysis, and the timeliness of that analysis is not impaired by excessive time spent determining the target operation behavior track of the current visual behavior record.
In the description that follows, additional features will be set forth, in part, in the description. These features will be in part apparent to those skilled in the art upon examination of the following and the accompanying drawings, or may be learned by production or use. The features of the present application may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations particularly pointed out in the detailed examples that follow.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic block diagram of a topic server provided in an embodiment of the present application.
Fig. 2 is a flowchart of a topic interaction behavior security processing method applied to big data according to an embodiment of the present application.
Fig. 3 is a block diagram of a topic interaction behavior security processing device applied to big data according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Fig. 1 shows a block schematic diagram of a topic server 10 provided in an embodiment of the present application. The topic server 10 in the embodiment of the present application may be a server with data storage, transmission, and processing functions, as shown in fig. 1, the topic server 10 includes: the device comprises a memory 11, a processor 12, a communication bus 13 and a topic interactive behavior security processing device 20 applied to big data.
The memory 11, the processor 12 and the communication bus 13 are electrically connected, directly or indirectly, to enable the transfer or interaction of data. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The memory 11 stores the topic interactive behavior security processing device 20 applied to big data, which includes at least one software function module that can be stored in the memory 11 in the form of software or firmware. The processor 12 executes various functional applications and data processing by running the software programs and modules stored in the memory 11, such as the topic interactive behavior security processing device 20 applied to big data in this embodiment of the present application, so as to implement the topic interaction behavior security processing method applied to big data in this embodiment of the present application.
The Memory 11 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 11 is used for storing a program, and the processor 12 executes the program after receiving an execution instruction.
The processor 12 may be an integrated circuit chip having data processing capabilities. The Processor 12 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like, and may implement or execute the various methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The communication bus 13 is used for establishing communication connection between the topic server 10 and other communication terminal devices through a network, and implementing the transceiving operation of network signals and data. The network signal may include a wireless signal or a wired signal.
It is to be understood that the configuration shown in fig. 1 is merely illustrative, and the topic server 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Fig. 2 shows a flowchart of the topic interaction behavior security processing method applied to big data provided by an embodiment of the present application. The method is applied to the topic server 10 and can be implemented by the processor 12; it comprises the contents described in the following steps S100 to S400.
In step S100, a corresponding pending behavior state unit is determined for the topic behavior map unit in the current visual behavior record in the scene type topic interaction log generated in advance.
For example, the scene-type topic interaction log is created according to the visual characteristics of the topic interaction conditions in the big data topic environment, and can cover a plurality of visual areas in a certain visual interval, wherein different visual areas correspond to different topic interaction conditions. The visualization interval of the scene-type topic interaction log is not limited, for example, the scene-type topic interaction log may be a scene-type topic interaction log of a topic community or a scene-type topic interaction log of a topic platform.
The current visual behavior record is the visual behavior record which needs to be matched with the scene type topic interaction log to determine the corresponding operation behavior track. The current visual behavior record may be transmitted from a behavior detection thread for capturing user operations, such as a dynamic visual behavior record of a big data topic client transmitted by a behavior detection thread on the big data topic client, where the source of the visual behavior record is not limited to the behavior detection thread, and the target of the behavior detection is not limited to the big data topic client.
When determining the corresponding undetermined behavior state unit for the topic behavior map unit in the current visualization behavior record, the topic behavior map unit may be mapped to the neighbor visualization region in the scene-type topic interaction log, and the undetermined behavior state unit of the topic behavior map unit is determined according to the behavior state mapping unit. The neighbor visualization region may be, for example, a visualization region overlapping with a search interval determined according to the topic behavior map unit, or a visualization region in the search interval, and is not limited to this.
The number of the undetermined behavior state units corresponding to each topic behavior map unit in the current visual behavior record can be one or more, and the specific number is determined according to the size of the search interval and the density degree of the visual area.
In step S200, for each topic behavior map unit group in the current visual behavior record, the topic behavior map unit group is composed of two related topic behavior map units, and the undetermined behavior state unit group corresponding to the topic behavior map unit group is determined by the undetermined behavior state unit corresponding to the topic behavior map unit in the topic behavior map unit group.
The number of topic behavior map unit groups is determined by the number of topic behavior map units in the current visual behavior record: if the number of topic behavior map units is num1, the number of topic behavior map unit groups may be num1 - 1. For example, if the current visualization behavior record covers four topic behavior map units topic_graph_unit1 to topic_graph_unit4 in order of their generation time, the formed topic behavior map unit groups may be:
(topic_graph_unit1, topic_graph_unit2);
(topic_graph_unit2, topic_graph_unit3);
(topic_graph_unit3, topic_graph_unit4).
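The grouping of adjacent map units described above can be sketched as follows; the unit names follow the example in the text, and the helper function name is illustrative.

```python
# Sketch of forming topic behavior map unit groups from a visual behavior
# record: num1 units in generation order yield num1 - 1 adjacent pairs.

def build_unit_groups(units):
    """Pair each topic behavior map unit with its successor in generation order."""
    return [(units[i], units[i + 1]) for i in range(len(units) - 1)]

units = ["topic_graph_unit1", "topic_graph_unit2",
         "topic_graph_unit3", "topic_graph_unit4"]
groups = build_unit_groups(units)
# → [("topic_graph_unit1", "topic_graph_unit2"),
#    ("topic_graph_unit2", "topic_graph_unit3"),
#    ("topic_graph_unit3", "topic_graph_unit4")]
```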
Because each topic behavior map unit can have a plurality of undetermined behavior state units, a plurality of undetermined behavior state unit groups can be determined for one topic behavior map unit group. Of course, the case in which each topic behavior map unit in a topic behavior map unit group corresponds to exactly one undetermined behavior state unit is not excluded; in that case, only one undetermined behavior state unit group is determined for the topic behavior map unit group.
In the embodiment of the application, the topic behavior map unit can be understood as a map node of topic interaction behavior, and Knowledge Graph techniques can be referred to for the node processing of the topic behavior map unit. Furthermore, the behavior state unit corresponding to the topic behavior map unit is used for representing the interactive behavior state of the topic behavior map unit, and can be understood, in a certain sense, as mapping the interactive behavior state into node form.
Under some optional and independently implementable design ideas, in step S200, the determining, by the to-be-determined behavior state unit corresponding to the topic behavior map unit in the topic behavior map unit group, the to-be-determined behavior state unit group corresponding to the topic behavior map unit group may include the following steps:
s201: aiming at each topic behavior map unit in the topic behavior map unit group, acquiring all undetermined behavior state units corresponding to the topic behavior map unit;
s202: and pairing each undetermined behavior state unit corresponding to one topic behavior map unit in the topic behavior map unit group with each undetermined behavior state unit corresponding to the other topic behavior map unit to obtain an undetermined behavior state unit group.
In this way, the obtained undetermined behavior state unit group comprises two undetermined behavior state units, wherein one undetermined behavior state unit corresponds to one topic behavior map unit in the topic behavior map unit group, and the other undetermined behavior state unit corresponds to the other topic behavior map unit in the topic behavior map unit group.
In actual application, each topic behavior map unit in a topic behavior map unit group may correspond to a plurality of pending behavior state units. Take the topic behavior map unit group (topic_graph_unit1, topic_graph_unit2) as an example.
The topic behavior map unit topic_graph_unit1 corresponds to two undetermined behavior state units behaviour_state_unit1 and behaviour_state_unit2, and topic_graph_unit2 corresponds to two undetermined behavior state units behaviour_state_unit3 and behaviour_state_unit4; the undetermined behavior state unit groups determined for this topic behavior map unit group are then the following four pairs:
(behaviour_state_unit1, behaviour_state_unit3);
(behaviour_state_unit1, behaviour_state_unit4);
(behaviour_state_unit2, behaviour_state_unit3);
(behaviour_state_unit2, behaviour_state_unit4).
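The pairing in steps S201 and S202 is a Cartesian product of the two map units' pending state units; the following sketch reproduces the four pairs in the example above.

```python
from itertools import product

# Sketch of steps S201-S202: pair every pending behavior state unit of one
# map unit with every pending behavior state unit of the other map unit,
# yielding the undetermined behavior state unit groups.

def build_state_unit_groups(states_of_first, states_of_second):
    """Cartesian pairing of the two map units' pending state units."""
    return list(product(states_of_first, states_of_second))

pairs = build_state_unit_groups(
    ["behaviour_state_unit1", "behaviour_state_unit2"],
    ["behaviour_state_unit3", "behaviour_state_unit4"],
)
# Four pairs, matching the example above.
```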
in step S300, an optimized operation behavior track between two undetermined behavior state units in an undetermined behavior state unit group is determined through scene type topic calling information which is counted in advance; the scene type topic calling information summarizes operation behavior track information among different behavior state units which are called.
For example, the scene-type topic invocation information may be scene-type topic access information. Initially, that is, before the optimized operation behavior trajectory between the two undetermined behavior state units in the first undetermined behavior state unit group of the first visualization behavior record is determined, the scene-type topic invocation information may be understood as an empty set. Each time an optimized operation behavior track is determined, the called behavior state units and the corresponding visualization areas, that is, the operation behavior track information among the called behavior state units, are summarized into the scene-type topic calling information. Therefore, when the same behavior state units and corresponding visualization areas are encountered again, the scene-type topic calling information already summarizes the corresponding operation behavior track information, so the repeated visualization areas do not need to be analyzed and processed again, reducing the time and computer resources consumed in determining the optimized operation behavior track.
In other words, when determining the optimized operation behavior track between two undetermined behavior state units in an undetermined behavior state unit group through the scene-type topic calling information counted in advance, it can first be checked whether the corresponding operation behavior track information is already recorded in the scene-type topic calling information. If so, no additional analysis and processing is needed; if not, the operation behavior track information is analyzed and processed, and the result is added to the scene-type topic calling information.
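The check-then-record behavior described above amounts to memoizing the track analysis keyed by the pair of behavior state units. The sketch below illustrates this; the class name and the stand-in analysis function are hypothetical, not part of the embodiment.

```python
# Sketch of the call-information lookup in step S300: the scene-type topic
# calling information acts as a cache keyed by the pair of behavior state
# units, so a repeatedly called visual area is never analyzed twice.
# analyze_fn is a stand-in for the real track analysis.

class TopicCallInfo:
    def __init__(self, analyze_fn):
        self._cache = {}          # (unit_a, unit_b) -> track information
        self._analyze = analyze_fn
        self.analysis_count = 0   # how many real analyses were run

    def track_between(self, unit_a, unit_b):
        key = (unit_a, unit_b)
        if key not in self._cache:            # not recorded yet: analyze
            self.analysis_count += 1
            self._cache[key] = self._analyze(unit_a, unit_b)
        return self._cache[key]               # recorded: reuse directly

info = TopicCallInfo(lambda a, b: f"track({a}->{b})")
first_result = info.track_between("state1", "state2")
second_result = info.track_between("state1", "state2")  # cache hit
```

On the second call the cached result is returned without re-running the analysis, which is the repeated-analysis saving the embodiment claims.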
In this way, the optimized operation behavior track between the two undetermined behavior state units in every undetermined behavior state unit group can be determined; on the premise that a topic behavior map unit group corresponds to multiple undetermined behavior state unit groups, that topic behavior map unit group can correspond to multiple optimized operation behavior tracks.
In step S400, a target operation behavior trajectory adapted to the current visual behavior record is determined in the scene-type topic interaction log generated in advance through an optimized operation behavior trajectory between two undetermined behavior state units in an undetermined behavior state unit group corresponding to each topic behavior map unit group.
In the embodiment of the application, the target operation behavior trajectory may be used for subsequent interactive behavior safety protection analysis, for example, whether the interactive behavior is abnormal or whether a risk exists may be quickly and accurately determined through the target operation behavior trajectory.
The optimized operation behavior track between two undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group can be regarded as the optimized operation behavior track from one topic behavior map unit to another topic behavior map unit in the topic behavior map unit group. On the basis that the optimized operation behavior track from one topic behavior map unit to another topic behavior map unit in every two related topic behavior map units in the current visual behavior record is determined, the target operation behavior track adapted to the current visual behavior record can be determined, and the target operation behavior track can be regarded as the optimized operation behavior track from the starting topic behavior map unit to the ending topic behavior map unit in the current visual behavior record.
In some examples, optimizing the operation behavior trajectory may be understood as simplifying the operation behavior trajectory to the greatest extent on the premise of satisfying a certain feature recognition degree. Therefore, the efficiency of subsequent operation behavior trace processing can be improved.
In other examples, there may be a plurality of optimized operation behavior tracks between two undetermined behavior state units in the undetermined behavior state unit group corresponding to some topic behavior map unit groups, and in this case, an optimal optimized operation behavior track needs to be selected from the plurality of optimized operation behavior tracks as the optimized operation behavior track adapted to the topic behavior map unit group.
For example, the positioning possibility and the change possibility corresponding to each undetermined behavior state unit group can be calculated, and an optimized operation behavior trajectory is selected from the plurality of optimized operation behavior trajectories accordingly as the one adapted to the topic behavior map unit group. The positioning possibility corresponding to an undetermined behavior state unit group comprises the positioning possibility of each undetermined behavior state unit in the group, that is, the probability that the corresponding topic behavior map unit is located in the undetermined visual region where that undetermined behavior state unit is located. The change possibility corresponding to an undetermined behavior state unit group is the probability that the behavior detection target transfers from the undetermined visualization region where one undetermined behavior state unit in the group is located to the undetermined visualization region where the other is located.
The above selection manner is only an example and is not limited thereto; other manners are possible. For example, different importance weights may be configured for the change possibilities corresponding to different pending behavior state unit groups, and the selection may then be performed according to the weighted change possibilities.
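The selection among candidate tracks described in the passage above can be sketched as scoring each candidate by its positioning and (weighted) change possibilities and keeping the best. The product scoring formula below is an illustrative assumption, since the embodiment does not fix a formula.

```python
# Sketch of the selection above: score each candidate optimized track by
# the positioning possibilities of its two pending state units and a
# weighted change possibility, then keep the highest-scoring track.
# The product formula is an illustrative assumption.

def pick_track(candidates):
    """candidates: list of (track, pos_a, pos_b, change, weight) tuples."""
    def score(c):
        _, pos_a, pos_b, change, weight = c
        return pos_a * pos_b * (change * weight)
    return max(candidates, key=score)[0]

best_track = pick_track([
    ("track_1", 0.9, 0.8, 0.5, 1.0),   # score 0.9 * 0.8 * 0.5 = 0.36
    ("track_2", 0.7, 0.9, 0.8, 1.0),   # score 0.7 * 0.9 * 0.8 = 0.504
])
```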
Of course, there may be only one optimized operation behavior trajectory between two undetermined behavior state units in the undetermined behavior state unit group corresponding to some topic behavior map unit groups, and in this case, the optimized operation behavior trajectory may be directly used as the optimized operation behavior trajectory adapted to the topic behavior map unit group.
After the optimized operation behavior tracks matched with all topic behavior map unit groups are determined, the optimized operation behavior tracks matched with all topic behavior map unit groups can be spliced and fused to obtain a target operation behavior track matched with the current visual behavior record.
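The splicing and fusing step above can be sketched as follows. Because each group's end unit is the next group's start unit, concatenation with joint deduplication yields the target track; the joint-dropping strategy is an assumption for illustration.

```python
# Sketch of the splicing step: the per-group optimized tracks share their
# endpoints (each group's end unit is the next group's start unit), so
# concatenating them while dropping the duplicated joints yields the
# target operation behavior track.

def splice_tracks(tracks):
    """Fuse per-group tracks into one target track, deduplicating joints."""
    target = list(tracks[0])
    for track in tracks[1:]:
        target.extend(track[1:])   # skip the shared first unit
    return target

target_track = splice_tracks([
    ["s1", "s2"],
    ["s2", "s3"],
    ["s3", "s4"],
])
# → ["s1", "s2", "s3", "s4"]
```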
In the embodiment of the application, when the optimized operation behavior track between the undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group in the current visual behavior record is determined, it can be determined through the scene-type topic calling information counted in advance. Because the scene-type topic calling information summarizes operation behavior track information between different behavior state units that have been called, a visual area that is called repeatedly does not need to be analyzed again, so a large amount of repeated analysis processing can be avoided to a certain extent. Furthermore, the target operation behavior track adapted to the current visual behavior record is determined through the optimized operation behavior tracks, so the time consumed in determining the operation behavior track is reduced as much as possible while the accuracy and integrity of the obtained target operation behavior track are ensured.
In some independently implementable technical solutions, in step S100, the determining, in the scene-type topic interaction log generated in advance, a corresponding pending behavior state unit for the topic behavior map unit in the current visualization behavior record includes:
s101: for each topic behavior map unit in the current visual behavior record, determining a search interval corresponding to the topic behavior map unit in the scene type topic interaction log generated in advance, determining an undetermined visual area corresponding to the topic behavior map unit in the scene type topic interaction log generated in advance through the search interval, determining a behavior state mapping unit mapped to the undetermined visual area by the topic behavior map unit, and determining an undetermined behavior state unit corresponding to the topic behavior map unit in the scene type topic interaction log generated in advance through the behavior state mapping unit.
In the embodiment of the present application, the search interval may be understood as a search range.
Under some optional and independently implementable design ideas, in step S101, determining a search interval corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance may include the following steps:
s1011: searching the topic behavior map unit in the scene type topic interaction log generated in advance, and determining a target interval which takes the topic behavior map unit as a hot spot unit and the number of preset connecting edges as constraints;
s1012: and determining the target interval as a searching interval corresponding to the topic behavior map unit.
In other words, the search interval corresponding to the topic behavior map unit may be a target interval that takes the topic behavior map unit as a hot spot unit and the preset number of connecting edges as a constraint. The preset number of connecting edges can be determined according to the required operation behavior trajectory precision, or by additionally taking into account the computing resource overhead required to determine the undetermined behavior state units.
Of course, the above search interval is an alternative example, and is not limited thereto. For example, the search space may also be a graphical space, and the manner of determining the search space may be determined according to the actual shape of the search space.
The search interval corresponding to the topic behavior map unit is used for determining an undetermined visualization area corresponding to the topic behavior map unit.
Further, the hotspot unit may be understood as an origin, and the preset number of connecting edges as a radius constraint: the length corresponding to the preset number of connecting edges is used as a radius to enclose a region, thereby obtaining the target interval.
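Steps S1011 and S1012 can be sketched as a breadth-first search from the hotspot unit bounded by the preset number of connecting edges. The adjacency structure and unit names below are illustrative assumptions.

```python
from collections import deque

# Sketch of steps S1011-S1012: take the topic behavior map unit as the
# hot spot unit and collect every log unit reachable within the preset
# number of connecting edges, i.e. a hop-bounded breadth-first search.
# The adjacency dict is an illustrative assumption.

def search_interval(adjacency, hotspot, max_edges):
    """Units reachable from hotspot within max_edges connecting edges."""
    seen = {hotspot: 0}            # unit -> hop distance from hotspot
    queue = deque([hotspot])
    while queue:
        unit = queue.popleft()
        if seen[unit] == max_edges:
            continue               # constraint reached: do not expand
        for neighbor in adjacency.get(unit, []):
            if neighbor not in seen:
                seen[neighbor] = seen[unit] + 1
                queue.append(neighbor)
    return set(seen)

adjacency = {"u1": ["u2"], "u2": ["u1", "u3"], "u3": ["u2", "u4"]}
interval = search_interval(adjacency, "u1", 2)   # u4 is 3 hops away
```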
Under some optional and independently implementable design ideas, in step S101, determining an undetermined visualization region corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance through the search interval may include the following steps:
in step S1013, a first visual log segment covering the search interval is determined from the scene-type topic interaction log generated in advance.
On the basis of the search interval, a first visual log paragraph may be determined from the scene-type topic interaction log generated in advance, where the first visual log paragraph covers the search interval.
To reduce additional resource overhead in determining a pending visualization region, the first visualization log paragraph can be the smallest visualization log paragraph that covers the search interval. Of course, the first visualization log paragraph is not limited to this and may be determined as needed, for example, a visualization log paragraph that covers the search interval and is slightly larger than the smallest one, as long as the search interval is covered.
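If the units in the search interval are treated as points in the log's visual plane, the smallest covering paragraph of step S1013 can be sketched as an axis-aligned bounding box. The coordinate representation is an illustrative assumption.

```python
# Sketch of step S1013: the first visualization log paragraph can be the
# smallest axis-aligned region covering the search interval. Points are
# (x, y) coordinates of the interval's units; values are illustrative.

def minimal_covering_paragraph(points):
    """Smallest axis-aligned bounding box covering all interval points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

paragraph = minimal_covering_paragraph([(1, 4), (3, 2), (2, 5)])
# → (1, 2, 3, 5)
```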
In step S1014, target end topology units meeting the requirement are found starting from the original topology unit of the hierarchical visualization topology constructed in advance, where the requirement is that the visualization interval corresponding to the end topology unit overlaps the first visualization log paragraph.
Before that, a plurality of layers of visual intervals are divided from the scene type topic interaction log according to a preset interval classification strategy, and then a hierarchical visual topology (a node tree model or a node tree path) is established according to each layer of visual intervals obtained by classifying the scene type topic interaction log. In other words, the scene-type topic interaction log is divided into a plurality of layers of visualization intervals in advance, and the hierarchical visualization topology is established in advance according to the layers of visualization intervals obtained by classifying the scene-type topic interaction log. The scene type topic interaction log can be divided in the following ways: determining a visual log paragraph covering all visual intervals of the scene type topic interaction log; and dividing the visual log paragraph into N layers of visual intervals, wherein N is greater than 1.
The layer 1 visualization interval is all visualization intervals in the visualization log section, namely all visualization intervals of the scene-type topic interaction log, the layer 2 visualization interval is 4 visualization intervals obtained by quartering the visualization log section, the layer 3 visualization interval is 16 visualization intervals obtained by further quartering each visualization interval in the 4 visualization intervals, and so on.
For example, all visualization intervals interval10 of the scene-type topic interaction log are taken as the layer 1 interval; the layer 1 interval is divided into 4 layer 2 intervals, namely interval11, interval12, interval13 and interval14; each layer 2 interval is further divided into four layer 3 intervals. Taking interval11 as an example, interval11 is divided into four layer 3 intervals interval111, interval112, interval113 and interval114; the other intervals are similar and omitted here. As such, the scene-type topic interaction log is divided into 3 layers of visualization intervals.
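The quartering described above can be sketched as a quadtree-style subdivision. The sketch uses a simplified path-based naming close to the example (children of interval1 are interval11 to interval14, children of interval11 are interval111 to interval114); rectangle coordinates are illustrative.

```python
# Sketch of the N-layer division above: each interval is quartered into
# four child intervals whose names extend the parent's name by one digit.
# Rectangles are (x0, y0, x1, y1) in the log's visual plane.

def quarter(name, rect):
    """Split one interval into four named quadrant intervals."""
    x0, y0, x1, y1 = rect
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return {
        name + "1": (x0, y0, mx, my), name + "2": (mx, y0, x1, my),
        name + "3": (x0, my, mx, y1), name + "4": (mx, my, x1, y1),
    }

layer2 = quarter("interval1", (0, 0, 8, 8))    # interval11 .. interval14
layer3 = {}
for name, rect in layer2.items():
    layer3.update(quarter(name, rect))         # interval111 .. interval144
```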
Of course, the above division of the scene-type topic interaction log is only an example for ease of understanding; the intervals of an actual scene-type topic interaction log, such as that of a topic platform or a topic network, differ, so the actual division may involve more layers.
The visual log paragraphs are only convenient to divide, and actually, all visual intervals of the scene-type topic interaction log can also be divided in other manners, and the specific dividing manner is not limited.
Based on N layers of visual intervals obtained by classifying the scene type topic interaction logs, a hierarchical visual topology can be established, wherein an original topological unit of the hierarchical visual topology corresponds to all visual intervals of the scene type topic interaction logs, an x-th layer of topological units in the hierarchical visual topology corresponds to an x-th layer of visual intervals in the scene type topic interaction logs, and x is greater than or equal to 1 and less than or equal to N.
To better understand how the hierarchical visualization topology is established, it is illustrated here as a tree model. The original topological unit of the hierarchical visualization topology, namely the layer 1 topological unit, corresponds to all visualization intervals interval10 of the scene-type topic interaction log. Its four downstream topological units, namely the layer 2 topological units, correspond to the layer 2 visualization intervals interval11, interval12, interval13 and interval14, respectively. The downstream topological units of each layer 2 topological unit correspond to layer 3 visualization intervals; taking interval11 as an example, its four downstream topological units correspond to the four layer 3 visualization intervals interval111, interval112, interval113 and interval114.
The topological units in the last layer of the hierarchical visualization topology are the terminal topological units; in this example the layer 3 topological units, i.e., the topological units respectively corresponding to the four layer 3 visualization intervals interval111, interval112, interval113 and interval114, are the terminal topological units of the hierarchical visualization topology.
When target terminal topological units are searched from the hierarchical visualization topology, the search starts from the original topological unit of the hierarchical visualization topology, and the topological units of the hierarchical visualization topology tree are sequentially accessed layer by layer in a top-down strategy. When the visualization interval corresponding to a sequentially accessed topological unit overlaps with the first visualization log paragraph (the visualization log paragraph covering the search interval), the downstream topological units of that topological unit continue to be sequentially accessed, until the terminal topological units overlapping with the first visualization log paragraph are found in the hierarchical visualization local topology tree that takes that topological unit as its original topological unit. When the visualization interval corresponding to a sequentially accessed topological unit does not overlap with the first visualization log paragraph, the hierarchical visualization local topology tree taking that topological unit as its original topological unit does not need to be accessed further, so the number of topological units to be sequentially accessed can be greatly reduced.
Through the above sequential access manner (which may be understood as a traversal), all terminal topological units whose corresponding visualization intervals overlap with the first visualization log paragraph can be found, and these terminal topological units are all taken as target terminal topological units. The search interval of the topic behavior map unit is distributed within the visualization intervals corresponding to all the target terminal topological units.
In this embodiment, the hierarchical visualization topology is utilized to quickly find the region where the search interval of the topic behavior map unit is located: if there is no overlap between the visualization interval corresponding to a topological unit and the first visualization log paragraph, then the visualization intervals corresponding to the downstream topological units of that topological unit will not overlap with the first visualization log paragraph either, so the hierarchical visualization local topology tree taking that topological unit as its original topological unit does not need to be sequentially accessed. This reduces the number of visualization intervals to be contrastively analyzed against the first visualization log paragraph; since the whole data volume of the scene-type topic interaction log is large, adopting the above technique saves a large amount of useless computing resource overhead and reduces the analysis time.
In addition, judging whether the first visualization log paragraph overlaps with a visualization interval is more convenient than judging whether a circular (or otherwise shaped) search interval overlaps with a visualization interval, which further reduces the computing resource overhead of the judgment.
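The layer-by-layer pruning described above behaves like a query against a spatial index. Purely as an illustration, and with all names (`TopologyUnit`, `find_target_terminals`) being assumptions of this sketch rather than identifiers of the method, a visualization interval can be modeled as an axis-aligned box `(x0, y0, x1, y1)` and the visualization log paragraph covering the search interval as another such box:

```python
class TopologyUnit:
    def __init__(self, box, children=None):
        self.box = box                   # visualization interval as (x0, y0, x1, y1)
        self.children = children or []   # empty list => terminal topology unit

def boxes_overlap(a, b):
    # axis-aligned overlap test between two visualization log paragraphs/intervals
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def find_target_terminals(unit, query_box, out=None):
    """Top-down sequential access that prunes every local topology tree whose
    visualization interval does not overlap the query paragraph."""
    if out is None:
        out = []
    if not boxes_overlap(unit.box, query_box):
        return out                       # whole subtree skipped in one comparison
    if not unit.children:
        out.append(unit)                 # overlapping terminal topology unit found
    for child in unit.children:
        find_target_terminals(child, query_box, out)
    return out
```

A subtree whose interval misses the query box is rejected with a single comparison, which is where the reduction in sequentially accessed topological units comes from.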
In step S1015, an undetermined visualization area corresponding to the topic behavior map unit is determined from the visualization intervals corresponding to all the target terminal topology units.
When determining the to-be-determined visualization areas, all visualization areas within the visualization intervals corresponding to all target terminal topological units can be sequentially accessed, and whether each sequentially accessed visualization area overlaps with the search interval is judged; if so, the sequentially accessed visualization area is a to-be-determined visualization area.
In order to reduce the computational resource overhead when determining the to-be-determined visualization region, in step S1015, the determining the to-be-determined visualization region corresponding to the topic behavior map unit from the visualization intervals corresponding to all target end topology units may include the following steps:
s10151: aiming at each target terminal topological unit, determining at least one template visualization area from a visualization interval corresponding to the target terminal topological unit; a second visualization log paragraph corresponding to the template visualization area overlaps with the first visualization log paragraph, and the second visualization log paragraph corresponding to the template visualization area is a visualization log paragraph covering the template visualization area and positioned in the scene-type topic interaction log;
s10152: and judging whether the template visualization area is overlapped with the searching interval or not aiming at each template visualization area, and if so, determining that the template visualization area is an undetermined visualization area corresponding to the topic behavior map unit.
When determining a template visualization region (reference visualization region), a second visualization log paragraph corresponding to each visualization region in a visualization interval corresponding to a target terminal topology unit may be determined first, where the second visualization log paragraph may be, but is not limited to, a smallest visualization log paragraph covering the visualization region; then, for each second visualization log paragraph, whether the second visualization log paragraph overlaps with the first visualization log paragraph is determined, and if so, the visualization area covered by the second visualization log paragraph is a template visualization area.
The template visualization area is determined by judging whether a second visualization log paragraph covering the template visualization area overlaps with a first visualization log paragraph covering the search interval, but the overlapping of the second visualization log paragraph with the first visualization log paragraph does not represent the overlapping of the template visualization area with the search interval.
For example, suppose region100 is a search interval, the first visualization log paragraph corresponding to the search interval region100 is joint_section10, and Visualization100 is a visualization area.
Further, the second visualization log paragraph corresponding to the visualization area Visualization100 is joint_section20. The first visualization log paragraph joint_section10 overlaps with the second visualization log paragraph joint_section20, so the visualization area Visualization100 is a template visualization area; however, the visualization area Visualization100 does not overlap with the search interval region100.
Therefore, on the premise that a template visualization area has been found, whether the template visualization area overlaps with the search interval is further judged; if so, the template visualization area is determined to be the to-be-determined visualization area corresponding to the topic behavior map unit.
In this way, the template visualization areas are first found by judging whether the visualization log paragraphs overlap, and then whether each template visualization area overlaps with the search interval is judged.
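Assuming, for illustration only, that visualization log paragraphs are axis-aligned bounding boxes and that the search interval is circular (the shape mentioned above), the two-stage filtering of steps s10151 and s10152 could be sketched as follows; `pending_areas` and the box layout are assumptions of this sketch:

```python
def bbox_overlap(a, b):
    # cheap test between two axis-aligned visualization log paragraphs
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def circle_intersects_box(cx, cy, r, box):
    # exact test between a circular search interval and a rectangular area
    nx = min(max(cx, box[0]), box[2])    # closest point of the box to the centre
    ny = min(max(cy, box[1]), box[3])
    return (nx - cx) ** 2 + (ny - cy) ** 2 <= r ** 2

def pending_areas(areas, cx, cy, r):
    """Stage 1 (s10151): keep areas whose paragraph overlaps the first paragraph.
    Stage 2 (s10152): keep template areas that truly intersect the search interval."""
    first_paragraph = (cx - r, cy - r, cx + r, cy + r)  # covers the search interval
    templates = [a for a in areas if bbox_overlap(first_paragraph, a)]
    return [a for a in templates if circle_intersects_box(cx, cy, r, a)]
```

An area can pass the cheap paragraph test yet fail the exact test, which is precisely the Visualization100 situation in the example above.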
In existing approaches, in the process of determining the to-be-determined behavior state unit or the to-be-determined visualization area corresponding to a topic behavior map unit, the visualization areas in the entire scene type topic are contrastively analyzed against the search interval of the topic behavior map unit to determine whether the two intersect; since the scene type topic has a large data volume, many invalid operations are involved and much time is consumed. In contrast, first contrastively analyzing the minimum enclosing area of the search interval against the minimum enclosing area of each visualization area to determine all template visualization areas, and then further determining from the template visualization areas the visualization areas intersecting the search interval, can greatly reduce the computing resource overhead and further reduce the time consumption.
The template visualization area could be used directly as the to-be-determined visualization area corresponding to the topic behavior map unit. Generally, however, the set description of the visualization area in which the behavior detection target is located is not opposite to the behavior transmission description of the behavior detection target.
Based on this, the template visualization areas can be optimized to obtain the to-be-determined visualization area corresponding to the topic behavior map unit. To this end, in step S10152, the determining that the template visualization area is the to-be-determined visualization area corresponding to the topic behavior map unit further includes the following steps:
s101521: acquiring a behavior transmission description corresponding to the topic behavior map unit;
s101522: determining the description of the template visualization area set in the scene-type topic interaction log generated in advance;
s101523: and judging whether the quantitative difference value between the behavior transmission description corresponding to the topic behavior map unit and the set description of the template visualization area is less than or equal to a set quantitative difference value or not, and if so, determining the template visualization area as the to-be-determined visualization area corresponding to the topic behavior map unit.
In some possible embodiments, the data collected by the behavior detection thread carries, in addition to the topic behavior map units (spatial domain information) of the behavior detection target, the behavior transmission description (behavior development path or behavior trend path) of the behavior detection target at each topic behavior map unit, so the behavior transmission description corresponding to a topic behavior map unit can be obtained from the source information of the current visual behavior record. Of course, when the description information is missing, the behavior transmission description may be calculated using the previous topic behavior map unit and the current topic behavior map unit, or the current topic behavior map unit and the next topic behavior map unit.
Each visualization area in the scene-type topic interaction log is provided with a set description, namely the description of the path from the starting unit to the terminating unit of the visualization area. For example, the set description of a visualization area corresponding to a non-interactive behavior is the path description of that non-interactive behavior; an interactive behavior may correspond to two visualization areas with reciprocal path descriptions, and accordingly the set description of each such visualization area is the path description of the counterpart topic's interaction in the interactive behavior.
If the quantitative difference value (for example, the path description difference degree) between the set description of the template visualization area and the behavior transmission description corresponding to the topic behavior map unit is less than or equal to the set quantitative difference value, the template visualization area is determined to be a to-be-determined visualization area; otherwise, the probability that the template visualization area is the visualization area in which the behavior detection target is located is considered extremely low, and the template visualization area is determined not to be a to-be-determined visualization area.
The set quantitative difference value can be preset as required. To avoid omitting the correct visualization area because an abnormal topic behavior map unit makes the quantitative difference value too large, the set quantitative difference value can be set relatively large; for example, it can be set to 0.9 within the interval 0~1. This is, of course, merely an example and is not intended to be limiting.
When the search interval corresponding to the topic behavior map unit is loosely constrained (i.e., large), the number of template visualization areas is large, but there is only one correct visualization area; optimizing the template visualization areas therefore yields a smaller number of to-be-determined visualization areas, which avoids a large amount of invalid operation processing when the visualization areas are subsequently aligned and further reduces time consumption.
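One hedged reading of steps s101521 to s101523 is that the behavior transmission description and the set description are path directions, and the quantitative difference value is their normalized angular difference in the interval 0~1 (so the 0.9 example threshold applies directly). The direction-vector representation below is an assumption of this sketch:

```python
import math

def direction_difference(v1, v2):
    """Normalized difference between two path descriptions given as direction
    vectors: 0.0 for identical directions, 1.0 for exactly opposite ones."""
    a1 = math.atan2(v1[1], v1[0])
    a2 = math.atan2(v2[1], v2[0])
    d = abs(a1 - a2) % (2 * math.pi)
    if d > math.pi:
        d = 2 * math.pi - d
    return d / math.pi

def keep_template_area(behavior_desc, area_desc, set_value=0.9):
    # the set quantitative difference value is deliberately large so that an
    # abnormal topic behavior map unit does not exclude the correct area
    return direction_difference(behavior_desc, area_desc) <= set_value
```

Under this reading, only near-opposite descriptions (difference above 0.9) are rejected, matching the rationale that a region's description is generally not opposite to the target's behavior description.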
In step S101, after finding the to-be-determined visualization region corresponding to the topic behavior map unit, a behavior state mapping unit that maps the topic behavior map unit to the to-be-determined visualization region may be determined, and the to-be-determined behavior state unit corresponding to the topic behavior map unit is determined in the scene type topic interaction log generated in advance by the behavior state mapping unit.
The behavior state mapping unit to which the topic behavior map unit is mapped on the to-be-determined visualization area may be, for example, an intersection point on the behavior path corresponding to the to-be-determined visualization area. On this basis, determining, through the behavior state mapping unit, the undetermined behavior state unit corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance may include:
if the behavior state mapping unit is in the undetermined visualization area, determining the behavior state mapping unit as the undetermined behavior state unit;
and if the behavior state mapping unit is not in the undetermined visualization area, taking the behavior state unit in the undetermined visualization area that is closest to the behavior state mapping unit as the undetermined behavior state unit.
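Treating, purely for illustration, the behavior state units of an undetermined visualization area as candidate points with 2-D coordinates (an assumption of this sketch), the two-branch rule above might look like:

```python
def pending_state_unit(mapping_unit, area_units):
    """If the behavior state mapping unit coincides with a unit of the pending
    visualization area, use it directly; otherwise fall back to the area unit
    closest to the mapping unit."""
    if mapping_unit in area_units:
        return mapping_unit
    return min(area_units,
               key=lambda u: (u[0] - mapping_unit[0]) ** 2 + (u[1] - mapping_unit[1]) ** 2)
```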
In some independently implementable technical solutions, in step S300, the determining an optimized operation behavior trajectory between two undetermined behavior state units in a undetermined behavior state unit group through scene type topic invocation information which is subjected to statistics in advance includes:
s301: aiming at each undetermined behavior state unit group, searching a target visualization area in scene type topic calling information which is counted in advance through two undetermined behavior state units in the undetermined behavior state unit group;
s302: and determining an optimized operation behavior track between two undetermined behavior state units in the undetermined behavior state unit group through the found target visualization area.
When two adjacent topic behavior map units are far apart, there are multiple optional operation behavior tracks between the undetermined behavior state unit corresponding to one topic behavior map unit and the undetermined behavior state unit of the next topic behavior map unit, and multiple overlapping visualization areas exist among the different operation behavior tracks; if these visualization areas are repeatedly searched, a large amount of computation time is consumed.
The scene type topic calling information is used when searching for the target visualization area, so that when a repeated visualization area (namely a visualization area searched before) is encountered, the corresponding operation behavior track information has already been summarized in the scene type topic calling information; no extra operation processing is needed at that point, which saves the computation time of repeatedly searching the same visualization area.
Under some optional and independently implementable design ideas, in step S301, the finding out a target visualization area in the scene type topic invocation information subjected to statistics in advance through two pending behavior state units in the pending behavior state unit group may include the following steps:
s3011: determining a first search auxiliary unit and a second search auxiliary unit in the scene type topic interaction log generated in advance through a first undetermined behavior state unit and a second undetermined behavior state unit in the undetermined behavior state unit group; the first search auxiliary unit is the behavior state unit which is located in the undetermined visualization area where the first undetermined behavior state unit is located and has the shortest distance to the second undetermined behavior state unit; the second search auxiliary unit is the behavior state unit which is located in the undetermined visualization area where the second undetermined behavior state unit is located and has the shortest distance to the first undetermined behavior state unit; the first undetermined behavior state unit is the undetermined behavior state unit of the first topic behavior map unit in the topic behavior map unit group corresponding to the undetermined behavior state unit group; the second undetermined behavior state unit is the undetermined behavior state unit of the second topic behavior map unit in that topic behavior map unit group, the second topic behavior map unit being associated with the first topic behavior map unit in the current visual behavior record and located after the first topic behavior map unit;
s3012: and finding out a target visual area in the scene type topic calling information which is counted in advance through the first searching auxiliary unit and the second searching auxiliary unit.
For example, assume element1 is a first topic behavior map unit, element2 is a second topic behavior map unit, element2 and element1 are two associated topic behavior map units, and element2 is located after element1. The pending behavior state unit (first pending behavior state unit) of element1 is pending1, and pending4 is the pending behavior state unit (second pending behavior state unit) of element2; pending1 and pending4 form a pending behavior state unit group. condition2 is the behavior state unit, within the pending visualization area condition1condition2 where pending1 is located, that is closest to pending4, so condition2 is determined as the first search auxiliary unit; condition7 is the behavior state unit, within the pending visualization area condition7condition8 where pending4 is located, that is closest to pending1, so condition7 is determined as the second search auxiliary unit.
Through the first search auxiliary unit condition2 and the second search auxiliary unit condition7, a target visualization area is found in the scene type topic calling information counted in advance; the target visualization area may be a visualization area covered by the optimized operation behavior track among all the operation behavior tracks from the first search auxiliary unit condition2 to the second search auxiliary unit condition7.
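Assuming behavior state units carry 2-D coordinates (an assumption of this sketch, with illustrative names), the choice of the two search auxiliary units in step s3011 amounts to two nearest-unit lookups:

```python
def dist2(p, q):
    # squared Euclidean distance; sufficient for nearest-unit comparisons
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def search_auxiliary_units(first_area_units, second_area_units,
                           first_pending, second_pending):
    """First auxiliary unit: the unit of the area holding the first pending unit
    that is nearest to the second pending unit; the second auxiliary unit is
    chosen symmetrically from the other area."""
    first_aux = min(first_area_units, key=lambda u: dist2(u, second_pending))
    second_aux = min(second_area_units, key=lambda u: dist2(u, first_pending))
    return first_aux, second_aux
```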
In some optional and independently implementable design considerations, in step S3012, the finding a target visualization area in the scene-type topic invocation information subjected to statistics in advance by the first search assisting unit and the second search assisting unit may include the following steps:
s30121: determining one of the first indication path and the second indication path as a current visual searching indication path; the first indication path is the description from the first search auxiliary unit to the second search auxiliary unit, and the second indication path is the description from the second search auxiliary unit to the first search auxiliary unit;
s30122: taking the query trigger state unit of the current visual search indication path as the current behavior state unit; if the current behavior state unit has not been counted in the scene type topic calling information, recording the current behavior state unit and the current visual search indication path corresponding to the current behavior state unit into the scene type topic calling information;
s30123: searching for the associated behavior state units of the current behavior state unit in the scene type topic interaction log generated in advance, and sequentially accessing the found associated behavior state units; if a sequentially accessed associated behavior state unit has been summarized in the scene type topic calling information, finding, through the visual search indication path corresponding to that associated behavior state unit recorded in the scene type topic calling information, a visualization area on the current visual search indication path as the target visualization area; if a sequentially accessed associated behavior state unit has not been counted in the scene type topic calling information, recording the associated behavior state unit, the current visual search indication path corresponding to the associated behavior state unit, and the visualization area between the current behavior state unit and the associated behavior state unit into the scene type topic calling information;
s30124: if the target visualization area is not determined when the sequential access is terminated, selecting one behavior state unit from all the associated behavior state units of the current behavior state unit as a query trigger state unit of the current visualization search indication path; and then updating the current visual searching indication path to be the other one of the first indication path and the second indication path, and returning to the operation of taking the query trigger state unit of the current visual searching indication path as the current behavior state unit.
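Steps s30121 to s30124 read like an alternating bidirectional best-first search that shares one memo (the scene type topic calling information). The sketch below is one possible interpretation under assumed coordinates and adjacency, with all names illustrative: when one indication path reaches a unit already recorded by the opposite indication path, the two frontiers have met, and the visualization area between the meeting units serves as a target visualization area.

```python
import math

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def find_target_area(graph, pos, first_aux, second_aux):
    """Alternating search from both auxiliary units sharing one memo (the
    "calling information").  graph: unit -> list of associated units;
    pos: unit -> (x, y).  Returns the pair of units whose visualization
    area is a target area once the two directions meet."""
    memo = {}                                   # unit -> indication path (0 or 1)
    trigger = {0: first_aux, 1: second_aux}     # current query trigger per path
    goal = {0: second_aux, 1: first_aux}        # query termination per path
    g = {first_aux: 0.0, second_aux: 0.0}       # searched track length so far
    path = 0
    for _ in range(10 * len(graph)):            # safety bound for the sketch
        cur = trigger[path]
        memo.setdefault(cur, path)              # record under the current path
        best, best_f = None, math.inf
        for nxt in graph[cur]:
            if memo.get(nxt, path) != path:     # recorded by the opposite path:
                return (cur, nxt)               # frontiers met => target area
            g_next = g[cur] + euclid(pos[cur], pos[nxt])
            g.setdefault(nxt, g_next)
            memo.setdefault(nxt, path)
            # first evaluation (searched length) + second evaluation (Euclidean)
            f = g_next + euclid(pos[nxt], pos[goal[path]])
            if f < best_f:
                best, best_f = nxt, f
        trigger[path] = best                    # lowest-evaluation unit triggers next
        path ^= 1                               # switch to the other indication path
    return None
```

With the condition1~condition8 layout of the worked example that follows, this sketch meets on the edge between condition4 and condition6, matching the point at which the example determines the target visualization area.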
For example, assume that the first indication path is the description from the first search auxiliary unit condition2 to the second search auxiliary unit condition7, and the second indication path is the description from the second search auxiliary unit condition7 to the first search auxiliary unit condition2. The query trigger state unit of the first indication path is the first search auxiliary unit condition2, and the query trigger state unit of the second indication path is the second search auxiliary unit condition7.
The current visual search indication path may be either the first indication path or the second indication path; the following description takes the first indication path as the current visual search indication path as an example. Of course, the first indication path is determined as the current visual search indication path only when the search starts, and the current visual search indication path is updated as the search proceeds.
Since the current visual search indication path is from the first search auxiliary unit condition2 to the second search auxiliary unit condition7, the query trigger state unit of the current visual search indication path is the first search auxiliary unit condition2; the first search auxiliary unit condition2 is taken as the current behavior state unit, and whether the first search auxiliary unit condition2 has been summarized in the scene type topic calling information is judged.
At this time, since the first search auxiliary unit condition2 is accessed for the first time and has not been counted in the scene type topic calling information, the first search auxiliary unit condition2 and its corresponding current visual search indication path (the first indication path) are recorded into the scene type topic calling information.
The associated behavior state units of the first search auxiliary unit condition2, namely condition1, condition3, condition4 and condition5, are found in the scene type topic interaction log generated in advance, and the associated behavior state units condition1, condition3, condition4 and condition5 are sequentially accessed. Through this sequential access it is determined that none of the associated behavior state units condition1, condition3, condition4 and condition5 has been counted in the scene type topic calling information, so the associated behavior state units condition1, condition3, condition4 and condition5 and their corresponding current visual search indication path (the first indication path) are recorded into the scene type topic calling information, together with the visualization area condition1condition2 between the associated behavior state unit condition1 and the first search auxiliary unit condition2, the visualization area condition3condition2 between the associated behavior state unit condition3 and the first search auxiliary unit condition2, the visualization area condition2condition4 between the first search auxiliary unit condition2 and the associated behavior state unit condition4, and the visualization area condition2condition5 between the first search auxiliary unit condition2 and the associated behavior state unit condition5.
When the sequential access to the associated behavior state units condition1, condition3, condition4 and condition5 is terminated, and the target visual area is not determined, one behavior state unit is selected from all the associated behavior state units condition1, condition3, condition4 and condition5 of the first search assisting unit condition2 as the query trigger state unit of the current visual search indication path (first indication path).
Under some optional and independently implementable design considerations, in step S30124, the selecting one behavior state unit from all associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path may include the following steps: determining operation behavior track quantitative evaluation corresponding to each associated behavior state unit, wherein the operation behavior track quantitative evaluation is the sum of a first behavior track quantitative evaluation from a query trigger state unit of a current visual search indication path to the associated behavior state unit and a second behavior track quantitative evaluation from the associated behavior state unit to a query termination state unit of the current visual search indication path; selecting a related behavior state unit with the lowest quantitative evaluation of the corresponding operation behavior track; and determining the selected associated behavior state unit as a query trigger state unit of the current visual search indication path.
The first behavior track quantitative evaluation is an attribute of the operation behavior track from the query trigger state unit of the current visual search indication path to the associated behavior state unit; since this operation behavior track has already been searched, the attribute can be represented by the length of the operation behavior track. The second behavior track quantitative evaluation is an attribute of the operation behavior track from the associated behavior state unit to the query termination state unit of the current visual search indication path; this track has not yet been searched, so the attribute can be represented by the Euclidean metric between the associated behavior state unit and the query termination state unit of the current visual search indication path.
When the associated behavior state unit is called for the first time, the quantitative evaluation of the operation behavior track corresponding to the associated behavior state unit can also be recorded into the scene type topic calling information, so that for the associated behavior state unit which is already summarized in the scene type topic calling information, the quantitative evaluation of the operation behavior track corresponding to the associated behavior state unit can be directly obtained from the scene type topic calling information.
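The selection rule combines the length of the already-searched track with a Euclidean estimate of the remaining track, i.e. an f = g + h style evaluation familiar from best-first search. A minimal sketch (coordinates and container shapes are assumptions of this sketch):

```python
import math

def track_evaluation(searched_len, unit, termination, pos):
    """first evaluation: length of the operation behavior track already searched
    to the unit; second evaluation: Euclidean metric from the unit to the query
    termination state unit (the part not yet searched)."""
    first = searched_len
    second = math.hypot(pos[unit][0] - pos[termination][0],
                        pos[unit][1] - pos[termination][1])
    return first + second

def select_query_trigger(assoc_units, searched_lens, termination, pos):
    # the associated unit with the lowest evaluation becomes the next trigger
    return min(assoc_units,
               key=lambda u: track_evaluation(searched_lens[u], u, termination, pos))
```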
Continuing the foregoing example, the operation behavior track quantitative evaluations from each of the associated behavior state units condition1, condition3, condition4 and condition5 of the first search auxiliary unit condition2 to the second search auxiliary unit condition7 can be calculated, and the associated behavior state unit with the lowest operation behavior track quantitative evaluation is selected from the associated behavior state units condition1, condition3, condition4 and condition5 as the query trigger state unit of the current visual search indication path (the first indication path); for example, the associated behavior state unit condition4 is selected as the query trigger state unit of the current visual search indication path (the first indication path).
And then, updating the current visual searching indication path into a second indication path, and returning to the operation of taking the query trigger state unit of the current visual searching indication path as a current behavior state unit.
The current visual search indication path is updated to the second indication path; the query trigger state unit of the second indication path is the second search auxiliary unit condition7, and the query termination state unit of the second indication path is the first search auxiliary unit condition2. The second search auxiliary unit condition7 is taken as the current behavior state unit, and whether the second search auxiliary unit condition7 has been summarized in the scene type topic calling information is judged.
At this time, since the second search auxiliary unit condition7 is accessed for the first time and has not been counted in the scene type topic calling information, the second search auxiliary unit condition7 and its corresponding current visual search indication path (the second indication path) are recorded into the scene type topic calling information.
The associated behavior state units of the second search auxiliary unit condition7, namely condition6 and condition8, are found in the scene type topic interaction log generated in advance, and the associated behavior state units condition6 and condition8 are sequentially accessed. Through this sequential access it is determined that neither condition6 nor condition8 has been counted in the scene type topic calling information, so the associated behavior state units condition6 and condition8 and their corresponding current visual search indication path (the second indication path) are recorded into the scene type topic calling information, together with the visualization area condition6condition7 between the associated behavior state unit condition6 and the second search auxiliary unit condition7 and the visualization area condition7condition8 between the second search auxiliary unit condition7 and the associated behavior state unit condition8.
If the target visualization area has not been determined when the sequential access terminates, one behavior state unit is selected from all the associated behavior state units condition6 and condition8 of the second search auxiliary unit condition7 as the query trigger state unit of the current visual search indication path (the second indication path). The selection is similar to that described above: the operation behavior track quantitative evaluation corresponding to the associated behavior state unit condition6 is determined, for example as the sum of the length of the operation behavior track from the second search auxiliary unit condition7 to the associated behavior state unit condition6 (the length of the visualization area condition6condition7) and the Euclidean metric between the associated behavior state unit condition6 and the first search auxiliary unit condition2; the operation behavior track quantitative evaluation corresponding to the associated behavior state unit condition8 is determined, for example as the sum of the length of the operation behavior track from the second search auxiliary unit condition7 to the associated behavior state unit condition8 (the length of the visualization area condition7condition8) and the Euclidean metric between the associated behavior state unit condition8 and the first search auxiliary unit condition2. When the associated behavior state units condition6 and condition8 are accessed for the first time, their corresponding operation behavior track quantitative evaluations are also recorded into the scene type topic calling information.
The associated behavior state unit with the lowest operation behavior trajectory quantitative evaluation is selected from all the associated behavior state units condition6 and condition8 as the query trigger state unit of the current visual search indication path (the second indication path); for example, the associated behavior state unit condition6 is selected as the query trigger state unit of the current visual search indication path (the second indication path).
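A minimal sketch of this selection, assuming hypothetical coordinates and trajectory lengths (the quantitative evaluation is the trajectory length accumulated so far plus the Euclidean metric to the query termination unit, akin to an A*-style cost):

```python
import math

# Hypothetical values: positions of the candidate units and the trajectory
# length from condition7 to each candidate; condition2 is the termination unit.
def quantitative_evaluation(track_len, pos, goal_pos):
    g = track_len                     # operation behavior trajectory length so far
    h = math.dist(pos, goal_pos)      # Euclidean metric to the termination unit
    return g + h

positions = {"condition6": (2.0, 1.0), "condition8": (5.0, 4.0)}
track_len = {"condition6": 3.0, "condition8": 3.5}
goal = (0.0, 0.0)                     # first search assisting unit condition2

scores = {u: quantitative_evaluation(track_len[u], positions[u], goal)
          for u in ("condition6", "condition8")}
trigger = min(scores, key=scores.get)  # lowest evaluation becomes the trigger unit
```

With these assumed values, condition6 yields the lower evaluation and is selected, matching the example in the text.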
And then, updating the current visual searching indication path into a first indication path, and returning to the operation of taking the query trigger state unit of the current visual searching indication path as a current behavior state unit.
The current visual search indication path is updated to the first indication path, the query trigger status unit of the first indication path is behavior status unit condition4, the query termination status unit of the first indication path is first search auxiliary unit condition2, the behavior status unit condition4 is taken as the current behavior status unit, and whether the behavior status unit condition4 is summarized in the scene type topic invocation information is judged.
Since the behavior status unit condition4 is called for the second time, the behavior status unit condition4 has already been summarized in the scene-type topic invocation information, so there is no need to re-record the behavior status unit condition4 and related information into the scene-type topic invocation information.
The associated behavior state units of the behavior state unit condition4, i.e., condition2 and condition6, are found in the scene-type topic interaction log generated in advance, and the associated behavior state units condition2 and condition6 are sequentially accessed. Since the associated behavior state units condition2 and condition6 have already been summarized in the scene-type topic calling information, when they are sequentially accessed, the visualization area located on the current visual search indication path needs to be found, through the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic calling information, as the target visualization area.
In some optional and independently implementable design considerations, in step S30123, the finding, through the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic calling information, of the visualization area located on the current visual search indication path as the target visualization area may include:
comparing and analyzing a visual searching indication path corresponding to the associated behavior state unit recorded in the scene type topic calling information with the current visual searching indication path;
and when the comparison analysis result represents that the visual searching indication path corresponding to the associated behavior state unit is different from the current visual searching indication path, acquiring a visual area positioned on the current visual searching indication path from the scene type topic calling information as the target visual area.
Under some optional and independently implementable design ideas, the visual area on the current visual search indication path is acquired from the scene-type topic invocation information to serve as the target visual area, and then the sequential access can be terminated. When the visual search indication path corresponding to the associated behavior state unit recorded in the scene type topic calling information is the same as the current visual search indication path, continuing to sequentially access the next associated behavior state unit until all associated behavior state units of the current behavior state unit are sequentially accessed.
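The comparison and early termination described above can be sketched as follows; the data layout and function name are hypothetical stand-ins for the scene-type topic calling information:

```python
# Hypothetical sketch: during sequential access, compare the indication path
# recorded for each associated unit with the current one. A differing path
# means the two searches have met, so the memoized visualization areas on the
# current path are returned as the target visualization area and access stops.

def find_target_area(calling_info, neighbors, current_path):
    for unit in neighbors:                        # sequential access order
        recorded_path = calling_info["units"].get(unit)
        if recorded_path is not None and recorded_path != current_path:
            return calling_info["areas"][current_path]   # target found; terminate
    return None                                   # paths all match; keep accessing

calling_info = {
    "units": {"condition2": "path1", "condition6": "path2"},
    "areas": {"path1": ["condition2condition4", "condition4condition6",
                        "condition6condition7"]},
}
target = find_target_area(calling_info, ["condition2", "condition6"], "path1")
```

Here condition2 was recorded on the same path and is skipped, while condition6's differing path triggers retrieval of the target visualization areas, as in the walkthrough below.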
For example, when the associated behavior state units condition2 and condition6 are sequentially accessed, condition2 is accessed first and condition6 is accessed next: upon sequentially accessing the associated behavior state unit condition2, it is determined that condition2 has been summarized in the scene-type topic calling information, and the visual search indication path (the first indication path) corresponding to condition2 recorded in the scene-type topic calling information is the same as the current visual search indication path (the first indication path), so sequential access continues with the associated behavior state unit condition6.
Upon sequentially accessing the associated behavior status unit condition6, it is determined that the associated behavior status unit condition6 has been summarized in the scene-type topic calling information, and the visual search instruction path (second instruction path) corresponding to the associated behavior status unit condition6 described in the scene-type topic calling information is different from the current visual search instruction path (first instruction path), so it is possible to acquire the visual region located on the current visual search instruction path from the scene-type topic calling information as the target visual region, and terminate the sequential access.
It can be understood that, since the scene-type topic calling information summarizes the visualization areas condition1condition2, condition3condition2, condition2condition4, condition4condition6, condition6condition7 and condition7condition8, the visualization areas from the first search assisting unit condition2 to the second search assisting unit condition7 (or, equivalently, from the second search assisting unit condition7 to the first search assisting unit condition2), namely condition2condition4, condition4condition6 and condition6condition7, can be obtained directly from the scene-type topic calling information.
Further, these condition2condition4, condition4condition6, and condition6condition7 are set as target visualization areas.
In step S302, an optimized operation behavior trajectory between two undetermined behavior state units in the undetermined behavior state unit group is determined through the found target visualization region.
For example, the visualization region condition1condition2 where the first pending behavior state unit pending1 is located, the target visualization region condition2condition4, condition4condition6, condition6condition7, and the visualization region condition7condition8 where the second pending behavior state unit pending2 is located may be sequentially spliced (the same behavior state unit of the two visualization regions serves as a splicing point), so as to obtain the required optimized operation behavior trajectory.
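A sketch of the splicing step, representing each visualization area as a pair of behavior state units (representation and function name are assumptions):

```python
# Hypothetical sketch: splice consecutive visualization areas into one
# optimized operation behavior trajectory. Adjacent areas share a behavior
# state unit, which serves as the splicing point, so the duplicated endpoint
# is emitted only once.

def splice(areas):
    trajectory = list(areas[0])
    for area in areas[1:]:
        assert trajectory[-1] == area[0], "areas must share a splicing point"
        trajectory.extend(area[1:])               # skip the shared unit
    return trajectory

areas = [("condition1", "condition2"),            # area where pending1 is located
         ("condition2", "condition4"),            # target visualization areas
         ("condition4", "condition6"),
         ("condition6", "condition7"),
         ("condition7", "condition8")]            # area where pending2 is located
track = splice(areas)
```

The result runs from condition1 to condition8 with each splicing unit appearing exactly once.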
Since the above describes the case in which condition2 and condition8 are called for the first time, the visualization areas condition1condition2, condition3condition2, condition2condition4, condition4condition6, condition6condition7 and condition7condition8 are added to the scene-type topic calling information for the first time during the search. If condition2 or condition8 is later called a second time (for example, when a search is performed for another undetermined behavior state unit group corresponding to the same topic behavior map unit group), information related to a visualization area that has already been passed, such as the operation behavior trajectory quantitative evaluation of condition2condition4, need not be computed again and can be obtained directly from the operation behavior trajectory calling information.
In the above manner, num2 optimized operation behavior tracks can be obtained for each topic behavior map unit group, where num2 is the number of undetermined behavior state unit groups corresponding to the topic behavior map unit group. One optimized operation behavior track needs to be selected from these num2 tracks to serve as the optimized operation behavior track matched with the topic behavior map unit group, so that the target operation behavior track matched with the current visual behavior record can then be determined.
In some independently implementable technical solutions, in step S400, the determining, in the scene-type topic interaction log generated in advance, of a target operation behavior trajectory adapted to the current visual behavior record through the optimized operation behavior trajectory between the two undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group may include: for each topic behavior map unit group, executing the following steps: for each undetermined behavior state unit group corresponding to the topic behavior map unit group, importing the optimized operation behavior track between the two undetermined behavior state units in the undetermined behavior state unit group into a pre-trained AI neural network to obtain at least one importance degree variable of a set key content possibility corresponding to the undetermined behavior state unit group; selecting one undetermined operation behavior track from the optimized operation behavior tracks between the two undetermined behavior state units in each undetermined behavior state unit group through the at least one importance degree variable of the set key content possibility corresponding to each undetermined behavior state unit group; and carrying out global fusion on all the undetermined operation behavior tracks according to a preset behavior track splicing strategy to obtain the target operation behavior track.
In some embodiments, each undetermined behavior state unit group corresponds to a set of importance degree variables, which may include: a first importance degree variable of a first key content possibility, where the first key content possibility is used for representing the difference between the Euclidean metric between the two topic behavior map units in a topic behavior map unit group and the operation behavior track length between the two undetermined behavior state units of the undetermined behavior state unit group corresponding to the topic behavior map unit group; a second importance degree variable of a second key content possibility, where the second key content possibility is used for representing the difference between the behavior transmission description of one topic behavior map unit in the topic behavior map unit group and the set description of the undetermined visualization region where the corresponding undetermined behavior state unit in the undetermined behavior state unit group is located; and a third importance degree variable of a third key content possibility, where the third key content possibility is used for representing the difference between the average behavior transmission time of the current visual behavior record and the topic interaction aging requirement.
It can be understood that the change possibility between the visual regions to be determined where the two to-be-determined behavior state units in the corresponding to-be-determined behavior state unit groups are located is calculated according to a group of importance degree variables output by the AI neural network, and the change possibility corresponding to each to-be-determined behavior state unit group is obtained; and selecting one to-be-determined operation behavior track from the optimized operation behavior tracks between two to-be-determined behavior state units in each to-be-determined behavior state unit group according to the change possibility and the preset positioning possibility corresponding to each to-be-determined behavior state unit group.
The calculating, according to the set of importance degree variables output by the AI neural network, of the change possibility between the undetermined visualization areas where the two undetermined behavior state units in the corresponding undetermined behavior state unit group are located may include the following steps: calculating a first quantized value taking the first key content possibility as a base and the first importance degree variable of the first key content possibility in the set of importance degree variables output by the AI neural network as an exponent; calculating a second quantized value taking the second key content possibility as a base and the second importance degree variable of the second key content possibility in the set of importance degree variables output by the AI neural network as an exponent; calculating a third quantized value taking the third key content possibility as a base and the third importance degree variable of the third key content possibility in the set of importance degree variables output by the AI neural network as an exponent; and weighting the first quantized value, the second quantized value and the third quantized value to obtain the change possibility corresponding to the undetermined behavior state unit group.
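The weighting can be sketched as below; the concrete possibilities, importance degree variables and weighting coefficients are hypothetical example values, not outputs of any actual network:

```python
# Hypothetical sketch: each key content possibility k_i is raised to its
# importance degree variable a_i output by the AI neural network, and the
# three quantized values are combined with assumed weighting coefficients.

def change_possibility(possibilities, importances, weights):
    quantized = [k ** a for k, a in zip(possibilities, importances)]
    return sum(w * q for w, q in zip(weights, quantized))

k = (0.8, 0.6, 0.9)   # first/second/third key content possibility (assumed)
a = (2.0, 1.0, 0.5)   # importance degree variables from the network (assumed)
w = (0.5, 0.3, 0.2)   # weighting coefficients (assumed)
p = change_possibility(k, a, w)
```

The change possibility p obtained per undetermined behavior state unit group is then used, together with the preset positioning possibility, to select the undetermined operation behavior track.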
The AI neural network is trained in advance, stored in the server, and can be invoked when needed. In this embodiment, the importance degree variables of the different key content possibilities corresponding to the undetermined behavior state unit group are obtained through the AI neural network, so that more reasonable change possibilities are calculated; the effect is better when the method is applied to scene-type topic interaction logs of poor quality, and better adaptation performance can be maintained when the method is applied at the topic community level.
Under some optional and independently implementable design considerations, the method may further include the technical solution described in the following step S500 on the basis of the description in step S400.
Step S500: and performing operation behavior safety analysis on the target operation behavior track to obtain a safety analysis result, and intercepting the interactive operation behavior of the topic user side corresponding to the target operation behavior track when the safety analysis result represents that the target operation behavior track has a behavior safety risk.
Under some optional and independently implementable design considerations, in step S500, the operation behavior safety analysis is performed on the target operation behavior trajectory to obtain a safety analysis result, which may include the technical solutions described in the following steps S501 to S504.
Step S501: and acquiring a target track feature description cluster to be subjected to safety analysis from the target operation behavior track.
Step S502: and respectively performing explicit intention mining and non-explicit intention mining on a plurality of track characteristic vectors in the target track characteristic description cluster to obtain an explicit intention mining result set and a non-explicit intention mining result set.
Step S503: performing first result correction processing on the explicit intention mining result set through a first preset correction strategy to obtain a first track feature description queue comprising an explicit intention; and carrying out second result correction processing on the non-explicit intention mining result set through a second preset correction strategy to obtain a second track feature description queue comprising the non-explicit intention.
Step S504: performing interference feature filtering based on the first track feature description queue and the second track feature description queue to obtain a target track feature description queue matched with a target intention in the target track feature description cluster; the target intention comprises at least one of an explicit intention and a non-explicit intention, and the target track characteristic description queue is used for carrying out safety analysis on the target track characteristic description cluster; and carrying out safety analysis on the target track characteristic description cluster through the target track characteristic description queue to obtain a safety score corresponding to the target track characteristic description cluster.
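The pipeline of steps S501 to S504 can be sketched end to end as follows; every function body, field name and threshold is a hypothetical stand-in for the mining, correction and filtering logic described above:

```python
# Hypothetical sketch of S501-S504: acquire the feature cluster, split it by
# explicit vs. non-explicit intention, sort each queue as a stand-in for the
# result correction, filter interference features, and average the survivors
# into a safety score.

def security_analyze(target_track):
    cluster = list(target_track)                          # S501: feature cluster
    explicit = [v for v in cluster if v.get("explicit")]  # S502: explicit mining
    implicit = [v for v in cluster if not v.get("explicit")]
    queue1 = sorted(explicit, key=lambda v: v["score"], reverse=True)  # S503
    queue2 = sorted(implicit, key=lambda v: v["score"], reverse=True)
    merged = [v for v in queue1 + queue2 if v["score"] > 0.1]  # S504: filtering
    if not merged:
        return 0.0
    return sum(v["score"] for v in merged) / len(merged)  # safety score

track = [{"explicit": True, "score": 0.9},
         {"explicit": False, "score": 0.4},
         {"explicit": True, "score": 0.05}]   # interference feature, filtered out
score = security_analyze(track)
risky = score < 0.7                           # assumed preset score threshold
```

When `risky` is true, the interactive operation behavior of the corresponding topic user side would be intercepted, per step S500.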
By this design, the explicit intention and the non-explicit intention can be comprehensively considered, thereby ensuring the accuracy and credibility of the safety score. Further, whether the safety analysis result represents that the target operation behavior track carries a behavior safety risk can be judged by comparing the safety score with a preset score: if the safety score is lower than the preset score, the target operation behavior track is deemed to carry a behavior safety risk. Correspondingly, the interactive operation behavior of the topic user side corresponding to the target operation behavior track is intercepted.
Under some optional and independently implementable design ideas, performing explicit intention mining and non-explicit intention mining on the plurality of trajectory feature vectors in the target trajectory feature description cluster respectively to obtain an explicit intention mining result set and a non-explicit intention mining result set includes: respectively performing explicit intention mining on the plurality of track feature vectors in the target track feature description cluster to obtain the explicit intention mining contents in each track feature vector and the initial intention label corresponding to each item of explicit intention mining content; determining the explicit intention mining result set based on the explicit intention mining contents and the corresponding initial intention labels in the track feature vectors; and respectively performing non-explicit intention mining on the plurality of track feature vectors in the target track feature description cluster to obtain the non-explicit intention mining result set.
Under some optional and independently implementable design ideas, performing non-explicit intention mining on the plurality of trajectory feature vectors in the target trajectory feature description cluster respectively to obtain a non-explicit intention mining result set includes: respectively carrying out active intention mining on the plurality of track feature vectors in the target track feature description cluster to obtain active intention mining information corresponding to each track feature vector; respectively carrying out passive intention mining on the plurality of track feature vectors in the target track feature description cluster to obtain passive intention mining information corresponding to each track feature vector; associating the active intention mining information and the passive intention mining information corresponding to the same topic user side; and performing non-explicit intention mining processing on the basis of the passive intention mining information associated with the target active intention mining information in the target track feature vector to obtain the non-explicit intention mining result set.
Under some optional and independently implementable design considerations, the performing, by using the first preset correction strategy, of first result correction processing on the explicit intention mining result set to obtain the first track feature description queue including an explicit intention includes: respectively screening the intention labels for each track feature vector in the explicit intention mining result set to obtain a non-repetitive intention label corresponding to each track feature vector; respectively cleaning the mining contents based on the content amount of the explicit intention mining contents corresponding to the respective non-repetitive intention labels in each track feature vector to obtain an updated explicit intention mining result set; performing staged cleaning on the updated explicit intention mining result set to obtain a plurality of first alternative track feature description queues containing explicit intentions; and, according to the explicit categories to which the first alternative track feature description queues respectively belong, performing feature description optimization on the first alternative track feature description queues belonging to the same explicit category to obtain the first track feature description queue including an explicit intention.
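The label screening and content cleaning of the first correction strategy can be sketched as below; the data layout, threshold and grouping-by-category stand in for the staged cleaning and feature description optimization, and are assumptions only:

```python
from collections import defaultdict

# Hypothetical sketch of the first result correction: deduplicate intention
# labels per track feature vector, drop mined contents whose content amount
# is below an assumed threshold, then group survivors by explicit category.

def first_correction(result_set, min_amount=2):
    queues = defaultdict(list)
    for vector in result_set:
        seen = set()
        for label, content in vector["mined"]:
            if label in seen:                 # screen out repeated intention labels
                continue
            seen.add(label)
            if len(content) < min_amount:     # clean low-content mining results
                continue
            queues[label].append(content)     # optimize per explicit category
    return dict(queues)

result_set = [{"mined": [("login", "abcd"), ("login", "xy"), ("click", "z")]}]
queue = first_correction(result_set)
```

Here the repeated "login" entry and the too-small "click" content are removed, leaving one entry per surviving category.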
To sum up, in the embodiment of the present application, when the optimized operation behavior trajectory between the undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group in the current visual behavior record is determined, it may be determined through the scene-type topic calling information counted in advance. Because the scene-type topic calling information summarizes the operation behavior trajectory information between the different behavior state units that have been called, a visualization region that is called repeatedly need not be analyzed again, and a large amount of repetitive analysis processing can be avoided to a certain extent. Further, the target operation behavior trajectory adapted to the current visual behavior record is determined through the optimized operation behavior trajectories, so that the time required to determine the operation behavior trajectory is reduced as much as possible while the accuracy and integrity of the obtained target operation behavior trajectory are ensured. Therefore, an analysis basis is provided rapidly and accurately for subsequent interactive behavior security analysis, and the timeliness of the interactive behavior security analysis is not affected by excessive time consumed in determining the target operation behavior track of the current visual behavior record.
Based on the same inventive concept, there is also provided a topic interaction behavior security processing device 20 applied to big data, which is applied to a topic server 10, and the device includes:
the state unit determining module 21 is configured to determine a corresponding pending behavior state unit for a topic behavior map unit in a current visualization behavior record in a scene-type topic interaction log generated in advance;
the unit grouping and pairing module 22 is configured to, for each topic behavior map unit group in the current visual behavior record, where each topic behavior map unit group is composed of two related topic behavior map units, determine the undetermined behavior state unit group corresponding to the topic behavior map unit group through the undetermined behavior state units corresponding to the topic behavior map units in the topic behavior map unit group;
the operation track determining module 23 is configured to determine, through the scene-type topic calling information counted in advance, an optimized operation behavior track between the two undetermined behavior state units in an undetermined behavior state unit group; the scene-type topic calling information summarizes the operation behavior track information between the different behavior state units that have been called;
and the operation track generation module 24 is configured to determine, in the scene-type topic interaction log generated in advance, a target operation behavior track adapted to the current visual behavior record through the optimized operation behavior track between the two undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group.
For the description of the above functional modules, refer to the description of the method shown in fig. 2.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.