Cloud service topic information processing method for big data and big data server

Document No.: 7703 | Published: 2021-09-17

1. A cloud service topic information processing method for big data, applied to a big data server, the method comprising:

acquiring first community service session information and second community service session information, wherein the first community service session information is a first service session topic description, and the second community service session information is a second service session topic description;

acquiring a first feature adjustment indication between the first service session topic description in different topic scenes, a second feature adjustment indication between the second service session topic description in different topic scenes, and a third feature adjustment indication between the first service session topic description and the second service session topic description in a set topic scene;

and performing session requirement binding on the first community service session information and the second community service session information through the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication.

2. The method of claim 1, wherein:

the first feature adjustment indication and the second feature adjustment indication are both obtained based on session heat data between community service session information under different topic scenes and a quantized feature of at least one session state;

and/or the activated topic scene of the first community service session information is a first topic scene, and the activated topic scene of the second community service session information is a second topic scene; the first feature adjustment indication is a feature adjustment indication of the first service session topic description obtained through the first topic scene and a first set topic scene, the second feature adjustment indication is a feature adjustment indication of the second service session topic description obtained through the second topic scene and a second set topic scene, and the third feature adjustment indication is a feature adjustment indication between the first service session topic description in the first set topic scene and the second service session topic description in the second set topic scene.

3. The method according to claim 2, wherein the obtaining a first feature adjustment indication between the first service session topic description in different topic scenes or obtaining a second feature adjustment indication between the second service session topic description in different topic scenes comprises:

setting the first topic scene as a target topic scene, the first set topic scene as a target set topic scene, the first service session topic description as a target service session topic description, and the first feature adjustment indication as a target feature adjustment indication; or setting the second topic scene as a target topic scene, the second set topic scene as a target set topic scene, the second service session topic description as a target service session topic description, and the second feature adjustment indication as a target feature adjustment indication;

selecting at least one first sample topic scene from a sample topic scene set, wherein the difference degree between the quantization labels of the first sample topic scene and the target topic scene is smaller than a set difference threshold value;

for each first sample topic scene, acquiring first feature adjustment information of the target service session topic description between the first sample topic scene and a target set topic scene;

and obtaining the target feature adjustment indication based on the first feature adjustment information.

4. The method as claimed in claim 3, wherein the at least one first sample topic scene comprises the two sample topic scenes in the sample topic scene set that have the highest scene similarity to the target topic scene and are not the target topic scene;

and/or the topic scene difference degree between every two consecutive sample topic scenes in the sample topic scene set is less than or equal to a set topic scene difference threshold;

and/or, on the premise that the target feature adjustment indication is the first feature adjustment indication, the first feature adjustment information is feature adjustment information of the first service session topic description determined by the first sample topic scene and the first set topic scene, and on the premise that the target feature adjustment indication is the second feature adjustment indication, the first feature adjustment information is feature adjustment information of the second service session topic description determined by the second set topic scene and the first sample topic scene.

5. The method of claim 3, wherein obtaining the target feature adjustment indication based on the first feature adjustment information comprises:

obtaining second feature adjustment information of the target service session topic description between the target topic scene and the target set topic scene based on the first feature adjustment information, wherein the first feature adjustment information and the second feature adjustment information both comprise session heat data and a quantized feature;

and obtaining the target feature adjustment indication through the second feature adjustment information.

6. The method of claim 5, wherein there are two first sample topic scenes;

the obtaining of second feature adjustment information of the target service session topic description between the target topic scene and the target set topic scene based on the first feature adjustment information includes:

acquiring a first topic scene difference degree between the target topic scene and a cold sample topic scene and a second topic scene difference degree between the two first sample topic scenes, wherein the cold sample topic scene is the one of the two first sample topic scenes that carries no hot label;

obtaining the session heat data in the second feature adjustment information according to the session heat data in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree;

obtaining the quantized feature in the second feature adjustment information according to the quantized feature in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree;

correspondingly, the first feature adjustment information of the two first sample topic scenes comprises first session heat data and a first quantized feature of the hot first sample topic scene, and second session heat data and a second quantized feature of the cold first sample topic scene;

the obtaining of the session heat data in the second feature adjustment information through the session heat data in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree comprises:

obtaining the session heat data in the second feature adjustment information by adding, to the second session heat data, the operation result between the weighted result of the first topic scene difference degree and the heat difference value, and the second topic scene difference degree, wherein the heat difference value is the difference between the first session heat data and the second session heat data;

the obtaining of the quantized feature in the second feature adjustment information through the quantized feature in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree comprises:

and obtaining the quantized feature in the second feature adjustment information by adding, to the second quantized feature, the operation result between the weighted result of the first topic scene difference degree and the feature difference value, and the second topic scene difference degree, wherein the feature difference value is the difference between the first quantized feature and the second quantized feature.

7. The method as claimed in claim 3, wherein the obtaining of the first feature adjustment information of the target service session topic description between the first sample topic scene and the target set topic scene comprises:

taking the first sample topic scene, the target set topic scene, and each sample topic scene in the sample topic scene set located between the first sample topic scene and the target set topic scene as second sample topic scenes;

obtaining a second sample feature adjustment indication through a first sample feature adjustment indication between the target service session topic descriptions in each pair of consecutive second sample topic scenes, wherein the second sample feature adjustment indication is a feature adjustment indication of the target service session topic description between the first sample topic scene and the target set topic scene;

obtaining the first feature adjustment information based on the second sample feature adjustment indication;

correspondingly, the obtaining a second sample feature adjustment indication through a first sample feature adjustment indication between the target service session topic descriptions in each pair of consecutive second sample topic scenes comprises:

if the target service session topic description is the first service session topic description, sorting the second sample topic scenes in ascending order of scene scale, and if the target service session topic description is the second service session topic description, sorting the second sample topic scenes in descending order of scene scale;

and merging the first sample feature adjustment indications corresponding to each pair of consecutive second sample topic scenes under the sorted order to obtain the second sample feature adjustment indication, wherein the first sample feature adjustment indication corresponding to a pair of consecutive second sample topic scenes is a feature adjustment indication of the target service session topic description from the earlier second sample topic scene to the later second sample topic scene in the pair.

8. The method of claim 2, wherein the obtaining a third feature adjustment indication between the first service session topic description and the second service session topic description in the set topic scene comprises:

acquiring third community service session information under the first set topic scene and fourth community service session information under the second set topic scene, wherein the third community service session information is a first service session topic description, and the fourth community service session information is a second service session topic description;

determining a plurality of topic description pairing results in the third community service session information and the fourth community service session information;

wherein each topic description pairing result is obtained through a set pairing strategy, or is determined based on description segments respectively selected by a service session client from the third community service session information and the fourth community service session information;

and obtaining the third feature adjustment indication through the plurality of topic description pairing results.

9. The method of claim 1, wherein the performing session requirement binding on the first community service session information and the second community service session information through the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication comprises:

and fusing the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication to serve as a session requirement binding indication between the first community service session information and the second community service session information.

10. A big data server, comprising a processor, a communication bus and a memory, wherein the processor and the memory communicate via the communication bus, and the processor reads a computer program from the memory and runs the computer program to perform the method of any one of claims 1-9.

Background

Big data and cloud computing are inseparable. In the field of cloud services, different service clients perform service interaction and service processing through online communication or conversation. To improve the quality and efficiency of cloud services, it is generally necessary to mine the service requirements of different service clients. In practice, however, a service client may participate in multiple topic discussions or conversation exchanges, and how to efficiently analyze session requirements across different topics is a technical problem that currently needs to be overcome.

Disclosure of Invention

In view of this, the embodiment of the present application provides a cloud service topic information processing method for big data and a big data server.

The embodiment of the application provides a cloud service topic information processing method for big data, which is applied to a big data server and comprises the following steps:

acquiring first community service session information and second community service session information, wherein the first community service session information is a first service session topic description, and the second community service session information is a second service session topic description;

acquiring a first feature adjustment indication between the first service session topic description in different topic scenes, a second feature adjustment indication between the second service session topic description in different topic scenes, and a third feature adjustment indication between the first service session topic description and the second service session topic description in a set topic scene;

and performing session requirement binding on the first community service session information and the second community service session information through the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication.

The embodiment of the application also provides a big data server, which comprises a processor, a communication bus and a memory; the processor and the memory communicate via the communication bus, and the processor reads the computer program from the memory and runs the computer program to perform the method described above.

The embodiment of the application also provides a readable storage medium for a computer, wherein the readable storage medium stores a computer program, and the computer program realizes the method when running.

Therefore, in the above scheme, session requirement binding between first community service session information and second community service session information of different session topics is realized by obtaining feature adjustment indications between descriptions of the same service session topic under different topic scenes and a feature adjustment indication between descriptions of different service session topics under the set topic scenes. Through the first feature adjustment indication, the first service session topic description can be carried from its own topic scene into the first set topic scene; through the third feature adjustment indication, session requirement binding can be performed between community service session information of different session topics within the set topic scenes; and through the second feature adjustment indication, the result can be related back to the topic scene of the second community service session information. With this path-based binding mode, no matter in which topic scene the binding is initiated, the cross-topic binding between community service session information of different session topics needs to be performed only once, in the set topic scenes, thereby improving both the accuracy and the efficiency of the session requirement binding result.

Drawings

Fig. 1 is a schematic block diagram of a big data server according to an embodiment of the present disclosure.

Fig. 2 is a flowchart of a cloud service topic information processing method for big data according to an embodiment of the present application.

Fig. 3 is a block diagram of a cloud service topic information processing apparatus for big data according to an embodiment of the present application.

Detailed Description

Fig. 1 shows a block diagram of a big data server 10 provided in an embodiment of the present application. The big data server 10 in the embodiment of the present application may be a server with data storage, transmission and processing functions. As shown in Fig. 1, the big data server 10 includes: a memory 11, a processor 12, a communication bus 13, and a cloud service topic information processing apparatus 20 for big data.

The memory 11, the processor 12 and the communication bus 13 are electrically connected, directly or indirectly, to enable the transfer and interaction of data. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The memory 11 stores the cloud service topic information processing apparatus 20 for big data, which includes at least one software functional module that can be stored in the memory 11 in the form of software or firmware. The processor 12 executes various functional applications and data processing by running the software programs and modules stored in the memory 11, for example the cloud service topic information processing apparatus 20 for big data of the embodiment of the present application, so as to implement the cloud service topic information processing method for big data of the embodiment of the present application.

The memory 11 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 11 is used for storing a program, and the processor 12 executes the program after receiving an execution instruction.

The processor 12 may be an integrated circuit chip with data processing capability. The processor 12 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like, and may implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

The communication bus 13 is used for establishing communication connection between the big data server 10 and other communication terminal devices through a network, and realizing the transceiving operation of network signals and data. The network signal may include a wireless signal or a wired signal.

It will be appreciated that the configuration shown in FIG. 1 is merely illustrative and that the big data server 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.


Fig. 2 shows a flowchart of the cloud service topic information processing method for big data provided by an embodiment of the present application. The method is applied to the big data server 10, may be implemented by the processor 12, and comprises the following Step210-Step230.

Step210, the big data server obtains first community service session information and second community service session information.

In an embodiment of the present application, the first community service session information is a first service session topic description, and the second community service session information is a second service session topic description. In other words, the first community business session information corresponds to a first session topic, the second community business session information corresponds to a second session topic, and a difference exists between the first session topic and the second session topic. Further, the community service session information may be understood as interactive information exchanged between different community users, including but not limited to text information, voice information or graphical information.

On the basis of the above, the service session topic description may be used to characterize key information or feature descriptions of different community service session information under corresponding session topics, for example, the key information or feature descriptions may be expressed in the form of feature vectors or feature maps.
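To make this concrete, the minimal sketch below shows one way community service session information and its topic description vector could be represented. The class name and the hash-seeded encoder are illustrative assumptions standing in for whatever feature extractor an implementation would actually use, not the embodiment's encoder.

```python
import zlib
from dataclasses import dataclass

import numpy as np


@dataclass
class CommunityServiceSessionInfo:
    session_topic: str   # e.g. the first or the second session topic
    raw_content: str     # the exchanged interaction content (text here)


def topic_description(info: CommunityServiceSessionInfo, dim: int = 8) -> np.ndarray:
    """Toy stand-in for a real encoder: deterministically maps session
    information to the feature vector that serves as its service session
    topic description."""
    seed = zlib.crc32(info.raw_content.encode("utf-8"))
    return np.random.default_rng(seed).standard_normal(dim)
```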

Step220, the big data server obtains a first feature adjustment indication between the first service session topic description in different topic scenes, a second feature adjustment indication between the second service session topic description in different topic scenes, and a third feature adjustment indication between the first service session topic description and the second service session topic description in a set topic scene.

In the embodiment of the present application, a topic scene may be a service session block, such as an office block, an education block, a shopping block, or another block. Further, a feature adjustment indication is used for adjusting, transforming or mapping different service session topic descriptions, so as to achieve mining and analysis of related session requirements, for example session requirement binding under different topic scenes, thereby mining the cross-topic scene requirements of community users or service users as fully as possible and ensuring the depth and richness of session requirement mining.

For example, in some possible embodiments, the first feature adjustment indication and the second feature adjustment indication are both obtained based on the session heat data between the community service session information in different topic scenes and the quantized feature of at least one session state.

For another example, based on the above embodiment, the activated topic scene of the first community service session information is a first topic scene, and the activated topic scene of the second community service session information is a second topic scene; the first feature adjustment indication is a feature adjustment indication of the first service session topic description obtained through the first topic scene and a first set topic scene, the second feature adjustment indication is a feature adjustment indication of the second service session topic description obtained through the second topic scene and a second set topic scene, and the third feature adjustment indication is a feature adjustment indication between the first service session topic description in the first set topic scene and the second service session topic description in the second set topic scene.

In the embodiment of the present application, the session state may include a series of states, such as an active state, a suspended state, or a terminated state; on this basis, the quantized feature of a session state may be understood as an offset or deviation. The session heat data characterizes how popular a session is.

On the basis of the above embodiment, the obtaining of the first feature adjustment indication between the first service session topic description in different topic scenes or the obtaining of the second feature adjustment indication between the second service session topic description in different topic scenes, described in Step220, may include the following Step221-Step224.

Step221, setting the first topic scene as a target topic scene, the first set topic scene as a target set topic scene, the first service session topic description as a target service session topic description, and the first feature adjustment indication as a target feature adjustment indication; or setting the second topic scene as a target topic scene, the second set topic scene as a target set topic scene, the second service session topic description as a target service session topic description, and the second feature adjustment indication as a target feature adjustment indication.

Step222, selecting at least one first sample topic scene from the sample topic scene set, wherein the difference degree between the quantization labels of the first sample topic scene and the target topic scene is less than a set difference threshold value.

For example, the sample topic scene set may be understood as a reference topic scene set. Further, the difference between quantization labels can be understood as the difference between the keyword labels of different topic scenes, or as the degree of distinction between different topic scenes.
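A minimal sketch of Step222, combined with the claim-4 refinement of keeping the two most similar scenes; `label_diff` is a caller-supplied, hypothetical measure of the quantization-label difference.

```python
def select_first_sample_scenes(sample_scene_set, target_scene, label_diff,
                               diff_threshold, k=2):
    # Keep sample scenes whose quantization-label difference from the target
    # is below the set threshold, excluding the target scene itself.
    candidates = [s for s in sample_scene_set
                  if s is not target_scene
                  and label_diff(s, target_scene) < diff_threshold]
    # Per claim 4, take the k scenes most similar to the target scene.
    candidates.sort(key=lambda s: label_diff(s, target_scene))
    return candidates[:k]
```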

Step223, for each first sample topic scene, obtaining first feature adjustment information of the target service session topic description between the first sample topic scene and a target set topic scene.

In the embodiment of the present application, the first feature adjustment information of the target service session topic description between the first sample topic scene and the target set topic scene is used to represent a conversion situation or an adjustment situation of the target service session topic description in the first sample topic scene and in the target set topic scene.

In some independently implementable technical solutions, the obtaining of the first feature adjustment information of the target service session topic description between the first sample topic scene and the target set topic scene, described in Step223, may include Step2231-Step2233.

Step2231, regarding the first sample topic scene, the target set topic scene, and each sample topic scene in the sample topic scene set located between the first sample topic scene and the target set topic scene as second sample topic scenes.

For example, such a sample topic scene may be understood as a topic scene located between the first sample topic scene and the target set topic scene at the time sequence level.

Step2232, obtaining a second sample feature adjustment indication through a first sample feature adjustment indication between the target service session topic descriptions in each pair of consecutive second sample topic scenes; wherein the second sample feature adjustment indication is a feature adjustment indication of the target service session topic description between the first sample topic scene and the target set topic scene.

For example, the first sample feature adjustment indication between the target service session topic descriptions in each pair of consecutive second sample topic scenes may be understood as the first sample feature adjustment indication between the target service session topic descriptions in each pair of adjacent second sample topic scenes.

In some independently implementable technical solutions, Step2232, obtaining a second sample feature adjustment indication through a first sample feature adjustment indication between the target service session topic descriptions in each pair of consecutive second sample topic scenes, may include Step22311 and Step22312.

Step22311, if the target service session topic description is the first service session topic description, sorting the second sample topic scenes in ascending order of scene scale, and if the target service session topic description is the second service session topic description, sorting the second sample topic scenes in descending order of scene scale.

In the embodiment of the application, the scene scale may be determined by the topic participation objects in a topic scene: the more topic participation objects, the larger the scene scale, and the fewer topic participation objects, the smaller the scene scale.

Step22312, merging the first sample feature adjustment indications corresponding to each pair of consecutive second sample topic scenes under the sorted order to obtain the second sample feature adjustment indication.

In this embodiment of the application, the first sample feature adjustment indication corresponding to a pair of consecutive second sample topic scenes is a feature adjustment indication of the target service session topic description from the earlier second sample topic scene to the later second sample topic scene in the pair.

In this way, with Step22311 and Step22312, the second sample feature adjustment indication can be completely determined, and partial absence of the second sample feature adjustment indication is avoided.
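The sketch below combines Step22311 and Step22312 under two stated assumptions: scene scale is measured by the number of topic participation objects, and a feature adjustment indication is an additive offset vector, so that merging the indications of consecutive scene pairs amounts to summing them.

```python
import numpy as np


def second_sample_indication(second_sample_scenes, participant_count,
                             pairwise_indication, ascending=True):
    # Step22311: sort by scene scale (participant count), ascending for the
    # first topic description, descending for the second.
    ordered = sorted(second_sample_scenes, key=participant_count,
                     reverse=not ascending)
    # Step22312: merge the indication of every consecutive (earlier -> later)
    # scene pair; with additive offsets, merging is summation.
    total = None
    for prev_scene, next_scene in zip(ordered, ordered[1:]):
        step = np.asarray(pairwise_indication(prev_scene, next_scene))
        total = step if total is None else total + step
    return total
```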

Step2233, obtaining the first feature adjustment information based on the second sample feature adjustment indication.

It can be appreciated that, by determining the second sample feature adjustment indication, a high correlation between the topic scene and the service session topic description can be ensured, thereby ensuring the availability of the first feature adjustment information determined from the second sample feature adjustment indication.

Step224, obtaining the target feature adjustment indication based on the first feature adjustment information.

In some possible embodiments, the at least one first sample topic scene comprises the two sample topic scenes in the sample topic scene set that have the highest scene similarity to the target topic scene and are not the target topic scene; and/or the topic scene difference degree between every two consecutive sample topic scenes in the sample topic scene set is less than or equal to a set topic scene difference threshold; and/or, on the premise that the target feature adjustment indication is the first feature adjustment indication, the first feature adjustment information is feature adjustment information of the first service session topic description determined by the first sample topic scene and the first set topic scene, and on the premise that the target feature adjustment indication is the second feature adjustment indication, the first feature adjustment information is feature adjustment information of the second service session topic description determined by the second set topic scene and the first sample topic scene. In the embodiment of the present application, unlike the difference degree between quantization labels, the topic scene difference degree may be determined from scene features or scene descriptions.

In some possible embodiments, the obtaining the target feature adjustment indication based on the first feature adjustment information described in Step224 may include: obtaining second feature adjustment information of the target service session topic description between the target topic scene and the target set topic scene based on the first feature adjustment information, wherein the first feature adjustment information and the second feature adjustment information both comprise session heat data and a quantized feature; and obtaining the target feature adjustment indication through the second feature adjustment information.

By means of this design, the accuracy of the target feature adjustment indication can be ensured through a transitive (chained) manner of determining feature adjustment indications.

In some possible embodiments, there may be two first sample topic scenes. On this basis, the obtaining, based on the first feature adjustment information, of the second feature adjustment information of the target service session topic description between the target topic scene and the target set topic scene may include the following Step2241-Step2243.

Step2241, acquiring a first topic scene difference degree between the target topic scene and the cold sample topic scene and a second topic scene difference degree between the two first sample topic scenes.

Wherein the cold sample topic scene is the one of the two first sample topic scenes that carries no hot tag.

Step2242, obtaining the session heat data in the second feature adjustment information according to the session heat data in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree.

Step2243, obtaining the quantized feature in the second feature adjustment information according to the quantized feature in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree.

With this design, the session heat data and the quantized feature in the second feature adjustment information are both determined jointly from the first feature adjustment information together with the first topic scene difference degree and the second topic scene difference degree, which ensures a strong correlation between the session heat data and the quantized feature of the second feature adjustment information and thus the quality of the second feature adjustment information.

In some possible embodiments, the first feature adjustment information of the two first sample topic scenes includes first session heat data and a first quantized feature of the hot first sample topic scene, and second session heat data and a second quantized feature of the cold first sample topic scene.

On this basis, the obtaining of the session heat data in the second feature adjustment information according to the session heat data in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree, described in Step2242, may include: obtaining the session heat data in the second feature adjustment information by adding, to the second session heat data, the operation result between the weighted result of the first topic scene difference degree and the heat difference value, and the second topic scene difference degree, wherein the heat difference value is the difference between the first session heat data and the second session heat data.

The operation result may be the quotient of the weighted result of the first topic scene difference degree and the heat difference value, and the second topic scene difference degree; this quotient is then added to the second session heat data to obtain the session heat data in the second feature adjustment information.

Further, the obtaining of the quantized feature in the second feature adjustment information through the quantized feature in the first feature adjustment information, the first topic scene difference degree and the second topic scene difference degree, described in Step2243, may include: obtaining the quantized feature in the second feature adjustment information by adding, to the second quantized feature, the operation result between the weighted result of the first topic scene difference degree and the feature difference value, and the second topic scene difference degree, wherein the feature difference value is the difference between the first quantized feature and the second quantized feature.

For example, the product of the first topic scene difference degree and the feature difference value may be taken as the weighted result; the quotient of this weighted result and the second topic scene difference degree is taken as the operation result; and the operation result is added to the second quantized feature to obtain the quantized feature in the second feature adjustment information.

It can be understood that, with the above, the session heat data and the quantized feature in the second feature adjustment information can be accurately obtained by a linear calculation.
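Written out, the linear rule above is plain interpolation between the cold and hot scene values. The sketch below assumes scalar heat data and vector quantized features; d1 and d2 stand for the first and second topic scene difference degrees.

```python
import numpy as np


def second_feature_adjustment_info(d1, d2, hot_heat, cold_heat,
                                   hot_feature, cold_feature):
    # Heat: cold value plus the weighted heat difference over d2 (Step2242).
    heat = cold_heat + d1 * (hot_heat - cold_heat) / d2
    # Quantized feature: the same linear rule applied element-wise (Step2243).
    hot_f, cold_f = np.asarray(hot_feature), np.asarray(cold_feature)
    feature = cold_f + d1 * (hot_f - cold_f) / d2
    return heat, feature
```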

In the embodiment of the application, through the above content, the adaptation of the service session topic description to different topic scenes can be taken into account, ensuring both the accuracy and the scene adaptability of the target feature adjustment indication.

In some other embodiments, the obtaining of the third feature adjustment indication between the first service session topic description and the second service session topic description in the set topic scene described in Step220 may include: acquiring third community service session information under the first set topic scene and fourth community service session information under the second set topic scene; determining a plurality of topic description pairing results in the third community service session information and the fourth community service session information; and obtaining the third feature adjustment indication through the plurality of topic description pairing results.

In an embodiment of the present application, the third community service session information is a first service session topic description, and the fourth community service session information is a second service session topic description. Further, a topic description pairing result is obtained through a set pairing strategy (e.g., a set matching algorithm), or is determined based on description segments respectively selected by a service session client from the third community service session information and the fourth community service session information. A service session client may be understood as a community user, and a description segment as a part of the community service session information. A minimal pairing sketch follows.
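One plausible set pairing strategy, sketched with cosine similarity over description-segment vectors; the text leaves the concrete strategy open, so the similarity measure here is an assumption.

```python
import numpy as np


def pair_topic_descriptions(third_segments, fourth_segments):
    """Greedily pair each description segment of the third community service
    session information with its most similar segment of the fourth."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return [(a, max(fourth_segments, key=lambda b: cosine(a, b)))
            for a in third_segments]
```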

Step230, the big data server performs session requirement binding on the first community service session information and the second community service session information through the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication.

In the embodiment of the application, session requirement binding realizes the matching and association of service requirements across the community service session information of different communities, so that service requirements or session requirements can be mined and analyzed as fully as possible. On this basis, the session requirement binding performed on the first community service session information and the second community service session information through the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication, described in Step230, may include: fusing the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication to serve as a session requirement binding indication between the first community service session information and the second community service session information.

For example, the first feature adjustment indication, the second feature adjustment indication and the third feature adjustment indication may be integrated at a global level to obtain the session requirement binding indication between the first community service session information and the second community service session information. Local or fragmented session requirements of the two pieces of community service session information may then be bound or paired through this session requirement binding indication, yielding the session requirements or service requirements of the same service session client under different topic scenes or different session topics. In this way, session requirement binding under different topic scenes can be realized, the cross-topic scene requirements of community users or service users are mined as fully as possible, and the depth and richness of session requirement mining are ensured. A minimal composition of the three indications is sketched below.
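A minimal sketch of the fusion, under the same additive-offset assumption as before: the first and third indications carry the description forward along the path scene1 -> set scene1 -> set scene2, and the second indication is inverted to arrive at the second topic scene.

```python
import numpy as np


def session_requirement_binding_indication(first_ind, second_ind, third_ind):
    # Forward through the first and third indications, backward through the
    # second: scene1 -> set scene1 -> set scene2 -> scene2.
    return (np.asarray(first_ind) + np.asarray(third_ind)
            - np.asarray(second_ind))
```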

In some optional and independently implementable technical solutions, after determining the session requirement binding indication between the first community service session information and the second community service session information, the method may further include: performing session requirement binding on the first community service session information and the second community service session information based on the session requirement binding indication to obtain a global requirement binding result.

In some optional and independently implementable technical solutions, after obtaining the global requirement binding result, the method further includes: performing topic interaction security detection according to the global requirement binding result; when it is detected that the service session client corresponding to the global requirement binding result triggers a topic interaction security check mechanism (such as a preset security check triggering condition or rule), determining session operation behavior track information corresponding to the service session client; and performing topic interaction security analysis according to the session operation behavior track information.

In some optional and independently implementable technical solutions, determining the session operation behavior track information corresponding to the service session client may include the following: determining a corresponding undetermined behavior state unit for each topic behavior map unit in the current visual behavior record in the pre-generated scene-type topic interaction log; for each topic behavior map unit group in the current visual behavior record, where a topic behavior map unit group consists of two related topic behavior map units, determining the undetermined behavior state unit group corresponding to the topic behavior map unit group through the undetermined behavior state units corresponding to the topic behavior map units in the group; determining an optimized operation behavior track between the two undetermined behavior state units in an undetermined behavior state unit group through pre-compiled scene-type topic invocation information, where the scene-type topic invocation information summarizes operation behavior track information among different behavior state units that have been invoked; and determining, through the optimized operation behavior tracks between the two undetermined behavior state units in the undetermined behavior state unit groups corresponding to the topic behavior map unit groups, a target operation behavior track adapted to the current visual behavior record in the pre-generated scene-type topic interaction log.

In some optional and independently implementable technical solutions, determining the session operation behavior track information corresponding to the service session client may also be implemented in the following manner.

In step S100, a corresponding undetermined behavior state unit is determined for each topic behavior map unit in the current visual behavior record in the pre-generated scene-type topic interaction log.

For example, the scene-type topic interaction log is created according to the visual characteristics of the topic interaction conditions in the big data topic environment, and can cover a plurality of visual areas in a certain visual interval, wherein different visual areas correspond to different topic interaction conditions. The visualization interval of the scene-type topic interaction log is not limited, for example, the scene-type topic interaction log may be a scene-type topic interaction log of a topic community or a scene-type topic interaction log of a topic platform.

The current visual behavior record is the visual behavior record that needs to be matched against the scene-type topic interaction log to determine the corresponding operation behavior track. The current visual behavior record may be transmitted by a behavior detection thread used for capturing user operations; for example, it may be a dynamic visual behavior record of a service session client transmitted by the behavior detection thread on that client. The source of the visual behavior record is not limited to a behavior detection thread, and the target of behavior detection is not limited to a service session client.

When determining the corresponding undetermined behavior state unit for the topic behavior map unit in the current visualization behavior record, the topic behavior map unit may be mapped to the neighbor visualization region in the scene-type topic interaction log, and the undetermined behavior state unit of the topic behavior map unit is determined according to the behavior state mapping unit. The neighbor visualization region may be, for example, a visualization region overlapping with a search interval determined according to the topic behavior map unit, or a visualization region in the search interval, and is not limited to this.

The number of the undetermined behavior state units corresponding to each topic behavior map unit in the current visual behavior record can be one or more, and the specific number is determined according to the size of the search interval and the density degree of the visual area.

In step S200, for each topic behavior map unit group in the current visual behavior record, the topic behavior map unit group is composed of two related topic behavior map units, and the undetermined behavior state unit group corresponding to the topic behavior map unit group is determined by the undetermined behavior state unit corresponding to the topic behavior map unit in the topic behavior map unit group.

The number of topic behavior map unit groups is determined by the number of topic behavior map units in the current visual behavior record; for example, if the number of topic behavior map units in the current visual behavior record is num1, the number of topic behavior map unit groups may be num1 - 1. For example, if the current visual behavior record sequentially covers four topic behavior map units, topic_graph_unit1 to topic_graph_unit4, ordered by their generation time, then the topic behavior map unit groups formed (see the sketch after this list) may be:

(topic_graph_unit1, topic_graph_unit2);

(topic_graph_unit2, topic_graph_unit3);

(topic_graph_unit3, topic_graph_unit4).
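The grouping rule above is just consecutive pairing; a minimal sketch:

```python
def build_unit_groups(topic_graph_units):
    # num1 units, ordered by generation time, yield num1 - 1 groups of two
    # related (consecutive) topic behavior map units.
    return list(zip(topic_graph_units, topic_graph_units[1:]))


# build_unit_groups(["topic_graph_unit1", "topic_graph_unit2",
#                    "topic_graph_unit3", "topic_graph_unit4"])
# -> [(...unit1, ...unit2), (...unit2, ...unit3), (...unit3, ...unit4)]
```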

Because each topic behavior map unit can have a plurality of undetermined behavior state units, a plurality of undetermined behavior state unit groups can be determined for one topic behavior map unit group. Of course, the case where each topic behavior map unit in the group corresponds to exactly one undetermined behavior state unit is not excluded; in that case, a single undetermined behavior state unit group is determined for the topic behavior map unit group.

In the embodiment of the application, a topic behavior map unit can be understood as a map node of topic interaction behavior; for node processing techniques applicable to topic behavior map units, reference may be made to knowledge graphs (Knowledge Graph). Furthermore, the behavior state unit corresponding to a topic behavior map unit is used for representing the interactive behavior state of that map unit, and can be understood, in a certain sense, as a mapping and nodularization of the interactive behavior state.

Under some optional and independently implementable design ideas, in step S200, the determining, through the undetermined behavior state units corresponding to the topic behavior map units in the topic behavior map unit group, of the undetermined behavior state unit group corresponding to the topic behavior map unit group may include the following steps:

S201: for each topic behavior map unit in the topic behavior map unit group, acquiring all undetermined behavior state units corresponding to that topic behavior map unit;

S202: pairing each undetermined behavior state unit corresponding to one topic behavior map unit in the topic behavior map unit group with each undetermined behavior state unit corresponding to the other topic behavior map unit to obtain an undetermined behavior state unit group.

In this way, the obtained undetermined behavior state unit group comprises two undetermined behavior state units, wherein one undetermined behavior state unit corresponds to one topic behavior map unit in the topic behavior map unit group, and the other undetermined behavior state unit corresponds to the other topic behavior map unit in the topic behavior map unit group.

In practical application, suppose each topic behavior map unit in a topic behavior map unit group corresponds to a plurality of undetermined behavior state units, e.g., the topic behavior map unit group is (topic_graph_unit1, topic_graph_unit2).

Here, the topic behavior map unit topic_graph_unit1 corresponds to two undetermined behavior state units, behavior_state_unit1 and behavior_state_unit2, and the topic behavior map unit topic_graph_unit2 corresponds to two undetermined behavior state units, behavior_state_unit3 and behavior_state_unit4; the undetermined behavior state unit groups determined for this topic behavior map unit group are then the following four pairs:

(behavior_state_unit1, behavior_state_unit3);

(behavior_state_unit1, behavior_state_unit4);

(behavior_state_unit2, behavior_state_unit3);

(behavior_state_unit2, behavior_state_unit4).
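S201/S202 amount to a Cartesian product of the two map units' candidate state units, reproducing the four pairs above:

```python
from itertools import product


def build_state_unit_groups(states_of_first_unit, states_of_second_unit):
    # Pair every undetermined behavior state unit of one topic behavior map
    # unit with every state unit of the other (S201/S202).
    return list(product(states_of_first_unit, states_of_second_unit))


# build_state_unit_groups(["behavior_state_unit1", "behavior_state_unit2"],
#                         ["behavior_state_unit3", "behavior_state_unit4"])
# yields the four pairs listed above.
```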

In step S300, an optimized operation behavior track between the two undetermined behavior state units in an undetermined behavior state unit group is determined through pre-compiled scene-type topic invocation information; the scene-type topic invocation information summarizes operation behavior track information among different behavior state units that have been invoked.

For example, the scene-type topic invocation information may be scene-type topic access information. In the general case, that is, before the optimized operation behavior track between the two undetermined behavior state units in the first undetermined behavior state unit group is determined, the scene-type topic invocation information may be understood as an empty set. Whenever an optimized operation behavior track is determined, the invoked behavior state units and their corresponding visualization regions, that is, the operation behavior track information among the invoked behavior state units, are summarized into the scene-type topic invocation information. Therefore, when the same behavior state units and corresponding visualization regions are encountered again, the scene-type topic invocation information already summarizes the corresponding operation behavior track information, so the repeated visualization regions do not need to be analyzed and processed again, reducing the time and computing resources consumed in determining optimized operation behavior tracks.

In other words, when the optimized operation behavior track between the two undetermined behavior state units in an undetermined behavior state unit group is determined through the pre-compiled scene-type topic invocation information, it may first be checked whether the corresponding operation behavior track information is recorded in the scene-type topic invocation information. If so, no additional analysis is needed; if not, the operation behavior track information is analyzed, and the analysis result is added into the scene-type topic invocation information.
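In effect, the scene-type topic invocation information acts as a memoization cache keyed by the pair of behavior state units. A sketch, with `analyze` standing in for the unspecified track analysis:

```python
def optimized_track(state_unit_pair, invocation_info, analyze):
    key = tuple(state_unit_pair)        # the invoked pair of state units
    if key not in invocation_info:
        # Not yet invoked: analyze once and record the result.
        invocation_info[key] = analyze(*key)
    # Repeated pairs are served from the summarized invocation information.
    return invocation_info[key]
```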

Therefore, the optimized operation behavior tracks between the two undetermined behavior state units in all undetermined behavior state unit groups can be determined; and since a topic behavior map unit group may correspond to multiple undetermined behavior state unit groups, one topic behavior map unit group may correspond to multiple optimized operation behavior tracks.

In step S400, a target operation behavior track adapted to the current visual behavior record is determined in the pre-generated scene-type topic interaction log through the optimized operation behavior tracks between the two undetermined behavior state units in the undetermined behavior state unit groups corresponding to the topic behavior map unit groups.

In the embodiment of the application, the target operation behavior track may be used for subsequent interactive behavior security protection analysis; for example, whether an interactive behavior is abnormal or risky may be quickly and accurately determined through the target operation behavior track.

The optimized operation behavior track between the two undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group can be regarded as the optimized operation behavior track from one topic behavior map unit of the group to the other. Once the optimized operation behavior track from one topic behavior map unit to the next has been determined for every pair of related topic behavior map units in the current visual behavior record, the target operation behavior track adapted to the current visual behavior record can be determined; it can be regarded as the optimized operation behavior track from the starting topic behavior map unit to the ending topic behavior map unit of the current visual behavior record.

In some examples, optimizing an operation behavior track may be understood as simplifying the operation behavior track to the greatest extent while preserving a certain degree of feature recognizability. This improves the efficiency of subsequent operation behavior track processing.

In other examples, there may be multiple optimized operation behavior tracks between the two undetermined behavior state units in the undetermined behavior state unit group corresponding to some topic behavior map unit group; in this case, an optimal one needs to be selected from the multiple optimized operation behavior tracks as the optimized operation behavior track adapted to that topic behavior map unit group.

For example, the positioning possibility and the change possibility corresponding to each undetermined behavior state unit group can be calculated, and an optimized operation behavior track is selected from the multiple candidates according to these values as the optimized operation behavior track adapted to the topic behavior map unit group. The positioning possibility corresponding to an undetermined behavior state unit group comprises the positioning possibility of each undetermined behavior state unit in the group, namely the probability that the corresponding topic behavior map unit is located in the undetermined visualization region where that undetermined behavior state unit is located. The change possibility corresponding to an undetermined behavior state unit group is the probability that the behavior detection target is transferred from the undetermined visualization region where one undetermined behavior state unit of the group is located to the undetermined visualization region where the other is located.

The above selection manner is only an example and is not limiting; other manners are possible. For example, different importance weights may be configured for the change possibilities corresponding to different undetermined behavior state unit groups, and the selection may then be performed according to the weighted change possibilities.

Of course, there may be only one optimized operation behavior trajectory between two undetermined behavior state units in the undetermined behavior state unit group corresponding to some topic behavior map unit groups, and in this case, the optimized operation behavior trajectory may be directly used as the optimized operation behavior trajectory adapted to the topic behavior map unit group.

After the optimized operation behavior tracks adapted to all topic behavior map unit groups are determined, these tracks can be spliced and fused to obtain the target operation behavior track adapted to the current visual behavior record.

In the embodiment of the application, when the optimized operation behavior track between the undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group in the current visual behavior record is determined, it can be determined through the scene-type topic calling information counted in advance. Since the scene-type topic calling information summarizes the operation behavior track information between the different behavior state units that have already been called, a repeatedly called visualization area does not need to be analyzed again, so a large amount of repeated analysis can be avoided to a certain extent. Furthermore, the target operation behavior track adapted to the current visual behavior record is determined through the optimized operation behavior tracks, so the time consumed for determining the operation behavior track is reduced as much as possible while the accuracy and integrity of the obtained target operation behavior track are ensured.

In some independently implementable technical solutions, in step S100, the determining, in the scene-type topic interaction log generated in advance, a corresponding pending behavior state unit for the topic behavior map unit in the current visualization behavior record includes:

s101: for each topic behavior map unit in the current visual behavior record, determine a search interval corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance; determine, through the search interval, an undetermined visualization area corresponding to the topic behavior map unit in that log; determine the behavior state mapping unit that maps the topic behavior map unit onto the undetermined visualization area; and determine, through the behavior state mapping unit, the undetermined behavior state unit corresponding to the topic behavior map unit in that log.

In the embodiment of the present application, the search interval may be understood as a search range.

Under some optional and independently implementable design ideas, in step S101, determining a search interval corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance may include the following steps:

s1011: searching the topic behavior map unit in the scene type topic interaction log generated in advance, and determining a target interval which takes the topic behavior map unit as a hot spot unit and the number of preset connecting edges as constraints;

s1012: and determining the target interval as a searching interval corresponding to the topic behavior map unit.

In other words, the search interval corresponding to the topic behavior map unit may be a target interval that takes the topic behavior map unit as the hot spot unit and the preset number of connecting edges as the constraint. The preset number of connecting edges can be determined according to the required operation behavior trajectory precision, and can certainly also be determined by additionally weighing the computing resource overhead required for determining the undetermined behavior state units.

Of course, the above search interval is only an optional example and is not limiting. For example, the search interval may also take another geometric shape, and the manner of determining it can be chosen according to the shape actually adopted.

The search interval corresponding to the topic behavior map unit is used for determining an undetermined visualization area corresponding to the topic behavior map unit.

Further, the hot spot unit may be understood as a circle center, and the preset number of connecting edges as a constraint: the length corresponding to the preset number of connecting edges is used as the radius, and region enclosure processing is performed around the hot spot unit to obtain the target interval.
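
A minimal sketch of this construction follows. The SearchInterval structure and the per-edge length are assumptions introduced for illustration, since the patent does not fix concrete units.

```python
# Build the circular search interval: the topic behavior map unit is
# the hot spot unit (circle center), and the total length of the preset
# connecting edges is the radius.
from dataclasses import dataclass


@dataclass
class SearchInterval:
    cx: float      # hot spot unit x coordinate (circle center)
    cy: float      # hot spot unit y coordinate (circle center)
    radius: float  # preset connecting-edge count times edge length


def build_search_interval(unit_x: float, unit_y: float,
                          edge_count: int, edge_length: float) -> SearchInterval:
    return SearchInterval(unit_x, unit_y, edge_count * edge_length)
```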

Under some optional and independently implementable design ideas, in step S101, determining an undetermined visualization region corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance through the search interval may include the following steps:

in step S1013, a first visual log segment covering the search interval is determined from the scene-type topic interaction log generated in advance.

On the basis of the search interval, a first visual log paragraph may be determined from the scene-type topic interaction log generated in advance, where the first visual log paragraph covers the search interval.

To reduce additional resource overhead when determining the undetermined visualization region, the first visualization log paragraph can be the smallest visualization log paragraph that covers the search interval. Of course, the first visualization log paragraph is not limited to this and may be determined as needed; for example, it may be a visualization log paragraph that covers the search interval but is slightly larger than the smallest one, as long as the search interval is covered.
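
Under the same circular assumption, the smallest visualization log paragraph covering the search interval is simply its axis-aligned bounding box, as sketched below; LogParagraph is an assumed rectangle type.

```python
# The tightest axis-aligned paragraph around a circular search
# interval: center coordinates plus/minus the radius on each axis.
from dataclasses import dataclass


@dataclass
class LogParagraph:
    x_min: float
    y_min: float
    x_max: float
    y_max: float


def smallest_covering_paragraph(cx: float, cy: float,
                                radius: float) -> LogParagraph:
    return LogParagraph(cx - radius, cy - radius,
                        cx + radius, cy + radius)
```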

In step S1014, target terminal topology units meeting the following requirement are found starting from the original topology unit of the hierarchical visualization topology constructed in advance: the visualization interval corresponding to the terminal topology unit overlaps with the first visualization log paragraph.

Before this, a plurality of layers of visualization intervals are divided from the scene-type topic interaction log according to a preset interval classification strategy, and a hierarchical visualization topology (a node tree model or node tree path) is then established from the layers of visualization intervals obtained by classifying the scene-type topic interaction log. In other words, the scene-type topic interaction log is divided into a plurality of layers of visualization intervals in advance, and the hierarchical visualization topology is established in advance according to these layers. The scene-type topic interaction log can be divided in the following way: determine a visualization log paragraph covering all visualization intervals of the scene-type topic interaction log, and divide this visualization log paragraph into N layers of visualization intervals, where N is greater than 1.

The layer-1 visualization interval is the entire visualization log paragraph, namely all visualization intervals of the scene-type topic interaction log; the layer-2 visualization intervals are the 4 visualization intervals obtained by quartering the visualization log paragraph; the layer-3 visualization intervals are the 16 visualization intervals obtained by further quartering each of those 4 visualization intervals; and so on.

For example, all visualization intervals interval10 of the scene-type topic interaction log are taken as the layer-1 interval; the layer-1 interval is divided into 4 layer-2 intervals, namely interval11, interval12, interval13 and interval14; each layer-2 interval is further divided into four layer-3 intervals. Taking interval11 as an example, interval11 is divided into the four layer-3 intervals interval111, interval112, interval113 and interval114; the other intervals are similar and omitted here. As such, the scene-type topic interaction log is divided into 3 layers of visualization intervals.

Of course, the above division of the scene-type topic interaction log is only an example for ease of understanding. The intervals of an actual scene-type topic interaction log, such as one for a topic platform or a topic network, differ from this example, so in practice the log may be divided into more layers.

The visualization log paragraph is used only for convenience of division; in fact, all visualization intervals of the scene-type topic interaction log can also be divided in other manners, and the specific dividing manner is not limited.

Based on N layers of visual intervals obtained by classifying the scene type topic interaction logs, a hierarchical visual topology can be established, wherein an original topological unit of the hierarchical visual topology corresponds to all visual intervals of the scene type topic interaction logs, an x-th layer of topological units in the hierarchical visual topology corresponds to an x-th layer of visual intervals in the scene type topic interaction logs, and x is greater than or equal to 1 and less than or equal to N.

In order to better understand how the hierarchical visualization topology is established, the hierarchical visualization topology is illustrated as a tree model. The original topology unit of the hierarchical visualization topology, namely the layer-1 topology unit, corresponds to all visualization intervals interval10 of the scene-type topic interaction log; the four downstream topology units of the original topology unit, namely the layer-2 topology units, correspond to the layer-2 visualization intervals interval11, interval12, interval13 and interval14 respectively; and the downstream topology units of each layer-2 topology unit, namely the layer-3 terminal topology units, correspond to the layer-3 visualization intervals. Taking interval11 as an example, the four downstream topology units of the layer-2 topology unit for interval11 correspond to the four layer-3 visualization intervals interval111, interval112, interval113 and interval114 respectively.

The last layer of topology units of the hierarchical visualization topology are the terminal topology units. In this example the layer-3 topology units are the terminal topology units of the hierarchical visualization topology, for example the topology units respectively corresponding to the four layer-3 visualization intervals interval111, interval112, interval113 and interval114.

When the target terminal topology units are searched from the hierarchical visualization topology, the search starts from the original topology unit of the hierarchical visualization topology and accesses the topology units of the hierarchical visualization topology tree layer by layer in a top-down strategy. When the visualization interval corresponding to an accessed topology unit overlaps with the first visualization log paragraph, the downstream topology units of that topology unit continue to be accessed, until the terminal topology units overlapping with the first visualization log paragraph are found in the hierarchical visualization local topology tree that takes that topology unit as its original topology unit. When the visualization interval corresponding to an accessed topology unit does not overlap with the first visualization log paragraph, the hierarchical visualization local topology tree that takes that topology unit as its original topology unit does not need to be accessed further, so the number of accessed topology units can be greatly reduced.

Through the above sequential access manner (which may be understood as a traversal), all the terminal topology units whose corresponding visualization intervals overlap with the first visualization log paragraph can be found, and all of them are used as target terminal topology units. The search interval of the topic behavior map unit is distributed among the visualization intervals corresponding to all the target terminal topology units.
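
The sketch below illustrates this pruned top-down search, with the hierarchical visualization topology modeled as a quadtree; the Rect type, the four-way split and the fixed layer count are assumptions made only for illustration.

```python
# Build a layered topology by quartering intervals, then collect the
# terminal topology units whose interval overlaps the first
# visualization log paragraph, pruning non-overlapping subtrees.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Rect:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def overlaps(self, other: "Rect") -> bool:
        return (self.x_min <= other.x_max and other.x_min <= self.x_max and
                self.y_min <= other.y_max and other.y_min <= self.y_max)


@dataclass
class TopoUnit:
    interval: Rect
    children: List["TopoUnit"] = field(default_factory=list)


def build_topology(interval: Rect, layers: int) -> TopoUnit:
    node = TopoUnit(interval)
    if layers > 1:
        mx = (interval.x_min + interval.x_max) / 2
        my = (interval.y_min + interval.y_max) / 2
        quads = [Rect(interval.x_min, interval.y_min, mx, my),
                 Rect(mx, interval.y_min, interval.x_max, my),
                 Rect(interval.x_min, my, mx, interval.y_max),
                 Rect(mx, my, interval.x_max, interval.y_max)]
        node.children = [build_topology(q, layers - 1) for q in quads]
    return node


def find_target_terminals(node: TopoUnit,
                          first_paragraph: Rect) -> List[TopoUnit]:
    if not node.interval.overlaps(first_paragraph):
        return []        # prune: downstream units cannot overlap either
    if not node.children:
        return [node]    # terminal topology unit meeting the requirement
    hits: List[TopoUnit] = []
    for child in node.children:
        hits.extend(find_target_terminals(child, first_paragraph))
    return hits
```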

In this embodiment, the hierarchical visualization topology is used to quickly find the area where the search interval of the topic behavior map unit is located. If the visualization interval corresponding to a topology unit does not overlap with the first visualization log paragraph, the visualization intervals corresponding to its downstream topology units will not overlap with the first visualization log paragraph either, so the hierarchical visualization local topology tree that takes that topology unit as its original topology unit does not need to be accessed. This reduces the number of visualization intervals that must be compared against the first visualization log paragraph; since the overall data volume of the scene-type topic interaction log is large, the above technique saves a large amount of useless computing resource overhead and reduces analysis time.

In addition, judging whether the first visualization log paragraph overlaps with a visualization interval is more convenient than judging whether a circular (or otherwise shaped) search interval overlaps with the visualization interval, which reduces the computing resource overhead of the judgment.

In step S1015, an undetermined visualization area corresponding to the topic behavior map unit is determined from the visualization intervals corresponding to all the target terminal topology units.

When determining the undetermined visualization region, all visualization regions in the visualization intervals corresponding to all target terminal topology units can be accessed in sequence, and whether each accessed visualization region overlaps with the search interval is judged; if so, that visualization region is an undetermined visualization region.

In order to reduce the computational resource overhead when determining the to-be-determined visualization region, in step S1015, the determining the to-be-determined visualization region corresponding to the topic behavior map unit from the visualization intervals corresponding to all target end topology units may include the following steps:

s10151: aiming at each target terminal topological unit, determining at least one template visualization area from a visualization interval corresponding to the target terminal topological unit; a second visualization log paragraph corresponding to the template visualization area overlaps with the first visualization log paragraph, and the second visualization log paragraph corresponding to the template visualization area is a visualization log paragraph covering the template visualization area and positioned in the scene-type topic interaction log;

s10152: for each template visualization area, judge whether the template visualization area overlaps with the search interval; if so, determine that the template visualization area is an undetermined visualization area corresponding to the topic behavior map unit.

When determining a template visualization region (reference visualization region), a second visualization log paragraph corresponding to each visualization region in a visualization interval corresponding to a target terminal topology unit may be determined first, where the second visualization log paragraph may be, but is not limited to, a smallest visualization log paragraph covering the visualization region; then, for each second visualization log paragraph, whether the second visualization log paragraph overlaps with the first visualization log paragraph is determined, and if so, the visualization area covered by the second visualization log paragraph is a template visualization area.

The template visualization area is determined by judging whether a second visualization log paragraph covering the template visualization area overlaps with a first visualization log paragraph covering the search interval, but the overlapping of the second visualization log paragraph with the first visualization log paragraph does not represent the overlapping of the template visualization area with the search interval.

For example, suppose zondition100 is a search interval, the first visualization log paragraph corresponding to the search interval zondition100 is joint_section10, and Visualization100 is a visualization region.

Further, the second visualization log paragraph corresponding to the visualization region Visualization100 is joint_section20. The first visualization log paragraph joint_section10 overlaps with the second visualization log paragraph joint_section20, so the visualization region Visualization100 is a template visualization region; however, the visualization region Visualization100 does not overlap with the search interval zondition100.

Therefore, after the template visualization areas are found, whether each template visualization area overlaps with the search interval is further judged; if so, the template visualization area is determined to be an undetermined visualization area corresponding to the topic behavior map unit.

In this way, the template visualization areas are first found by judging whether the visualization log paragraphs overlap, and each template visualization area is then tested against the search interval.
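
A minimal sketch of this two-stage screening is given below: a cheap rectangle-against-rectangle test between log paragraphs first, then the exact test against the (here circular) search interval. Treating each region's second visualization log paragraph as the region's own rectangle is an assumption of the sketch.

```python
# Stage 1 finds template regions by paragraph overlap; stage 2 keeps
# only those that truly cross the circular search interval.
from dataclasses import dataclass
from typing import List


@dataclass
class Rect:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def overlaps(self, other: "Rect") -> bool:
        return (self.x_min <= other.x_max and other.x_min <= self.x_max and
                self.y_min <= other.y_max and other.y_min <= self.y_max)


def circle_overlaps_rect(cx: float, cy: float, r: float, rect: Rect) -> bool:
    """Exact test: clamp the circle center into the rectangle and
    compare the clamped distance against the radius."""
    nx = min(max(cx, rect.x_min), rect.x_max)
    ny = min(max(cy, rect.y_min), rect.y_max)
    return (nx - cx) ** 2 + (ny - cy) ** 2 <= r ** 2


def pending_regions(regions: List[Rect], first_paragraph: Rect,
                    cx: float, cy: float, r: float) -> List[Rect]:
    templates = [g for g in regions if g.overlaps(first_paragraph)]
    return [g for g in templates if circle_overlaps_rect(cx, cy, r, g)]
```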

In the existing mode, in the process of determining the undetermined behavior state unit or undetermined visualization area corresponding to a topic behavior map unit, the visualization areas in the entire scene-type topic log are compared against the search interval of the topic behavior map unit to determine whether they cross; since the scene-type topic log has a large data volume, this involves many invalid operations and consumes more time. In the present scheme, the minimum enclosing area of the search interval is first compared against the minimum enclosing area of each visualization area to determine all template visualization areas, and the visualization areas crossing the search interval are then further determined from the template visualization areas, which greatly reduces the computing resource overhead and further reduces time consumption.

The template visualization area could be directly used as an undetermined visualization area corresponding to the topic behavior map unit. Generally speaking, however, the set description of the visualization region where the behavior detection target is located will not be opposite to the behavior transmission description of the behavior detection target.

Based on the condition, the template visualization area can be optimized to obtain the to-be-determined visualization area corresponding to the topic behavior map unit. For this purpose, in step S10152, the determining that the template visualization area is the to-be-determined visualization area corresponding to the topic behavior map unit further includes the following steps:

s101521: acquiring a behavior transmission description corresponding to the topic behavior map unit;

s101522: determining the description of the template visualization area set in the scene-type topic interaction log generated in advance;

s101523: and judging whether the quantitative difference value between the behavior transmission description corresponding to the topic behavior map unit and the set description of the template visualization area is less than or equal to a set quantitative difference value or not, and if so, determining the template visualization area as the to-be-determined visualization area corresponding to the topic behavior map unit.

In some possible embodiments, in addition to the topic behavior map unit (spatial domain information) of the behavior detection target, the data collected by the behavior detection thread carries a behavior transmission description (a behavior development path or behavior trend path) of the behavior detection target at each topic behavior map unit, so the behavior transmission description corresponding to the topic behavior map unit can be obtained from the source information of the current visual behavior record. Of course, when this description information is missing, the behavior transmission description can be calculated using the previous topic behavior map unit and the current one, or the current topic behavior map unit and the next one.

Each visualization area in the scene-type topic interaction log has a set description, namely the description of the path from the starting unit to the terminating unit of the visualization area. For example, the set description of a visualization region corresponding to a non-interactive behavior is the path description of that non-interactive behavior; an interactive behavior may correspond to two visualization regions with reciprocal path descriptions, and accordingly the set description of each of these visualization regions is the path description of the counterpart topic's interaction in the interactive behavior.

If the quantitative difference value (such as the path description difference degree) between the set description of the template visualization area and the behavior transmission description corresponding to the topic behavior mapping unit is smaller than or equal to the set quantitative difference value, determining that the template visualization area is an undetermined visualization area; otherwise, the probability that the template visualization area is the visualization area where the behavior detection target is located is considered to be extremely low, and the template visualization area is determined not to be the to-be-determined visualization area.

The set quantitative difference value can be preset as required. To avoid missing the correct visualization region because an abnormal topic behavior map unit makes the quantitative difference value too large, the set quantitative difference value can be made relatively large; for example, it can be set to 0.9 on a 0~1 scale. This is, of course, by way of example only and is not intended as limiting.
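
The sketch below shows this filter; modeling path descriptions as angles and normalizing their difference onto a 0~1 scale are assumptions made for illustration.

```python
# Keep a template region only when the difference between the unit's
# behavior transmission description and the region's set description
# stays within the (deliberately large) threshold.
def description_difference(behavior_desc: float, region_desc: float) -> float:
    """Normalized path-description difference on a 0~1 scale,
    with descriptions modeled as angles in degrees."""
    diff = abs(behavior_desc - region_desc) % 360.0
    diff = min(diff, 360.0 - diff)   # wrap-around difference
    return diff / 180.0


def is_pending_region(behavior_desc: float, region_desc: float,
                      threshold: float = 0.9) -> bool:
    # A large threshold such as 0.9 tolerates abnormal map units.
    return description_difference(behavior_desc, region_desc) <= threshold
```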

When the constraint of the search interval corresponding to the topic behavior map unit is large, the number of template visualization areas is large, but there is only one correct visualization area; optimizing the template visualization areas therefore yields a smaller number of undetermined visualization areas, which avoids a large amount of invalid processing when the visualization areas are subsequently aligned and further reduces time consumption.

In step S101, after finding the to-be-determined visualization region corresponding to the topic behavior map unit, a behavior state mapping unit that maps the topic behavior map unit to the to-be-determined visualization region may be determined, and the to-be-determined behavior state unit corresponding to the topic behavior map unit is determined in the scene type topic interaction log generated in advance by the behavior state mapping unit.

The behavior state mapping unit is the unit onto which the topic behavior map unit is mapped on the undetermined visualization region; for example, it may be the intersection point of the topic behavior map unit's projection with the behavior path corresponding to the undetermined visualization region. On this basis, determining, through the behavior state mapping unit, the undetermined behavior state unit corresponding to the topic behavior map unit in the scene-type topic interaction log generated in advance may include:

if the behavior state mapping unit is in the undetermined visualization area, determining the behavior state mapping unit as the undetermined behavior state unit;

and if the behavior state mapping unit is not within the undetermined visualization region, taking the behavior state unit in the undetermined visualization region that is closest to the behavior state mapping unit as the undetermined behavior state unit.
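
A minimal sketch of this mapping follows: project the topic behavior map unit onto the region's behavior path (modeled here as a single line segment, an assumption of the sketch), then fall back to the closest in-region behavior state unit when the projection lands outside the undetermined visualization region.

```python
# Determine the behavior state mapping unit by projection, then choose
# the undetermined behavior state unit as described above.
from typing import List, Tuple

Point = Tuple[float, float]


def project_onto_segment(p: Point, a: Point, b: Point) -> Point:
    """Behavior state mapping unit: foot of point p on segment a-b."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    t = ((p[0] - ax) * dx + (p[1] - ay) * dy) / max(dx * dx + dy * dy, 1e-12)
    t = min(max(t, 0.0), 1.0)
    return (ax + t * dx, ay + t * dy)


def pending_state_unit(mapping_unit: Point, in_region: bool,
                       region_units: List[Point]) -> Point:
    if in_region:
        return mapping_unit
    # Otherwise take the in-region behavior state unit closest to it.
    def dist2(u: Point) -> float:
        return (u[0] - mapping_unit[0]) ** 2 + (u[1] - mapping_unit[1]) ** 2
    return min(region_units, key=dist2)
```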

In some independently implementable technical solutions, in step S300, the determining an optimized operation behavior trajectory between two undetermined behavior state units in a undetermined behavior state unit group through scene type topic invocation information which is subjected to statistics in advance includes:

s301: aiming at each undetermined behavior state unit group, searching a target visualization area in scene type topic calling information which is counted in advance through two undetermined behavior state units in the undetermined behavior state unit group;

s302: and determining an optimized operation behavior track between two undetermined behavior state units in the undetermined behavior state unit group through the found target visualization area.

When two adjacent topic behavior map units are far apart, there are a plurality of optional operation behavior tracks between the undetermined behavior state unit corresponding to one topic behavior map unit and the undetermined behavior state unit of the next topic behavior map unit, and many overlapping visualization areas exist between the different operation behavior tracks; if these visualization areas are searched repeatedly, a large amount of computation time is consumed.

The scene-type topic calling information is used when searching for the target visualization area: when a repeated visualization area (namely one searched before) is encountered, the corresponding operation behavior track information has already been summarized in the scene-type topic calling information, so no extra processing is needed, and the computation time of repeatedly searching the same visualization area is saved.
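
This behaves like a memoization cache, as the sketch below illustrates; the cache key and the analyse() placeholder are assumptions rather than the patent's concrete record layout.

```python
# Results for a visualization area are computed once on the first call
# and simply reused on every repeated call.
from typing import Callable, Dict, Tuple

AreaKey = Tuple[str, str]   # (start behavior state unit, end unit)


class TopicCallingInfo:
    """Summarizes operation behavior track info between called units."""

    def __init__(self) -> None:
        self._cache: Dict[AreaKey, dict] = {}

    def lookup_or_compute(self, key: AreaKey,
                          analyse: Callable[[AreaKey], dict]) -> dict:
        if key not in self._cache:       # first call: analyse once
            self._cache[key] = analyse(key)
        return self._cache[key]          # repeated call: no extra work
```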

Under some optional and independently implementable design ideas, in step S301, the finding out a target visualization area in the scene type topic invocation information subjected to statistics in advance through two pending behavior state units in the pending behavior state unit group may include the following steps:

s3011: determining a first search auxiliary unit and a second search auxiliary unit in the scene-type topic interaction log generated in advance through a first undetermined behavior state unit and a second undetermined behavior state unit in the undetermined behavior state unit group; the first search auxiliary unit is the behavior state unit that is located in the undetermined visualization area where the first undetermined behavior state unit is located and is closest to the second undetermined behavior state unit, and the second search auxiliary unit is the behavior state unit that is located in the undetermined visualization area where the second undetermined behavior state unit is located and is closest to the first undetermined behavior state unit; the first undetermined behavior state unit is the undetermined behavior state unit of the first topic behavior map unit in the topic behavior map unit group corresponding to the undetermined behavior state unit group, and the second undetermined behavior state unit is the undetermined behavior state unit of the second topic behavior map unit in that topic behavior map unit group, the second topic behavior map unit being associated with the first topic behavior map unit in the current visual behavior record and located after it;

s3012: and finding out a target visual area in the scene type topic calling information which is counted in advance through the first searching auxiliary unit and the second searching auxiliary unit.

For example, assume element1 is a first topic behavior map unit, element2 is a second topic behavior map unit, element2 and element1 are two associated topic behavior map units, and element2 is located after element1. The undetermined behavior state unit (first undetermined behavior state unit) of element1 is pending1, pending4 is the undetermined behavior state unit (second undetermined behavior state unit) of element2, and pending1 and pending4 form a pair, namely an undetermined behavior state unit group. condition2 is the behavior state unit in the undetermined visualization area condition1condition2 where pending1 is located that is closest to pending4, so condition2 is determined as the first search auxiliary unit; condition7 is the behavior state unit in the undetermined visualization area condition7condition8 where pending4 is located that is closest to pending1, so condition7 is determined as the second search auxiliary unit.

Through the first search auxiliary unit condition2 and the second search auxiliary unit condition7, the target visualization areas are found in the scene-type topic calling information counted in advance, where the target visualization areas may be the visualization areas covered by the optimized operation behavior trajectory among all the operation behavior trajectories from the first search auxiliary unit condition2 to the second search auxiliary unit condition7.

In some optional and independently implementable design considerations, in step S3012, the finding a target visualization area in the scene-type topic invocation information subjected to statistics in advance by the first search assisting unit and the second search assisting unit may include the following steps:

s30121: determining one of the first indication path and the second indication path as a current visual searching indication path; the first indication path is the description from the first search auxiliary unit to the second search auxiliary unit, and the second indication path is the description from the second search auxiliary unit to the first search auxiliary unit;

s30122: taking the query trigger state unit of the current visual search indication path as the current behavior state unit; if the current behavior state unit is not counted in the scene-type topic calling information, recording the current behavior state unit and its corresponding current visual search indication path into the scene-type topic calling information;

s30123: searching for the associated behavior state units of the current behavior state unit in the scene-type topic interaction log generated in advance, and accessing the found associated behavior state units in sequence; if an accessed associated behavior state unit has been summarized in the scene-type topic calling information, finding out a visualization area located on the current visual search indication path as the target visualization area through the visual search indication path corresponding to that associated behavior state unit recorded in the scene-type topic calling information; if an accessed associated behavior state unit is not counted in the scene-type topic calling information, recording the associated behavior state unit, its corresponding current visual search indication path, and the visualization area between the current behavior state unit and the associated behavior state unit into the scene-type topic calling information;

s30124: if the target visualization area has not been determined when the sequential access terminates, selecting one behavior state unit from all the associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path; then updating the current visual search indication path to the other one of the first indication path and the second indication path, and returning to the operation of taking the query trigger state unit of the current visual search indication path as the current behavior state unit.
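
A minimal sketch of this alternating bidirectional search follows. The graph representation, the unit identifiers and the simplistic choice of the next query trigger state unit are assumptions (the scheme described below selects it by the lowest operation behavior track quantitative evaluation), and tie-breaking details are omitted.

```python
# Alternate between the two indication paths; when an accessed
# associated unit was already recorded on the *other* path, the two
# searches have met and the recorded areas yield the target areas.
from typing import Dict, List, Optional, Set, Tuple

Graph = Dict[str, List[str]]   # unit id -> associated unit ids


def find_target_areas(graph: Graph, first_aid: str, second_aid: str,
                      max_rounds: int = 64) -> Optional[List[Tuple[str, str]]]:
    calling: Dict[str, str] = {}           # unit -> path it was recorded on
    areas: Set[Tuple[str, str]] = set()    # recorded visualization areas
    frontier = {"fwd": first_aid, "bwd": second_aid}
    path = "fwd"
    for _ in range(max_rounds):
        current = frontier[path]
        calling.setdefault(current, path)
        for nxt in graph.get(current, []):
            if calling.get(nxt, path) != path:
                areas.add((current, nxt))  # the searches met here
                return sorted(areas)
            calling.setdefault(nxt, path)
            areas.add((current, nxt))
        if graph.get(current):             # next query trigger state unit
            frontier[path] = graph[current][0]
        path = "bwd" if path == "fwd" else "fwd"
    return None
```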

For example, assume the first indication path is the description from the first search auxiliary unit condition2 to the second search auxiliary unit condition7, and the second indication path is the description from condition7 to condition2. The query trigger state unit of the first indication path is the first search auxiliary unit condition2, and the query trigger state unit of the second indication path is the second search auxiliary unit condition7.

The current visual search indication path may be the first indication path or the second indication path; the following description takes the first indication path as an example. Of course, taking the first indication path as the current visual search indication path applies only when the search starts; the current visual search indication path is updated as the search proceeds.

Since the current visual search instruction path is from the first search assistant unit condition2 to the second search assistant unit condition7, the query trigger status unit of the current visual search instruction path is the first search assistant unit condition2, and the first search assistant unit condition2 is used as the current behavior status unit to determine whether the first search assistant unit condition2 is summarized in the scene-type topic calling information.

At this time, since the first search auxiliary unit condition2 is called for the first time and is not counted in the scene-type topic calling information, condition2 and its corresponding current visual search indication path (first indication path) are recorded into the scene-type topic calling information.

The associated behavior state units of the first search auxiliary unit condition2, namely condition1, condition3, condition4 and condition5, are found in the scene-type topic interaction log generated in advance and are accessed in sequence. Through this sequential access it is determined that none of condition1, condition3, condition4 and condition5 is counted in the scene-type topic calling information, so these associated behavior state units and their corresponding current visual search indication path (first indication path) are recorded into the scene-type topic calling information, together with the visualization area condition1condition2 from condition1 to the first search auxiliary unit condition2, the visualization area condition3condition2 from condition3 to condition2, the visualization area condition2condition4 from condition2 to condition4, and the visualization area condition2condition5 from condition2 to condition5.

When the sequential access to the associated behavior state units condition1, condition3, condition4 and condition5 is terminated, and the target visual area is not determined, one behavior state unit is selected from all the associated behavior state units condition1, condition3, condition4 and condition5 of the first search assisting unit condition2 as the query trigger state unit of the current visual search indication path (first indication path).

Under some optional and independently implementable design considerations, in step S30124, selecting one behavior state unit from all associated behavior state units of the current behavior state unit as the query trigger state unit of the current visual search indication path may include the following steps: determine the operation behavior track quantitative evaluation corresponding to each associated behavior state unit, which is the sum of a first behavior track quantitative evaluation from the query trigger state unit of the current visual search indication path to the associated behavior state unit and a second behavior track quantitative evaluation from the associated behavior state unit to the query termination state unit of the current visual search indication path; select the associated behavior state unit with the lowest operation behavior track quantitative evaluation; and determine the selected associated behavior state unit as the query trigger state unit of the current visual search indication path.

The first behavior track quantitative evaluation is the operation behavior track attribute from the query trigger state unit of the current visual search indication path to the associated behavior state unit; since this part of the track has already been searched, it can be represented by the length of the operation behavior track. The second behavior track quantitative evaluation is the operation behavior track attribute from the associated behavior state unit to the query termination state unit of the current visual search indication path, which can be represented by the Euclidean metric between the associated behavior state unit and the query termination state unit.
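
This evaluation is the sum of an already searched part and a straight-line estimate, in the spirit of an A*-style g + h cost. The sketch below assumes coordinates for the state units and a table of already searched track lengths, both of which are illustrative.

```python
# Pick the associated unit with the lowest (searched track length +
# Euclidean metric to the query termination state unit).
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def pick_query_trigger(candidates: List[str],
                       searched_length: Dict[str, float],  # first evaluation
                       coords: Dict[str, Point],
                       termination: str) -> str:
    tx, ty = coords[termination]

    def evaluation(unit: str) -> float:
        x, y = coords[unit]
        euclid = math.hypot(x - tx, y - ty)   # second evaluation
        return searched_length[unit] + euclid
    return min(candidates, key=evaluation)
```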

When the associated behavior state unit is called for the first time, the quantitative evaluation of the operation behavior track corresponding to the associated behavior state unit can also be recorded into the scene type topic calling information, so that for the associated behavior state unit which is already summarized in the scene type topic calling information, the quantitative evaluation of the operation behavior track corresponding to the associated behavior state unit can be directly obtained from the scene type topic calling information.

Continuing with the foregoing example, the operation behavior track quantitative evaluations from each of the associated behavior state units condition1, condition3, condition4 and condition5 of the first search auxiliary unit condition2 to the second search auxiliary unit condition7 can be calculated, and the associated behavior state unit with the lowest evaluation is selected from condition1, condition3, condition4 and condition5 as the query trigger state unit of the current visual search indication path (first indication path); for example, the associated behavior state unit condition4 is selected.

And then, updating the current visual searching indication path into a second indication path, and returning to the operation of taking the query trigger state unit of the current visual searching indication path as a current behavior state unit.

The current visual search indication path is updated to a second indication path, the query trigger status unit of the second indication path is the second search assistant unit condition7, the query termination status unit of the second indication path is the first search assistant unit condition2, and the second search assistant unit condition7 is taken as the current behavior status unit to determine whether the second search assistant unit condition7 is summarized in the scene-type topic invocation information.

At this time, since the second search auxiliary unit condition7 is called for the first time and is not counted in the scene-type topic calling information, condition7 and its corresponding current visual search indication path (second indication path) are recorded into the scene-type topic calling information.

The associated behavior state units of the second search auxiliary unit condition7, namely condition6 and condition8, are found in the scene-type topic interaction log generated in advance and accessed in sequence. Through this sequential access it is determined that neither condition6 nor condition8 is counted in the scene-type topic calling information, so condition6 and condition8 and the corresponding current visual search indication path (second indication path) are recorded into the scene-type topic calling information, together with the visualization area condition6condition7 from condition6 to the second search auxiliary unit condition7 and the visualization area condition7condition8 from condition7 to condition8.

If the target visualization area has not been determined when the sequential access terminates, one behavior state unit is selected from all the associated behavior state units condition6 and condition8 of the second search auxiliary unit condition7 as the query trigger state unit of the current visual search indication path (second indication path). The behavior state unit is selected in the manner described above: the operation behavior track quantitative evaluation corresponding to the associated behavior state unit condition6 is determined, namely the sum of the length of the operation behavior track from the second search auxiliary unit condition7 to condition6 and the Euclidean metric between condition6 and the first search auxiliary unit condition2; the operation behavior track quantitative evaluation corresponding to the associated behavior state unit condition8 is determined, namely the sum of the length of the operation behavior track from condition7 to condition8 (the length of the visualization area condition7condition8) and the Euclidean metric between condition8 and condition2. When the associated behavior state units condition6 and condition8 are called for the first time, their operation behavior track quantitative evaluations are also recorded into the scene-type topic calling information.

The associated behavior state unit with the lowest quantitative evaluation of the behavior locus is selected from all the associated behavior state units condition6 and condition8 as the query trigger state unit on the current visual search indication path (the second indication path), for example, the associated behavior state unit condition6 is selected as the query trigger state unit of the current visual search indication path (the second indication path).

And then, updating the current visual searching indication path into a first indication path, and returning to the operation of taking the query trigger state unit of the current visual searching indication path as a current behavior state unit.

The current visual search indication path is updated to the first indication path, the query trigger status unit of the first indication path is behavior status unit condition4, the query termination status unit of the first indication path is first search auxiliary unit condition2, the behavior status unit condition4 is taken as the current behavior status unit, and whether the behavior status unit condition4 is summarized in the scene type topic invocation information is judged.

Since the behavior status unit condition4 is called for the second time, the behavior status unit condition4 has already been summarized in the scene-type topic invocation information, so there is no need to re-record the behavior status unit condition4 and related information into the scene-type topic invocation information.

The associated behavior state units of the behavior state unit condition4, i.e., condition2 and condition6, are found in the scene-type topic interaction log generated in advance, and the associated behavior state units condition2 and condition6 are sequentially accessed. Since the association status units condition2 and condition6 are already grouped in the scene-type topic calling information, when the association status units condition2 and condition6 are sequentially accessed, the visual area on the current visual search instruction path needs to be searched out as the target visual area through the visual search instruction path corresponding to the association status unit described in the scene-type topic calling information.

In some optional and independently implementable design considerations, in step S30123, the finding a visual area located on the current visual search indication path through the visual search indication path corresponding to the associated behavior state unit recorded in the scene-type topic invocation information as the target visual area may include:

comparing and analyzing a visual searching indication path corresponding to the associated behavior state unit recorded in the scene type topic calling information with the current visual searching indication path;

and when the comparison analysis result represents that the visual searching indication path corresponding to the associated behavior state unit is different from the current visual searching indication path, acquiring a visual area positioned on the current visual searching indication path from the scene type topic calling information as the target visual area.

Under some optional and independently implementable design ideas, the visual area on the current visual search indication path is acquired from the scene-type topic invocation information to serve as the target visual area, and then the sequential access can be terminated. When the visual search indication path corresponding to the associated behavior state unit recorded in the scene type topic calling information is the same as the current visual search indication path, continuing to sequentially access the next associated behavior state unit until all associated behavior state units of the current behavior state unit are sequentially accessed.

For example, when sequentially accessing the association status units condition2 and condition6, the association status units condition2 are sequentially accessed, and then the association status units condition6 are sequentially accessed, then: upon sequentially accessing the associated behavior state unit condition2, it is determined that the associated behavior state unit condition2 has been summarized in the scene-type topic calling information, and the visual search instruction path (first instruction path) corresponding to the associated behavior state unit condition2 described in the scene-type topic calling information is the same as the current visual search instruction path (first instruction path), so it is possible to continue sequentially accessing the associated behavior state unit condition 6.

Upon sequentially accessing the associated behavior status unit condition6, it is determined that the associated behavior status unit condition6 has been summarized in the scene-type topic calling information, and the visual search instruction path (second instruction path) corresponding to the associated behavior status unit condition6 described in the scene-type topic calling information is different from the current visual search instruction path (first instruction path), so it is possible to acquire the visual region located on the current visual search instruction path from the scene-type topic calling information as the target visual region, and terminate the sequential access.

It can be understood that, since the scene-type topic calling information has summarized the visualization areas condition1condition2, condition3condition2, condition2condition4, condition4condition6, condition6condition7 and condition7condition8, the visualization areas from the first search auxiliary unit condition2 to the second search auxiliary unit condition7 (equivalently, from condition7 to condition2), namely condition2condition4, condition4condition6 and condition6condition7, can be obtained directly from the scene-type topic calling information.

Further, these condition2condition4, condition4condition6, and condition6condition7 are set as target visualization areas.

In step S302, an optimized operation behavior trajectory between two undetermined behavior state units in the undetermined behavior state unit group is determined through the found target visualization region.

For example, the visualization region condition1condition2 where the first undetermined behavior state unit pending1 is located, the target visualization regions condition2condition4, condition4condition6 and condition6condition7, and the visualization region condition7condition8 where the second undetermined behavior state unit pending4 is located may be spliced in sequence (the shared behavior state unit of two adjacent visualization regions serves as the splicing point) to obtain the required optimized operation behavior trajectory.
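
A minimal sketch of this splicing follows, with each visualization area modeled as a (start unit, end unit) pair taken from the example above; this representation is an assumption made for illustration.

```python
# Adjacent visualization areas share a behavior state unit, which acts
# as the splicing point of the optimized operation behavior trajectory.
from typing import List, Tuple

Area = Tuple[str, str]   # (start behavior state unit, end unit)


def splice(areas: List[Area]) -> List[str]:
    trajectory = [areas[0][0]]
    for start, end in areas:
        assert trajectory[-1] == start, "areas must share splice points"
        trajectory.append(end)
    return trajectory


print(splice([("condition1", "condition2"), ("condition2", "condition4"),
              ("condition4", "condition6"), ("condition6", "condition7"),
              ("condition7", "condition8")]))
# ['condition1', 'condition2', 'condition4', 'condition6', 'condition7',
#  'condition8']
```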

Since the above describes the case where condition2 and condition8 are called for the first time, the visualization areas condition1condition2, condition3condition2, condition2condition4, condition4condition6, condition6condition7 and condition7condition8 are added to the scene-type topic calling information for the first time during the search. If condition2 or condition8 is later called a second time (for example, when a search is performed for another undetermined behavior state unit group corresponding to the same topic behavior map unit group), the information related to a repeated visualization area such as condition2condition4, for example its operation behavior track quantitative evaluation, does not need additional arithmetic processing and can be obtained directly from the scene-type topic calling information.

In the above manner, num2 optimized operation behavior tracks can be obtained for each topic behavior map unit group, where num2 is the number of undetermined behavior state unit groups corresponding to that topic behavior map unit group; one of them then needs to be selected as the optimized operation behavior track adapted to the topic behavior map unit group, so that the target operation behavior track adapted to the current visual behavior record can be determined.

In some independently implementable technical solutions, in step S400, determining, in the scene-type topic interaction log generated in advance, the target operation behavior trajectory adapted to the current visual behavior record through the optimized operation behavior trajectory between the two undetermined behavior state units in the undetermined behavior state unit group corresponding to each topic behavior map unit group may include executing, for each topic behavior map unit group, the following steps: for each undetermined behavior state unit group corresponding to the topic behavior map unit group, import the optimized operation behavior track between its two undetermined behavior state units into an AI neural network trained in advance, and obtain at least one importance degree variable of the set key content possibilities corresponding to the undetermined behavior state unit group; select one undetermined operation behavior track from the optimized operation behavior tracks between the two undetermined behavior state units in each undetermined behavior state unit group through the at least one importance degree variable of the set key content possibilities corresponding to each undetermined behavior state unit group; and globally fuse all the undetermined operation behavior tracks according to a preset behavior track splicing strategy to obtain the target operation behavior track.

In some embodiments, a pair of pending behavior state element groups corresponds to a set of importance variables, which may include: a first importance degree variable of a first key content possibility, wherein the first key content possibility is used for representing a difference condition between Euclidean measurement between two topic behavior map units in a topic behavior map unit group and an operation behavior track length between two undetermined behavior state units of an undetermined behavior state unit group corresponding to the topic behavior map unit group; a second importance degree variable of a second key content possibility, wherein the second key content possibility is used for representing a difference situation between a behavior transmission description of one topic behavior map unit in the topic behavior map unit group and a set description of an undetermined visualization region where a corresponding undetermined behavior state unit in the undetermined behavior state unit group is located; and a third important degree variable of a third key content possibility, wherein the third key content possibility is used for representing the difference situation between the average behavior transmission time of the current visual behavior record and the topic interaction aging requirement.

It can be understood that the change possibility between the visual regions to be determined where the two to-be-determined behavior state units in the corresponding to-be-determined behavior state unit groups are located is calculated according to a group of importance degree variables output by the AI neural network, and the change possibility corresponding to each to-be-determined behavior state unit group is obtained; and selecting one to-be-determined operation behavior track from the optimized operation behavior tracks between two to-be-determined behavior state units in each to-be-determined behavior state unit group according to the change possibility and the preset positioning possibility corresponding to each to-be-determined behavior state unit group.

The manner of calculating the change possibility between the undetermined visualization areas where the two undetermined behavior state units in the corresponding undetermined behavior state unit group are located, according to the group of importance degree variables output by the AI neural network, may include the following steps: calculate a first quantized value that takes the first key content possibility as the base and the first importance degree variable in the group of importance degree variables output by the AI neural network as the exponent; calculate a second quantized value that takes the second key content possibility as the base and the second importance degree variable as the exponent; calculate a third quantized value that takes the third key content possibility as the base and the third importance degree variable as the exponent; and weight the first, second and third quantized values to obtain the change possibility corresponding to the undetermined behavior state unit group.
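
The computation reduces to raising each key content possibility to its importance degree variable and weighting the results, as sketched below; the equal fusion weights are an assumption, since only a weighting of the three quantized values is specified.

```python
# change possibility = sum_i w_i * k_i ** a_i, where k_i are the key
# content possibilities and a_i the importance degree variables output
# by the AI neural network.
from typing import Sequence


def change_possibility(possibilities: Sequence[float],   # k1, k2, k3
                       importances: Sequence[float],     # a1, a2, a3
                       weights: Sequence[float] = (1 / 3, 1 / 3, 1 / 3)) -> float:
    quantized = [k ** a for k, a in zip(possibilities, importances)]
    return sum(w * q for w, q in zip(weights, quantized))
```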

The AI neural network is trained in advance and stored in the server, and can be called when needed. In this embodiment, the importance degree variables of the different key content possibilities corresponding to the undetermined behavior state unit group are obtained through the AI neural network, so that more reasonable change possibilities are calculated; the effect is better when the method is applied to scene-type topic interaction logs of poor quality, and good adaptation performance is maintained when the method is applied to topic-community-level scenes.

It can be understood that, when determining the optimized operation behavior tracks between the undetermined behavior state units of the undetermined behavior state unit group corresponding to each topic behavior map unit group in the current visualized behavior record, the tracks can be determined from the pre-counted scene type topic calling information. Because the scene type topic calling information summarizes the operation behavior track information between the different called behavior state units, a visualized area that is called repeatedly does not need to be analyzed again, which avoids a large amount of repeated analysis. Further, determining the target operation behavior track adapted to the current visualized behavior record from the optimized operation behavior tracks reduces the time consumed in determining the operation behavior track while ensuring the accuracy and integrity of the obtained target operation behavior track. An analysis basis is therefore provided quickly and accurately for the subsequent interactive behavior security analysis, and the timeliness of that analysis is not degraded by excessive time spent determining the target operation behavior track of the current visualized behavior record; in effect, the pre-counted calling information acts as a lookup cache, as sketched below.
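Read as an engineering pattern, the pre-counted scene type topic calling information behaves like a memoization cache keyed by pairs of behavior state units: pairs seen before return their stored tracks, and only unseen pairs are analyzed. A minimal sketch under that assumption; analyze_track is a hypothetical placeholder, since the source does not specify the underlying analysis.

```python
def analyze_track(unit_a: str, unit_b: str) -> list:
    """Hypothetical placeholder for the actual track analysis;
    returns a dummy track for illustration."""
    return [unit_a, unit_b]

# Cache keyed by the pair of undetermined behavior state unit identifiers,
# standing in for the pre-counted scene type topic calling information.
_track_cache: dict = {}

def optimized_track(unit_a: str, unit_b: str) -> list:
    """Return the operation behavior track between two undetermined
    behavior state units, analyzing the pair only on a cache miss."""
    key = (unit_a, unit_b)
    if key not in _track_cache:
        _track_cache[key] = analyze_track(unit_a, unit_b)  # analyze once
    return _track_cache[key]  # repeated calls reuse the stored track
```

Repeatedly called visualized areas then cost only a dictionary lookup rather than a fresh analysis, which is the time saving described above.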

Based on the same inventive concept, there is also provided a cloud service topic information processing apparatus 20 for big data, applied to a big data server 10, the apparatus including:

the information obtaining module 21 is configured to obtain first community service session information and second community service session information, where the first community service session information is a first service session topic description, and the second community service session information is a second service session topic description;

an indication obtaining module 22, configured to obtain a first feature adjustment indication between the first service session topic descriptions in different topic scenes, a second feature adjustment indication between the second service session topic descriptions in different topic scenes, and a third feature adjustment indication between the first service session topic description and the second service session topic description in a set topic scene;

the requirement binding module 23 is configured to perform session requirement binding on the first community service session information and the second community service session information through the first feature adjustment indication, the second feature adjustment indication, and the third feature adjustment indication.

For the description of the above functional modules, refer to the description of the method shown in fig. 2.
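For orientation only, the three modules could be arranged as methods of a single processing class. The class and method names below are hypothetical and the bodies are placeholders; the source specifies only each module's responsibility.

```python
class CloudServiceTopicInfoProcessor:
    """Sketch of apparatus 20, one method per functional module."""

    def obtain_information(self):
        """Module 21: obtain the first and second community service
        session information (the two service session topic descriptions)."""
        raise NotImplementedError

    def obtain_indications(self, first_desc, second_desc):
        """Module 22: obtain the first, second, and third feature
        adjustment indications across the topic scenes."""
        raise NotImplementedError

    def bind_requirements(self, first_indication, second_indication, third_indication):
        """Module 23: perform session requirement binding through the
        three feature adjustment indications."""
        raise NotImplementedError
```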

In summary, in the above solution, in the process of implementing session requirement binding between the first community service session information and the second community service session information of different session topics, the feature adjustment indications between descriptions of the same service session topic under different topic scenes and the feature adjustment indication between descriptions of different service session topics under the set topic scenes are obtained. Through the first feature adjustment indication, the first community service session information can be mapped into its set topic scene; through the second feature adjustment indication, the second community service session information can be mapped into its set topic scene; and through the third feature adjustment indication, the two descriptions can be session requirement bound under the set topic scenes. Session requirement binding between community service session information of different session topics is thus implemented in this relayed manner. In addition, no matter which topic scenes the community service session information of different session topics starts from, only one session requirement binding under the set topic scenes needs to be carried out, thereby improving the accuracy and efficiency of the session requirement binding result.

The above description is only a preferred embodiment of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within its protection scope.
