Social network-based community partitioning method, system and storage medium


1. A social network-based community partitioning method, characterized by comprising the following steps:

step S1, obtaining user data from a social network, and preprocessing the user data to obtain an adjacency matrix and an attribute matrix of a node;

step S2, according to an attention mechanism, capturing the high-order topological proximity and attribute proximity of the nodes through the adjacency matrix and the attribute matrix to obtain proximity weights;

step S3, jointly encoding the adjacency matrix and the attribute matrix through a graph autoencoder and the proximity weights to obtain the low-dimensional embedded representations of the nodes;

step S4, clustering the low-dimensional embedded representations through a clustering algorithm to obtain the soft clustering distribution of the nodes, obtaining a target loss function through the soft clustering distribution, and iterating the target loss function to a minimum to obtain a training model;

and step S5, obtaining the community partitioning result of the nodes according to the clustering distribution result in the training model.

2. The social network-based community partitioning method according to claim 1, wherein the step S1 comprises:

step S11, obtaining user data from a social network through a web crawler, or receiving the user data provided by a social platform in the social network;

step S12, preprocessing the user data, the preprocessing steps being as follows:

cleaning the user data, and storing the cleaned user data;

and converting the relationships among the users in the cleaned user data to obtain an adjacency matrix characterizing the topological structure information of the nodes, and converting the characteristics of the users in the cleaned user data to obtain an attribute matrix characterizing the attribute information of the nodes.

3. The social network-based community partitioning method according to claim 2, wherein the step S2 comprises:

step S21, capturing the high-order topological proximity between nodes through the adjacency matrix according to formula (1) to obtain a topological structure weight matrix, formula (1) being $M = \frac{B + B^2 + \cdots + B^t}{t}$, where $B^t$ is the normalized representation of the adjacency matrix for the t-order neighbor nodes;

step S22, according to equation (2), capturing attribute proximity between nodes through an attribute matrix to obtain an attribute weight value, where equation (2) is:

$p_{ij} = w^T [x_i \,\|\, x_j]$,

where $x_i$ is the attribute value of node i, $x_j$ is the attribute value of neighbor node j, $w$ is the parameter matrix, $\|$ is the vector concatenation operation, and T is the matrix transposition;

step S23, combining and normalizing the attribute weight values and the topological weight values in the topological structure weight matrix according to formula (3) to obtain the proximity weights, formula (3) being:

$\alpha_{ij} = \dfrac{\exp\left(\mathrm{LeakyReLU}(m_{ij}\, p_{ij})\right)}{\sum_{r \in N_i} \exp\left(\mathrm{LeakyReLU}(m_{ir}\, p_{ir})\right)}$,

where $m_{ij}$ and $m_{ir}$ are topological weight values in the topological structure weight matrix $M$, $p_{ij}$ and $p_{ir}$ are attribute weight values, $N_i$ is the set of neighbor nodes of node i in $M$, and LeakyReLU is the activation function.

4. The social network-based community partitioning method according to claim 3, wherein the step S3 comprises:

step S31, encoding the adjacency matrix and the attribute matrix in a preset graph autoencoder according to formula (4), performing weighted aggregation through the proximity weights, and obtaining, in sequence at each intermediate layer and at the output of the graph autoencoder, low-dimensional embedded representations that incorporate the topological structure information and the attribute information; formula (4) being:

$z_i^{(l)} = \sigma\Big(\sum_{j \in N_i} \alpha_{ij}\, W^{(l)} z_j^{(l-1)}\Big), \quad l = 1, \dots, k, \qquad h_i = z_i^{(k)}$,

where $\sigma$ is a nonlinear activation function, $\alpha_{ij}$ is the proximity weight value, $W^{(1)}, W^{(2)}, \dots, W^{(k)}$ are the parameter matrices of the intermediate layers and the output of the graph autoencoder, $z_j^{(0)} = x_j$ is the initial input data of the graph autoencoder, i.e. the joint representation of the adjacency matrix and attribute matrix of neighbor node j, $z_i^{(l)}$ is the low-dimensional embedded representation of node i learned in sequence at each intermediate layer of the graph autoencoder, $z_j^{(l-1)}$ is the low-dimensional embedded representation learned at the preceding layer for each neighbor node j of node i, and $h_i$ is the output low-dimensional embedded representation learned by the graph autoencoder;

step S32, decoding the output low-dimensional embedded representations through a decoder according to formula (5) to obtain a reconstructed adjacency matrix $\hat{A}$ representing the probability that an edge exists between nodes; formula (5) being:

$\hat{A}_{ij} = \mathrm{sigmoid}(h_i^T h_j)$,

where T denotes transposition, $h_i$ is the output low-dimensional embedded representation of node i, and $h_j$ is the output low-dimensional embedded representation of node j;

step S33, pre-training the graph autoencoder through the reconstructed adjacency matrix and the adjacency matrix according to reconstruction loss function formula (6), and, as the value of formula (6) is iterated to a minimum, iteratively updating the low-dimensional embedded representations of each intermediate layer and of the output; the reconstruction loss function formula (6) being:

$L_r = -\sum_{i,j}\big[A_{ij}\log \hat{A}_{ij} + (1 - A_{ij})\log(1 - \hat{A}_{ij})\big]$,

where $A_{ij}$ denotes any element of the adjacency matrix, taking the value 0 or 1, and $\hat{A}_{ij}$ denotes the corresponding element of the reconstructed adjacency matrix, taking a value between 0 and 1.

5. The social network-based community partitioning method according to claim 4, wherein the step S4 comprises:

step S41, clustering the output low-dimensional embedded representations using a k-means clustering algorithm to obtain initial clustering centers, wherein each initial clustering center corresponds to one community;

step S42, measuring the similarity between the output low-dimensional embedded representations and the initial clustering centers according to cosine similarity function formula (7) to obtain the soft clustering distribution of each node, the cosine similarity function formula (7) being:

$q_{iu} = \dfrac{h_i \cdot \mu_u}{\|h_i\| \, \|\mu_u\|}$,

where $h_i$ is the output low-dimensional embedded representation of node i and $\mu_u$ is the u-th initial clustering center;

step S43, defining clustering loss function formula (8) according to the soft clustering distribution, formula (8) being $L_C = \sum_{ij} \log(1/q_{iu})$, where i and j are nodes and $q_{iu}$ is the soft clustering distribution;

step S44, jointly learning the reconstruction loss function and the clustering loss function according to formula (9) to obtain the final target loss function, formula (9) being $L_{total} = L_r + \gamma L_C$, where $L_r$ is the reconstruction loss function, $L_C$ is the clustering loss function, and $\gamma$ is a hyperparameter balancing the effects of the two loss functions;

and step S45, iterating the target loss function to a minimum, and iteratively updating the low-dimensional embedded representations of each intermediate layer and of the output to obtain a training model.

6. The social network-based community partitioning method according to claim 5, wherein the step S5 comprises:

and determining final clustering distribution of each node according to the training model, and obtaining final clustering centers according to the final clustering distribution, wherein one final clustering center corresponds to one community, and each node is divided into corresponding communities according to the final clustering centers.

7. A social network-based community partitioning system, characterized by comprising a preprocessing module, a training module and a partitioning module;

the preprocessing module is used for obtaining user data from a social network and preprocessing the user data to obtain an adjacency matrix and an attribute matrix of the nodes;

the training module is used for: capturing, according to an attention mechanism, the high-order topological proximity and attribute proximity of the nodes through the adjacency matrix and the attribute matrix to obtain proximity weights; jointly encoding the adjacency matrix and the attribute matrix through a graph autoencoder and the proximity weights to obtain the low-dimensional embedded representations of the nodes; and clustering the low-dimensional embedded representations through a clustering algorithm to obtain the soft clustering distribution of the nodes, obtaining a target loss function through the soft clustering distribution, and iterating the target loss function to a minimum to obtain a training model;

the partitioning module is used for obtaining the community partitioning result of the nodes according to the clustering distribution result in the training model.

8. A social network-based community partitioning system, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that, when the computer program is executed by the processor, the social network-based community partitioning method according to any one of claims 1 to 6 is implemented.

9. A storage medium comprising one or more computer programs stored thereon, the one or more computer programs being executable by one or more processors to implement the social network-based community partitioning method according to any one of claims 1 to 6.

Background

Today, people increasingly prefer to connect with each other through various online social networks, in which users are both the producers and the consumers of network content. Users form close relationships (friends, relatives, co-workers, etc.), interact closely (chatting, commenting, following, forwarding, etc.), and share specific similar interests, consumption habits, and so on.

Community partitioning seeks to find and group the members of a complex network that are internally close and highly similar, so that members within the same community are tightly connected while members of different communities are loosely connected. By mining the potential relationships among members and partitioning the communities, an Internet platform can deliver news and advertisements matching users' characteristics and interests and provide personalized services. In addition, by capturing the relationships among members and monitoring their published statements, public opinion can be effectively monitored or guided and criminal community organizations can be detected, which is of great significance to the stable development of society. In short, community partitioning technology is widely applied in fields such as personalized recommendation, advertisement delivery, online public opinion monitoring, anomaly detection, and terrorist organization identification.

However, the complex relationships among members of a social network are difficult to represent and learn, so for ease of understanding and research they are generally represented in graph form, and community partitioning can essentially be regarded as a graph clustering problem; how to effectively mine the useful information in the node attributes and topological structure of the graph is a difficult problem. Most prior-art methods mine only the network topological structure information or only the attribute information of the nodes, which is clearly not accurate enough and ignores important information. In addition, they generally consider only the direct neighbors of a node, neglecting the potential relationships of high-order neighbors, which does not match reality; nor do they explicitly express and measure the correlation between different neighbor nodes and the target node, so the models lack interpretability. Finally, traditional methods find it difficult to effectively fuse high-dimensional data containing various kinds of attribute information. All of these factors contribute, to differing degrees, to inaccurate community partitioning results.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a social network-based community partitioning method, system and storage medium that effectively integrate the network topological structure and attribute information, and further optimize clustering performance by jointly and iteratively optimizing the low-dimensional embedded representations of nodes and the soft clustering distribution.

The technical scheme adopted to solve the above technical problem is as follows: a social network-based community partitioning method, comprising the following steps:

step S1, obtaining user data from a social network, and preprocessing the user data to obtain an adjacency matrix and an attribute matrix of a node;

step S2, according to an attention mechanism, capturing the high-order topological proximity and attribute proximity of the nodes through the adjacency matrix and the attribute matrix to obtain proximity weights;

step S3, jointly encoding the adjacency matrix and the attribute matrix through a graph autoencoder and the proximity weights to obtain the low-dimensional embedded representations of the nodes;

step S4, clustering the low-dimensional embedded representations through a clustering algorithm to obtain the soft clustering distribution of the nodes, obtaining a target loss function through the soft clustering distribution, and iterating the target loss function to a minimum to obtain a training model;

and step S5, obtaining the community partitioning result of the nodes according to the clustering distribution result in the training model.

The invention has the beneficial effects that: the attention mechanism is used to explicitly measure the correlation between users and reflect the differences in similarity or influence of different users on a target user; the network topological structure and attribute information are effectively integrated through the low-dimensional embedded representations, fully capturing useful information and effectively fusing high-dimensional data containing various kinds of attribute information; and the learned low-dimensional embedded representations of nodes and the soft clustering distribution are iteratively optimized together in a unified framework, which further optimizes clustering performance and makes the community partitioning more accurate.

On the basis of the technical scheme, the invention can be further improved as follows:

further, the step S1 includes:

step S11, obtaining user data from a social network through a web crawler, or receiving the user data provided by a social platform in the social network;

step S12, preprocessing the user data, the preprocessing steps being as follows:

cleaning the user data, and storing the cleaned user data;

and converting the relationships among the users in the cleaned user data to obtain an adjacency matrix characterizing the topological structure information of the nodes, and converting the characteristics of the users in the cleaned user data to obtain an attribute matrix characterizing the attribute information of the nodes.

Further, the step S2 specifically includes:

step S21, capturing the high-order topological proximity between nodes through the adjacency matrix according to formula (1) to obtain a topological structure weight matrix, formula (1) being $M = \frac{B + B^2 + \cdots + B^t}{t}$, where $B^t$ is the normalized representation of the adjacency matrix for the t-order neighbor nodes;

step S22, according to equation (2), capturing attribute proximity between nodes through an attribute matrix to obtain an attribute weight value, where equation (2) is:

$p_{ij} = w^T [x_i \,\|\, x_j]$,

where $x_i$ is the attribute value of node i, $x_j$ is the attribute value of neighbor node j, $w$ is the parameter matrix, $\|$ is the vector concatenation operation, and T is the matrix transposition;

step S23, combining and normalizing the attribute weight values and the topological weight values in the topological structure weight matrix according to formula (3) to obtain the proximity weights, formula (3) being:

$\alpha_{ij} = \dfrac{\exp\left(\mathrm{LeakyReLU}(m_{ij}\, p_{ij})\right)}{\sum_{r \in N_i} \exp\left(\mathrm{LeakyReLU}(m_{ir}\, p_{ir})\right)}$,

where $m_{ij}$ and $m_{ir}$ are topological weight values in the topological structure weight matrix $M$, $p_{ij}$ and $p_{ir}$ are attribute weight values, $N_i$ is the set of neighbor nodes of node i in $M$, and LeakyReLU is the activation function.

The beneficial effect of adopting this further scheme is that the attention mechanism explicitly measures the correlation between users, reflecting the differences in similarity or influence of different users on a target user, and the high-order topological proximity accounts for the influence of high-order neighbors. Compared with the prior art, which analyzes only the directly connected neighbor nodes of a user, this captures the potential correlations between users more deeply, making the partitioning result more accurate and realistic.

Further, step S3 specifically includes:

step S31, encoding the adjacency matrix and the attribute matrix in a preset graph autoencoder according to formula (4), performing weighted aggregation through the proximity weights, and obtaining, in sequence at each intermediate layer and at the output of the graph autoencoder, low-dimensional embedded representations that incorporate the topological structure information and the attribute information; formula (4) being:

$z_i^{(l)} = \sigma\Big(\sum_{j \in N_i} \alpha_{ij}\, W^{(l)} z_j^{(l-1)}\Big), \quad l = 1, \dots, k, \qquad h_i = z_i^{(k)}$,

where $\sigma$ is a nonlinear activation function, $\alpha_{ij}$ is the proximity weight value, $W^{(1)}, W^{(2)}, \dots, W^{(k)}$ are the parameter matrices of the intermediate layers and the output of the graph autoencoder, $z_j^{(0)} = x_j$ is the initial input data of the graph autoencoder, i.e. the joint representation of the adjacency matrix and attribute matrix of neighbor node j, $z_i^{(l)}$ is the low-dimensional embedded representation of node i learned in sequence at each intermediate layer of the graph autoencoder, $z_j^{(l-1)}$ is the low-dimensional embedded representation learned at the preceding layer for each neighbor node j of node i, and $h_i$ is the output low-dimensional embedded representation learned by the graph autoencoder;

step S32, decoding the output low-dimensional embedded representations through a decoder according to formula (5) to obtain a reconstructed adjacency matrix $\hat{A}$ representing the probability that an edge exists between nodes; formula (5) being:

$\hat{A}_{ij} = \mathrm{sigmoid}(h_i^T h_j)$,

where T denotes transposition, $h_i$ is the output low-dimensional embedded representation of node i, and $h_j$ is the output low-dimensional embedded representation of node j;

step S33, pre-training the graph autoencoder through the reconstructed adjacency matrix and the adjacency matrix according to reconstruction loss function formula (6), and, as the value of formula (6) is iterated to a minimum, iteratively updating the low-dimensional embedded representations of each intermediate layer and of the output; the reconstruction loss function formula (6) being:

$L_r = -\sum_{i,j}\big[A_{ij}\log \hat{A}_{ij} + (1 - A_{ij})\log(1 - \hat{A}_{ij})\big]$,

where $A_{ij}$ denotes any element of the adjacency matrix, taking the value 0 or 1, and $\hat{A}_{ij}$ denotes the corresponding element of the reconstructed adjacency matrix, taking a value between 0 and 1.

The beneficial effect of adopting this further scheme is that the low-dimensional embedded representations of the nodes are obtained through the graph autoencoder and the loss function, effectively integrating the network topological structure and attribute information and fully capturing useful information; the low-dimensional embedded representations also effectively fuse high-dimensional data containing various kinds of attribute information, making the community partitioning more accurate.

Further, the step S4 specifically includes:

step S41, clustering the output low-dimensional embedded representations using a k-means clustering algorithm to obtain initial clustering centers, wherein each initial clustering center corresponds to one community;

step S42, measuring the similarity between the output low-dimensional embedded representations and the initial clustering centers according to cosine similarity function formula (7) to obtain the soft clustering distribution of each node, the cosine similarity function formula (7) being:

$q_{iu} = \dfrac{h_i \cdot \mu_u}{\|h_i\| \, \|\mu_u\|}$,

where $h_i$ is the output low-dimensional embedded representation of node i and $\mu_u$ is the u-th initial clustering center;

step S43, defining clustering loss function formula (8) according to the soft clustering distribution, formula (8) being $L_C = \sum_{ij} \log(1/q_{iu})$, where i and j are nodes and $q_{iu}$ is the soft clustering distribution;

step S44, jointly learning the reconstruction loss function and the clustering loss function according to formula (9) to obtain the final target loss function, formula (9) being $L_{total} = L_r + \gamma L_C$, where $L_r$ is the reconstruction loss function, $L_C$ is the clustering loss function, and $\gamma$ is a hyperparameter balancing the effects of the two loss functions;

and step S45, iterating the target loss function to a minimum, and iteratively updating the low-dimensional embedded representations of each intermediate layer and of the output to obtain a training model.

The beneficial effect of adopting this further scheme is that the learned low-dimensional embedded representations of nodes and the soft clustering distribution are iteratively optimized together in a unified framework, which further optimizes clustering performance and makes the community partitioning more accurate.

Further, the step S5 specifically includes:

and determining final clustering distribution of each node according to the training model, and obtaining final clustering centers according to the final clustering distribution, wherein one final clustering center corresponds to one community, and each node is divided into corresponding communities according to the final clustering centers.

In order to solve the technical problem, the invention also provides a social network-based community partitioning system, which comprises a preprocessing module, a training module and a partitioning module;

the preprocessing module is used for obtaining user data from a social network and preprocessing the user data to obtain an adjacency matrix and an attribute matrix of the nodes;

the training module is used for: capturing, according to an attention mechanism, the high-order topological proximity and attribute proximity of the nodes through the adjacency matrix and the attribute matrix to obtain proximity weights; jointly encoding the adjacency matrix and the attribute matrix through a graph autoencoder and the proximity weights to obtain the low-dimensional embedded representations of the nodes; and clustering the low-dimensional embedded representations through a clustering algorithm to obtain the soft clustering distribution of the nodes, obtaining a target loss function through the soft clustering distribution, and iterating the target loss function to a minimum to obtain a training model;

the partitioning module is used for obtaining the community partitioning result of the nodes according to the clustering distribution result in the training model.

Drawings

FIG. 1 is a flowchart of a social network-based community partitioning method according to an embodiment of the present invention;

fig. 2 is a schematic structural diagram of a social network-based community partitioning system according to an embodiment of the present invention.

Detailed Description

The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.

Example one

As shown in fig. 1, a social network-based community partitioning method includes the following steps:

step S1, obtaining user data from a social network, and preprocessing the user data to obtain an adjacency matrix and an attribute matrix of a node;

step S2, according to an attention mechanism, capturing the high-order topological proximity and attribute proximity of the nodes through the adjacency matrix and the attribute matrix to obtain proximity weights;

step S3, jointly encoding the adjacency matrix and the attribute matrix through a graph autoencoder and the proximity weights to obtain the low-dimensional embedded representations of the nodes;

step S4, clustering the low-dimensional embedded representations through a clustering algorithm to obtain the soft clustering distribution of the nodes, obtaining a target loss function through the soft clustering distribution, and iterating the target loss function to a minimum to obtain a training model;

and step S5, obtaining the community partitioning result of the nodes according to the clustering distribution result in the training model.

In this embodiment, the attention mechanism is used to explicitly measure the correlation between users and reflect the differences in similarity or influence of different users on a target user; the network topological structure and attribute information are effectively integrated through the low-dimensional embedded representations, fully capturing useful information and effectively fusing high-dimensional data containing various kinds of attribute information; and the learned low-dimensional embedded representations of nodes and the soft clustering distribution are iteratively optimized together in a unified framework, which further optimizes clustering performance and makes the community partitioning more accurate.

Preferably, as an embodiment of the present invention, the step S1 specifically includes:

step S11, obtaining user data from a social network through a web crawler, or receiving the user data provided by a social platform in the social network;

step S12, preprocessing the user data, the preprocessing steps being as follows:

cleaning the user data, and storing the cleaned user data;

and converting the relationships among the users in the cleaned user data to obtain an adjacency matrix characterizing the topological structure information of the nodes, and converting the characteristics of the users in the cleaned user data to obtain an attribute matrix characterizing the attribute information of the nodes.

It should be noted that each user in the social network is abstracted as a node, the friend relationships between users are abstracted as edges, and a user's personal characteristics (gender, age, education, interests, etc.) and interaction behavior (forwarded or commented content, interaction frequency, follower and following counts, etc.) are abstracted as attribute labels.

The cleaned user data is stored in txt text form, and the numpy data computation tool is used to perform the conversion processing on the user data, obtaining the adjacency matrix and the attribute matrix.

The adjacency matrix can be expressed as $A \in \{0,1\}^{n \times n}$, where $A_{ij} = 1$ means that the i-th node and the j-th node are adjacent, and $A_{ij} = 0$ means that there is no adjacency between the i-th node and the j-th node; each row of the adjacency matrix represents the link relationships between one node and all other nodes and can be regarded as a representation of the corresponding node.

The attribute matrix can be expressed as $X \in \mathbb{R}^{n \times d}$, where row i is the attribute vector $x_i$ of node i and d is the attribute dimension.
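As an illustrative, non-limiting example, the preprocessing of this step could be sketched with numpy as follows; the file names and formats (an edge list of "i j" index pairs and one numeric attribute row per node) are assumptions of the sketch, not part of the method itself.

```python
import numpy as np

def build_matrices(edge_file, feature_file, n_nodes):
    """Build adjacency matrix A and attribute matrix X from cleaned user data.
    Assumes edge_file holds one 'i j' node-index pair per line and
    feature_file holds one whitespace-separated attribute row per node."""
    A = np.zeros((n_nodes, n_nodes), dtype=np.float32)
    with open(edge_file) as f:
        for line in f:
            i, j = map(int, line.split())
            A[i, j] = A[j, i] = 1.0  # friendship edges treated as undirected
    X = np.loadtxt(feature_file, dtype=np.float32)  # shape (n_nodes, d)
    return A, X
```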

Preferably, as an embodiment of the present invention, the step S2 specifically includes:

step S21, capturing the high-order topological proximity between nodes through the adjacency matrix according to formula (1) to obtain a topological structure weight matrix, formula (1) being $M = \frac{B + B^2 + \cdots + B^t}{t}$, where $B^t$ is the normalized representation of the adjacency matrix for the t-order neighbor nodes;

step S22, according to equation (2), capturing attribute proximity between nodes through an attribute matrix to obtain an attribute weight value, where equation (2) is:

$p_{ij} = w^T [x_i \,\|\, x_j]$,

where $x_i$ is the attribute value of node i, $x_j$ is the attribute value of neighbor node j, $w$ is the parameter matrix, $\|$ is the vector concatenation operation, and T is the matrix transposition;

step S23, combining and normalizing the attribute weight values and the topological weight values in the topological structure weight matrix according to formula (3) to obtain the proximity weights, formula (3) being:

$\alpha_{ij} = \dfrac{\exp\left(\mathrm{LeakyReLU}(m_{ij}\, p_{ij})\right)}{\sum_{r \in N_i} \exp\left(\mathrm{LeakyReLU}(m_{ir}\, p_{ir})\right)}$,

where $m_{ij}$ and $m_{ir}$ are topological weight values in the topological structure weight matrix $M$, $p_{ij}$ and $p_{ir}$ are attribute weight values, $N_i$ is the set of neighbor nodes of node i in $M$, and LeakyReLU is the activation function.

It should be noted that the target node refers to a user in the network, and a neighbor node refers to another user having an interaction relationship or similar attributes with that user; here node i is the target node and node j is a neighbor node of node i, and when t > 1, node j is a high-order neighbor of node i. $m_{ij}$, the topological structure weight value obtained by capturing the high-order topological proximity between node j and node i, represents the topological correlation of node j and node i within order t; if $m_{ij} > 0$, node j can be considered a similar neighbor of the target node i, and the set of neighbor nodes of node i in M is denoted $N_i$.

It should be noted that, by combining the learned attribute weight values and topological weight values, a proximity weight value reflecting the similarity relationship between nodes can in principle be obtained directly; however, to make the coefficients of different nodes easier to compare, they are normalized over all the neighbor nodes belonging to $N_i$: the more similar a neighbor node is to the target node, the larger the resulting proximity weight value, and the less similar, the smaller.
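As a rough illustration of formulas (1) to (3), the following numpy sketch computes the t-order topological weight matrix M, the attribute proximities p_ij, and the normalized proximity weights α_ij. The exact way m_ij and p_ij are combined inside the softmax is one reading of formula (3), and the parameter vector w, which in practice would be learned jointly with the model, is treated as given here.

```python
import numpy as np

def proximity_weights(A, X, w, t=2, slope=0.2):
    """Sketch of formulas (1)-(3): t-order topological proximity combined
    with attribute proximity, normalized over each neighborhood N_i."""
    # Formula (1): B is the row-normalized adjacency (transition) matrix;
    # M averages B, B^2, ..., B^t to capture high-order proximity.
    B = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    M = sum(np.linalg.matrix_power(B, k) for k in range(1, t + 1)) / t
    # Formula (2): p_ij = w^T [x_i || x_j] for every node pair.
    n = A.shape[0]
    P = np.array([[w @ np.concatenate([X[i], X[j]]) for j in range(n)]
                  for i in range(n)])
    # Formula (3): LeakyReLU, then softmax restricted to neighbors in M.
    E = M * P
    E = np.where(E > 0, E, slope * E)            # LeakyReLU activation
    E = np.where(M > 0, np.exp(E), 0.0)          # keep only r in N_i
    return E / np.maximum(E.sum(axis=1, keepdims=True), 1e-12)

# Illustrative usage with random data (n=6 users, d=4 attributes):
# rng = np.random.default_rng(0)
# A = (rng.random((6, 6)) < 0.4).astype(np.float32); A = np.maximum(A, A.T)
# alpha = proximity_weights(A, rng.random((6, 4)), rng.random(8))
```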

In this embodiment, the beneficial effect of the above scheme is that the attention mechanism explicitly measures the correlation between users, reflecting the differences in similarity or influence of different users on a target user, and the high-order topological proximity accounts for the influence of high-order neighbors. Compared with the prior art, which analyzes only the directly connected neighbor nodes of a user, this captures the potential correlations between users more deeply, making the partitioning result more accurate and realistic.

Preferably, as an embodiment of the present invention, the step S3 specifically includes:

step S31, encoding the adjacency matrix and the attribute matrix in a preset graph autoencoder according to formula (4), performing weighted aggregation through the proximity weights, and obtaining, in sequence at each intermediate layer and at the output of the graph autoencoder, low-dimensional embedded representations that incorporate the topological structure information and the attribute information; formula (4) being:

$z_i^{(l)} = \sigma\Big(\sum_{j \in N_i} \alpha_{ij}\, W^{(l)} z_j^{(l-1)}\Big), \quad l = 1, \dots, k, \qquad h_i = z_i^{(k)}$,

where $\sigma$ is a nonlinear activation function, $\alpha_{ij}$ is the proximity weight value, $W^{(1)}, W^{(2)}, \dots, W^{(k)}$ are the parameter matrices of the intermediate layers and the output of the graph autoencoder, $z_j^{(0)} = x_j$ is the initial input data of the graph autoencoder, i.e. the joint representation of the adjacency matrix and attribute matrix of neighbor node j, $z_i^{(l)}$ is the low-dimensional embedded representation of node i learned in sequence at each intermediate layer of the graph autoencoder, $z_j^{(l-1)}$ is the low-dimensional embedded representation learned at the preceding layer for each neighbor node j of node i, and $h_i$ is the output low-dimensional embedded representation learned by the graph autoencoder;

step S32, decoding the output low-dimensional embedded representations through a decoder according to formula (5) to obtain a reconstructed adjacency matrix $\hat{A}$ representing the probability that an edge exists between nodes; formula (5) being:

$\hat{A}_{ij} = \mathrm{sigmoid}(h_i^T h_j)$,

where T denotes transposition, $h_i$ is the output low-dimensional embedded representation of node i, and $h_j$ is the output low-dimensional embedded representation of node j;

step S33, pre-training the graph autoencoder through the reconstructed adjacency matrix and the adjacency matrix according to reconstruction loss function formula (6), and, as the value of formula (6) is iterated to a minimum, iteratively updating the low-dimensional embedded representations of each intermediate layer and of the output; the reconstruction loss function formula (6) being:

$L_r = -\sum_{i,j}\big[A_{ij}\log \hat{A}_{ij} + (1 - A_{ij})\log(1 - \hat{A}_{ij})\big]$,

where $A_{ij}$ denotes any element of the adjacency matrix, taking the value 0 or 1, and $\hat{A}_{ij}$ denotes the corresponding element of the reconstructed adjacency matrix, taking a value between 0 and 1.

It should be noted that the adjacency matrix $A$ occupies $|n| \times |n|$ storage space, where $|n|$ represents the number of users in the acquired social network data; when $|n|$ grows to the millions, $A$ becomes high-dimensional data, which affects the efficiency of processing the data. In practice, most users have no social relationship with each other, so most $A_{ij}$ in the adjacency matrix are 0 and the data is very sparse, resulting in high-dimensional sparse data that is difficult to learn from and apply. Embedded learning of the nodes means learning a low-dimensional dense embedded representation of the nodes in the network, whose dimension is far smaller than n, i.e. the low-dimensional embedded representation. Intuitively, nodes with similar topologies in a network should also have similar embedded representations. The graph autoencoder can effectively learn this embedded representation, effectively capturing the topology and attribute information of the graph.
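The encoder and decoder computation of formulas (4) to (6) might be sketched as follows; the choice of tanh for the nonlinearity σ and of binary cross-entropy for the reconstruction loss are illustrative assumptions consistent with the value ranges described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(X, alpha, weights):
    """Formula (4): each layer aggregates neighbor representations weighted
    by the proximity weights alpha; 'weights' holds W^(1)..W^(k).
    Using sigma = tanh here is an illustrative choice."""
    Z = X                           # z^(0): initial input, one row per node
    for W in weights:
        Z = np.tanh(alpha @ Z @ W)  # z_i^(l) = sigma(sum_j alpha_ij W z_j^(l-1))
    return Z                        # rows are the output embeddings h_i

def decode(H):
    """Formula (5): inner-product decoder, A_hat[i, j] = sigmoid(h_i^T h_j)."""
    return sigmoid(H @ H.T)

def reconstruction_loss(A, A_hat, eps=1e-12):
    """Formula (6), read as binary cross-entropy between A and A_hat."""
    return -np.mean(A * np.log(A_hat + eps)
                    + (1 - A) * np.log(1 - A_hat + eps))
```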

In this embodiment, the low-dimensional embedded representations of the nodes are obtained through the graph autoencoder and the loss function, effectively integrating the network topological structure and attribute information and fully capturing useful information; the low-dimensional embedded representations also effectively fuse high-dimensional data containing various kinds of attribute information, making the community partitioning more accurate.

Preferably, as an embodiment of the present invention, the step S4 specifically includes:

step S41, clustering the output low-dimensional embedded representations using a k-means clustering algorithm to obtain initial clustering centers, wherein each initial clustering center corresponds to one community;

step S42, measuring the similarity between the output low-dimensional embedded representations and the initial clustering centers according to cosine similarity function formula (7) to obtain the soft clustering distribution of each node, the cosine similarity function formula (7) being:

$q_{iu} = \dfrac{h_i \cdot \mu_u}{\|h_i\| \, \|\mu_u\|}$,

where $h_i$ is the output low-dimensional embedded representation of node i and $\mu_u$ is the u-th initial clustering center;

step S43, defining clustering loss function formula (8) according to the soft clustering distribution, formula (8) being $L_C = \sum_{ij} \log(1/q_{iu})$, where i and j are nodes and $q_{iu}$ is the soft clustering distribution;

step S44, jointly learning the reconstruction loss function and the clustering loss function according to formula (9) to obtain the final target loss function, formula (9) being $L_{total} = L_r + \gamma L_C$, where $L_r$ is the reconstruction loss function, $L_C$ is the clustering loss function, and $\gamma$ is a hyperparameter balancing the effects of the two loss functions;

and step S45, iterating the target loss function to a minimum, and iteratively updating the low-dimensional embedded representations of each intermediate layer and of the output to obtain a training model.

Each initial clustering center is assigned a unique label, i.e. the label represents the community to which a user belongs; the network can be divided into c communities, and the initial clustering centers can be represented as $\mu_u$, $u \in \{1, \dots, c\}$.

The cosine similarity function uses the cosine of the angle between two vectors as a measure of the difference between two individuals: the closer the angle is to 0 degrees, the closer the cosine value is to 1, and the more similar the two vectors are.

It should be noted that, in the soft clustering assignment, a clustering center is assigned to each node in advance by the k-means clustering algorithm; during the learning process of the model, the assignment of each node may change, so it is only a temporary division, not the final result. $q_{iu}$ can be viewed as the soft cluster assignment of each node, and the soft cluster assignments of all users can be expressed as $Q = [q_{iu}] \in \mathbb{R}^{n \times c}$. The advantage of this approach is that clustering information is introduced to achieve a clustering-oriented low-dimensional embedded representation of the nodes, forcing each node closer to its corresponding clustering center, so that within-class distances are minimized and between-class distances are maximized.

The target loss function $L_{total}$ is differentiated using a stochastic gradient descent algorithm, and the target loss function is iteratively optimized to a minimum.
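A compact sketch of steps S41 to S45 follows, reusing reconstruction_loss from the autoencoder sketch above. KMeans from scikit-learn supplies the initial clustering centers of step S41; taking the clustering loss of formula (8) over each node's current best cluster, and the value gamma = 0.1, are assumptions of the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans  # used for step S41 in the usage note below

def soft_assignment(H, centers):
    """Formula (7): cosine similarity between embeddings h_i and centers mu_u."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    Cn = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return np.clip(Hn @ Cn.T, 1e-12, None)   # q_iu, kept positive for the log

def total_loss(A, A_hat, Q, gamma=0.1):
    """Formula (9): L_total = L_r + gamma * L_C."""
    L_r = reconstruction_loss(A, A_hat)        # from the sketch above
    L_c = np.sum(np.log(1.0 / Q.max(axis=1)))  # formula (8), one term per node
    return L_r + gamma * L_c

# Step S41: initial centers from k-means on the output embeddings H, e.g.
#   centers = KMeans(n_clusters=c, n_init=10).fit(H).cluster_centers_
# Steps S42-S45 would then alternate: Q = soft_assignment(H, centers), followed
# by gradient updates of the encoder parameters to minimize total_loss.
```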

In this embodiment, the learned low-dimensional embedded representations of nodes and the soft clustering distribution are iteratively optimized together in a unified framework, which further optimizes clustering performance and makes the community partitioning more accurate.

Preferably, as an embodiment of the present invention, the step S5 specifically includes:

and determining final clustering distribution of each node according to the training model, and obtaining final clustering centers according to the final clustering distribution, wherein one final clustering center corresponds to one community, and each node is divided into corresponding communities according to the final clustering centers.

When the target loss function converges to a minimum, the final cluster assignment of each user is stored in $Q$; in other words, $Q$ contains the probability of each user belonging to each community, the maximum probability value $\max_u q_{iu}$ is taken, and the label of the corresponding final clustering center is the community to which the user belongs. Members divided into the same community are highly similar in some respects, while the connections between members of different communities are loose.
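As a final illustration, once Q has converged, assigning each user to the community of its highest-probability cluster center can be sketched as:

```python
import numpy as np

def assign_communities(Q):
    """Map each user to the community whose final cluster center has the
    highest membership probability in Q (shape: n_users x c)."""
    labels = np.argmax(Q, axis=1)  # final cluster label per user
    return {u: np.where(labels == u)[0].tolist() for u in np.unique(labels)}
```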

Example two

The embodiment provides a social network-based community partitioning system, as shown in fig. 2, including a preprocessing module, a training module, and a partitioning module;

the preprocessing module is used for obtaining user data from a social network and preprocessing the user data to obtain an adjacency matrix and an attribute matrix of the nodes;

the training module is used for: capturing, according to an attention mechanism, the high-order topological proximity and attribute proximity of the nodes through the adjacency matrix and the attribute matrix to obtain proximity weights; jointly encoding the adjacency matrix and the attribute matrix through a graph autoencoder and the proximity weights to obtain the low-dimensional embedded representations of the nodes; and clustering the low-dimensional embedded representations through a clustering algorithm to obtain the soft clustering distribution of the nodes, obtaining a target loss function through the soft clustering distribution, and iterating the target loss function to a minimum to obtain a training model;

the partitioning module is used for obtaining the community partitioning result of the nodes according to the clustering distribution result in the training model.

The embodiment also provides a social network-based community partitioning system, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the social network-based community partitioning method are implemented, which are not described in detail herein.

The present embodiment also provides a storage medium storing one or more computer programs, which may be executed by one or more processors to implement the steps of the social network-based community partitioning method of the embodiments described above; these are not described here again.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The technical solutions provided by the embodiments of the present invention are described in detail above, and the principles and embodiments of the present invention are explained in this patent by applying specific examples, and the descriptions of the embodiments above are only used to help understanding the principles of the embodiments of the present invention; the present invention is not limited to the above preferred embodiments, and any modifications, equivalent replacements, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.
