Although describing data with graphs has been practiced since the century before last, their use in everyday data-analysis problems is only now gaining momentum. And while most of the attention, as usual, goes to graph embeddings and convolutional networks, anomaly-detection and antifraud algorithms advance in small steps. The main review article, which most experts cite in their talks and publications, is Graph based anomaly detection and description: a survey by Leman Akoglu, Hanghang Tong, and Danai Koutra (Akoglu, 2015). We at CleverDATA decided to tell Habr about this nearly unique material on the topic and offer you a summary of it.

The first Count of the Russian Empire, Boris Petrovich Sheremetev. No anomalies found.

In the introduction, the authors answer the question of what the advantages of searching for anomalies using graph theory are:

  • most of the data we encounter is interconnected, so we need approaches that can take this into account;
  • graphs are extremely convenient and understandable to humans;
  • anomalies are usually related to each other, especially in fraud;
  • fraudsters find it harder to adapt their behavior to such methods, since they lack a view of the graph as a whole and, accordingly, an understanding of how to adjust their strategy to the possible actions of antifraud systems.

I would like to add on my own behalf that, although for some types of data the transition to a graph representation requires carefully chosen conversion principles and certain tricks, for many types of data (social networks, banking transactions, computer networks) this representation is natural and calls for methods suited to working with it.

Thus, the prospect of using graph theory to search for anomalies is justified. Let's move on to the description of the methods themselves.

In the article, graph anomaly-detection methods are divided according to whether they are applied to static or dynamic graphs. The authors split static graphs into two groups: plain graphs and graphs whose nodes and edges carry properties. Within each group, the approaches are divided into structural ones and those based on community detection.


Search for anomalies in graphs without properties assigned to vertices and/or edges


For plain graphs, three directions of structural approaches are distinguished: anomaly detection based on features of nodes, node dyads, triads, and egonets; detection based on communities; and detection based on proximity metrics such as PageRank, personalized PageRank, and SimRank. To actually flag the anomalies, it is proposed to use conventional algorithms (for example, Isolation Forest, or standard classifiers if labels are available), only applied to graph-derived features.
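To make the first idea concrete, here is a minimal sketch (mine, not from the survey): compute a few standard structural features per node with networkx and score them with scikit-learn's Isolation Forest. The synthetic graph and the particular feature set are assumptions for illustration.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import IsolationForest

G = nx.barabasi_albert_graph(n=200, m=3, seed=42)  # stand-in for real data

# Per-node structural features: degree, clustering coefficient, PageRank.
degree = dict(G.degree())
clustering = nx.clustering(G)
pagerank = nx.pagerank(G)

nodes = list(G.nodes())
X = np.array([[degree[v], clustering[v], pagerank[v]] for v in nodes])

# Unsupervised anomaly scoring over the graph-derived feature matrix.
iso = IsolationForest(contamination=0.05, random_state=0)
labels = iso.fit_predict(X)  # -1 marks suspected anomalies
print([v for v, lbl in zip(nodes, labels) if lbl == -1])
```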

An example of an egonet

Separately, an approach based on egonet features is described; an egonet is a subgraph consisting of a target node and its nearest neighbors. Referring to a 2010 article (Akoglu, 2010) (there will be many such links in this article; I have added hyperlinks to some directly in the text), the authors suggest looking for patterns of egonets with a pronounced dependence between their characteristics; egonets that do not fit these patterns, and hence their central nodes, are considered anomalous. Since many different egonet-based indicators can be devised, the authors in the same article justify their choice of the subset of indicators that best reflect the properties of the graph (for example, the number of triangles or the total weight of the edges). It is proposed to measure the distance between patterns, and between an egonet and a pattern, as the deviation of the egonet's characteristics from the characteristic dependence corresponding to the pattern.
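The flavor of this can be sketched as follows: fit a power-law dependence between egonet edge and node counts, then score each node by its deviation from the fitted curve. This is only a rough illustration of the principle; the exact features and scores in (Akoglu, 2010) differ.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()  # stand-in graph

feats = []
for v in G.nodes():
    ego = nx.ego_graph(G, v, radius=1)  # the node plus its nearest neighbors
    feats.append((v, ego.number_of_nodes(), ego.number_of_edges()))

logN = np.log([n for _, n, _ in feats])
logE = np.log([e for _, _, e in feats])

# Least-squares fit of log E = theta * log N + log C (the "pattern").
theta, logC = np.polyfit(logN, logE, deg=1)

# Deviation from the fitted dependence serves as the anomaly indicator.
scores = {v: abs(le - (theta * ln + logC))
          for (v, _, _), ln, le in zip(feats, logN, logE)}
print(sorted(scores, key=scores.get, reverse=True)[:5])  # top suspects
```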

Another branch of graph-based anomaly detection relies on discovering closely related groups, or communities. Nodes or edges that do not belong to any community, or that belong to several at once, are considered anomalous.

The authors also describe other approaches:

  • clustering of nodes based on the similarity of their immediate neighborhood; a reordering of the adjacency matrix is proposed that produces denser and sparser blocks (Chakrabarti, 2004; Rissanen, 1999);
  • matrix factorization; an analogue of non-negative matrix factorization (NMF) is proposed (Tong and Lin, 2011).

Search for anomalies in graphs whose nodes and/or edges have properties


The main idea of such approaches is to solve two problems in sequence: find subgraphs that are anomalous in terms of structure, and then search within this subset for subgraphs that are anomalous in terms of the properties of their nodes and/or edges. The authors treat this as a search for rare elements in a set, reducing it to the inverse of the task of finding the elements that occur most often in the graph and are therefore most characteristic of it (those that "compress" the graph best).

The problem of nodes carrying many characteristics of different modalities is also considered. It is proposed to map all conditionally normal values to a single value of a categorical variable, and to map outliers to the value of a single anomaly indicator, obtained, for example, with metric anomaly-detection methods such as kNN or LOF.
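A minimal sketch of such preprocessing, assuming synthetic numeric attributes and scikit-learn's Local Outlier Factor as the detector:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
attrs = rng.normal(size=(500, 3))  # stand-in numeric node attributes
attrs[:10] += 8                    # a few injected outliers

lof = LocalOutlierFactor(n_neighbors=20)
flags = lof.fit_predict(attrs)     # -1 for outliers, 1 for inliers

# One categorical variable replaces the raw attributes downstream.
categorical = np.where(flags == -1, "anomalous", "normal")
print((categorical == "anomalous").sum(), "nodes flagged")
```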

Other discretization techniques are also listed: SAX (Symbolic Aggregate approXimation) (Lin et al., 2003), MDL binning (minimum description length) (Kontkanen and Myllymäki, 2007), and minimum-entropy binning (Fayyad and Irani, 1993). The authors of another article (Eberle and Holder, 2007) take a different approach to defining anomalies in graph data, considering anomalous those subgraphs that are similar, within certain limits, to a conditionally normal graph. They justify this approach by arguing that the most successful fraudsters will try to imitate normal activity as closely as possible. They also propose taking into account the cost of modifying an indicator and formulating anomaly indicators with this cost in mind (the lower the cost, the more anomalous the indicator).

Anomaly detection for graphs with attributed nodes is also considered in the community-based paradigm. It is proposed to partition the graph into communities and then, within each community, search for anomalies by attribute. For example, a smoker on a baseball team: a smoker is not an anomaly for society as a whole, but within this community he is. Another approach (Müller, 2013) is based on the user (analyst) selecting a set of nodes, for which a subspace of indicators in which they are similar is then determined. Anomalies in this approach are nodes that structurally belong to the cluster of these nodes but are far from them in the selected subspace of indicators.
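A sketch of the "smoker on a baseball team" idea, assuming a toy graph, an invented numeric attribute, and a simple z-score as the within-community outlier test:

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
attr = {v: np.random.default_rng(v).normal() for v in G.nodes()}  # toy attribute

for community in greedy_modularity_communities(G):
    values = np.array([attr[v] for v in community])
    mu, sigma = values.mean(), values.std() + 1e-9
    for v in community:
        z = abs(attr[v] - mu) / sigma
        if z > 2.5:  # an outlier relative to its own community only
            print(f"node {v}: z-score {z:.2f} within its community")
```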

Semi-supervised methods are considered separately, under the assumption that some of the nodes are labeled as normal or anomalous, and the remaining nodes can be classified using appropriate methods; in the simplest case, they can simply be assigned the labels of neighboring nodes. The main approaches are listed: the iterative classification algorithm, Gibbs sampling (more about these approaches is written here), loopy belief propagation, and the weighted-vote relational network classifier.
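In its simplest form, the neighbor-label idea can be sketched like this (a crude relative of the weighted-vote relational classifier; real implementations work with weights and probabilities, and the seed labels here are invented):

```python
import networkx as nx
from collections import Counter

G = nx.karate_club_graph()
labels = {0: "normal", 33: "anomalous"}  # a few seed labels

changed = True
while changed:
    changed = False
    for v in G.nodes():
        if v in labels:
            continue
        near = [labels[u] for u in G.neighbors(v) if u in labels]
        if near:  # adopt the majority label of already-labeled neighbors
            labels[v] = Counter(near).most_common(1)[0][0]
            changed = True
print(labels)
```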

Search for anomalies in a dynamic graph


For a dynamic graph, that is, a time-ordered sequence of static graphs, the basic approach is as follows:

  1. some compressed or integral characteristic is extracted from each static graph;
  2. the distance between consecutive graphs is calculated;
  3. graphs for which the distance is above a threshold are considered anomalous.

As distance measures, the following are proposed (a sketch using the simplest of them follows the list):

  • maximum common subgraph (MCS) distance;
  • error-correcting graph matching distance, that is, a distance that measures how many editing steps are needed to turn one graph into another;
  • graph edit distance (GED), the same as the previous one, but allowing only topological changes;
  • distances between adjacency matrices (for example, Hamming);
  • various distances based on edge weights;
  • distances between spectral representations of graphs (distributions of eigenvectors);
  • a more exotic measure is also described: the Euclidean distance between the Perron eigenvectors of the graphs.
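A minimal sketch of the three-step procedure with the Hamming distance between adjacency matrices; the snapshots and the thresholding rule are illustrative assumptions:

```python
import networkx as nx
import numpy as np

snapshots = [nx.gnp_random_graph(30, 0.1, seed=s) for s in range(10)]
snapshots[6] = nx.gnp_random_graph(30, 0.5, seed=99)  # injected anomaly

def hamming(g1, g2):
    # number of edges present in one snapshot but not the other
    return np.abs(nx.to_numpy_array(g1) - nx.to_numpy_array(g2)).sum() / 2

dists = [hamming(snapshots[i], snapshots[i + 1])
         for i in range(len(snapshots) - 1)]
threshold = np.mean(dists) + 2 * np.std(dists)
print([i + 1 for i, d in enumerate(dists) if d > threshold])
```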

In an article by Bunke et al. (2006), the authors propose considering the distance not only between successive graphs but between all graphs in the sequence, and then applying multidimensional scaling to map the graphs into two-dimensional space. Outliers are then sought in this two-dimensional space.
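A sketch of this idea, assuming the same Hamming distance as above and scikit-learn's MDS for the embedding:

```python
import networkx as nx
import numpy as np
from sklearn.manifold import MDS

snapshots = [nx.gnp_random_graph(30, 0.1, seed=s) for s in range(10)]

def hamming(g1, g2):
    return np.abs(nx.to_numpy_array(g1) - nx.to_numpy_array(g2)).sum() / 2

n = len(snapshots)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = hamming(snapshots[i], snapshots[j])

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords)  # any 2D outlier detector can now be run on these points
```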

Another way of working with dynamic graphs is also described (Box and Jenkins, 1990): a number (a computed indicator) is assigned to each graph, and then standard methods for detecting anomalies in time series are applied, for example, discrepancies with an ARIMA model.
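For instance, assigning each snapshot its edge count and flagging large ARIMA residuals might look like this (the series and the model order are invented):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

edge_counts = np.array([100, 103, 98, 105, 101, 240, 99, 102, 97, 104], float)

model = ARIMA(edge_counts, order=(1, 0, 0)).fit()
residuals = model.resid
threshold = 3 * residuals.std()
print(np.where(np.abs(residuals) > threshold)[0])  # anomalous time steps
```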

In an article by Akoglu and Faloutsos (2010), the authors perform the following sequence of operations (a rough sketch follows the list):

  1. for each node of the graph, F features are extracted at each moment in time;
  2. for each feature, correlation matrices between nodes are computed over a time window W;
  3. eigenvectors are extracted, and then only the first eigenvector is considered;
  4. at the same time, the "typical" behavior of the correlation-matrix eigenvectors is identified (for this, one more SVD decomposition is performed on the matrix of changes of all these eigenvectors over time);
  5. it is compared (via the cosine product) with the actual behavior of this vector, thus yielding an anomaly indicator for the time window under consideration.
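A hedged sketch of this pipeline, with invented shapes, a single feature, and deliberately simplified details:

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(size=(20, 200))  # one feature for 20 nodes over 200 steps
W = 20                               # time-window length

eigvecs = []
for start in range(0, series.shape[1] - W + 1, W):
    corr = np.corrcoef(series[:, start:start + W])  # node-by-node correlations
    vals, vecs = np.linalg.eigh(corr)
    eigvecs.append(vecs[:, -1])                     # principal eigenvector

scores = []
for t in range(1, len(eigvecs)):
    past = np.array(eigvecs[:t]).T                  # nodes x past windows
    u, _, _ = np.linalg.svd(past, full_matrices=False)
    typical = u[:, 0]                               # "typical" behavior
    cos = abs(typical @ eigvecs[t])                 # both are unit vectors
    scores.append(1 - cos)                          # high = anomalous window
print(scores)
```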

Matrix decomposition is also used in Rossi (2013); a sketch follows the list:

  1. similarly to the previous approach, F features per node are extracted for each time interval;
  2. for each time interval, an NMF decomposition is performed, which assigns a role to each node;
  3. the role changes of each node are then monitored.
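A sketch of this role-tracking scheme, assuming invented node features and scikit-learn's NMF; the role definition in the paper is more elaborate:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
intervals = [np.abs(rng.normal(size=(50, 6))) for _ in range(5)]  # node features

roles_over_time = []
for X in intervals:
    W = NMF(n_components=3, init="nndsvda", random_state=0).fit_transform(X)
    roles_over_time.append(W.argmax(axis=1))  # dominant role per node

roles = np.array(roles_over_time)             # intervals x nodes
print("nodes that changed role:",
      np.where((roles[1:] != roles[:-1]).any(axis=0))[0])
```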

Matrix decomposition for interpreting results


I would also like to highlight the matrix-approximation methods the authors present as alternatives to the well-known SVD, PCA, and NMF: CUR (Drineas et al., 2006), CMD (Sun et al., 2007b), and Colibri (Tong et al., 2008). The main advantage of these methods is interpretability: unlike SVD, which maps points into a different space, they leave the space intact and merely sample points from it. The simplest of them is CUR, in which the authors note two drawbacks: points are selected from the matrix with repetition, and the selection is linearly redundant. CMD manages to remove the first drawback; however, as in CUR, linear redundancy is inherent to the method, which the authors of the Colibri algorithm manage to avoid. Although these methods were invented specifically for graph anomaly detection via matrix approximation, they may also be promising for other problems.

In the problems discussed in this review, these approaches are applied according to the following principle: an approximation is computed, and one estimates how much individual columns/rows of the approximated matrix differ from those of the original. The authors also note the NrMF method (Tong and Lin, 2011), a modification of NMF in which the non-negativity constraint is imposed on the residual matrix R, since it carries the main information about the difference between the approximation and the original matrix, and negative values would be hard to interpret there. Nevertheless, it is not entirely clear why SVD cannot be used in a similar way: decompose, reconstruct, and then compute the difference from the original matrix.
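For what it's worth, that straightforward SVD variant is easy to sketch (this is my illustration, not a method from the survey): truncated SVD of the adjacency matrix, reconstruction, and per-node scores taken as row-wise reconstruction error.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

U, S, Vt = np.linalg.svd(A)
k = 5                                    # rank of the approximation
A_hat = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

R = A - A_hat                            # residual matrix
row_scores = np.linalg.norm(R, axis=1)   # per-node reconstruction error
print(np.argsort(row_scores)[::-1][:5])  # most anomalous nodes
```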


Identifying the nodes that connect anomalous ones


When analyzing the results, the task may arise of identifying the nodes associated with anomalous ones. For example, a node that has been subjected to a DDoS attack may be flagged as anomalous while the attacking nodes are not. Or the members of some group may be flagged as anomalous while the people who direct them are not. To solve this problem, the authors propose several approaches, the main idea of which is to extract from the full graph a subgraph that contains the anomalous nodes together with the nodes that best connect them.

  1. Connection subgraphs (Faloutsos et al., 2004). The problem is posed in terms of electrical engineering: one node is assigned a positive potential and the others zero, and one watches how current "flows" between them when the edges are assigned a certain resistance.
  2. Center-Piece Subgraphs (CePS) (Tong and Faloutsos, 2006). In contrast to the previous method, an attempt is made to extract a subgraph for only k of the anomalous nodes, since not all of them necessarily need to be connected. In this case, k must be specified.
  3. Dot2Dot (Akoglu et al., 2013b; Chau et al., 2012). In this approach, the authors solve the problem of grouping the selected nodes and then additionally select the nodes that connect them.

Examples of searching for anomalies in various fields


The authors describe cases where methods for detecting anomalies in graphs were used.

Telecommunications. The target is people who use services for free. Cortes et al. (2002) searched for subgraphs closely connected to a key node in terms of the number and duration of calls. Their observations: fraudulent accounts were interconnected, that is, the offenders either called each other or called the same phones. The second observation: violators can be detected by the similarity of their subgraphs, defined in the manner proposed.

Online auctions. Violators create fake accounts and farm ratings with them. They cannot be tracked by ordinary aggregate indicators, but they can be seen in the graph. Violators' accounts are connected more with fake accounts than with good ones. Fakes are connected roughly equally with violators' accounts and with good ones. The latter are mostly connected with accounts like themselves. Pandit et al. (2007) solve this problem by converting it into a relational Markov network and then classifying nodes via loopy belief propagation (class labels propagate iteratively along the graph).

Transactions. McGlohon et al. (2009) solve this problem through relational classification under the assumption that violators will be close to each other, which is similar to the approach in the previous example.

Brokers who cheat with securities. Here Neville et al. (2005) analyze a multimodal graph, extracting subgraphs that include the person under suspicion, his colleagues, the company he is associated with, and so on. They compute aggregated attributes and assign them to the central node. Then relational probability trees (Neville et al., 2003) are used for relational classification.

Searching for fake posts on forums that spread false information. The authors describe several approaches using feature descriptions, text analysis, and graph methods. The graph methods of Wang et al. (2011a) were applied to the situation where the task involves reviews of some product. Their algorithm assigns reviewers indicators of the degree of trust in them, their reviews indicators of honesty, and the products indicators of reliability. All these indicators are interconnected: how much a reviewer can be trusted depends on how honest his reviews are; the reliability of a product depends on the degree of trust in the reviewers who describe it; and the honesty of reviews depends on the reliability of the products they are written about and on the trust in their authors. The proposed algorithm first initializes these indicators randomly and then iteratively refines the estimates.
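A hedged sketch of such an iterative scheme; the update rules below are simplified stand-ins for the ones in the paper, and the data is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# ratings[i, j] = score reviewer i gave product j
ratings = rng.integers(1, 6, size=(5, 4)).astype(float)

trust = np.ones(5)        # trust in each reviewer, uniform initialization
reliability = np.ones(4)  # reliability of each product

for _ in range(20):       # iterate until the estimates roughly stabilize
    # product reliability: trust-weighted average of its ratings
    reliability = (trust @ ratings) / trust.sum()
    # review honesty: agreement of a review with the product consensus
    honesty = 1.0 / (1.0 + np.abs(ratings - reliability))
    # reviewer trust: average honesty of that reviewer's reviews
    trust = honesty.mean(axis=1)

print("reviewer trust:", np.round(trust, 3))
print("product reliability:", np.round(reliability, 3))
```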

Trading. Fraudsters first make a large number of transactions with each other in some stock, inflating its attractiveness, and then, when the stock rises in price, sell it to other traders. Both of these successive incidents can be tracked in graph data. At first, a subgraph stands out in which there are no external transactions (called a "black hole"); after a period of time, the same subgraph turns into one in which transactions from the subgraph to the rest of the graph strongly prevail (called a "volcano"). The authors cite the work of Li et al. (2010).

Web resources. One of the first approaches to fighting "bad" websites was the propagation of "trustworthiness" and "untrustworthiness" indicators across resources. If one page links to another, this raises the latter's status as trustworthy. If a page points to another page known to be spam, this lowers the trustworthiness of the original page. The TrustRank algorithm (Gyöngyi et al., 2004) is mentioned, a modification of PageRank for fighting web spam. It requires experts to initially mark a subset of sites as trustworthy; these indicators then spread through the graph, gradually decaying. Anti-TrustRank (Krishnan and Raj, 2006) follows the same principle but spreads untrustworthiness indicators from sites deliberately labeled as untrustworthy. Both methods share the drawback that trust is divided by the number of child nodes. Benczúr et al. (2005) suggest a completely different approach: analyze the PageRank of a site and its neighbors. The PageRank distribution of such subgraphs must obey a certain power law, and nodes whose neighbors' PageRank distribution deviates from this law are penalized. In Castillo et al. (2007) it is proposed to first train a classifier on known trustworthy and untrustworthy pages and then "smooth" the scoring results of the remaining websites over the graph.
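The propagation idea maps naturally onto the personalized PageRank already available in networkx; here is a minimal sketch in the spirit of TrustRank, with an illustrative link graph and seed set rather than the paper's exact implementation:

```python
import networkx as nx

web = nx.DiGraph(nx.scale_free_graph(100, seed=3))  # stand-in link graph

trusted_seeds = {0, 1, 2}  # pages vetted by experts
personalization = {v: (1.0 if v in trusted_seeds else 0.0) for v in web}

# Trust mass starts at the seeds and decays as it propagates along links.
trust = nx.pagerank(web, alpha=0.85, personalization=personalization)
print(sorted(trust, key=trust.get)[:5])  # least trusted: web-spam candidates
```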

Social networks. To detect fraudulent posts on social networks (farming likes, redirecting to a malicious page or to a survey), approaches are described that are based on conventional classifiers but take graph features into account: how far a "bad" post spreads through the graph, the number of likes and comments from other users on the post, the similarity of the messages with which users spread the post, and the degree of the vertex of the user who wrote the post.

Attacks on computer networks. Sun et al. (2008) successfully apply matrix decomposition (CMD) to this problem. Ding et al. (2012) take a community-detection approach, flagging bridges between communities as suspicious.

Conclusion


Until recently, graph theory came into contact with machine learning only in social networks, where there is simply no other way. Now the application of graph theory to classical ML problems is gradually developing, but still slowly, mainly because there are still few problems where moving to a graph representation is worthwhile. At international conferences one can sense a trend of gradual development of this field, but mostly in theory rather than in practice. Graph libraries are quite limited and underdeveloped. Anomaly detection is an even rarer task, since it is performed without labels, and the quality of detection can only be assessed by experts. In many problems, the transition to a graph description is simply not advisable.

If you are interested in reading about standard methods for detecting anomalies, I wrote about them on Habr a year ago in this article.

If you are interested in the topic, then for more information you should definitely join the ODS (OpenDataScience) Slack, in particular the #network_analysis and #class_cs224w channels, and watch the Stanford cs224w course.

More recently, a course on Knowledge Graphs was taught there as well. And, of course, you should read the article Graph based anomaly detection and description: a survey by Leman Akoglu, Hanghang Tong, and Danai Koutra (Akoglu, 2015), which this post discusses. I did not translate all of it, only the fragments that I considered important and understood to some extent. Most authors refer specifically to this article, because there are no other reviews of this level and breadth on the topic; at least, I did not find any.


Yes, our CleverDATA team does more than write and translate articles. We devote most of our time to solving interesting and varied practical problems, for which we use not only graph theory but also a wide range of machine learning and deep learning methods.
