Eigenvector Centrality

A natural extension of degree centrality is eigenvector centrality. In-degree centrality awards one centrality point for every link a node receives. But not all vertices are equivalent: some are more relevant than others, and, reasonably, endorsements from important nodes count more. The eigenvector centrality thesis reads:

A node is important if it is linked to by other important nodes.

Eigenvector centrality differs from in-degree centrality: a node receiving many links does not necessarily have a high eigenvector centrality (it might be that all linkers have low or null eigenvector centrality). Moreover, a node with high eigenvector centrality is not necessarily highly linked (the node might have few but important linkers).

Eigenvector centrality, regarded as a ranking measure, is a remarkably old method. Early pioneers of this technique are Wassily W. Leontief (The Structure of American Economy, 1919-1929. Harvard University Press, 1941) and John R. Seeley (The net of reciprocal influence: A problem in treating sociometric data. The Canadian Journal of Psychology, 1949). Read the full story.

Math

Let $A = (a_{i,j})$ be the adjacency matrix of a graph. The eigenvector centrality $x_i$ of node $i$ is given by:

$$x_i = \frac{1}{\lambda} \sum_k a_{k,i} x_k$$

where $\lambda \neq 0$ is a constant. In matrix form we have:

$$\lambda x = x A$$

Hence the centrality vector $x$ is the left eigenvector of the adjacency matrix $A$ associated with the eigenvalue $\lambda$. It is wise to choose $\lambda$ as the largest eigenvalue in absolute value of matrix $A$. By virtue of the Perron-Frobenius theorem, this choice guarantees the following desirable property: if matrix $A$ is irreducible, or equivalently if the graph is (strongly) connected, then the eigenvector solution $x$ is both unique (up to a multiplicative constant) and positive.
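As a sanity check of this definition, here is a minimal base-R sketch that computes the dominant left eigenvector of an adjacency matrix directly with eigen(); the 3-node strongly connected directed graph is an assumed toy example, not one from the text:

```r
# adjacency matrix of a small strongly connected directed graph (illustrative)
A = matrix(c(0, 1, 0,
             0, 0, 1,
             1, 1, 0), nrow = 3, byrow = TRUE)

# left eigenvectors of A are right eigenvectors of t(A)
e = eigen(t(A))
i = which.max(abs(e$values))   # index of the dominant eigenvalue
x = abs(Re(e$vectors[, i]))    # dominant left eigenvector, made positive
x = x / max(x)                 # scale so the largest entry is 1
round(x, 3)                    # eigenvector centrality scores
```

Node 2 scores highest here: it has two in-links, one of them from the also well-endorsed node 3.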

The power method can be used to solve the eigenvector centrality problem. Let $m(v)$ denote the signed component of maximal magnitude of vector $v$. If there is more than one maximal component, let $m(v)$ be the first one. For instance, $m((3, -3, 2)) = 3$. Let $x^{(0)} = e$, where $e$ is the vector of all 1's. For $k \geq 1$:

  1. repeatedly compute $x^{(k)} = x^{(k-1)} A$;
  2. normalize $x^{(k)} = x^{(k)} / m(x^{(k)})$;

until the desired precision is achieved. It follows that $x^{(k)}$ converges to the dominant eigenvector of $A$ and $m(x^{(k)})$ converges to the dominant eigenvalue of $A$. If matrix $A$ is sparse, each vector-matrix product can be performed in linear time in the size of the graph.
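The steps above can be sketched in a few lines of base R; the adjacency matrix is an assumed toy example (the complete graph on 3 nodes), chosen so the limits are easy to see:

```r
# power method sketch on a small adjacency matrix (illustrative: K3)
A = matrix(c(0, 1, 1,
             1, 0, 1,
             1, 1, 0), nrow = 3, byrow = TRUE)

# m(v): the signed component of maximal magnitude (first one on ties)
m = function(v) v[which.max(abs(v))]

x = rep(1, nrow(A))          # x^(0) = e, the vector of all 1's
for (k in 1:50) {
  x = as.vector(x %*% A)     # x^(k) = x^(k-1) A
  x = x / m(x)               # normalize by the maximal component
}
x                            # dominant eigenvector: here (1, 1, 1)
m(as.vector(x %*% A))        # dominant eigenvalue: here 2
```

A fixed iteration count keeps the sketch short; the function in the Code section below stops on a precision threshold instead.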

Let

$$|\lambda_1| \geq |\lambda_2| \geq \ldots \geq |\lambda_n|$$

be the eigenvalues of $A$. The power method converges if $|\lambda_1| > |\lambda_2|$. The rate of convergence is the rate at which $(\lambda_2 / \lambda_1)^k$ goes to 0. Hence, if the sub-dominant eigenvalue is small compared to the dominant one, the method converges quickly. If $A$ is a symmetric matrix, that is, the graph of $A$ is undirected, then the eigenvalues of $A$ are all real numbers (in general, they might be complex numbers). In this case, it holds that the graph of $A$ is bipartite if and only if $\lambda_1$ and $-\lambda_1$ are both eigenvalues of $A$. Hence, for bipartite graphs $\lambda_2 = -\lambda_1$ and thus $|\lambda_1| = |\lambda_2|$. It follows that the power method fails to converge on bipartite graphs. The Lanczos and Jacobi-Davidson methods are faster alternatives to the power method that converge also on bipartite graphs.
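The bipartite caveat is easy to check numerically. A minimal sketch using a 4-cycle (an assumed bipartite example) shows that both $\lambda_1$ and $-\lambda_1$ occur in the spectrum:

```r
# adjacency matrix of a 4-cycle, an undirected bipartite graph
A = matrix(c(0, 1, 0, 1,
             1, 0, 1, 0,
             0, 1, 0, 1,
             1, 0, 1, 0), nrow = 4, byrow = TRUE)

lambda = eigen(A)$values
round(lambda, 6)   # 2 and -2 both appear, so |lambda1| == |lambda2|
```

With $|\lambda_1| = |\lambda_2|$ the ratio $(\lambda_2 / \lambda_1)^k$ never goes to 0, which is exactly why the power method fails here.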

Code

The built-in function eigen_centrality (R) computes eigenvector centrality (using the ARPACK library).
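A minimal usage sketch follows; the small undirected graph is an assumed example, and the igraph package must be loaded:

```r
library(igraph)

# a small undirected connected graph (illustrative):
# a triangle 1-2-3 with a pendant node 4 attached to node 3
g = make_graph(c(1,2, 1,3, 2,3, 3,4), directed = FALSE)

ec = eigen_centrality(g)
round(ec$vector, 3)   # scores, scaled so the maximum centrality is 1
round(ec$value, 3)    # dominant eigenvalue of the adjacency matrix
```

Node 3 scores highest: it has the most links and sits inside the triangle.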

A user-defined function eigenvector.centrality follows:

# Eigenvector centrality (direct method)
# INPUT
# g = graph
# t = precision
# OUTPUT
# A list with:
#   vector = centrality vector
#   value = eigenvalue
#   iter = number of iterations
eigenvector.centrality = function(g, t = 6) {
  A = as_adjacency_matrix(g, sparse = FALSE)
  n = vcount(g)
  x0 = rep(0, n)
  x1 = rep(1/n, n)
  eps = 1/10^t
  iter = 0
  while (sum(abs(x0 - x1)) > eps) {
    x0 = x1
    x1 = x1 %*% A
    m = x1[which.max(abs(x1))]
    x1 = x1 / m
    iter = iter + 1
  }
  return(list(vector = as.vector(x1), value = m, iter = iter))
}

Example

An undirected connected graph with nodes labelled with their eigenvector centrality follows: