Intuition behind eigenvalues of an adjacency matrix
I am currently working to understand the Cheeger bound and Cheeger's inequality, and their use for spectral partitioning, conductance, expansion, etc., but I still struggle to develop even the beginning of an intuition for the second eigenvalue of the adjacency matrix.

Usually, in graph theory, most of the concepts we come across are quite simple to intuit, but in this case I can't even come up with what kind of graphs would have a very low, or a very high, second eigenvalue.

I've been reading similar questions asked here and there on the SE network, but they usually refer to eigenvalues in different fields (multivariate analysis, Euclidean distance matrices, correlation matrices, ...), and nothing about spectral partitioning and graph theory.

Can someone share their intuition or experience with this second eigenvalue in the case of graphs and adjacency matrices?

graph-theory adjacency-matrix

asked May 28 at 13:36 by m.raynal, edited May 29 at 9:57 by Glorfindel
Comments:

– Yuval Filmus (May 28 at 14:14): Are you familiar with the connection between the spectrum of the adjacency matrix and the convergence of random walks on the graph?

– m.raynal (May 28 at 15:00): @YuvalFilmus Not at all, despite being really familiar with random walks, and somewhat familiar with the spectrum of an adjacency matrix. So I'm interested in your view indeed :)
3 Answers
The second (in magnitude) eigenvalue controls the rate of convergence of the random walk on the graph. This is explained in many lecture notes, for example the lecture notes of Luca Trevisan. Roughly speaking, the $L_2$ distance to uniformity after $t$ steps can be bounded by $\lambda_2^t$.
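For concreteness, here is a minimal numerical sketch of this decay (assuming Python with numpy and networkx; the graph, sizes, and normalization are illustrative choices, not part of the answer). On a $d$-regular graph the walk matrix is $A/d$, so the relevant rate is $(\lambda_2/d)^t$:

```python
# Hedged sketch: compare a random walk's distance to uniformity with the
# (lambda_2 / d)^t decay rate on a random d-regular graph.
import numpy as np
import networkx as nx

n, d = 200, 4
G = nx.random_regular_graph(d, n, seed=1)
A = nx.to_numpy_array(G)
W = A / d                                   # transition matrix of the walk

lam2 = np.sort(np.abs(np.linalg.eigvalsh(A)))[-2]  # 2nd largest |eigenvalue|

p = np.zeros(n); p[0] = 1.0                 # walk started at a single vertex
u = np.full(n, 1.0 / n)                     # uniform distribution
for t in range(1, 16):
    p = p @ W                               # one step of the walk
    if t % 5 == 0:
        print(t, np.linalg.norm(p - u), (lam2 / d) ** t)
```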
Another place where the second eigenvalue shows up is the planted clique problem. The starting point is the observation that a random $G(n,1/2)$ graph contains a clique of size $2\log_2 n$, but the greedy algorithm only finds a clique of size $\log_2 n$, and no better efficient algorithm is known. (The greedy algorithm just picks a random node, throws away all non-neighbors, and repeats.)
This suggests planting a large clique on top of $G(n,1/2)$. The question is: how big should the clique be so that we can find it efficiently? If we plant a clique of size $C\sqrt{n\log n}$, then we can identify the vertices of the clique just by their degree; but this method only works for cliques of size $\Omega(\sqrt{n\log n})$. We can improve this using spectral techniques: if we plant a clique of size $C\sqrt{n}$, then the second eigenvector encodes the clique, as Alon, Krivelevich and Sudakov showed in a classic paper.
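A hedged sketch of that spectral observation (again assuming numpy/networkx; the constant $C$, the graph size, and the simple top-$k$ recovery rule are illustrative choices in the spirit of the Alon–Krivelevich–Sudakov result, not their exact algorithm):

```python
# Hedged sketch: plant a clique of size ~ C*sqrt(n) in G(n, 1/2) and check
# that the second eigenvector of A concentrates on the clique vertices.
import numpy as np
import networkx as nx

n = 2000
k = int(8 * np.sqrt(n))            # clique size C*sqrt(n), with C = 8
rng = np.random.default_rng(0)

G = nx.gnp_random_graph(n, 0.5, seed=0)
clique = rng.choice(n, size=k, replace=False)
G.add_edges_from((int(u), int(v)) for i, u in enumerate(clique)
                 for v in clique[i + 1:])

A = nx.to_numpy_array(G)
_, vecs = np.linalg.eigh(A)        # eigenvalues in ascending order
v2 = vecs[:, -2]                   # eigenvector of the 2nd-largest eigenvalue

top_k = np.argsort(np.abs(v2))[-k:]          # k largest coordinates of |v2|
overlap = len(set(map(int, top_k)) & set(map(int, clique))) / k
print(f"fraction of top-{k} coordinates inside the planted clique: {overlap:.2f}")
```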
More generally, the first few eigenvectors are useful for partitioning the graph into a small number of clusters. See for example Chapter 3 of the lecture notes of Luca Trevisan, which describes higher-order Cheeger inequalities.
answered May 28 at 15:15 by Yuval Filmus
(Disclaimer: This answer is about eigenvalues of graphs in general, not the second eigenvalue in particular. I hope it is helpful nevertheless.)
An interesting way of thinking about the eigenvalues of a graph $G = (V, E)$ is by taking the vector space $\mathbb{R}^n$, where $n = |V|$, and identifying each vector with a function $f\colon V \to \mathbb{R}$ (i.e., a vertex labeling). An eigenvector of the adjacency matrix, then, is an element $f \in \mathbb{R}^n$ such that there is $\lambda \in \mathbb{R}$ (i.e., an eigenvalue) with $Af = \lambda f$, $A$ being the adjacency matrix of $G$. Note that $Af$ is the vector associated with the map which sends every vertex $v \in V$ to $\sum_{u \in N(v)} f(u)$, $N(v)$ being the set of neighbors of (i.e., vertices adjacent to) $v$. Hence, in this setting, the eigenvector property of $f$ corresponds to the property that summing the function values (under $f$) over the neighbors of a vertex yields the same result as multiplying the function value of the vertex by the constant $\lambda$.
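As a quick sanity check, this vertex-labeling view can be verified numerically (a sketch assuming Python with numpy/networkx; the Petersen graph is just an arbitrary test case):

```python
# Hedged sketch: for an eigenvector f of A, summing f over the neighbors of
# each vertex v gives lambda * f(v).
import numpy as np
import networkx as nx

G = nx.petersen_graph()
A = nx.to_numpy_array(G)
lam, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
f = vecs[:, -1]                    # eigenvector of the largest eigenvalue

for v in G.nodes():
    neighbor_sum = sum(f[u] for u in G.neighbors(v))   # this is (A f)(v)
    assert np.isclose(neighbor_sum, lam[-1] * f[v])
print("sum of f over neighbors equals lambda * f(v) at every vertex")
```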
answered May 28 at 14:03 by dkaeae

Comments:

– m.raynal (May 28 at 14:09): Thanks a lot, I had never 'seen' that the eigenvector multiplied by $\lambda$ has the value of the sum of the function values of the neighbors (even if it comes straight from the definition).

– dkaeae (May 28 at 14:14): Me neither :) I found it by chance in a syllabus on eigenvalues of graphs.
I think for most things it's more productive to look at the Laplacian of the graph $G$, which is closely related to the adjacency matrix. Here you can use it to relate the second eigenvalue to a "local vs global" property of the graph.
For simplicity, let's suppose that $G$ is $d$-regular. Then the normalized Laplacian of $G$ is $L = I - \frac{1}{d}A$, where $I$ is the $n \times n$ identity and $A$ is the adjacency matrix. The nice thing about the Laplacian is that, writing vectors as functions $f: V \to \mathbb{R}$ as @dkaeae does, and using $\langle \cdot, \cdot \rangle$ for the usual inner product, we have this very nice expression for the quadratic form given by $L$:
$$
\langle f, Lf \rangle = \frac{1}{d} \sum_{(u,v) \in E} (f(u) - f(v))^2.
$$
The largest eigenvalue of $A$ is $d$, and corresponds to the smallest eigenvalue of $L$, which is $0$; the second largest eigenvalue $\lambda_2$ of $A$ corresponds to the second smallest eigenvalue of $L$, which is $1 - \frac{\lambda_2}{d}$. By the min-max principle, we have
$$
1 - \frac{\lambda_2}{d} = \min\left\{ \frac{\langle f, Lf \rangle}{\langle f, f \rangle} : \sum_{v \in V} f(v) = 0,\ f \neq 0 \right\}.
$$
Notice that $\langle f, Lf \rangle$ does not change when we shift $f$ by the same constant for every vertex. So, equivalently, you can define, for any $f: V \to \mathbb{R}$, the "centered" function $f_0$ by $f_0(u) = f(u) - \frac{1}{n}\sum_{v \in V} f(v)$, and write
$$
1 - \frac{\lambda_2}{d} = \min\left\{ \frac{\langle f, Lf \rangle}{\langle f_0, f_0 \rangle} : f \text{ not constant} \right\}.
$$
Now a bit of calculation shows that $\langle f_0, f_0 \rangle = \frac{1}{n} \sum_{\{u,v\} \in \binom{V}{2}} (f(u) - f(v))^2$, and substituting above and dividing numerator and denominator by $\frac{n}{2}$, we have
$$
1 - \frac{\lambda_2}{d} = \min\left\{ \frac{\frac{2}{nd} \sum_{(u,v) \in E} (f(u) - f(v))^2}{\frac{2}{n^2} \sum_{\{u,v\} \in \binom{V}{2}} (f(u) - f(v))^2} : f \text{ not constant} \right\}.
$$
What this means is that, if we place every vertex $u$ of $G$ on the real line at the point $f(u)$, then the average squared distance between two independent random vertices in the graph (the denominator) is at most $\frac{d}{d - \lambda_2}$ times the average squared distance between the endpoints of a random edge in the graph (the numerator). So in this sense, a large spectral gap means that what happens across a random edge of $G$ (local behavior) is a good predictor for what happens across a random uncorrelated pair of vertices (global behavior).
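To make the final statement concrete, here is a small numerical check (a sketch assuming numpy/networkx; the graph and the placement $f$ are arbitrary illustrative choices). It verifies that the average squared distance over independent vertex pairs is at most $\frac{d}{d-\lambda_2}$ times the average squared distance over edges:

```python
# Hedged sketch: check the local-vs-global bound for a random placement f.
import numpy as np
import networkx as nx

n, d = 60, 4
G = nx.random_regular_graph(d, n, seed=7)
A = nx.to_numpy_array(G)
lam2 = np.sort(np.linalg.eigvalsh(A))[-2]    # second-largest eigenvalue of A

f = np.random.default_rng(0).normal(size=n)  # arbitrary placement on the line

edge_avg = np.mean([(f[u] - f[v]) ** 2 for u, v in G.edges()])
pair_avg = np.mean((f[:, None] - f[None, :]) ** 2)  # two independent vertices

print(pair_avg, "<=", d / (d - lam2) * edge_avg)
assert pair_avg <= d / (d - lam2) * edge_avg + 1e-9
```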
answered May 29 at 5:04 by Sasho Nikolov, edited May 30 at 3:55