The idea is to take a random walk on the set of journals: from the current journal, pick a citation at random from all of the papers it published in a fixed year (say 2006) and move to the journal being cited. Once you are "at" that journal, you again draw a citation at random from all of its papers in the fixed year, not from the particular paper that was just cited (which would typically have been published well before 2006). Numbers like 0.0040427 give the fraction of time the walk ends up spending at a given journal, measured over the whole of the scientific literature. Because that fraction is naturally larger for journals which publish more articles, you divide by the number of articles to get a per-article measure of a journal's strength.
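To make this concrete, here is a minimal sketch of the walk in Python. The journal names, citation lists, and article counts are made up for illustration; the real computation runs over the full citation database for the fixed year.

```python
import random
from collections import Counter

# Hypothetical data: cites[j] lists the journals cited by the papers that
# journal j published in the fixed year (one entry per citation), and
# articles[j] is the number of articles j published in that year.
cites = {
    "A": ["B", "B", "C"],
    "B": ["A", "C", "C", "C"],
    "C": ["A", "B"],
}
articles = {"A": 10, "B": 40, "C": 25}

def walk(cites, steps=200_000, seed=0):
    """Simulate the random walk; return the fraction of time at each journal."""
    rng = random.Random(seed)
    current = rng.choice(list(cites))
    visits = Counter()
    for _ in range(steps):
        visits[current] += 1
        # Draw a citation uniformly from everything the current journal
        # published in the fixed year, and move to the cited journal.
        current = rng.choice(cites[current])
    return {j: n / steps for j, n in visits.items()}

influence = walk(cites)
per_article = {j: influence[j] / articles[j] for j in influence}
print(influence)     # fraction of time spent at each journal
print(per_article)   # size-adjusted, per-article score
```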

This is essentially the algorithm which Google uses - so every one of us has used it - and it has been used in social science for almost forty years. I teach it in my linear algebra class, since the vector of long-run fractions of time spent at each vertex turns out to be the dominant eigenvector, with eigenvalue one, of the weighted adjacency matrix of the graph. For a graph of any size, though, you are better off performing the random walk than trying to find that eigenvector algebraically, even though you know its eigenvalue is one.
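Here is the same toy example from the linear-algebra side: build a column-stochastic transition matrix from the hypothetical citation counts above and run power iteration, which amounts to pushing a whole probability vector through the random walk at once rather than solving for the eigenvector algebraically.

```python
import numpy as np

# Citation counts for the hypothetical journals A, B, C above.
# counts[i, j] = number of citations from journal j's papers to journal i.
counts = np.array([
    # from:      A  B  C
    [0, 1, 1],   # citations to A
    [2, 0, 1],   # citations to B
    [1, 3, 0],   # citations to C
], dtype=float)

# Column-stochastic transition matrix: P[i, j] is the probability of
# stepping from journal j to journal i.
P = counts / counts.sum(axis=0)

# Power iteration: repeatedly apply P to a probability vector. The iterates
# converge to the eigenvalue-one eigenvector of P, i.e. the long-run
# fractions of time the walk spends at each journal.
v = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    v = P @ v
print(v)  # stationary distribution; matches the simulated walk above
```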