The Mathematica Journal, Volume 10, Issue 1

Efficient, High-Quality Force-Directed Graph Drawing
Yifan Hu

4. Barnes-Hut Force Calculation

Each iteration of Algorithm 1 involves two loops. The outer loop iterates over each vertex $i$. Of the two inner loops that calculate the attractive and repulsive forces, the latter is the more expensive, looping over every vertex $j$, $j \neq i$. Thus the overall complexity is $O(|V|^2)$.
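To make the cost concrete, here is a minimal Python sketch (for illustration only; the function name and the default values of C and K are ours, with C and K playing the roles they have in the spring-electrical model of the earlier sections) of the naive repulsive-force loop:

import math

def naive_repulsive_forces(pos, C=0.2, K=1.0):
    # pos: list of (x, y) vertex coordinates; returns one force vector per vertex
    n = len(pos)
    forces = []
    for i in range(n):                      # the outer loop over vertices i
        fx = fy = 0.0
        for j in range(n):                  # the O(|V|) inner loop over j != i
            if i == j:
                continue
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            d = math.hypot(dx, dy) or 1e-9  # guard against coincident vertices
            f = C * K * K / d               # repulsive magnitude, C K^2 / d
            fx += f * dx / d                # push i directly away from j
            fy += f * dy / d
        forces.append((fx, fy))
    return forces

The double loop touches every ordered pair of vertices once, which is exactly the $O(|V|^2)$ cost that the remainder of this section works to avoid.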

The repulsive force calculation resembles the $n$-body problem in physics, which is well studied. One widely used technique for calculating the repulsive forces in $O(n \log n)$ time with good accuracy, but without ignoring long-range forces, is to treat groups of faraway vertices as supernodes, using a suitable data structure [1]. This idea was adopted by Tunkelang [16] and Quigley [18], both of whom used an octree (3D) or quadtree (2D) data structure. In principle, other space decomposition methods can also be used; for example, Pulo [21] investigated recursive Voronoi diagrams.

For simplicity, hereafter we use the term octree exclusively; it should be understood as a quadtree in the context of 2D layout. An octree data structure is constructed by first forming a square (or cube in 3D) that encloses all vertices. This is the level 0 square. If it contains more than one vertex, this square is subdivided into four squares (or eight cubes), which form the level 1 squares. The process is repeated until each square contains no more than one vertex. Figure 5 (left) shows an octree on the jagmesh1 graph.
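As an illustration of this construction, the following Python sketch builds a quadtree over 2D points; each node stores the square it covers, the number of vertices $|S|$ inside it, and their center of gravity, which is all the force approximation below needs. The class and helper names are ours, and, as written, two coincident vertices would recurse indefinitely; the depth limit discussed later in this section is the remedy.

from dataclasses import dataclass, field

@dataclass
class QuadNode:
    cx: float                                   # center of the square
    cy: float
    width: float                                # side length of the square
    count: int = 0                              # |S|: number of vertices inside
    gx: float = 0.0                             # running center of gravity
    gy: float = 0.0
    children: list = field(default_factory=list)
    point: tuple = None                         # the stored vertex when count == 1

def build_quadtree(points):
    xs, ys = zip(*points)
    width = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    root = QuadNode((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2, width)
    for p in points:                            # the level 0 square encloses all vertices
        insert(root, p)
    return root

def child_for(node, p):
    # children are ordered SW, NW, SE, NE by the construction in insert
    return node.children[(2 if p[0] >= node.cx else 0) + (1 if p[1] >= node.cy else 0)]

def insert(node, p):
    # maintain |S| and the center of gravity of every square on the way down
    node.gx = (node.gx * node.count + p[0]) / (node.count + 1)
    node.gy = (node.gy * node.count + p[1]) / (node.count + 1)
    node.count += 1
    if node.count == 1:                         # empty square: just store the vertex
        node.point = p
        return
    if not node.children:                       # more than one vertex: subdivide
        half = node.width / 2
        node.children = [QuadNode(node.cx + dx * half / 2, node.cy + dy * half / 2, half)
                         for dx in (-1, 1) for dy in (-1, 1)]
        old, node.point = node.point, None      # push the previous occupant down
        insert(child_for(node, old), old)
    insert(child_for(node, p), p)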

The octree forms a recursive grouping of vertices and can be used to efficiently approximate the repulsive force in the spring-electrical model. The idea is that in calculating the repulsive force on a vertex $i$, if a cluster of vertices, $S$, lies in a square that is "far" from $i$, the whole cluster can be treated as a supernode. The supernode is assumed to be situated at the center of gravity of the cluster, $\bar{x}_S = \sum_{j \in S} x_j / |S|$. The repulsive force on vertex $i$ from this supernode is

$$f_r(i, S) = \frac{-C K^2 |S|}{\|x_i - \bar{x}_S\|}.$$

However, we need to define what "far" means. Following [16, 18], we define the supernode $S$ to be faraway from vertex $i$ if the width of the square that contains the cluster is small compared with the distance between the cluster and the vertex,

$$\frac{w_S}{\|x_i - \bar{x}_S\|} \le \theta. \qquad (7)$$

Here $w_S$ is the width of the square that the cluster lies in, and $\theta$ is a parameter. The smaller the value of $\theta$, the more accurate the approximation to the repulsive force, but the more computationally expensive the calculation. We found that $\theta = 1.2$ is a good compromise and use this value throughout the article. Inequality (7) is called the Barnes-Hut opening criterion [1].

The octree data structure allows efficient identification of all the supernodes that satisfy (7). The process starts from the level 0 square. Each square is checked and recursively opened until inequality (7) is satisfied. Figure 5 (right) shows all the supernodes (the squares) and the vertices these supernodes consist of, with reference to a vertex located at the top-middle part of the graph. In this case there are 32 supernodes.

Figure 5. An illustration of the octree data structure: the overall octree (left); supernodes with reference to a vertex at the top-middle part of the graph, with $\theta = 1.2$ (right).
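In code, identifying the supernodes and accumulating the approximate repulsive force can be done in one recursive walk over the quadtree sketched above. In this Python sketch, theta = 1.2 follows the text, C and K are the spring-electrical constants as before, and the handling of d = 0 (the cluster degenerating to vertex i itself) is our own guard:

import math

def approx_repulsive_force(node, xi, yi, theta=1.2, C=0.2, K=1.0):
    # approximate repulsive force on the vertex at (xi, yi)
    if node.count == 0:
        return (0.0, 0.0)
    dx, dy = xi - node.gx, yi - node.gy
    d = math.hypot(dx, dy)
    # a single vertex, or a faraway cluster satisfying the opening criterion (7)
    if node.count == 1 or (d > 0 and node.width / d <= theta):
        if d == 0:
            return (0.0, 0.0)              # the "cluster" is vertex i itself
        f = C * K * K * node.count / d     # supernode force: magnitude |S| C K^2 / d
        return (f * dx / d, f * dy / d)
    fx = fy = 0.0
    for child in node.children:            # criterion fails: open the square
        cfx, cfy = approx_repulsive_force(child, xi, yi, theta, C, K)
        fx += cfx
        fy += cfy
    return (fx, fy)

Counting the recursive calls and the supernodes accepted by this walk gives exactly the two quantities from which the cost estimate (8) below is built.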

Under reasonable assumptions [1, 22] about the distribution of vertices, it can be proved that building the octree takes $O(|V| \log |V|)$ time. Finding all the supernodes with reference to a vertex can also be done in $O(\log |V|)$ time. Overall, by using an octree structure to approximate the repulsive force, the complexity of each iteration of Algorithm 1 under the spring-electrical model is reduced from $O(|V|^2)$ to $O(|V| \log |V|)$.

We build the octree structure only once per iteration of the outer loop. Note that the positions of the vertices are actually updated continuously within the loop; therefore, as they move about, some vertices may no longer lie in the squares in which they are supposed to lie. We found, however, that this does not cause a problem in practice. Because the tree is built only once per iteration, its construction takes just a fraction of the total time; the majority of the time in Algorithm 1 is spent in finding the supernodes and in the force calculations.

For very large graphs, it may happen that some vertices are very close to each other. If we keep subdividing the squares without limit, we may therefore end up with a tree structure that has one or more very deep branches, which makes the algorithm less efficient. In particular, a disproportionately large amount of time is spent searching through the tree to find supernodes. It is therefore necessary to limit the number of levels in the octree; we denote this limit by $l_{\max}$. Even if a square at level $l_{\max}$ still contains multiple vertices, it is not subdivided further. We call such a square a dense leaf (of the octree).
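The depth limit changes only the insertion routine of the construction sketch above: once level l_max is reached, a square keeps a list of its vertices instead of splitting. One possible variant, with our own names:

def insert_limited(node, p, level=0, l_max=10):
    node.gx = (node.gx * node.count + p[0]) / (node.count + 1)
    node.gy = (node.gy * node.count + p[1]) / (node.count + 1)
    node.count += 1
    if level == l_max:                      # dense leaf: never subdivided further
        if not hasattr(node, "points"):
            node.points = []
        node.points.append(p)
        return
    if node.count == 1:
        node.point = p
        return
    if not node.children:
        half = node.width / 2
        node.children = [QuadNode(node.cx + dx * half / 2, node.cy + dy * half / 2, half)
                         for dx in (-1, 1) for dy in (-1, 1)]
        old, node.point = node.point, None
        insert_limited(child_for(node, old), old, level + 1, l_max)
    insert_limited(child_for(node, p), p, level + 1, l_max)

The traversal needs a matching change: when a dense leaf fails criterion (7), it must iterate over node.points and add each vertex's force individually, which is precisely the per-vertex cost described in the next paragraph.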

However, it is difficult to decide a priori how many levels should be allowed. If we set $l_{\max}$ too small, many vertices will end up in the same square at the last level. This increases the average number of supernodes: when identifying the supernodes needed to approximate the repulsive force on a vertex $i$, if a dense leaf happens to be close to $i$, each vertex in the dense leaf has to be treated individually as a supernode, since no subgrouping is available. In the extreme case $l_{\max} = 0$, every vertex belongs to a single dense leaf, and there are $|V| - 1$ supernodes with reference to each vertex $i$, one for every vertex $j \neq i$. On the other hand, although a large value of $l_{\max}$ reduces the average number of supernodes, it increases the number of squares that need to be traversed, due to the deep branches.

We use an adaptive scheme to automatically find the optimal $l_{\max}$. This is essentially a one-dimensional optimization problem, with $l_{\max}$ the variable and the CPU time of each outer iteration of Algorithm 1 the objective function; that time consists largely of the time to traverse the octree and the time for the repulsive force calculation. So one way to find the optimal $l_{\max}$ is to measure the CPU time of the outer loop and increase or decrease $l_{\max}$ by one each time until the bottom of a valley is located. However, CPU time measurements can fluctuate, and such a scheme may produce a different $l_{\max}$ from run to run. This in turn gives a different layout between runs, which is undesirable. Instead, we use

$$t_{\text{est}} = n_{sq} + \alpha\, n_{sn} \qquad (8)$$

as the objective function, where $n_{sq}$ is the total number of squares traversed, $n_{sn}$ is the total number of supernodes found during one outer iteration, and $\alpha$ is a parameter chosen so that (8) gives the best estimate of the CPU time. Through numerical experiments, we found that values of $\alpha$ within a moderate fixed range give very good correlation with the CPU time measurement, and we fixed $\alpha$ within that range. The adaptive scheme starts from an initial depth. After one outer iteration, we change $l_{\max}$ by one. Then, depending on whether the estimated CPU time increases or decreases, we try a smaller or larger depth. If we ever hit a depth already tried, we end the procedure and use the depth corresponding to the smallest estimated CPU time. Typically, we found that the procedure converges within 3-4 outer iterations, and the located $l_{\max}$ is very near the optimal value. For smaller graphs of a few thousand vertices, $l_{\max}$ typically settles at around 8, while for very large graphs, $l_{\max}$ can go as high as 11.
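The search itself is simple to state in code. This Python sketch assumes the additive form of (8) used above; run_outer_iteration is a hypothetical callback that performs one outer iteration of Algorithm 1 at the given depth and returns the estimated time from (8), and the starting depth of 10 is a placeholder, not a value from the article:

def adapt_depth(run_outer_iteration, l_start=10):
    # move the depth up or down by one, following decreasing estimated cost (8),
    # and stop as soon as a previously tried depth would be revisited
    tried = {l_start: run_outer_iteration(l_start)}
    l, step = l_start, 1
    while True:
        nxt = l + step
        if nxt in tried or nxt < 1:
            break
        tried[nxt] = run_outer_iteration(nxt)
        if tried[nxt] < tried[l]:
            l = nxt                         # estimate decreased: keep going
        else:
            step = -step                    # estimate increased: reverse direction
    return min(tried, key=tried.get)        # depth with the smallest estimate

Because the objective is a deterministic count rather than a wall-clock measurement, a search of this kind returns the same depth, and hence the same layout, on every run.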



     