
The Effectiveness of “Pre-computed Vector Space and Interpolation” Method on the Computing Time for an n-body Simulation

Abstract

In the fields of physics and astronomy, n-body simulations are widely used tools to simulate particles under the influence of gravitational forces. They are generally built with an essentially brute-force approach that computes all pairwise gravitational forces by solving the equations of motion numerically with algorithms like the Runge-Kutta 4th-order method. The purpose of this research is to design an optimized n-body simulation algorithm of shorter computation time by assessing the performance of existing algorithms and designing an alternative using a “pre-computed vector space and interpolation” method. Theoretically, this algorithm should cut down the computing time by first computing the gravitational vector fields for each body in the simulation, creating a lookup table from these vector fields, and then using various degrees of interpolation functions to approximate the accelerations of the bodies from the lookup table. To optimize the algorithm, two competing factors must be balanced: the level of accuracy, determined by the density of the pre-computed vector space and the level of interpolation, and the computing speed. The performance of each algorithm was quantified by measuring its computing time and error while varying the number of bodies in the simulation. The results showed that the optimal algorithm, which used interpolation level 4 over a pre-computed vector space of acceleration values, successfully reduced the computing time from O(n²) to O(n) while retaining a desirable level of accuracy.

Introduction

The purpose of this research is to design an optimized n-body simulation algorithm of shorter computation time by assessing the performance of existing algorithms and designing an alternative using a “pre-computed vector space and interpolation” method. In the fields of physics and astronomy, n-body simulations, showing particles under the influence of gravitational forces, are widely used tools to investigate few-body systems like the Earth-Moon-Sun system and large-scale systems like the universe.

The n-body Simulations

In general, the most straightforward algorithm used by astrophysicists to create n-body simulations is the Particle-Particle (PP) method. The PP method is rather inefficient, as it computes all pairwise gravitational forces by solving the equations of motion numerically with algorithms like the Runge-Kutta 4th-order method (RK4) [1]. While the PP method may be used for few-body simulations, it is not feasible for large numbers of particles due to the computing time. The PP method introduces no added approximation other than the inherent error of the RK4 algorithm; thus it is the most accurate algorithm possible [2].
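The PP method described above can be summarized in a short sketch. This is a minimal Python illustration (the study itself used Mathematica); the function names and the simple 2-D setup are assumptions for the example, not the paper's actual code. It computes every pairwise inverse-square acceleration and advances the system with one RK4 step:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def pairwise_accelerations(positions, masses):
    """Brute-force O(n^2) sum of inverse-square accelerations on each body."""
    n = len(positions)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]  # vector from body i toward body j
            acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def rk4_step(positions, velocities, masses, dt):
    """One Runge-Kutta 4th-order step for the coupled position/velocity system."""
    def deriv(pos, vel):
        return vel, pairwise_accelerations(pos, masses)
    k1v, k1a = deriv(positions, velocities)
    k2v, k2a = deriv(positions + 0.5 * dt * k1v, velocities + 0.5 * dt * k1a)
    k3v, k3a = deriv(positions + 0.5 * dt * k2v, velocities + 0.5 * dt * k2a)
    k4v, k4a = deriv(positions + dt * k3v, velocities + dt * k3a)
    new_pos = positions + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    new_vel = velocities + dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
    return new_pos, new_vel
```

The double loop over all pairs is exactly what makes the PP method accurate but quadratic in the number of bodies.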

One way that astrophysicists work around the drawbacks of the PP method is to use the tree code method. This method is an approximation approach and drastically reduces computing time by considering clusters of particles rather than each individual particle separately. Particles within a predetermined radius are clustered into one larger particle. By doing this, the number of computations required is reduced, thereby reducing the computing time of the simulation [2]. The tree code method is most commonly applied when dealing with bodies in space: for example, rather than considering the particle-to-particle attractions within the Earth, the planet is simplified to a single particle with the mass of the Earth. Though the tree code method reduces computing time, because it considers clustered particles, the algorithm is unusable when astrophysicists are interested in all particles’ trajectories. This traditionally used combination of the tree code method and the PP method will serve as the control, or reference design, in this experiment, and the novel algorithms will be compared against it.

The findings of this research will provide scientists with alternatives to existing n-body simulations that are optimized while keeping the accuracy comparable to that of the reference design. The titular approach to n-body simulations can be extended not only to astrodynamics but also to other fields of modeling, such as modeling the structural strength of bridges or the attractions between molecules.

Big O Notation

Simply stated, Big O notation is how computer scientists describe how an algorithm’s computing time asymptotically increases in response to an increase in data size. An O(n²) algorithm, for example, has a computing time directly proportional to the square of the size of the input data. One of the main goals of this research is to reduce the computing time from O(n²) to at least O(n log n) or O(n) [3].
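The difference between these growth rates can be made concrete with a simple operation count. This is an illustrative cost model, not measured data: a brute-force PP step evaluates every ordered pair of bodies, while a lookup-table step does a fixed amount of interpolation work per body.

```python
def pp_force_evaluations(n):
    # Brute-force PP: every ordered pair (i, j) with i != j costs one evaluation,
    # so the work grows as n * (n - 1), i.e. O(n^2).
    return n * (n - 1)

def lookup_evaluations(n, interp_cost):
    # Lookup-table scheme: one interpolation of fixed cost per body, i.e. O(n).
    return n * interp_cost
```

Doubling the number of bodies roughly quadruples the PP work but only doubles the lookup work, which is the asymptotic gap this research aims to exploit.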

Interpolation

In mathematics, interpolation is a method of approximation that constructs new data points from a given set of known data points. For example, given several data points, interpolation methods can be used to estimate values between the known data. There are different ways of estimating these intermediary values, and they correspond to different levels of interpolation. Interpolation of level 0, also known as piecewise constant interpolation, is the simplest interpolation method: from the given data points, it locates the nearest data point and assigns its value. Interpolation of level 1, also known as linear interpolation, estimates the value of a function by connecting two adjacent data points with a line and evaluating the function on that line. Interpolation of level 2 and higher, also known as polynomial interpolation, is a generalization of linear interpolation: instead of connecting two adjacent data points with a linear function, polynomial interpolation connects them with a polynomial of degree 2 or higher, making the estimated function much smoother.
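Levels 0 and 1 can be sketched in a few lines of Python (the study used Mathematica's built-in interpolation; these function names are illustrative). Here the known data are samples of sin(x), and each level estimates values between the samples:

```python
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 9)  # known sample points
ys = np.sin(xs)                        # known values at those points

def interp_level0(x):
    """Level 0 (piecewise constant): return the value at the nearest known point."""
    return ys[np.argmin(np.abs(xs - x))]

def interp_level1(x):
    """Level 1 (linear): connect the two neighbouring points with a line."""
    return np.interp(x, xs, ys)
```

Higher levels would fit a polynomial through several neighbouring points instead of a line; in Mathematica this corresponds to the InterpolationOrder option of the built-in Interpolation function.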

Hypothesis

If an n-body simulation of planetary bodies is computed with interpolation-based algorithms and compared to the traditionally used algorithm, then the interpolation-based algorithms will reduce computing time, while maintaining comparable accuracy.

The novel algorithms that will be designed and tested in this experiment will use the “pre-computed vector space and interpolation” method, which is a form of lookup table. This method computes the sum of the inverse-square-law acceleration vectors for all bodies at preselected, finite locations. Simply stated, a gravitational field, represented as a vector space, will be computed for each body of the simulation. However, because this method cannot compute an infinite number of points in space, it trades accuracy for computing speed. Two ways of increasing the accuracy are increasing the density of the pre-computed vector space and increasing the level of interpolation, both of which cost computing time. By designing an algorithm that uses the “pre-computed vector space and interpolation” method, it should be possible to reduce the computing time while maintaining the accuracy of the traditional algorithm by balancing these two factors, thus finding the optimal tradeoff point.
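The idea above can be sketched as follows. This is a minimal Python illustration in 2-D (the actual work was done in Mathematica, and the grid bounds, names, and level-0 lookup are assumptions for the example): the field of one body is tabulated on a regular grid, and a query point is answered from the table instead of from the inverse-square law.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def precompute_field(source_pos, source_mass, grid_min, grid_max, n_cells):
    """Tabulate the 2-D acceleration field of one body on a regular grid."""
    axis = np.linspace(grid_min, grid_max, n_cells)
    gx, gy = np.meshgrid(axis, axis, indexing="ij")
    rx = source_pos[0] - gx          # displacement from grid node to source
    ry = source_pos[1] - gy
    r3 = (rx**2 + ry**2) ** 1.5
    with np.errstate(divide="ignore", invalid="ignore"):
        ax = G * source_mass * rx / r3   # undefined at the source node itself
        ay = G * source_mass * ry / r3
    return axis, ax, ay

def lookup_acceleration(axis, ax, ay, point):
    """Level-0 lookup: acceleration at the nearest pre-computed grid node."""
    i = np.argmin(np.abs(axis - point[0]))
    j = np.argmin(np.abs(axis - point[1]))
    return np.array([ax[i, j], ay[i, j]])
```

Raising the interpolation level replaces the nearest-node lookup with a smoother estimate built from several neighbouring nodes, and raising `n_cells` densifies the table; both improve accuracy at the cost of pre-computation time, which is exactly the tradeoff the experiment measures.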


Fig. 1. A graphical representation of the effect of interpolation level and the number of pre-computed values on estimation error


Fig. 2,3,4. The effect of piecewise, linear, and quadratic interpolation on the estimation accuracy of a 3D function

Materials and Methods

Materials

Hardware: a computer with a CPU clock speed greater than 1 GHz and more than 4 GB of memory

Software: Mathematica version 10 or higher

Procedures

The procedure will begin with a solvable two-body problem, e.g., the Sun and the Earth, which will be solved analytically using differential equations. The same two-body problem will then be solved using the computational differential equation algorithm RK4 to recreate the PP algorithm, and it will be verified that this computational solution matches the results of the analytical method. Since the analytical solution is not feasible for a higher number of bodies, this PP algorithm will become the reference design, or control, against which the other algorithms will be measured.

Fig. 5. A graphical representation of the method used to measure the error of the novel algorithm: the distance between theoretical final position and that of the novel algorithm.

The performance of the reference design will be assessed by measuring the computing time and error. The computing time can be measured by timing how long the algorithm took to run using Mathematica’s AbsoluteTiming function, and the error can be measured by calculating the distance between the theoretical final positions of the bodies, computed by the analytical method, and the observed final positions computed by the novel algorithm. The smaller the distance, the higher the accuracy.
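This measurement scheme can be sketched in Python (AbsoluteTiming is a real Mathematica function; here `time.perf_counter` plays its role, and the helper name `measure` is an assumption for the example): time the run, then score the error as the distance between the observed and theoretical final positions.

```python
import time
import numpy as np

def measure(algorithm, theoretical_final):
    """Time one algorithm run and score its error against the analytic result.

    `algorithm` is a zero-argument callable returning the final position vector;
    the error is the Euclidean distance to the theoretical final position.
    """
    start = time.perf_counter()
    observed_final = algorithm()
    elapsed = time.perf_counter() - start
    error = np.linalg.norm(observed_final - theoretical_final)
    return elapsed, error
```

A smaller returned `error` means the trajectory ended closer to the analytically computed endpoint, i.e. higher accuracy.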

Next, to create a pre-computed vector space, a new algorithm will be written to compute the gravitational force exerted by a body at regular intervals away from the body. Ten different intervals will be used to vary the density (grid size) of the pre-computed vector values in the vector space, and five different interpolation levels will be tested: 0, 1, 2, 3, and 4. For simplicity, Mathematica’s built-in interpolation functions will be utilized to approximate the accelerations at the locations of bodies in the simulation.

The optimal algorithm configuration will be determined by minimizing the product of computing time (seconds) and error (meters) for each algorithm. The product is used because minimizing it drives both quantities down simultaneously. To determine the Big O computing time of the optimal algorithm, various numbers of bodies will be inputted into the algorithm, and regression analysis will be used to fit a function to the measured computing times.
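The selection criterion amounts to a one-line minimization. A minimal sketch, assuming `results` maps each (interpolation level, grid size) configuration to its measured (computing time, error) pair; the names are illustrative:

```python
def best_configuration(results):
    """Pick the (interpolation level, grid size) minimizing time * error.

    `results`: dict mapping a configuration tuple to (computing_time, error).
    """
    return min(results, key=lambda cfg: results[cfg][0] * results[cfg][1])
```

A configuration that is fast but wildly inaccurate, or accurate but slow, both produce a large product, so the minimum lands on a balanced tradeoff point.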

Results

The constants in this experiment included the initial positions of the bodies in the simulations, the initial velocities of the bodies, the RK4 differential equation solver, the time intervals used to calculate acceleration, and the simulation duration. To measure the computing time, the methods were executed on simulations with up to 30 bodies as the number of pre-computed acceleration values was increased, using Mathematica’s AbsoluteTiming function, which measures the wall-clock time required to run a program. The errors of the simulations were determined by measuring how far each simulation’s endpoint deviated from the theoretical final position, as shown in Figure 5.

Fig. 6. The Algorithms’ Computing Time for Various Interpolation Levels and Pre-computed Vector Space Densities, i.e. the computing time of the 2-body problem using the novel algorithm as a function of interpolation level and grid size in comparison to that of the reference design

Table 1: Computing Time of 2-body Problem vs Interpolation Level and Pre-computed Vector Space Density

As shown in Figure 6 and Table 1, while the reference design had a computing time of 167.12 seconds, the computing times of the novel algorithms were all lower, regardless of interpolation level and grid size. As the interpolation level and the density of the pre-computed vector space increased, so did the computing time.

Fig. 7. The error of the 2-body problem using the novel algorithm as a function of interpolation level and grid size in comparison to that of the reference design

Table 2: Algorithm Error of 2-body Problem vs Interpolation Level and Pre-computed Vector Space Density

As shown in Figure 7 and Table 2, none of the novel algorithms surpassed the accuracy of the reference design. This is not surprising, because the reference design has no approximation scheme and is thus theoretically the most accurate algorithm possible. However, increasing the interpolation level and the density of the pre-computed vector space drastically decreased the error of the novel algorithm.

Discussion

The Optimal Algorithm: Minimizing the Product of Computing Time and Error

Fig. 8. The product of computing time and error of the novel algorithm for the 2-body problem as a function of grid size for various interpolation levels

Table 3: Optimal Algorithm Design for 2-body Problem

The optimal algorithm design was found to be the novel algorithm with an interpolation level of 4 and the grid size highlighted in yellow in Table 3. As shown in Figure 8, this configuration minimized the product of computing time and error. When using this interpolation level and grid size for the 2-body problem, the computing time was lower than that of the reference design, at the cost of a greater error.

The Optimal Algorithm’s Computing Time for Various Number of Bodies

Fig. 9. Computing time as a function of the number of bodies inputted into the optimal algorithm and reference design

Table 4: Number of Bodies vs. Computing Time

As seen in Figure 9, when the number of bodies inputted into the optimal algorithm was increased, the computation time did not increase quadratically as the reference design’s results did, but instead increased linearly, i.e., O(n). For two bodies, the computing time of the method using interpolation level 4 was lower than that of the reference design. For thirty bodies, the computing time of the method using interpolation level 4 was again lower than that of the reference design.

Fig. 10. Computing time as a function of the number of bodies inputted into the optimal algorithm and reference design.

Equation 1: Quadratic Curve Fitting of the Reference Design for n Bodies

Equation 2: Linear Fitting of the Optimal Algorithm for n Bodies

As seen in Equation 1, the computing time of the reference design for n bodies is overwhelmingly quadratic, with minimal coefficients for the cubic and quartic terms. Likewise, Equation 2 shows that the computing time of the optimal algorithm for n bodies is overwhelmingly linear, with minimal coefficients for the quadratic and cubic terms.
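This kind of regression check can be reproduced with a polynomial fit. The timing values below are made-up illustrative numbers, not the paper's measurements; the point is that fitting a quadratic to quadratically growing data recovers a dominant leading coefficient, while fitting it to linear data leaves the leading coefficient near zero:

```python
import numpy as np

n = np.array([2.0, 5.0, 10.0, 20.0, 30.0])  # numbers of bodies
quad_times = 0.05 * n**2   # reference-design-like scaling (illustrative)
lin_times = 0.8 * n        # optimal-algorithm-like scaling (illustrative)

# np.polyfit returns coefficients from highest power down to the constant term.
quad_fit = np.polyfit(n, quad_times, 2)   # expect [0.05, ~0, ~0]
lin_fit = np.polyfit(n, lin_times, 2)     # expect [~0, 0.8, ~0]
```

A negligible leading coefficient in the fit is exactly the evidence Equations 1 and 2 use to classify the reference design as O(n²) and the optimal algorithm as O(n).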

Graphical Presentation of Equivalence of the Trajectories Computed by Novel and Traditional Algorithm

Fig. 11: Two Body Simulation

Fig. 12: Four Body Simulation

Fig. 13: Six Body Simulation

Fig. 14: Thirty Body Simulation

Conclusion and Future Research

The data supported the hypothesis that if an n-body simulation of planetary bodies is computed with interpolation-based algorithms and compared to the traditionally used algorithm, then the interpolation-based algorithms will reduce computing time while maintaining comparable accuracy. Compared to the reference design, the pre-computed vector space with interpolation levels 0, 1, 2, 3, and 4 all decreased the computing time, with interpolation level 4 having the longest computing time and interpolation level 1 having the shortest. The average accuracy, on the other hand, increased in the reverse of this order.

The optimal algorithm design, among the tested interpolation levels, was the pre-computed vector space method with interpolation level 4. When using this algorithm, the computing time was lower than that of the reference design, at the cost of a greater error. As seen in Figures 11, 12, 13, and 14, the traditional algorithm and the novel algorithm using interpolation level 4 produced almost identical results.

When multiple bodies were added to the simulation, the interpolation level 4 method kept a linearly increasing computation time, i.e., O(n), while the computing time of the reference design increased quadratically, as shown in Figures 9 and 10. For the 30-body simulation, the optimal algorithm’s computing time was lower than that of the reference design.

In future research, these findings could be utilized to create simulations of a much larger number of bodies, or of different orders of magnitude of bodies, such as Saturn’s rings, which contain millions of particles. By creating a large-scale simulation of Saturn’s rings, it should be possible to simulate the formation of the rings and the dynamics of their particles. Only this type of algorithm, i.e., one with O(n) computing time, would be able to handle thousands to millions of bodies while retaining a comparable level of accuracy.

Furthermore, it would be interesting to extend the findings of this research to create optimal algorithms for n-body simulations in 3-D space. However, the optimal algorithm found in this research may not be feasible for 3-D n-body simulations, because pre-computing 3-D vector spaces would require massive amounts of memory.

References

1. Barker, C. A. (2009). Numerical Methods for Solving Differential Equations – The Runge-Kutta method. Retrieved November 30, 2015, from Mathematics & Science Learning Center Computer Laboratory: http://calculuslab.deltacollege.edu/ODE/7-C-3/7-C-3-h.html

2. Meeus, J. (1998). Astronomical Algorithms (2nd ed.). Richmond: Willmann-Bell, Inc.; Thomson, W. T. (1986). Introduction to Space Dynamics. New York: Dover Publications, Inc.

3. Abrahms, J. (2013, July 17). Big-O notation explained by a self-taught programmer. Retrieved November 30, 2015, from https://justin.abrah.ms/computer-science/big-o-notation-explained.html

About the Author

Justin Lee, 17, is a senior at Adlai E. Stevenson High School in Illinois, USA. He developed an interest in math and science at a young age and has committed his time to various math- and science-related activities, including math competitions, computer programming, and science research.
