This semester I have decided to do an independent study with Professor Jeff Erickson.
The project we are working on (like several of the projects I have worked on in the past) involves numerical simulations of physical systems. Consider a function F(x,t) which represents the value of a particular physical variable at a particular point in space (x) at a given time (t). For example, in an application modeling heat transfer, F could represent the temperature at a given point in the material. The function F is defined by a set of initial conditions (at t=0) and then a set of differential equations that define how the value of F at each point x changes over time. The goal is to calculate (or at least estimate) the values of F over time.
Of course there are an infinite number of different values of x, and so an infinite number of variables to track. In order to solve this problem it is necessary to divide the region of interest up into a large number of smaller "cells," and keep track of one value at each cell. The pattern of cells is referred to as a "mesh". In a one dimensional case the "mesh" is trivial, but in a two dimensional problem there are lots of different mesh patterns, such as a square grid or a triangular grid. (I worked on a project with exactly this theme several years ago.)
One way of understanding the point of the "mesh" is by analogy to wargames like Dungeons & Dragons. In a real battle, movement is continuous, but in the game it is necessary to approximate that continuous movement by a discrete grid. And the shape of that grid (square in D&D, hexagonal in many other war games) is like the "mesh", and each point in that mesh is dealt with separately.
Of course in real life time is also continuous, so there are an infinite number of points in time. So just like with space, it is necessary to approximate the continuous time interval with a set of discrete points in time. The standard way of doing this is to choose a "time step" dt - so for example if dt=6 seconds, you calculate the values of F at t=6 based on the values at t=0, then repeat the process to get the values at t=12 from t=6, and so on. (To continue with the D&D analogy, in a real life battle things are happening continuously in real time, but the game divides up the time into a series of discrete "combat rounds".)
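To make this concrete, here's a toy version in Python. This is just my own illustration with made-up numbers (not anything from the actual project), using the 1D heat equation as the physical system:

```python
# A toy illustration (not our actual project code): explicit time stepping
# for the 1D heat equation dF/dt = alpha * d^2F/dx^2. All of the names and
# numbers here are made up for the sake of the example.

alpha = 0.1                        # thermal diffusivity (made-up units)
dx = 1.0                           # spacing between mesh cells
dt = 0.5 * dx * dx / (2 * alpha)   # half the largest stable time step

# Initial conditions at t=0: one value of F per mesh cell, with a hot
# spot in the middle.
F = [0.0] * 50
F[25] = 100.0

def step(F):
    """Advance every cell from time t to time t+dt, all in sync."""
    new_F = F[:]
    for i in range(1, len(F) - 1):
        # Discrete approximation of the second spatial derivative.
        laplacian = (F[i - 1] - 2 * F[i] + F[i + 1]) / (dx * dx)
        new_F[i] = F[i] + dt * alpha * laplacian
    return new_F

# t=0 -> t=dt -> t=2*dt -> ...: repeat the process over and over, just
# like the sequence of combat rounds.
for _ in range(100):
    F = step(F)
```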
Now we come to the main part of our project. The part that is different is this: rather than choosing a mesh once and doing a series of iterations in which the time increases by dt everywhere at once, you keep track of a separate value of t for each vertex in the mesh, and increase the t values one vertex at a time. The advantage of doing this is that in many applications there's a known maximum speed at which effects can propagate, and you can use this to limit the number of simultaneous equations you need to solve. Suppose that the equations are modeling sound waves, that the speed of sound in the material in question is 1000 meters per second, and that the cells are spaced 1 meter apart. Then if you use a dt of 0.001 seconds, in one time step effects can propagate at most one mesh cell, so at each step, rather than having to solve simultaneous equations for all the cells in the mesh, you only have to do so for one cell and its nearest neighbors, because those are the only cells that can affect the given cell. This can significantly speed up the algorithm, and it also allows you to take better advantage of multiple processors, because different processors can be working on different parts of the mesh.
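Here's a rough sketch of that idea in the same toy style - again my own simplified illustration with a placeholder update rule, NOT the actual algorithm from the papers (the footnotes below hint at how much I'm glossing over):

```python
# A toy sketch of the asynchronous idea (my own simplified illustration,
# not the real algorithm): every vertex keeps its own time t[i], and we
# advance one vertex at a time.

import heapq

num_cells = 50
dt = 0.001                  # small enough that effects move at most one
                            # cell per step, given the propagation speed
t = [0.0] * num_cells       # a separate time value for each vertex
F = [0.0] * num_cells
F[25] = 100.0

def local_update(F, i):
    """Advance F[i] by one step using only its nearest neighbors - the
    only cells that can affect it within a single time step."""
    left = F[i - 1] if i > 0 else F[i]
    right = F[i + 1] if i < num_cells - 1 else F[i]
    # Placeholder physics: relax toward the average of the neighbors.
    return F[i] + 0.1 * ((left + right) / 2 - F[i])

# Always advance a vertex with the smallest t. In this simplified picture
# that trivially keeps the time gaps small (see footnote ** for why the
# real story is more complicated).
heap = [(t[i], i) for i in range(num_cells)]
heapq.heapify(heap)
for _ in range(10000):
    ti, i = heapq.heappop(heap)
    F[i] = local_update(F, i)
    t[i] = ti + dt
    heapq.heappush(heap, (t[i], i))
```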
As for the D&D analogy for this one, it's more of a stretch, but here goes. Let's say you have a gigantic battle with 100 different characters arranged in a 10x10 array, and let's say each character only has powers that can affect other characters adjacent to him, and none of them can move.* Of course this battle is going to be slow going, because once one person declares his actions he has to wait for 99 other characters to declare theirs before everyone can act and start the next round. But as it turns out, you don't need to keep everyone in sync like that - you can have some players start their second and subsequent rounds even before everyone finishes their first round. This is possible because of the slow speed of interaction - since it will take nine rounds for anything happening in, say, the top left corner of the board to affect the character in the bottom right corner, one of those corners can be up to nine rounds "ahead" of the other and the game will still work just as normal. Basically, let's say that the guy in the top left corner just finished his 10th round while the guy in the bottom right corner is still deciding on his action for the 5th round. This will still work because the bottom right guy's 5th round action won't affect the first guy until at least the 14th round (it has to get through nine intervening spaces at one space per round), so he doesn't need to know about it yet. The condition that the "time difference" between two cells is no greater than the time it takes for information to move between them is known as the "causality" constraint.
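The causality check from the analogy is easy enough to write down in code (again a toy version of my own; in D&D diagonal squares count as adjacent, so the travel time between two squares is the larger of the row and column differences):

```python
# A toy version of the causality constraint from the analogy. Characters
# sit on a 10x10 grid, and effects travel one square per round.

def rounds_between(a, b):
    """Rounds for an effect at square a to reach square b."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def may_start_next_round(me, my_round, others):
    """others maps each other character's square to their current round.
    I may go ahead as long as I never get further ahead of someone than
    the time it takes for their actions to reach me."""
    return all(my_round + 1 - their_round <= rounds_between(me, square)
               for square, their_round in others.items())

# The top left guy has finished round 10; the bottom right guy is on
# round 5. They are 9 squares apart and 11 - 5 = 6 <= 9, so it's fine.
print(may_start_next_round((0, 0), 10, {(9, 9): 5}))  # True
```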
However, there is an important complication. It is possible in certain circumstances, depending on how the mesh is laid out, to get "stuck" - i.e. there is no way to progress any vertex without violating a causality constraint.** If this happens then you have to (at least temporarily) go back to the original method of pushing up all the vertices at once*** (this is analogous to having everyone take their turns in sync, in the previous analogy). The problem is how to progress in such a way that you avoid getting stuck. There are also lots of other complications, like if you want to refine (i.e. make denser) a part of the mesh in the middle of the program execution, etc.
There is already a program that does this, and a paper about it, for the case of two dimensional meshes. The part that remains to be done is doing this for three dimensional meshes. Some of the algorithm design work is already done, so a lot of the work is just coding it up (but after that we'll get to the other parts, where the algorithms that are needed are not known yet).
In Erickson's words, "most computer science Ph.D. students can't program their way out of a wet paper bag, and that applies to most computer science professors as well."**** Since I am not one of those "most computer science Ph.D. students" he thought it would be a good idea for me to work for him.
---
* This analogy is technically inaccurate because in D&D, each player's turn happens sequentially, so A can do something to affect B, which then affects what B does, which affects C, so C chooses an action which affects D, and so on, all in the space of one round. For this analogy to work you have to assume that all turns happen simultaneously, so that each link in the chain requires another round.
** It might appear that it is never possible to get stuck - if you increment the t value on the vertex that currently has the lowest t value, then you're not increasing any gaps. The explanation is that my description is significantly simplified - in reality it's not actually about the difference between the t values of the vertices, it has to do with the slopes of the sides of the "cells" in the n+1 dimensional space consisting of the n space dimensions plus the time dimension. But I couldn't figure out how to explain that part concisely and this post is long enough as it is.
***I'm actually not 100% sure how this part works. I just talked to Erickson yesterday and haven't read the papers yet.
****Erickson is not the first of my professors to express a similar sentiment. University of Maryland professor Samir Khuller said that when people find out that he is a computer science professor, they sometimes ask him to help fix their computer, and he tells them, "I can't even fix my own computer, how can I fix yours?" Michelle Hugue even once said that she "hates computers."
Saturday, September 5, 2009
4 comments:
Thanks for the D&D analogy. It helps us non-math/computer science people understand.
I read an article the other day explaining that multi-core processors aren't increasing processing speed as much as they could because we haven't solved many of the problems with parallel processing. It sounds similar to what you are working on...Is it?
I'm surprised to hear that most computer science Ph.D. students can't program their way out of a paper bag. Isn't programming what initially attracts people to computer science?
Jeff Erickson's web site is really funny.
Hey, Alex... Your post is very clear. Thanks. I love having an idea of what you're working on, even if the details would be too complex. And I like Erickson's website, too.
For a given problem, let's say you have some sort of accuracy threshold and you want to minimize computing time. How do you choose the optimal mesh? Is that something you can solve in some sort of closed form solution, or does that have to be approximated numerically as well?
What I'm saying is...in your examples it sounds like you start with a mesh and then try to get the best algorithm, but how does one choose the type of mesh one wants to use?
I don't know how you "choose the optimal mesh", and I don't believe that's what our project is focusing on. The way he described it, it was like the initial mesh (and when and where to refine it) was given by some other (already known) algorithm.
The method we are using is designed specifically to work with meshes in the form of a "simplicial complex." This means (in the two dimensional case) that each cell is a triangle, and the only places where the triangles intersect are either common edges or common vertices. (In other words, there's no place where a vertex of one cell is in the middle of an edge of another cell.)
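To illustrate with a made-up example: you can store such a mesh as a list of vertex coordinates plus triangles given as triples of vertex indices, so a shared edge is literally the same pair of vertices in both triangles:

```python
# A made-up example of a two dimensional simplicial complex: a square
# split into two triangles.

vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
triangles = [(0, 1, 2), (1, 3, 2)]

def edges_of(tri):
    """The three edges of a triangle, with endpoints sorted so that
    shared edges compare equal."""
    a, b, c = tri
    return {tuple(sorted(e)) for e in [(a, b), (b, c), (c, a)]}

# In a valid simplicial complex, two triangles meet in a whole shared
# edge, a single shared vertex, or not at all - never a partial edge.
print(edges_of(triangles[0]) & edges_of(triangles[1]))  # {(1, 2)}
```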
As for the parallel programming part, we probably won't get to that for a while. We'll probably start off doing it sequentially, and then work on parallelizing it.