Unless hardware-accelerated PCs share their "early" results with the rest, who then follow their lead.
I think the problem of large-scale simulation of physical worlds
has been a key issue in HLA, and there are a couple of open source implementations available, such as CERTI. http://www.cert.fr/CERTI/

I am new to Croquet, but I am very interested in using it as a platform for a distributed game we are developing for kids. I am making my way through the documentation and code. I'd like to know more about how Croquet should define the world (or worlds connected via portals), and how the simulation and synchronization of these worlds should be performed.

Chaos theory has already shown that slight differences in initial conditions and in continuous input can result in big differences in the end result (a small numeric illustration follows the quoted message below). However, dead reckoning and other techniques have been used in online games to keep players in sync. From my reading and understanding of Croquet's object system, there is an attempt to address and simplify the problems of dealing with distributed objects; however, I haven't seen how these are addressed.

David

On Apr 27, 2006, at 9:43 PM, Eugen Leitl wrote:

>> The first question is whether the API allows for a repeatable
>> experience. We can't have the computations be different on different
>> machines (e.g., because the random numbers are seeded differently or
>> produce non-repeatable results, or if there are floating point
>> computations that produce different results on different boards).
>
> This is a potentially nasty problem. Game worlds develop towards
> realistic in-game physics, becoming effectively large-scale
> physical simulations. Both random and float-derived divergences
> will result in ultimately widely divergent simulation worlds
> which are supposed to be in sync.
>
> If you don't utilize hardware acceleration for world physics,
> then world detail would be a caricature. So effectively you
> have to ban floats, and hardware noise sources.
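A minimal illustration of the chaos point above, in plain Python rather than Croquet's Squeak: two copies of a chaotic update whose initial states differ by roughly one double-precision ulp become completely uncorrelated within a hundred steps. The logistic map here is just a stand-in for any nonlinear physics step.

```python
# Two replicas of a chaotic update seeded with a one-ulp difference.
r = 3.9                 # logistic-map parameter in the chaotic regime
x = 0.5                 # replica A
y = 0.5 + 1e-16         # replica B: off by ~one double-precision ulp

for step in range(100):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)

# After roughly 75 iterations the two "worlds" bear no relation
# to each other; the deviation has grown from 1e-16 to order 1.
print(abs(x - y))
```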
In reply to this post by Darius Clarke
On Fri, Apr 28, 2006 at 08:40:18PM -0700, Darius Clarke wrote:
> Unless hardware-accelerated PCs share their "early" results with the rest,
> who then follow their lead.

If you have a large simulation (say, a million particles) with diverging local input (a source of entropy, or numerical noise), you have an exponential amplification runaway of the smallest initial deviation. Diagnosing a deviation condition would require several strategically placed measurement points, and correcting it would be completely impractical (huge amounts of traffic over the WAN, which would effectively freeze the simulation).

I think there is value in Second Life's centralized simulation approach, especially since servers close to the backbone can easily have two to three orders of magnitude more bandwidth than the average home node. Not only would they be able to serve as supernodes helping NATed home nodes to rendezvous; with Opterons and the Cell (which comes in a blade cluster flavor, too), such systems could deliver substantial *local* numerical performance.

Of course one could also ban floats (integers don't have accumulating rounding error issues) and hardware entropy sources. But avatars are also a source of noise, especially if many of them are interacting in the same space while connecting over the WAN (not a problem on a local LAN).

Question: I have several servers (Debian) in the rack. How can I run OpenCroquet headless, offering a hub for other people to connect?

--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
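A hedged sketch of the "ban floats" idea, again in Python rather than Squeak: if replicated world state lives in fixed-point integers, every machine computes bit-identical results regardless of its FPU or rounding mode. The 16.16 format and the helper names are invented for illustration.

```python
FP_SHIFT = 16                      # 16 fractional bits
FP_ONE = 1 << FP_SHIFT

def to_fixed(x: float) -> int:
    # Floats appear only here, when authoring constants;
    # the replicated step itself is pure integer math.
    return int(round(x * FP_ONE))

def fp_mul(a: int, b: int) -> int:
    return (a * b) >> FP_SHIFT     # integer multiply, then rescale

# position += velocity * dt, entirely in deterministic integer arithmetic
pos = to_fixed(10.0)
vel = to_fixed(25.0)               # 25 m/s, as in the car example below
dt = to_fixed(1.0 / 30.0)          # one 30 Hz simulation tick
pos += fp_mul(vel, dt)
```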
In reply to this post by Darius Clarke
> If you have a large simulation (say, a million particles) with diverging
> local input (a source of entropy, or numerical noise), you have an
> exponential amplification runaway of the smallest initial deviation.

I think an idea would be to split things into two parts: non-synchronized special effects, and physics that affects the world. Particles could in many cases just be special effects that the client calculates locally, if it has powerful enough hardware. Something like the Ageia PhysX PPU would likely only be useful for effects, but it could make things look much nicer for those who have it.

> Of course one could also ban floats (integers don't have
> accumulating rounding error issues) and hardware entropy
> sources. But avatars are also a source of noise, especially
> if many of them are interacting in the same space while
> connecting over the WAN (not a problem on a local LAN).

Maybe floats and hardware entropy sources could be banned for "world physics", anything affecting other objects in a Croquet space, while "effects physics" could just use floats and all that, since it doesn't have to be perfectly synchronized. A sketch of this split follows.

-Mats
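A minimal sketch of the split Mats proposes, under the same fixed-point assumptions as the earlier example (the class names are hypothetical): replicated world physics uses deterministic integer math driven by synchronized messages, while purely local effects are free to use floats, since their divergence is never shared.

```python
class WorldBody:
    """Replicated state: deterministic integer math only."""
    def __init__(self, pos_fp: int, vel_fp: int):
        self.pos_fp, self.vel_fp = pos_fp, vel_fp

    def step(self, dt_fp: int) -> None:
        # 16.16 fixed-point: multiply, then drop the extra fraction bits
        self.pos_fp += (self.vel_fp * dt_fp) >> 16

class EffectParticle:
    """Local-only state: floats are fine, divergence is harmless."""
    def __init__(self, pos: float, vel: float):
        self.pos, self.vel = pos, vel

    def step(self, dt: float) -> None:
        self.pos += self.vel * dt

# The world body ticks in lockstep on every node; the particle only
# ticks on nodes that choose to render the effect.
```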
In reply to this post by Darius Clarke
How does an accelerated machine give earlier results than
unaccelerated machines? If a car is moving east at 90 kilometers an hour, it's doing 25 meters in a second. If the distributed simulations use the same synchronized clock, the car is at the same position, given the same origin, in all simulations, accelerated or not. The accelerated machines may get a boost in frame rate: running faster, they may render 100 frames per second, dividing the animation of the 25-meter move into 100 frames, while the unaccelerated ones get 30 frames. As long as we are doing Newtonian physics, there should be no difference between accelerated and unaccelerated machines.

On the other hand, there will be different network delays, and each simulation may get an event out of order. Say player 1 makes a left turn at T0; player 2 may receive this event at T1, where T1 > T0, having already simulated player 1's position without the left turn. There would be some dead reckoning to be done here, but that's a well-established technique in HLA, widely used in military simulation and in fast-paced online games such as car racing and first-person shooters.

David

On Apr 29, 2006, at 12:40 PM, Darius Clarke wrote:

> Unless hardware-accelerated PCs share their "early" results with
> the rest, who then follow their lead.
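A small sketch of the dead-reckoning scenario David describes (plain Python, invented helper names): player 2 extrapolates player 1's position at constant velocity, then re-extrapolates from the turn point once the late event arrives.

```python
def dead_reckon(pos, vel, dt):
    """Constant-velocity extrapolation: pos + vel * dt, componentwise."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

# Player 2's local estimate of player 1, heading east at 25 m/s,
# extrapolated 1.2 s past the last known state:
est = dead_reckon(pos=(0.0, 0.0), vel=(25.0, 0.0), dt=1.2)   # (30.0, 0.0)

# The "left turn at T0 = 1.0 s" event arrives late, at T1 = 1.2 s.
# Re-extrapolate from the turn: east until T0, then north after it.
at_turn = dead_reckon((0.0, 0.0), (25.0, 0.0), 1.0)          # (25.0, 0.0)
corrected = dead_reckon(at_turn, (0.0, 25.0), 0.2)           # (25.0, 5.0)

# Player 2 then blends its rendered position toward `corrected`
# over a few frames rather than snapping.
```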
In reply to this post by Darius Clarke
> I think there is value in Second Life's centralized simulation
> approach, especially since servers close to the backbone can
> easily have two to three orders of magnitude more bandwidth than
> the average home node. Not only would they be able to serve
> as supernodes helping NATed home nodes to rendezvous; with Opterons
> and the Cell (which comes in a blade cluster flavor, too) such
> systems could deliver substantial *local* numerical performance.

HLA is already a well-established system for doing distributed simulations. The use of centralized servers in commercial online games is more for cheat prevention than for simulation synchronization; P2P-based simulations are more robust and efficient than the client/server model, if cheating can be prevented. There is some work on cheat prevention in a P2P-architecture MMORPG at U Penn. http://www.cis.upenn.edu/~hhl/games/

HLA is an IEEE standard for distributed simulation. http://en.wikipedia.org/wiki/High_Level_Architecture

David
In reply to this post by Darius Clarke
> HLA is an IEEE standard for distributed simulation.
>
> http://en.wikipedia.org/wiki/High_Level_Architecture

How do you use HLA for physics simulation? Do you have any links to papers or example code? Can it be integrated with ODE (the Open Dynamics Engine)?

-Mats
In reply to this post by Darius Clarke
On Sat, Apr 29, 2006 at 08:27:23PM +0200, Mats wrote:
> Maybe floats and hardware entropy sources could be banned for "world
> physics", anything affecting other objects in a Croquet space, while
> "effects physics" could just use floats and all that, since it doesn't
> have to be perfectly synchronized.

I would be intensely uncomfortable with my local simulations forking from somebody else's. Even if you specify this in the semantics, it's not obvious authors would understand the semantics, and the user/author interpretation would always be right. It's best to avoid this can of worms by banning state forks right from the start.

--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
In reply to this post by Darius Clarke
David,
For instance: say you wanted to share an accurate simulation of a few dozen mutually interacting particles or planets via gravity, electromagnetism, or whatever force laws, where the time scale is such that the system state is visibly dynamic. In N-body problems, molecular dynamics, finite element analysis, fluid dynamics, and numerous other classes of simulation, the sheer scale of computational speed necessary to run a live simulation gets far beyond the ken of a single PC processor very quickly with increased N, the number of degrees of freedom in the system (a cost sketch follows this message).

Two general solutions:

1) Have one particularly powerful node (perhaps a supercomputing cluster) do the heavy sim math and emit messages about the running change of state to the other connected nodes, while receiving messages from other nodes about any user interactions with the simulated subsystem. (This is NON-replicated computation.)

2) If physics acceleration cards were to become common, the realm of replicated computations (hence less trouble with latency) would become possible. But as others have noted, that has not happened yet.

Best,

Ed

David Li said:
> How does an accelerated machine give earlier results than
> unaccelerated machines? If a car is moving east at 90 kilometers an
> hour, it's doing 25 meters in a second. [...]

-----------------------------------------------------
Ed Boyce
Education and Outreach Writer/Editor
Coordinator, Visualize Education Virtual Institute
Engaging People in CyberInfrastructure (EPIC) Program
http://www.eotepic.org

Boston University Center for Computational Science
3 Cummington Street, 5th Floor
Boston, Massachusetts 02215

413-245-3997
[hidden email]
------------------------------------------------------
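To make Ed's scaling point concrete, here is an illustrative sketch (plain Python, 2D, unit masses, G = 1; not anyone's production code) of a direct-summation gravity step. The force loop is O(N^2) per timestep, so doubling N quadruples the work — which is why live N-body simulation outgrows a single PC so quickly.

```python
def gravity_step(pos, vel, dt):
    """One timestep of direct-summation gravity on lists of [x, y] pairs."""
    n = len(pos)
    for i in range(n):                      # O(N^2) force accumulation
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + 1e-9   # softening avoids blowup at r=0
            inv_r3 = r2 ** -1.5
            ax += dx * inv_r3
            ay += dy * inv_r3
        vel[i][0] += ax * dt
        vel[i][1] += ay * dt
    for i in range(n):                      # integrate positions
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt
```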
In reply to this post by Darius Clarke
Hi Ed,
I think we may be talking about two different "simulations." You are talking about scientific simulation, searching for solutions or verifying models. Most of these simulations are intractable and beyond any computational power we may ever have; they are mostly run as batch jobs to search only part of the space, and are easily distributed.

I look at interactive simulation, in the context of Croquet, as adding physics to make the virtual world more "real" for the players. I think this is closer to today's multiplayer online games: some precision in the models has to be sacrificed for speed, so that the world's physics runs at an interactive rate.

I guess we are referring to two different kinds of simulation.

David

On Apr 30, 2006, at 10:09 AM, Ed Boyce wrote:

> For instance: say you wanted to share an accurate simulation of a few
> dozen mutually interacting particles or planets via gravity,
> electromagnetism, or whatever force laws, where the time scale is such
> that the system state is visibly dynamic. [...]
>
> Two general solutions: 1) Have one particularly powerful node (perhaps a
> supercomputing cluster) do the heavy sim math and emit messages about
> the running change of state to the other connected nodes, while
> receiving messages from other nodes about any user interactions with the
> simulated subsystem. (This is NON-replicated computation.)
>
> 2) If physics acceleration cards were to become common, the realm of
> replicated computations (hence less trouble with latency) would become
> possible. But as others have noted, that has not happened yet.
>
> Best,
>
> Ed
In reply to this post by Darius Clarke
HLA is an architecture for federating simulations, at more or less the
same level as TeaTime in Croquet; actually, HLA is at an even lower level than TeaTime, since TeaTime also handles the object messaging. To integrate with ODE, one has to define a conflict resolution strategy for resolving conflicting events: every local simulation does "prediction," and from time to time authoritative updates from other players bring the system back in sync (see the sketch after this message). Here is a good article about this.

Dead Reckoning: Latency Hiding for Networked Games
http://www.gamasutra.com/features/19970919/aronson_01.htm

David

On Apr 30, 2006, at 5:38 AM, Mats wrote:

>> HLA is an IEEE standard for distributed simulation.
>>
>> http://en.wikipedia.org/wiki/High_Level_Architecture
>
> How do you use HLA for physics simulation? Do you have any links to
> papers or example code? Can it be integrated with ODE (the Open
> Dynamics Engine)?
>
> -Mats
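A hedged sketch of the predict/correct cycle the Gamasutra article describes (plain Python, 1D for brevity; the class and its 20%-per-tick convergence rate are invented for illustration): every tick the local node dead-reckons remote state, and when an authoritative update arrives it becomes the new baseline while the rendered value converges toward it instead of snapping.

```python
class RemoteEntity:
    def __init__(self, pos, vel):
        self.base_pos, self.base_vel = pos, vel   # last authoritative state
        self.shown = pos                          # what we actually render

    def predict(self, dt):
        self.base_pos += self.base_vel * dt       # dead-reckoned baseline
        # converge the rendered value 20% per tick toward the prediction,
        # hiding corrections from the player
        self.shown += 0.2 * (self.base_pos - self.shown)

    def authoritative_update(self, pos, vel):
        # resync the baseline; `shown` catches up smoothly via predict()
        self.base_pos, self.base_vel = pos, vel
```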
In reply to this post by Darius Clarke
David,
I realize well the distinction between those two traditional classes of
simulation (let's call them the computational science or HPC class vs. the gaming physics class). Historically, for performance and design reasons, the two classes have been quite distinct. I'm suggesting that Croquet is an excellent architectural context where various hybrid practices are possible and, in fact, are emerging in current development efforts.

-- Ed

David Li said:
> I think we may be talking about two different "simulations." You
> are talking about scientific simulation, searching for solutions
> or verifying models. [...]
>
> I guess we are referring to two different kinds of simulation.
>
> David
In reply to this post by Darius Clarke
Hi Ed,
I think that's a very interesting idea, and Croquet serves as an interesting platform to bring these together. I am evaluating Croquet for a pre-school science education platform I am working on. As computation becomes more and more essential to science, and simulation is going to play a big role in the future of science, I think it would be great to have a unified platform that could integrate game physics and real physics. However, I think this is a long work in progress, because of two conflicting requirements: interactivity and accuracy.

I think it would be good to come up with some scenarios for how these two could be combined. Here are some thoughts.

1. UI physics: windows, portals, and "game" objects such as avatars. Here interactivity is more important than accuracy; these are artificial artifacts created in the world, and their physics is whatever their authors make it.

2. Portals into simulations of some cool physics. Protein folding and black hole collapses are actually interesting to look at, and their simulations can be more easily parallelized and distributed across computation nodes. It would be nice to have an architecture supporting distributed simulation, so the power of networked computation can be leveraged.

For our project, we are working on artificial-life characters to be evolved in the world, with a distributed underlying simulation for the ecosystem as well. We are planning to distribute the simulation across all available network nodes, so it would be nice to have a unified model for writing simulations. Another thread on the list mentioned AI code, and I think that would also be part of the simulation.

David

On May 1, 2006, at 2:23 AM, Ed Boyce wrote:

> I realize well the distinction between those two traditional classes of
> simulation (let's call them the computational science or HPC class vs.
> the gaming physics class). Historically, for performance and design
> reasons, the two classes have been quite distinct. I'm suggesting that
> Croquet is an excellent architectural context where various hybrid
> practices are possible and, in fact, are emerging in current
> development efforts.
>
> -- Ed