I was thinking of fractal mountains perhaps illustrated well
by p.85 exercise 2.1 and p.553 section 11.7.4 (only one page) of Edward Angel's "Interactive Computer Graphics", 3rd edition.

Imagine you want to make a VR movie of a rocket landing, approaching the mountain top, in "Powers of Ten, the sequel" (with a continuum of logarithmic scaling).

The impression of a rock texture is, in p.85 exercise 2.1, done with random generation. You don't store a real mountain, but generate texture when it is scaled up enough for you to resolve it.

The landing guy's viewpoint will need the fractal resolution; only he should generate it, and it should not be copied to the model of the guy who is still orbiting.

All that need be shared by the Croquet model is not an instantiation, but rather a probabilistic model, while what is rendered is computed only by a viewpoint's CPU.

Rendering of the random mountain detail is to the retina of a viewpoint, computed by and known only to its hardware and user.
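A minimal sketch of the idea (not from Angel's book and not Croquet code; the function and parameter names are my own): a 1-D midpoint-displacement ridge line that is refined only as deeply as a given viewpoint needs, yet is fully determined by a shared seed, so any two viewers who refine to overlapping depths see the same coarse shape.

```python
import random

def fractal_profile(seed, depth, roughness=0.5):
    """1-D midpoint-displacement ridge line.

    Only the shared seed and the requested depth need to travel between
    peers; the heights are regenerated locally, and more deeply for a
    closer viewpoint.  Hypothetical sketch, not Croquet code.
    """
    rng = random.Random(seed)            # deterministic given the seed
    heights = [0.0, 0.0]                 # coarse endpoints of the ridge
    scale = 1.0
    for _ in range(depth):               # each level doubles the resolution
        refined = []
        for a, b in zip(heights, heights[1:]):
            refined.append(a)
            refined.append((a + b) / 2 + rng.uniform(-scale, scale))
        refined.append(heights[-1])
        heights = refined
        scale *= roughness               # smaller bumps at finer levels
    return heights

# The orbiting viewer might ask for depth 4, the landing viewer for depth 12;
# the profiles agree wherever they overlap because the seed is shared and the
# coarser levels are generated first, in the same order, on both machines.
coarse = fractal_profile(seed=42, depth=4)
fine   = fractal_profile(seed=42, depth=12)
```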
Considering that one of the basic pillars of Croquet is that "everyone runs the same world bit-identically", I'm not sure that intentionally introducing differences in the world between viewers is that great an idea. I do have two responses to this, however:

(1) SecondLife does this to some extent, as this bug report I found shows:

SL-29834: terrain texture is not consistent from session to session
Affects: 1.13.0 (Graphics - Environment)
Repro:
* Go to any nearly flat parcel at an elevation where the terrain textures are in transition
* Take a snapshot of the terrain
* Relog
* Take another snapshot of the terrain
* Compare the snapshots.
Observe: The terrain texture blotches are in a different pattern. On the mainland this is exhibited by patches of grass that shift to different locations on the parcel.
Expected: The terrain texture blotches should be in a consistent pattern from session to session.

(2) Croquet doesn't appear to do terribly well with worlds of arbitrary complexity. What if there was a rocket ship already landed on the mountain top, and you only saw it if you got close? In general this almost sounds like something that tiling should handle. The individual islands/tiles might be dynamically constructed as needed by the viewer, but all viewers of a tile should be seeing the same thing.
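One way to keep "all viewers of a tile seeing the same thing" without replicating the generated data is to derive each tile's seed deterministically from a world-wide seed and the tile's coordinates. A hedged sketch, assuming nothing about SecondLife's or Croquet's actual schemes; the hashing choice and names are illustrative only:

```python
import hashlib

def tile_seed(world_seed: int, tile_x: int, tile_y: int) -> int:
    """Derive a stable per-tile seed from the shared world seed and the
    tile's coordinates.  Every peer that constructs this tile on demand
    gets the same seed, hence the same generated terrain and texture."""
    key = f"{world_seed}:{tile_x}:{tile_y}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

# Two viewers who wander into tile (12, -3) independently still generate
# identical content, because tile_seed(1234, 12, -3) is the same on both.
```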
I think I agree with you, Erik.

It is disorienting if the texture or terrain changes, and it is even worse if two people viewing the same scene are not quite seeing the same thing. We have enough trouble conveying our thoughts to one another with language ambiguities, without the environment adding to the confusion. I had been chewing on the issue mentally as I read the rest of my email when I came across your post. You stated the case quite well.

However, there is a cost. With transfer bandwidth being what it is, how do we handle the problem of insufficient bandwidth for multiple textures across millions of users? It is the horns of a dilemma for sure: bandwidth and speed vs. accuracy and environmental consistency.

I am sure I don't have the answer, unless I can figure out how to make 3 Mbit/s links into 9 Mbit/s links, or find some sneaky algorithm that will overcome the issue. Maybe pseudo-randomness with the seed furnished across the net, so each viewer would achieve the same pattern even though they were rasterized individually?

Regards,
Les H
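That seed-over-the-wire trade-off can be made concrete with a small sketch (names and sizes are illustrative, not from any Croquet code): the bytes that cross the network are just the seed, and every peer that regenerates from it produces a bit-identical pattern.

```python
import random

def generate_texture(seed: int, size: int = 512):
    """Regenerate a noise texture locally from a seed sent over the net.
    Hypothetical sketch: real terrain texturing would blend detail maps,
    but the reproducibility argument is the same."""
    rng = random.Random(seed)
    return [[rng.randrange(256) for _ in range(size)] for _ in range(size)]

# What crosses the wire is the seed (a few bytes), not the 512x512 texel
# array (~256 KB per channel).  Each peer rasterizes independently but
# bit-identically, because the pseudo-random sequence is the same.
peer_a = generate_texture(seed=0xC0FFEE)
peer_b = generate_texture(seed=0xC0FFEE)
assert peer_a == peer_b
```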
The way things currently work is that the Island (i.e., the snapshot
of simulation memory that gets serialized from a peer and delivered to you when you join) is supposed to contain everything that defines the history of the simulation. This includes the state of a random number generator for the Island, and it usually includes a low-resolution version of all the textures used in rendering everything on the Island. Usually, it also contains a GUID indicating higher-resolution versions of these textures.

When you try to render a surface that is visible within the frustum of your own individual camera position, most applications will begin to get the higher-resolution texture that corresponds to that GUID, but will continue rendering. There is machinery in place to check a local disk cache and, if the texture is not present, to request it over the network. The default implementation gets this from one of the peers on that Island, while the KAT has all Islands share a separate media pseudo-island. Either way, when you try to render and you actually have the higher-res image at that point, it gets used.

The effect is that when you visit an Island for the first time, the things you see come into sharper focus one at a time, something like what happens with some kinds of images on a Web page. This could be done with multiple texture resolutions based on distance, but the public code doesn't do this yet.

The key thing is that the high-res stuff isn't part of the Island itself. Whether you have it cached locally or not, the only difference is in rendering quality, not semantics. Once you do have it cached, you will see the same thing every time.

If you want a generated texture, you could do so. I'm not aware of a lot of work done on this, because I would think you would want to do generated textures on the graphics card using a shader program, and that's not in the common distribution. Josh had written a Cg interface for Jasmine at Wisconsin, and it's fair to assume that he's given some thought to a successor... (The Dormouse distro included the shader stuff.)

Anyway, if I wanted a generated thing to be repeatable every time I (or anyone else) rendered at the same resolution range, I think I would store the seed in the model, analogously to the way we store a GUID for the off-island texture.

-Howard
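A hedged sketch of the "render now, sharpen later" pattern described above: check the local cache for the high-resolution version, fall back to a network request if it is missing, and keep rendering with the low-res copy in the meantime. The class and callback names here are illustrative, not the actual Croquet machinery.

```python
import os

class TextureResolver:
    """Illustrative sketch of lazy high-res texture resolution.

    The Island carries the low-res texture and a GUID; the high-res
    bytes live outside the Island, so having them cached or not changes
    rendering quality only, never simulation semantics.
    """
    def __init__(self, cache_dir, fetch):
        self.cache_dir = cache_dir      # local disk cache of high-res textures
        self.fetch = fetch              # callback that requests bytes from a peer
        self.pending = set()            # GUIDs already requested

    def resolve(self, guid, low_res):
        path = os.path.join(self.cache_dir, guid)
        if os.path.exists(path):        # cached: use the sharp version
            with open(path, "rb") as f:
                return f.read()
        if guid not in self.pending:    # not cached: ask the network once...
            self.pending.add(guid)
            self.fetch(guid)            # ...asynchronously, in the real thing
        return low_res                  # ...and keep rendering meanwhile
```

Storing a generator seed in the model, as suggested above, would slot into the same place as the GUID: it names content that every renderer can recreate bit-identically, without the generated pixels ever becoming part of the replicated Island state.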