On Jul 7, 2006, at 7:22 AM, David Faught wrote:
> Hi Howard,
> I finally had a chance to try out your new app last night, but it
> wouldn't work for me. I have a reasonably good cable connection
> through TimeWarner and the PC is wired, the same one that I have used
> to Croquet-connect (once, a few weeks ago) to Peter Moore's island at
> UMN. I tried a couple of different times at around 7:30 and 9:30 PM
> and waited several minutes each time, but all I ever got was the big
> red neverending rectangle.
Try again. It looks like the machine running the router (and a bunch
of other stuff on campus) was rebooted last night.
It happens, and we don't have things set to automatically restart
when the machine goes down.
For debugging purposes, please see the thread "Running Wisconsin
Croquet Demo" (e.g., from me July 6).
On Jul 4, 2006, at 9:51 AM, Howard Stearns wrote:
> Subject: Re: Video Conferencing
> To: [hidden email]
> If you go into the 'Wisconsinization' project, there are a number
> of buttons. The one marked '3dWiki' is the same as the button on
> the startup project, and connects to a router on Mac which also has
> a headless peer on Windows connected to it (to provide the current
> world definitions). The one marked 'campus' connects to a router
> and headless peer both running in the same Squeak, on Linux.
Neither of these will be up permanently, but the 'campus' router is
the one likely to go away first.
> After waiting, I interrupted
> Croquet/Squeak and the traceback looked like it was waiting to logon
> to the global cache with some hardcoded ID, not "everyone". I take it
> that this is a separate thing from joining the island?
Yes. Explained below.
> On a different tack, by taking this approach to having persistent
> content, aren't you going a step backwards to a client-server model?
No. And yes. It depends.
We're all pretty used to the components of the client-server model,
and we know what the issues are.
I think most folks individually have some idea of what "full"
TeaTime might be like. (I'm imagining a fully P2P overlay network,
which carries traffic within and between islands.)
But the 1.0 SDK, aka Hedgehog, aka "Simplified TeaTime", is something in between.
If we parse it like a lawyer, here are the fragments:
- There's a router. The Internet is dependent on the workings of
hardware routers. Here's a software router. We can make it
arbitrarily reliable in a conventional sense, and of quite small
scope for scalability, but it is still a single point of failure and
not infinitely scalable. It's easy, though, to envision adding a
failover mechanism for reliability, and __maybe__ some sort of Paxos-
like mechanism to make it distributed.
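The router's core job in simplified TeaTime is small: stamp every incoming message with a global sequence number and rebroadcast it to all peers, so every island replica executes the same messages in the same order. A minimal in-process sketch of that idea (class and method names are mine for illustration, not the SDK's API):

```python
# Minimal sketch of a TeaTime-style message router: it assigns each
# incoming message a global sequence number and rebroadcasts it to every
# connected peer, so all replicas apply messages in the same order.
# Names here are illustrative, not the actual SDK API.

class Router:
    def __init__(self):
        self.sequence = 0    # the single global ordering counter
        self.peers = []      # connected peers (the single point of failure)

    def connect(self, peer):
        self.peers.append(peer)

    def send(self, message):
        """Called by any peer; stamps the message and rebroadcasts it."""
        self.sequence += 1
        stamped = (self.sequence, message)
        for peer in self.peers:
            peer.receive(stamped)

class Peer:
    def __init__(self):
        self.log = []        # replicated, identically ordered message log

    def receive(self, stamped):
        self.log.append(stamped)

# Two peers see the identical ordered stream:
router = Router()
a, b = Peer(), Peer()
router.connect(a)
router.connect(b)
router.send("move avatar")
router.send("open door")
assert a.log == b.log == [(1, "move avatar"), (2, "open door")]
```

The single counter is exactly why the router is a single point of failure: a failover replica would have to agree on `sequence`, which is where a Paxos-like mechanism would come in.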
- There's something to give continuity of island state. This is
simply a peer that is left on. There's some engineering involved in
figuring out the "best" way to do this for a given application, but a
"continuity server" is an easy and adequate model for now.
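A "continuity server" in this sense is nothing special: an ordinary peer that stays connected, keeps applying the ordered message stream to its copy of the island, and can hand a snapshot to a newcomer. A sketch, with invented names:

```python
# Sketch of a headless "continuity" peer: an ordinary participant that
# never disconnects, applies every stamped message to its copy of the
# island state, and can serve a snapshot to a newly joining peer.
# Illustrative only; not the actual SDK interface.

import copy

class HeadlessPeer:
    def __init__(self):
        self.island = {}     # replicated island state (toy key/value model)
        self.last_seq = 0

    def receive(self, stamped):
        seq, (key, value) = stamped
        assert seq == self.last_seq + 1, "messages must apply in order"
        self.island[key] = value   # deterministic application
        self.last_seq = seq

    def snapshot(self):
        """A newcomer copies this instead of replaying all history."""
        return self.last_seq, copy.deepcopy(self.island)

keeper = HeadlessPeer()
keeper.receive((1, ("door", "open")))
keeper.receive((2, ("light", "on")))
seq, state = keeper.snapshot()
assert seq == 2 and state == {"door": "open", "light": "on"}
```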
- There's a lot of immutable stuff that doesn't need to be in the
island state. In the SDK, this includes textures, and comes first
from your disk cache, otherwise you ask the router for a peer in the
same world to give it to you. In WiscWorlds, we're moving more stuff
into this cache. Sounds, movies, hopefully meshes. Right now, we have
a global cache (among all our worlds) that is handled by an
additional router. Everyone connects to it as a client, and one or
more machines connect as a "server," analogous to the continuity
server. This is what you saw being logged into with a hardcoded id.
(Hats off to Josh for this model and implementation.) A WorldBase
would be another approach. Both of these approaches do introduce
another client-server-like single point of failure within this aspect
of the system. However, I think it's pretty straightforward (e.g.,
"just work") to swap this whole thing out with a "conventional"
Distributed Hash Table p2p overlay network.
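Because the cached media are immutable, they can be keyed by a hash of their own bytes: check the local disk cache first, and only on a miss ask a peer (or the global cache) for the data. This is also what makes the whole thing swappable for a DHT, since a DHT serves exactly this hash-keyed lookup. A sketch of the lookup order, with invented names:

```python
# Sketch of an immutable, content-addressed cache: assets are keyed by
# the SHA-1 of their bytes, the local disk cache is checked first, and a
# fallback fetcher (a peer in the same world, or the global cache) is
# consulted only on a miss. Names are illustrative.

import hashlib

def content_key(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

class AssetCache:
    def __init__(self, fetch_remote):
        self.local = {}                   # stands in for the disk cache
        self.fetch_remote = fetch_remote  # peer / global-cache fallback

    def put(self, data: bytes) -> str:
        key = content_key(data)
        self.local[key] = data            # immutable: same key, same bytes
        return key

    def get(self, key: str) -> bytes:
        if key in self.local:
            return self.local[key]        # local hit
        data = self.fetch_remote(key)     # e.g. ask the router for a peer
        self.local[key] = data            # cache for next time
        return data

# A "remote" source holding one texture:
remote_store = {content_key(b"texture-bytes"): b"texture-bytes"}
cache = AssetCache(fetch_remote=remote_store.__getitem__)
key = content_key(b"texture-bytes")
assert cache.get(key) == b"texture-bytes"   # miss, fetched remotely
assert key in cache.local                   # subsequent gets are local
```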
- Discovery of routers. The SDK handles this only on the LAN.
Dormouse used a single global introducer. But it is pretty easy to
envision a distributed introducer. In WiscWorlds, we ignore the
problem by hardcoding the router (actually, the dispatcher) address.
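The three discovery strategies named above line up as a fallback chain: try the LAN, then an introducer (single or distributed), and finally fall back to a hardcoded dispatcher address, which is the WiscWorlds shortcut. A sketch of that chain, with hypothetical names and addresses:

```python
# Sketch of router discovery as a fallback chain: LAN probe first, then
# an introducer lookup, finally a hardcoded dispatcher address (the
# WiscWorlds shortcut). All names and addresses are hypothetical.

HARDCODED_DISPATCHER = ("croquet.example.edu", 9999)  # invented address

def discover_router(lan_probe, introducer_lookup):
    # Each strategy returns an (host, port) address, or None on failure.
    for strategy in (lan_probe, introducer_lookup):
        address = strategy()
        if address is not None:
            return address
    return HARDCODED_DISPATCHER   # last resort: ignore the problem

# With neither a LAN router nor an introducer, we fall all the way through:
assert discover_router(lambda: None, lambda: None) == HARDCODED_DISPATCHER
# Any earlier strategy that answers wins over the hardcoded fallback:
assert discover_router(lambda: None,
                       lambda: ("10.0.0.7", 9998)) == ("10.0.0.7", 9998)
```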
The point is that we're trying to break the problems apart into more
tractable pieces. Right now, the total effect for practical purposes
is to still be pretty dependent on "servers." But I am 100%
confident of being able to make everything "parallel distributed"
when required -- except for the routers. Here I am only confident of
making each router "serially distributed" (e.g., handling failover,
but not automatically distributing a load using parallelism). My gut
feeling is that it is too early, and entirely unnecessary, to place
bets on whether it is more productive to work on a parallel
distributed router for simplified teatime, or to just work on full
teatime. But I am going to do neither. Just apps, and the technology
they need.
> Maybe for this application that makes sense, and I'm still trying to
> figure out what would be a good way to provide persistent content in a
> P2P model. I guess UMN's approach is the WorldBase object store
> server. Maybe by the time you provide a persistent but dynamically
> updateable object store and a meeting place/introducer server, you may
> as well have a full participating but unmanned host.
It should be clear that my preference is to break the issues apart,
using individual solutions for orthogonal problems. To me, DHTs have
the right math for immutable data/media, used across worlds, which
never need to be garbage collected. I think quasi-relational
databases are a proven technology for handling metadata as a service:
things like author, time, rating, comments, and postcard data all
change slowly, and tend to be accessed in a way which allows a
connectionless, service-oriented, big-iron server implementation to
be adequate. In the long run, I'd like to see searchable metadata
services handled in a more free way, but that's a political opinion,
not an engineering one.
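Slow-changing metadata like this maps naturally onto a small relational table queried through a connectionless request/response service. A sketch using SQLite (the schema and field names are my illustration, not an actual WiscWorlds design):

```python
# Sketch of metadata-as-a-service backed by a relational store: one row
# per asset, keyed by its content hash, carrying the slow-changing
# fields named in the text. Schema is illustrative only.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE asset_metadata (
        content_hash TEXT PRIMARY KEY,  -- key into the immutable cache
        author       TEXT,
        created_at   TEXT,              -- ISO-8601 timestamp
        rating       REAL,
        comments     TEXT
    )
""")
db.execute(
    "INSERT INTO asset_metadata VALUES (?, ?, ?, ?, ?)",
    ("ab12cd", "dfaught", "2006-07-07T07:22:00", 4.5, "nice mesh"),
)

# Each query is a self-contained request/response with no session state,
# which is what lets a big-iron, service-oriented server be adequate.
row = db.execute(
    "SELECT author, rating FROM asset_metadata WHERE content_hash = ?",
    ("ab12cd",),
).fetchone()
assert row == ("dfaught", 4.5)
```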