Automatic music collaboration in Croquet


Automatic music collaboration in Croquet

Jay Hardesty-2
Hello Croquet List,

I previously emailed this list to describe a proposed
project in which music remixes and variations are
automatically generated based on which avatars
are currently present within a given subspace
(that is, have passed through the same portal).

The application has now been implemented. So far
it has only been tested by running multiple virtual
machines on a single computer. A video of one
of the first run-throughs, from the viewpoint of
various avatars, can be found at:
http://tone23.org/environment/implementation
(video production quality is pretty low, but you
can get the idea)

Each avatar tags itself with a musical selection,
and the music playing in a given subspace incorporates
elements from the music associated with each avatar
currently in that space. When an avatar enters a space
the music changes to incorporate that avatar's music
tag into the remix. When they leave the space, their
musical tag gets mixed out of the music in that space.
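A rough sketch of that enter/leave behavior, in plain Python rather than the actual Squeak code (all names here are made up):

```python
# Minimal sketch (not the real implementation): a subspace keeps the tags
# of the avatars currently inside it and rebuilds its mix whenever an
# avatar enters or leaves.
class Subspace:
    def __init__(self):
        self.tags = {}              # avatar id -> that avatar's music tag

    def enter(self, avatar, tag):
        self.tags[avatar] = tag     # tag gets mixed in on arrival ...
        return self.remix()

    def leave(self, avatar):
        self.tags.pop(avatar, None) # ... and mixed out on departure
        return self.remix()

    def remix(self):
        # Stand-in for the real engine: the "mix" here is just the sorted
        # set of contributing tags.
        return sorted(self.tags.values())

room = Subspace()
room.enter("avatarA", "acid-jazz-04")
mix_after_enter = room.enter("avatarB", "hiphop-12")
mix_after_leave = room.leave("avatarA")
```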

With more avatars I hope that a type of musical flocking
behavior could occur as avatars congregate in various
subspaces, eventually settling into musically compatible
groups (the interaction shown in the video is much too
sparse, due to my CPU limitations, to exhibit anything
like flocking).

The Croquet spaces communicate with another Squeak image
(via HTTP) that calculates musical results in response to
the arrival/departure of avatars in a given space.
That server image is the same one behind a Seaside-based
remixer ( http://tone23.org/qtone ) that uses the same
music engine.
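The request/response shape might look roughly like this sketch (plain Python instead of Squeak; the endpoint, payload fields, and reply format are all invented here, not the actual protocol):

```python
# Illustrative sketch: the Croquet side POSTs an arrival/departure event
# over HTTP to a separate "server image", which answers with the inputs
# the next mix should draw on. Everything below is invented for the sketch.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

spaces = {}  # space name -> set of avatar ids present (server-side state)

class MusicServer(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        event = json.loads(self.rfile.read(length))
        members = spaces.setdefault(event["space"], set())
        (members.add if event["kind"] == "enter" else members.discard)(event["avatar"])
        # The real server would run the music engine here; we just report
        # which avatars' tags the new mix would incorporate.
        body = json.dumps({"mix_from": sorted(members)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):   # silence per-request logging
        pass

def notify(port, space, avatar, kind):
    payload = json.dumps({"space": space, "avatar": avatar, "kind": kind}).encode()
    with urlopen(Request(f"http://127.0.0.1:{port}/event", data=payload)) as reply:
        return json.loads(reply.read())

server = HTTPServer(("127.0.0.1", 0), MusicServer)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
after_enters = notify(port, "atrium", "a2", "enter")
after_enters = notify(port, "atrium", "a1", "enter")
after_leave = notify(port, "atrium", "a2", "leave")
server.shutdown()
```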

I'm trying to gauge whether there's any potential future
within Croquet to further develop and make use of an app
like this (commercial or otherwise), or whether this sort
of application belongs more in something like Second Life
(I'm currently finishing an implementation of the app
there as well).

Thanks to anyone with time to take a look.
Any remarks/suggestions/proposals welcome.

Jay
[hidden email]
http://tone23.org

Re: Automatic music collaboration in Croquet

Peter Moore-5
Nice. That first URL should be: http://tone23.org/environment/implement.html

Peter Moore
University Technology Development Center
University of Minnesota


On Apr 4, 2007, at 3:23 PM, Jay Hardesty wrote:



Re: Automatic music collaboration in Croquet

Jay Hardesty-2
Oops sorry.  Thanks - I put another link on the web site so
hopefully the link in the original post will work.
But yes, http://tone23.org/environment/implement.html
is correct.

Thanks again,
Jay

On 4/4/07, Peter Moore <[hidden email]> wrote:
Nice. That first URL should be: http://tone23.org/environment/implement.html




Re: Automatic music collaboration in Croquet

Howard Stearns
That's pretty cool. I like the social evolution aspect of keeping the generated
music around for people to choose as their tag.

In addition to enter/exit, would it be meaningful to pick up other gestures of
the users and use them to alter the mix? In particular, suppose the recent
discussion of camera-detected gestures inspired someone to allow the system to
receive information about how the users were "dancing" in their chairs?

"Funky Chicken" indeed.

Is it possible to replicate the mixing? I.e., to have the mixing software on
each machine (in Squeak) so that each participant generates the mix rather than
having it generated and piped out from a central bottleneck server?

For example, it might be interesting to have EVERY space in the
CroquetCollaborative allow music generation. The sample data could be stored in
the Collaborative's shared media cache (which already stores sound and other
media), so that it could be shared (and cached) efficiently between islands. The
ability to generate new media would be a big motivator to make the shared media
cache scale better than it does now (e.g., with distributed storage). [Frankly,
I have other things to do and was hoping to defer that until later, but I can't
resist a good use-case.]

Jay Hardesty wrote:

> Oops sorry.  Thanks - I put another link on the web site so
> hopefully the link in the original post will work.
> But yes http://tone23.org/environment/implement.html
> is correct.
>
> Thanks again,
> Jay

--
Howard Stearns
University of Wisconsin - Madison
Division of Information Technology
mailto:[hidden email]
jabber:[hidden email]
office:+1-608-262-3724
mobile:+1-608-658-2419

Re: Automatic music collaboration in Croquet

Brad Fuller-3
In reply to this post by Jay Hardesty-2
Jay,

Interesting concept. Can you tell us more about what a tag musically
consists of? You say that there is music playing in a space. Is there
music associated with the space before anyone enters? How are the
various parts of the music that an avatar holds prioritized against the
others in a space? I'm not really certain what a "tag" is.

Music has a wide definition, of course. However, music that is
interesting develops over time; otherwise it's one big lump of looping
noise (OK... well... others like Cage may argue that this, too, is
music). How will the engine develop or grow a sonic idea over time?

It seems like a great idea for a performance-art piece. I can see a setup
at a museum where people can explore and listen in various rooms.
Perhaps a series of museums could collaborate so that people around the
world could explore.

brad

--
brad fuller
www.bradfuller.com
+1 (408) 799-6124


Re: Automatic music collaboration in Croquet

Jay Hardesty-2
In reply to this post by Howard Stearns


On 4/4/07, Howard Stearns <[hidden email]> wrote:
> That's pretty cool. I like the social evolution aspect of keeping the generated
> music around for people to choose as their tag.

Yes I'm hoping that, over time, people would collect musical tracks
that contain vestiges of their previous encounters in the shared space,
and that those tracks would be fed back in to create more.  (Loosely
analogous to the way conversations are informed by bits of past
conversations)


> In addition to enter/exit, would it be meaningful to pick up other gestures of
> the users and use them to alter the mix? In particular, suppose the recent
> discussion of camera-detected gestures inspired someone to allow the system to
> receive information about how the users were "dancing" in their chairs?
>
> "Funky Chicken" indeed.

Currently the musical elements are drawn at random from the musical tags
associated with the avatars in the space.  But the selection could be biased
toward elements associated with avatars that are, for instance, most
enthusiastically moving to the music, or perhaps, most successfully
competing at some task (as in a dance contest, rating of appearance,
verbal chat, etc). 


> Is it possible to replicate the mixing? I.e., to have the mixing software on
> each machine (in Squeak) so that each participant generates the mix rather than
> having it generated and piped out from a central bottleneck server?
>
> For example, it might be interesting to have EVERY space in the
> CroquetCollaborative allow music generation. The sample data could be stored in
> the Collaborative's shared media cache (which already stores sound and other
> media), so that it could be shared (and cached) efficiently between islands. The
> ability to generate new media would be a big motivator to make the shared media
> cache scale better than it does now (e.g., with distributed storage). [Frankly,
> I have other things to do and was hoping to defer that until later, but I can't
> resist a good use-case.]


The server process is separate now because I reuse it for other apps as well.
But the fact that it's all been running on the same machine probably indicates
that performance isn't the reason (unless you try to run too many images, as
the video pretty much does).  So yes, I suppose that the simplest scheme might be
to package the music engine into each image that will host one or more shared
spaces.




Re: Automatic music collaboration in Croquet

Jay Hardesty-2
In reply to this post by Brad Fuller-3
> Jay,
>
> Interesting concept. Can you tell us more about what a tag musically
> consists of? You say that there is music playing in a space. Is there
> music associated with the space before anyone enters? How are the
> various parts of the music that an avatar holds prioritized against the
> others in a space? I'm not really certain what a "tag" is.

In this incarnation, each "tag" consists of a piece of music four
or eight bars in length that has some combination of the following
parts:  drums, bass, lead, comp (as in keys/guitar/etc), and pad
(synth/strings/etc).  The musical style is beat-oriented dance,
hiphop, acid-jazz, etc - stuff that's pretty forgiving about being
recombined, provided that the harmonies and rhythms are tended
to (by the music engine).  The crop of pieces in the video is drawn
from those I use in the web-based remixer - for Croquet I'm going
to replace them with more ambient (though still beat-oriented)
tracks.

Within each space a new piece is mixed together from parts
randomly selected from the tags of avatars in that space - for
example the bass part could come from avatar 1, the drums from
avatar 2, lead from avatar 1, and pad from avatar 3.  The parts
are manipulated at the note level by the music engine to create
harmonic and rhythmic variation and coherence (getting
the newly mixed parts into common keys and modes,
injecting rhythmic variation, pitch inversions, and all).  There's
a lot of leeway in the results, intended to encourage users to
keep fishing around for preferred combinations. 

There is no music associated with an empty space.  When there
is only a single avatar in the space, you'll hear a new mix that is
built from variations on the parts from that single avatar's tag.
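That selection step could be sketched like this (plain Python, not the actual engine; the note-level harmonic and rhythmic adjustment is omitted, and the tag contents are invented):

```python
# Sketch of the part-selection step described above: for each part role,
# the contribution is drawn at random from the tags of the avatars present.
import random

ROLES = ["drums", "bass", "lead", "comp", "pad"]

def select_parts(tags, rng):
    """tags: {avatar: {role: phrase}} -> {role: (avatar, phrase)}."""
    mix = {}
    for role in ROLES:
        # Only avatars whose tag actually includes this part can contribute it.
        candidates = [(a, t[role]) for a, t in sorted(tags.items()) if role in t]
        if candidates:
            mix[role] = rng.choice(candidates)
    return mix

tags = {
    "avatar1": {"bass": "bass-1", "lead": "lead-1"},
    "avatar2": {"drums": "drums-2", "lead": "lead-2"},
    "avatar3": {"pad": "pad-3"},
}
mix = select_parts(tags, random.Random(23))
```

With a single avatar in the space, the same routine simply draws every part from that one tag, matching the single-avatar case above.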
 

> Music has a wide definition, of course. However, music that is
> interesting develops over time; otherwise it's one big lump of looping
> noise (OK... well... others like Cage may argue that this, too, is
> music). How will the engine develop or grow a sonic idea over time?

The music engine has more compute-intensive optimization routines
for voice-leading, counterpoint, and harmonization.  These are used
by my web-based remixer, but turned off in Croquet because they
need time to run (most are genetic-algorithm based).  Hopefully those
can also be used in this context someday.

But the main hope is that collective preferences will be
expressed by participants who use their avatars to vote with their feet
between different congregations of musical inputs, producing new
musical mixes that will in turn be reused as individual tags.  In this way
users with shared musical tastes will over time be "cooperating"
to evolve new batches of musical tags that express those preferences.
Kind of like the way that clothing styles develop over time within
certain scenes.  Perhaps there'll be no such convergence, but it
seems like an experiment that is only now made possible by
environments such as Croquet.

The point of having multiple subspaces is to allow this sort of
collective self-categorization to gradually unfold.
 

> It seems like a great idea for a performance-art piece. I can see a setup
> at a museum where people can explore and listen in various rooms.
> Perhaps a series of museums could collaborate so that people around the
> world could explore.
>
> brad

Yes, I think any situation where agents can choose musical tags could
drive the same basic setup - physical spaces as well as virtual ones,
maybe even location-based apps (collectively evolving background music
for GPS driving instructions, etc.).  Any situation where more than one
person wants to inject musical material into a shared space or
channel.

Blogs and chat rooms seem to demonstrate that people want to be
involved in collective content creation - music conveniently erases
the distinction between "lurking" and "holding the floor".

-Jay




Re: Automatic music collaboration in Croquet

Howard Stearns
In reply to this post by Jay Hardesty-2
I'm not sure whether we're saying the same thing.

For example, it sounds like you have the music engine for a
given space running on a server, not on each participant's machine.

I'm imagining that there are three parts to what happens in that kind  
of architecture. I understand that you're just looking at things on a  
small scale, but I'm going to use exaggeration to make my point:

    1. The music engine has to find out who is present. You could  
have the Croquet software do something when the avatar enters a  
space, but the server just needs to hear once about each event. So  
you have to work to defeat Croquet's replication properties. (Not hard.)

    2. The music engine has to generate the mix. As you say, not hard  
unless the same server is trying to generate the mix for a lot of  
different spaces. For example, if you wanted this for all Croquet  
spaces (or even a tenth or a hundredth or thousandth of them), and if  
there were as many Croquet spaces as there are Web sites, then you're  
going to need a lot of servers!

    3. The music engine has to get the mixed sound to each  
participant. Again, on a global scale, you're talking about  
broadcasting music to every participant on the planet. Buy Cisco stock!

The "Croquet way" to do this is to have the music engine be on each  
participating machine:
   1. As each avatar enters a space, it tells the LOCAL engine about  
the event. No complication. No network traffic.
   2. There's no computation on a server because each participant  
generates the music locally. Each participant is only in one space at  
a time, so they never have to do more than one mixing at a time.
   3. The music is already local on each machine, so there's no need  
to pipe it back to all the participants.

My question is whether you are doing, or can do, the generation the  
"Croquet way."
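A sketch of that replicated scheme (plain Python; seeding the random choices from the shared state is one invented way to keep every replica's "random" selection identical):

```python
# Illustrative sketch of the "Croquet way": every participant's machine
# replays the same ordered enter/leave events and derives its random
# choices from the replicated state, so each replica computes an identical
# mix locally, with no central server and no piped-out audio.
import random

def local_mix(events):
    """events: replicated, ordered (kind, avatar, tag) tuples."""
    present = {}
    for kind, avatar, tag in events:
        if kind == "enter":
            present[avatar] = tag
        else:
            present.pop(avatar, None)
    # Seed from the replicated state: identical on every machine, so the
    # "random" remix agrees everywhere without any coordination.
    rng = random.Random(repr(sorted(present.items())))
    tags = [present[a] for a in sorted(present)]
    rng.shuffle(tags)   # toy stand-in for the note-level remix
    return tags

events = [("enter", "a1", "tag1"), ("enter", "a2", "tag2"),
          ("enter", "a3", "tag3"), ("leave", "a2", "tag2")]
machine_one = local_mix(events)
machine_two = local_mix(list(events))  # a second replica, same events
```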



Re: Automatic music collaboration in Croquet

jayh
OK, I think I see - you mean Croquet's preference for duplicating
computations, rather than duplicating data from a single computation.
Yes, I can see that server issues will loom sooner or later
using my current scheme (which essentially boils down to using
Croquet as a multi-user client).

The Squeak image (not a Croquet image) that serves the music
engine relies on quite a bit of FFI C++ code (purely for speed,
especially math and GAs), so I'm guessing that code would
all need to be ported into Croquet itself (?).  I agree that would
be the dream implementation; unfortunately, for now the CPU
requirements are just too much - in fact the C++ code is pretty
finely threaded (down to each individual in the GAs) in
order to take advantage of the 4-core Mac I'm running.

Perhaps I could begin to parcel out some of the less compute-
intensive music algorithms and move them into the Croquet side,
with the hope that as computers get faster, eventually the server
side will "wither away".  (Speaking of economics, thanks for
the Cisco stock tip <grin>)


On Apr 4, 2007, at 8:11 PM, Howard Stearns wrote:

> I'm not sure whether we're saying the same thing.
>
> For example, it sounds like you have have the music engine for a  
> given space running on a server, not on each participant's machine.
>
> I'm imagining that there are three parts to what happens in that  
> kind of architecture. I understand that you're just looking at  
> things on a small scale, but I'm going to use exaggeration to make  
> my point:
>
>    1. The music engine has to find out who is present. You could  
> have the Croquet software do something when the avatar enters a  
> space, but the server just needs to hear once about each event. So  
> you have to work to defeat Croquet's replication properties. (Not  
> hard.)
>
>    2. The music engine has to generate the mix. As you say, not  
> hard unless the same server is trying to generate the mix for a lot  
> of different spaces. For example, if you wanted this for all  
> Croquet spaces (or even a tenth or a hundredth or thousandth of  
> them), and if there were as many Croquet spaces as there are Web  
> sites, then you're going to need a lot of servers!
>
>    3. The music engine has to get the mixed sound to each  
> participant. Again, on a global scale, you're talking about  
> broadcasting music to every participant on the planet. Buy Cisco  
> stock!
>
> The "Croquet way" to do this is to have the music engine be on each  
> participating machine:
>   1. As each avatar enters a space, it tells the LOCAL engine about  
> the event. No complication. No network traffic.
>   2. There's no computation on a server because each participant  
> generates the music locally. Each participant is only in one space  
> at a time, so they never have to do more than one mixing at a time.
>   3. The music is already local on each machine, so there's no need  
> to pipe it back to all the participants.
>
> My question is whether you are doing, or can do, the generation the  
> "Croquet way."
>
> Jay Hardesty wrote:
>> On 4/4/07, *Howard Stearns* <[hidden email] ... Is it possible to
>>> replicate the mixing? I.e., to have the mixing software on each  
>>> machine (in
>>> Squeak) so that each participant generates the mix rather than  
>>> having it be
>>> generated and and piped-out from a central bottleneck-server?
>>> For example, it might be interesting to have EVERY space in the  
>>> CroquetCollaborative allow music generation. The sample data  
>>> could be stored
>>> in the Collaborative's shared media cache (which already stores  
>>> sound and
>>> other media), so that it could be shared (and cached) efficiently  
>>> between islands. The ability to generate new media would be a big  
>>> motivator to make
>>> the shared media cache scale better than it does now (e.g., with  
>>> distributed storage). [Frankly, I have other things to do and was  
>>> hoping to defer that
>>> until later, but I can't resist a good use-case.]
>> The server process is separate now because I reuse it for other  
>> apps as well.
>>  But the fact that it's all been running on the same machine  
>> probably indicates that performance isn't the reason (unless you  
>> try to run too many
>> images, as the video pretty much does).  So yes I suppose that the  
>> simplest
>> scheme might be to package the music engine into each image that
>> will host one or more shared spaces.
>

/Z

Re: Automatic music collaboration in Croquet

/Z
The part I don't get here is how the music got/gets to the client
machine in the first place?


On 4/4/07, Jay Hardesty <[hidden email]> wrote:

> OK I think I see - you mean Croquet's preference for duplicating
> computations, rather than duplicating data from a single computation.
> Yes I can see that server issues will loom sooner or later
> using my current scheme (which essentially boils down to using
> Croquet as a multi-user client).
>
> The Squeak image (not a Croquet image) that serves the music
> engine relies on quite a bit of FFI C++ code (purely for speed,
> especially math and GA's), so I'm guessing that code would
> all need to be ported into just Croquet(?).   I agree that would
> be the dream implementation, unfortunately for now the CPU
> requirements are just too much - in fact the C++ code is pretty
> finely threaded (to the level of each individual in the GA's) in
> order to take advantage of the 4-core Mac I'm running.
>
> Perhaps I could begin to parcel out some of the less compute
> intensive music algorithms and move them into the Croquet side,
> with the hope that as computers get faster, eventually the server
> side will "wither away".  (Speaking of economics, thanks for
> the Cisco stock tip <grin>)
>
>

Re: Automatic music collaboration in Croquet

Howard Stearns
In reply to this post by jayh

On Apr 4, 2007, at 8:06 PM, Jay Hardesty wrote:

> OK I think I see - you mean Croquet's preference for duplicating
> computations, rather than duplicating data from a single computation.

exactly.

> Yes I can see that server issues will loom sooner or later
> using my current scheme (which essentially boils down to using
> Croquet as a multi-user client).

In principle, there are several potential problems for you and your  
users with the old-school architecture:
   - server load (computing the mix)
   - network load (sending the mix back)
   - latency (round trip telling the server to compute and getting  
the mix back)

Which of these bites you sooner rather than later? It depends...

>
> The Squeak image (not a Croquet image) that serves the music
> engine relies on quite a bit of FFI C++ code (purely for speed,
> especially math and GA's), so I'm guessing that code would
> all need to be ported into just Croquet(?).   I agree that would

FFI is ok. We use FFI for several things in Croquet.

[There are technical things to think about, but let's save that for  
[hidden email] and leave the "economics" discussion here.]

> be the dream implementation, unfortunately for now the CPU
> requirements are just too much - in fact the C++ code is pretty
> finely threaded (to the level of each individual in the GA's) in
> order to take advantage of the 4-core Mac I'm running.
>
> Perhaps I could begin to parcel out some of the less compute
> intensive music algorithms and move them into the Croquet side,
> with the hope that as computers get faster, eventually the server
> side will "wither away".  (Speaking of economics, thanks for
> the Cisco stock tip <grin>)
>

The short version of (one aspect of) the technical discussion is that  
the "parameters" of the mix are part of the definition of the Croquet  
simulation. These are things like:
- the tag for each avatar present
- the random selections of how-much / which-parts of each tag is to  
be used
- [optional] the current "play position" or some periodic trigger if  
you want the playback to be synchronized for each user

The mix itself, though, can be computed and stored "in" Squeak (or at
least, controlled by Squeak via FFI), but not actually within the
Croquet simulation.  As it turns out, we already treat media this
way.  Textures, movies, and sound are in Squeak and managed by  
Croquet code, but are not part of the Croquet simulation. Only the  
"tags" and other "parameters" (like the "play event") are part of the  
synchronized Croquet simulation.
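[Editor's note: Howard's split between replicated parameters and locally generated media could be sketched roughly as below. This is a Python stand-in for the Squeak/Croquet objects; every class and method name here is hypothetical, invented purely to illustrate the architecture he describes.]

```python
import random

class SpaceSimulation:
    """Stand-in for the replicated Croquet simulation: every
    participant runs this code identically. Only the mix
    *parameters* live here, never rendered audio."""

    def __init__(self, seed):
        # Croquet supplies each space with a replicated random
        # stream; a shared seed stands in for that here.
        self.rng = random.Random(seed)
        self.tags = []  # musical tag of each avatar present

    def avatar_entered(self, tag):
        self.tags.append(tag)
        # Every participant draws the same value from the same
        # stream, so the parameters replicate exactly.
        return {"tags": list(self.tags), "weight": self.rng.random()}

def render_mix_locally(params):
    """Stand-in for the non-replicated part: each machine turns
    the replicated parameters into audio on its own (here, just
    a descriptive string)."""
    return "mix of %s at weight %.3f" % (params["tags"], params["weight"])

# Two participants replaying the same event with the same seed
# compute identical parameters; only rendering stays local.
a = SpaceSimulation(42).avatar_entered("piece-one")
b = SpaceSimulation(42).avatar_entered("piece-one")
assert a == b
print(render_mix_locally(a))
```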



Re: Automatic music collaboration in Croquet

Joshua Gargus-2
In reply to this post by jayh
On Apr 4, 2007, at 6:06 PM, Jay Hardesty wrote:

> OK I think I see - you mean Croquet's preference for duplicating
> computations, rather than duplicating data from a single computation.
> Yes I can see that server issues will loom sooner or later
> using my current scheme (which essentially boils down to using
> Croquet as a multi-user client).
>
> The Squeak image (not a Croquet image) that serves the music
> engine relies on quite a bit of FFI C++ code (purely for speed,
> especially math and GA's), so I'm guessing that code would
> all need to be ported into just Croquet(?).   I agree that would
> be the dream implementation, unfortunately for now the CPU
> requirements are just too much - in fact the C++ code is pretty
> finely threaded (to the level of each individual in the GA's) in
> order to take advantage of the 4-core Mac I'm running.
>
> Perhaps I could begin to parcel out some of the less compute
> intensive music algorithms and move them into the Croquet side,
> with the hope that as computers get faster, eventually the server
> side will "wither away".

Maybe the server would eventually wither away, and maybe it  
wouldn't.  There's nothing inherently anti-Croquet about feeding in  
events from a server (that's essentially what you're doing when  
interacting via your mouse, keyboard, or webcam).  I wouldn't be  
surprised if replicating a few of the fittest individuals
generated by a GA compute-server takes less bandwidth than a typical  
mouse-driven interactive session; if this is true, then you could  
scale such a scheme quite far (especially as the methods for  
distributing Croquet events become more sophisticated: think
hierarchical routers).

Josh
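[Editor's note: Josh's bandwidth intuition is easy to sanity-check with rough numbers. Every figure below is an illustrative guess, not a measurement from the thread.]

```python
# Assumed sizes -- illustrative guesses, not measurements.
GENOME_BYTES = 2_000        # one serialized GA individual (MIDI-sized)
FITTEST_PER_GENERATION = 5  # only the winners get replicated
GENERATIONS_PER_MINUTE = 6  # the compute-server finishes a run every ~10 s

MOUSE_EVENT_BYTES = 50      # one replicated mouse event
MOUSE_EVENTS_PER_SECOND = 30

ga_bytes_per_min = GENOME_BYTES * FITTEST_PER_GENERATION * GENERATIONS_PER_MINUTE
mouse_bytes_per_min = MOUSE_EVENT_BYTES * MOUSE_EVENTS_PER_SECOND * 60

print("GA winners:  %.0f KiB/min" % (ga_bytes_per_min / 1024))
print("mouse moves: %.0f KiB/min" % (mouse_bytes_per_min / 1024))
```

Under these (made-up) assumptions, replicating the winners costs less bandwidth than an ordinary mouse-driven session, which is Josh's point.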




Re: Automatic music collaboration in Croquet

Jay Hardesty-2
In reply to this post by /Z
[sorry if this shows up twice - meant to send this to the
list originally -J]

For now, I just allow each avatar to select a musical
tag from a list of pieces known to the server (in the form of
MIDI scores).  The tag is simply the name of the piece,
and those names are all that get passed to the client.

Whenever an avatar enters/exits a subspace, a "remix"
command is sent to the server along with the names
(of musical pieces) associated with the avatars in that
space.  The server then looks up those scores by name,
and performs the calculations.  When new pieces are
created, their MIDI scores are stored, and their names
added to the list supplied to the client.

The MIDI data for the resulting piece is sent back to
the client via http.  The client uses Quicktime's General
MIDI synth or some other softsynth to render the audio.

Eventually the aim is to let people bring their own MIDI
scores (or equivalent) as tags.  That will require some
preprocessing/validation of the MIDI data, and for that
MIDI data to be passed in both directions.  It would also
allow getting rid of the centralized database, letting each
space evolve and store its own musical material.

On 4/4/07, /Z <[hidden email]> wrote:
> The part I don't get here is how the music got/gets to the client
> machine in the first place?




Re: Automatic music collaboration in Croquet

Jay Hardesty-2
In reply to this post by Joshua Gargus-2


On 4/4/07, Joshua Gargus <[hidden email]> wrote:

> Maybe the server would eventually wither away, and maybe it
> wouldn't.  There's nothing inherently anti-Croquet about feeding in
> events from a server (that's essentially what you're doing when
> interacting via your mouse, keyboard, or webcam).  I wouldn't be
> surprised if replicating a few of the fittest individuals
> generated by a GA compute-server takes less bandwidth than a typical
> mouse-driven interactive session; if this is true, then you could
> scale such a scheme quite far (especially as the methods for
> distributing Croquet events become more sophisticated: think
> hierarchical routers).
>
> Josh

Yes duplicating GA individuals would just mean some lightweight
data copying.  But replicating the fitness calculation for every
individual in every generation of the GA would of course be an
enormous amount of duplication.

I wonder if GA's would ever be practical to replicate considering
how many intermediate calculations and data objects are involved.
It could be done if everyone uses the same random seed I suppose,
but is duplication of computations on that scale really what is
sought after in Croquet?  (asking here, not saying)
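[Editor's note: the same-seed idea does work in principle. With a shared seed and deterministic fitness, every participant's run of a toy GA is bit-identical, as the purely illustrative Python sketch below shows; whether it is *worth* replicating is the economic question Jay raises, and the real engine's threaded C++ would also have to be made deterministic.]

```python
import random

def run_toy_ga(seed, generations=20, pop=16):
    """Deterministic toy GA: maximize the number of 1-bits in an
    8-bit string.  Same seed -> identical populations at every
    generation on every machine."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(8)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=sum, reverse=True)   # fitness = bit count
        parents = population[: pop // 2]         # keep the fittest half
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)        # replicated selection
            cut = rng.randrange(1, 8)            # replicated crossover point
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # replicated mutation
                i = rng.randrange(8)
                child[i] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=sum)

# Two "participants" replicate the whole computation exactly:
assert run_toy_ga(123) == run_toy_ga(123)
```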

Re: Automatic music collaboration in Croquet

Joshua Gargus-2

On Apr 5, 2007, at 10:58 AM, Jay Hardesty wrote:
>
> I wonder if GA's would ever be practical to replicate considering
> how many intermediate calculations and data objects are involved.
> It could be done if everyone uses the same random seed I suppose,
> but is duplication of computations on that scale really what is
> sought after in Croquet?  (asking here, not saying)

No.  Croquet-style replication is a tool, not a religion, and should  
be applied sensibly.  You tend to hear about replication a lot  
because Croquet has unique capabilities in this area, not because it  
is universally applicable.

Josh

Re: Automatic music collaboration in Croquet

Howard Stearns
I think we're muddying the issues. As I see it, there are two things  
that Josh is saying that I very much agree with:

1. Although there is a specific and unique model of computation in  
Croquet (called Tea Time), the Croquet SDK is a "big tent" framework  
in which lots of technologies are welcome. Indeed, even though Tea  
Time is quite different from server-oriented/pass-the-data-around  
architectures, this unique difference paradoxically is one of the  
things that allows Croquet to "play well with others."  For example,  
you could imagine a different, more conventional architecture that  
tried to coordinate distributed 3D and multimedia data models in real  
wall-clock time. It would be so complicated and ad hoc that trying  
to introduce a new medium into the picture, not already covered by  
such a model, would hopelessly break that model. By contrast,  
Croquet's model of synchronization is, loosely speaking, so different  
that it allows random things to be synchronized differently through a  
different channel. http://www.wetmachine.com/itf/item/689

2. You should use the right tool for the job. There's plenty of stuff  
that can be quite cleanly handled in a different way from Tea Time.  
http://www.wetmachine.com/itf/item/734

However, I respectfully disagree about replicating the GA/music-
generation. By doing this locally on each machine, you save:
   1. The bandwidth and complication of telling the no-longer-needed  
music server about events. (tiny gain.)
   2. The computation on the no-longer-needed server.
   3. The bandwidth to send all the data back.
   4. The latency of 1 and 3.
Croquet is already careful to provide a random stream for each  
Croquet simulation (e.g., each space) that is unique to that  
simulation but replicated exactly for each participant in that  
simulation.

[There's a technical detail that I'm leaving out, but am happy to  
discuss on croquet-dev: I am NOT giving an opinion here of whether GA/
music-generation should be done within the simulation itself, or  
whether it should be done locally in Squeak based on the PARAMETERS  
that are part of the simulation itself.]

I admit that it can "feel weird" to duplicate all that computation.  
Get over it. Why would it feel "better" to accept the duplication of  
all those data bits, or the duplication of all the computation  
necessary to repeatedly send those bits over and over again?
