[squeak-dev] Possible approaches for rendering in Morphic 3


[squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
Hi Folks,

I've added a page to my website:
http://www.jvuletich.org/RenderingApproaches.html . It is about the
different approaches for rendering morphs in Morphic 3. This is the
problem I'm dealing with now, so I'd appreciate any ideas or pointers.

Cheers,
Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Matthias Berth-2
Hi Juan,

you write that you use nonlinear transforms to draw morphs. So every
morph has its own nonlinear transform? For example, if I have a
RectangleMorph, could I draw a bent version of it on the canvas by
assigning some BendingTransform to the RectangleMorph?

What if the same morph has to be displayed on multiple canvases?

Does a morph's transform have any consequence for the transforms of
its submorphs?

As you might guess, I am not really familiar with the details of your
design - though I just read
http://www.jvuletich.org/Morphic3/TheFutureOfTheGUI_01.html :-)

Cheers

Matthias

On Thu, Sep 4, 2008 at 8:36 PM, Juan Vuletich <[hidden email]> wrote:

> Hi Folks,
>
> I've added this to my web: http://www.jvuletich.org/RenderingApproaches.html
> . It is about the different approaches for rendering of morphs for Morphic
> 3. This is the problem I'm dealing with now, so I'd appreciate any ideas or
> pointers.
>
> Cheers,
> Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Igor Stasenko
In reply to this post by Juan Vuletich-4
2008/9/4 Juan Vuletich <[hidden email]>:
> Hi Folks,
>
> I've added this to my web: http://www.jvuletich.org/RenderingApproaches.html
> . It is about the different approaches for rendering of morphs for Morphic
> 3. This is the problem I'm dealing with now, so I'd appreciate any ideas or
> pointers.
>

These two approaches have the same pros and cons as shading vs. ray
tracing. The shading technique uses filled triangles to draw graphics,
and can similarly overwrite pixels multiple times. With ray tracing,
each pixel's color is computed separately, which makes it possible to
draw images of high photorealistic quality. The issue is the same: speed.
I don't think per-pixel rendering is the way to go, because most
hardware is not ready for it yet.
In 10 years the situation will change, and we will have enough computing
power in our desktop computers to do that, but not now.

> Cheers,
> Juan Vuletich

--
Best regards,
Igor Stasenko AKA sig.


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
In reply to this post by Matthias Berth-2
Hi Matthias,

Matthias Berth wrote:
> Hi Juan,
>
> you write that you use nonlinear transforms to draw morphs. So every
> morph has its own nonlinear transform? For example, if I have a
> RectangleMorph then I could draw a bended version of it on the canvas
> by assigning some BendingTransform to the RectangleMorph?
>
>  
Yes. Of course, linear transforms are allowed too.

> What if the same morph has to be displayed on multiple canvases?
>  

Drawing on any canvas should look the same, i.e. the same
transformations would be applied. BTW, in the new approach to rendering,
there is no #drawOn: and no canvas (at least not yet).

> Does a morph's transform have any consequence for the transforms of
> its submorphs?
>  

Yes, the actual transformation for points in a morph includes all
changes in coordinate systems up to the world. (As it is with Morphic 2
in Squeak).

> As you might guess, I am not really familiar with the details of your
> design - though I just read
> http://www.jvuletich.org/Morphic3/TheFutureOfTheGUI_01.html :-)
>
> Cheers
>
> Matthias
>  

Don't worry. I wrote little about it, and it is changing all the time.

Thanks for your interest!
Juan Vuletich

> On Thu, Sep 4, 2008 at 8:36 PM, Juan Vuletich <[hidden email]> wrote:
>  
>> Hi Folks,
>>
>> I've added this to my web: http://www.jvuletich.org/RenderingApproaches.html
>> . It is about the different approaches for rendering of morphs for Morphic
>> 3. This is the problem I'm dealing with now, so I'd appreciate any ideas or
>> pointers.
>>
>> Cheers,
>> Juan Vuletich



Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
In reply to this post by Igor Stasenko
Hi Igor,

Igor Stasenko wrote:

> 2008/9/4 Juan Vuletich <[hidden email]>:
>  
>> Hi Folks,
>>
>> I've added this to my web: http://www.jvuletich.org/RenderingApproaches.html
>> . It is about the different approaches for rendering of morphs for Morphic
>> 3. This is the problem I'm dealing with now, so I'd appreciate any ideas or
>> pointers.
>>
>>    
>
> This two approaches having same pros and cons as shading vs ray
> tracing. Shading technique using filled triangles to draw graphics,
> and similarily can override pixels multiple times.

Didn't know that. Thanks. (BTW, I'm no 3d expert, my background comes
from Signal and Image Processing).

> With ray tracing
> each pixel color computed separately which enables to draw images of
> high photorealistic quality. The issue is same - speed.
> I don't think that per-pixel rendering is the way to go, because most
> of hardware is not ready yet.
> In 10 years situation will change, and we will have enough computing
> power in our desktop computers to do that, but not now.
>  

Maybe you're right, but as I'm only rendering 2D objects, the
computational cost should be much lower than for ray tracing. Besides,
if Morphic 3 turns out to be usable only with help from OpenCL, CUDA, or
some other special hardware, that is not too bad.

Thanks for your comments!
Cheers,
Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Joshua Gargus-2
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
> 2008/9/4 Juan Vuletich <[hidden email]>:
>> Hi Folks,
>>
>> I've added this to my web: http://www.jvuletich.org/RenderingApproaches.html
>> . It is about the different approaches for rendering of morphs for Morphic
>> 3. This is the problem I'm dealing with now, so I'd appreciate any ideas or
>> pointers.
>
> This two approaches having same pros and cons as shading vs ray
> tracing. Shading technique using filled triangles to draw graphics,
> and similarily can override pixels multiple times. With ray tracing
> each pixel color computed separately which enables to draw images of
> high photorealistic quality. The issue is same - speed.

A small (at least in the context of this thread) nitpick: it's not the
per-pixel color computation that allows ray-tracing to produce
high-quality images.  These days, graphics hardware also allows
"arbitrary" code to be run to compute the color of each pixel.  The main
reason that ray-tracing looks better for some scenes is the ability to
easily incorporate multiple reflections/refractions into the per-pixel
color computation.

Cheers,
Josh

> I don't think that per-pixel rendering is the way to go, because most
> of hardware is not ready yet.
> In 10 years situation will change, and we will have enough computing
> power in our desktop computers to do that, but not now.
>
>> Cheers,
>> Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Joshua Gargus-2
In reply to this post by Juan Vuletich-4
Juan Vuletich wrote:

> Hi Igor,
>
> Igor Stasenko wrote:
>> 2008/9/4 Juan Vuletich <[hidden email]>:
>>  
>>> Hi Folks,
>>>
>>> I've added this to my web:
>>> http://www.jvuletich.org/RenderingApproaches.html
>>> . It is about the different approaches for rendering of morphs for
>>> Morphic
>>> 3. This is the problem I'm dealing with now, so I'd appreciate any
>>> ideas or
>>> pointers.
>>>
>>>    
>>
>> This two approaches having same pros and cons as shading vs ray
>> tracing. Shading technique using filled triangles to draw graphics,
>> and similarily can override pixels multiple times.
>
> Didn't know that. Thanks. (BTW, I'm no 3d expert, my background comes
> from Signal and Image Processing).
>
>> With ray tracing
>> each pixel color computed separately which enables to draw images of
>> high photorealistic quality. The issue is same - speed.
>> I don't think that per-pixel rendering is the way to go, because most
>> of hardware is not ready yet.
>> In 10 years situation will change, and we will have enough computing
>> power in our desktop computers to do that, but not now.
>>  
>
> May be you're right, but as I'm only rendering 2d objects, the
> computational cost should be much lower than for ray tracing.

It depends on what kind of non-affine transforms you want to render
with, since this will determine the cost of each intersection test.  For
arbitrary transforms, these intersection tests can become arbitrarily
expensive.

> Besides, if Morphic 3 turns to be usable only with help from OpenCL or
> CUDA or some other special hardware, it is not too bad.

My first thought when I read your original post was "how is he planning
to take advantage of graphics hardware?".  In order to use OpenCL or
CUDA effectively, you need to set up a parallelized workload for the GPU
to munch through.  The implementation you describe ("for each pixel,
iterate over the morphs, starting at the one at the top, and going
through morphs behind it, etc...") requires a traversal of the Morphic
scene-graph for each pixel.  A CUDA program running on the GPU can't ask
Squeak for information about which morph is where... all of the GPU
processors will pile up behind this sequential bottleneck.  Do you have
some idea of how you would approach this?  It seems like you'd need to
generate (and perhaps cache) a CUDA-friendly data structure.

Stepping back, one of the main difficulties I have when thinking about
how rendering should work in Morphic 3 is that I don't really understand
what the end-user APIs will look like.  For example, suppose that I
program a graph that plots data against linear axes (each data point is
rendered as a circle).  If I want to instead plot data against
logarithmic axes, I can't simply render the whole graph with a different
transform, because the circles will no longer look like circles.  How
could this be avoided?  I can't see how.

I think that the idea of rendering entire scenes with arbitrary,
non-linear transforms is very cool.  However, I don't see how it would
be very useful in practice (for reasons like the example above).
Hopefully, I'm just missing something, and you'll be able to explain it
to me.

Cheers,
Josh

>
> Thanks for your comments!
> Cheers,
> Juan Vuletich
>



Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Igor Stasenko
In reply to this post by Joshua Gargus-2
2008/9/7 Joshua Gargus <[hidden email]>:
> Igor Stasenko wrote:
>> 2008/9/4 Juan Vuletich <[hidden email]>:
>>> Hi Folks,
>>>
>>> I've added this to my web: http://www.jvuletich.org/RenderingApproaches.html
>>> . It is about the different approaches for rendering of morphs for Morphic
>>> 3. This is the problem I'm dealing with now, so I'd appreciate any ideas or
>>> pointers.
>>
>> This two approaches having same pros and cons as shading vs ray
>> tracing. Shading technique using filled triangles to draw graphics,
>> and similarily can override pixels multiple times. With ray tracing
>> each pixel color computed separately which enables to draw images of
>> high photorealistic quality. The issue is same - speed.
>
> A small (at least in the context of this thread) nitpick: it's not the
> per-pixel color computation that allows ray-tracing to produce
> high-quality images.  These days, graphics hardware also allows
> "arbitrary" code to be run to compute the color of each pixel.  The main
> reason that ray-tracing looks better for some scenes is the ability to
> easily incorporate multiple reflections/refractions into the per-pixel
> color computation.

Right, and this is achieved by analyzing scene geometry for each
interesting pixel (ray).
So, here the analogy: it deals with scene on a per-pixel basis.

> Cheers,
> Josh



--
Best regards,
Igor Stasenko AKA sig.


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
In reply to this post by Joshua Gargus-2
Hi Joshua,

Joshua Gargus wrote:

> ...
>> May be you're right, but as I'm only rendering 2d objects, the
>> computational cost should be much lower than for ray tracing.
>>    
>
> It depends on what kind of non-affine transforms you want to render
> with, since this will determine the cost of each intersection test.  For
> arbitrary transforms, these intersection tests can become arbitrarily
> expensive.
>
>  
Not sure what you mean. In my implementation there are no intersection
tests. At each pixel, every morph is asked for its value there
(actually, the coordinates are transformed into the morph's space before
asking). The morph might answer transparent if the point falls outside
it. Since the transformation of coordinates is done before asking the
morph, the only code that can get more expensive is the transformation
itself. The code for the morph, something like #colorAt:, is the same
regardless of the transformations involved.

If this doesn't make sense to you, please elaborate, so I can follow you
better.
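
For readers following along, here is a rough sketch of this per-pixel scheme (a Python sketch rather than the actual Smalltalk; names like `inverse_transform` and `color_at` are hypothetical stand-ins for the real protocol, and a first-opaque-hit rule stands in for the real "color stack" compositing):

```python
BACKGROUND = (255, 255, 255)

class CircleMorph:
    """Toy morph: a filled circle, with a translation as its transform."""
    def __init__(self, cx, cy, r, color):
        self.cx, self.cy, self.r, self.color = cx, cy, r, color

    def inverse_transform(self, x, y):
        # Map a point from the outer space into this morph's own space.
        # Here it is a pure translation; in Morphic 3 it could be any
        # (even nonlinear) change of coordinate system.
        return x - self.cx, y - self.cy

    def color_at(self, mx, my):
        # Answer this morph's color at a point given in its own space,
        # or None ("transparent") if the point falls outside the morph.
        if mx * mx + my * my <= self.r * self.r:
            return self.color
        return None

def pixel_color(morphs, x, y):
    """morphs is ordered front (top) to back: transform the pixel into
    each morph's own space and ask it for its color there."""
    for morph in morphs:
        mx, my = morph.inverse_transform(x, y)
        color = morph.color_at(mx, my)
        if color is not None:
            return color
    return BACKGROUND
```

Note how the per-morph code (`color_at`) is the same whatever the transform does; only `inverse_transform` gets more expensive for nonlinear coordinate systems, which is exactly the point made above.
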

>> Besides, if Morphic 3 turns to be usable only with help from OpenCL or
>> CUDA or some other special hardware, it is not too bad.
>>    
>
> My first thought when I read your original post was "how is he planning
> to take advantage of graphics hardware?".  In order to use OpenCL or
> CUDA effectively, you need to set up a parallelized workload for the GPU
> to munch through.  The implementation you describe ("for each pixel,
> iterate over the morphs, starting at the one at the top, and going
> through morphs behind it, etc...") requires a traversal of the Morphic
> scene-graph for each pixel.  A CUDA program running on the GPU can't ask
> Squeak for information about which morph is where... all of the GPU
> processors will pile up behind this sequential bottleneck.  Do you have
> some idea of how you would approach this?  It seems like you'd need to
> generate (and perhaps cache) a CUDA-friendly data structure.
>  

Hehehe. You're right! Today I'm experimenting with an idea. I'd like to
traverse the morph graph (actually a tree) and, for each morph, iterate
over the pixels. Computing all the pixels in parallel that way would
require an enormous amount of memory. So what I'm trying to do is
iterate over blocks of, let's say, 32x32 pixels. For each block,
traverse the morph graph. At each morph (actually at each "shape"),
compute its effect on each pixel in the 32x32 block. To do this, instead
of building just one "color stack" I need to build 1024 of them (32x32).
This is a reasonable amount of memory. The work inside each shape, over
1024 pixels, can be parallelized and made CUDA-friendly. The data
structures involved at this step are some float arrays and float stacks.

Besides making it CUDA-friendly, this divides the cost of fetching the
Smalltalk objects and traversing the tree by a factor of 1024. It should
have a big effect on performance!

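
As a rough illustration of the tiling idea (Python, hypothetical names; a nearest-opaque-morph rule stands in for the real color-stack computation), the key point is that the morph tree is walked once per tile instead of once per pixel:

```python
TILE = 32  # tile side in pixels; 32x32 = 1024 pixels per traversal

def render(morphs, width, height, background=(255, 255, 255)):
    """morphs ordered front to back. The morph list is traversed once
    per TILE x TILE block; inside each morph, all the block's pixels
    are processed together (the part that could be handed to a GPU)."""
    frame = {}
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            ys = range(ty, min(ty + TILE, height))
            xs = range(tx, min(tx + TILE, width))
            # One traversal of the (flattened) morph tree per tile.
            for morph in morphs:
                for y in ys:
                    for x in xs:
                        if (x, y) in frame:
                            continue  # a nearer morph already covered it
                        mx, my = morph.inverse_transform(x, y)
                        color = morph.color_at(mx, my)
                        if color is not None:
                            frame[(x, y)] = color
            # Background for pixels no morph covered.
            for y in ys:
                for x in xs:
                    frame.setdefault((x, y), background)
    return frame
```

The first-hit rule is only a placeholder for the real compositing; the amortization argument (one tree traversal per 1024 pixels) is what the sketch is meant to show.
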
> Stepping back, one of the main difficulties I have when thinking about
> how rendering should work in Morphic 3 is that I don't really understand
> what the end-user APIs will look like.  For example, suppose that I
> program a graph that plots data against linear axes (each data point is
> rendered as a circle).  If I want to instead plot data against
> logarithmic axes, I can't simply render the whole graph with a different
> transform, because the circles will no longer look like circles.  How
> could this be avoided?  I can't see how.
>  

You're right again. This is one of the problems I have had open, without
finding a solution, for several years now. So far, circles won't look
like circles anymore. I believe that should be the default behavior: if
the graph is logarithmic, everything gets logarithmic!

However, I understand that an application might want those circles to
look like circles, or to add text labels that look like regular text,
etc. I still haven't worked this out completely. But I envision some
morphs (the circles and the labels) that live inside an owner with
logarithmic coordinates and can say something like "instead of
transforming pixel coordinates all the way down to my parent for me to
render, stop at my grandparent, translate my parameters (circle center
and radius, etc.) to its space, and render there". It can be done. I'm
still not sure how the programmer should specify this. I don't want to
drive programmers crazy!

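
A hypothetical sketch of that idea for the logarithmic-axes example (Python; `log_transform` and the log-in-x-only setup are illustrative assumptions, not Morphic 3 code). Translating only the circle's *parameters* to the outer space keeps it a circle, while transforming every boundary point bends it:

```python
import math

def log_transform(x, y):
    """The owner's coordinate system: logarithmic in x, linear in y."""
    return math.log10(x), y

def circle_in_grandparent_space(cx, cy, r):
    """Translate the circle's parameters to the outer (linear) space:
    only the center goes through the logarithmic transform, and the
    circle is then rendered as a true circle out there."""
    gx, gy = log_transform(cx, cy)
    return gx, gy, r

def bent_extents(cx, r):
    """What happens if every point is transformed instead: the left and
    right halves of the circle get different widths in the outer space,
    so the shape is no longer a circle."""
    gx, _ = log_transform(cx, 0)
    lx, _ = log_transform(cx - r, 0)
    rx, _ = log_transform(cx + r, 0)
    return gx - lx, rx - gx
```

For example, `circle_in_grandparent_space(100, 5, 10)` keeps the radius at 10, while `bent_extents(100, 10)` answers two unequal half-widths, showing the distortion the "render in the grandparent's space" trick avoids.
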
> I think that the idea of rendering entire scenes with arbitrary,
> non-linear transforms is very cool.  However, I don't see how it would
> be very useful in practice (for reasons like the example above).
> Hopefully, I'm just missing something, and you'll be able to explain it
> to me.
>
> Cheers,
> Josh
>  

I hope what I said above makes sense to you. However, this is
experimental stuff. I hope it will turn out to be very useful. Only time
will tell.
Thanks for your interest!

Cheers,
Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
In reply to this post by Igor Stasenko
Igor Stasenko wrote:
> Right, and this is achieved by analyzing scene geometry for each
> interesting pixel (ray).
> So, here the analogy: it deals with scene on a per-pixel basis.

Yes!

Cheers,
Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Joshua Gargus-2
In reply to this post by Juan Vuletich-4
Juan Vuletich wrote:

> Hi Joshua,
>
> Joshua Gargus wrote:
>> ...
>>> May be you're right, but as I'm only rendering 2d objects, the
>>> computational cost should be much lower than for ray tracing.    
>>
>> It depends on what kind of non-affine transforms you want to render
>> with, since this will determine the cost of each intersection test.  For
>> arbitrary transforms, these intersection tests can become arbitrarily
>> expensive.
>>
>>  
> Not sure what you mean. In my implementation there are no intersection
> tests. At each pixel, every morph is asked for its value there
> (actually, the coordinates are transformed to the morph's space before
> asking). The morph might answer transparent, if the point falls
> outside it. The transformation of coordinates is done before asking
> the morph, so the only code that can get more expensive is the
> transformation itself. The code for the morph, something like
> #colorAt: is the same regardless of the transformations involved.
>
> If this doesn't make sense to you, please elaborate, so I can follow
> you better.
That's fine, it makes sense.  I was thinking that you might be doing
something more complicated than you are (which would be fun to go into
over a beer, but I'm sure that I would fail to get it across via email).

>>> Besides, if Morphic 3 turns to be usable only with help from OpenCL or
>>> CUDA or some other special hardware, it is not too bad.
>>>    
>>
>> My first thought when I read your original post was "how is he planning
>> to take advantage of graphics hardware?".  In order to use OpenCL or
>> CUDA effectively, you need to set up a parallelized workload for the GPU
>> to munch through.  The implementation you describe ("for each pixel,
>> iterate over the morphs, starting at the one at the top, and going
>> through morphs behind it, etc...") requires a traversal of the Morphic
>> scene-graph for each pixel.  A CUDA program running on the GPU can't ask
>> Squeak for information about which morph is where... all of the GPU
>> processors will pile up behind this sequential bottleneck.  Do you have
>> some idea of how you would approach this?  It seems like you'd need to
>> generate (and perhaps cache) a CUDA-friendly data structure.
>>  
>
> Hehehe. You're right! Today I'm experimenting with an idea. I'd like
> to traverse the morphs graph (actually a tree) and for each morph
> iterate over the pixels. This would require an enormous amount of
> memory, to compute all pixels in parallel. What I'm trying to do, is
> to iterate blocks of, let's say, 32x32 pixels. For each block,
> traverse the morph graph. At each morph (actually at each "shape"),
> compute the effect of it over each pixel in the 32x32 block. To do
> this, instead of building just one "color stack" I need to build 1024
> of them (32x32). This is a reasonable amount of memory. The work
> inside each shape, with 1024 pixels can be parallelized and be made
> CUDA friendly. The data structures involved at this step are some
> float arrays and float stacks.
>
> Besides making it CUDA friendly, it will divide the cost of fetching
> the Smalltalk objects and traversing the tree by a factor of 1024.
> This should have a big effect on performance!

Neat idea!  The Larrabee paper at SIGGRAPH (which I seem to constantly
be recommending to people) describes a tile-based rendering pipeline
that is similar to this.

http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf

>> Stepping back, one of the main difficulties I have when thinking about
>> how rendering should work in Morphic 3 is that I don't really understand
>> what the end-user APIs will look like.  For example, suppose that I
>> program a graph that plots data against linear axes (each data point is
>> rendered as a circle).  If I want to instead plot data against
>> logarithmic axes, I can't simply render the whole graph with a different
>> transform, because the circles will no longer look like circles.  How
>> could this be avoided?  I can't see how.
>>  
>
> You're right again. This is one of the problems I had open without
> finding a solution for several years now. So far, circles won't look
> like circles anymore. I believe that should be the default behavior.
> If the graph is logarithmic, everything gets logarithmic!
>
> However I understand that an application might want those circles to
> look like circles, or to add text labels that look like regular text,
> etc. I still haven't worked it out completely. But I envision some
> morphs (the circles and the labels), that live inside an owner with
> logarithmic coordinates, and they can say something like "instead of
> transforming pixel coordinates all the way down to my parent for me to
> render, stop at my grandparent, and translate my parameters (circle
> center and radius, etc) to its space, and render there". It can be
> done. I'm still not sure on how the programmer should specify this. I
> don't want to drive programmers crazy!

Are you planning to use the same transform on the whole screen, or do
you have ideas about how to use different transforms in different parts
of the screen (or different sub-trees of the morph hierarchy)?  If the
latter, I can imagine explicitly referring to outer transforms to
transform some properties (eg: circle radius) while letting others use
the default transform for that context (eg: the circle's center would
use the logarithmic transform).  If there were reified slots, like in
Tweak, then the transform to use could be attached to the slot (with nil
meaning the default transform w/in that context).

Half-baked, I know :-)

Cheers,
Josh

>> I think that the idea of rendering entire scenes with arbitrary,
>> non-linear transforms is very cool.  However, I don't see how it would
>> be very useful in practice (for reasons like the example above).
>> Hopefully, I'm just missing something, and you'll be able to explain it
>> to me.
>>
>> Cheers,
>> Josh
>>  
>
> I hope what I said above makes sense to you. However, this is
> experimental stuff. I hope it will turn out to be very useful. Only
> time will tell.
> Thanks for your interest!
>
> Cheers,
> Juan Vuletich
>



Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Jecel Assumpcao Jr
In reply to this post by Juan Vuletich-4
Juan,

you might find my proposal for adaptive rendering interesting:

http://www.lsi.usp.br/~jecel/gmodel.html

-- Jecel



Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
In reply to this post by Joshua Gargus-2
Hi Joshua,

Joshua Gargus wrote:
> That's fine, it makes sense.  I was thinking that you might be doing
> something more complicated than you are (which would be fun to go into
> over a beer, but I'm sure that I would fail to get it across via email).

That's the biggest problem in this community: Not being able to get
together to drink beer easily!

>>>> Besides, if Morphic 3 turns to be usable only with help from OpenCL or
>>>> CUDA or some other special hardware, it is not too bad.
>>> My first thought when I read your original post was "how is he planning
>>> to take advantage of graphics hardware?".  In order to use OpenCL or
>>> CUDA effectively, you need to set up a parallelized workload for the GPU
>>> to munch through.  The implementation you describe ("for each pixel,
>>> iterate over the morphs, starting at the one at the top, and going
>>> through morphs behind it, etc...") requires a traversal of the Morphic
>>> scene-graph for each pixel.  A CUDA program running on the GPU can't ask
>>> Squeak for information about which morph is where... all of the GPU
>>> processors will pile up behind this sequential bottleneck.  Do you have
>>> some idea of how you would approach this?  It seems like you'd need to
>>> generate (and perhaps cache) a CUDA-friendly data structure.
>> Hehehe. You're right! Today I'm experimenting with an idea. I'd like
>> to traverse the morphs graph (actually a tree) and for each morph
>> iterate over the pixels. This would require an enormous amount of
>> memory, to compute all pixels in parallel. What I'm trying to do, is
>> to iterate blocks of, let's say, 32x32 pixels. For each block,
>> traverse the morph graph. At each morph (actually at each "shape"),
>> compute the effect of it over each pixel in the 32x32 block. To do
>> this, instead of building just one "color stack" I need to build 1024
>> of them (32x32). This is a reasonable amount of memory. The work
>> inside each shape, with 1024 pixels can be parallelized and be made
>> CUDA friendly. The data structures involved at this step are some
>> float arrays and float stacks.
>>
>> Besides making it CUDA friendly, it will divide the cost of fetching
>> the Smalltalk objects and traversing the tree by a factor of 1024.
>> This should have a big effect on performance!
>>    
>
> Neat idea!  The Larrabee paper at SIGGRAPH (which I seem to constantly
> be recommending to people) describes a tile-based rendering pipeline
> that is similar to this.
>
> http://softwarecommunity.intel.com/UserFiles/en-us/File/larrabee_manycore.pdf
>
>  
Good! Morphic 3 could really use such an architecture.

I'm really happy with the results so far. The tiles also let me skip
many of them entirely, especially when all the transformations are
linear. It is running 10 times as fast as it did before!

>
> Are you planning to use the same transform on the whole screen, or do
> you have ideas about how to use different transforms in different parts
> of the screen (or different sub-trees of the morph hierarchy)?  If the
> latter, I can imagine explicitly referring to outer transforms to
> transform some properties (eg: circle radius) while letting others use
> the default transform for that context (eg: the circle's center would
> use the logarithmic transform).  If there were reified slots, like in
> Tweak, then the transform to use could be attached to the slot (with nil
> meaning the default transform w/in that context).
>
> Half-baked, I know :-)
>
> Cheers,
> Josh
>
>  
Each morph defines its own coordinate system, for itself and its subtree
to use. So, the transformation to apply at a certain morph is the
composition of all transformations up to the world.
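
That composition down the owner chain can be sketched like this (Python, hypothetical names; pure translations stand in for the arbitrary, possibly nonlinear, per-morph coordinate changes):

```python
class Morph:
    """Each morph defines a coordinate system for itself and its
    subtree; here it is just an offset from its owner's system."""
    def __init__(self, owner=None, offset=(0, 0)):
        self.owner = owner
        self.offset = offset

    def world_to_local(self, x, y):
        # Apply every owner's transform first (from the world inward),
        # then this morph's own: the composition of all the changes of
        # coordinate system up to the world.
        if self.owner is not None:
            x, y = self.owner.world_to_local(x, y)
        dx, dy = self.offset
        return x - dx, y - dy
```

With nonlinear transforms the same recursion applies; only the per-morph mapping stops being a simple subtraction.
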

Soon I hope to be able to upload a new version of the code, for you (and
everybody) to see.

Cheers,
Juan Vuletich


Re: [squeak-dev] Possible approaches for rendering in Morphic 3

Juan Vuletich-4
In reply to this post by Juan Vuletich-4
Hi Jecel,

I was already aware of your strategy, and I think it is great. I believe
it was Merik who told me about it. Rendering in Morphic 3 is pretty
different from that in other engines, so to apply your idea I need to
come up with several rendering strategies with different quality / speed
tradeoffs. When I have more stuff done with my current strategy, I'll
try to find them!

Cheers,
Juan Vuletich

Jecel Assumpcao Jr wrote:

> Juan,
>
> you might find my proposal for adaptive rendering interesting:
>
> http://www.lsi.usp.br/~jecel/gmodel.html
>
> -- Jecel