Morphic 3.0: The future of the Gui

Re: Morphic 3.0: The future of the Gui

Joshua Gargus-2
That makes sense, thanks for the clarification.

Josh


On Aug 29, 2007, at 11:24 PM, Igor Stasenko wrote:

> On 30/08/2007, Joshua Gargus <[hidden email]> wrote:
>> Squeak currently includes a fisheye morph that provides a distorted
>> view of what is underneath it.  What you have written sounds like it
>> would not support such morphs.  If I'm misunderstanding you, could
>> you try to restate your thought to give me another chance to  
>> understand?
>>
> Well, I think you understood correctly. A morph can't rely on the
> results of previous drawings.
> There are some morphs, like the magnifying lens, which use the current
> state of the screen to draw their effects.
> But suppose you are rendering a lens morph to a PostScript file, or to a
> canvas which forwards drawings over the network, or you are using a HUGE
> drawing surface which simply cannot fit in main memory. These are simple
> reasons why a morph should not access any current state.
> To get around the problem you can instead redraw a portion of the world,
> applying your own transformations/effects before redrawing.
>
>> Thanks,
>> Josh
>>
>>
>> On Aug 29, 2007, at 10:17 PM, Igor Stasenko wrote:
>>
>>> Forgot to add..
>>>
>>> Morphs must not rely on any current display-medium state, such as the
>>> background color or the results of previous drawings, because some
>>> media types cannot hold or provide their current state in a form that
>>> can be easily accessed or manipulated.
>>> Any internal/current state of a display medium should be accessible
>>> and manageable only by its canvas object.
>>>
>>> --
>>> Best regards,
>>> Igor Stasenko AKA sig.
>>>
>>
>>
>>
>
>
> --
> Best regards,
> Igor Stasenko AKA sig.
>



Re: Morphic 3.0: The future of the Gui

Joshua Gargus-2
In reply to this post by K. K. Subramaniam
I haven't read the paper, but the video explains the idea quite  
well.  The algorithm operates at the pixel level.  When shrinking, it
removes pixels without adjusting the newly adjacent ones, and when
growing it creates new pixels by averaging the pixels they are
inserted between.  I think this technique is great,
because it provides good results, yet is MUCH simpler to implement  
than the other texture-synthesis techniques that have been published  
at SIGGRAPH in recent years.

Unfortunately, this pixel-centricity means that the algorithm would
not be good for high-quality font resizing.  Another reason it would
not work for fonts is that the energy function basically runs the
image through an edge detector and then finds low-energy paths
through the resulting image.  This proves to be a good heuristic for
photographs, but fonts are pretty much "all edge" to an edge detector
(I'm thinking of normal-sized document fonts, not the large fonts
used in, e.g., an advertisement), so the heuristic would be unlikely
to make good choices about which pixels to remove.
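
To make the energy/low-energy-path idea concrete, here is a rough
workspace-style sketch (my own illustration in plain Smalltalk; nothing
here is existing Squeak code or the authors' implementation).  It builds
a crude gradient-based energy map for a tiny grayscale "image" given as
rows of numbers, then uses dynamic programming to find the cost of the
cheapest vertical seam, i.e. the path of pixels a shrink step would remove:

| image rows cols energyAt cost |
"A tiny grayscale 'image'; the bright column is an 'edge' the
 heuristic should avoid."
image := #(#(10 10 200 10)
           #(10 10 200 10)
           #(10 10 200 10)).
rows := image size.
cols := image first size.
"Energy = sum of absolute horizontal and vertical differences
 (a crude edge detector)."
energyAt := [:r :c | | px dx dy |
    px := (image at: r) at: c.
    dx := c < cols ifTrue: [(((image at: r) at: c + 1) - px) abs] ifFalse: [0].
    dy := r < rows ifTrue: [(((image at: r + 1) at: c) - px) abs] ifFalse: [0].
    dx + dy].
"Dynamic programming: cheapest cost of any vertical seam ending at (r, c);
 a seam may move at most one column left or right per row."
cost := Array new: rows.
1 to: rows do: [:r |
    cost at: r put: (Array new: cols).
    1 to: cols do: [:c | | best |
        best := r = 1
            ifTrue: [0]
            ifFalse: [((c - 1 max: 1) to: (c + 1 min: cols))
                inject: Float infinity
                into: [:m :cc | m min: ((cost at: r - 1) at: cc)]].
        (cost at: r) at: c put: (energyAt value: r value: c) + best]].
Transcript show: (cost at: rows) printString; cr.

The smallest entry in the last row marks where the cheapest seam ends;
removing that seam deletes one low-energy pixel from every row.  For a
glyph, almost every pixel sits on a high-energy edge, so there is no
cheap seam to remove.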

The basic problem with high-quality pixel-based font resizing in  
general (not just this technique) is that font design is a very  
difficult skill; experts make a living doing nothing but designing  
fonts.  These people are artists with a fine sense of balance,  
symmetry, etc. who have developed a deep intuition about how slight  
changes in a font will be perceived (typically unconsciously) by the  
viewer.  One pixel can easily make the difference between an  
excellent character and complete rubbish. In order to approach this  
level of performance, the computer program would need to test  
potential font modifications against a model of human visual  
perception.  Even harder, the model should be as sophisticated (read  
"refined", not "complicated") as an expert font designer, not the  
average person.

If available, a much better choice is to use an outline-based font
(e.g., TrueType) and render it at the desired size.

Josh


On Aug 30, 2007, at 8:15 AM, subbukk wrote:

> On Thursday 30 August 2007 10:55 am, [hidden email] wrote:
>> Have you seen this video about image resizing
>> http://www.youtube.com/watch?v=qadw0BRKeMk
> Brad,
>
> This is a very interesting approach to image resizing. The video  
> shows only
> photographic images. Has it been applied to font resizing?
>
> Regards .. Subbu
>



Re: Morphic 3.0: The future of the Gui

Brad Fuller-2
In reply to this post by Joshua Gargus-2
On Wed August 29 2007, Joshua Gargus wrote:

> On Aug 29, 2007, at 11:04 PM, [hidden email] wrote:
> >>> Another thought: I always wanted to be able to have _any_ object
> >>> that communicates to the
> >>> user to be a morph. This would be any graphic object, but also
> >>> any video object, audio object,
> >>> anything that deals with the senses of the user. What do you
> >>> think about this idea?
> >>
> >> This is already so. see #asMorph message :)
> >> You can build up any morph(s) for representing your object.
> >
> > I'm not referring to building a graphic object to represent a sound
> > (for instance), I want to manipulate a sound or video just like I
> > can a morph graphic object.
>
> I'm interested...
>
> What do you mean by this?  Do you want any object to be able to
> provide a default graphical representation of itself and its
> > properties?  Do you want to inherit a morph's ability to #step?  Do
> you want to give non-visual objects like sounds a position so that
> you can hear them pan back and forth?

I haven't thought about it too deeply or lately. But I think it would be
nice, compositionally, if I could massage a sound just like I can a graphic
morph: alter its 3D position (bigger, farther away, left, up, etc.); stretch
or shrink it both horizontally (time) and vertically (e.g. instrumentation,
orchestration, harmonics); and shape its timbre over time.
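
Just to make that concrete, here is a tiny workspace sketch using sound
classes that already exist in Squeak (FMSound, MixedSound, and sound
concatenation with #,); mapping "horizontal stretch" to duration and
"left/right position" to pan is only my illustration, not an existing
Morphic facility:

| note stretched panned |
"A one-second FM 'note' at 440 Hz."
note := FMSound new setPitch: 440.0 dur: 1.0 loudness: 0.5.
"'Stretching horizontally' = stretching in time."
stretched := FMSound new setPitch: 440.0 dur: 2.0 loudness: 0.5.
"'Moving to the left' = panning; 'moving farther away' could lower loudness."
panned := MixedSound new add: note copy pan: 0.1; yourself.
(note, stretched, panned) play.

A morph-like sound object would put this kind of manipulation behind the
same direct-manipulation gestures we use on graphic morphs.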

It'd be nice if sounds could accept drops, so that a sound could be dropped
onto another sound to create sub-sounds (I guess you could call them that).
Time is an issue here, but not hard to accommodate.

And sounds could be anything from a note or a sample to a complete piece
of music.

FM synthesis is already in Squeak - there are lots of things we could do
with an FM sound beyond the traditional setting up of operators, feedback,
LFOs, etc., like FM/spectral morphing.

I'm not thinking of linear compositional tools such as sequencers or
languages like MusicV, but something more multidimensional AND at the object
level. I guess more along the lines of CSound or SuperCollider.

The issue with both video and audio is the dimension of time, which would
require more thought beyond the concept of stepping.

The point of my original comment was that maybe Morphic 3.0 should step up a
level and include all forms of multimedia at the individual object level.
If we want to do that, I would be involved, AND I could find other composers
who would love to help (or at least comment on the
ideas/features/architecture).

brad


Re: Morphic 3.0: The future of the Gui

Bert Freudenberg
In reply to this post by Tapple Gao

On Aug 30, 2007, at 0:20 , Matthew Fulmer wrote:

> On Thu, Aug 30, 2007 at 12:14:03AM -0300, Juan Vuletich wrote:
>> Hi Folks,
>>
>> I started to write a "paper to be" about my Morphic 3.0 project. The
>> objective is to convince you that Morphic 3.0 is the coolest thing
>> around :). The first draft is available at www.jvuletich.org. I  
>> hope you
>> enjoy it. Any comment is welcome.
>
> Nice. I just had one question: would gamma correction be taken
> into account at all?
>
> Especially when rendering text, light-on-dark always looks
> bigger than dark-on-light for exactly the same shape. Is it
> possible to (in general) render a shape with constant "mass" in
> the face of gamma inconsistencies? For text, this would involve
> procedural bolding/thinning.

Well, the Right Thing To Do might be rendering and compositing in
linear color space and only applying gamma when pushing to the
screen ... that means you have to have color components of higher
resolution (16-bit fixed point? 32-bit floats?) and/or do real
super-sampling ... all of which is expensive. But it would be cool to
have (though most people wouldn't care and would rather take
performance over being "correct").

- Bert -




Re: Morphic 3.0: The future of the Gui

Igor Stasenko
In reply to this post by Juan Vuletich-4
On 30/08/2007, Juan Vuletich <[hidden email]> wrote:
> I think the problem is well stated. I understand arguments on both
> sides. It's hard to make a decision...
>
Well, I don't think we lose any features with this. There are a couple of
ways to get around the problem, like using special caching canvas(es).
I think you know why I'm against operations like reading the contents of
the screen for use in effects. With OpenGL, for instance, you actually can
obtain the pixel data, but as you may know it is a very slow operation and
not recommended inside rendering cycles. It is mainly used to take
screenshots, not to render effects.
It is sometimes faster and better to redraw a given portion of the screen
by issuing the drawing commands a second time, rather than capturing pixel
data into a buffer.
Also, please note that a feature like reading pixel data is mostly
available only on display devices. Supporting these operations for
printers, PostScript files, or network canvases would require a huge
amount of memory, not to mention that it can be incorrect and totally
ineffective.
Distortion effects are only possible to render if you use the device
purely for drawing bitmaps. But suppose I issue a command to a printer,
like "draw this string of text with this font". I don't need to distort
it, because the printer uses its own font glyphs and rasterises them in
hardware while printing on paper.
Of course you can rasterise the glyphs before sending them to the printer;
this may be a more 'compatible' approach, but it is very inefficient: you
are then forced to rasterise the whole document and send it to the printer
as one big blob of bitmap.
And then we have plotters, on which we can draw using only vectors. If we
support only pixel devices, we bury the possibility of rendering on vector
devices, or any other devices which simply don't support pixel rendering.
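
To sketch what I mean by a caching canvas (the class and selectors below
are made up for illustration and are not existing Squeak code, except that
fillRectangle:color: and drawString:at:font:color: are real Canvas drawing
messages): it remembers the drawing commands as blocks, so a lens or
fisheye morph can ask for the affected region to be replayed on a canvas
with its own transformation applied, instead of reading pixels back from
the device.

Object subclass: #RecordingCanvas
    instanceVariableNames: 'commands'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'Morphic3-Sketches'

RecordingCanvas >> commands
    ^ commands ifNil: [commands := OrderedCollection new]

"Each drawing message is remembered as a block over a real canvas."
RecordingCanvas >> fillRectangle: aRectangle color: aColor
    self commands add:
        [:realCanvas | realCanvas fillRectangle: aRectangle color: aColor]

RecordingCanvas >> drawString: aString at: aPoint font: aFont color: aColor
    self commands add:
        [:realCanvas |
            realCanvas drawString: aString at: aPoint font: aFont color: aColor]

"Replaying re-issues the commands on any canvas: the screen, a transformed
 canvas under a lens, a PostScript canvas, a network canvas, and so on.
 No pixel read-back is needed."
RecordingCanvas >> replayOn: realCanvas
    self commands do: [:each | each value: realCanvas]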


> Juan Vuletich
> www.jvuletich.org
>
> Igor Stasenko wrote:
> > On 30/08/2007, Joshua Gargus <[hidden email]> wrote:
> >
> >> Squeak currently includes a fisheye morph that provides a distorted
> >> view of what is underneath it.  What you have written sounds like it
> >> would not support such morphs.  If I'm misunderstanding you, could
> >> you try to restate your thought to give me another chance to understand?
> >>
> >>
> > Well, I think you understood correctly. A morph can't rely on the
> > results of previous drawings.
> > There are some morphs, like the magnifying lens, which use the current
> > state of the screen to draw their effects.
> > But suppose you are rendering a lens morph to a PostScript file, or to a
> > canvas which forwards drawings over the network, or you are using a HUGE
> > drawing surface which simply cannot fit in main memory. These are simple
> > reasons why a morph should not access any current state.
> > To get around the problem you can instead redraw a portion of the world,
> > applying your own transformations/effects before redrawing.
> >
> >
> >> Thanks,
> >> Josh
> >>
> >>
> >> On Aug 29, 2007, at 10:17 PM, Igor Stasenko wrote:
> >>
> >>
> >>> Forgot to add..
> >>>
> >>> Morphs must not rely on any current display-medium state, such as the
> >>> background color or the results of previous drawings, because some
> >>> media types cannot hold or provide their current state in a form that
> >>> can be easily accessed or manipulated.
> >>> Any internal/current state of a display medium should be accessible
> >>> and manageable only by its canvas object.
> >>>
> >>> --
> >>> Best regards,
> >>> Igor Stasenko AKA sig.
> >>>
> >>>
> >>
> >>
> >
> >
> >
>
>
>


--
Best regards,
Igor Stasenko AKA sig.
