to deconvolve before rasterization?

Paul Sheldon-2
Deblurring requires adding negative values, which might involve laser systems or a DC bias that loses contrast.

With sampling, information at reciprocal lattice vectors gets mixed up, so you don't know which frequencies it came from in order to put it back. The modulation transfer function model of degradation assumes information is not moved to a different frequency. Zeroing of brightness in the spatial domain can't be undone: an inverse doesn't exist.
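A small numerical sketch of that last point (my own toy example, not from any source in this thread): a box blur's transfer function has exact zeros, and once a frequency component has been multiplied by zero, no inverse filter can recover it.

```python
import numpy as np

# Hypothetical 1-D illustration: a 4-tap box blur on a 64-sample signal.
n = 64
signal = np.zeros(n)
signal[n // 2] = 1.0          # an impulse to be blurred

box = np.zeros(n)
box[:4] = 0.25                # 4-tap box blur, unit DC gain

H = np.fft.fft(box)           # the blur's transfer function (its "MTF")
blurred = np.fft.ifft(np.fft.fft(signal) * H).real

# The box filter's transfer function has exact zeros at every 16th bin
# (k = 16, 32, 48).  Content at those frequencies is multiplied by zero,
# so dividing by H -- naive inverse filtering -- is undefined there.
dead_bins = np.flatnonzero(np.isclose(np.abs(H), 0.0, atol=1e-12))
print(dead_bins)              # frequencies where no inverse exists
```

Regularized approaches (Wiener filtering, for instance) only sidestep this by accepting a biased estimate; they do not bring the zeroed information back.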

Suppose you spatial-frequency limited before and after rasterization: there would be no aliasing down, and the post-processing would mean no aliasing up.

Only reliable information for linear analysis would be passed.
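That pipeline can be sketched in a few lines (an assumed example of mine): a tone above the post-decimation Nyquist limit aliases down to a false low frequency if you decimate raw, but vanishes if you band-limit first, so only frequencies the coarse grid can honestly represent get through.

```python
import numpy as np

# Hypothetical setup: 256 samples decimated by 4, so the decimated
# sequence has 64 samples and a Nyquist limit of 32 cycles per window.
# The input tone sits at 40 cycles -- above that limit.
n, factor = 256, 4
t = np.arange(n)
x = np.sin(2 * np.pi * 40 * t / n)

raw = x[::factor]                      # decimate with no prefilter
alias_bin = np.argmax(np.abs(np.fft.rfft(raw)))
print(alias_bin)                       # 40 folds down to 64 - 40 = 24

# Ideal low-pass prefilter: zero every bin above the new Nyquist (32).
X = np.fft.fft(x)
X[33:-32] = 0.0
safe = np.fft.ifft(X).real[::factor]
print(np.abs(safe).max())              # ~0: the unrepresentable tone is gone
```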

Does the brain cut off above Nyquist? Is the rasterization random, like a Poisson process, which a Lucasfilm research paper said would allow resolution beyond the average sampling rate through what they called antialiasing?
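I believe the Lucasfilm work in question is Robert Cook's stochastic-sampling research; the effect can be sketched with a toy example of my own: jittering the sample positions trades a coherent alias for broadband noise, so no convincing false low-frequency pattern appears.

```python
import numpy as np

# Toy example (my assumptions: 64 samples per unit interval, a tone at
# 104 cycles -- far above the Nyquist limit of 32 cycles).
rng = np.random.default_rng(0)
n, freq = 64, 104.0

def tone(t):
    return np.sin(2 * np.pi * freq * t)

grid = np.arange(n) / n                     # regular sample positions
jittered = grid + rng.uniform(0, 1 / n, n)  # one random offset per cell

spec_reg = np.abs(np.fft.rfft(tone(grid)))
spec_jit = np.abs(np.fft.rfft(tone(jittered)))

# Regular sampling folds the tone into one sharp, convincing alias line
# (104 mod 64 = 40, which mirrors down to bin 24); jitter spreads that
# same energy across the spectrum as incoherent noise instead.
print(np.argmax(spec_reg))   # the single coherent alias line
```

The total energy is unchanged; jitter only redistributes it, which is why the eye reads the jittered result as grain rather than as a phantom pattern.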

In saccadic eye movements, does the brain average between pixels over the light-integration time and do something beyond Lucasfilm?

I believe rods and cones appear one-or-the-other dominant in foveal vs. peripheral vision.

The fovea has high-resolution color cells, while the periphery has low-light- and motion-sensitive cells, probably involved in tracking objects of interest for the fovea.

I wish I could save this as a draft for a bigger-than-iPhone Yahoo view on a computer and write some sort of gestalt summary.

Yahoo doesn't present an expandable text window on the iPhone, so I've got to send this and respond with the gestalt.

I suspect the gestalt is that we want to match higher levels of the visual system than the retina, and must be humble and patient to do so.

Re: to deconvolve before rasterization?

Les Howell
On Thu, 2008-02-28 at 12:05 -0800, PAUL SHELDON wrote:


Hi, Paul,
        I'm not as strong in the math, but I believe your summation is
pretty accurate.  Eye/color/resolution is one issue, but dealing with
eye movement without re-rasterizing the image would seem to offer a
significant advantage in perception.  Processor speeds continue to
climb, parallelism and scaling are also multiplying the available power
significantly, and I think that setting the standard higher is better.
It may appear slow and even somewhat "jerky" right now, but within a
couple of years, certainly fewer than five, the processing and
parallelism capabilities will overcome these issues.  Furthermore,
algorithm improvements are likely as well.  There are still some
unexplored areas of 3-D drawing yet to fall to thorough analysis, I
think.  One of the largest hurdles is network throughput, and that too
is beginning to rise exponentially.  "The truth is out there," as
Mulder would say.

Regards,
Les H