
Re: Foveal and peripheral vision (supporting Les's case against blurring peripheral)

Posted by Paul Sheldon-2 on Feb 28, 2008; 4:13pm
URL: https://forum.world.st/Re-hardware-pixels-aren-t-hardware-pixels-tp128969p128975.html


--- Les <[hidden email]> wrote:

> I don't remember the 30 degree figure, but I do know
> that there has been
> considerable research ...
> that.  
>
> When you leave data out of an image, you are
> removing information.
> Often movement is based upon receptor transitions
> sensed by the neurons
> in the retina, and processed in the matrix behind
> the eyes at the
> crossover of the eye stems to the brain (lots of
> motion and relationship
> abstraction occurs in this area but is not yet fully
> understood.)
 
I vaguely recall that eye tracking of objects, when
I'm moving or they are, might come from edge
detectors outside the "high focus" area.

An edge detector can't detect something that is
defocused. There might be different sorts of high
focus, of which "pixelators" might think of only one:
the modulation transfer function, abbreviated MTF!
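To make the MTF concrete: it is the normalized magnitude of the Fourier transform of the point-spread function (PSF). A minimal sketch, assuming a made-up 1-D Gaussian PSF (the size and width here are illustrative, not from any real system):

```python
import numpy as np

# Hypothetical 1-D Gaussian point-spread function (PSF).
n = 256
x = np.arange(n) - n // 2
sigma = 2.0                       # assumed blur width in pixels
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                  # unit total energy

# The MTF is the normalized magnitude of the PSF's Fourier transform.
mtf = np.abs(np.fft.fft(np.fft.ifftshift(psf)))
mtf /= mtf[0]                     # normalize so MTF(0) = 1
```

A wider PSF (more blur) gives a narrower MTF: contrast at high spatial frequencies is lost first, which is why the MTF is the "pixelator's" one number for sharpness.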

As an amateur astronomer, I have learned to look for
dim objects with averted vision. I "imagine" I see
high resolution there. I'll try to give a little of
the experience here. There is something peculiar
about deblurring sets of points as opposed to general
scenes with ordinary spatial frequencies. This was
mentioned to me as an aside, in a holographic
image-deblurring environment at a graduate school I
attended. I did something funny with my eyes with
such star sets of points and a telescope, so I could
focus without glasses and without turning the focus
knob. I googled a lot at the time to confirm I had
had this strange experience.

>I am
> suspicious that removing resolution, while not
> consciously noticed, will
> still affect response times due to the lack of some
> as yet not clearly
> understood precursor action arising from that
> crossover detection
> matrix.
Those whose figure of merit is economy frame by frame
may lose what could be done by considering a movie,
a whole bunch of frames.

I once got in severe trouble when assigned to program
the calculation of luminance on a pixel, as a first
step toward telling what was going to happen with a
focal plane.

One pixel, one buck; 1000 pixels, a thousand bucks.
A rewrite around a Cooley-Tukey fast convolution
earned a salary for another three months; then
another guy had a picture language tooled up and I
was "out of the picture".

One pixel at a time, one frame at a time: both sorts
of thinking might get you out of the big picture.
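The kind of speedup that anecdote alludes to is the convolution theorem: a per-pixel convolution costs O(n²), while multiplying FFTs costs O(n log n). A hedged sketch, with toy data and function names of my own invention:

```python
import numpy as np

def direct_convolve(signal, kernel):
    """Per-sample (per-pixel) convolution, the slow way: O(n^2)."""
    n = len(signal) + len(kernel) - 1
    out = np.zeros(n)
    for i, s in enumerate(signal):
        out[i:i + len(kernel)] += s * kernel
    return out

def fast_convolve(signal, kernel):
    """Same result via the convolution theorem: multiply FFTs."""
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

sig = np.array([1.0, 2.0, 3.0, 4.0])
ker = np.array([0.25, 0.5, 0.25])
assert np.allclose(direct_convolve(sig, ker), fast_convolve(sig, ker))
```

Both routines compute the same full linear convolution; only the cost per frame differs, which is exactly the frame-by-frame economy being argued about.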

>  Moreover, because we do not always know the
> fine grained
> relationships of precursors in our senses, it is
> often not accounted for
> in technology until something adverse shows up.
Like my manager saying, "Did you imagine that my
inquiry would end at one pixel!"
>
> I am going to relate some personal experiences here
> ...
>
You have attuned to the mystery of human seeing,
grounded in years of evolution or a sea captain's
hard experience, rather than in the plans of
theorists who never die in the field.

I understand.

You emphasize how the mystery of human visual
adaptation can surprise, compared to sound-bite
theoretical sales pitches that everyone can easily
understand, buy, and pay the piper for later.
> But the peripheral vision is what saves our lives.
Spies are tested for good peripheral vision; cutting
costs there would be bad for spies, and probably for
other people in their "situations".

>
> I also have had friends from in-country forces in
> Vietnam.  Their
> senses were on a totally different plane, and could
> perceive things
> moving that didn't even appear to me.  In short, the
> human nervous
> system is adaptive, and while the optic system is
> not, the neurons it
> feeds, the processing centers they enter and entwine
> within and the
> brain itself are.

We should make our interfaces not to the non-adaptive
optical system but to the brain, and that is hard to
build a sound-bite case for with an impatient public.
One does development with close friends before
foisting it on the impatient public.

>
> One other issue that the loss of vision and hearing
> have done for me is
> to make me aware that my brain interpolates, not
> always accurately, but
> well above 70%, and while it is generally accurate,
> it also causes
> misconceptions about sight and hearing of which I
> was not aware, leading
> me to believe I heard or saw something that others
> did not.  It affects
> communication, and can lead to errors in judgment.
This can lead to loss of friends and loves,
relationships that are critically important to
personal value. Suppose that one is, by deference,
using averted vision in face-to-face communication,
and the peripheral vision is blurred (convolutions
multiply in the Fourier domain and blur gets worse;
the idea of matching blur to blur might not hold,
rather blur adds to blur). Then, for lack of
emotional intelligence, you suppose hostile behavior
and spin dark theories about intent, just as the Rand
Corporation worries about in its netiquette of
putting down emoticons.
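The blur-adds-to-blur point can be checked numerically: convolving two Gaussian blurs yields another Gaussian whose variance is the sum of the two, so blurring an already-blurred image is strictly blurrier. A small sketch with made-up widths:

```python
import numpy as np

def gaussian(x, sigma):
    """Normalized discrete Gaussian kernel."""
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

x = np.arange(-100, 101).astype(float)
g1 = gaussian(x, 3.0)          # first blur, sigma = 3
g2 = gaussian(x, 4.0)          # second blur, sigma = 4

# In the Fourier domain the two transfer functions multiply;
# in the spatial domain that means their variances add.
combined = np.convolve(g1, g2, mode="same")

# Variance of the combined kernel: expect 3^2 + 4^2 = 25,
# i.e. an effective sigma of 5 -- blur on top of blur.
var = np.sum(combined * x**2) / np.sum(combined)
print(round(var, 1))  # 25.0
```

So the "matching" blur does not cancel the existing one; it compounds it, which is the worry about blurring what the averted eye is relying on.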

While a frequency cutoff at the Nyquist sampling
frequency has great meaning in certain contexts,
this context might not be one of them.
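For the contexts where the Nyquist limit does have meaning: a sinusoid above half the sampling rate is indistinguishable, at the sample points, from a lower-frequency one. A sketch with frequencies invented for the demo:

```python
import numpy as np

fs = 100.0                   # samples per second (illustrative)
t = np.arange(0, 1, 1 / fs)
f_high = 70.0                # above the Nyquist frequency fs/2 = 50

aliased = np.sin(2 * np.pi * f_high * t)
ghost = np.sin(2 * np.pi * (fs - f_high) * t)  # the 30 Hz alias

# Sampled at 100 Hz, the 70 Hz sine equals minus the 30 Hz sine
# at every sample point -- the classic aliasing identity.
print(np.allclose(aliased, -ghost))  # True
```

Whether the retina-plus-brain pipeline behaves like that idealized sampler is exactly the point in dispute here.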

One blurs after the sampled image to smooth it.
"After" means in the brain (assuming there were an
endless regress of retinas, which there isn't; there
is much, much more).

The arguments don't hold for blurring before the
sampling, as far as my long studies have shown.

So, in summary, I've tried to support Les's case by
actively listening to him.

I hope that I have infected you all with the need not
to "match impedance" to pixels but rather to the
little-understood brain.

I might also try to get opposing points of view, to
finally reach some sort of integration plan on views,
but I've only read Les so far.