Re: "hardware pixels aren't hardware pixels"


Re: "hardware pixels aren't hardware pixels"

Aaron Brancotti

Hi all,

I received a polite, private mail regarding a previous post of mine - the
one ending with the statement "hardware pixels are hardware pixels" - in
which I was asked to "revise it publicly with some study herein
recommended", which should also "stimulate my research". My research is
already so stimulated that here I am, back again. Thanks.

Well, there was something at the back of my mind about what was proposed
(using eye-tracking inside an HMD to augment the perceived resolution),
and yes, the well-known idea of anti-aliasing demonstrates that
"hardware pixels aren't hardware pixels", after all.

Sure, after such a statement someone might take me for a complete newbie,
even though I mentioned in my first message (which also contained a line of
introduction, as netiquette asks when you first join) that, as far as this
mailing list is concerned, I come from an older wave of VR development back
in 1992, so I probably know something about antialiasing, mip-mapping and
all the stuff you can find in a Foley-van Dam. I don't really mind being
taken for a newbie - after all, we don't know each other personally - and
after 29 years of coexistence with computers, having touched nearly every
field, from databases and telephony to AI and genetic programming, passing
through medical and educational software, I have no residual need to prove
anything. But I will be more careful in the future.

Back on track: sure, you could do eye-tracking inside an HMD and do what
was described, if you are very good with mechanics and optics and you don't
care about making something you can mass-market and sell off the shelf for
< $300 within a couple of years - I do, but hey, we are all researchers and
hackers inside (we wouldn't be on the Croquet mailing list otherwise). It
would work, even though you would need good filtering: I suspect that
adding the intrinsic noise of a head-tracker, be it gyroscopic or
electromagnetic, to the intrinsic noise of an eye-tracker would yield quite
a noisy information stream. But, again, nothing a good filter could not
tame. Just a question of fine-tuning everything.
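
A minimal sketch of what I mean, assuming a simple exponential smoothing
filter and invented noise figures (a real system would probably want a
Kalman filter instead):

import random

class ExponentialSmoother:
    """First-order low-pass: higher alpha = less lag, but more noise passes."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.state = None

    def update(self, sample):
        if self.state is None:
            self.state = sample
        else:
            self.state = self.alpha * sample + (1 - self.alpha) * self.state
        return self.state

# The two noise sources are independent, so their variances add; filter
# each stream before summing them into a gaze-in-world estimate.
head = ExponentialSmoother(alpha=0.3)  # head-tracker: large, slow motions
eye = ExponentialSmoother(alpha=0.5)   # eye-tracker: small, jittery motions

for t in range(100):
    head_yaw = 10.0 + random.gauss(0, 0.5)  # simulated noisy head yaw (deg)
    gaze_off = -5.0 + random.gauss(0, 1.0)  # simulated noisy gaze offset (deg)
    gaze_in_world = head.update(head_yaw) + eye.update(gaze_off)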

But I would not go in this direction anyway, or at least would not stop
there: as I already said, IMHO the HMD problems (the original post was
about HMDs; obviously this reasoning must be revised when talking about
projectors, CAVEs etc.) are more FoV- and optics-related (I mean lenses)
than resolution-related. LCDs with higher and higher resolutions will come,
pushed by the small-device market, as Les says somewhere else. Sure, you
can use such a trick, but you will always have the impression of wearing a
diving mask (here I AM a newbie, and if you, DeadGenome, think eye-tracking
could be used to overcome that ugly "box" effect, well, I am quite happy to
think that GOOD HMDs can be made... but again, I would like to see such
stuff in stores and not only in labs. I have been waiting a lot of years
now :) ).

It could be interesting for other applications without FoV problems,
though, like this one:

http://www.eurekalert.org/pub_releases/2008-01/uow-clw011708.php

Want to try something else with eye-tracking? Exploit the difference in
resolution between foveal and peripheral vision: track where one is looking
and use more powerful rendering techniques to draw much better images (and
not just higher perceived resolution) only where needed. Sit in front of a
very big, hi-res LCD display, do some eye-tracking, and do real-time
raytracing + radiosity + all the best rendering you can imagine within a
300-pixel radius, just where you are looking, and draw everything else in a
very fast and rough manner. I believe you could achieve interesting frame
rates and perceived image quality with something like this, which is more
interesting than just "multiplexing pixels" IMHO.

This can be pushed further: develop a plugin for Maya or other pro 3D
packages (to be used in conjunction with an eye-tracker) able to track the
eye movements of a 3D developer so that, when rendering frames and even
animations, only the "interesting" zones are rendered in full detail,
leaving everything else rougher and faster. This will NOT eliminate the
need for the definitive, super-high-detail rendering of your 3D Hollywood
movie, but it will speed up development a lot. Or maybe you won't get a
speedup but, at least, you can use the computing power you don't spend on
uninteresting zones to render the interesting ones in much higher detail,
which is interesting anyway (but I would like to hear from a 3D movie
producer... it's just an idea; maybe I am completely wrong and the
bottlenecks are not there).
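
A sketch of the bookkeeping such a plugin would need (everything here is
hypothetical; the idea is just to accumulate the artist's fixations into a
per-tile quality map that the renderer consults):

import numpy as np

TILE = 64  # screen split into 64x64-pixel tiles
W, H = 1920, 1080
heat = np.zeros((H // TILE + 1, W // TILE + 1))

def record_fixation(x, y, weight=1.0):
    """Called from the (hypothetical) eye-tracker callback while working."""
    heat[y // TILE, x // TILE] += weight

def quality_for_tile(tx, ty, levels=4):
    """Map accumulated attention to a renderer quality level, 0 = roughest."""
    if heat.max() == 0:
        return 0
    return int(round((levels - 1) * heat[ty, tx] / heat.max()))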


Deadgenome: I meta-copy you:

Legal Note - The content and concepts conveyed in this email are covered by the latest version of the Gnu GPL -
http://www.gnu.org/copyleft/gpl.html


Ciao,
Aaron Brancotti/Babele Dunnit


Re: Re: "hardware pixels aren't hardware pixels"

Joshua Gargus-2

On Feb 22, 2008, at 3:14 AM, Aaron Brancotti wrote:

> Want to try something else with eye-tracking? Exploit the difference in
> resolution between foveal and peripheral vision: track where one is
> looking and use more powerful rendering techniques to draw much better
> images (and not just higher perceived resolution) only where needed.
> [snip]

A quick pointer to anyone interested in this line of inquiry... Ben Watson
did some research in this area in the 90s
(http://dgrc.ncsu.edu/watson/projects/periphery/index.shtml); following the
references to his papers should lead you to the state-of-the-art.

Josh

Re: Re: "hardware pixels aren't hardware pixels"

deadgenome -.,.-*`*-.,.-*`*-
In reply to this post by Aaron Brancotti
> ... (here I AM a newbie, and if you, DeadGenome, think eye-tracking could
> be used to overcome that ugly "box" effect, well, I am quite happy to
> think that GOOD HMDs can be made... but again, I would like to see such
> stuff in stores and not only in labs. I have been waiting a lot of years
> now :) ).

I was originally looking at this idea to build a system with a limited FoV,
and was looking at the fibre optic just as a way to shift the beamer into
the base unit, which makes the whole system easier to make sportsproof
(which I thought was important for getting this sort of tech into the
commercial market) and gets all the bulky stuff off the head, so that it
could be integrated into something like a pair of sunglasses.

Design 1 used a fibre bundle, one fibre per pixel, but I realised that this
was a hell of a lot of fibre and would be a *female dog* to align with the
DLP, as well as needing a fat umbilical. Then I remembered a programme I
had seen demonstrating single-fibre endoscopes and realised that I could
run the same concept in reverse. Once I had got to the concept of moving
the fibre to generate the scan pattern, it was a simple step to realise
that, by moving the scan pattern around, eye tracking could be used to
achieve a very wide FoV indeed while keeping the beamer resolution down to
an acceptable level.

> Want to try something else with eye-tracking? Exploit the difference in
> resolution between foveal and peripheral vision: track where one is
> looking and use more powerful rendering techniques to draw much better
> images (and not just higher perceived resolution) only where needed.
> [snip]

Your idea is great... if the sweep on my system is made a little wider and
the beamer optics also give the outer part of the sweep less resolution
than the centre (the centre will also have to be lit more brightly, to
compensate for the fact that the fibre is moving fastest at that point),
and we combine this with your rendering concepts, we could be well on the
way to something that could produce insanely good graphics off a belt-worn
unit.
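
A back-of-the-envelope sketch of one sweep (all constants invented; it only
shows the gaze-shifted sweep and the speed-compensated brightness described
above):

import math

SAMPLES = 1_000  # pixels clocked out per sweep

def sweep(gaze_x, amplitude=1.0):
    """One resonant sweep of the fibre, x(t) = gaze + A*sin(wt).

    The tip moves fastest through the centre of the swing, so centre
    pixels get the least dwell time and need the most light, while the
    turnaround points at the edges get the most dwell time.
    """
    for i in range(SAMPLES):
        phase = 2 * math.pi * i / SAMPLES
        x = gaze_x + amplitude * math.sin(phase)
        speed = abs(amplitude * math.cos(phase))  # |dx/dt| up to a constant
        gain = speed / amplitude                  # brightness ~ tip speed
        yield x, gain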

Another addendum to this is the idea of putting a couple of cameras facing
forwards on the frame of the sunglasses... these could either be CCDs on
the frames themselves, or a couple more vibrating fibres leading to CCDs in
the base unit... as long as the cameras/fibres are able to see UV, visible
and IR (this may require more than one fibre per camera to do cheaply) and
have a range of filters to cut out unwanted frequencies - ta-da... a
Predator HUD... :) This could also be combined with a GPS plugged into the
base unit to give a top-down, Quake-style map... very useful for
snowboarding in the dark.

This should make a damn good AR system; however, to extend the idea
further, just put LCD shutters into the sunglasses to shut out the real
world when required (possibly triggered by dangerous events, so that you
never lose your cool by seeing something that could upset or alarm you -
thanks D.A.) and we then have a damn good VR system as well.

<< meta-meta-copy   consider-this-section-to-be-a-quote="false" >>
>
>  Deadgenome: I meta-copy you:
>
>  Legal Note - The content and concepts conveyed in this email are covered by the latest version of the Gnu GPL -
>  http://www.gnu.org/copyleft/gpl.html
<</ meta-meta-copy >>

Re: Re: "hardware pixels aren't hardware pixels"

Paul Sheldon-2
In reply to this post by Aaron Brancotti

--- Aaron Brancotti <[hidden email]> wrote:

> Hi all,
>
> I received a polite, private mail regarding a previous post of mine - the
> one ending with the statement "hardware pixels are hardware pixels" - in
> which I was asked to "revise it publicly with some study herein
> recommended", which should also "stimulate my research". My research is
> already so stimulated that here I am, back again. Thanks.
Good.
>
> ...
> and yes, the well-known idea of anti-aliasing demonstrates that "hardware
> pixels aren't hardware pixels", after all.
Yeah, figures of merit of optical systems are much more interesting!
>
> ...
> Back on track: sure, you could do eye-tracking inside an HMD and do what
> was described, if you are very good with mechanics and optics and you
> don't care about making something you can mass-market and sell off the
> shelf for < $300 within a couple of years
Yeah, it might be tough to do; I was hoping someone out there would see it
as easy.

> I do, but hey, we are all researchers and hackers inside (we wouldn't be
> on the Croquet mailing list otherwise).
An extended early-design-phase group, I hope!

;-)

> It would work, even though you would need good filtering: I suspect that
> adding the intrinsic noise of a head-tracker, be it gyroscopic or
> electromagnetic, to the intrinsic noise of an eye-tracker would yield
> quite a noisy information stream.
Yeah, I might try to build up my courage with something less daunting, were
I to do it with just my own man-hours to put in.
> But, again, nothing a good filter could not tame. Just a question of
> fine-tuning everything.
>
> ... LCDs with higher and higher resolutions will come, pushed by the
> small-device market, as Les says somewhere else.
Though eyewear with computer-screen resolution already exists in $10,000
HMDs; sure, when such resolution has a market base in small devices, it
will get cheaper.
> Sure, you can use such a trick, but you will always have the impression
> of wearing a diving mask ...
A tunnel field of view is the problem, I see.

> It could be interesting for other applications without FoV problems,
> though, like this one:
>
> http://www.eurekalert.org/pub_releases/2008-01/uow-clw011708.php
On the lens?! Great field of view!

>
> Want to try something else with eye-tracking? Exploit the difference in
> resolution between foveal and peripheral vision: track where one is
> looking and use more powerful rendering techniques to draw much better
> images (and not just higher perceived resolution) only where needed.
I can only think of bypassing rasterization with analog electron-beam
directing, but maybe I don't know the new stuff. Microvision has
micro-mirrors to direct laser beams. Poisson sampling can antialias; there
was a research paper by Lucasfilm on this that Ivor Page at my university
showed me.
> Sit in front of a very big, hi-res LCD display, do some eye-tracking, and
> do real-time raytracing + radiosity + all the best rendering you can
> imagine within a 300-pixel radius, just where you are looking, and draw
> everything else in a very fast and rough manner.
So, though you pay for the hi-res display, you don't pay so much in
processor rasterization time.

> I believe you could achieve interesting frame rates and perceived image
> quality with something like this, which is more interesting than just
> "multiplexing pixels" IMHO.
The figure of merit is the impression of resolution, and quite rightly: we
want to match the display to the human viewer, if there is one human
viewing.
>
> This can be pushed further: develop a plugin for Maya or other pro 3D
> packages (to be used in conjunction with an eye-tracker) able to track
> the eye movements of a 3D developer so that, when rendering frames and
> even animations, only the "interesting" zones are rendered in full
> detail,
True! The human value of an individual must come into the figure of merit.
> leaving everything else rougher and faster. This will NOT eliminate the
> need for the definitive, super-high-detail rendering of your 3D
> Hollywood movie, but it will speed up development a lot.
I think I vaguely recall something like this for architects in an Autodesk
demo I went to (they also demoed on a super multicore Dell machine).
> ...
>
>
> Ciao,
> Aaron Brancotti/Babele Dunnit
>

Good to hear all this stuff. Thank you.



Foveal and peripheral vision

Aaron Brancotti-2
In reply to this post by Joshua Gargus-2

> A quick pointer to anyone interested in this line of inquiry... Ben
> Watson did some research in this area in the 90s
> (http://dgrc.ncsu.edu/watson/projects/periphery/index.shtml); following
> the references to his papers should lead you to the state-of-the-art.
> Josh


Woagh. Impressive pointer. Thanks, really. Well, it's SO difficult to come
up with a really original idea... :) But, on the other hand, now I can read
up on whether it is a good idea or not!



Re: Foveal and peripheral vision

Les Howell
On Sat, 2008-02-23 at 22:43 +0100, Aaron Brancotti wrote:

> > A quick pointer to anyone interested in this line of inquiry... Ben
> > Watson did some research in this area in the 90s
> > (http://dgrc.ncsu.edu/watson/projects/periphery/index.shtml); following
> > the references to his papers should lead you to the state-of-the-art.
> > Josh
>
> Woagh. Impressive pointer. Thanks, really. Well, it's SO difficult to
> come up with a really original idea... :) But, on the other hand, now I
> can read up on whether it is a good idea or not!
>
I believe that the concept of how the eye works was explored a few decades
ago. I remember reading something about it in some conceptual papers on
machine vision in the early '80s or late '70s, but the technology of the
time couldn't make much use of it. Also, I believe that work was about
making machine vision equivalent to human vision, not about what is
described here.

But I wonder about eye-strain. I have worked in an environment where lenses
distorted the peripheral field, and I got extreme headaches from using it.
And today I have glaucoma, which mucks up lots of areas of your vision,
requiring you to move your focal point, and your brain to interpolate what
you see, to extract the lines and intelligence out of it.

I have a similar problem with my hearing, where I lost much of the
high-frequency spectrum, so my brain is working overtime just to make sense
of the world. My hearing was so bad that when I started using hearing aids
I practically had to learn to listen to people all over again. I still
don't recognize some words well.

This is related because, if you think of the bandpass of hearing as
analogous to the foveal vision discussed here, the higher frequencies are
the peripheral vision of hearing. Missing frequency components (which is
more or less what is being discussed) removes some areas of recognition.

While it is probably worth studying, I would bet it lacks sufficient
consumer benefit, due to the stress factors and other issues that would
lead to customer dissatisfaction.

regards,
Les H



Re: Foveal and peripheral vision

Aaron Brancotti

Les,

This is a really good point. My assumption was this: I remember having read
somewhere that the vast majority of eye receptors are located in the fovea,
AFAIK. So "deleting information" from peripheral vision SHOULD (I repeat,
SHOULD) not even be detectable. This is different from losing high
harmonics (be they audio or visual) due to some "sensor" damage: in that
case your brain WAS accustomed to processing that information, so it will
definitely miss it, and will need to develop different strategies (call
them algorithms, or neural paths, or...) to reconstruct the missing
information.

So, if my assumption is correct, you should not notice any change when
rendering at low res in the peripheral zones of your field of view, because
you do not have enough "sensors" there... we were BORN low-res there... and
our brain has been trained that way for as long as we have been alive. I
repeat: As Far As I Know. :)

One thing that impressed me, BTW, in the studies that were pointed at: the
statement that, when wearing an HMD, one need render high-res only in a
30-degree square of the whole FoV... it means that we normally don't move
our eyes more than 30 degrees from the straight-ahead direction and, if we
need to, we start turning our head instead. This is VERY interesting...


Les wrote:
> But I wonder about eye-strain.  I have worked in an environment where
>  
[snip]

> While it is probably worth studying, I would bet it lacks sufficient
> consumer benefit due to the stress factors and other issues that would
> lead to customer dissatisfaction.
>
> regards,
> Les H


Re: Foveal and peripheral vision

Florent THIERY-2
> So, if my assumption is correct, you should not notice any change when
> rendering at low res in the peripheral zones of your field of view,
> because you do not have enough "sensors" there... we were BORN low-res
> there... and our brain has been trained that way for as long as we have
> been alive. I repeat: As Far As I Know. :)

Then the only purpose of lower-quality peripheral rendering would be to
save computing power...

Any news about visibility/distance-dependent rendering quality/detail? I
remember a conversation from a while ago...

Regards,

Florent

Re: Foveal and peripheral vision

Aaron Brancotti
Florent wrote:
> Then the only purpose of lower-quality peripheral rendering would be to
> save computing power...
Mainly, yes, but you could also gain some advantage if you have some kind
of bandwidth problem, since you can trade information against time. For
example, there is no need to transfer a full face image in order to do
eye-tracking: you can use a smaller Region Of Interest (ROI) just around
the eyes and nose and get a much higher framerate... unfortunately, this
concept is covered by a patent (yecchhhh!).

Thinking of Deadgenome's fibre moving in front of your eyes, using such a
trick means less stuff you must send to it, which means a faster framerate.
The fastest pixels are the ones you don't draw (old computer-scientist
motto)... :)
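
A trivial sketch of the ROI trick (frame source and coordinates are
hypothetical; the point is that the tracker only ever processes, or
transmits, the small crop):

import numpy as np

def crop_roi(frame, cx, cy, half=64):
    """Cut a (2*half)-pixel-square window around the last known eye position."""
    h, w = frame.shape[:2]
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return frame[y0:y1, x0:x1]

full = np.zeros((480, 640), dtype=np.uint8)  # stand-in for a camera frame
roi = crop_roi(full, cx=320, cy=200)
# 128x128 = 16,384 pixels vs. 640x480 = 307,200: about 19x less data per
# frame, so the camera/link can run at a correspondingly higher framerate.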

One could also argue that we don't need that computing-power saving after
all, because our computers are already quite powerful and graphics are now
done in hardware. I quite agree, but not completely... having "some more"
power is always a good thing. And the holy grail of real-time,
super-high-detail rendering (think Pixar and DreamWorks) has still to be
reached, even with the most powerful graphics workstations, so IMHO this is
not a purely academic line of thought.

Aaron


Re: Foveal and peripheral vision

Florent THIERY-2
> Mainly, yes, but you could also gain some advantage if you have some kind
> of bandwidth problem, since you can trade information against time. For
> example, there is no need to transfer a full face image in order to do
> eye-tracking: you can use a smaller Region Of Interest (ROI) just around
> the eyes and nose and get a much higher framerate... unfortunately, this
> concept is covered by a patent (yecchhhh!).

This is still fine for remote 3D rendering: given a custom codec, with HQ
encoding in the centre and lower resolution in the outer area, you'd get
the additional peripheral immersion for practically no cost...
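
A minimal sketch of such a codec's core idea, using plain JPEG as a
stand-in (Pillow's quality parameter is real; the two-layer scheme and all
figures are invented):

from io import BytesIO
from PIL import Image

def encode_foveated(img, gaze_x, gaze_y, radius=300, q_hi=90, q_lo=20):
    """Encode the gaze window at high JPEG quality, the rest at low quality."""
    x0, y0 = max(0, gaze_x - radius), max(0, gaze_y - radius)
    x1 = min(img.width, gaze_x + radius)
    y1 = min(img.height, gaze_y + radius)

    center = BytesIO()
    img.crop((x0, y0, x1, y1)).save(center, "JPEG", quality=q_hi)

    periphery = BytesIO()
    img.save(periphery, "JPEG", quality=q_lo)  # cheap full-frame base layer

    # Ship both; the client decodes the base layer and pastes the HQ
    # window back at (x0, y0).
    return periphery.getvalue(), center.getvalue(), (x0, y0)

frame = Image.new("RGB", (1920, 1080))
base, hq, origin = encode_foveated(frame, gaze_x=960, gaze_y=540)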

Re: Foveal and peripheral vision

Les Howell
In reply to this post by Aaron Brancotti
I don't remember the 30-degree figure, but I do know that there has been
considerable research at NASA and by MIT on head position vs. focal point.
The MIT Media Lab has done tons of research, starting way before most folks
realized that media was a useful application, and PARC before that.

When you leave data out of an image, you are removing information. Movement
is often sensed through receptor transitions by the neurons in the retina,
and processed in the matrix behind the eyes at the crossover of the optic
nerves to the brain (lots of motion and relationship abstraction occurs in
this area, but it is not yet fully understood). I am suspicious that
removing resolution, while not consciously noticed, will still affect
response times, owing to the lack of some not-yet-clearly-understood
precursor action arising from that crossover detection matrix. Moreover,
because we do not always know the fine-grained relationships of precursors
in our senses, they are often not accounted for in technology until
something adverse shows up.

I am going to relate some personal experiences here to illustrate some of
these effects.

I ride motorcycles, and I can tell you that there is information about the
road, other drivers, pedestrians and signage that affects me differently on
the bike than in the car. I cannot tell you the differences, only that I am
"adversarially alert" on the bike, much more so than when driving.

I can tell you that, as a lookout on a ship, I was more attuned to movement
on the far horizon than normal. Even without binoculars, I would often
sense something outside the normal field of view, and it would turn out to
be a whale spout on the horizon, or a mast just visible over the horizon
(approximately six miles away from the bridge of a destroyer).

Yes, this is anecdotal, and you can find the documentation of how the Navy
teaches lookouts: focus, stare, move your head one compass point (11.25
degrees), focus... and repeat for four hours. But this was learned over
about 500 years of captains finding out what worked and what didn't. Think
about the overlap requirement and the field of vision, and it makes sense.

But peripheral vision is what saves our lives. As a species we possess the
ability to see over about a 270-degree angle, from straight ahead to 135
degrees on either side. I suspect that if we tested some people from a
hunting village in the Congo or the Amazon, we would find that their angle
of vision and acuity are greater (a survival trait when you might be food
for the next large predator, and you need to predate to survive).

I have also had friends who served with in-country forces in Vietnam. Their
senses were on a totally different plane; they could perceive things moving
that didn't even appear to me. In short, the human nervous system is
adaptive, and while the optic system is not, the neurons it feeds, the
processing centres they enter and entwine within, and the brain itself,
are.

One other thing the loss of vision and hearing has done for me is to make
me aware that my brain interpolates, not always accurately, but well above
70%; and while it is generally accurate, it also causes misconceptions
about sight and hearing of which I was not aware, leading me to believe I
heard or saw something that others did not. It affects communication, and
can lead to errors in judgment. I believe this is the effect that will
occur if reduced resolution is used as discussed. It is not trivial, as the
error in perception is totally unknown and undetectable to the person (me,
in this case), but obvious to everyone else participating.

Regards,
Les H
On Mon, 2008-02-25 at 10:11 +0100, Aaron Brancotti wrote:

> One thing that impressed me, BTW, in the studies that were pointed at:
> the statement that, when wearing an HMD, one need render high-res only in
> a 30-degree square of the whole FoV... it means that we normally don't
> move our eyes more than 30 degrees from the straight-ahead direction and,
> if we need to, we start turning our head instead. This is VERY
> interesting...
> [snip]



Re: Foveal and peripheral vision (supporting Les's case against blurring peripheral)

Paul Sheldon-2

--- Les <[hidden email]> wrote:

> I don't remember the 30-degree figure, but I do know that there has been
> considerable research ...
>
> When you leave data out of an image, you are removing information.
> Movement is often sensed through receptor transitions by the neurons in
> the retina, and processed in the matrix behind the eyes at the crossover
> of the optic nerves to the brain (lots of motion and relationship
> abstraction occurs in this area, but it is not yet fully understood).
 
I vaguely recall that eye tracking of objects, when I am moving or they are
moving, might come from edge detectors outside the "high-focus" area.

An edge detector can't detect something defocused. And there might be
different sorts of high focus, where "pixelators" might think of only one:
the modulation transfer function, abbreviated MTF!

As an amateur astronomer, I have learned to look for dim objects with
averted vision. I "imagine" I see high resolution there. I'll try to convey
a little of the experience here. There is something peculiar about
deblurring sets of points, as opposed to general scenes with ordinary
spatial frequencies. This was mentioned to me as an aside, in a holographic
image-deblurring context, at a graduate school I attended. I did something
funny with my eyes with such star sets of points and a telescope, so that I
could focus without glasses and without changing the focus knob. I googled
a lot at the time to confirm that I had had this strange experience.

> I am suspicious that removing resolution, while not consciously noticed,
> will still affect response times, owing to the lack of some
> not-yet-clearly-understood precursor action arising from that crossover
> detection matrix.
Those who take frame-by-frame economy as their figure of merit may lose
what could be done by considering a movie, a whole bunch of frames,
instead.

I once got into severe trouble when assigned to program the calculation of
the luminance on one pixel, as a first step to telling what was going to
happen with a focal plane.

One pixel, one buck; 1000 pixels, $1000; a rewrite as a Cooley-Tukey fast
convolution, a salary for another three months. Another guy had a picture
language tooled up, and I was "out of the picture".

One pixel at a time, one frame at a time: both sorts of thinking might get
you out of the big picture.

> Moreover, because we do not always know the fine-grained relationships of
> precursors in our senses, they are often not accounted for in technology
> until something adverse shows up.
Like my manager saying, "Did you imagine that my inquiry would end at one
pixel?!"
>
> I am going to relate some personal experiences here ...
>
You have attuned to the mystery of human seeing based on years of
evolution, or on sea captains' hard experience, rather than on the plans of
theorists who don't die in the field.

I understand.

You emphasize how surprising the mystery of human visual adaptation can be,
compared to the sound-bite theoretical sales pitches that everyone can
easily understand and buy, and pay the piper for later.
> But peripheral vision is what saves our lives.
Spies are tested for good peripheral vision; cutting costs there would be
bad for spies, and probably for other people in their "situations".

>
> I have also had friends who served with in-country forces in Vietnam.
> Their senses were on a totally different plane; they could perceive
> things moving that didn't even appear to me. In short, the human nervous
> system is adaptive, and while the optic system is not, the neurons it
> feeds, the processing centres they enter and entwine within, and the
> brain itself, are.

We should match our interfaces not to the non-adaptive optical system but
to the brain, and that is a hard case to make in a sound bite for an
impatient public. One does development with close friends before foisting
it on the impatient public.

>
> One other thing the loss of vision and hearing has done for me is to make
> me aware that my brain interpolates, not always accurately, but well
> above 70%; and while it is generally accurate, it also causes
> misconceptions about sight and hearing of which I was not aware, leading
> me to believe I heard or saw something that others did not. It affects
> communication, and can lead to errors in judgment.
This can lead to the loss of friends and loves, relationships that are
critically important to personal value. Suppose that one is, out of
deference, using averted vision in face-to-face communication, and the
peripheral vision is blurred (convolutions multiply in the Fourier domain
and blur gets worse; the "matching blur to blur" concept might not hold -
rather, it is adding blur to blur). For lack of emotional intelligence, you
then suppose hostile behaviour and entertain dark theories about intent,
just as the Rand Corporation worries about in the netiquette of putting
down emoticons.

While a frequency cutoff at the Nyquist sampling frequency has great
meaning in certain contexts, this context might not be one of them.

One blurs after sampling an image, to smooth it. "After" here means in the
brain (assuming there were an endless regress of retinas, which there
isn't; there is much, much more).

The arguments do not support blurring before the sampling, as far as my
long studies have shown.

So, in summary, I've tried to support Les's case by actively listening to
him.

I hope that I have infected you all with the need to "match impedance" not
to pixels but to the little-understood brain.

I might also try to get opposing points of view, to finally reach some
sort of integration of views, but so far I've only read Les.