Cursor annoyMe showWhile: [ ]


Holger Kleinsorgen-4
Hello,

while loading a bunch of packages, the mouse cursor constantly changed
for some milliseconds and showed the full portfolio (database,
compacting, etc.). The mouse cursor was probably trying to direct my
attention away from the new Store progress animation ;)
Looking back at my experiences with VW, I can't think of a moment where
these special mouse cursors ever helped. Some cursors also look really
odd (for instance, the garbage cursor once was described by a customer
as a tongue). The cursors are used way too often in the base image. And
of course there's the problem with multiple non-UI-processes happily
switching the mouse cursor.
I would be happy if most senders/implementors could be retired.
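
For reference, the idiom in question looks roughly like this wherever it
appears in the base image (a schematic example only; the block contents
are made up):

   Cursor wait showWhile:
      [self loadDefinitionsFrom: aStream]   "placeholder for some long-running operation"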

Re: Cursor annoyMe showWhile: [ ]

Travis Griggs-3


http://www.cincomsmalltalk.com/userblogs/travis/blogView?showComments=true&printTitle=Cursor_consider_showWhile:_[Harmful]&entry=3432339015

A collage of Vassili Bykov's and my own opinions on the subject.

I remember when at Key, our techs would describe the garbage collect  
cursor as the "coffee pot" cursor.

--
Travis Griggs
Objologist
"Some people are like slinkies, not really good for much, but they can  
bring a smile to your face when you push them down the stairs."


Re: Cursor annoyMe showWhile: [ ]

Andre Schnoor
In reply to this post by Holger Kleinsorgen-4
+1

Cursor changes should be requested by top-level UI code only.

The standard way to deal with this nowadays is to have some fixed area  
in the associated application window (e.g. a status bar) where  
progress widgets and messages go that indicate that something is being  
processed. Temporary progress dialogs and cursor changes make for a  
very disruptive user experience, especially if there are many of them.

As we have this wonderful IncrementNotification signal, it should  
perhaps be provided with an optional message text and be used  
throughout the system, so interested applications can catch it and  
update their status widgets (or the cursor, if that is desired). Other  
non-UI code may silently ignore it.
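
For instance, an application that cares could wrap its long operation
roughly like this (just a sketch: #loadAllPackages and #showStatus: are
made-up selectors, and it assumes the notification carries the proposed
message text):

   [self loadAllPackages]
      on: IncrementNotification
      do: [:notification |
         self showStatus: notification messageText.   "update a status widget, not the cursor"
         notification resume]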

Andre



Re: Cursor annoyMe showWhile: [ ]

giorgiof
+1

giorgio


Re: Cursor annoyMe showWhile: [ ]

Andres Valloud-6
In reply to this post by Holger Kleinsorgen-4
Maybe you guys have been lucky and haven't been stuck with a GC that
takes several seconds on a fast machine.  And yes, sometimes that can
happen.  For example, right now, I am running some memory policy stress
tests that require 5-10 seconds' worth of GC to clean up each time.
Maybe they could be replaced with the busy cursor, but something should
be done to indicate that the app will be unresponsive for a bit...


Re: Cursor annoyMe showWhile: [ ]

Joachim Geidel
Andres,

On 04.04.10 at 07:17, Valloud, Andres wrote:
> Maybe you guys have been lucky and haven't been stuck with a GC that
> takes several seconds on a fast machine.  And yes, sometimes that can
> happen.  For example, right now, I am running some memory policy stress
> tests that require 5-10 seconds' worth of GC to clean up each time.
> Maybe they could be replaced with the busy cursor, but something should
> be done to indicate that the app will be unresponsive for a bit...

Been there, seen that. But when that happens in a real application, the
application has created too much garbage in the first place, and usually the
memory policy is misconfigured, too. If garbage collection is disruptive in
a GUI, then fix the design bugs which cause that: excessive creation of
temporary objects, wrong choice of algorithms, wrong distribution of logic
between clients and servers, misconfiguration of memory management etc.
Incremental garbage collection doesn't misbehave if configured correctly,
and blocking global garbage collection can usually be avoided - if done
correctly, you only get it when you ask for it.

The decision whether or not an indication of garbage collection is visible
should be left to the user interface designer. If garbage collection can
block the application, then raise an Announcement at the beginning and at
the end of the blocking part. Please don't mix GUI code (showing a cursor)
into the deepest layers of technical infrastructure! This violates the
design rule that there shouldn't be cyclical dependencies between code
components, and it doesn't give us a choice whether we want to see that
cursor or not. UI packages depend on kernel packages including memory
management, there shouldn't be a dependency in the opposite direction. It
also means that headless applications have to include GUI code and that GUI
code has to deal with headlessness.
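
Schematically, the idea is something like this (a sketch only: the
announcement classes, the announcer hook on ObjectMemory and the UI
selectors are all invented for illustration):

   "Kernel side: only announce the fact, no UI knowledge."
   ObjectMemory announcer announce: GlobalGarbageCollectStarting new.
   self runBlockingGarbageCollection.   "placeholder for the real GC entry point"
   ObjectMemory announcer announce: GlobalGarbageCollectFinished new.

   "GUI side: each interested client decides what, if anything, to show."
   ObjectMemory announcer
      when: GlobalGarbageCollectStarting do: [:ann | self showBusyIndicator];
      when: GlobalGarbageCollectFinished do: [:ann | self hideBusyIndicator]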

There is another issue here: Why does VisualWorks still have a default
MemoryPolicy which is clearly outdated, has default parameters which are
tuned for PCs from 1990, and contains code which is completely dysfunctional
and sometimes even harmful since VW 5i (threadedOTEntries,
primeThreadedDataList etc.)? Why are the default ObjectMemory size
parameters still the same as twenty years ago, tuned for machines with 32 MB
of RAM? Have a look at the questions about memory management regularly
popping up on this list. More often than not, setting the sizes of Eden and
SurvivorSpace to 10 or 20 times of the default is an easy way to get rid of
garbage collection problems. Why do I have to set the
incGCAccelerationFactor, which controls how much work is done per
millisecond by the incremental garbage collector, to 50, which is more
appropriate for today's processors than the default 5? Why doesn't
VisualWorks come with an adaptive memory policy which checks RAM size and
processor speed itself, and computes whatever parameters need adjustment
without needing help from developers? Why isn't the hard memoryUpperBound
just an option for special situations? Why can't the ObjectMemory space size
be changed at runtime by the application? Why do Eden and SurvivorSpace have
fixed sizes instead of being dynamically resized by the VM depending on the
number of objects created per unit of time or the amount of early tenuring?

Solve that, and you'll see disruptive garbage collection much less often.

Best regards,
Joachim Geidel



Re: Cursor annoyMe showWhile: [ ]

Andres Valloud-6
Joachim,

In general terms, I do not think it is possible to solve the problem of
e.g.: not creating a lot of garbage in the first place.  While the
problem may be mitigated most of the time, it really is not solvable.
Even for developers, I do not think it's acceptable not to know whether
you just broke the event sensor or the VM is performing GC.  I don't
necessarily care about the mechanism through which the GC cursors appear.
However, I think it's hard to argue GC cursors (or some other form of GC
activity visual notification) should not appear at all under any
circumstances.

You ask a number of technical questions, which I will try to answer.

> There is another issue here: Why does VisualWorks still have a default
MemoryPolicy which is clearly outdated, has default parameters which are
tuned for PCs from 1990, and contains code which is completely
dysfunctional and sometimes even harmful since VW 5i (threadedOTEntries,
primeThreadedDataList etc.)? Why are the default ObjectMemory size
parameters still the same as twenty years ago, tuned for machines with
32 MB of RAM? Have a look at the questions about memory management
regularly popping up on this list.

Maybe it's not widely known yet, but I have been debugging memory policy
problems reported by customers for quite some time.  Moreover, although
I cannot give you an answer with regards to how priorities were decided
in the past, I am working on exactly these problems for 7.7.1 (fka 7.8).
Somewhat coincidentally with your email, I have begun integrating ARs in
the past few days.  I still have quite a bit of work to do with regards
to stress testing, but we already have something significantly better
than what shipped in 7.7.

At this time, my first priority is to get the various bounds and edge
conditions right to prevent VM failure due to memory policy misbehavior.
It's not great to go really fast if you can still crash.  I would also
like it if memory policies tuned themselves for 64 bit images, instead
of imposing 32 bit values on 64 bit images.  After the memory policy is
stable, I will get to the idle loop and IGC actions and fix those as
well.  I would also like to do some sizesAtStartup tuning to match more
contemporary workloads.  So, if you (or anybody else) have specific
complaints about things like threadedOTEntries, primeThreadedDataList
and so on, please forward them to me so I can look at them.  I
understand these problems can be very tricky and take time to figure
out, so I'd appreciate any bits of hard earned knowledge you can share.

> More often than not, setting the sizes of Eden and SurvivorSpace to 10
or 20 times of the default is an easy way to get rid of garbage
collection problems.

In general terms, you can't really do that with today's memory policy
unless you exercise a lot of care.  The hard low space limit, the
available space safety limit, and the contiguous space safety limit must
be adjusted accordingly.  You should also adjust the old space headroom
in sizesAtStartup.  Larger new spaces could also imply larger remember
tables, so you have to account for that as well
(both in terms of performance, and in terms of available contiguous
memory so the RTs can always grow as needed).  You must ensure that the
preferred growth increment can cope with a larger new space.  The free
memory upper bound should be revisited so it does not induce a low
memory condition.  Finally, you should note that currently you need to
install a new memory policy as soon as an image starts after changing
sizesAtStartup (assuming it starts), because otherwise you will still be
using the old memory policy instance with stale values.  If you do not
do all of these, then you risk the VM exiting abruptly on you due to
scavenge failure.  Last but not least, I think there may be a point past
which it's just not productive to keep increasing the new space sizes
because they won't fit in CPU cache anymore.

In short, it's not that easy without tools.  Although I already fixed
some of these problems, there is the problem of changing the parameters
safely.  Consequently, I am working on three SUnit test suites.  The
first one will tell you if the current (or any other, non-active) memory
policy complies with the necessary invariants.  The failures will tell
you what is wrong.  A failure on this test suite means the memory policy
cannot prevent a scavenge failure.  The second test suite will make
tuning suggestions.  A failure in this test suite means that, even
though the VM will not crash, it may be possible to improve the tuning
of the memory policy.  The third test suite is a stress test suite which
can be used to verify that the image doesn't crash, that emergency
situations really are emergencies (e.g.: I am debugging why the current memory
policy can somehow fail to do a GC before it claims there's no memory
left), and, by measuring how much time it takes to complete, that the
memory policy is tuned reasonably.

Finally, you may be interested in the package "Memory Monitor" in the
public Store repository.  It's an improved version of John Brant and Don
Roberts' memory monitor.  Among other things, I added a headless mode
that dumps a nice CSV file for later analysis.  I have been using this
enhanced memory monitor to detect performance problems.

> Why do I have to set the incGCAccelerationFactor, which controls how
much work is done per millisecond by the incremental garbage collector,
to 50 which is more appropriate for todays processors than the default
5? Why doesn't VisualWorks come with an adaptive memory policy which
checks RAM size and processor speed itself, and computes whatever
parameters need adjustment without needing help from developers?

Yeah, well... :).  Roughly speaking, I think it would be better to skip
all the factors and have the IGC tune itself as it goes based on the
time it takes to do a bit of work.  Also, note you can't really
calculate the speed factors once and forget about them because you do
not know the load of the machine at the time you calculated the values,
or the load of the machine when they actually get used.  Hence, the
memory policy should adjust the IGC work slices on the fly.

> Why isn't the hard memoryUpperBound just an option for special
situations?

What do you mean?  Can you be more specific?  If you mean why should
there be a memoryUpperBound at all, consider what would happen with an
infinite recursion left unchecked overnight on a 64 bit machine.  And
note it could be an infinite recursion, or just a bug which causes an
allocation of far more data than can fit in RAM.  Although a 1TB byte
array may fit in virtual memory, it's probably not what you want.

> Why can't the ObjectMemory space size be changed at runtime by the
application?

What space exactly are you referring to?

> Why do Eden and SurvivorSpace have fixed sizes instead of being
dynamically resized by the VM depending on the number of objects created
per unit of time or the amount of early tenuring?

Right now, because they are allocated on startup and, after that, their
addresses are used to determine object locations by boundary checks.
So, if you reallocate the new spaces, the addresses of these spaces will
change and, most likely, they will fall between two old segments or
something similar.  If that happens, then the VM will crash very
quickly.  On the other hand, determining the type of object with one
integer (pointer) comparison is extremely fast.

If you want to adjust how much of the eden and survivor spaces are used
at runtime, thus providing some flexibility, look at the scavenging
thresholds for said spaces (ObjectMemory class>>thresholds).

> Solve that, and you'll see disruptive garbage collection much less
often.

Although I would like things to get better just as you, the above
improvements will not address every possible scenario.  For the cases in
which Murphy causes things to go wrong, I still think having absolutely
no GC activity indication is a bad idea.  I may be wrong, but I'd like
to change my mind after hearing a stronger argument first :).

Andres.


Re: Cursor annoyMe showWhile: [ ]

Holger Kleinsorgen-4
Hello, Andres

On 04.04.2010 at 10:13, Valloud, Andres wrote:
> Joachim,
>

> Finally, you may be interested in the package "Memory Monitor" in the
> public Store repository.  It's an improved version of John Brant and Don
> Roberts' memory monitor.  Among other things, I added a headless mode
> that dumps a nice CSV file for later analysis.  I have been using this
> enhanced memory monitor to detect performance problems.
> ...
> Although I would like things to get better just as you, the above
> improvements will not address every possible scenario.  For the cases in
> which Murphy causes things to go wrong, I still think having absolutely
> no GC activity indication is a bad idea.  I may be wrong, but I'd like
> to change my mind after hearing a stronger argument first :).

Evaluating the following code doesn't show any cursor feedback, although
the UI is completely unresponsive until you press Ctrl-Y:

   [
      100 milliseconds wait.
      [ 50 factorial ] repeat.
   ] forkAt: 60

There's no special mouse cursor for network streams waiting for data
either, which is another common reason for unresponsive UIs.

So why is the GC different and requires a special mouse cursor?

Besides, it's completely useless and sometimes confusing for the people
using an application. It's just an indicator for developers, and far
less useful than the Memory Monitor or similar tools.

Of course we can override all senders of #showWhile: in the base image,
or disable Cursor>>showWhile: and implement Cursor>>reallyShowWhile:, but
I think that the base classes would benefit from a better separation of
UI code (references to Cursor, Dialog etc.).
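
(The "disable" variant is essentially a one-line override of
Cursor>>showWhile: that skips the cursor change and just evaluates the
block - a sketch:)

   showWhile: aBlock
      "Overridden: do not touch the cursor, just evaluate the block."
      ^aBlock value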


Re: Cursor annoyMe showWhile: [ ]

Joachim Geidel
In reply to this post by Andres Valloud-6
Andres,

Thank you for the detailed answer!

> In general terms, I do not think it is possible to solve the problem of
> e.g.: not creating a lot of garbage in the first place.  While the
> problem may be mitigated most of the time, it really is not solvable.

You are right, of course. There are situations where you can't avoid
creating lots of garbage. But those are rare.

> Even for developers, I do not think it's acceptable not to know whether
> you just broke the event sensor or the VM is performing GC.  I don't
> necessarily care the mechanism through which the GC cursors appear.
> However, I think it's hard to argue GC cursors (or some other form of GC
> activity visual notification) should not appear at all under any
> circumstances.

That's right, too. I didn't say that there should never be an indication of
GC. However, the decision of what to show should be part of the GUI. Some
GUIs may choose to change the cursor, headless apps may choose to ignore GC
or write a message to some kind of log, etc. But currently, showing the
garbage cursor is hardwired, and it is so in the wrong place.

> Maybe it's not widely known yet, but I have been debugging memory policy

I noticed this. :-)

> problems reported by customers for quite some time.  Moreover, although
> I cannot give you an answer with regards to how priorities were decided
> in the past, I am working on exactly these problems for 7.7.1 (fka 7.8).
> Somewhat coincidentally with your email, I have begun integrating ARs in
> the past few days.  I still have quite a bit of work to do with regards
> to stress testing, but we already have something significantly better
> than what shipped in 7.7.

Great! That's a good thing. I've been waiting for this for a while. That's
not your fault, of course. :-)

> At this time, my first priority is to get the various bounds and edge
> conditions right to prevent VM failure due to memory policy misbehavior.
> It's not great to go really fast if you can still crash.  I would also
> like it if memory policies tuned themselves for 64 bit images, instead
> of imposing 32 bit values on 64 bit images.  After the memory policy is
> stable, I will get to the idle loop and IGC actions and fix those as
> well.  I would also like to do some sizesAtStartup tuning to match more
> contemporary workloads.  So, if you (or anybody else) have specific
> complaints about things like threadedOTEntries, primeThreadedDataList
> and so on, please forward them to me so I can look at them.  I
> understand these problems can be very tricky and take time to figure
> out, so I'd appreciate any bits of hard earned knowledge you can share.

threadedOTEntries and primeThreadedDataList date back to VW 3 and earlier
when the free list in OldSpace was a simple linear list holding all free
memory chunks regardless of their size. Priming adds a single large chunk to
the free list under certain circumstances. With the current data structures
(introduced in VW 5i) for handling free space, this is unnecessary and
doesn't help. There are more bits of logic in this area which are only
appropriate for VW 3.x VMs. I'll see if I can send you some more details
about what was found in our project when I get back to work on Tuesday.

>> More often than not, setting the sizes of Eden and SurvivorSpace to 10
>> or 20 times of the default is an easy way to get rid of garbage
>> collection problems.
>
> In general terms, you can't really do that with today's memory policy
> unless you exercise a lot of care.  The hard low space limit, the
> available space safety limit, and the contiguous space safety limit must
> be adjusted accordingly.  You should also adjust the old space headroom
> in sizesAtStartup.  You should keep in mind that larger new spaces could
> imply larger remember tables, so you have to keep that in mind as well
> (both in terms of performance, and in terms of available contiguous
> memory so the RTs can always grow as needed).  You must ensure that the
> preferred growth increment can cope with a larger new space.  The free
> memory upper bound should be revisited so it does not induce a low
> memory condition.  Finally, you should note that currently you need to
> install a new memory policy as soon as an image starts after changing
> sizesAtStartup (assuming it starts), because otherwise you will still be
> using the old memory policy instance with stale values.  If you do not
> do all of these, then you risk the VM exiting abruptly on you due to
> scavenge failure.  Last but not least, I think there may be a point past
> which it's just not productive to keep increasing the new space sizes
> because they won't fit in CPU cache anymore.

So far, I haven't had any problems with Eden and SurvivorSpace being 10 to
20 times larger than the default. But then, I also routinely enlarge
preferredGrowthIncrement and other parameters. Anyway, what you write
indicates that the task of configuring the MemoryPolicy has to become much
easier than it currently is. E.g., the MemoryPolicy should adjust
interdependent parameters automatically.

>> Why do I have to set the incGCAccelerationFactor, which controls how
>> much work is done per millisecond by the incremental garbage collector,
>> to 50 which is more appropriate for todays processors than the default
>> 5? Why doesn't VisualWorks come with an adaptive memory policy which
>> checks RAM size and processor speed itself, and computes whatever
>> parameters need adjustment without needing help from developers?
>
> Yeah, well... :).  Roughly speaking, I think it would be better to skip
> all the factors and have the IGC tune itself as it goes based on the
> time it takes to do a bit of work.  Also, note you can't really
> calculate the speed factors once and forget about them because you do
> not know the load of the machine at the time you calculated the values,
> or the load of the machine when they actually get used.  Hence, the
> memory policy should adjust the IGC work slices on the fly.

That's the right direction IMO: Simplify it from a user perspective, do away
with parameters which nobody except a few enlightened souls knows about.

>> Why isn't the hard memoryUpperBound just an option for special
> situations?
>
> What do you mean?  Can you be more specific?  If you mean why should
> there be a memoryUpperBound at all, consider what would happen with an
> infinite recursion left unchecked overnight on a 64 bit machine.  And
> note it could be an infinite recursion, or just a bug which causes an
> allocation of far more data than can fit in RAM.  Although a 1TB byte
> array may fit in virtual memory, it's probably not what you want.

Okay, I hadn't thought about this kind of environment. I was thinking in
terms of 1 to 4 GB PCs, where it would be easy to detect how much physical
RAM is still available and adapt memoryUpperBound and growthRegimeUpperBound
automatically to what's available. So maybe it would be a good idea to have
a choice of a few preconfigured MemoryPolicies which work well in certain
situations? I know that one such MemoryPolicy comes with VW as a goodie, but
it might simplify things for non-specialists if they could just choose
between "Small GUI application", "Large GUI application", "1 GB server", "16
GB server" etc. out of the box, some with a hard memoryUpperBound, some with
a dynamically adjusted bound?

Could the behavior for dealing with reaching a certain memory size be made
pluggable? Examples:
"When the image needs more than 2 GB, write a warning to a log file. When it
reaches 4 GB, shutdown and restart."
"When the image is a runtime image and needs more than 90% of physical RAM,
shutdown and restart."
"When the image is in development mode and needs more than 90% of physical
RAM, interrupt the current process and open it in a debugger."
Or whatever the developers think is appropriate.
Maybe it's sufficient to raise Announcements when OldSpace segments are
allocated or deallocated and in some other circumstances. Applications which
want to monitor memory consumption could then simply watch for those
Announcements.
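
As a sketch of what such a pluggable hook could look like (every selector
here is invented; the point is only that the reaction comes from the
application, not from the kernel):

   ObjectMemory
      whenMemoryUsageExceeds: 2 * 1024 * 1024 * 1024   "2 GB"
      do: [:bytesUsed |
         Transcript show: 'Warning: image now uses ', bytesUsed printString, ' bytes'; cr]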

Currently, there is only the emergencyLowSpaceAction, which calls
ObjectMemory class>>interruptProcessForLowSpace, which contains GUI code at
the most inappropriate level again, and which may fail, because it is
usually a bad idea to open a dialog when there's no space left.

>> Why can't the ObjectMemory space size be changed at runtime by the
>> application?
>
> What space exactly are you referring to?

All of them, I meant to write "space sizes" instead of "space size".

>> Why do Eden and SurvivorSpace have fixed sizes instead of being
>> dynamically resized by the VM depending on the number of objects created
>> per unit of time or the amount of early tenuring?
>
> Right now, because they are allocated on startup and, after that, their
> addresses are used to determine object locations by boundary checks.
> So, if you reallocate the new spaces, the addresses of these spaces will
> change and, most likely, they will fall between two old segments or
> something similar.  If that happens, then the VM will crash very
> quickly.  On the other hand, determining the type of object with one
> integer (pointer) comparison is extremely fast.
>
> If you want to adjust how much of the eden and survivor spaces are used
> at runtime, thus providing some flexibility, look at the scavenging
> thresholds for said spaces (ObjectMemory class>>thresholds).

Thanks for the explanation, I wasn't aware of the technical reasons behind
having fixed sizes.

>> Solve that, and you'll see disruptive garbage collection much less
>> often.
>
> Although I would like things to get better just as you, the above
> improvements will not address every possible scenario.  For the cases in
> which Murphy causes things to go wrong, I still think having absolutely
> no GC activity indication is a bad idea.  I may be wrong, but I'd like
> to change my mind after hearing a stronger argument first :).

Well, I never said that indication of GC should go away completely. I just
challenge the current implementation. Knowing about problematic garbage
collections is important for developers, so if the GC code announces that it
is going to start working, the development environment might choose to show
the GC cursor or visualize what's going on in a monitoring tool like the
MemoryMonitor. In an end user application which does not allocate lots of
objects and where GC is fast, the developers should be able to choose not to
show anything.

Best regards,
Joachim Geidel



Re: Cursor annoyMe showWhile: [ ]

Andres Valloud-6
In reply to this post by Holger Kleinsorgen-4
I think the difference is that the image won't respond to Ctrl-Y or
anything else until GC is done.  This behavior is usually not a problem
when GC takes on the order of 1 second, but unfortunately it's hard to
predict a priori how long a GC will take.


Re: Cursor annoyMe showWhile: [ ]

Andres Valloud-6
In reply to this post by Joachim Geidel
Joachim,

> I didn't say that there should never be an indication of GC. However,
the decision of what to show should be part of the GUI. Some GUIs may
choose to change the cursor, headless apps may choose to ignore GC or
write a message to some kind of log, etc. But currently, showing the
garbage cursor is hardwired, and it is so in the wrong place.

Ok.  I think we also agree that the behavior should be easily
configurable, preferably without requiring an override of the base
image.

> threadedOTEntries and primeThreadedDataList date back to VW 3 and
earlier when the free list in OldSpace was a simple linear list holding
all free memory chunks regardless of their size. Priming adds a single
large chunk to the free list under certain circumstances. With the
current data structures (introduced in VW 5i) for handling free space,
this is unnecessary and doesn't help. There are more bits of logic in
this area which are only appropriate for VW 3.x VMs. I'll see if I can
send you some more details about what was found in our project when I
get back to work on Tuesday.

Thank you, I'd really appreciate these details.

> So far, I haven't had any problems with Eden and SurvivorSpace being
10 to 20 times larger than the default. But then, I also routinely
enlarge preferredGrowthIncrement and other parameters. Anyway, what you
write indicates that the task of configuring the MemoryPolicy has to
become much easier than it currently is. E.g., the MemoryPolicy should
adjust interdependent parameters automatically.

That's where I feel the work is taking me.  Basically, set the memory
upper bound, and the other things just fall into place.  I don't know if
that's exactly where things will go, but so far they have a tendency to
veer towards that direction.

> That's the right direction IMO: Simplify it from a user perspective,
do away with parameters which nobody except a few enlightened souls know
about.

Also, IMO, it should be easier to have more enlightened souls :).  Just
a handful are not enough for sustainable development.

> Okay, I hadn't thought about this kind of environment. I was thinking
in terms of 1 to 4 GB PCs, where it would be easy to detect how much
physical RAM is still available and adapt memoryUpperBound and
growthRegimeUpperBound automatically to what's available.

This needs care because you may have several VW applications running.
If they sense what is available and configure themselves at startup,
then the startup sequence will change their performance characteristics.
If you will run several images to cooperatively solve a problem, then
that needs to be accounted for.  I am concerned it's really hard to get
a self adjusting, one size fits all solution here.

> So maybe it would be a good idea to have a choice of a few
preconfigured MemoryPolicies which work well in certain situations? I
know that one such MemoryPolicy comes with VW as a goodie, but it might
simplify things for non-specialist if they could just choose between
"Small GUI application", "Large GUI application", "1 GB server", "16 GB
server" etc. out of the box, some with a hard memoryUpperBound, some
with a dynamically adjusted bound?

How about a slider for the memory upper bound?  I think that would be a
good first step to try.

> Could the behavior for dealing with reaching a certain memory size be
made pluggable?

Yes, that's a good observation.  Also, even if the action is to invoke
the emergency process monitor, applications may want a chance to clean
up caches or whatever else before the decision is made to invoke the
process monitor.

> Currently, there is only the emergencyLowSpaceAction, which calls
ObjectMemory class>>interruptProcessForLowSpace, which contains GUI code
at the most inappropriate level again, and which may fail, because it is
usually a bad idea to open a dialog when there's no space left.

Well, more or less, because the strategy is to invoke the process
monitor while there is enough memory for that to come up and work for a
bit.  I do get concerned with e.g.: a more resource intensive debugger,
however.  What good is to be able to bring up the process monitor when
you have less than 1mb left if using the debugger requires 5mb?  Is it
reasonable to say 20mb (debugger + a couple browsers + a couple
inspectors) constitutes a low memory condition these days?  Note I am
making up the numbers for the sake of illustration only, I have no idea
of what requirements these tools have.  Maybe we need light versions of
the tools?  I don't know.

Andres.


Re: MemoryPolicy (was: Cursor annoyMe showWhile: [ ])

Joachim Geidel
Andres,

>> So far, I haven't had any problems with Eden and SurvivorSpace being
>> 10 to 20 times larger than the default. But then, I also routinely
>> enlarge preferredGrowthIncrement and other parameters. Anyway, what you
>> write indicates that the task of configuring the MemoryPolicy has to
>> become much easier than it currently is. E.g., the MemoryPolicy should
>> adjust interdependent parameters automatically.
>
> That's where I feel the work is taking me.  Basically, set the memory
> upper bound, and the other things just fall into place.  I don't know if
> that's exactly where things will go, but so far they have at tendency to
> veer towards that direction.

Wonderful! I am looking forward to the result. Does it need VM changes, too,
or will it be portable to VW 7.6 (which is the version our project currently
uses)?

> Also, IMO, it should be easier to have more enlightened souls :).  Just
> a handful are not enough for sustainable development.

...which is probably one of the reasons why the current implementation
hasn't been fixed for such a long time. :-(
 

>> Okay, I hadn't thought about this kind of environment. I was thinking
>> in terms of 1 to 4 GB PCs, where it would be easy to detect how much
>> physical RAM is still available and adapt memoryUpperBound and
>> growthRegimeUpperBound automatically to what's available.
>
> This needs care because you may have several VW applications running.
> If they sense what is available and configure themselves at startup,
> then the startup sequence will change their performance characteristics.
> If you will run several images to cooperatively solve a problem, then
> that needs to be accounted for.  I am concerned it's really hard to get
> a self adjusting, one size fits all solution here.

Indeed. We are in such a situation: Our users tend to start more than one
instance of our application, and they use several additional programs which
need from dozens to several hundred megabytes of RAM each. The strategy I
implemented for this is:
- We still have a hard, but high memoryUpperBound.
- growthRegimeUpperBound is at ca. 80% of memoryUpperBound.
- The methods which decide whether growth is permitted and whether growth is
preferred over reclamation are overridden:
- Growth is permitted when memoryUpperBound won't be reached *and* when
there is enough free physical RAM to allocate one more OldSpace segment
*and* if it will still leave a certain small amount of free RAM available
for other purposes.
- Growth is favored over reclamation when growthRegimeUpperBound is not yet
reached *and* if more than a certain percentage of physical RAM is still
free *and* if a certain absolute amount of physical RAM is free.

That isn't perfect, as another application can allocate so much memory that
free RAM is completely allocated, leading to the usual problems. Still, it
has worked well enough so far. BTW, if you have gotten the impression that
our application needs lots of memory: that's right, but nothing can be done
about it.

>> So maybe it would be a good idea to have a choice of a few
>> preconfigured MemoryPolicies which work well in certain situations? I
>> know that one such MemoryPolicy comes with VW as a goodie, but it might
>> simplify things for non-specialist if they could just choose between
>> "Small GUI application", "Large GUI application", "1 GB server", "16 GB
>> server" etc. out of the box, some with a hard memoryUpperBound, some
>> with a dynamically adjusted bound?
>
> How about a slider for the memory upper bound?  I think that would be a
> good first step to try.

Yes, as a first step. If that's all that's needed for configuring the new
MemoryPolicies, that should be sufficient. OTOH, there might be different
strategies for multi-application environments vs. a dedicated machine for
one server app which may need different implementations, i.e. different
subclasses of MemoryPolicy.

>> Could the behavior for dealing with reaching a certain memory size be
>> made pluggable?
>
> Yes, that's a good observation.  Also, even if the action is to invoke
> the emergency process monitor, applications may want a chance to clean
> up caches or whatever else before the decision is made to invoke the
> process monitor.
>
>> Currently, there is only the emergencyLowSpaceAction, which calls
>> ObjectMemory class>>interruptProcessForLowSpace, which contains GUI code
>> at the most inappropriate level again, and which may fail, because it is
>> usually a bad idea to open a dialog when there's no space left.
>
> Well, more or less, because the strategy is to invoke the process
> monitor while there is enough memory for that to come up and work for a
> bit.

Unfortunately, sometimes the emergency is detected too late, such that there
is not enough memory to open a process monitor (or a warning dialog in a
runtime application).

> I do get concerned with e.g.: a more resource intensive debugger,
> however.  What good is to be able to bring up the process monitor when
> you have less than 1mb left if using the debugger requires 5mb?  Is it
> reasonable to say 20mb (debugger + a couple browsers + a couple
> inspectors) constitutes a low memory condition these days?  Note I am
> making up the numbers for the sake of illustration only, I have no idea
> of what requirements these tools have.  Maybe we need light versions of
> the tools?  I don't know.

Well, I don't know either. ;-) With our customized MemoryPolicy and since
installing at least 2 GB of RAM on the users' PCs, we haven't seen any
emergencies for a while. Also, I am more concerned about the behavior of a
runtime application. It usually does more damage when a business-critical
server application crashes than when a developer loses half an hour's work.

I think that for developers, being able to open a debugger in a space
emergency situation is mostly useful for debugging memory leaks and infinite
recursions. If we had a way to hook into memory management events, this
might be easier, because one could open a debugger before actually being in
an emergency.

Best regards,
Joachim Geidel



Re: MemoryPolicy (was: Cursor annoyMe showWhile: [ ])

Andres Valloud-6
Joachim,

> Wonderful! I am looking forward to the result. Does it need VM
changes, too, or will it be portable to VW 7.6 (which is the version our
project currently uses)?

So far, I have not needed to modify the VM's behavior.

>> Also, IMO, it should be easier to have more enlightened souls :).  
>> Just a handful are not enough for sustainable development.

> ...which is probably one of the reasons why the current implementation
hasn't been fixed for such a long time. :-(

Maybe.  Or perhaps we tried to bite off too much for the intellectual
capacity we as a community had.  One way or the other, lately I've been
feeling we need more enlightened souls, to borrow your wording :).
There is just A LOT of stuff to study and understand.

> growthRegimeUpperBound is at ca. 80% of memoryUpperBound.

Funny, I was looking at leaving the growth regime upper bound at 70% of
the memoryUpperBound (because low space currently occurs at 75%).  How
strict are you with the free memory upper bound?

> Unfortunately, sometimes the emergency is detected too late, such that
there is not enough memory to open a process monitor (or a warning
dialog in a runtime application).

Some of this occurs because the memory policy does not always consider
things like the size of new space or the necessary contiguous space
required for the image to continue functioning during a memory
emergency.

> I think that for developers, being able to open a debugger in a space
emergency situation is mostly useful for debugging memory leaks and
infinite recursions. If we had a way to hook into memory management
events, this might be easier, because one could open a debugger before
actually being in an emergency.

For that, you could theoretically bump up the hardLowSpaceLimit by, say,
5x.  Then, the "emergency" wouldn't be so dire when you get the
notifier.

Andres.


Re: MemoryPolicy (was: Cursor annoyMe showWhile: [ ])

Joachim Geidel
Andres,

>> growthRegimeUpperBound is at ca. 80% of memoryUpperBound.
>
> Funny, I was looking at leaving the growth regime upper bound at 70% of
> the memoryUpperBound (because low space currently occurs at 75%).  How
> strict are you with the free memory upper bound?

I wouldn't use a fixed percentage. When memoryUpperBound and/or physical RAM
is small, 30% difference is okay. When memoryUpperBound is at 1.5 GB as in
our case, 30% difference would mean that garbage collection would be
preferred beyond 1.05 GB, i.e. when there are still 450 MB free. That's much
too early. We are using that 80% rule only because we know that the 80% is
several hundred MB higher than the typical high water mark of our
application. Currently we use memoryUpperBound = 1.5 GB,
growthRegimeUpperBound = 1.2 GB. It used to be memoryUpperBound = 900 MB and
growthRegimeUpperBound = 800 MB, but we raised it because one user managed to
run into the 900 MB limit.

For computing growthRegimeUpperBound automatically, I would use a rule like
"maximum of 70% of memoryUpperBound and memoryUpperBound minus X
preferredGrowthIncrements" (or memoryUpperBound minus 100/200 MB?).

In our case, preferredGrowthIncrement is 10 MB. But then, I would compute
preferredGrowthIncrement depending on memoryUpperBound. It doesn't make much
sense to allocate OldSpace segments in 1 MB increments when the limit is
2GB. Setting preferredGrowthIncrement to 1% of memoryUpperBound should be
okay in most cases, with a lower limit of 1 MB and an upper limit of
something like 20 or 50 MB (it mustn't be too large, or freeing OldSpace
segments won't work because there will never be an empty segment which can
be freed).
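
Expressed as a small workspace snippet, those two rules would look roughly
like this (plain arithmetic, no real API involved; X = 10 growth increments
is picked only for concreteness):

   | upperBound increment regime |
   upperBound := 1536 * 1024 * 1024.   "memoryUpperBound, here 1.5 GB"
   increment := ((upperBound // 100) max: 1024 * 1024) min: 20 * 1024 * 1024.
      "1% of the bound, clamped to the 1 MB .. 20 MB range"
   regime := (upperBound * 7 // 10) max: upperBound - (10 * increment)
      "whichever is higher: 70% of the bound, or the bound minus 10 increments"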

Our freeMemoryUpperBound is at 15 MB, because we found that there could be
crashes when freeMemoryUpperBound is lower than preferredGrowthIncrement,
and it could lead to new OldSpace segments being freed immediately again
IIRC.

Best regards,
Joachim Geidel



Re: MemoryPolicy (was: Cursor annoyMe showWhile: [ ])

Andres Valloud-6
Joachim, I will have to do some experiments to see what to do about the
growth regime upper bound.  I am not so convinced it's ok to wait until
it's comparatively late.  For example, without a compacting GC, you
cannot get rid of fragmentation.  The IGC won't solve fragmentation
either.  Maybe the image grew so large because it needed contiguous
chunks of memory that left old segments looking like swiss cheese,
rather than actually needing more allocations.  I need to think about
this more.

> Our freeMemoryUpperBound is at 15 MB, because we found that there
could be crashes when freeMemoryUpperBound is lower than
preferredGrowthIncrement, and it could lead to new OldSpace segments
being freed immediately again IIRC.

I'd set it a bit higher, something like 2x the preferred growth
increment, so that past the growth regime upper bound you don't get into a
cycle: the app needs a new preferred growth increment, so it does a GC; it
doesn't find more memory, so it allocates a new segment; then it does a bit
of work and eventually throws the objects away; the memory policy decides
to shrink memory for some reason and the old space segment is deallocated;
then the app needs a new growth increment again, does a GC, doesn't find
more memory, allocates, and so on...
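
In code, that suggestion amounts to something like this (a sketch; it
assumes the active policy is reachable via ObjectMemory
currentMemoryPolicy and exposes accessor-style setters, which may not
match the exact selectors of any given release):

   | policy |
   policy := ObjectMemory currentMemoryPolicy.
   policy freeMemoryUpperBound: policy preferredGrowthIncrement * 2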

Andres.
