Steps to reproduce:
1. Using a web app for a while triggers a sockets problem (basically it can't
open a new one).
2. You'll see that after a while you get an exception like this: Not enough
space for external objects, set a larger size at startup!
3. The limit can be changed in VirtualMachine>>maxExternalSemaphores: aSize.
And this one for Linux:
'CoInterpreter VMMaker.oscog-eem.154 uuid:
5cbb57c7-0a54-4b7e-848c-1f292759f1fa Mar 3 2012,
StackToRegisterMappingCogit VMMaker.oscog-eem.154 uuid:
5cbb57c7-0a54-4b7e-848c-1f292759f1fa Mar 3 2012, r2540'
Did you read all the method comments in VirtualMachine concerning this
setting? There is a limit and you can change it; if you cross it, you get an
exception. These kinds of resources are limited anyway.
What would you otherwise suggest?
The other, more important question is: why are you consuming so many
resources? It could be normal due to load, or there could be a leak
somewhere. The leak can be your fault (not cleaning up) or a system fault.
A simple, reproducible piece of code creating the problem as you see it
would help. This is not a Pharo bug.
a) Web app not releasing sockets in a timely manner (framework/application
fix)
b) Limit not set high enough for the usage scenario (user fix -> increase
the limit as the message suggests)
The alternative (and the behaviour in Pharo < 1.4 with Cog VMs) would be
sockets silently starting to always wait for the max timeout, since their
semaphores never get signalled by the VM.
One consideration: if the limit were to be increased in the development
image, short of using the command-line switch added in Cog (and I haven't
tested whether it would work when given a parameter lower than the one
stored in the image header), there'd be no going back to a smaller size.
Each entry is an oop of 4 bytes, so the default 256 is 1 KiB of the VM's
memory. On modern machines it's not the end of the world to increase it a
bit, but it could be a needless limitation for deployment on limited
hardware where sockets will not be used.
Please note it rounds to the next higher power of two, so for example
Smalltalk vm maxSemaphoresSilently: 1024
would increase the limit to 2048.
PS: There's a limit to what is sensible regardless of memory concerns with
regard to sockets: when testing large amounts, Windows crapped out after a
while (1k? 10k? can't remember...) due to restrictions in internal Windows
resources used by the socket implementation :)
Eliot's design of sqExternalSemaphores.c assumes that there can potentially
be a very high number of semaphore signals between interrupt checks.
Otherwise, why not keep using the Squeak VM's implementation (a bit
modified, of course, but without introducing a limit)?
In Cog it keeps separate counters for every external semaphore in a private
table, and hence it needs to know the maximum number of entries in the
semaphore table. Before, in the Squeak VM, it was just a simple list of 512
entries; each new item (the number of the semaphore to signal) was simply
appended to the list. Upon interrupt check this table was flushed.
Frankly, I cannot see how anything in our VM & plugins can produce so many
signals between two interrupt checks (which happen every millisecond,
IIRC), and I cannot foresee such a high rate in the future (CPU clock rates
won't grow, and for many-core systems, which could produce that many
signals, we will need completely different VM(s) anyway).
The issue is not a high rate of signals. The issue is thread-safety.
Signals can be delivered from other threads. My design is thread-safe and
will not lose a signal if it is delivered from another thread. The Squeak
VM implementation is *not* thread-safe. It can lose signals. Lose one
signal and your application may become unresponsive.
Yes. That would be ideal. That was the step I was too lazy to take.
Writing the table management so that it can grow is difficult. It is a
lock-free data structure. Locking to allow growing is not an option.
Do you think that the signals table needs to be dynamically allocated?
A static limit, like in Squeak before (512 entries), looks fine to me.
It means at most 512 signals between two interrupt checks.
Under what conditions could we exceed such an already big number?
That's where we came in. My compromise was to allow resizing the table at
start-up or with a VM parameter. I thought this whole thread was about
avoiding that and allowing it to resize dynamically, in which case one has
to bite the bullet and write a lock-free growth mechanism.
Oh... by saying rewrite, I didn't mean I would use any dynamic relocation :)
Just a static-sized queue with 512 entries, which is filled up by signalers
and flushed by the interrupt checker.
Why do we need sophisticated growth dances if we can avoid them?
Especially since you introduced CAS, the implementation is straightforward.