On Mon, Sep 21, 2009 at 2:55 PM, Igor Stasenko <[hidden email]> wrote:
2009/9/21 John M McIntosh <[hidden email]>:
>
> Er, did we reach a consensus about what to do?
>
> My problem is determining what is best based on some of the code concepts given
> (which people pointed out flaws in, or confusion about),
> and the premise that they work the same on different hardware platforms
> (Intel, PowerPC, ARM), which I'm not sure about.
>
> Perhaps a more heavyweight, platform-dependent solution using generally
> accepted locking logic is required.
>
+1. As I mentioned before, it would be nice to extend the platform
API with functions the VM could use to deal with multithreading.
Platforms which have no threading support could simply make these
functions no-ops.
> Er, like:
> acquireTheHostPlatformIndexedSemaphoreLock()
> { do whatever is required to remember the semaphore index so that
> checkForInterrupts can find it -- a queue perhaps? }
> releaseTheHostPlatformIndexedSemaphoreLock()
>
> I'd keep in mind
>
> (a) how many times per second we execute the signalExternalSemaphore logic, and
> (b) if someone wants to do this a million times a second, I think they can do
> their own "exotic" solution by overriding
> acquireTheHostPlatformIndexedSemaphoreLock &
> releaseTheHostPlatformIndexedSemaphoreLock;
> (c) keep it simple so I don't have to worry about how it works on PowerPC,
> Intel, and ARM.
>
> For acquireTheHostPlatformIndexedSemaphoreLock/releaseTheHostPlatformIndexedSemaphoreLock,
> obviously I'd just throw myself on the evil pthread solution.
>
>
> Would we use a linked list or queue for the semaphores, versus that
> fixed-size list? A size I picked based on exploring network interrupt
> rates on a mind-numbingly slow 200 MHz PowerPC machine?
>
My own experience is the following:
- I implemented a shared queue for use in Hydra, based on the atomic XCHG
available on Intel platforms. It worked well until I had a chance to run
Hydra on a multicore PC, where it failed just a few seconds into running
the VM, freezing it indefinitely.
Obviously, because the implementation was wrong :)
The moral is that it's not a question of how often we need
synchronization between threads, but of how correct it is :)
The correct use of mfence & sfence around the XCHGs is essential for it
to work on multi-core.
>
> On 2009-09-20, at 7:00 PM, Igor Stasenko wrote:
>>
>
> --
> ===========================================================================
> John M. McIntosh < [hidden email]> Twitter: squeaker68882
> Corporate Smalltalk Consulting Ltd. http://www.smalltalkconsulting.com
> ===========================================================================
>
>
>
>
>
--
Best regards,
Igor Stasenko AKA sig.