Informing the VM that the display bits have changed via primitiveBeDisplay


Informing the VM that the display bits have changed via primitiveBeDisplay

Eliot Miranda-2
 
Tim, John, Bert, et al,

    if you look at, e.g., the resizing code in the platforms/iOS tree (e.g. platforms/iOS/vm/OSX/sqSqueakOSXOpenGLView.m), the code accesses the display bits by looking up the display object installed via primitiveBeDisplay in the specialObjectsArray:

- (void) performDraw: (CGRect)rect {
    sqInt form = interpreterProxy->displayObject(); // the Form installed via primitiveBeDisplay

    CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextSaveGState(context);

    // Form's fields are bits (0), width (1), height (2), depth (3)
    int width = interpreterProxy->positive32BitValueOf(interpreterProxy->fetchPointerofObject(1, form));
    int height = interpreterProxy->positive32BitValueOf(interpreterProxy->fetchPointerofObject(2, form));
    sqInt formBits = interpreterProxy->fetchPointerofObject(0, form);   // the Bitmap holding the pixels
    void* bits = (void*)interpreterProxy->firstIndexableField(formBits); // raw pointer into object memory
    int bitSize = interpreterProxy->byteSizeOf(formBits);
    int bytePerRow = 4*width; // assumes 32-bit depth

This is really unsafe.  If it gets called while the VM is compacting, or doing a become, then potentially boom!

Storing the display in the specialObjectsArray stops it from being GCed, but it doesn't stop it moving around, and it doesn't stop a become altering the location of the objects surrounding it, etc.

Surely a better approach is to inform the VM of the location, depth and extent of the bits via some ioSetDisplayBits function and then caching those values somewhere in the VM.  Thoughts?
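
A minimal sketch of what such an interface might look like — only the name ioSetDisplayBits comes from the proposal above; the signature and cache layout here are assumptions:

/* Cached display parameters, owned by the platform code.  The VM would
   call ioSetDisplayBits from primitiveBeDisplay, and again after any GC,
   compaction or become that may have moved the bits, so the drawing code
   never walks the specialObjectsArray itself. */
static void *displayBits;
static int displayWidth, displayHeight, displayDepth;

void ioSetDisplayBits(void *bits, int width, int height, int depth)
{
    displayBits   = bits;
    displayWidth  = width;
    displayHeight = height;
    displayDepth  = depth;
}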

P.S.  John McIntosh tells me that you, Tim R, may have written such code in the past.

_,,,^..^,,,_
best, Eliot

Re: Informing the VM that the display bits have changed via primitiveBeDisplay

timrowledge
 
Err, yeah, that’s old stuff. So far as I can dredge up from long-term storage, RISC OS has no likely problem with this because I have to copy the bits from the Display to the window backing store with a transform (ARGB -> 0BGR), which means I couldn’t simply extract pixels at any time other than ioShowWindow(). The cost is two copies of the Display bitmap sitting around, which is a bit painful.

This is presumably an issue when a VM has threads for event handling of assorted OS stuff like window changes? Isn’t this something that pinning is supposed to help? And surely the Bitmap is just about certain to be in old space?
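
A sketch of the pinning idea, assuming the Spur interpreterProxy exports pinObject (answering the possibly-relocated oop of the now-pinned object, or 0 on failure — an assumption about the available API):

#include "sqVirtualMachine.h"
extern struct VirtualMachine *interpreterProxy;

static void *pinnedDisplayBits;

void pinDisplayBits(void)
{
    sqInt form = interpreterProxy->displayObject();
    sqInt formBits = interpreterProxy->fetchPointerofObject(0, form);
    sqInt pinned = interpreterProxy->pinObject(formBits);  /* assumed API */
    if (pinned != 0)  /* pinning may relocate the object once; use the new oop */
        pinnedDisplayBits = (void *)interpreterProxy->firstIndexableField(pinned);
    /* This keeps the compactor from moving the bits, but a become can
       still change which Bitmap the Form's bits field points at. */
}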
 

> On 01-05-2017, at 3:50 PM, Eliot Miranda <[hidden email]> wrote:
>
> [snip]


tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Strange OpCodes: WK: Write to Keyboard



Re: Informing the VM that the display bits have changed via primitiveBeDisplay

Eliot Miranda-2
 


On Mon, May 1, 2017 at 5:25 PM, tim Rowledge <[hidden email]> wrote:

> [snip]
>
> This is presumably an issue when a VM has threads for event handling of assorted OS stuff like window changes? Isn’t this something that pinning is supposed to help? And surely the Bitmap is just about certain to be in old space?

As I said, even if the bits are pinned, the current access code has to go through the specialObjectsArray.  Look at the code I posted.  Any of those fetchPointerofObject calls could cause an error midway through a become or compact.


--
_,,,^..^,,,_
best, Eliot

Re: Informing the VM that the display bits have changed via primitiveBeDisplay

Bert Freudenberg
In reply to this post by Eliot Miranda-2
 


On Tue, May 2, 2017 at 12:50 AM, Eliot Miranda <[hidden email]> wrote:
 
> Tim, John, Bert, et al,
>
> [snip]
>
> Surely a better approach is to inform the VM of the location, depth and extent of the bits via some ioSetDisplayBits function and then caching those values somewhere in the VM.  Thoughts?

Wouldn't the display bits still be moved around during GC? Nothing the image can do about that.

You could cache the values in the beDisplay primitive, but I don't see how that would change anything.

The original VM was single-threaded, so this was not an issue. Are you trying to make the GC concurrent? I bet there are many, many places that would break ... Maybe you need to temporarily pin the involved objects?

- Bert - 

Re: Informing the VM that the display bits have changed via primitiveBeDisplay

timrowledge
 

> On 02-05-2017, at 7:47 AM, Bert Freudenberg <[hidden email]> wrote:
>
>
>
> On Tue, May 2, 2017 at 12:50 AM, Eliot Miranda <[hidden email]> wrote:
>
>> [snip]
>
>> Surely a better approach is to inform the VM of the location, depth and extent of the bits via some ioSetDisplayBits function and then cacheing those values somewhere in the VM.  Thoughts?
>>
> Wouldn't the display bits still be moved around during GC? Nothing the image can do about that.
>
> You could cache the values in the beDisplay primitive, but I don't see how that would change anything.
>
> The original VM was single-threaded so this was not an issue. Are you trying to make the GC concurrent? I bet there are many many places that would break ... Maybe you need to temporarily pin the involved objects?

This is mostly a problem when the OS window drawing code is in a separate thread; if the VM is doing a GC and the window is moved on-screen, the OS will send some event about the move, the thread will try to read the partly-processed Display values, they will be … odd … and strange things will happen.
The assorted sizes are going to be SmallInts (unless we have mind-bogglingly big displays in our near future) so they’re trivial to cache, and we only have to worry about the bitmap. There’s a fairly tiny window of time when it might be getting moved, so maybe we could detect that and simply block using it for that period? Or recognise that the danger is when the old bitmap has been copied and we are now overwriting that space, so as soon as the copy of the bitmap is done we need to update the cached pointer. Err, it gets a bit more complex if the bitmap is being moved just a small distance, such that the new copy writes over part of the old copy.
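
A sketch of that caching scheme — the names are hypothetical, and the refresh would be called from the VM side (after primitiveBeDisplay and after any GC or become that may have moved things), never from the UI thread:

#include "sqVirtualMachine.h"
extern struct VirtualMachine *interpreterProxy;

/* The sizes are plain ints and trivially cached; the bits pointer is the
   one value the VM must refresh whenever the Bitmap may have moved. */
typedef struct {
    void *bits;
    int width, height, depth;
} DisplayCache;

static DisplayCache displayCache;

void refreshDisplayCache(void)
{
    sqInt form = interpreterProxy->displayObject();
    sqInt formBits = interpreterProxy->fetchPointerofObject(0, form);
    displayCache.bits   = (void *)interpreterProxy->firstIndexableField(formBits);
    displayCache.width  = interpreterProxy->positive32BitValueOf(
                              interpreterProxy->fetchPointerofObject(1, form));
    displayCache.height = interpreterProxy->positive32BitValueOf(
                              interpreterProxy->fetchPointerofObject(2, form));
    displayCache.depth  = interpreterProxy->positive32BitValueOf(
                              interpreterProxy->fetchPointerofObject(3, form));
}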

For RISC OS there’s a) no problem anyway because of co-operative multi-tasking, and b) I keep a separate pixmap for the OS to read anyway because the damn pixel format is 0BGR not ARGB. I was under the impression that macOS magically double-buffers anyway, thus (probably) avoiding this, but I can see iOS not doing that to save memory. After all, they only have several GB of RAM and we all know that even a trivial text editor needs 64GB of free space to open a 5-character document these days.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Death is a nonmaskable interrupt. For now...



Re: Informing the VM that the display bits have changed via primitiveBeDisplay

Bert Freudenberg
 
On Tue, May 2, 2017 at 7:27 PM, tim Rowledge <[hidden email]> wrote:

> This is mostly a problem when the OS window drawing code is in a separate thread; if the VM is doing a GC and the window is moved on-screen, the OS will send some event about the move, the thread will try to read the partly-processed Display values, they will be … odd … and strange things will happen.

Well, it should "just" be a visual glitch. Unless the bitmap is pinned, the GC could move it during the copy-to-screen, even if you grab the pointer right before.

So how about simply fixing it up after the copy-to-screen? Save the Display and bitmap oops before, compare them afterwards, and if they changed, just draw again.
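
A sketch of that compare-and-redraw loop — copyDisplayBitsToScreen is a hypothetical stand-in for the platform's actual copy-to-screen routine:

#include "sqVirtualMachine.h"
extern struct VirtualMachine *interpreterProxy;
extern void copyDisplayBitsToScreen(sqInt form);  /* hypothetical */

/* If a GC or become moved the Form or its bits while we were copying,
   the oops will differ afterwards, so copy again from the new location. */
void drawDisplayWithRetry(void)
{
    sqInt form, formBits;
    do {
        form = interpreterProxy->displayObject();
        formBits = interpreterProxy->fetchPointerofObject(0, form);
        copyDisplayBitsToScreen(form);
    } while (form != interpreterProxy->displayObject()
          || formBits != interpreterProxy->fetchPointerofObject(0,
                             interpreterProxy->displayObject()));
}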

- Bert -