Who understands bilinear interpolation for reducing image size?

Who understands bilinear interpolation for reducing image size?

timrowledge
The ScratchPlugin implements a prim to shrink a 32bpp image by use of bilinear interpolation. Unfortunately it completely ignores the alpha channel in 32bpp pixels and does some rather odd futzing to kinda-sorta fake handling of transparency.

I can see how to add in (what I think would be) proper ARGB interpolating, and I think that simply removing the futzing would be correct - but I’d much rather have some input from somebody with a bit of image processing theory so there is some hope of my final result being actually correct.

This is the core of the code -
        inX := inY := 0. "source x and y, scaled by 1024"
        xIncr := (inW * 1024) // outW. "source x increment, scaled by 1024"
        yIncr := (inH * 1024) // outH. "source y increment, scaled by 1024"

        0 to: (outH - 1) do: [:outY |
                inX := 0.
                0 to: (outW - 1) do: [:outX |
                        "compute weights, scaled by 2^20"
                        w1 := (1024 - (inX bitAnd: 1023)) * (1024 - (inY bitAnd: 1023)).
                        w2 := (inX bitAnd: 1023) * (1024 - (inY bitAnd: 1023)).
                        w3 := (1024 - (inX bitAnd: 1023)) * (inY bitAnd: 1023).
                        w4 := (inX bitAnd: 1023) * (inY bitAnd: 1023).

                        "get source pixels"
                        t := ((inY >> 10) * inW) + (inX >> 10).
                        p1 := in at: t.
                        ((inX >> 10) < (inW - 1)) ifTrue: [p2 := in at: t + 1] ifFalse: [p2 := p1].
                        (inY >> 10) < (inH - 1) ifTrue: [t := t + inW].  "next row"
                        p3 := in at: t.
                        ((inX >> 10) < (inW - 1)) ifTrue: [p4 := in at: t + 1] ifFalse: [p4 := p3].

                        "deal with transparent pixels"
                        tWeight := 0.
                        p1 = 0 ifTrue: [p1 := p2. tWeight := tWeight + w1].
                        p2 = 0 ifTrue: [p2 := p1. tWeight := tWeight + w2].
                        p3 = 0 ifTrue: [p3 := p4. tWeight := tWeight + w3].
                        p4 = 0 ifTrue: [p4 := p3. tWeight := tWeight + w4].
                        p1 = 0 ifTrue: [p1 := p3. p2 := p4].  "both top pixels were transparent; use bottom row"
                        p3 = 0 ifTrue: [p3 := p1. p4 := p2].  "both bottom pixels were transparent; use top row"

                        outPix := 0.
                        tWeight < 500000 ifTrue: [  "compute an (opaque) output pixel if less than 50% transparent"
                                t := (w1 * ((p1 >> 16) bitAnd: 255)) + (w2 * ((p2 >> 16) bitAnd: 255)) + (w3 * ((p3 >> 16) bitAnd: 255)) + (w4 * ((p4 >> 16) bitAnd: 255)).
                                outPix := ((t >> 20) bitAnd: 255) << 16.
                                t := (w1 * ((p1 >> 8) bitAnd: 255)) + (w2 * ((p2 >> 8) bitAnd: 255)) + (w3 * ((p3 >> 8) bitAnd: 255)) + (w4 * ((p4 >> 8) bitAnd: 255)).
                                outPix := outPix bitOr: (((t >> 20) bitAnd: 255) << 8).
                                t := (w1 * (p1 bitAnd: 255)) + (w2 * (p2 bitAnd: 255)) + (w3 * (p3 bitAnd: 255)) + (w4 * (p4 bitAnd: 255)).
                                outPix := outPix bitOr: ((t >> 20) bitAnd: 255).
                                outPix = 0 ifTrue: [outPix := 1]].

                        out at: (outY * outW) + outX put: outPix.
                        inX := inX + xIncr].
                inY := inY + yIncr].

Note that it doesn’t do any clipping, relying upon having only a full 32bpp input Form and a pre-built correctly sized 32bpp output Form.
On a Pi this prim is roughly 5 times faster than using a warpblt, so it would be nice to keep it in use.
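For what it's worth, the Slang above compiles to C; here is a hedged sketch (the function name and shape are mine, not the actual plugin code) of how the per-channel weighting could be extended to interpolate the alpha byte along with R, G and B:

```c
#include <stdint.h>

/* Illustrative sketch, not the plugin's actual code: the same 2^20-scaled
   bilinear weights as in the prim, applied to all four ARGB channels
   including alpha.  The four weights always sum to exactly 1024*1024 = 2^20,
   so the >>20 at the end renormalizes without overflow (max t < 2^28). */
static uint32_t blendARGB(uint32_t p1, uint32_t p2, uint32_t p3, uint32_t p4,
                          uint32_t w1, uint32_t w2, uint32_t w3, uint32_t w4)
{
    uint32_t out = 0;
    for (int shift = 0; shift <= 24; shift += 8) {      /* B, G, R, then A */
        uint32_t t = w1 * ((p1 >> shift) & 255)
                   + w2 * ((p2 >> shift) & 255)
                   + w3 * ((p3 >> shift) & 255)
                   + w4 * ((p4 >> shift) & 255);
        out |= ((t >> 20) & 255) << shift;
    }
    return out;
}
```

With four equal weights (each 2^18) four copies of the same pixel come back unchanged, and a 50/50 mix of opaque black and fully transparent white comes out with alpha 127 along with the half-blended colour.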

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Strange OpCodes: BPP: Branch Pretty Please




Re: Who understands bilinear interpolation for reducing image size?

Bert Freudenberg
On 08.12.2014, at 21:15, tim Rowledge <[hidden email]> wrote:
>
> The ScratchPlugin implements a prim to shrink a 32bpp image by use of bilinear interpolation. Unfortunately it completely ignores the alpha channel in 32bpp pixels and does some rather odd futzing to kinda-sorta fake handling of transparency.
>
> I can see how to add in (what I think would be) proper ARGB interpolating, and I think that simply removing the futzing would be correct - but I’d much rather have some input from somebody with a bit of image processing theory so there is some hope of my final result being actually correct.

Why would you like to change it? To accept a wider range of inputs?

This purposely does only output fully opaque and fully transparent pixels, which likely is a requirement further down the pipeline. Makes rendering faster, too: true alpha-blending is expensive.

- Bert -





Re: Who understands bilinear interpolation for reducing image size?

timrowledge

On 08-12-2014, at 3:01 PM, Bert Freudenberg <[hidden email]> wrote:

> On 08.12.2014, at 21:15, tim Rowledge <[hidden email]> wrote:
>>
>> The ScratchPlugin implements a prim to shrink a 32bpp image by use of bilinear interpolation. Unfortunately it completely ignores the alpha channel in 32bpp pixels and does some rather odd futzing to kinda-sorta fake handling of transparency.
>>
>> I can see how to add in (what I think would be) proper ARGB interpolating, and I think that simply removing the futzing would be correct - but I’d much rather have some input from somebody with a bit of image processing theory so there is some hope of my final result being actually correct.
>
> Why would you like to change it? To accept a wider range of inputs?

Well, the main need is to stop it breaking imported images. A frequent problem I’ve had reported is that importing gifs & pngs results in images that simply aren’t correct. Transparent backgrounds that are white, or black, for example. A fairly common problem is ‘transparent’ source pixels that are in the file as 0 alpha but RGB = white.

In the process of trying to sort them out it was noticed that this scaling prim strips away any alpha channel; I don’t particularly mean partial transparency, just all of it. Feed in a pixel that is A=255 and it comes out A=0. Bit of a pain if you ever need to display it with Form blend.

>
> This purposely does only output fully opaque and fully transparent pixels, which likely is a requirement further down the pipeline. Makes rendering faster, too: true alpha-blending is expensive.

The code gives the impression of having been written before the 32bpp ARGB pixel format was put into use. That could explain why it doesn’t set the alpha bits for the output. I could of course just do a bitblt with the fixAlpha rule but it’s faster to fix it inside the prim if possible.

The question becomes one of the final effect that is wanted - doing ‘the right thing’ by mixing the alpha values is simple and works ok for the examples I have right now but yes, they’re not using partial transparency and are finally displayed with Form paint instead of blend. And that is an issue too, since people are quite likely to try importing images with partial transparency from assorted paint programs and clipart, only to find it looks really strange.

The old code was fudging transparency and effectively rounding it up to opaque if the weighted average was > ~half.  I could do similar easily enough I think.

But I’m not at all knowledgeable about image processing stuff, which is why I ask.


tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Useful Latin Phrases:- Vescere bracis meis. = Eat my shorts.




Re: Who understands bilinear interpolation for reducing image size?

timrowledge
Just to make life even more fun, Scratch does actually use translucency IF you’ve set the ‘ghost’ graphic effect for a sprite. But if you happen to have a sprite which is supposed to be translucent in places it doesn’t get displayed correctly. Sigh.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
A computer program does what you tell it to do, not what you want it to do.




Re: Who understands bilinear interpolation for reducing image size?

Bert Freudenberg
In reply to this post by timrowledge

> On 09.12.2014, at 00:45, tim Rowledge <[hidden email]> wrote:
>
>
> On 08-12-2014, at 3:01 PM, Bert Freudenberg <[hidden email]> wrote:
>
>> On 08.12.2014, at 21:15, tim Rowledge <[hidden email]> wrote:
>>>
>>> The ScratchPlugin implements a prim to shrink a 32bpp image by use of bilinear interpolation. Unfortunately it completely ignores the alpha channel in 32bpp pixels and does some rather odd futzing to kinda-sorta fake handling of transparency.
>>>
>>> I can see how to add in (what I think would be) proper ARGB interpolating, and I think that simply removing the futzing would be correct - but I’d much rather have some input from somebody with a bit of image processing theory so there is some hope of my final result being actually correct.
>>
>> Why would you like to change it? To accept a wider range of inputs?
>
> Well, the main need is to stop it breaking imported images. A frequent problem I’ve had reported is that importing gifs & pngs results in images that simply aren’t correct. Transparent backgrounds that are white, or black, for example. A fairly common problem is ‘transparent’ source pixels that are in the file as 0 alpha but RGB = white.
>
> In the process of trying to sort them out it was noticed that this scaling prim strips away any alpha channel; I don’t particularly mean partial transparency, just all of it. Feed in a pixel that is A=255 and it comes out A=0. Bit of a pain if you ever need to display it with Form blend.
Looking at the code it should preserve pixel value 16r00000000, which is the only one recognized as transparent by BitBlt.

It's the job of the image importer to make sure that if alpha is zero, the whole pixel value is zero.

>> This purposely does only output fully opaque and fully transparent pixels, which likely is a requirement further down the pipeline. Makes rendering faster, too: true alpha-blending is expensive.
>
> The code gives the impression of having been written before the 32bpp ARGB pixel format was put into use.

Nope. The ARGB pixel format has been around forever. Dan took Smalltalk-80's 1-bit BitBlt and extended it to work in 1-2-4-8-16-32 bits per pixel before the first Squeak release.

> That could explain why it doesn’t set the alpha bits for the output.

No. What *has* changed is that people are much more willing to waste *thirty-two* bits for *each* tiny pixel in stored image files nowadays. Think about it. Eight bits for transparency where surely one would suffice, right? I'm just half kidding. This code is from an era where "transparent image" meant "GIF". Which didn't even have 1 bit of transparency per pixel, but I think 1/32nd of a bit, if my math is correct (8 bits per pixel with 1 out of 256 values meaning "transparent").

There simply was no way to import an image with 256 levels of transparency, so the code did not waste cycles dealing with them. 32-bit PNGs got popular much later.

The most common way to get a transparent sprite in Scratch is actually importing a photo (directly from camera, or via JPG, no transparency in any case) and using the eraser tool in the image editor to erase pixels. Again, no smooth alpha there.

> I could of course just do a bitblt with the fixAlpha rule but it’s faster to fix it inside the prim if possible.

Agreed.

> The question becomes one of the final effect that is wanted - doing ‘the right thing’ by mixing the alpha values is simple and works ok for the examples I have right now but yes, they’re not using partial transparency and are finally displayed with Form paint instead of blend. And that is an issue too, since people are quite likely to try importing images with partial transparency from assorted paint programs and clipart, only to find it looks really strange.

Yep. Kids these days ...

> The old code was fudging transparency and effectively rounding it up to opaque if the weighted average was > ~half.  I could do similar easily enough I think.
>
> But I’m not at all knowledgeable about image processing stuff, which is why I ask.

Well, I would preserve the output (only 0 and 255 for alpha) but extend the inputs: instead of comparing input pixel value to 0 to determine if it's transparent, compare the alpha. I think if the pixel value is a sqInt then comparing "pix < 0" would work, although doing an unsigned compare with 16r7F000000 would be less obfuscated.
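As a hedged illustration of that input test (the function name is mine and the C below is a sketch, not plugin code), deciding transparency from the alpha byte rather than from the whole word amounts to:

```c
#include <stdint.h>

/* Sketch of the suggested input test: classify a source pixel by its alpha
   byte instead of by the whole 32-bit value being zero.  Testing the top
   bit of the alpha (alpha >= 128) is exactly what the signed "pix < 0"
   trick does, written without the obfuscation. */
static int isMostlyOpaque(uint32_t pix)
{
    return (pix >> 24) >= 128;      /* same result as (int32_t)pix < 0 */
}
```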

- Bert -





Re: Who understands bilinear interpolation for reducing image size?

David T. Lewis
On Tue, Dec 09, 2014 at 10:59:48AM +0100, Bert Freudenberg wrote:
>
> > On 09.12.2014, at 00:45, tim Rowledge <[hidden email]> wrote:
> >
> > But I?m not at all knowledgeable about image processing stuff, which is why I ask.
>
> Well, I would preserve the output (only 0 and 255 for alpha) but extend the inputs: instead of comparing input pixel value to 0 to determine if it's transparent, compare the alpha. I think if the pixel value is a sqInt then comparing "pix < 0" would work, although doing an unsigned compare with 16r7F000000 would be less obfuscated.
>

It would not be a sqInt (because this could be 64 bits), which means it needs to
be declared explicitly, which in turn means that your "less obfuscated" approach
would be the better thing to do.

No worries, I think it's already declared correctly in #primitiveInterpolate:

    <var: 'in' declareC: 'unsigned int *in'>

Which in C is this:

    unsigned int *in;
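To illustrate why the declared width matters (a sketch, not plugin code): on a 64-bit host a pixel value stored in a 64-bit signed integer never has its sign bit set, so the `pix < 0` trick only works with an explicit 32-bit type.

```c
#include <stdint.h>

/* An opaque pixel 0xFF000000 is a large positive number in a 64-bit signed
   integer (sqInt on a 64-bit VM) but negative in a 32-bit one, so the sign
   test gives different answers depending on the declared width. */
static int signTestWide(int64_t pix)   { return pix < 0; }  /* 64-bit sqInt */
static int signTestNarrow(int32_t pix) { return pix < 0; }  /* 32-bit pixel */
```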


Dave



Re: Who understands bilinear interpolation for reducing image size?

Yoshiki Ohshima-3
In reply to this post by timrowledge
One issue is that Scratch's "color touching", and also the fill tool
in the paint editor rely on some assumptions. I'd be wary of fiddling
with them, as changes may alter the behavior of existing projects.

On Mon, Dec 8, 2014 at 3:45 PM, tim Rowledge <[hidden email]> wrote:

>
> On 08-12-2014, at 3:01 PM, Bert Freudenberg <[hidden email]> wrote:
>
>> On 08.12.2014, at 21:15, tim Rowledge <[hidden email]> wrote:
>>>
>>> The ScratchPlugin implements a prim to shrink a 32bpp image by use of bilinear interpolation. Unfortunately it completely ignores the alpha channel in 32bpp pixels and does some rather odd futzing to kinda-sorta fake handling of transparency.
>>>
>>> I can see how to add in (what I think would be) proper ARGB interpolating, and I think that simply removing the futzing would be correct - but I’d much rather have some input from somebody with a bit of image processing theory so there is some hope of my final result being actually correct.
>>
>> Why would you like to change it? To accept a wider range of inputs?
>
> Well, the main need is to stop it breaking imported images. A frequent problem I’ve had reported is that importing gifs & pngs results in images that simply aren’t correct. Transparent backgrounds that are white, or black, for example. A fairly common problem is ‘transparent’ source pixels that are in the file as 0 alpha but RGB = white.
>
> In the process of trying to sort them out it was noticed that this scaling prim strips away any alpha channel; I don’t particularly mean partial transparency, just all of it. Feed in a pixel that is A=255 and it comes out A=0. Bit of a pain if you ever need to display it with Form blend.
>
>>
>> This purposely does only output fully opaque and fully transparent pixels, which likely is a requirement further down the pipeline. Makes rendering faster, too: true alpha-blending is expensive.
>
> The code gives the impression of having been written before the 32bpp ARGB pixel format was put into use. That could explain why it doesn’t set the alpha bits for the output. I could of course just do a bitblt with the fixAlpha rule but it’s faster to fix it inside the prim if possible.
>
> The question becomes one of the final effect that is wanted - doing ‘the right thing’ by mixing the alpha values is simple and works ok for the examples I have right now but yes, they’re not using partial transparency and are finally displayed with Form paint instead of blend. And that is an issue too, since people are quite likely to try importing images with partial transparency from assorted paint programs and clipart, only to find it looks really strange.
>
> The old code was fudging transparency and effectively rounding it up to opaque if the weighted average was > ~half.  I could do similar easily enough I think.
>
> But I’m not at all knowledgeable about image processing stuff, which is why I ask.
>
>
> tim
> --
> tim Rowledge; [hidden email]; http://www.rowledge.org/tim
> Useful Latin Phrases:- Vescere bracis meis. = Eat my shorts.
>
>
>



--
-- Yoshiki


Re: Who understands bilinear interpolation for reducing image size?

timrowledge
In reply to this post by Bert Freudenberg
(we - as in several thousand people - have no power, and trees are down across power/phone lines, so things are a bit iffy)

On 09-12-2014, at 1:59 AM, Bert Freudenberg <[hidden email]> wrote:


>>
>
> Looking at the code it should preserve pixel value 16r00000000, which is the only one recognized as transparent by BitBlt.
>

Yes, but it also completely strips away the alpha channel, which causes problems elsewhere.

> It's the job of the image importer to make sure that if alpha is zero, the whole pixel value is zero.

Well, yes. We need to fix that too.

>
>>> This purposely does only output fully opaque and fully transparent pixels, which likely is a requirement further down the pipeline. Makes rendering faster, too: true alpha-blending is expensive.
>>
>> The code gives the impression of having been written before the 32bpp ARGB pixel format was put into use.
>
> Nope. The ARGB pixel format has been around forever. Dan took Smalltalk-80's 1-bit BitBlt and extended it to work in 1-2-4-8-16-32 bits per pixel before the first Squeak release.

If you say so; I don’t recall the alpha stuff being around till later but then I don’t recall a lot of things from last century.

>
>> That could explain why it doesn’t set the alpha bits for the output.
>
> No. What *has* changed is that people are much more willing to waste *thirty-two* bits for *each* tiny pixel in stored image files nowadays. Think about it. Eight bits for transparency where surely one would suffice, right? I'm just half kidding. This code is from an era where "transparent image" meant "GIF". Which didn't even have 1 bit of transparency per pixel, but I think 1/32nd of a bit, if my math is correct (8 bits per pixel with 1 out of 256 values meaning "transparent").
>
> There simply was no way to import an image with 256 levels of transparency, so the code did not waste cycles dealing with them. 32-bit PNGs got popular much later.

So we need to deal with them now.

>
> The most common way to get a transparent sprite in Scratch is actually importing a photo (directly from camera, or via JPG, no transparency in any case) and using the eraser tool in the image editor to erase pixels. Again, no smooth alpha there.
>
>> I could of course just do a bitblt with the fixAlpha rule but it’s faster to fix it inside the prim if possible.
>
> Agreed.
>
>> The question becomes one of the final effect that is wanted - doing ‘the right thing’ by mixing the alpha values is simple and works ok for the examples I have right now but yes, they’re not using partial transparency and are finally displayed with Form paint instead of blend. And that is an issue too, since people are quite likely to try importing images with partial transparency from assorted paint programs and clipart, only to find it looks really strange.
>
> Yep. Kids these days ...
>
>> The old code was fudging transparency and effectively rounding it up to opaque if the weighted average was > ~half.  I could do similar easily enough I think.
>>
>> But I’m not at all knowledgeable about image processing stuff, which is why I ask.
>
> Well, I would preserve the output (only 0 and 255 for alpha) but extend the inputs: instead of comparing input pixel value to 0 to determine if it's transparent, compare the alpha. I think if the pixel value is a sqInt then comparing "pix < 0" would work, although doing an unsigned compare with 16r7F000000 would be less obfuscated.

I guess something like that is probably best but I do dislike fudges of this sort.


tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Any sufficiently advanced bug is indistinguishable from a feature.




Re: Who understands bilinear interpolation for reducing image size?

Bert Freudenberg
On 09.12.2014, at 20:29, tim Rowledge <[hidden email]> wrote:
>
> On 09-12-2014, at 1:59 AM, Bert Freudenberg <[hidden email]> wrote:
>>
>> I would preserve the output (only 0 and 255 for alpha) but extend the inputs: instead of comparing input pixel value to 0 to determine if it's transparent, compare the alpha. I think if the pixel value is a sqInt then comparing "pix < 0" would work, although doing an unsigned compare with 16r7F000000 would be less obfuscated.
>>
> I guess something like that is probably best but I do dislike fudges of this sort.

It's not fudging, it's preserving expected behavior, while making it work over a wider range of inputs.

- Bert -







PNG importing appears to be broken (was Re: [squeak-dev] Who understands bilinear interpolation for reducing image size?)

timrowledge
Looking into why png imports sometimes have issues with the alpha channel has led me to the conclusion that something is broken in the area of PNGReadWriter>>copyPixelsRGBA:

In the (very) old version in the original Scratch image this code carefully checks pixels with an alpha == 0 and makes them ‘properly’ transparent. The newer code in 4.5 does not, and so some images import with supposedly transparent pixels where the ARGB is 00FFFFFF.

It looks to me as if we can correct this part of the problem by changing

        | i pixel tempForm tempBits ff |
        bitsPerChannel = 8 ifTrue: [
                ff := Form extent: width@1 depth: 32 bits: thisScanline.
                cachedDecoderMap
                        ifNil:[cachedDecoderMap := self rgbaDecoderMapForDepth: depth].
                (BitBlt toForm: form)
                        sourceForm: ff;
                        destOrigin: 0@y;
                        combinationRule: Form over;
                        colorMap: cachedDecoderMap;
                        copyBits.
                ^self.
        ].

to use 'combinationRule: Form blend' so that the alpha values are considered. Obviously a blend is a bit more expensive than an over, but we’re doing a lot of processing to import a png so I doubt it will make a noticeable difference - and it’s likely faster than manually iterating across all the pixels to fix things up.

I’m a touch puzzled that I can’t find any code that seems to be doing the ‘black=16rFF000001' fudge we have to do, so maybe that is in need of fixing?

Also I find myself a little suspicious of #copyPixelsGrayAlpha:, copyPixelsGrayAlpha:at:by:, copyPixelsRGBA:at:by:, copyPixelsGray:, copyPixelsGray:at:by: and I’ve probably missed some. Interestingly #copyPixelsRGB: seems to carefully do the right thing to add the alpha channel.

Does anyone actually feel they understand this stuff?

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
Old programmers never die; they just branch to a new address.




Re: PNG importing appears to be broken (was Re: [squeak-dev] Who understands bilinear interpolation for reducing image size?)

Bert Freudenberg
On 11.12.2014, at 01:06, tim Rowledge <[hidden email]> wrote:

>
> Looking into why png imports sometimes have issues with the alpha channel has led me to the conclusion that something is broken in the area of PNGReadWriter>>copyPixelsRGBA:
>
> In the (very) old version in the original Scratch image this code carefully checks pixels with an alpha == 0 and makes them ‘properly’ transparent. The newer code in 4.5 does not, and so some images import with supposedly transparent pixels where the ARGB is 00FFFFFF.
>
> It looks to me as if we can correct this part of the problem by changing
>
> | i pixel tempForm tempBits ff |
> bitsPerChannel = 8 ifTrue: [
> ff := Form extent: width@1 depth: 32 bits: thisScanline.
> cachedDecoderMap
> ifNil:[cachedDecoderMap := self rgbaDecoderMapForDepth: depth].
> (BitBlt toForm: form)
> sourceForm: ff;
> destOrigin: 0@y;
> combinationRule: Form over;
> colorMap: cachedDecoderMap;
> copyBits.
> ^self.
> ].
>
> to use 'combinationRule: Form blend' so that the alpha values are considered.
Don't think so. "Form over" is a simple copy, it copies everything including alpha unmodified. Besides, "form" is empty so there really is no reason to blend. There could potentially be a problem in rgbaDecoderMapForDepth: but it looks okay to me.

> Obviously a blend is a bit more expensive than an over, but we’re doing a lot of processing to import a png so I doubt it will make a noticeable difference - and it’s likely faster than manually iterating across all the pixels to fix things up.

If you look at alphaBlend:with: you'll see "blend" does not do what you might think it would be doing?

> I’m a touch puzzled that I can’t find any code that seems to be doing the ‘black=16rFF000001' fudge we have to do, so maybe that is in need of fixing?

No, the code assumes that if you are actually caring about alpha you know what you're doing. It assumes you will use proper alpha blending to display stuff, in which case fudging isn't needed.

> Also I find myself a little suspicious of #copyPixelsGrayAlpha:, copyPixelsGrayAlpha:at:by:, copyPixelsRGBA:at:by:, copyPixelsGray:, copyPixelsGray:at:by: and I’ve probably missed some. Interestingly #copyPixelsRGB: seems to carefully do the right thing to add the alpha channel.

Of course, because unlike its RGBA sister it does not have a real alpha channel to work with. Instead it has to do chroma-keying as a poor man's replacement.

> Does anyone actually feel they understand this stuff?

It's not rocket science. You just need to find the exact point where things go wrong.

- Bert -







Re: PNG importing appears to be broken (was Re: [squeak-dev] Who understands bilinear interpolation for reducing image size?)

timrowledge

On 11-12-2014, at 4:15 AM, Bert Freudenberg <[hidden email]> wrote:
>>
>> to use 'combinationRule: Form blend' so that the alpha values are considered.
>
> Don't think so. "Form over" is a simple copy, it copies everything including alpha unmodified. Besides, "form" is empty so there really is no reason to blend.

And ‘over’ simply copies malformed pixels to the target; those pixels that are 16r00FFFFFF and ought to be 16r0. Using ‘blend’ can solve that particular problem, but with the obvious downside that partial alpha pixels can become amusingly wrong. So yeah, it’s not a robust solution.

> There could potentially be a problem in rgbaDecoderMapForDepth: but it looks okay to me.

It works very cleverly as a way to handle endian conversions and some depth conversion, but when the code was changed to use it the bad-alpha issue was forgotten or otherwise ignored. I’m not totally convinced that the overheads of a fairly complex bitblt setup for every row of an image are better than a fairly simple loop to futz each pixel. It’ll depend upon the machine, the virtual machine and the run of data. Given that one then has to do something to each pixel to fix the bad-transparent cases it’s probably not that great.

Maybe there is a place for a new blt rule as a companion to fixAlpha, that replaces alpha=0 pixels with all-0, perhaps does the mapping of 0 to black->1 etc.
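A hedged sketch of what such a per-pixel fixup might do (the function name and exact policy are illustrative, not an existing BitBlt rule; 16rFF000001 is the pseudo-black convention this thread mentions):

```c
#include <stdint.h>

/* Illustrative per-pixel fixup of the kind described: a pixel whose alpha
   byte is zero becomes fully transparent (all zero), and opaque pure black
   is nudged to 16rFF000001 so code that tests the whole word for zero does
   not mistake it for transparent. */
static uint32_t fixPixel(uint32_t pix)
{
    if ((pix >> 24) == 0)   return 0;           /* alpha = 0: transparent   */
    if (pix == 0xFF000000u) return 0xFF000001u; /* opaque black: near-black */
    return pix;
}
```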

>
>> Obviously a blend is a bit more expensive than an over, but we’re doing a lot of processing to import a png so I doubt it will make a noticeable difference - and it’s likely faster than manually iterating across all the pixels to fix things up.
>
> If you look at alphaBlend:with: you'll see "blend" does not do what you might think it would be doing?

It doesn’t do exactly what I thought but it doesn’t do what the comment claims either. Specifically the comment claims "The high byte of the result will be 0.” which doesn’t appear to be the case. A simple  workspace fragment derived from the BitBltSimulation shows the alpha value in the result. Which I rather think is what we really want.

>> I’m a touch puzzled that I can’t find any code that seems to be doing the ‘black=16rFF000001' fudge we have to do, so maybe that is in need of fixing?
>
> No, the code assumes that if you are actually caring about alpha you know what you're doing. It assumes you will use proper alpha blending to display stuff, in which case fudging isn't needed.

The black = very very dark blue thing is nothing to do with blending etc. Somewhere we have to decide if an incoming png pixel value is meant to be black and convert that to our pseudo-black.

>
>> Also I find myself a little suspicious of #copyPixelsGrayAlpha:, copyPixelsGrayAlpha:at:by:, copyPixelsRGBA:at:by:, copyPixelsGray:, copyPixelsGray:at:by: and I’ve probably missed some. Interestingly #copyPixelsRGB: seems to carefully do the right thing to add the alpha channel.
>
> Of course, because unlike its RGBA sister it does not have a real alpha channel to work with. Instead it has to do chroma-keying as a poor-mans replacement.
>
>> Does anyone actually feel they understand this stuff?
>
> It's not rocket science.

No, it’s *much* more complicated than that.

> You just need to find the exact point where things go wrong.

I’m pretty sure that not checking for bad pixel values within these routines is where it goes wrong. I suggest that the fact that an older version of the code fixed the pixels and loaded the files correctly is quite strong supportive evidence.


tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
That’s the second time I’ve seen a Word doc eat a man’s soul. Time for a bug report...



Re: PNG importing appears to be broken (was Re: [squeak-dev] Who understands bilinear interpolation for reducing image size?)

Bert Freudenberg
On 11.12.2014, at 19:55, tim Rowledge <[hidden email]> wrote:
>
> On 11-12-2014, at 4:15 AM, Bert Freudenberg <[hidden email]> wrote:
>>>
>>> to use 'combinationRule: Form blend' so that the alpha values are considered.
>>
>> Don't think so. "Form over" is a simple copy, it copies everything including alpha unmodified. Besides, "form" is empty so there really is no reason to blend.
>
> And ‘over’ simply copies malformed pixels to the target; those pixels that are 16r00FFFFFF and ought to be 16r0.

IMHO it's not wrong if reading a file results in the exact contents of the file.

> Using ‘blend’ can solve that particular problem, but with the obvious downside that partial alpha pixels can become amusingly wrong. So yeah, it’s not a robust solution.

The Right Way to do blending later is using premultiplied alpha (Porter/Duff, 1984). When you convert 16r00FFFFFF to its pre-multiplied form, it becomes 16r00000000.

The problem is that the Right Way was only implemented in Squeak when some guy who actually understood graphics added bitblt mode 34. Everything else is wrong one way or another.

>> There could potentially be a problem in rgbaDecoderMapForDepth: but it looks okay to me.
>
> It works very cleverly as a way to handle endian conversions and some depth conversion, but when changing the code to use that the bad-alpha issue was forgotten or otherwise ignored. I’m not totally convinced that a fairly complex bitblt setup for every row of an image has less overhead than a fairly simple loop to futz each pixel. It’ll depend upon the machine, the virtual machine and the run of data. Given that one then has to do something to each pixel to fix the bad-transparent cases it’s probably not that great.
>
> Maybe there is a place for a new blt rule as a companion to fixAlpha, that replaces alpha=0 pixels with all-0, perhaps does the mapping of 0 to black->1 etc.

That would be piling on more of the wrong thing.

>>> Obviously a blend is a bit more expensive than an over, but we’re doing a lot of processing to import a png so I doubt it will make a noticeable difference - and it’s likely faster than manually iterating across all the pixels to fix things up.
>>
>> If you look at alphaBlend:with: you'll see "blend" does not do what you might think it would be doing?
>
> It doesn’t do exactly what I thought, but it doesn’t do what the comment claims either. Specifically, the comment claims “The high byte of the result will be 0.” which doesn’t appear to be the case. A simple workspace fragment derived from the BitBltSimulation shows the alpha value in the result. Which I rather think is what we really want.

The comment is obviously wrong, yes, but otherwise it does do what I expected. In particular, no futzing with FF000001 vs 00000000.
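For reference, conventional (non-premultiplied) source-over blending does leave a blended alpha in the high byte, which matches what the workspace fragment shows. A Python sketch of the textbook formula (not a transcription of BitBlt's alphaBlend:with:):

```python
def source_over(src, dst):
    """Conventional (non-premultiplied) source-over blend of two ARGB pixels.

    Each channel, including alpha, is mixed as s*sa + d*(1-sa); the
    result's high byte therefore carries a blended alpha, not zero.
    """
    sa = (src >> 24) & 0xFF
    out = 0
    for shift in (24, 16, 8, 0):  # alpha is blended like the colour channels
        s = (src >> shift) & 0xFF
        d = (dst >> shift) & 0xFF
        out |= ((s * sa + d * (255 - sa)) // 255) << shift
    return out

# An opaque source replaces the destination, alpha 0xFF intact
assert source_over(0xFF102030, 0x12345678) == 0xFF102030
```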

>>> I’m a touch puzzled that I can’t find any code that seems to be doing the ‘black=16rFF000001' fudge we have to do, so maybe that is in need of fixing?
>>
>> No, the code assumes that if you are actually caring about alpha you know what you're doing. It assumes you will use proper alpha blending to display stuff, in which case fudging isn't needed.
>
> The black = very very dark blue thing is nothing to do with blending etc. Somewhere we have to decide if an incoming png pixel value is meant to be black and convert that to our pseudo-black.

No, it really depends on what you are going to do with that pixel. If you have a proper rendering pipeline down the road, FF000000 is a perfectly fine opaque black.

>>> Also I find myself a little suspicious of #copyPixelsGrayAlpha:, copyPixelsGrayAlpha:at:by:, copyPixelsRGBA:at:by:, copyPixelsGray:, copyPixelsGray:at:by: and I’ve probably missed some. Interestingly #copyPixelsRGB: seems to carefully do the right thing to add the alpha channel.
>>
>> Of course, because unlike its RGBA sister it does not have a real alpha channel to work with. Instead it has to do chroma-keying as a poor man's replacement.
>>
>>> Does anyone actually feel they understand this stuff?
>>
>> It's not rocket science.
>
> No, it’s *much* more complicated than that.
>
>> You just need to find the exact point where things go wrong.
>
> I’m pretty sure that not checking for bad pixel values within these routines is where it goes wrong. I suggest that the fact that an older version of the code fixed the pixels and loaded the files correctly is quite strong supporting evidence.

Two wrongs don't make a right. For an image loader not to give me the pixel values an image was saved with is wrong.

I guess I should take a look at the problem. How can I reproduce?

- Bert -




Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

timrowledge
I’m going to snip out almost everything because this is getting confusing.

> IMHO it's not wrong if reading a file results in the exact contents of the file.

I think I have to disagree. At the most trivial level, reading a file with compressed data fails this assertion, for example. A little more seriously I claim that reading a file should provide a representation of the data contained in that file that we can use meaningfully within the context of our system. An obvious corollary is that our writing code must do the proper conversion too.

>>
>> The black = very very dark blue thing is nothing to do with blending etc. Somewhere we have to decide if an incoming png pixel value is meant to be black and convert that to our pseudo-black.
>
> No, it really depends on what you are going to do with that pixel. If you have a proper rendering pipeline down the road, FF000000 is a perfectly fine opaque black.

I can see that it ought to be so. Would we claim to have a proper rendering pipeline?

Another factor that arises from the old Scratch code for loading PNGs is that apparently some applications (ab)use the low bits of the alpha channel for Foul Deeds, and thus some true fudging was required; the code actually looks at the alpha and, if it is < 2 (or 3, or 4, whatever), sets the pixel to all-0. As a side effect this caught any 0-alpha/non-0-RGB cases.
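That fudge can be sketched like this (a Python illustration; the exact threshold the Scratch code used is an assumption):

```python
ALPHA_THRESHOLD = 2  # assumed cutoff; the old code used 2, 3 or 4

def fix_low_alpha(argb):
    """Zero out a 32-bit ARGB pixel whose alpha is below the threshold.

    This also catches the 0-alpha/non-0-RGB pixels (e.g. 16r00FFFFFF)
    that confuse 'paint'-style rendering.
    """
    if ((argb >> 24) & 0xFF) < ALPHA_THRESHOLD:
        return 0
    return argb

assert fix_low_alpha(0x00FFFFFF) == 0  # transparent white becomes all-0
```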

Two good example images are:
monkey1.png, where the background should be transparent but appears as white - https://copy.com/lgAtv9rDlTGaGb9m
bananas1.png (similar but different in detail) - https://copy.com/Vsj8lgqtyQmwIerA

It’s probably relevant that these get rendered with paint in the Scratch code. I’m not overly keen on having to change much of that. I suggest that our PNG reading code ought to deal with the issue of 0-alpha/non-0-RGB pixels, and possibly with the low-alpha pixels wrongly produced by some applications. Yes, it could be done with post-load code, but is it really worth making life difficult?

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
"Like, no bother man.." said Pooh as he spaced out on hash



Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

Bert Freudenberg
On 12.12.2014, at 04:47, tim Rowledge <[hidden email]> wrote:
>
> monkey1.png where the background should be transparent but appears as white - https://copy.com/lgAtv9rDlTGaGb9m
> bananas1.png (similar but different in detail) - https://copy.com/Vsj8lgqtyQmwIerA

Both work fine if I simply drop them into a 4.5 image. I can move/rotate/scale no problem. Shadow rendering is not nice, but that's a problem of the shadow renderer, not the bitmap itself.

Which suggests that the PNG importer is not, in fact, broken.

Additional touchup work should be done in Scratch, if it needs these pictures to be in some special format.

- Bert -





Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

timrowledge

On 12-12-2014, at 3:04 AM, Bert Freudenberg <[hidden email]> wrote:

> On 12.12.2014, at 04:47, tim Rowledge <[hidden email]> wrote:
>>
>> monkey1.png where the background should be transparent but appears as white - https://copy.com/lgAtv9rDlTGaGb9m
>> bananas1.png (similar but different in detail) - https://copy.com/Vsj8lgqtyQmwIerA
>
> Both work fine if I simply drop them into a 4.5 image. I can move/rotate/scale no problem. Shadow rendering is not nice, but that's a problem of the shadow renderer, not the bitmap itself.

Sigh. It looks ok when used that way because the process makes a SketchMorph, which draws by blending. Which is pretty much exactly what I said, though possibly rather long-windedly. Scratch has to use paint. Well, it could of course be changed to use blend, but as previously explained that risks messing up old projects and costing too much time on slower machines.

OK, I give up.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
How do I set my laser printer on stun?




Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

Bert Freudenberg

On 12.12.2014, at 20:09, tim Rowledge <[hidden email]> wrote:
>  Scratch has to use paint.

Then Scratch has to make it usable for that. All I'm saying is that we should not make the general importer worse for the special case of Scratch.

So as soon as the imported form enters Scratch code, fix up the RGB values. Why wouldn't that work?

- Bert -



Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

J. Vuletich (mail lists)

Quoting Bert Freudenberg <[hidden email]>:

> On 12.12.2014, at 20:09, tim Rowledge <[hidden email]> wrote:
>>  Scratch has to use paint.
>
> Then Scratch has to make it usable for that. All I'm saying is that  
> we should not make the general importer worse for the special case  
> of Scratch.
>
> So as soon as the imported form enters Scratch code, fix up the RGB  
> values. Why wouldn't that work?
>
> - Bert -

+1

Cheers,
Juan Vuletich



Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

J. Vuletich (mail lists)
Quoting "J. Vuletich (mail lists)" <[hidden email]>:

> Quoting Bert Freudenberg <[hidden email]>:
>
>> On 12.12.2014, at 20:09, tim Rowledge <[hidden email]> wrote:
>>> Scratch has to use paint.
>>
>> Then Scratch has to make it usable for that. All I'm saying is that  
>> we should not make the general importer worse for the special case  
>> of Scratch.
>>
>> So as soon as the imported form enters Scratch code, fix up the RGB  
>> values. Why wouldn't that work?
>>
>> - Bert -
>
> +1
>
> Cheers,
> Juan Vuletich

Just doing 'aForm asFormOfDepth: 16' should suffice. This should  
convert pixels with alpha below some threshold to zero, giving forms  
that work ok with 'paint'.
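A sketch of what such a depth reduction amounts to (Python for illustration; the 5-5-5 bit layout and the 128 threshold are assumptions, not the actual #asFormOfDepth: code):

```python
def argb32_to_16bpp(argb, alpha_threshold=128):
    """Down-convert a 32bpp ARGB pixel to a 16bpp (5-5-5) pixel.

    Pixels whose alpha falls below the threshold become 0, which
    alpha-less depths treat as transparent - so 'paint' works on the
    result.
    """
    if ((argb >> 24) & 0xFF) < alpha_threshold:
        return 0
    r = ((argb >> 16) & 0xFF) >> 3  # keep the top 5 bits of each channel
    g = ((argb >> 8) & 0xFF) >> 3
    b = (argb & 0xFF) >> 3
    pixel = (r << 10) | (g << 5) | b
    return pixel if pixel != 0 else 1  # opaque black must not collide with transparent 0

assert argb32_to_16bpp(0x00FFFFFF) == 0  # transparent white -> transparent
```

The `else 1` step is the same trick as the 16rFF000001 pseudo-black discussed earlier: in depths without an alpha channel, pixel value 0 is reserved for transparent, so opaque black must be nudged to the nearest non-zero value.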
Cheers,
Juan Vuletich



Re: PNG importing appears to be broken ( was Re: [squeak-dev] Who understand bilinear interpolation for reducing image size?)

J. Vuletich (mail lists)
Hi Tim,

I think this is the real solution. See below.

Quoting "J. Vuletich (mail lists)" <[hidden email]>:

> Quoting "J. Vuletich (mail lists)" <[hidden email]>:
>
>> Quoting Bert Freudenberg <[hidden email]>:
>>
>>> On 12.12.2014, at 20:09, tim Rowledge <[hidden email]> wrote:
>>>> Scratch has to use paint.
>>>
>>> Then Scratch has to make it usable for that. All I'm saying is  
>>> that we should not make the general importer worse for the special  
>>> case of Scratch.
>>>
>>> So as soon as the imported form enters Scratch code, fix up the  
>>> RGB values. Why wouldn't that work?
>>>
>>> - Bert -
>>
>> +1
>>
>> Cheers,
>> Juan Vuletich
>
> Just doing 'aForm asFormOfDepth: 16' should suffice. This should  
> convert pixels with alpha below some threshold to zero, giving forms  
> that work ok with 'paint'.
> Cheers,
> Juan Vuletich
The problem is that #asFormOfDepth: is broken if the source is 32bpp with translucency. The attached changeset (only tested in Cuis) fixes it.

Cheers,
Juan Vuletich



Attachment: 2134-32to16bpp-fix-JuanVuletich-2014Dec14-00h43m-jmv.1.cs.st (2K)