[Xtreams-Compression] Compression streams do not work with sockets


Reinout Heeck-2
Hi,

I am trying to use Xtreams in our project.
The following works:


skt := SocketAccessor openPair.
out := skt first writing.
in := skt last reading.

out nextPutAll: #[1 2 3 4 5 6 7 8 9 0].
out flush


in read: 10



When I try the following, I end up waiting forever:

skt := SocketAccessor openPair.
out := skt first writing compressing.
in := skt last reading compressing.

out nextPutAll: #[1 2 3 4 5 6 7 8 9 0].
out flush


in read: 10

What am I doing wrong? I was under the impression that those streams were stackable.

regards,

Cham

_______________________________________________
vwnc mailing list
[hidden email]
http://lists.cs.uiuc.edu/mailman/listinfo/vwnc

Re: [Xtreams-Compression] Compression streams do not work with sockets

Kogan, Tamara

Try the following:

 

skt := SocketAccessor openPair.

out := skt first writing compressing.

in := (skt last reading limiting: 15) compressing.

out nextPutAll: #[1 2 3 4 5 6 7 8 9 0].

out flush.

(in read: 10) inspect.

 

Tamara Kogan

Smalltalk Development

Cincom Systems

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of Cham Püschel
Sent: Monday, January 20, 2014 6:05 AM
To: [hidden email]
Subject: [vwnc] [Xtreams-Compression] Compression streams do not work with sockets

 




Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-2
Hi Tamara,

This works once; however, we need a compressed stream over a socket, and we don't know in advance how many bytes will be available, so limiting the underlying stream won't work.

Regards,

Cham




Re: [Xtreams-Compression] Compression streams do not work with sockets

Kogan, Tamara

Hi Cham,

 

When you use socket streams in an HTTP connection, there are two ways to indicate that the server side (out, in your case) is done: the server can close the socket stream, or the server's reply can include the data size.

 

When the input compression stream is created like this:

in := skt last reading compressing.

it will initialize its input buffer with the default size of 32768 bytes and try to read that amount from the source, which is the socket stream in our case.

(in read: 10) reads and decodes 10 bytes from the compression stream's input buffer.

 

Closing the out socket stream may help:

skt := SocketAccessor openPair.

out := skt first writing compressing.

in := skt last reading compressing.

out nextPutAll: #[1 2 3 4 5 6 7 8 9 0].

out flush.

out close.

(in read: 10) inspect.

in close.

 

Or specify the number of bytes to read from the socket stream:

skt := SocketAccessor openPair.

out := skt first writing compressing.

in := (skt last reading limiting: 15) compressing.   "LimitReadSubstream raises Incomplete after 15 bytes have been read from the socket stream, and limits the compression stream's input buffer to 15 bytes."

out nextPutAll: #[1 2 3 4 5 6 7 8 9 0].

out flush.

(in read: 10) inspect.

out close.

in close.

 

HTH,

Tamara Kogan

Smalltalk Development

Cincom Systems

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of Cham Püschel
Sent: Tuesday, January 21, 2014 7:01 AM
To: [hidden email]
Subject: Re: [vwnc] [Xtreams-Compression] Compression streams do not work with sockets

 


 



Re: [Xtreams-Compression] Compression streams do not work with sockets

mkobetic
In reply to this post by Reinout Heeck-2
Hi Cham,

The heart of the problem is that the compressed content doesn't give any hint about where it ends. The decompressor can detect the end, but it has to be fed the content first before it gives you an answer. The $1M question is how much of it you need to feed it to arrive at "finished". There may be some clever strategy of feeding it byte by byte, but then the stream would be incredibly slow. In any case, the CompressedReadStream doesn't even try (from CompressedReadStream>>read:into:at:):

                "both input and output should be completely exhausted at this point;
                we need to read more, but it's not clear how much since the input is compressed,
                so let's attempt to read a full buffer."
                sourceAtEnd ifFalse: [
                        unconsumedIn := [source read: input size into: input at: 1]
                                on: Incomplete
                                do: [:ex | sourceAtEnd := true. ex count]].

So the consequence is that the compression stream needs a hint about where the compressed content ends. Tamara mentioned two strategies: closing the connection, or a length prefix. If neither of those works, e.g. you don't want to close the connection and you don't know how long the content is up front, then some variation of chunking the compressed bytes could be another possibility.
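For illustration, a length-prefixed variant of the earlier openPair example might look like the following sketch. Only the selectors already used in this thread (#limiting:, #compressing, #read:, #nextPutAll:, #flush) are taken from the posts; in particular, collecting the compressed bytes in memory via ByteArray new writing and #conclusion is an assumption based on the Xtreams collection-stream examples, and may need adjusting for your version:

```smalltalk
"Writer: compress into an in-memory collection first, so the compressed
size is known, then send a two-byte big-endian size prefix followed by
the compressed bytes. (#conclusion answering the collected bytes is an
assumption, not confirmed by this thread.)"
skt := SocketAccessor openPair.
out := skt first writing.
compressed := (ByteArray new writing compressing
	nextPutAll: #[1 2 3 4 5 6 7 8 9 0];
	conclusion).
out nextPutAll: (ByteArray with: compressed size // 256 with: compressed size \\ 256).
out nextPutAll: compressed.
out flush.

"Reader: read the size prefix, then decompress through a #limiting:
substream so the decompressor's buffer cannot read past this message."
in := skt last reading.
sizeBytes := in read: 2.
size := ((sizeBytes at: 1) * 256) + (sizeBytes at: 2).
message := ((in limiting: size) compressing) read: 10.
```

Repeating the prefix-then-limit step per message would allow multiple compressed messages over one open connection, which is what the thread is asking for.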

HTH,

Martin




Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-2

I read in the responses that Cham's problem is a feature, not a bug.


I love the abstractions that Xtreams promises; I really want to be able to write at an abstract level:
   stream := aSocket reading compressing marshalling: JSON.
   [foo handle: stream next] whileTrue.

I don't understand guidance saying I should put '15' in there, or more generally that such higher-level code has to tell lower levels /how/ to do stuff instead of /what/ to do.

I also do not understand the suggestion that we should alter the content of the stream to support Xtreams. Please imagine: we would have to tell all the European electricity grid operators and gas grid operators we work with to alter their physical clearing protocol, because Xtreams cannot handle encodings that do not announce the size of substructures.

These remarks also seem to imply that Xtreams cannot handle basic stuff like reading multiple null-terminated strings of arbitrary length from a socket (or CR/LF-terminated records on an interactive pipe?), particularly when the data is compressed.

This is a solvable problem; the answers seem to imply that Cincom does not want to solve it, but prefers to defend its deficient form and push the resulting problems towards its paying customers?


Reinout
-------


_______________________________________________
vwnc mailing list
[hidden email]
http://lists.cs.uiuc.edu/mailman/listinfo/vwnc
Reply | Threaded
Open this post in threaded view
|

Re: [Xtreams-Compression] Compression streams do not work with sockets

Steven Kelly

Cincom have put Xtreams in Contributed, so they don’t have to promise it works or will be developed, and they don’t offer support for it. Given the MIT license and copyright line:

  “Copyright 2010-2013 Cincom Systems, Martin Kobetic and Michael Lucas-Smith”

then I’d like to express my gratitude to Martin and Michael, and my hope that in future, when Cincom gets a package free and unencumbered from its employees and pays to develop it further, it will more readily consider it supported, particularly if they start using it to market Cincom Smalltalk:

http://www.cincomsmalltalk.com/main/2013/07/cincom-smalltalk-foundation-series-xtreams/

 

All the best,

Steve

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of Reinout Heeck
Sent: Wednesday, January 22, 2014 1:34 PM
To: [hidden email]
Subject: Re: [vwnc] [Xtreams-Compression] Compression streams do not work with sockets

 





Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-2
On 1/22/2014 1:24 PM, Steven Kelly wrote:

Cincom have put Xtreams in Contributed, so they don’t have to promise it works or will be developed, and they don’t offer support for it.


Now I am totally confused... is it in Contributed because it is license-incompatible with the Cincom code, or because it is not supported?

If the latter, then we urgently need a hint in the browser, or perhaps a code critic rule, wherever we are building on unsupported code, particularly in those cases where Cincom code pulls in unsupported parcels. If Xtreams is indeed unsupported, I would appreciate being able to use TLS without loading unsupported code.


Given the MIT license and copyright line:

  “Copyright 2010-2013 Cincom Systems, Martin Kobetic and Michael Lucas-Smith”

then I’d like to express my gratitude to Martin and Michael,


Steve, when we ever meet I owe you more beers than you can handle -- for having adorned my gripes with courtesy so many times over the past years.

R
-



Re: [Xtreams-Compression] Compression streams do not work with sockets

Boris Popov, DeepCove Labs (SNN)
In reply to this post by Steven Kelly

I’ll note that Cincom is not just using it to market VisualWorks, but also to build other core features of a product, and it is no longer in the Contributed directory (at least in 7.10), e.g.:

 

Autoloading SiouX-Server from $(VISUALWORKS)\www\SiouX-Server.pcl

Autoloading Protocols-Common from $(VISUALWORKS)\parcels\Protocols-Common.pcl

Autoloading Protocol-Common-Namespace from $(VISUALWORKS)\parcels\ProtocolNamespace.pcl

Autoloading Xtreams-Terminals from $(VISUALWORKS)\xtreams\Xtreams-Terminals.pcl

Autoloading Xtreams-Core from $(VISUALWORKS)\xtreams\Xtreams-Core.pcl

Autoloading Xtreams-Support from $(VISUALWORKS)\xtreams\Xtreams-Support.pcl

Autoloading Xtreams-Transforms from $(VISUALWORKS)\xtreams\Xtreams-Transforms.pcl

Autoloading Xtreams-Substreams from $(VISUALWORKS)\xtreams\Xtreams-Substreams.pcl

 

Autoloading Xtreams-Compression from $(VISUALWORKS)\xtreams\Xtreams-Compression.pcl

Autoloading Compression-ZLib from $(VISUALWORKS)\parcels\Compression-ZLib.pcl

 

-Boris

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of Steven Kelly
Sent: January 22, 2014 7:25 AM
To: [hidden email]
Subject: Re: [vwnc] [Xtreams-Compression] Compression streams do not work with sockets

 




Re: [Xtreams-Compression] Compression streams do not work with sockets

Michael Lucas-Smith-2
In reply to this post by Steven Kelly
I don't consider the current implementation of Compression to be in line with one of my core goals for Xtreams, which is not to have hidden side effects that block. In this case, the Compression substream has its own buffer, instead of letting the source stream buffer if it requires it, which means that reading a small amount of information incorrectly blocks.

If you needed a buffer for performance, I'd expect to have to write:
(aSource buffering: 32768) compressing. As it is right now, that buffer is built into the compressing substream, which I do not think it should be.

As for a broader developer audience: as you can see, Tamara was the first to jump in to try to answer the question. She is one of the primary maintainers of the framework.

Certainly, suggestions that Xtreams is unable to handle multiple null-terminated strings or CR/LF-terminated records are absolutely untrue; the documentation even talks about how to do that sort of thing.

I believe the compression substream was built with performance in mind; one might consider it malformed to be unperformant in the common case, rather than malformed to block in a low-throughput case. I do not believe Compression was included in the core of Xtreams because it was not something that could be easily supported across Smalltalk implementations (particularly Pharo/Squeak, which have their own compression API to plug into).

In my humble opinion, the built-in buffer for compression should be removed, and if I get some spare time I might look into doing that. If Tamara decides it's of high enough importance (perhaps via customer request), she or Jerry might work on it sooner.

Parts of Xtreams will likely be supported by Cincom soon. I say parts because some bits are not core to the framework and are optional/trivial, not part of the required implementation.

Cheers,
Michael






Re: [Xtreams-Compression] Compression streams do not work with sockets

Steven Kelly
In reply to this post by Boris Popov, DeepCove Labs (SNN)

My apologies, and please ignore my comments; Cincom have already done pretty much exactly what I suggested: as Boris points out, the main part of Xtreams is directly in its own directory. I’d looked under Contributed and mistaken the Xtreams-Parsing package there for a core part of Xtreams.

 

Reinout: I look forward to sharing a beer one day! Please just pass the extra beers on to Cincom, as a small recompense for my own gripes and inaccuracies :).

 

All the best,

Steve

 

From: Boris Slobodin, DeepCove Labs [mailto:[hidden email]]
Sent: Wednesday, January 22, 2014 3:13 PM
To: Steven Kelly; [hidden email]
Subject: RE: [vwnc] [Xtreams-Compression] Compression streams do not work with sockets

 




Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-3
In reply to this post by Michael Lucas-Smith-2
On 1/22/2014 2:46 PM, Michael Lucas-Smith wrote:
> I don't consider the current implementation of Compression to be
> inline with one of my core goals of Xtreams, which is not to have
> hidden side effects that block.

Ok, so if I understand correctly:

--the current implementation of Compression is broken for common usages of TCP sockets.
--this problem is entirely confined to Compression and is not a generic bug in the Xtreams design, as I had read into Martin's wording.
--it is in our image because TLS uses it.
--it works for TLS because that usage wraps compressed data in a chunking protocol.

In the meantime, Cham found out that CompressReadStream works just fine if instantiated directly #on: the socket, without any intervening Xtreams; that's how we hacked our code for now.
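As a rough sketch, that workaround might read as follows. The class name CompressReadStream and the #on: instantiation come from the post itself; the classic-stream #next: selector, the mixed Xtreams/classic pairing, and the assumption that the Xtreams compressor and the classic decompressor speak the same zlib format are all untested guesses:

```smalltalk
"Hypothetical sketch of Cham's workaround: wrap the classic (non-Xtreams)
decompression stream directly around the socket accessor, so reads pull
from the socket on demand instead of pre-filling a 32768-byte buffer."
skt := SocketAccessor openPair.
out := skt first writing compressing.   "Xtreams on the writing side"
in := CompressReadStream on: skt last.  "classic stream on the reading side"

out nextPutAll: #[1 2 3 4 5 6 7 8 9 0].
out flush.
in next: 10
```

This sidesteps the Xtreams buffer pre-fill, at the cost of losing the composability of the Xtreams stack on the reading side.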


Thanks all!

Reinout
-------

Re: [Xtreams-Compression] Compression streams do not work with sockets

mkobetic
In reply to this post by Reinout Heeck-2
I'm no longer with Cincom, so whatever I say (or have said in the last few months) is strictly on my own behalf and does not reflect any intents, attitudes or policies of the company. I apologize if I misled anyone in that regard.

But I am one who could be blamed for the current state of the compression streams, so I tried to offer an explanation, albeit unsuccessfully so far. Let me try once more. The problem with decompression is that when you ask for 10 bytes of decompressed content, it is difficult to efficiently determine how much of the compressed content you need to bring in to satisfy the request. The read stream already tries to squeeze as much decompressed content as it can from what it has already read before going and asking for more. But once it's clear that it has to read more, how do you figure out how much? I don't know a satisfactory solution to that. Moreover, in the real-life examples we were dealing with (HTTP, TLS) there's always an external indication of where the compressed content ends, so we didn't need one. I'd be curious to hear more about your particular circumstances, Reinout, because, to be honest, I'd be suspicious of a protocol that relies on something like that. Can you provide more context?

> On 1/22/2014 2:46 PM, Michael Lucas-Smith wrote:
> > I don't consider the current implementation of Compression to be
> > inline with one of my core goals of Xtreams, which is not to have
> > hidden side effects that block.

I respectfully disagree. You can either have an asynchronous API (à la node.js) or a blocking API; there's no two ways about that. If you ask for 10 bytes and there are only 5 in the incoming socket buffer when you do, you need to block. The compression stream asked for a buffer's worth; that's why it blocked, as it should have. Unless you can somehow predict how the next 10 bytes, which you haven't even seen yet, will compress, I don't see any sane way to avoid blocking (other than clearly indicating the end of the compressed content somehow).

Yes we tried hard to avoid read-ahead in Xtreams, but in some cases it's simply unavoidable, decompression being one of them. Anyway, I'm very interested to see alternative solutions, always eager to learn :-)

Martin

>
> Ok, so if I understand correctly:
>
> --the current implementation of Compression is broken for common usages
> of TCP sockets.
> --this problem is entirely confined to Compression and not a generic bug
> in the Xtreams design as I read into Martin's wording.
> --it is in our image because TLS uses it,
> --it works for TLS because that usage wraps compressed data in a
> chunking protocol.
>
> In the meantime Cham found out that CompressReadStream works just fine
> if instantiated directly #on: the socket without any intervening
> Xstreams, so that's how we hacked our code for now.
>
>
> Thanks all!
>
> Reinout
> -------



Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-2
I wasn't sure whether you were asking a question in your previous mail;
seeing that you repeat the question here, I'd better put in my 2 cents'
worth.

On 1/23/2014 1:52 AM, [hidden email] wrote:

>   The problem with decompression is that when you ask for 10 bytes of decompressed content, it is difficult to efficiently determine how much of the compressed content you need to bring to satisfy the request. The read stream already tries to squeeze as much decompressed content as it can from what it already read before going and asking for more. But once it's clear that it has to read more, how do you figure out how much? I don't know a satisfactory solution to that. Moreover in real life examples that we were dealing with (HTTP, TLS) there's always an external indication of where the compressed content ends, so we didn't need that.
Ah, one conceptual leap you seem to have missed: if I wrap compression
immediately around the socket I _do_ have framing, particularly at the
TCP level (and of course the various lower-level layers before it goes
through our DSL modem ... and then some).

So because of this framing the system knows how much is available, and
flushing has (just) sufficiently well-defined semantics across the zlib
compressor and the network, all the way through the receiving end's TCP
stream API, to be meaningful.
So even though the bulk of the communication-stack layering is not
visible at the image level, flushing is sufficiently defined: the read
semaphore in the receiving VW image does get signaled as a result of a
flush on the sending end, and the flushed data will /all/ be available
shortly after the flush.

This leads to the following answer regarding input in my use case: you
hand zlib whatever data is available.
Since we use compression for the lifetime of the connection, the answer
for the output side is trivial as well: you request data until the
application layer has enough 'for now' or the connection is closed.

More involved use cases have some form of uncompressed record
separators: when the smalltalk image implements protocols on top of the
TCP API of the OS you have records /around/ the compressed data, this is
handled nicely with the XStreams abstraction as per your HTTP and TLS
examples.

The final use case is one I have never seen:
the records are inside the compressed stream AND these records may
contain a command to stop compressing (so it is important to not
optimistically decompress too much).
The zlib API supports this use case, but I would not be surprised if it
is inefficient (though not hard!) to implement in Xtreams. In essence you
hand whatever read data is available to the zlib decompressor, and the
application must be very careful to ask for the right amount of
decompressed data. After each such decompression step the zlib API will
report how much compressed data it consumed. For CR-separated records
this seems to degenerate to asking zlib to decompress one output byte
per C call.


>   I'd be curious to hear more about your particular circumstances, Reinout, because, to be honest, I'd be suspicious of a protocol that does rely on something like that. Can you provide more context?

Heh, I never had such suspicion :-)
I find it entirely natural that various protocols can wrap each other
and that they all support flushing because that is considered a basic
function of protocol stacks. So if I want to wrap various streams inside
each other I 'just do that' and expect it to work because that is how
things have always been. If I want to compress my interactive TCP stream
I 'just do that' (as per Cham's original post).

In the subset of cases involving records with a byte count in their
vanguard the choice between compression 'inside' records and compression
'around' records is trivially answered: compression around records
yields better compression, particularly if you have wildly varying
payload sizes and the record separators might at times comprise the bulk
of the bytes to transmit.
A further advantage of wrapping compression this way is that I do not
have to worry much about optimizing the record vanguard layout in our
private protocols: I can use a nicely naive encoding and rely on zlib to
squeeze a large enough amount of air out of the resulting bulk.

>
>> On 1/22/2014 2:46 PM, Michael Lucas-Smith wrote:
>>> I don't consider the current implementation of Compression to be
>>> inline with one of my core goals of Xtreams, which is not to have
>>> hidden side effects that block.
> I respectfully, disagree. You can either have an asynchronous API (ala node.js) or you have a blocking API, there's no two ways about that. If you ask for 10 bytes, and there are only 5 in the incoming socket buffer when you do that, you need to block. The compressed stream asked for a buffer's worth, that's why it blocked, as it should have.
I hope the above has made it clear that the misunderstanding lies here.
You do not hand the decompressor a 'buffer's worth'; you potentially
hand it less: 'what is available without blocking'. The decompressor
will report how much it consumed after the output has been taken, and
from there on things should start to fall nicely into place.
I have not studied the internals of the Xtreams implementation yet, so I
have no idea how much impedance mismatch there might be around this
'asking for available data without blocking' functionality.
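(Editorial illustration: the scheme proposed here, flush on the writing side, then feed the decompressor whatever the socket currently has, can be sketched in Python; the socketpair setup stands in for the TCP connection and is not Xtreams API. The key is that a zlib sync flush pushes all buffered bits out to a byte boundary, so the reader can fully decompress what has arrived.)

```python
import socket
import zlib

left, right = socket.socketpair()

# Writing side: compress, then flush so everything written so far
# becomes decompressable on the other end.
c = zlib.compressobj()
payload = bytes([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
left.sendall(c.compress(payload))
left.sendall(c.flush(zlib.Z_SYNC_FLUSH))  # align output to a byte boundary

# Reading side: hand zlib whatever is available; no extra framing needed,
# because the sync flush guarantees the payload is fully represented
# in the bytes already sent.
d = zlib.decompressobj()
out = b""
while len(out) < len(payload):
    out += d.decompress(right.recv(4096))

assert out == payload
```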

>
> Yes we tried hard to avoid read-ahead in Xtreams, but in some cases it's simply unavoidable, decompression being one of them. Anyway, I'm very interested to see alternative solutions, always eager to learn :-)

Hope this helped :-)

I also want to stress that my take on Smalltalk is that naive code
should 'just work' so we can start iterating on working code early.
Hence, IMO, libraries should favour making things work over failing
naive use cases in the name of optimization.



Reinout
-------

Re: [Xtreams-Compression] Compression streams do not work with sockets

Arden-8
In reply to this post by Steven Kelly
Hi Steve, et al;

Xtreams is an atypical component for us in that parts are supported and parts are unsupported. We use it in the product, and because we do, that use of it is supported. Since Xtreams was experimental, new, large, and open source, we refrained from declaring it completely supported, as we did not want the potential liability of having to put a lot of resources into it, which would impact our commitment to customers for product development, fixes and improvements.

As an open source licensed library, there is some expectation for community discussion and support of Xtreams.

I welcome customers and developers to share any feedback on their use of and interest in Xtreams.
I hope this helps clarify the issue.

Thanks!

Regards

Arden Thomas

     Arden Thomas
     Cincom Smalltalk Product Manager


On Wed, Jan 22, 2014 at 7:24 AM, Steven Kelly <[hidden email]> wrote:

Cincom have put Xtreams in Contributed, so they don’t have to promise it works or will be developed, and they don’t offer support for it. Given the MIT license and copyright line:

  “Copyright 2010-2013 Cincom Systems, Martin Kobetic and Michael Lucas-Smith”

then I’d like to express my gratitude to Martin and Michael, and my hope that in future when Cincom gets a package for free and unencumbered from its employees, and pays to develop it further, it would more readily consider it as supported - particularly if they start using it to market Cincom Smalltalk:

http://www.cincomsmalltalk.com/main/2013/07/cincom-smalltalk-foundation-series-xtreams/

 

All the best,

Steve

 

From: [hidden email] [mailto:[hidden email]] On Behalf Of Reinout Heeck
Sent: Wednesday, January 22, 2014 1:34 PM
To: [hidden email]


Subject: Re: [vwnc] [Xtreams-Compression] Compression streams do not work with sockets

 


I read in the responses that Cham's problem is a feature, not a bug.


I love the abstractions that Xstream promises, I really want to be able to write at abstract level:
   stream :=  aSocket reading compressing marshalling: JSON.
    [foo handle: stream next] whileTrue.

I don't understand guidance saying I should put '15' in there, or more generally that such higher level code has to tell lower levels /how/ to do stuff instead of /what/ to do.

I also do not understand the suggestion that we should alter the content of the stream to support Xstreams. Please imagine: we would have to tell all the European electricity grid operators and gas grid operators we work with to alter their physical clearing protocol because Xstreams cannot handle encodings that do not announce the size of substructures.

These remarks also seem to imply that Xtreams cannot handle basic stuff like reading multiple null-terminated strings of arbitrary length from a socket (or CR/LF-terminated records on an interactive pipe?), particularly when the data is compressed.

This is a solvable problem; the answers seem to imply that Cincom does not want to solve it, but prefers to defend its deficient form and push the resulting problems towards its paying customers?


Reinout
-------






Re: [Xtreams-Compression] Compression streams do not work with sockets

mkobetic
In reply to this post by Reinout Heeck-2
"Reinout Heeck"<[hidden email]> wrote:
> I wasn't sure whether you were asking a question in your previous mail,
> seeing that you repeat the question here I'd better show my 2 cents worth.

Thank you, Reinout, much appreciated.

> On 1/23/2014 1:52 AM, [hidden email] wrote:
>
> >   The problem with decompression is that when you ask for 10 bytes of decompressed content, it is difficult to efficiently determine how much of the compressed content you need to bring to satisfy the request. The read stream already tries to squeeze as much decompressed content as it can from what it already read before going and asking for more. But once it's clear that it has to read more, how do you figure out how much? I don't know a satisfactory solution to that. Moreover in real life examples that we were dealing with (HTTP, TLS) there's always an external indication of where the compressed content ends, so we didn't need that.
> Ah, one conceptual leap you seem to miss: if I wrap compression
> immediately around the socket I _do_ have framing, particularly at the
> TCP level (and of course the various lower level levels before it goes
> through our DSL modem ... and then some..).
>
> So because of this framing the system knows how much is available and
> flushing has (just) sufficiently well defined semantics across the zlib
> compressor and the network all the way through the receiving end's TCP
> stream API to be meaningful.
>
> So even though the bulk of the communication stack layering is not
> visible at the image level, flushing is sufficiently defined: the read
> semaphore in the receiving VW image does get signaled as a result of a
> flush on the sending end and the flushed data will /all/ be available
> shortly after the flush.

Agreed, as long as we admit that /all/ doesn't necessarily apply in every case. Basically any fixed-size framing, e.g. block encryption, may prevent /some/ bytes from being flushed on the writing end. So while I sympathise with the sentiment that "naive code should just work", we should also acknowledge that sometimes it simply can't. Nevertheless, I admit that in Cham's particular example of compression over a bare socket, the information needed to make it work is available on the reading side.

> This leads to the following answer regarding input in my use case: you
> hand to zlib whatever data is available.
> Since we use compression for the lifetime of the connection the answer
> for the output is trivial as well: you request data until the
> application layer has enough 'for now' or the connection is closed.

Yes, that will however require supporting "read what's available right now", more on that below.

> More involved use cases have some form of uncompressed record
> separators: when the smalltalk image implements protocols on top of the
> TCP API of the OS you have records /around/ the compressed data, this is
> handled nicely with the XStreams abstraction as per your HTTP and TLS
> examples.

I'd like to add a clarification. It's not just arbitrary outer framing that the current compression read stream needs; it must be framing that specifically delimits where the compressed content starts and where it ends. So in that sense, the lifetime of a TCP connection is sufficient as framing. The issue is that the stream will block until a buffer's worth of compressed content is available, and only then will it satisfy as many read requests as it can out of it before blocking again to wait for another buffer's worth (or the end of the stream). We can debate whether that's satisfactory or not, and how it can be improved, but that's how it currently works.

> The final use case is one I have never seen:
> the records are inside the compressed stream AND these records may
> contain a command to stop compressing (so it is important to not
> optimistically decompress too much).

IIRC, zlib will not decompress too much, the decompressor knows where the compressed content ends and it will not attempt to decompress anything after that.

> The zlib API supports this use case but I would not be surprised if this
> is inefficient (but not hard!) to implement in Xstreams. In essence you
> hand whatever read data is available to the zlib decompressor, and the
> application must be very careful to ask for the right amount of
> decompressed data.

I don't think it's about being careful; zlib will not give you more than there was. The caller can only say "give me at most X bytes of decompressed content", but how much it actually gets is up to zlib.

> After each such decompression step the zlib API will
> report to you how much compressed data it consumed.
> For CR separated
> records this seems to degenerate to asking zlib to decompress one output
> byte per C call.

This is where I think it gets harder and it's regardless of the format of the compressed content (CR separated or not). If it's critical that you don't read from the underlying source past the compressed content, then you pretty much have to degrade to reading one byte at a time. I think the only sensible way to do this is to accept that you'll likely read past the end and buffer the leftovers for later. Or you can externally frame the compressed content yourself and then you don't have to worry about it, which is what Tamara and I were talking about.
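(Editorial illustration: the external-framing approach mentioned above can be sketched as a length prefix per compressed record. This is a hypothetical wire format invented for the example, not anything Xtreams or TLS defines; the reader then knows exactly how many bytes belong to the compressed content and never reads past it.)

```python
import struct
import zlib

def frame(payload: bytes) -> bytes:
    # Compress one record and prefix it with its compressed length.
    c = zlib.compress(payload)
    return struct.pack(">I", len(c)) + c

def read_frame(buf: bytes, offset: int = 0):
    # Read one framed record; return (decompressed payload, next offset).
    # The length prefix tells us exactly where the compressed bytes end.
    (n,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    return zlib.decompress(buf[start:start + n]), start + n

buf = frame(b"first record") + frame(b"second record")
rec1, off = read_frame(buf)
rec2, _ = read_frame(buf, off)
assert (rec1, rec2) == (b"first record", b"second record")
```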

> >   I'd be curious to hear more about your particular circumstances, Reinout, because, to be honest, I'd be suspicious of a protocol that does rely on something like that. Can you !
> >   provide more context?
>
> Heh, I never had such suspicion :-)
> I find it entirely natural that various protocols can wrap each other
> and that they all support flushing because that is considered a basic
> function of protocol stacks. So if I want to wrap various streams inside
> each other I 'just do that' and expect it to work because that is how
> things have always been. If I want to compress my interactive TCP stream
> I 'just do that' (as per Cham's original post).

I think you're being too generous with "just do it and it should just work". While I still believe in the compositional approach to streaming, I doubt that it's feasible in the general case of composing arbitrary transformations. And I mean seemingly sensible compositions, not some random examples trying to prove one's point. The interactions between the layers can be quite bewildering sometimes. But I can acknowledge that for some subset of all sensible "it"s it should "just work" :-).

> In the subset of cases involving records with a byte count in their
> vanguard the choice between compression 'inside' records and compression
> 'around' records is trivially answered: compression around records
> yields better compression, particularly if you have wildly varying
> payload sizes and the record separators might at times comprise the bulk
> of the bytes to transmit.
> A further advantage of wrapping compression this way is that I do not
> have to worry much about optimizing the record vanguard layout in our
> private protocols: I can use a nicely naive encoding and rely on zlib to
> squeeze a large enough amount of air out of the resulting bulk.
>
> >
> >> On 1/22/2014 2:46 PM, Michael Lucas-Smith wrote:
> >>> I don't consider the current implementation of Compression to be
> >>> inline with one of my core goals of Xtreams, which is not to have
> >>> hidden side effects that block.
> > I respectfully, disagree. You can either have an asynchronous API (ala node.js) or you have a blocking API, there's no two ways about that. If you ask for 10 bytes, and there are only 5 in the incoming socket buffer when you do that, you need to block. The compressed stream asked for a buffer's worth, that's why it blocked, as it should have.
> I hope the above has made it clear that the misunderstanding lies here.
> You do not hand the decompressor a 'buffers worth', but you potentially
> hand it less: 'what is available without blocking'. The decompressor
> will report how much it consumed after output has been taken and from
> there on things should start to fall nicely in place.
> I have not studied the internals of the Xstreams impementation yet, so I
> have no idea how much impedance mismatch there might be around this
> 'asking for available data without blocking' functionality.

Right, my previous comments were largely from the point of view of current Xtreams API. To do what you're suggesting we would need to add a notion of a "partial" read, where it may return less than what you've asked for. Initially I thought that wouldn't be that hard, but the longer I'm pondering it the more of a "rat hole" it seems. Let's say we would add another flag to read, that will control whether it can be partial or not, e.g. #read:into:at:partial: (true/false). On most streams it would simply ignore the extra flag and just do what it did before. On socket streams it can return partial read if the flag allows it. I would probably make even partial reads potentially block if there's nothing available at the time of the call. That seems the simplest way to support waiting until something arrives. So far so good, Cham's example would be trivial with this.
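(Editorial illustration: a rough Python analogue of the partial-read flag discussed above. The function name and its exact semantics are assumptions sketching the idea, not an existing Xtreams or Python API: block until at least one byte is available, then either return whatever arrived (partial) or keep reading until the full count is satisfied.)

```python
import socket

def read_bytes(sock, n, partial=False):
    """Block until at least one byte is available, then either return what
    arrived (partial=True) or insist on exactly n bytes (partial=False)."""
    data = sock.recv(n)          # blocks only while nothing is available
    if partial or not data:
        return data              # possibly fewer than n bytes
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise EOFError("source ended after %d bytes" % len(data))
        data += chunk
    return data

w, r = socket.socketpair()
w.sendall(b"hello")
assert read_bytes(r, 10, partial=True) == b"hello"  # partial: 5 < 10 is fine
w.sendall(b"0123456789")
assert read_bytes(r, 10) == b"0123456789"           # full: waits for all 10
```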

But things get less clear once you start adding layers to the picture. If the compression stream sits at the top, you really need the partiality to be passed down through all the layers for it to have the desired effect. I suspect that many transforms would potentially require two different read implementations, one partial and one full. Naively a full read should be just a special case of partial, but I suspect that if that pans out at all, it could have significant impact on execution of a full read. I'm not saying it's not feasible, but I doubt it will be as easy as one would hope.

Michael and I were considering partial reads early on in the project. I don't remember the details anymore, but we decided against it back then. It isn't really useful as a general-purpose API; you really don't want to write your algorithms against a non-deterministic API call. But yes, it would help with implementing a better compression stream. It's not that hard to support specific cases, but it's harder to come up with something that fits the general scheme and doesn't double the complexity or create odd inconsistencies.

>
> >
> > Yes we tried hard to avoid read-ahead in Xtreams, but in some cases it's simply unavoidable, decompression being one of them. Anyway, I'm very interested to see alternative solutions, always eager to learn :-)
>
> Hope this helped :-)
>
> I also want to stress that my take on Smalltalk is that naive code
> should 'just work' so we can start iterating working code early.
> Hence IMO libraries should support making things work above choosing to
> fail naive use cases in the name of optimization.

Optimization for the sake of optimization, sure. But what about deoptimization for the sake of supporting questionable naive code, to the point of making it too slow for practical real-world use cases?

>
>
>
> Reinout
> -------



Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-3
In reply to this post by Arden-8
Thank you for clarifying this!

R
-


On 1/23/2014 6:46 PM, Arden Thomas wrote:

Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-2
In reply to this post by mkobetic
Martin,
  I currently lack the time to address this problem properly, so forgive
me: this discussion will progress at glacial speed.

My current hunch is that it is solvable -- not quite a rat's nest, merely
a copious amount of tedium ;-)



I easily can answer your last question though:


> Optimization for the sake of optimization, sure. But what about deoptimization for the sake of supporting a questionable naive code to the point of making it too slow for practical real use cases?
>

This reminds me of Smalltalk in its first decade:
use whole object pointers to store just one bit of info,
require many machine instructions to dispatch a single 'subroutine',
de-optimize subroutines merely for reasons of human readability...
Obviously this cannot run on a desktop machine.
  :^)

R
-


Re: [Xtreams-Compression] Compression streams do not work with sockets

Reinout Heeck-2
In reply to this post by mkobetic
Can you recommend a document or presentation that shows the internals
and enumerates the design decisions in Xtreams?

Thanks!

R
-

Re: [Xtreams-Compression] Compression streams do not work with sockets

Niall Ross
Dear Reinout,

> Can you recommend a document or presentation that shows the internals
> and enumerates the design decisions in XStreams?

My 2010 ESUG report contains a combined write up of a talk Martin gave
at ESUG and a talk Michael gave later that year.  The PDF is at

http://www.esug.org/data/ReportsFromNiallRoss/NiallRossESUG2010Report.pdf

Martin's slides for his ESUG talk are at
    http://www.slideshare.net/esug/xtreams
or
    http://www.slideshare.net/mkobetic/xtreams-esug-2010

The various web pages written by Martin and Michael are at

http://code.google.com/p/xtreams/

(BTW, unless I'm missing something, these documentation locations are
not actually mentioned in the Xtreams' packages comments or other
obvious location with the code.  Am I missing an obvious reference?  If
not, perhaps they should be.)

             HTH
                   Niall Ross
