Cog issue? Re: [Pharo-project] StandardFileStream size limit?

Cog issue? Re: [Pharo-project] StandardFileStream size limit?

Göran Krampe
 
Hey!

Ok, so the plot thickens:

If I run this in a "oneclick 1.4" I will get a file write error on 2Gb:

| f b |
f := StandardFileStream newFileNamed: 'test'.
b := ByteArray new: 1024*1024*100. "100Mb"
[30 timesRepeat: [f nextPutAll: b]] ensure: [f close] "3Gb"


...but it worked fine using a self-built "standard" VM from squeakvm.org
(4.4.7.2357)! Same image, btw.

Now, does it work with the bleeding edge Cog? Tried r2556 and nope, same
problem.

So I "guess" it is a Cog thing? I haven't tried building Cog from source.

regards, Göran
Re: Cog issue? Re: [Pharo-project] StandardFileStream size limit?

David T. Lewis
 
On Wed, Jun 13, 2012 at 04:08:14PM +0200, Göran Krampe wrote:

> So I "guess" it is a Cog thing? I haven't tried building Cog from source.

For background on large file support, see especially Bert's summary:

http://forum.world.st/Re-squeak-dev-filesize-reporting-0-for-very-large-files-td4483646.html
http://bugs.squeak.org/view.php?id=7522
http://www.suse.de/~aj/linux_lfs.html

I'm not sure if the Cog VMs are being compiled with the LFS option, although
I expect that if you compile the VM yourself with the build options that Bert
explains, it should start working.
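
For reference, here is a minimal sketch of what that LFS build option does at
the C level. This is the generic glibc mechanism described in the links above,
not the actual Squeak VM build files, so take it as an illustration only:

/* Sketch only: defining _FILE_OFFSET_BITS=64 (typically via
   -D_FILE_OFFSET_BITS=64 in CFLAGS) makes off_t 64 bits wide even in a
   32-bit build, so fseeko/ftello/stat can address files beyond 2 GB. */
#define _FILE_OFFSET_BITS 64   /* must come before any system include */

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* Prints 8 with LFS enabled; a plain 32-bit build prints 4, and
       file offsets then stop at 2^31 - 1 bytes. */
    printf("sizeof(off_t) = %u\n", (unsigned)sizeof(off_t));
    return 0;
}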

Dave

Re: Cog issue? Re: [Pharo-project] StandardFileStream size limit?

Eliot Miranda
 


On Wed, Jun 13, 2012 at 7:40 AM, David T. Lewis <[hidden email]> wrote:

> I'm not sure if the Cog VMs are being compiled with the LFS option, although
> I expect that if you compile the VM yourself with the build options that Bert
> explains, it should start working.

I just committed the necessary changes for my branch. Will rebuild soon. Göran, if you're in a hurry, build your own? You'd need to be in http://www.squeakvm.org/svn/squeak/branches/Cog/unixbuild/bld and run ./mvm.
--
best,
Eliot

Re: Cog issue? Re: [Pharo-project] StandardFileStream size limit?

David T. Lewis
 
On Wed, Jun 13, 2012 at 03:00:29PM -0700, Eliot Miranda wrote:

> I just committed the necessary changes for my branch. Will rebuild soon.
> Göran, if you're in a hurry, build your own? You'd need to be in
> http://www.squeakvm.org/svn/squeak/branches/Cog/unixbuild/bld and run ./mvm.

Excellent :)

Dave

Re: Cog issue? Re: [Pharo-project] StandardFileStream size limit?

Bert Freudenberg


On 2012-06-14, at 00:09, David T. Lewis wrote:


I wonder why the interpreter VM works though - I didn't see any of those flags in its build settings. Or maybe some Linuxes enable large files by default?

- Bert -


Re: Cog issue? Re: [Pharo-project] StandardFileStream size limit?

David T. Lewis
 
On Thu, Jun 14, 2012 at 11:17:40AM +0200, Bert Freudenberg wrote:

> I wonder why the interpreter VM works though - I didn't see any of those flags
> in its build settings. Or maybe some Linuxes enable large files by default?

You are right, those flags are not in the build settings.

I just double checked, and it turns out that it is not true that the standard
interpreter VM works. The interpreter VM works for large files when compiled as
a 64-bit executable, and it works when compiled as a 32-bit executable if your
instructions for the LFS flags are followed. But the officially distributed
32-bit interpreter VM does not have LFS support enabled, and if you open a file
list on a very large file, the file size will not be displayed correctly.
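
As an illustration of the underlying failure mode, here is a hypothetical
little C probe (not taken from the VM sources): on a 32-bit build without LFS,
stat() cannot even represent the size of a file of 2 GB or more and fails with
EOVERFLOW, which is consistent with the size not being reported correctly.

/* Hypothetical probe, not from the Squeak sources. Compile it once plainly
   and once with -D_FILE_OFFSET_BITS=64 on a 32-bit system to compare. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat st;
    const char *path = (argc > 1) ? argv[1] : "test";  /* e.g. the 3 GB file above */

    if (stat(path, &st) != 0) {
        /* Without LFS this reports EOVERFLOW for files >= 2 GB. */
        printf("stat failed: %s\n", strerror(errno));
        return 1;
    }
    printf("size = %lld bytes\n", (long long)st.st_size);
    return 0;
}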

Dave
 
[unix] Large file support (was: StandardFileStream size limit?)

Bert Freudenberg

On 2012-06-14, at 14:29, David T. Lewis wrote:

Ah, okay. So I guess it would be a Good Idea to enable LFS in the interpreter too, right? There shouldn't be bad side effects.

- Bert -

PS: Don't give me too much credit. These are not "my" instructions, I just found and posted them :)

Re: [unix] Large file support

Göran Krampe
 
On 06/14/2012 02:59 PM, Bert Freudenberg wrote:
>
> On 2012-06-14, at 14:29, David T. Lewis wrote:
>
>> I just double checked, and it turns out that it is not true that the standard
>> interpreter VM works. The interpreter VM works for large files when compiled as
>> a 64-bit executable,

Ok, and... yeah, that is probably what I did when just doing the regular
compile-dance on Ubuntu 64:

gokr@quigon:~/Squeak-4.4.7.2357-src/bld$ file squeakvm
squeakvm: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.24,
BuildID[sha1]=0x7f4085955eddcafa728f78e1b21a1b2385903524, not stripped

regards, Göran
Re: [unix] Large file support (was: StandardFileStream size limit?)

David T. Lewis
 
On Thu, Jun 14, 2012 at 02:59:12PM +0200, Bert Freudenberg wrote:

> Ah, okay. So I guess it would be a Good Idea to enable LFS in the interpreter too, right? There shouldn't be bad side effects.

Presumably yes, although I don't know if there might be some performance
tradeoffs involved in using 64-bit file addresses. But my guess would be
that Linux uses 32-bit file addresses in 32-bit mode to accommodate older
programs that assume 32-bit file references. Maybe Eliot can comment?

>
> - Bert -
>
> PS: Don't give me too much credit. These are not "my" instructions, I just found and posted them :)

Well, *I* certainly would not have figured it out any time soon ;-)

Dave

Re: [unix] Large file support (was: StandardFileStream size limit?)

John M. McIntosh

No, you have to go back a decade to when I implemented these. The 64-bit file calls are not part of the generic standard Unix headers across all platforms, so Ian chose to make them optional so the VM would compile on the least capable platform.
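
For the curious, a hypothetical fragment (not the actual VM code) of what those
optional 64-bit file calls look like under glibc's transitional LFS interface,
which is why they had to be guarded:

/* Sketch only: with -D_LARGEFILE64_SOURCE glibc exposes the explicit 64-bit
   variants (off64_t, lseek64, open64, stat64). Portable code has to guard
   them, since not every Unix of that era shipped these declarations. */
#include <unistd.h>

long long file_seek_end(int fd)
{
#ifdef _LARGEFILE64_SOURCE
    return (long long)lseek64(fd, 0, SEEK_END);  /* 64-bit file offsets */
#else
    return (long long)lseek(fd, 0, SEEK_END);    /* off_t may be only 32 bits */
#endif
}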

Sent from my iPhone

On Jun 14, 2012, at 9:59 AM, "David T. Lewis" <[hidden email]> wrote:

> Presumably yes, although I don't know if there might be some performance
> tradeoffs involved in using 64-bit file addresses. But my guess would be
> that Linux uses 32-bit file addresses in 32-bit mode to accommodate older
> programs that assume 32-bit file references. Maybe Eliot can comment?
Re: [unix] Large file support (was: StandardFileStream size limit?)

David T. Lewis
 
Ah yes, of course. Thanks John!

Dave

On Thu, Jun 14, 2012 at 10:14:39AM -0400, John M. McIntosh wrote:

> No, you have to go back a decade to when I implemented these. The 64-bit file
> calls are not part of the generic standard Unix headers across all platforms,
> so Ian chose to make them optional so the VM would compile on the least
> capable platform.