Crash and burn, object table limits

Crash and burn, object table limits

Chris Uppal-3
Hi,

This one's probably for Blair.

First, and just FYI, I have now for the first time exceeded the default limits
of the OT in "normal" activity.  I.e. not benchmarking or purposely exploring
the system limits.

At least I assume that's why Dolphin just crashed with a dump (I can send it to
you if you want), as I was creating several million objects at the time.

Secondly, and more disturbingly, I followed the instructions on the Wiki for
increasing the OT size but it didn't work.  Before doing so I used a related
expression to print out the value of the SDWord at the relevant location (0x20)
and it seemed OK, 0x800000 (or however many zeros).  Then saved the image, then
reran the expression (being careful to ensure that the #position: was executed
again!), and then exited without saving.  I assume that's how to get the image
patched without overwriting the OT-size datum.
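[The header-patching step Chris describes — reading a signed 32-bit value at offset 0x20 of the image file — can be sketched in Python. The offset and field meaning are taken from this post, not from any Dolphin documentation, and the file name is made up for the demo:]

```python
import struct

def read_ot_size(image_path):
    """Read the signed 32-bit little-endian OT-size field at offset 0x20."""
    with open(image_path, "rb") as f:
        f.seek(0x20)
        return struct.unpack("<i", f.read(4))[0]

# Build a dummy "image" header to demonstrate: 0x20 bytes of padding,
# then an OT-size field holding the default of 0x800000 (8388608) entries.
with open("dolphin_demo.img", "wb") as f:
    f.write(b"\x00" * 0x20)
    f.write(struct.pack("<i", 0x800000))

print(read_ot_size("dolphin_demo.img"))  # 8388608, i.e. 0x800000
```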

But the saved image won't reload. In fact Dolphin crashes trying to open it
:-(

According to VS the Dolphin application fails with
    Unhandled exception at 0x77c3dfa3 (msvcrt.dll) in Dolphin.exe:
    0xC0000005: Access violation writing location 0x10040000.

I'll keep that image for a few days in case you want to take a look at it.

I didn't lose any data, but obviously I'm not going to try increasing the OT
size again until this is working...

    -- chris



Re: Crash and burn, object table limits

Dmitry Zamotkin-5
Hello Chris,

> I didn't lose any data, but obviously I'm not going to try increasing the
> OT size again until this is working...

I've used the same technique to increase the OT size for a long time, and
have never seen that exception. The only error I've encountered is
"Unable to reserve memory for 33554432 objects. Please reduce maximum OT
and restart." Unfortunately, this error is raised while saving the image,
and I found no way to get rid of it other than reducing the OT size.

--
Dmitry Zamotkin



Re: Crash and burn, object table limits

Blair McGlashan
"Dmitry Zamotkin" <[hidden email]> wrote in message
news:c8amnv$a5k$[hidden email]...
> Hello Chris,
>
> > I didn't lose any data, but obviously I'm not going to try increasing
> > the OT size again until this is working...
>
> I've used the same technique to increase the OT size for a long time, and
> have never seen that exception. The only error I've encountered is
> "Unable to reserve memory for 33554432 objects. Please reduce maximum OT
> and restart." Unfortunately, this error is raised while saving the image,
> and I found no way to get rid of it other than reducing the OT size.

You're running out of virtual memory address space. What is the size of your
paging file?

Regards

Blair



Re: Crash and burn, object table limits

Dmitry Zamotkin-5
Hello Blair,

> > I've used the same technique to increase the OT size for a long time,
> > and have never seen that exception. The only error I've encountered is
> > "Unable to reserve memory for 33554432 objects. Please reduce maximum
> > OT and restart." Unfortunately, this error is raised while saving the
> > image, and I found no way to get rid of it other than reducing the OT
> > size.
>
> You're running out of virtual memory address space. What is the size of
> your paging file?

My paging file size is 192 Mb.

--
Dmitry Zamotkin



Re: Crash and burn, object table limits

Chris Uppal-3
In reply to this post by Dmitry Zamotkin-5
Dmitry Zamotkin wrote:

> I've used the same technique to increase the OT size for a long time, and
> have never seen that exception.

Thank you.  With that encouragement I tried it again and it worked this time.

As far as I can see, I did the same thing as before, but it worked :-)

I've checked the broken image and there doesn't seem to be anything odd about
the way it was patched -- the correct "field" was overwritten, and none of the
other fields (at least in the first 16 DWords) are obviously corrupt.  I even
tried putting the OT size field back to the default, but that image still
crashes Dolphin as it loads.  Oh well....

    -- chris



Re: Crash and burn, object table limits

Blair McGlashan
In reply to this post by Chris Uppal-3
"Chris Uppal" <[hidden email]> wrote in message
news:[hidden email]...
> Hi,
>
> This one's probably for Blair.
>
> First, and just FYI, I have now for the first time exceeded the default
> limits of the OT in "normal" activity.  I.e. not benchmarking or purposely
> exploring the system limits.
>
> At least I assume that's why Dolphin just crashed with a dump (I can send
> it to you if you want), as I was creating several million objects at the
> time.
>
> Secondly, and more disturbingly, I followed the instructions on the Wiki
> for increasing the OT size but it didn't work.  Before doing so I used a
> related expression to print out the value of the SDWord at the relevant
> location (0x20) and it seemed OK, 0x800000 (or however many zeros).  Then
> saved the image, then reran the expression (being careful to ensure that
> the #position: was executed again!), and then exited without saving.  I
> assume that's how to get the image patched without overwriting the
> OT-size datum.
>
> But the saved image won't reload.  In fact Dolphin crashes trying to
> open it :-(

Well, not knowing what you attempted to set it to, I would guess that it
failed because of the sequence of operations you used. A clue is that the
expression on the Wiki uses a FileStream, which is a buffered stream.
Ringing any bells yet? If not, read on - let's say you did this:
1) fs := FileStream readWrite: 'dolphin.img'. fs beBinary. (or the
expression from the Wiki)
2) fs position: 16r20.
3) fs nextSDWORD.
4) Save the image.
5) fs position: 16r20.
6) fs nextSDWORDPut: <some larger, but reasonable, value>
7) fs close (or flush)

Then what would have happened is that on the first read the FileStream would
have read the first 8Kb of the file into its buffer. The file would then
have been overwritten with the newly saved image (note that important
details on that first page will change each time the image is saved). Now
you update the buffer, writing the new header value. Finally you close the
FileStream and the modified page gets written back to the file. I'm quite
surprised it's possible to do this without getting some sort of sharing
violation (it must be something to do with the default modes used), but
anyway the net result would be an invalid image file, because the 8Kb of
the original image file in the buffer would not be valid for the newly
saved image, hence it gets corrupted.
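[The failure mode Blair describes — a buffered stream whose cached page goes stale when the file is rewritten underneath it, then clobbers the new contents on close — can be reproduced with a small Python sketch. The `PageBufferedStream` class below is a toy model of the page-caching behaviour described in the post, not Dolphin's actual FileStream; the file name, page size, and patched value are illustrative:]

```python
PAGE = 8192  # an 8Kb page, per the buffering behaviour described above

class PageBufferedStream:
    """Toy model of a page-buffered stream: opening it caches the file's
    first page in memory; writes modify only that in-memory page; closing
    writes the whole page back to the file."""
    def __init__(self, path):
        self.path = path
        self.pos = 0
        with open(path, "rb") as f:
            self.page = bytearray(f.read(PAGE))  # cache the first page

    def position(self, n):
        self.pos = n

    def next_put(self, data):
        self.page[self.pos:self.pos + len(data)] = data
        self.pos += len(data)

    def close(self):  # flush the (possibly stale) page back to the file
        with open(self.path, "r+b") as f:
            f.write(self.page)

# 1) Open the image; the stream caches the ORIGINAL first page.
with open("image.img", "wb") as f:
    f.write(b"OLD" + bytes(PAGE - 3))
fs = PageBufferedStream("image.img")

# 2) The image is re-saved underneath the still-open stream.
with open("image.img", "wb") as f:
    f.write(b"NEW" + bytes(PAGE - 3))

# 3) Patch one header field and close: the stale cached page
#    overwrites the freshly saved header.
fs.position(0x20)
fs.next_put(b"\x00\x00\x00\x01")
fs.close()

with open("image.img", "rb") as f:
    print(f.read(3))  # b'OLD' - the new image's first page was clobbered
```

Re-opening the stream after the save (so it caches the new page) avoids the corruption, which is why the order of operations matters.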

It is also possible to choose a value that is invalid (too large or too
small), so I would suggest doubling the original value (approx. 8 million)
as a starting point. If you make it too large, then it is possible to run
out of virtual memory space when allocating the object table at startup,
or when an image is saved.

Regards

Blair



Re: Crash and burn, object table limits

Blair McGlashan
In reply to this post by Dmitry Zamotkin-5
"Dmitry Zamotkin" <[hidden email]> wrote in message
news:c8c8g7$pnn$[hidden email]...
> Hello Blair,
>
> > > I've used the same technique to increase the OT size for a long time,
> > > and have never seen that exception. The only error I've encountered
> > > is "Unable to reserve memory for 33554432 objects. Please reduce
> > > maximum OT and restart." Unfortunately, this error is raised while
> > > saving the image, and I found no way to get rid of it other than
> > > reducing the OT size.
> >
> > You're running out of virtual memory address space. What is the size of
> > your paging file?
>
> My paging file size is 192 Mb.
>

That seems very small. Although in theory just reserving the memory for an
"object table" that can hold 33554432 objects (4x the default maximum size)
will not require an additional 384Mb of storage (which it would do if
committed, the object headers being 12 bytes each), it does appear to be a
limitation in practice. Or perhaps you have really allocated a very large
number of objects and you are simply running out of virtual memory - when
the VM saves an image it requires double the space the OT normally
occupies, so although you might be able to load the image you will run out
of memory when trying to save it. Increasing your page file size will
probably solve the problem. Certainly on a 512Mb laptop with a 1024Mb
paging file I can find no problems with a maximum number of objects of
33554432.
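[The arithmetic behind these figures checks out; a quick sketch, taking the 12-bytes-per-entry and double-the-OT-on-save figures from Blair's description:]

```python
DEFAULT_MAX_OBJECTS = 0x800000      # 8388608, the default OT limit
ENTRY_BYTES = 12                    # one object header / OT entry

objects = 4 * DEFAULT_MAX_OBJECTS   # 33554432, the setting Dmitry used
reserved = objects * ENTRY_BYTES    # storage needed if fully committed
print(objects, reserved // (1024 * 1024))   # 33554432 objects -> 384 Mb

# Saving an image needs roughly double the OT's normal space, so a small
# paging file (e.g. 192 Mb) can load an image it cannot save.
print(2 * reserved // (1024 * 1024))        # 768 Mb during an image save
```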

BTW: You can use Chris Uppal's excellent Space Breakdown tool to get a
reasonably accurate approximation of the memory usage of the objects in
your image.

Regards

Blair



Re: Crash and burn, object table limits

Chris Uppal-3
In reply to this post by Blair McGlashan
Blair McGlashan wrote:

> Finally you close the filestream
> and the modified page gets written back to the file.

<fx>Slaps forehead</fx>

Thanks!

(And now you mention it, I can remember a little voice saying "you should
re-open that file..." which I ignored.  <fx>Slaps forehead again</fx>)

    -- chris