How does Time>millisecondClockValue get a resolution of 1 millisecond?


Louis LaBrunda
 
Hi VM Guys,

How does Squeak's Time>millisecondClockValue get a resolution of 1 millisecond? It is primitive: 135. I thought it was based on an OS function that keeps a millisecond clock running from when the OS was booted, which in my case, on Windows, would be GetTickCount. The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 to 16 milliseconds.

On my machine with VA Smalltalk the resolution seems to be about 15 milliseconds, yet in Squeak it is 1 millisecond. So it would seem the Squeak VM is using something else.
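
(By the way, the granularity is easy to see by spinning on GetTickCount and printing the size of each jump; the C sketch below is purely illustrative and isn't taken from either VM.)

#include <stdio.h>
#include <windows.h>

/* Spin on GetTickCount and print the size of each observed step.
   With the default system timer this typically shows ~15-16 ms jumps. */
int main(void)
{
    DWORD last = GetTickCount();
    for (int i = 0; i < 10; ) {
        DWORD now = GetTickCount();
        if (now != last) {
            printf("step: %lu ms\n", (unsigned long)(now - last));
            last = now;
            i++;
        }
    }
    return 0;
}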

Lou
-----------------------------------------------------------
Louis LaBrunda
Keystone Software Corp.
SkypeMe callto://PhotonDemon
mailto:[hidden email] http://www.Keystone-Software.com


Re: How does Time>millisecondClockValue get a resolution of 1 millisecond?

Eliot Miranda-2
 
Hi Louis,


Well, things are different across platforms, and different between the Cog and the Interpreter VMs. But on Windows the millisecond time is derived from timeGetTime, which answers milliseconds since Windows booted. The difference between Cog and the Interpreter is that the Interpreter calls timeGetTime directly to answer the milliseconds, whereas Cog updates the time in a background thread every one or two milliseconds and answers the saved value. So you may see the effective resolution in Cog be only 2 milliseconds, not 1.
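
Very roughly, the idea looks something like the sketch below; it is only an illustration, not the actual Cog sources, and the names (cachedMilliseconds, heartbeat, millisecondClockValue, startHeartbeat) are made up:

#include <windows.h>
#include <mmsystem.h>   /* timeGetTime; link with winmm.lib */

/* Simplified illustration of the Cog approach: a heartbeat thread
   samples timeGetTime() every millisecond or two and the rest of
   the VM just reads the cached value. */
static volatile DWORD cachedMilliseconds;

static DWORD WINAPI heartbeat(LPVOID arg)
{
    (void)arg;
    for (;;) {
        cachedMilliseconds = timeGetTime();
        Sleep(1);               /* wake every ~1-2 ms */
    }
    return 0;
}

/* What a millisecond-clock primitive would answer. */
DWORD millisecondClockValue(void)
{
    return cachedMilliseconds;
}

void startHeartbeat(void)
{
    CreateThread(NULL, 0, heartbeat, NULL, 0, NULL);
}

Reading a plain cached variable keeps the primitive cheap, at the cost of the one-to-two-millisecond jitter mentioned above.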

HTH
Eliot
 

--
best,
Eliot


Re: How does Time>millisecondClockValue get a resolution of 1 millisecond?

Eliot Miranda-2
 



Oh, and importantly, the VM ups the resolution of timeGetTime() via timeBeginPeriod to 1 millisecond if possible.
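
Concretely that is something like the following sketch of the Win32 calls involved (requestOneMsResolution is just an illustrative name, not a function from the VM):

#include <windows.h>
#include <mmsystem.h>   /* timeGetDevCaps, timeBeginPeriod; link with winmm.lib */

/* Ask the multimedia timer for 1 ms resolution if the system allows it. */
void requestOneMsResolution(void)
{
    TIMECAPS caps;
    if (timeGetDevCaps(&caps, sizeof(caps)) == TIMERR_NOERROR) {
        UINT period = caps.wPeriodMin;   /* typically 1 on modern systems */
        timeBeginPeriod(period);         /* pair with timeEndPeriod(period) on shutdown */
    }
}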
 

--
best,
Eliot


Re: Re: How does Time>millisecondClockValue get a resolution of 1 millisecond?

Louis LaBrunda
 
Hi Eliot,

>Oh, and importantly, the VM ups the resolution of timeGetTime() via timeBeginPeriod to 1 millisecond if possible.

Thanks for the replies. One more question: what is used for Linux or UNIX systems? I would like to recommend to Instantiations (VA Smalltalk) that they change to functions that give a finer resolution than GetTickCount (which is what I think they, or IBM, use).

The current VA Smalltalk code in this area asks for a timer interrupt every 100 milliseconds. It then checks the delays and callbacks that have been posted to see whether any need to expire. So you can't really do a delay of less than 100 milliseconds, even though there are places in the base code that set delays of less than 100 milliseconds.
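
(To illustrate why, here is a toy C sketch of a scheduler that only checks its delays on a fixed tick; it has nothing to do with the actual VA Smalltalk code, and the 30 ms request is just an example.)

#include <stdio.h>
#include <time.h>

/* Toy model of a delay scheduler that only wakes on a fixed tick:
   with a 100 ms tick, any posted delay effectively rounds up to the
   next multiple of 100 ms, no matter how short it was requested. */
#define TICK_MS 100

int main(void)
{
    long requested_ms = 30;                  /* delay asked for */
    long elapsed_ms = 0;
    struct timespec tick = { 0, TICK_MS * 1000000L };

    while (elapsed_ms < requested_ms) {      /* only checked once per tick */
        nanosleep(&tick, NULL);
        elapsed_ms += TICK_MS;
    }
    printf("asked for %ld ms, woke after about %ld ms\n",
           requested_ms, elapsed_ms);        /* prints ~100 ms */
    return 0;
}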

I have a few programs where this is a problem. There is a method that lets me lower the interrupt period, and I have used it to set the period to 10 milliseconds, which helps my programs greatly. But it really only gets the resolution down to about 15 milliseconds.

I would like to point Instantiations to the functions that will give a 1 millisecond resolution on all the systems they support.

Many thanks.

Lou
-----------------------------------------------------------
Louis LaBrunda
Keystone Software Corp.
SkypeMe callto://PhotonDemon
mailto:[hidden email] http://www.Keystone-Software.com


Re: Re: How does Time>millisecondClockValue get a resolution of 1 millisecond?

Eliot Miranda-2
 


On Wed, Aug 8, 2012 at 3:22 PM, Louis LaBrunda <[hidden email]> wrote:

>One more question: what is used for Linux or UNIX systems?

gettimeofday. On Linux this doesn't necessarily have great resolution. On Mac OS it has better than 1 ms resolution.
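
That is, something along these lines as a sketch (millisecondClock is just an illustrative name):

#include <stdint.h>
#include <sys/time.h>

/* Millisecond clock derived from gettimeofday(), as a rough sketch
   of what a Unix VM can do; not taken from any actual VM source. */
uint64_t millisecondClock(void)
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return (uint64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

gettimeofday reports microseconds in the struct, so the limiting factor is the kernel's clock source rather than the API itself.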
 

--
best,
Eliot