John M McIntosh asked me to give an overview of the HydraVM internals -
the events subsystem. So, here it is. :)

An event in HydraVM can be represented by any abstract structure having
only two mandatory fields:

typedef struct vmEvent {
   struct vmEvent * volatile next;
   eventFnPtr fn;
} vmEvent;

The first field, next, is used to be able to put events into a queue;
the second one is a function pointer of the form:

typedef sqInt (*eventFnPtr)(struct Interpreter*, struct vmEvent*);

Any additional event payload is implementation specific. For instance,
for channels I generate events which consist of the destination channel
plus a data buffer. So, an event holds everything in itself.

Each Interpreter instance has its own event queue. The event queue
implementation consists of 3 platform-specific functions:

void ioInitEventQueue(struct vmEventQueue * queue);
void ioEnqueueEventInto(struct vmEvent * event, struct vmEventQueue * queue);
struct vmEvent * ioDequeueEventFrom(struct vmEventQueue * queue);

The requirement for the enqueue/dequeue implementations is simple: they
should be atomic. On Windows I'm using InterlockedCompareExchange(), and
on other platforms based on the x86 architecture the equivalent is the
'lock cmpxchg' asm instruction. To support other architectures which may
not have an atomic CAS (compare-and-swap) instruction, the vmEventQueue
struct may need to keep additional information, such as a mutex handle,
to ensure that the enqueue/dequeue operations are thread safe.

If you still didn't catch how events work, here is some additional
information:

- since the event queues are thread safe, you can generate an event from
any native thread and don't need to take any additional steps to
synchronize with the VM/Interpreter instance. This is used, in
particular, in the SocketPlugin to signal semaphores when a socket
(which is served by a separate native thread) changes its state.

About the event handling function: this is the function which will be
called when the interpreter interrupts to handle events, so inside it
you have synchronized access to object memory, interpreter state etc.
and don't have to worry about concurrency. The function, together with
the event payload, is also very convenient for determining the context:
what an event means and what it will do. Instead of making dozens of
event types, enumerating them and then writing case statements, the VM
does a simple dispatch:

event->fn(interpreter, event);

so the system is flexible and can handle events of any kind, doing
anything you want.

As an example, suppose you want to write a plugin which needs to post
events to the interpreter, but with your own custom handling code and
your own event payload. Declare an event in the form:

struct myEvent
{
   struct vmEvent header;
   int myField1;
   int myField2;
   ...
};

Now, to post an event we simply can do:

struct myEvent * event = malloc(sizeof(struct myEvent));
event->header.fn = (eventFnPtr)myHandler;
event->myField1 = ...
....

Now, a handler function:

sqInt myHandler(struct Interpreter * intr, myEvent * evt)
{
   ... do something nasty here, knowing that you can't be trapped by
   concurrency issues :) ...

   free(evt); // release memory allocated for the event
}

--
Best regards,
Igor Stasenko AKA sig.
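For architectures without an atomic CAS instruction, the mutex-based
fallback mentioned above could look roughly like the sketch below. This
is only a minimal sketch, assuming pthreads, and it assumes a particular
layout of struct vmEventQueue (head, tail, lock); the real HydraVM
struct is not shown in this post and may differ:

#include <pthread.h>
#include <stddef.h>

struct vmEventQueue {
    struct vmEvent *head;      /* oldest event, dequeued first */
    struct vmEvent *tail;      /* newest event, enqueued last  */
    pthread_mutex_t lock;      /* protects head and tail       */
};

void ioInitEventQueue(struct vmEventQueue *queue)
{
    queue->head = queue->tail = NULL;
    pthread_mutex_init(&queue->lock, NULL);
}

void ioEnqueueEventInto(struct vmEvent *event, struct vmEventQueue *queue)
{
    event->next = NULL;
    pthread_mutex_lock(&queue->lock);
    if (queue->tail)
        queue->tail->next = event;   /* append after current tail */
    else
        queue->head = event;         /* queue was empty */
    queue->tail = event;
    pthread_mutex_unlock(&queue->lock);
}

struct vmEvent * ioDequeueEventFrom(struct vmEventQueue *queue)
{
    struct vmEvent *event;
    pthread_mutex_lock(&queue->lock);
    event = queue->head;
    if (event) {
        queue->head = event->next;
        if (!queue->head)
            queue->tail = NULL;      /* queue became empty */
    }
    pthread_mutex_unlock(&queue->lock);
    return event;
}

A lock-free variant would replace the mutex with a compare-and-swap loop
on the queue pointers, but the interface seen by the rest of the VM
stays the same.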
> struct myEvent * event = malloc(sizeof(struct myEvent));
> event->header.fn = (eventFnPtr)myHandler;
> event->myField1 = ...
> ....
>
oops, forgot to add the call which actually _posts_ the event:

enqueueEvent(intr, (struct vmEvent *)event);

--
Best regards,
Igor Stasenko AKA sig.
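For the consuming side of that same mechanism, the interpreter's
"interrupt for handling events" boils down to draining its queue and
doing the single dispatch described above. A hedged sketch only: the
name of the interpreter's queue field and of the helper function are
assumptions, not HydraVM API:

/* Sketch: drain one interpreter's event queue at an interrupt point.
   'eventQueue' as a field of struct Interpreter is an assumption. */
void handlePendingEvents(struct Interpreter *intr)
{
    struct vmEvent *event;
    while ((event = ioDequeueEventFrom(&intr->eventQueue)) != NULL) {
        /* single dispatch: the event carries its own handler */
        event->fn(intr, event);
    }
}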
Hello Igor,
thank you for your interesting explanations!

One remark:

On 28.02.2008 09:04, Igor Stasenko wrote:
> John M McIntosh asked me to give an overview of the HydraVM internals -
> the events subsystem. So, here it is. :)
>
> ...
>
> About the event handling function: this is the function which will be
> called when the interpreter interrupts to handle events, so inside it
> you have synchronized access to object memory, interpreter state etc.
> and don't have to worry about concurrency. The function, together with
> the event payload, is also very convenient for determining the context:
> what an event means and what it will do. Instead of making dozens of
> event types, enumerating them and then writing case statements, the VM
> does a simple dispatch:
>
> event->fn(interpreter, event);
>
> so the system is flexible and can handle events of any kind, doing
> anything you want.

This makes it possible to elegantly switch between different event
handling funcs, e.g. between a normal one and a debugging one, without
*any* debugging code in the normal one (via an event func variable in
the plugin, set by a plugin primitive and used by the event posting
code).

Regards,
Stephan

--
Stephan Rudlof

"Genius doesn't work on an assembly line basis. You can't simply say,
'Today I will be brilliant.'"
 -- Kirk, "The Ultimate Computer", stardate 4731.3
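A minimal sketch of that idea, reusing the struct myEvent and
enqueueEvent names from Igor's example. Everything here is illustrative,
not HydraVM API, and the primitive glue code that would expose the
switch to the image is omitted:

/* Two handlers with the eventFnPtr signature. */
static sqInt normalHandler(struct Interpreter *intr, struct vmEvent *evt)
{
    /* ... normal event handling ... */
    return 1;
}

static sqInt debugHandler(struct Interpreter *intr, struct vmEvent *evt)
{
    /* ... log/trace the event, then do the normal work ... */
    return normalHandler(intr, evt);
}

/* The event-posting code reads this variable; normalHandler itself
   contains no debugging branches at all. */
static eventFnPtr currentHandler = normalHandler;

/* Called from a plugin primitive to toggle debugging. */
static void setEventDebugging(int enable)
{
    currentHandler = enable ? debugHandler : normalHandler;
}

/* Posting code stamps whatever handler is currently selected. */
static void postMyEvent(struct Interpreter *intr, struct myEvent *event)
{
    event->header.fn = currentHandler;
    enqueueEvent(intr, (struct vmEvent *)event);
}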
2008/2/28 Stephan Rudlof <[hidden email]>:
> Hello Igor,
>
> thank you for your interesting explanations!
>
> One remark:
>
> On 28.02.2008 09:04, Igor Stasenko wrote:
> > ...
> > About the event handling function: this is the function which will be
> > called when the interpreter interrupts to handle events, so inside it
> > you have synchronized access to object memory, interpreter state etc.
> > and don't have to worry about concurrency. Instead of making dozens
> > of event types, enumerating them and then writing case statements,
> > the VM does a simple dispatch:
> > event->fn(interpreter, event);
> > so the system is flexible and can handle events of any kind, doing
> > anything you want.
>
> This makes it possible to elegantly switch between different event
> handling funcs, e.g. between a normal one and a debugging one, without
> *any* debugging code in the normal one (via an event func variable in
> the plugin, set by a plugin primitive and used by the event posting
> code).
>

I have already used function switching in a single place, but for a
different reason (a quirk).

Also, I forgot to mention that in some places events are allocated
statically (or at plugin initialization time), to avoid the malloc/free
pattern every time. This approach is also safer, in the sense that you
can control different aspects of event generation: for example,
preventing queue overruns (when the VM can't handle events at the speed
they are generated).

The pattern is simple: use a static or preallocated buffer, managed by
an event queue of its own:

vmEventQueue unusedEvents; // a queue of unused events, initially filled
                           // with pointers into preallocated buffer space

Generating an event:

struct myEvent * event = (struct myEvent *)ioDequeueEventFrom(&unusedEvents);
if (!event)
{
   // The unused-event queue is empty, which means the VM was unable to
   // handle events in time.
   // If this code runs in a separate native thread, we can simply wait
   // for a free one:
   do { sleep(1 ms); }
   while (0 == (event = (struct myEvent *)ioDequeueEventFrom(&unusedEvents)));

   // or, if we can't see a reason why events are still not handled, then
   // something is really wrong with the design, so raise an error:
   error("this should never happen");
}
event->header.fn = (eventFnPtr)myHandler;
event->myField1 = ...
enqueueEvent(intr, (struct vmEvent *)event);

And in the event handling function, we simply return the event to its
queue:

sqInt myHandler(struct Interpreter * intr, myEvent * evt)
{
   .....
   // put the event back into the unused-event queue
   ioEnqueueEventInto((struct vmEvent *)evt, &unusedEvents);
}

--
Best regards,
Igor Stasenko AKA sig.
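The initialization step which the snippet above assumes (filling
unusedEvents from preallocated storage at plugin startup) could look
roughly like this; the pool size and the init function name are
assumptions for illustration, not part of HydraVM:

#define MY_EVENT_POOL_SIZE 64                 /* assumed pool size, tune as needed */

static struct myEvent eventPool[MY_EVENT_POOL_SIZE]; /* static storage, no malloc */
static struct vmEventQueue unusedEvents;      /* the queue from the message above */

/* Call once from the plugin's initialization code. */
static void initMyEventPool(void)
{
    int i;
    ioInitEventQueue(&unusedEvents);
    for (i = 0; i < MY_EVENT_POOL_SIZE; i++)
        ioEnqueueEventInto((struct vmEvent *)&eventPool[i], &unusedEvents);
}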