µEvLoop

A fast and lightweight event loop aimed at embedded platforms in C99.
µEvLoop is a microframework built around a lightweight event loop. It provides the programmer with the building blocks to put together async, interrupt-based systems.

µEvLoop is loosely inspired by the JavaScript event loop and aims to provide a similar programming model. Many similar concepts, such as events and closures, are included. It is aimed at environments with very restricted resources, but can run on all kinds of platforms.
µEvLoop is in its early days and the API may change at any moment. Although it's well tested, use it with caution. Feedback is most welcome.
The API documentation is automatically generated by Doxygen. Find it here.
Tests are written using a simple set of macros. To run them, execute `make test`.
Please note that the shipped makefile is meant to be run on modern Linux systems. Right now, it makes use of bash commands and utilities and expects `libSegFault.so` to be in a hardcoded path. If this doesn't fit your needs, edit it as necessary.
To generate code coverage reports, run `make coverage`. This requires `gcov`, `lcov` and `genhtml` to be on your `PATH`. After running, the results can be found at `uevloop/coverage/index.html`.
These data structures are used across the whole framework. They can also be used by the programmer in userspace as required.
All core data structures are unsafe. Be sure to wrap access to them in critical sections if you mean to share them amongst contexts asynchronous to each other.
A closure is an object that binds a function to some context. When invoked with arbitrary parameters, the bound function is called with both context and parameters available. With closures, some very powerful programming patterns, such as functional composition, become much easier to implement.
Closures are very light and it is often useful to pass them around by value.
Closures take the context and parameters as void pointers and return a void pointer as well. This makes it possible to pass complex objects to them and return complex objects from them.
Often, however, the programmer may find the values passed/returned are small and simple (i.e. no larger than a pointer). If so, it is absolutely valid to cast from/to a `uintptr_t` or any other data type known to be at most the size of a pointer. This avoids creating unnecessary object pools or allocating dynamic memory.
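As a sketch of the idea, here is a hypothetical minimal closure layout (illustrative only, not µEvLoop's actual `uel_closure_t` definition) showing both context binding and the `uintptr_t` casting trick:

```c
#include <stdint.h>

/* Illustrative sketch only: a minimal closure shape in the spirit of
 * uel_closure_t. Names and layout here are hypothetical. */
typedef void *(*closure_fn)(void *context, void *params);

typedef struct {
    closure_fn function; /* the bound function */
    void *context;       /* the bound context, supplied at creation */
} closure_t;

static void *closure_invoke(closure_t *closure, void *params){
    return closure->function(closure->context, params);
}

/* Adds the bound context to the parameter; both are small integers
 * smuggled through the void pointers via uintptr_t casts. */
static void *add_bound(void *context, void *params){
    uintptr_t base = (uintptr_t)context;
    uintptr_t increment = (uintptr_t)params;
    return (void *)(base + increment);
}
```

Invoking `closure_invoke(&add_ten, (void *)5)` on a closure bound to the context `10` yields `15` without any heap allocation.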
Circular queues are fast FIFO (first-in-first-out) structures that rely on a pair of indices to maintain state. As the indices are moved forward on push/pop operations, the data itself is not moved at all.
The sizes of µEvLoop's circular queues are required to be powers of two, so fast bitwise modulo arithmetic can be used. As such, on queue creation, the size must be provided in its log2 form.
FORGETTING TO SUPPLY THE QUEUE'S SIZE IN LOG2 FORM MAY CAUSE THE STATIC ALLOCATION OF GIANT MEMORY POOLS
Circular queues store void pointers. As is the case with closures, this makes it possible to store complex objects within the queue, but often typecasting to a smaller value type is more useful.
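To illustrate why the log2 size matters, here is a self-contained sketch of power-of-two circular queue indexing; the layout is illustrative and not necessarily µEvLoop's actual internal one:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of a power-of-two circular queue. */
typedef struct {
    void **buffer;
    uintptr_t size;  /* always a power of two: 1 << size_log2n */
    uintptr_t mask;  /* size - 1; wraps an index with a single AND */
    uintptr_t read_head;
    uintptr_t write_head;
} cqueue_t;

static void cqueue_init(cqueue_t *queue, void **buffer, uintptr_t size_log2n){
    queue->buffer = buffer;
    queue->size = (uintptr_t)1 << size_log2n; /* e.g. log2 form 5 -> 32 slots */
    queue->mask = queue->size - 1;
    queue->read_head = 0;
    queue->write_head = 0;
}

static int cqueue_push(cqueue_t *queue, void *element){
    if(queue->write_head - queue->read_head == queue->size) return 0; /* full */
    queue->buffer[queue->write_head++ & queue->mask] = element;
    return 1;
}

static void *cqueue_pop(cqueue_t *queue){
    if(queue->read_head == queue->write_head) return NULL; /* empty */
    return queue->buffer[queue->read_head++ & queue->mask];
}
```

With `size_log2n = 2` the queue holds `1 << 2 == 4` slots and `index & mask` replaces the slower `index % size`; only the two indices move, never the data.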
On embedded systems, hardware resources such as processing power or RAM memory are often very limited. As a consequence, dynamic memory management can become very expensive in both aspects.
Object pools are statically allocated arrays of objects whose addresses are stored in a queue. Whenever the programmer needs an object at runtime, instead of dynamically allocating memory, it is possible to simply pop an object pointer from the pool and use it right away.
Because object pools are statically allocated and backed by circular queues, they are very manageable and fast to operate.
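The idea can be sketched as follows. This is a conceptual model only: the real µEvLoop pools are backed by the circular queues described above, while a simple free-pointer stack is used here to keep the example short.

```c
#include <stddef.h>

/* Illustrative sketch of a statically allocated object pool. */
#define POOL_SIZE 8

typedef struct { int x; int y; } point_t;

static point_t pool_buffer[POOL_SIZE]; /* statically allocated objects */
static point_t *free_list[POOL_SIZE];  /* addresses of available objects */
static size_t free_count = 0;

static void pool_init(void){
    for(size_t i = 0; i < POOL_SIZE; i++)
        free_list[free_count++] = &pool_buffer[i];
}

/* Pops an object address instead of calling malloc() */
static point_t *pool_acquire(void){
    return free_count ? free_list[--free_count] : NULL;
}

/* Returns the address to the pool instead of calling free() */
static void pool_release(point_t *object){
    free_list[free_count++] = object;
}
```

Acquisition and release are O(1) pointer moves, which is what makes pools so predictable on constrained targets.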
µEvLoop ships a simple linked list implementation that holds void pointers, as usual.
Containers are objects that encapsulate declaration, initialisation and manipulation of core data structures used by the framework.
They also encapsulate manipulation of these data structures inside critical sections, ensuring safe access to shared resources across the system.
The `syspools` component is a container for the system's internal object pools. It contains pools for events and linked list nodes used by the core components.
The system pools component is meant to be internally operated only. The only responsibility of the programmer is to allocate, initialise and provide it to other core components.
To configure the size of each pool created, edit `include/uevloop/config.h`.
The `sysqueues` component contains the necessary queues for sharing data amongst the core components. It holds queues for events in differing statuses.

As is the case with system pools, the `sysqueues` component should not be directly operated by the programmer, except for declaration and initialisation.

Configure the size of each queue created in `include/uevloop/config.h`.
The `application` component is a convenient top-level container for all the internals of a µEvLoop'd app. It is not necessary at all, but it absorbs much of the boilerplate in a typical application.

It also proxies functions to the `event loop` and `scheduler` components, serving as a single entry point for system operation.
The following code is a realistic minimal setup of the framework.

```c
#include <uevloop/system/containers/application.h>
#include <stdint.h>

static volatile uint32_t counter = 0;
static uel_application_t my_app;

// 1 kHz timer
void my_timer_isr(){
    my_timer_isr_flag = 0;
    uel_app_update_timer(&my_app, ++counter);
}

int main(int argc, char *argv[]){
    uel_app_init(&my_app);

    // From here, the programmer can:
    // - Schedule timers with uel_app_run_later or uel_app_run_at_intervals
    // - Enqueue closures with uel_app_enqueue_closure
    // - Set up observers with uel_app_observe
    // - Listen for signals set at other places

    while(1){
        uel_app_tick(&my_app);
    }

    return 0;
}
```
The `scheduler` component accepts a closure and scheduling information and turns them into a timer event. This timer is then inserted into a timer list, which is kept sorted by each timer's due time.

The `scheduler` must be fed regularly to work. It needs both updates on the running time and a stimulus to process enqueued timers. Ideally, a hardware timer will consistently increment a counter and feed it to the scheduler from an ISR, while in the main loop the scheduler is instructed to process its queue.
When the function `uel_sch_manage_timers` is called, two things happen:

1. the `schedule_queue` is flushed and every timer in it is scheduled accordingly;
2. timers that are due are moved to the `event_queue`, where they will be further collected and processed.

Events are messages passed amongst the system internals that coordinate what tasks are to be run, when and in which order. Usually, the programmer doesn't have to interact directly with events, timer events and observers being the only exceptions. The functions `uel_sch_run_later` and `uel_sch_run_at_intervals` return a `uel_event_t *`. With this handle, it is possible to pause and resume or even completely cancel a timer event.
```c
#include <stddef.h>

uel_event_t *timer = uel_sch_run_at_intervals(&scheduler, 100, false, print_one, NULL);

// The event will be put on a hold queue in the scheduler
uel_event_timer_pause(timer);

// The event will be rescheduled on the scheduler
uel_event_timer_resume(timer);

// The event will be ignored by the scheduler and destroyed at the event loop
uel_event_timer_cancel(timer);
```
The event loop is meant to behave as a run-to-completion task scheduler. Its `uel_evloop_run` function should be called as often as possible so as to minimise execution latency. Each execution of `uel_evloop_run` is called a runloop.
The only way the programmer interacts with it, besides creation/initialisation, is by enqueuing hand-tailored closures directly, but other system components operate on the event loop behind the scenes.
Any closure can be enqueued multiple times.
WARNING! `uel_evloop_run` is the single most important function within µEvLoop. Almost every other core component depends on the event loop and if this function is not called, the loop won't work at all. Don't ever let it starve.
The event loop can be instructed to observe some arbitrary volatile value and react to changes in it.
Because observers are completely passive, they are ideal for triggering side-effects from ISRs without any latency. However, each observer set does incur extra latency during runloops, as the observed value must be continuously polled.
```c
static volatile uintptr_t adc_reading = 0;

void my_adc_isr(){
    adc_reading = my_adc_buffer;
    my_adc_isr_flag = 0;
}

static void *process_adc_reading(void *context, void *params){
    uintptr_t value = (uintptr_t)params;
    // Do something with value

    return NULL;
}
uel_closure_t processor = uel_closure_create(process_adc_reading, NULL);

// This ensures each runloop the adc_reading variable is polled and, in case
// of changes to it, the processor closure is called with its new value as
// parameter.
uel_event_t *observer = uel_evloop_observe(&loop, &adc_reading, &processor);

// When an observer isn't needed anymore, it can be disposed of to release any
// used system resources.
// DON'T use an observer after it has been cancelled.
uel_event_observer_cancel(observer);
```
Please note the listener function will not be executed immediately, despite what this last snippet may lead one to believe. Internally, each closure is sent to the event loop and only when it runs will the closures be invoked.
You can also unlisten for events. This will prevent the listener returned by a `uel_signal_listen()` or `uel_signal_listen_once()` operation from having its closure invoked when the event loop performs the next runloop. Additionally, said listener will be removed from the signal vector at that opportunity.
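The listen / listen-once / unlisten semantics can be sketched as follows. This is a conceptual model only: the real implementation relays the closures through the event loop (they run on the next runloop, as noted above) and stores listeners in pooled linked lists, whereas here everything is inlined for brevity.

```c
#include <stddef.h>
#include <stdbool.h>

/* Illustrative signal/listener sketch; names and layout are hypothetical. */
typedef void *(*listener_fn)(void *context, void *params);

typedef struct {
    listener_fn function;
    void *context;
    bool once;      /* listen_once semantics: drop after first invocation */
    bool cancelled; /* set by "unlisten"; skipped from then on */
} listener_t;

#define MAX_LISTENERS 4
typedef struct {
    listener_t listeners[MAX_LISTENERS];
    size_t count;
} signal_t;

static listener_t *signal_listen(signal_t *sig, listener_fn fn, void *ctx, bool once){
    listener_t *l = &sig->listeners[sig->count++];
    l->function = fn; l->context = ctx; l->once = once; l->cancelled = false;
    return l; /* handle the caller may later use to unlisten */
}

/* In µEvLoop the closures would be enqueued into the event loop;
 * here they are invoked inline to keep the sketch short. */
static void signal_emit(signal_t *sig, void *params){
    for(size_t i = 0; i < sig->count; i++){
        listener_t *l = &sig->listeners[i];
        if(l->cancelled) continue;
        l->function(l->context, params);
        if(l->once) l->cancelled = true;
    }
}

/* Counts invocations through the bound context */
static void *count_hits(void *context, void *params){
    (void)params;
    (*(int *)context)++;
    return NULL;
}
```

A persistent listener fires on every emission; a once-listener fires on the first emission only and is then pruned, mirroring the behaviour described for `uel_signal_listen_once()`.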
Promises are data structures that bind an asynchronous operation to the possible execution paths that derive from its result. They are heavily inspired by Javascript promises.
Promises allow for very clean asynchronous code and exceptional error handling.
All promises must be created at a store, to which they will return once destroyed. A promise store encapsulates object pools for promises and segments, the two composing pieces of promise operation.

Promise stores need access to two object pools, one for promises and one for segments.
As mentioned above, promises and segments are the two building blocks for composing asynchronous chains of events. Promises represent the asynchronous operation per se and segments are the synchronous processing that occurs when a promise settles.
Settling a promise means transitioning it into either resolved or rejected states which respectively indicate success or error of the asynchronous operation, optionally assigning a meaningful value to the promise.
There are two necessary pieces for creating a promise: a store and a constructor closure that starts the asynchronous operation.
When the promise is created, `start_some_async` is invoked immediately, taking the promise pointer as parameter.
On creation, promises are in the pending state. This means its asynchronous operation has not been completed yet and the promise does not hold any meaningful value.
Once the operation is completed (and this can also be synchronously done from inside the constructor closure), there are two functions for signalling either success or failure of the asynchronous operation:
Once a promise is settled, it holds a value that can be accessed via `promise->value`.
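A minimal model of the three states and of settling might look like this. It is illustrative only: the real `uel_promise_t` also carries its segment chain and a reference to its store.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of promise states and settling. */
typedef enum {
    PROMISE_PENDING,  /* async operation not completed; no meaningful value */
    PROMISE_RESOLVED, /* operation succeeded */
    PROMISE_REJECTED  /* operation failed */
} promise_state_t;

typedef struct {
    promise_state_t state;
    void *value; /* meaningful only once settled */
} promise_t;

static void promise_init(promise_t *p){
    p->state = PROMISE_PENDING;
    p->value = NULL;
}

static void promise_resolve(promise_t *p, void *value){
    p->state = PROMISE_RESOLVED;
    p->value = value; /* in µEvLoop, attached segments would now run */
}

static void promise_reject(promise_t *p, void *error){
    p->state = PROMISE_REJECTED;
    p->value = error;
}
```

Settling is simply the transition out of `PROMISE_PENDING` plus the optional assignment of a value; everything else (segment processing, resettling) builds on top of this.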
Segments represent the synchronous phase that follows a promise settling. They contain two closures, one for handling resolved promises and one for handling rejected promises. Either one is chosen at runtime, depending on the settled state of the promise, and is invoked with the promise as parameter.
Depending on the promise state, attaching segments have different effects. When a promise is pending, attached segments just get enqueued for execution once the promise is settled. Should the promise be already settled, attached segments get processed immediately instead.
The `uel_promise_after()` function takes two closures as parameters. This is useful for splitting the execution path in two mutually exclusive routes depending on the settled state of the promise.
There are three other functions for enqueuing segments. They can be used for attaching segments that only produce effects on specific settled states or attaching the same closure to both states:
Any number of segments can be attached to some promise. They will be either processed immediately, in case the promise is already settled, or enqueued for processing upon settling in the future. Regardless, attached segments are always processed in registration order.
Chaining segments is useful because segments have the ability to commute between execution paths through promise resettling. To resettle a promise means changing its state and value.
This builds a segment chain attached to promise `p1`. Each segment added describes one synchronous step to be executed for each of the two settled states.
Segments are processed sequentially, from first to last, starting with the closure relative to the state the promise was settled as. The following table illustrates this chain:
| State    | Segment 1    | Segment 2      | Segment 3 |
|----------|--------------|----------------|-----------|
| resolved | `store_char` | `nop`          | `done`    |
| rejected | `nop`        | `report_error` | `done`    |
The outcome of this chain is determined upon settling. For example, given the following resolution:
The first closure invoked is `store_char`, in segment 1. In the closure function, the test condition `promise->value <= 255` is true, so the closure proceeds to store its value in the `c1` variable.

As the promise remains resolved, it advances to segment 2, where it finds a `nop` (no-operation). This is due to the segment being attached via `uel_promise_catch`.

The promise then advances to segment 3, where it finds the `done` closure. The process then ends and the promise retains its state and value (`UEL_PROMISE_RESOLVED` and `(void *)10` in this case). By then, `c1` holds `(char)10` and `operation done with state 'resolved'` is printed.
If instead the promise was resolved as:
Then the test condition `promise->value <= 255` would have failed. The `store_char` closure would then skip storing the value and would rather resettle the promise as rejected, with some error message as value. This effectively commutes the execution path to the rejected branch.

Once `store_char` returns, as the promise is now rejected, the `report_error` closure is invoked and `promise was rejected with error 'Value too big'` is printed. The `done` closure is then invoked and prints `operation done with state 'rejected'`.
Similarly, if instead the promise had been rejected as:
Segment 1 would be ignored, `report_error` would be invoked and print `promise was rejected with error 'Asynchronous operation failed'` and, at segment 3, `done` would be invoked and print `operation done with state 'rejected'`.
Resettling can also be used for recovering from errors if it commutes back to the `resolved` state. This constitutes an excellent exception system that allows errors raised in loose asynchronous operations to be rescued consistently. Even exclusively synchronous processes can benefit from this error rescuing system.
As a last note, segments can also resettle a promise as `UEL_PROMISE_PENDING`. This halts the synchronous stage immediately, leaving any unprocessed segments in place. This phase can be restarted by either resolving or rejecting the promise again.
Promises can be nested into each other, allowing for complex asynchronous operations to compose a single segment chain. This provides superb flow control for related asynchronous operations that would otherwise produce a lot of spaghetti code.
Whenever a promise segment returns any non-null value, it is cast to a promise pointer. The original promise then transitions back to the `pending` state and waits for the returned promise to settle. Once this new promise is settled, the original promise is resumed with whatever state and value the new promise was settled as.
For instance, suppose the programmer is programming an MCU that possesses an ADC with an arbitrarily long buffer and a DMA channel. The program must start the ADC, which will sample `N` times and store the samples in its buffer. After `N` samples have been taken, the DMA channel must be instructed to move them out of the ADC buffer into some memory-mapped buffer, where they will be processed.
This could be easily accomplished with signals or callbacks, but would eventually lead to confusing and discontinuous code. With nested promises, however, it is easy to describe the whole process in one single chain.
Suppose this is the implementation for the DMA and ADC modules:
Implementing the project requirements is this simple:
Note that, in the above example, promises are resolved synchronously inside the ISRs. This may not be desirable for performance reasons, but it can be easily improved by enqueueing into the event loop a closure that resolves the nested promises.
To destroy a promise, call `uel_promise_destroy()`. This will release all segments and then the promise itself. Settling a promise after it has been destroyed is undefined behaviour.
Because settling and destroying promises are so frequent, there are helper functions that emit closures that automate this work:
Modules are independent units of behaviour, self-contained and self-allocated, with clear lifecycle hooks, interface and dependencies. They enforce separation of concerns and isolation by making clear how your code interacts with the rest of the application.
Modules are absolutely optional and very thin on the library side. They are basically a convention of how to write code in a fashion that works well with µEvLoop.
Modules can serve a variety of purposes:
Modules are operated by the `application` component. It is responsible for loading, configuring and launching each module.
There are two methods for injecting a registered module: parametrised injection and ad-hoc injection. Each is adequate for a different situation:
Parametrised dependencies are dependencies that are supplied during module construction.
Given the following module header:
The application loading procedure would be:
```c
// File: main.c

uel_module_t *modules[MY_APP_MODULE_COUNT];
modules[MY_MODULE] = my_module(&my_app);

// Injects parametrised dependencies name and other_module
modules[MY_GREETER] = my_greeter(&my_app, "King Kong", (my_module_t *)modules[MY_MODULE]);

uel_app_load(&my_app, modules, MY_APP_MODULE_COUNT);
```
While ad-hoc injections seem easier, they make it more difficult to know which modules a particular piece of code depends on. Also, because they require the modules to already be loaded into the registry, they cannot be used during the configuration phase.
Iterators are abstractions on arbitrary collections of items. They provide a uniform interface for yielding each element in the iterated collection, disregarding implementation details of such collection.
There are two iterator specialisations shipped with µEvLoop:
Iterators live entirely on an embedded function pointer, `next`. It is responsible for yielding a pointer to each element in the collection.
```c
// cast to generic iterator
uel_iterator_t *iterator = (uel_iterator_t *)&array_iterator;

// when supplied with NULL as parameter, next yields
// the first element in the collection
int *current = NULL;
while(true){
    current = (int *)iterator->next(iterator, (void *)current);
    if(current != NULL){
        // do something
    }else break; // when there are no more elements, next yields NULL
}
```
Besides manually operating an iterator, there are several iteration helpers that automate work.
```c
#include <uevloop/utils/closure.h>

void *accumulate(void *context, void *params){
    uintptr_t *acc = (uintptr_t *)context;
    uintptr_t num = *(uintptr_t *)params;
    *acc += num;
    return (void *)true; // required; returning false is equivalent to a break
}

uintptr_t acc = 0;
uel_closure_t accumulate_into_acc = uel_closure_create(accumulate, (void *)&acc);

uel_iterator_foreach(iterator, &accumulate_into_acc);
// if iterator is the same array iterator defined previously,
// acc == 15
```
Note that `base` is required to be the first member in your custom iterator structure. That way, a pointer to your iterator can be safely cast to `uel_iterator_t *` and back.
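The pattern can be sketched as follows; `iterator_t` here is a stand-in for `uel_iterator_t`, whose actual layout may differ:

```c
#include <stddef.h>

/* Generic iterator base: a single embedded next function pointer. */
typedef struct iterator iterator_t;
struct iterator {
    void *(*next)(iterator_t *iterator, void *last);
};

typedef struct {
    iterator_t base; /* MUST be first: enables casts to iterator_t * and back */
    int *array;
    size_t count;
} array_iterator_t;

static void *array_iterator_next(iterator_t *iterator, void *last){
    array_iterator_t *self = (array_iterator_t *)iterator; /* safe: base is first */
    int *current = (int *)last;
    if(current == NULL) return self->count ? self->array : NULL; /* first element */
    current++;
    return current < self->array + self->count ? current : NULL; /* NULL = done */
}
```

Because `base` is the first member, the two structures share an address and the cast in `array_iterator_next` is well-defined.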
Conditionals are functional switches in the form of a tuple of closures `<test, if_true, if_false>`.
When applied to some input, the input is submitted to the `test` closure. If it returns `true`, the `if_true` closure is invoked; otherwise, `if_false` is invoked. All closures take the same input as argument.
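A minimal sketch of the idea, with plain function pointers standing in for `uel_closure_t` (the names and layout are illustrative, not the library's):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative conditional: a <test, if_true, if_false> tuple. */
typedef void *(*unary_fn)(void *params);

typedef struct {
    bool (*test)(void *params);
    unary_fn if_true;
    unary_fn if_false;
} conditional_t;

static void *conditional_apply(conditional_t *conditional, void *params){
    return conditional->test(params)
        ? conditional->if_true(params)   /* taken when test yields true */
        : conditional->if_false(params); /* taken otherwise */
}

/* Example branches: one Collatz step on a small integer */
static bool is_even(void *params){ return ((uintptr_t)params % 2) == 0; }
static void *halve(void *params){ return (void *)((uintptr_t)params / 2); }
static void *triple_plus_one(void *params){ return (void *)((uintptr_t)params * 3 + 1); }
```

Applying the conditional to `10` takes the even branch and yields `5`; applying it to `5` takes the odd branch and yields `16`.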
Pipelines are sequences of closures whose outputs are connected to the next closure's input.
When applied to some input, this input is passed along each closure, being transformed along the way. Applying a pipeline returns the value returned by the last closure in it.
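Again with plain function pointers standing in for closures, a pipeline can be sketched as:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative pipeline: each stage's output feeds the next one's input. */
typedef void *(*stage_fn)(void *params);

typedef struct {
    stage_fn *stages;
    size_t count;
} pipeline_t;

static void *pipeline_apply(pipeline_t *pipeline, void *params){
    for(size_t i = 0; i < pipeline->count; i++)
        params = pipeline->stages[i](params); /* output becomes next input */
    return params; /* value returned by the last stage */
}

/* Example stages operating on small integers via uintptr_t casts */
static void *add_three(void *params){ return (void *)((uintptr_t)params + 3); }
static void *square(void *params){ return (void *)((uintptr_t)params * (uintptr_t)params); }
```

Feeding `2` into an `{add_three, square}` pipeline transforms it to `5` and then to `25`, which is the pipeline's return value.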
Iterators, conditionals and pipelines are objects associated with synchronous operations.
To make them more suitable to asynchronous contexts, there are numerous helpers that abstract some of their operational details and export them as portable closures.
Please read the docs to find out more about them.
Automatic pools are wrapper objects that enhance the abilities of object pools. They allow constructors and destructors to be attached and, instead of yielding bare pointers, yield `uel_autoptr_t` automatic pointers.
Automatic pointers are objects that associate some object to the pool it came from, making it trivial to destroy the object regardless of access to its source. An automatic pointer issued for an object of type `T` can be safely cast to `T**`. Casting to any other pointer type is undefined behaviour.
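Why the `T**` cast works can be sketched with a hypothetical layout. The real `uel_autoptr_t` may differ in detail, but the object pointer coming first is what the documented cast relies on:

```c
#include <stddef.h>

/* Illustrative automatic pointer layout. */
typedef struct {
    void *object; /* must be first: a T-typed autoptr then casts to T** */
    void *source; /* the pool to return the object to (unused in this sketch) */
} autoptr_t;
```

Since `object` is the first member, the autoptr and its `object` field share an address, so dereferencing the cast pointer once reaches the underlying object.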
Automatic pools are created and initialised in very similar ways to object pools:
After an automatic pool has been created, it can allocate and deallocate objects, just like object pools.
It is possible to attach a constructor and a destructor to some automatic pool. These are closures that will be invoked upon object allocation and deallocation, taking a bare pointer to the object being operated on.
µEvLoop is meant to run baremetal, primarily in simple single-core MCUs. That said, nothing stops it from being employed as a side library in RTOSes or in full-fledged x86_64 multi-threaded desktop applications.
Communication between asynchronous contexts, such as ISRs and side threads, is done through shared data structures defined inside the library's core components. As whenever dealing with non-atomic shared memory, accesses to these structures must be synchronised to avoid memory corruption.
µEvLoop does not try to implement a universal locking scheme fit for any device. Instead, some generic critical section definition is provided.
By default, critical sections in µEvLoop are a no-op. They are provided as a set of macros that can be overridden by the programmer to implement platform-specific behaviour.

For instance, while running baremetal it may only be necessary to disable interrupts to make sure accesses are synchronised. In an RTOS multi-threaded environment, on the other hand, it may be necessary to use a mutex.
There are three macros that define critical section implementation:
UEL_CRITICAL_SECTION_OBJ_TYPE
If needed, a global critical section object can be declared. If this macro is defined, this object will be available to any critical section under the symbol `uel_critical_section`.
The `UEL_CRITICAL_SECTION_OBJ_TYPE` macro defines the type of the object. It is the programmer's responsibility to declare, globally allocate and initialise it.
UEL_CRITICAL_ENTER
Enters a new critical section. From this point until the critical section exits, no other thread or ISR may attempt to access the system's shared memory.
UEL_CRITICAL_EXIT
Exits the current critical section. After this is called, any shared memory is allowed to be claimed by some party.
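As an illustration, a hypothetical override for a POSIX host might map these macros onto a global mutex; on a baremetal MCU the same macros would typically disable and restore interrupts instead. The macro names follow the three described above, but the mutex-based bodies are this example's assumption, not shipped library code.

```c
#include <pthread.h>

/* Hypothetical POSIX override of the critical section macros */
#define UEL_CRITICAL_SECTION_OBJ_TYPE pthread_mutex_t
#define UEL_CRITICAL_ENTER pthread_mutex_lock(&uel_critical_section);
#define UEL_CRITICAL_EXIT  pthread_mutex_unlock(&uel_critical_section);

/* The programmer must declare, globally allocate and initialise the object: */
UEL_CRITICAL_SECTION_OBJ_TYPE uel_critical_section = PTHREAD_MUTEX_INITIALIZER;

/* Example: guarding a shared counter */
static long shared_counter = 0;

static void increment_shared(void){
    UEL_CRITICAL_ENTER
    shared_counter++;
    UEL_CRITICAL_EXIT
}
```

Any code running between `UEL_CRITICAL_ENTER` and `UEL_CRITICAL_EXIT` is then serialised against every other critical section in the system.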
I often work with small MCUs (8-16 bits) that simply don't have the necessary power to run an RTOS or any fancy scheduling solution. Right now I am working on a new commercial project and felt the need to build something of my own. µEvLoop is my bet on how a modern, interrupt-driven and predictable embedded application should be. I am also looking for a new job and needed to improve my portfolio.