Documentation reorganization. Moved the description from kernel.dox into the source code for ease of editing and reference.

git-svn-id: svn://svn.code.sf.net/p/chibios/svn/trunk@1746 35acf78f-673a-0410-8e92-d51de3d6d3f4
master
gdisirio 2010-03-16 19:36:21 +00:00
parent 0eed163a69
commit ad3d21e815
26 changed files with 329 additions and 275 deletions

View File

@ -2,269 +2,269 @@ Platform : PowerPC
OS Setup : Full kernel
Compiler : powerpc-eabi-gcc (Sourcery G++ Lite 4.4-79) 4.4.1
Options : -O2 -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 11020
Kernel Size = 10900
Platform : PowerPC
OS Setup : Full kernel
Compiler : powerpc-eabi-gcc (Sourcery G++ Lite 4.4-79) 4.4.1
Options : -O2 -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 10564
Kernel Size = 10436
Platform : PowerPC
OS Setup : Minimal kernel
Compiler : powerpc-eabi-gcc (Sourcery G++ Lite 4.4-79) 4.4.1
Options : -O2
Kernel Size = 2288
Kernel Size = 2176
Platform : PowerPC
OS Setup : Full kernel
Compiler : powerpc-eabi-gcc (Sourcery G++ Lite 4.4-79) 4.4.1
Options : -Os -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 9680
Kernel Size = 9560
Platform : PowerPC
OS Setup : Full kernel
Compiler : powerpc-eabi-gcc (Sourcery G++ Lite 4.4-79) 4.4.1
Options : -Os -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 9192
Kernel Size = 9076
Platform : PowerPC
OS Setup : Minimal kernel
Compiler : powerpc-eabi-gcc (Sourcery G++ Lite 4.4-79) 4.4.1
Options : -Os
Kernel Size = 2312
Kernel Size = 2200
Platform : ARM Cortex-M3
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 5424
Kernel Size = 5372
Platform : ARM Cortex-M3
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 4948
Kernel Size = 4900
Platform : ARM Cortex-M3
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb
Kernel Size = 1408
Kernel Size = 1360
Platform : ARM Cortex-M3
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 5212
Kernel Size = 5152
Platform : ARM Cortex-M3
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 4784
Kernel Size = 4736
Platform : ARM Cortex-M3
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb
Kernel Size = 1364
Kernel Size = 1308
Platform : ARM Cortex-M3
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 5028
Kernel Size = 4964
Platform : ARM Cortex-M3
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 4612
Kernel Size = 4560
Platform : ARM Cortex-M3
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\"
Kernel Size = 1332
Kernel Size = 1272
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 7964
Kernel Size = 7852
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 7532
Kernel Size = 7436
Platform : ARM7TDMI (ARM mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2
Kernel Size = 1972
Kernel Size = 1868
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 7704
Kernel Size = 7600
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 7312
Kernel Size = 7220
Platform : ARM7TDMI (ARM mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os
Kernel Size = 1916
Kernel Size = 1824
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 7688
Kernel Size = 7572
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 7268
Kernel Size = 7168
Platform : ARM7TDMI (ARM mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\"
Kernel Size = 1904
Kernel Size = 1796
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 7412
Kernel Size = 7304
Platform : ARM7TDMI (ARM mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 7040
Kernel Size = 6944
Platform : ARM7TDMI (ARM mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\"
Kernel Size = 1872
Kernel Size = 1772
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -DCH_OPTIMIZE_SPEED=TRUE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 5216
Kernel Size = 5168
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -DCH_OPTIMIZE_SPEED=FALSE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 5008
Kernel Size = 4960
Platform : ARM7TDMI (THUMB mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 1356
Kernel Size = 1312
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -DCH_OPTIMIZE_SPEED=TRUE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 5036
Kernel Size = 4988
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -DCH_OPTIMIZE_SPEED=FALSE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 4844
Kernel Size = 4796
Platform : ARM7TDMI (THUMB mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 1336
Kernel Size = 1292
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=TRUE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 5064
Kernel Size = 5012
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=FALSE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 4872
Kernel Size = 4820
Platform : ARM7TDMI (THUMB mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -O2 -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 1316
Kernel Size = 1268
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=TRUE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 4844
Kernel Size = 4792
Platform : ARM7TDMI (THUMB mode)
OS Setup : Full kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DCH_OPTIMIZE_SPEED=FALSE -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 4688
Kernel Size = 4636
Platform : ARM7TDMI (THUMB mode)
OS Setup : Minimal kernel
Compiler : arm-elf-gcc (GCC) 4.4.2
Options : -Os -mthumb -ffixed-r7 -DCH_CURRP_REGISTER_CACHE=\"r7\" -DTHUMB -DTHUMB_PRESENT -DTHUMB_NO_INTERWORKING
Kernel Size = 1300
Kernel Size = 1252
Platform : MSP430
OS Setup : Full kernel
Compiler : msp430-gcc (GCC) 3.2.3
Options : -O2 -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 5636
Kernel Size = 5548
Platform : MSP430
OS Setup : Full kernel
Compiler : msp430-gcc (GCC) 3.2.3
Options : -O2 -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 5132
Kernel Size = 5044
Platform : MSP430
OS Setup : Minimal kernel
Compiler : msp430-gcc (GCC) 3.2.3
Options : -O2
Kernel Size = 1256
Kernel Size = 1172
Platform : MSP430
OS Setup : Full kernel
Compiler : msp430-gcc (GCC) 3.2.3
Options : -Os -DCH_OPTIMIZE_SPEED=TRUE
Kernel Size = 5572
Kernel Size = 5484
Platform : MSP430
OS Setup : Full kernel
Compiler : msp430-gcc (GCC) 3.2.3
Options : -Os -DCH_OPTIMIZE_SPEED=FALSE
Kernel Size = 5088
Kernel Size = 5000
Platform : MSP430
OS Setup : Minimal kernel
Compiler : msp430-gcc (GCC) 3.2.3
Options : -Os
Kernel Size = 1256
Kernel Size = 1172

View File

@ -24,6 +24,7 @@
* only kernel header you usually want to include in your application.
*
* @addtogroup kernel_info
* @details Kernel related info.
* @{
*/

View File

@ -24,6 +24,14 @@
* I/O resources in a standardized way.
*
* @addtogroup io_channels
* @details This module defines an abstract interface for I/O channels by
* extending the @p BaseSequentialStream interface. Note that no code
* is present, I/O channels are just abstract, interface-like
* structures; you should look at this system as a set of abstract
* C++ classes (even if written in C). Specific device drivers can
* use/extend the interface and implement it.<br>
* This approach makes the access to channels
* independent of the implementation logic.
* @{
*/

View File

@ -24,6 +24,14 @@
* data streams in a standardized way.
*
* @addtogroup data_streams
* @details This module defines an abstract interface for generic data streams.
* Note that no code is present, streams are just abstract,
* interface-like structures; you should look at this system as a
* set of abstract C++ classes (even if written in C). This approach
* makes the access to streams independent of the
* implementation logic.<br>
* The stream interface can be used as a base class for high level
* object types such as files, sockets, serial ports, pipes etc.
* @{
*/

View File

@ -19,332 +19,147 @@
/**
* @defgroup kernel Kernel
* The kernel is the portable part of ChibiOS/RT, this section documents the
* various kernel subsystems.
* @details The kernel is the portable part of ChibiOS/RT, this section
* documents the various kernel subsystems.
*/
/**
* @defgroup kernel_info Version Numbers and Identification
* Kernel related info.
* @ingroup kernel
*/
/**
* @defgroup config Configuration
* Kernel related settings and hooks.
* @ingroup kernel
*/
/**
* @defgroup types Types
* System types and macros.
* @ingroup kernel
*/
/**
* @defgroup base Base Kernel Services
* Base kernel services, the base subsystems are always included in the
* OS builds.
* @details Base kernel services, the base subsystems are always included in
* the OS builds.
* @ingroup kernel
*/
/**
* @defgroup system System Management
* Initialization, Locks, Interrupt Handling, Power Management, Abnormal
* Termination.
* @ingroup base
*/
/**
* @defgroup scheduler Scheduler
* @ingroup base
*/
/**
* @defgroup threads Threads
* @ingroup base
*/
/**
* @defgroup time Time and Virtual Timers
* Time and Virtual Timers related APIs.
* @ingroup base
*/
/**
* @defgroup synchronization Synchronization
* Synchronization services.
* @details Synchronization services.
* @ingroup kernel
*/
/**
* @defgroup semaphores Semaphores
* Semaphores and threads synchronization.
* <h2>Operation mode</h2>
* A semaphore is a threads synchronization object, some operations
* are defined on semaphores:
* - <b>Signal</b>: The semaphore counter is increased and if the result
* is non-positive then a waiting thread is removed from the semaphore
* queue and made ready for execution.
* - <b>Wait</b>: The semaphore counter is decreased and if the result
* becomes negative the thread is queued in the semaphore and suspended.
* - <b>Reset</b>: The semaphore counter is reset to a non-negative value
* and all the threads in the queue are released.
* .
* Semaphores can be used as guards for mutual exclusion code zones (note that
* mutexes are recommended for this kind of use) but also have other uses,
* queues guards and counters as example.<br>
* Semaphores usually use FIFO queues but it is possible to make them
* order threads by priority by specifying @p CH_USE_SEMAPHORES_PRIORITY in
* @p chconf.h.<br>
* In order to use the Semaphores APIs the @p CH_USE_SEMAPHORES
* option must be specified in @p chconf.h.<br><br>
* @ingroup synchronization
*/
/**
* @defgroup mutexes Mutexes
* Mutexes and threads synchronization.
* <h2>Operation mode</h2>
* A mutex is a threads synchronization object, some operations are defined
* on mutexes:
* - <b>Lock</b>: The mutex is checked, if the mutex is not owned by some
* other thread then it is locked else the current thread is queued on the
* mutex in a list ordered by priority.
* - <b>Unlock</b>: The mutex is released by the owner and the highest
* priority thread waiting in the queue, if any, is resumed and made owner
* of the mutex.
* .
* In order to use the Mutexes APIs the @p CH_USE_MUTEXES option must be
* specified in @p chconf.h.<br>
*
* <h2>Constraints</h2>
* In ChibiOS/RT the Unlock operations are always performed in Lock-reverse
* order. The Unlock API does not even have a parameter, the mutex to unlock
* is taken from an internal stack of owned mutexes.
* This both improves the performance and is required by an efficient
* implementation of the priority inheritance mechanism.
*
* <h2>The priority inversion problem</h2>
* The mutexes in ChibiOS/RT implement the <b>full</b> priority
* inheritance mechanism in order to handle the priority inversion problem.<br>
* When a thread is queued on a mutex, any thread, directly or indirectly,
* holding the mutex gains the same priority as the waiting thread (if their
* priority was not already equal or higher). The mechanism works with any
* number of nested mutexes and any number of involved threads. The algorithm
* complexity (worst case) is N with N equal to the number of nested mutexes.
* @ingroup synchronization
*/
/**
* @defgroup condvars Condition Variables
* Condition Variables and threads synchronization.
* <h2>Operation mode</h2>
* The condition variable is a synchronization object meant to be used inside
* a zone protected by a @p Mutex. Mutexes and CondVars together can implement
* a Monitor construct.<br>
* In order to use the Condition Variables APIs the @p CH_USE_CONDVARS
* option must be specified in @p chconf.h.<br><br>
* @ingroup synchronization
*/
/**
* @defgroup events Event Flags
* @brief Event Flags, Event Sources and Event Listeners.
* <h2>Operation mode</h2>
* Each thread has a mask of pending event flags inside its Thread structure.
* Several operations are defined:
* - <b>Wait</b>, the invoking thread goes to sleep until a certain AND/OR
* combination of event flags becomes pending.
* - <b>Clear</b>, a mask of event flags is cleared from the pending events
* mask, the cleared event flags mask is returned (only the flags that were
* actually pending and then cleared).
* - <b>Signal</b>, an event mask is directly ORed to the mask of the signaled
* thread.
* - <b>Broadcast</b>, each thread registered on an Event Source is signaled
* with the event flags specified in its Event Listener.
* - <b>Dispatch</b>, an events mask is scanned and for each bit set to one
* an associated handler function is invoked. Bit masks are scanned from bit
* zero upward.
* .
* An Event Source is a special object that can be "broadcasted" by a thread or
* an interrupt service routine. Broadcasting an Event Source has the effect
* that all the threads registered on the Event Source will be signaled with
* an events mask.<br>
* An unlimited number of Event Sources can exist in a system and
* thread can listen on an unlimited number of them.<br><br>
* In order to use the Event APIs the @p CH_USE_EVENTS option must be
* specified in @p chconf.h.
* @ingroup synchronization
*/
/**
* @defgroup messages Synchronous Messages
* Synchronous inter-thread messages.
* <h2>Operation Mode</h2>
* Synchronous messages are an easy to use and fast IPC mechanism, threads
* can both serve messages and send messages to other threads, the mechanism
* allows data to be carried in both directions. Data is not copied between
* the client and server threads but just a pointer passed so the exchange
* is very time efficient.<br>
* Messages are usually processed in FIFO order but it is possible to process
* them in priority order by specifying CH_USE_MESSAGES_PRIORITY
* in @p chconf.h.<br>
* Threads do not need to allocate space for message queues, the mechanism
* just requires two extra pointers in the @p Thread structure (the message
* queue header).<br>
* In order to use the Messages APIs the @p CH_USE_MESSAGES option must be
* specified in @p chconf.h.
* @ingroup synchronization
*/
/**
* @defgroup mailboxes Mailboxes
* Asynchronous messages.
* <h2>Operation mode</h2>
* A mailbox is an asynchronous communication mechanism.<br>
* The following operations are possible on a mailbox:
* - <b>Post</b>: Posts a message on the mailbox in FIFO order.
* - <b>Post Ahead</b>: Posts a message on the mailbox with high priority.
* - <b>Fetch</b>: A message is fetched from the mailbox and removed from
* the queue.
* - <b>Reset</b>: The mailbox is emptied and all the stored messages lost.
* .
* A message is a variable of type msg_t that is guaranteed to have the
* same size as, and be compatible with, pointers (an explicit cast is needed).
* If larger messages need to be exchanged then a pointer to a structure can
* be posted in the mailbox but the posting side has no predefined way to
* know when the message has been processed. A possible approach is to
* allocate memory (from a memory pool as example) from the posting side and
* free it on the fetching side. Another approach is to set a "done" flag into
* the structure pointed by the message.
* @ingroup synchronization
*/
/**
* @defgroup memory Memory Management
* Memory Management services.
* @details Memory Management services.
* @ingroup kernel
*/
/**
* @defgroup memcore Core Memory Manager
* Core Memory Manager related APIs.
* <h2>Operation mode</h2>
* The core memory manager is a simplified allocator that can only
* allocate memory blocks, without the possibility of freeing them.<br>
* This allocator is meant as a memory blocks provider for the other
* allocators such as:
* - C-Runtime allocator.
* - Heap allocator (see @ref heaps).
* - Memory pools allocator (see @ref pools).
* .
* By having a centralized memory provider the various allocators can coexist
* and share the main memory.<br>
* This allocator, alone, is also useful for very simple applications that
* just require a simple way to get memory blocks.<br>
* In order to use the core memory manager APIs the @p CH_USE_MEMCORE option
* must be specified in @p chconf.h.
* @ingroup memory
*/
/**
* @defgroup heaps Heaps
* Heap Allocator related APIs.
* <h2>Operation mode</h2>
* The heap allocator implements a first-fit strategy and its APIs are
* functionally equivalent to the usual @p malloc() and @p free(). The main
* difference is that the heap APIs are thread safe.<br>
* By enabling the @p CH_USE_MALLOC_HEAP option the heap manager will use the
* runtime-provided @p malloc() and @p free() as backend for the heap APIs
* instead of the system provided allocator.<br>
* In order to use the heap APIs the @p CH_USE_HEAP option must be specified
* in @p chconf.h.
* @ingroup memory
*/
/**
* @defgroup pools Memory Pools
* Memory Pools related APIs.
* <h2>Operation mode</h2>
* The Memory Pools APIs allow allocating/freeing fixed size objects in
* <b>constant time</b> and reliably without memory fragmentation problems.<br>
* In order to use the Memory Pools APIs the @p CH_USE_MEMPOOLS option must be
* specified in @p chconf.h.
* @ingroup memory
*/
/**
* @defgroup io_support I/O Support
* I/O related services.
* @details I/O related services.
* @ingroup kernel
*/
/**
* @defgroup data_streams Data Streams
* @brief Abstract Data Streams.
* @details This module defines an abstract interface for generic data streams.
* Note that no code is present, streams are just abstract, class-like
* structures; you should look at this system as a set of abstract C++
* classes (even if written in C). This approach makes the
* access to streams independent of the implementation logic.<br>
* The stream interface can be used as base class for high level object types
* such as files, sockets, serial ports, pipes etc.
*
* @ingroup io_support
*/
/**
* @defgroup io_channels I/O Channels
* @brief Abstract I/O Channels.
* @details This module defines an abstract interface for I/O channels by
* extending the @p BaseSequentialStream interface. Note that no code is
* present, I/O channels are just abstract, class-like structures;
* you should look at this system as a set of abstract C++ classes
* (even if written in C). Specific device drivers can use/extend the
* interface and implement it.<br>
* This approach makes the access to channels
* independent of the implementation logic.
*
* @ingroup io_support
*/
/**
* @defgroup io_queues I/O Queues
* @brief I/O queues.
* @details ChibiOS/RT supports several kinds of queues. The queues are mostly
* used in serial-like device drivers. The device drivers are usually designed
* to have a lower side (lower driver, it is usually an interrupt service
* routine) and an upper side (upper driver, accessed by the application
* threads).<br>
* There are several kinds of queues:<br>
* - <b>Input queue</b>, unidirectional queue where the writer is the
* lower side and the reader is the upper side.
* - <b>Output queue</b>, unidirectional queue where the writer is the
* upper side and the reader is the lower side.
* - <b>Full duplex queue</b>, bidirectional queue where read and write
* operations can happen at the same time. Full duplex queues
* are implemented by pairing an input queue and an output queue together.
* .
* In order to use the I/O queues the @p CH_USE_QUEUES option must
* be specified in @p chconf.h.<br>
* I/O queues are usually used as an implementation layer for the I/O channels
* interface.
*
* @ingroup io_support
*/
/**
* @defgroup registry Registry
* Threads Registry related APIs.
* @ingroup kernel
*/
/**
* @defgroup debug Debug
* Debug APIs and procedures.
* @ingroup kernel
*/
/**
* @defgroup core Port Templates
* Non portable code templates.
* @ingroup kernel
*/
/**
* @defgroup internals Internals
* Internal details, not APIs.
* @ingroup kernel
*/

View File

@ -26,6 +26,15 @@
* @brief Condition Variables code.
*
* @addtogroup condvars Condition Variables
* @details This module implements the Condition Variables mechanism. Condition
* variables are an extension to the Mutex subsystem and cannot
* work alone.
* <h2>Operation mode</h2>
* The condition variable is a synchronization object meant to be
* used inside a zone protected by a @p Mutex. Mutexes and CondVars
* together can implement a Monitor construct.<br>
* In order to use the Condition Variables APIs the @p CH_USE_CONDVARS
* option must be enabled in @p chconf.h.
* @{
*/
@ -107,10 +116,11 @@ void chCondBroadcastI(CondVar *cp) {
/**
* @brief Waits on the condition variable releasing the mutex lock.
* @details Releases the mutex, waits on the condition variable, and finally
* acquires the mutex again. This is done atomically.
* @note The thread MUST already have locked the mutex when calling
* @p chCondWait().
* @details Releases the currently owned mutex, waits on the condition
* variable, and finally acquires the mutex again. The whole sequence
* is performed atomically.
* @note The invoking thread <b>must</b> have at least one owned mutex on
* entry.
*
* @param[in] cp pointer to the @p CondVar structure
* @return The wakeup mode.
@ -128,10 +138,11 @@ msg_t chCondWait(CondVar *cp) {
/**
* @brief Waits on the condition variable releasing the mutex lock.
* @details Releases the mutex, waits on the condition variable, and finally
* acquires the mutex again. This is done atomically.
* @note The thread MUST already have locked the mutex when calling
* @p chCondWaitS().
* @details Releases the currently owned mutex, waits on the condition
* variable, and finally acquires the mutex again. The whole sequence
* is performed atomically.
* @note The invoking thread <b>must</b> have at least one owned mutex on
* entry.
*
* @param[in] cp pointer to the @p CondVar structure
* @return The wakeup mode.
@ -160,10 +171,13 @@ msg_t chCondWaitS(CondVar *cp) {
#if CH_USE_CONDVARS_TIMEOUT
/**
* @brief Waits on the condition variable releasing the mutex lock.
* @details Releases the mutex, waits on the condition variable, and finally
* acquires the mutex again. This is done atomically.
* @note The thread MUST already have locked the mutex when calling
* @p chCondWaitTimeout().
* @details Releases the currently owned mutex, waits on the condition
* variable, and finally acquires the mutex again. The whole sequence
* is performed atomically.
* @note The invoking thread <b>must</b> have at least one owned mutex on
* entry.
* @note Exiting the function because of a timeout does not re-acquire the
* mutex, the mutex ownership is lost.
*
* @param[in] cp pointer to the @p CondVar structure
* @param[in] time the number of ticks before the operation times out,
@ -188,10 +202,13 @@ msg_t chCondWaitTimeout(CondVar *cp, systime_t time) {
/**
* @brief Waits on the condition variable releasing the mutex lock.
* @details Releases the mutex, waits on the condition variable, and finally
* acquires the mutex again. This is done atomically.
* @note The thread MUST already have locked the mutex when calling
* @p chCondWaitTimeoutS().
* @details Releases the currently owned mutex, waits on the condition
* variable, and finally acquires the mutex again. The whole sequence
* is performed atomically.
* @note The invoking thread <b>must</b> have at least one owned mutex on
* entry.
* @note Exiting the function because of a timeout does not re-acquire the
* mutex, the mutex ownership is lost.
*
* @param[in] cp pointer to the @p CondVar structure
* @param[in] time the number of ticks before the operation times out,
@ -218,7 +235,8 @@ msg_t chCondWaitTimeoutS(CondVar *cp, systime_t time) {
currp->p_u.wtobjp = cp;
prio_insert(currp, &cp->c_queue);
msg = chSchGoSleepTimeoutS(THD_STATE_WTCOND, time);
chMtxLockS(mp);
if (msg != RDY_TIMEOUT)
chMtxLockS(mp);
return msg;
}
#endif /* CH_USE_CONDVARS_TIMEOUT */
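For reference, a minimal monitor-style usage sketch of the behavior documented above (illustrative only, not part of this commit; the shared counter is hypothetical, the calls are the standard chMtxInit()/chCondInit()/chCondWait()/chCondSignal() APIs):

#include "ch.h"

static Mutex qmtx;                      /* Protects the shared state.         */
static CondVar qcond;                   /* Signals "state became non-empty".  */
static int available;                   /* Hypothetical shared resource count.*/

void monitor_init(void) {
  chMtxInit(&qmtx);
  chCondInit(&qcond);
}

void producer(void) {
  chMtxLock(&qmtx);
  available++;
  chCondSignal(&qcond);                 /* Wakes one waiting consumer.        */
  chMtxUnlock();                        /* No parameter, releases qmtx.       */
}

void consumer(void) {
  chMtxLock(&qmtx);
  while (available == 0)
    chCondWait(&qcond);                 /* Atomically releases and re-locks.  */
  available--;
  chMtxUnlock();
}

Note that, with the change introduced by this commit, a chCondWaitTimeout() caller must re-acquire the mutex itself when RDY_TIMEOUT is returned.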

View File

@ -22,6 +22,11 @@
* @brief ChibiOS/RT Debug code.
*
* @addtogroup debug
* @details Debug APIs and services:
* - Trace buffer.
* - Parameters check.
* - Kernel assertions.
* .
* @{
*/
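As an illustration only, a hedged sketch of the check/assert macros implied by the list above; the exact parameter lists (function name, message and remark strings) are an assumption for this API generation and compile to nothing when the related debug options are disabled:

#include "ch.h"

/* Hypothetical setter showing both macro families. */
void set_speed(int speed) {
  chDbgCheck(speed >= 0, "set_speed");                  /* Parameter check.   */
  chDbgAssert(speed < 1000, "set_speed(), #1", "out of range"); /* Assertion. */
  /* ...apply the new speed... */
}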

View File

@ -22,6 +22,33 @@
* @brief Events code.
*
* @addtogroup events
* @details Event Flags, Event Sources and Event Listeners.
* <h2>Operation mode</h2>
* Each thread has a mask of pending event flags inside its @p Thread
* structure.
* Several operations are defined:
* - <b>Wait</b>, the invoking thread goes to sleep until a certain
* AND/OR combination of event flags becomes pending.
* - <b>Clear</b>, a mask of event flags is cleared from the pending
* events mask, the cleared event flags mask is returned (only the
* flags that were actually pending and then cleared).
* - <b>Signal</b>, an event mask is directly ORed to the mask of the
* signaled thread.
* - <b>Broadcast</b>, each thread registered on an Event Source is
* signaled with the event flags specified in its Event Listener.
* - <b>Dispatch</b>, an events mask is scanned and for each bit set
* to one an associated handler function is invoked. Bit masks are
* scanned from bit zero upward.
* .
* An Event Source is a special object that can be "broadcasted" by
* a thread or an interrupt service routine. Broadcasting an Event
* Source has the effect that all the threads registered on the
* Event Source will be signaled with an events mask.<br>
* An unlimited number of Event Sources can exist in a system and
* each thread can be listening on an unlimited number of
* them.<br><br>
* In order to use the Events APIs the @p CH_USE_EVENTS option must be
* enabled in @p chconf.h.
* @{
*/
#include "ch.h"
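As an illustration of the register/broadcast/wait flow described above, a hedged sketch assuming the 1.x-era names chEvtInit(), chEvtRegister(), chEvtWaitAny() and chEvtBroadcast() (the listener thread and event source names are hypothetical):

#include "ch.h"

static EventSource tick_source;          /* Broadcast by a thread or an ISR.  */

void tick_setup(void) {
  chEvtInit(&tick_source);
}

/* Listener thread: registers on the source and sleeps until signaled. */
static msg_t listener_thread(void *arg) {
  EventListener el;
  (void)arg;
  chEvtRegister(&tick_source, &el, 0);   /* Listen as event id 0.             */
  while (TRUE) {
    eventmask_t m = chEvtWaitAny(ALL_EVENTS);
    if (m & 1) {                         /* Bit 0 corresponds to event id 0.  */
      /* Handle the tick event here. */
    }
  }
  return 0;
}

/* Broadcaster side, for example invoked from a periodic callback. */
void tick_notify(void) {
  chEvtBroadcast(&tick_source);
}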

View File

@ -22,6 +22,18 @@
* @brief Heaps code.
*
* @addtogroup heaps
* @details Heap Allocator related APIs.
* <h2>Operation mode</h2>
* The heap allocator implements a first-fit strategy and its APIs
* are functionally equivalent to the usual @p malloc() and @p free()
* library functions. The main difference is that the OS heap APIs
* are guaranteed to be thread safe.<br>
* By enabling the @p CH_USE_MALLOC_HEAP option the heap manager
* will use the runtime-provided @p malloc() and @p free() as
* backend for the heap APIs instead of the system provided
* allocator.<br>
* In order to use the heap APIs the @p CH_USE_HEAP option must
* be enabled in @p chconf.h.
* @{
*/
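A small, hedged allocation sketch; the two-argument chHeapAlloc(NULL, size) form shown here is the multi-heap signature of later kernels (an assumption, single-heap kernels take only the size), NULL selecting the default heap:

#include "ch.h"

void heap_demo(void) {
  void *p = chHeapAlloc(NULL, 128);      /* NULL is returned on failure.      */
  if (p != NULL) {
    /* ...use the buffer... */
    chHeapFree(p);                       /* Thread-safe release.              */
  }
}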

View File

@ -20,11 +20,11 @@
/**
* @file chlists.c
* @brief Thread queues/lists code.
* @note All the functions present in this module, while public, are not
* an OS API and should not be directly used in user application
* code.
*
* @addtogroup internals
* @details All the functions present in this module, while public, are not
* an OS API and should not be directly used in user application
* code.
* @{
*/
#include "ch.h"

View File

@ -22,6 +22,30 @@
* @brief Mailboxes code.
*
* @addtogroup mailboxes
* @details Asynchronous messages.
* <h2>Operation mode</h2>
* A mailbox is an asynchronous communication mechanism.<br>
* The following operations are possible on a mailbox:
* - <b>Post</b>: Posts a message on the mailbox in FIFO order.
* - <b>Post Ahead</b>: Posts a message on the mailbox with urgent
* priority.
* - <b>Fetch</b>: A message is fetched from the mailbox and removed
* from the queue.
* - <b>Reset</b>: The mailbox is emptied and all the stored messages
* are lost.
* .
* A message is a variable of type msg_t that is guaranteed to have
* the same size as, and be compatible with, (data) pointers (an
* explicit cast is still needed).
* If larger messages need to be exchanged then a pointer to a
* structure can be posted in the mailbox but the posting side has
* no predefined way to know when the message has been processed. A
* possible approach is to allocate memory (from a memory pool, for
* example) on the posting side and free it on the fetching side.
* Another approach is to set a "done" flag into the structure pointed
* to by the message.<br>
* In order to use the mailboxes APIs the @p CH_USE_MAILBOXES option
* must be enabled in @p chconf.h.
* @{
*/
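A minimal producer/consumer sketch of the Post/Fetch operations listed above (buffer size and message values are arbitrary, the calls are the chMBInit()/chMBPost()/chMBFetch() APIs):

#include "ch.h"

#define MB_SIZE 8

static msg_t mb_buffer[MB_SIZE];         /* Storage for the queued messages.  */
static Mailbox mb;

void mb_setup(void) {
  chMBInit(&mb, mb_buffer, MB_SIZE);
}

/* Posting side, blocks while the mailbox is full. */
void mb_producer(int value) {
  (void)chMBPost(&mb, (msg_t)value, TIME_INFINITE);
}

/* Fetching side, blocks until a message is available. */
int mb_consumer(void) {
  msg_t msg;
  (void)chMBFetch(&mb, &msg, TIME_INFINITE);
  return (int)msg;
}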

View File

@ -22,6 +22,22 @@
* @brief Core memory manager code.
*
* @addtogroup memcore
* @details Core Memory Manager related APIs and services.
* <h2>Operation mode</h2>
* The core memory manager is a simplified allocator that can only
* allocate memory blocks, without the possibility of freeing them.<br>
* This allocator is meant as a memory blocks provider for the other
* allocators such as:
* - C-Runtime allocator (through a compiler specific adapter module).
* - Heap allocator (see @ref heaps).
* - Memory pools allocator (see @ref pools).
* .
* By having a centralized memory provider the various allocators can
* coexist and share the main memory.<br>
* This allocator, alone, is also useful for very simple applications
* that just require a simple way to get memory blocks.<br>
* In order to use the core memory manager APIs the @p CH_USE_MEMCORE
* option must be enabled in @p chconf.h.
* @{
*/
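For illustration, a brief sketch of a one-shot allocation through the core manager; since blocks can never be freed, the pattern only fits one-time initialization (the buffer name and size are hypothetical):

#include "ch.h"

static uint8_t *trace_buffer;            /* Permanent buffer, never freed.    */

void trace_init(void) {
  trace_buffer = (uint8_t *)chCoreAlloc(256);
  /* NULL is returned if the core memory is exhausted. */
}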

View File

@ -22,6 +22,13 @@
* @brief Memory Pools code.
*
* @addtogroup pools
* @details Memory Pools related APIs and services.
* <h2>Operation mode</h2>
* The Memory Pools APIs allow allocating/freeing fixed size objects in
* <b>constant time</b> and reliably without memory fragmentation
* problems.<br>
* In order to use the memory pools APIs the @p CH_USE_MEMPOOLS option
* must be enabled in @p chconf.h.
* @{
*/
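A hedged sketch of constant-time allocation from a pool of fixed-size objects; the packet_t type is hypothetical and the pool is seeded by freeing statically allocated objects into it:

#include "ch.h"

typedef struct {
  uint8_t data[32];
} packet_t;                              /* Hypothetical fixed-size object.   */

#define POOL_SIZE 4

static packet_t packets[POOL_SIZE];      /* Static backing storage.           */
static MemoryPool packet_pool;

void pool_setup(void) {
  unsigned i;
  chPoolInit(&packet_pool, sizeof(packet_t), NULL);  /* No provider function. */
  for (i = 0; i < POOL_SIZE; i++)
    chPoolFree(&packet_pool, &packets[i]);           /* Seed the pool.        */
}

void pool_demo(void) {
  packet_t *p = (packet_t *)chPoolAlloc(&packet_pool);  /* O(1), may be NULL. */
  if (p != NULL)
    chPoolFree(&packet_pool, p);                        /* O(1).              */
}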

View File

@ -22,6 +22,22 @@
* @brief Messages code.
*
* @addtogroup messages
* @details Synchronous inter-thread messages APIs and services.
* <h2>Operation Mode</h2>
* Synchronous messages are an easy-to-use and fast IPC mechanism,
* threads can act as message servers and/or message clients,
* and the mechanism allows data to be carried in both directions.
* Note that messages are not copied between the client and server
* threads, just a pointer is passed, so the exchange is very time
* efficient.<br>
* Messages are usually processed in FIFO order but it is possible to
* process them in priority order by enabling the
* @p CH_USE_MESSAGES_PRIORITY option in @p chconf.h.<br>
* Applications do not need to allocate buffers for synchronous
* message queues, the mechanism just requires two extra pointers in
* the @p Thread structure (the message queue header).<br>
* In order to use the Messages APIs the @p CH_USE_MESSAGES option
* must be enabled in @p chconf.h.
* @{
*/
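An illustrative client/server sketch; chMsgSend() is the documented client call, while the server-side chMsgWait()/chMsgRelease() signatures shown are an assumption for this kernel generation (they changed in later releases):

#include "ch.h"

/* Hypothetical request carried by pointer; nothing is copied. */
typedef struct {
  int operand;
  int result;
} request_t;

/* Client side: blocks until the server releases the message. */
int client_call(Thread *server, request_t *req) {
  return (int)chMsgSend(server, (msg_t)req);
}

/* Server side: serves one request at a time and answers with a status.
   Assumption: chMsgWait() returns the message value and chMsgRelease()
   takes only the answer in this API generation. */
static msg_t server_thread(void *arg) {
  (void)arg;
  while (TRUE) {
    request_t *req = (request_t *)chMsgWait();
    req->result = req->operand * 2;      /* Process the request in place.     */
    chMsgRelease(RDY_OK);                /* Wake the client with an answer.   */
  }
  return 0;
}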

View File

@ -22,6 +22,43 @@
* @brief Mutexes code.
*
* @addtogroup mutexes
* @details Mutexes related APIs and services.
*
* <h2>Operation mode</h2>
* A mutex is a threads synchronization object that can be in two
* distinct states:
* - Not owned.
* - Owned by a thread.
* .
* Some operations are defined on mutexes:
* - <b>Lock</b>: The mutex is checked, if the mutex is not owned by
* some other thread then it is associated with the locking thread,
* else the thread is queued on the mutex in a list ordered by
* priority.
* - <b>Unlock</b>: The mutex is released by the owner and the highest
* priority thread waiting in the queue, if any, is resumed and made
* owner of the mutex.
* .
* In order to use the Mutexes APIs the @p CH_USE_MUTEXES option must
* be enabled in @p chconf.h.
* <h2>Constraints</h2>
* In ChibiOS/RT the Unlock operations are always performed in
* lock-reverse order. The unlock API does not even have a parameter,
* the mutex to unlock is selected from an internal, per-thread, stack
* of owned mutexes. This both improves the performance and is
* required for an efficient implementation of the priority
* inheritance mechanism.
*
* <h2>The priority inversion problem</h2>
* The mutexes in ChibiOS/RT implement the <b>full</b> priority
* inheritance mechanism in order to handle the priority inversion
* problem.<br>
* When a thread is queued on a mutex, any thread, directly or
* indirectly, holding the mutex gains the same priority as the
* waiting thread (if their priority was not already equal or higher).
* The mechanism works with any number of nested mutexes and any
* number of involved threads. The algorithm complexity (worst case)
* is N with N equal to the number of nested mutexes.
* @{
*/
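A small usage sketch of the Lock/Unlock operations and of the stack-based unlock order described above; note that chMtxUnlock() takes no parameter and always releases the most recently locked mutex:

#include "ch.h"

static Mutex m1, m2;

void mtx_setup(void) {
  chMtxInit(&m1);
  chMtxInit(&m2);
}

void critical_section(void) {
  chMtxLock(&m1);
  chMtxLock(&m2);                        /* Nested lock.                      */
  /* ...access the shared resources... */
  chMtxUnlock();                         /* Releases m2 (lock-reverse order). */
  chMtxUnlock();                         /* Releases m1.                      */
}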

View File

@ -22,6 +22,23 @@
* @brief I/O Queues code.
*
* @addtogroup io_queues
* @details ChibiOS/RT queues are mostly used in serial-like device drivers.
* The device drivers are usually designed to have a lower side
* (lower driver, it is usually an interrupt service routine) and an
* upper side (upper driver, accessed by the application threads).<br>
* There are several kinds of queues:<br>
* - <b>Input queue</b>, unidirectional queue where the writer is the
* lower side and the reader is the upper side.
* - <b>Output queue</b>, unidirectional queue where the writer is the
* upper side and the reader is the lower side.
* - <b>Full duplex queue</b>, bidirectional queue. Full duplex queues
* are implemented by pairing an input queue and an output queue
* together.
* .
* In order to use the I/O queues the @p CH_USE_QUEUES option must
* be enabled in @p chconf.h.<br>
* I/O queues are usually used as an implementation layer for the I/O
* channels interface, also see @ref io_channels.
* @{
*/
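A hedged sketch of an input queue shared between a hypothetical receive interrupt (lower side) and a reader thread (upper side); the chIQInit()/chIQPutI()/chIQGetTimeout() names are assumed for this API generation:

#include "ch.h"

#define IQ_SIZE 64

static uint8_t iq_buffer[IQ_SIZE];
static InputQueue iq;

void iq_setup(void) {
  chIQInit(&iq, iq_buffer, IQ_SIZE, NULL);   /* No notification callback.     */
}

/* Lower side: to be called from an I-Locked context, for example from
   within an interrupt handler, with a freshly received byte. */
void iq_on_rx_byte_i(uint8_t byte) {
  (void)chIQPutI(&iq, byte);             /* The byte is lost if queue is full.*/
}

/* Upper side: application thread reading one byte, waiting forever. */
int iq_read_byte(void) {
  return (int)chIQGetTimeout(&iq, TIME_INFINITE);
}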

View File

@ -22,6 +22,9 @@
* @brief Threads registry code.
*
* @addtogroup registry
* @details Threads Registry related APIs and services.<br>
* In order to use the threads registry the @p CH_USE_REGISTRY option
* must be enabled in @p chconf.h.
* @{
*/
#include "ch.h"
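As an illustration, a hedged sketch walking the registry with chRegFirstThread()/chRegNextThread(); the iteration holds a reference on the returned thread, which chRegNextThread() releases when advancing:

#include "ch.h"

/* Counts the threads currently known to the registry. */
unsigned count_threads(void) {
  unsigned n = 0;
  Thread *tp = chRegFirstThread();
  while (tp != NULL) {
    n++;
    tp = chRegNextThread(tp);            /* Releases tp, references the next. */
  }
  return n;
}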

View File

@ -21,8 +21,7 @@
* @file chschd.c
* @brief Scheduler code.
*
* @defgroup scheduler Scheduler
* @ingroup base
* @addtogroup scheduler
* @details This module provides the default portable scheduler code,
* scheduler functions can be individually captured by the port
* layer in order to provide architecture optimized equivalents.

View File

@ -22,6 +22,28 @@
* @brief Semaphores code.
*
* @addtogroup semaphores
* @details Semaphores and threads synchronization.
*
* <h2>Operation mode</h2>
* A semaphore is a threads synchronization object, some operations
* are defined on semaphores:
* - <b>Signal</b>: The semaphore counter is increased and if the
* result is non-positive then a waiting thread is removed from
* the semaphore queue and made ready for execution.
* - <b>Wait</b>: The semaphore counter is decreased and if the result
* becomes negative the thread is queued in the semaphore and
* suspended.
* - <b>Reset</b>: The semaphore counter is reset to a non-negative
* value and all the threads in the queue are released.
* .
* Semaphores can be used as guards for mutual exclusion code zones
* (note that mutexes are recommended for this kind of use) but also
* have other uses, for example queue guards and counters.<br>
* Semaphores usually use a FIFO queuing strategy but it is possible
* to make them order threads by priority by enabling
* @p CH_USE_SEMAPHORES_PRIORITY in @p chconf.h.<br>
* In order to use the Semaphores APIs the @p CH_USE_SEMAPHORES
* option must be enabled in @p chconf.h.
* @{
*/
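A minimal sketch of the Wait/Signal protocol described above, using a counting semaphore as a guard for a pool of four hypothetical resources (chSemInit()/chSemWait()/chSemSignal()):

#include "ch.h"

static Semaphore slots;                  /* Counts the free resource slots.   */

void sem_setup(void) {
  chSemInit(&slots, 4);                  /* Four resources initially free.    */
}

void use_resource(void) {
  chSemWait(&slots);                     /* Blocks when no slot is free.      */
  /* ...use one of the guarded resources... */
  chSemSignal(&slots);                   /* Releases a waiting thread, if any.*/
}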

View File

@ -22,6 +22,13 @@
* @brief System related code.
*
* @addtogroup system
* @details System related APIs and services:
* - Initialization.
* - Locks.
* - Interrupt Handling.
* - Power Management.
* - Abnormal Termination.
* .
* @{
*/
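For illustration, a brief sketch of the kernel lock/unlock pair used to build a short critical zone (chSysLock()/chSysUnlock(); such zones should be kept as short as possible):

#include "ch.h"

static volatile uint32_t shared_counter;

void counter_increment(void) {
  chSysLock();                           /* Enter the kernel lock state.      */
  shared_counter++;                      /* Protected from preemption.        */
  chSysUnlock();                         /* Leave the kernel lock state.      */
}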

View File

@ -21,12 +21,15 @@
* @file chthreads.c
* @brief Threads code.
*
* @defgroup threads Threads
* @ingroup base
* @details This module contains all the threads related APIs, creation,
* termination, synchronization, delay etc. Dynamic variants of
* the base static API are also included.
*
* @addtogroup threads
* @details This module contains all the threads related APIs and services:
* - Creation.
* - Termination.
* - Synchronization.
* - Delays.
* - References.
* .
* Dynamic variants of the base static API are also included.
* @{
*/
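A minimal static thread creation sketch matching the services listed above (WORKING_AREA()/chThdCreateStatic(); in this kernel generation a thread function returns msg_t; the thread body is hypothetical):

#include "ch.h"

static WORKING_AREA(waBlinker, 128);     /* Stack and Thread structure.       */

static msg_t blinker_thread(void *arg) {
  (void)arg;
  while (!chThdShouldTerminate()) {
    /* ...toggle a LED, poll a sensor, etc... */
    chThdSleepMilliseconds(500);
  }
  return 0;
}

void blinker_start(void) {
  (void)chThdCreateStatic(waBlinker, sizeof(waBlinker),
                          NORMALPRIO, blinker_thread, NULL);
}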

View File

@ -22,6 +22,7 @@
* @brief Time and Virtual Timers related code.
*
* @addtogroup time
* @details Time and Virtual Timers related APIs and services.
* @{
*/

View File

@ -24,6 +24,7 @@
* contains the application specific kernel settings.
*
* @addtogroup config
* @details Kernel related settings and hooks.
* @{
*/

View File

@ -26,6 +26,7 @@
* advantage in doing so, as example because performance concerns.
*
* @addtogroup core
* @details Non portable code templates.
* @{
*/

View File

@ -26,6 +26,7 @@
* doing so.
*
* @addtogroup types
* @details System types and macros.
* @{
*/

View File

@ -70,11 +70,16 @@
versions. This is done because further scheduler optimizations are
becoming increasingly pointless without considering architecture and
compiler related constraints.
- NEW: Documentation improvements, now the description goes at the top of each
page; doxygen placed it in the middle by default, which was not ideal for
readability. Improved many descriptions of the various subsystems.
- OPT: Optimization on the interface between scheduler and port layer, now
the kernel is even smaller and the context switch performance improved
quite a bit on all the supported architectures.
- OPT: Simplified the implementation of chSchYieldS() and made it a macro.
The previous implementation was probably overkill and took too much space.
- CHANGE: Exiting from a chCondWaitTimeout() because of a timeout now does not
re-acquire the mutex, ownership is lost.
*** 1.5.3 ***
- FIX: Removed C99-style variables declarations (bug 2964418)(backported