
Embedded Systems Architecture Course
  • Rajesh K. Gupta
  • University of California, Irvine
  • Ki-Seok Chung and Ali Dasdan
  • University of Illinois at Urbana-Champaign
  • Interstate Electronics Corporation, Anaheim, CA
  • 3-5 December 1997
  • 21-23 January 1998

  • Software Design and Modeling
  • Sections 4, 5, 13
  • 4 hours
  • Scheduling Theory
  • Section 11
  • 4 hours
  • Real-Time Operating Systems
  • Section 10
  • 4 hours
  • Timing Issues
  • 2 hours
  • Performance Analysis
  • 2 hours
  • Design Automation for Embedded Systems
  • 1 hour

  • (Also discuss
  • complexity theory - just a little to give the
    concepts needed during the discussion of
    scheduling theory
  • trends in embedded systems
  • trends for languages
  • trends for compilers - give them a list of things
    that they can do as users to take more advantage
    of their compilers
  • trends for processors
  • trends for architectures
  • trends for everything … )

Real-Time Operating Systems
  • Task management
  • Task interaction
  • Memory management
  • Real-time kernels
  • Commercial and research real-time operating
    systems

Task Management
  • A real-time system typically has many activities
    (or tasks) occurring in parallel.
  • Thus, a task represents an activity in the
    system.
  • Historically, one task represents one sequential
    thread of execution; however, multithreading
    allows multiple threads of control in the same
    task. We will assume a single thread of control.
  • The principles of concurrency are fundamental
    regardless of whether we use processes, tasks, or
    threads. Hence, we will examine concurrency in
    terms of tasks.

Task Management
  • Concurrent tasking means structuring a system
    into concurrent tasks.
  • Advantages of concurrent tasking
  • Concurrent tasking is a natural model for many
    real-time applications. For these applications, a
    design emphasizing concurrency is clearer and
    easier to understand, as it is a more realistic
    model of the problem.
  • It results in a separation of concerns of what
    each task does from when it does it. This usually
    makes the system easier to understand, manage,
    and construct.
  • It can result in an overall reduction in system
    execution time by overlapping executions of
    independent tasks.
  • It allows greater scheduling flexibility since
    time critical tasks with hard deadlines may be
    given a higher priority than less critical tasks.
  • Identifying the concurrent tasks early in the
    design can allow an early performance analysis of
    the system.
  • However, concurrent tasking introduces complexity
    because of task interactions.

Task Management
  • Often, tasks execute asynchronously, i.e., at
    different speeds, but may need to interact with
    each other.
  • Three types of interactions are possible:
  • communication
  • synchronization
  • mutual exclusion
  • Communication is simply used to transfer data
    between tasks.
  • Synchronization is used to coordinate tasks.
  • Mutual exclusion is used to control access to
    shared resources.

Task Management
  • Also, task interactions lead to three types of
    tasks:
  • independent
  • cooperating
  • competing
  • Independent tasks have no interactions with each
    other.
  • Cooperating tasks communicate and synchronize to
    perform some common operation.
  • Competing tasks communicate and synchronize to
    obtain access to shared resources.

Task Management
  • Implementing tasks
  • the case of dedicated resources
  • the case of shared resources
  • Implementing on dedicated resources
  • dedicate one processor for each task
  • connect processors using communication links such
    as a bus
  • Different arrangements are possible such as
    shared memory (one big memory shared by all but
    with local memories too) or distributed memory
    (all local memories).
  • Implementing on shared resources
  • sharing the processor
  • sharing the memory

Task Management
  • Issues in implementing tasks on a shared
    processor:
  • how the processor is to be shared - what
    mechanisms are required to enable a processor
    executing one task to change its activity and
    execute another task
  • when the processor is to be shared - at what
    times, or as a result of what events, should the
    processor change from executing one task to
    executing another
  • which task should the processor direct its
    attention to, when sharing of the processor is
    necessary (related to scheduling - to be
    discussed later)
  • How and when in serial execution
  • commence the next task at its starting point at
    the completion of the current task
  • How and when in concurrent execution
  • commence the next task at the point where it
    previously left off when the current task gives
    up use of the processor
  • This point is discussed next.

Task Management
  • Issues in implementing tasks in shared memory:
  • provide enough memory to hold all the tasks, or
  • do code sharing and memory sharing
  • Code sharing through
  • serially reusable code
  • write the code in the shared subroutine (call it
    S) in such a way that it makes no assumptions
    about the values in its local variables when it
    is entered. Using a lock and unlock pair, only
    one task can be made to use S at any time.
  • re-entrant code
  • In the above scheme, all the temporary areas that
    S needs reside in S. If these areas were instead
    part of the task currently using S, then S would
    consist of executable code only, and it could be
    executed by more than one task at a time,
    provided that S did not modify its own code in
    any way (see the sketch after this slide).
  • S uses the data areas indirectly, typically via a
    relocation pointer which is associated with each
    task and which is passed as a parameter when S is
    invoked.
  • Memory sharing to be discussed in Memory
    Management.
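A minimal C sketch of the contrast, assuming hypothetical function
names and an arbitrary 64-byte buffer:

    #include <stdio.h>
    #include <stddef.h>

    /* Serially reusable: the static buffer is shared state, so only
       one task may be inside at a time (bracket calls with a lock). */
    static char scratch[64];
    const char *format_id_shared(int id)
    {
        snprintf(scratch, sizeof scratch, "task-%d", id);
        return scratch;
    }

    /* Re-entrant: all working storage is supplied by the caller (the
       "relocation pointer" idea), so any number of tasks may execute
       this code at the same time. */
    const char *format_id_reentrant(int id, char *buf, size_t len)
    {
        snprintf(buf, len, "task-%d", id);
        return buf;
    }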

Task Management
  • states of a task such as running, waiting, etc.
  • properties of a task such as context, priority,
    etc.
  • task creation
  • In general, all tasks should be created before
    run time and remain dormant until needed. This
    guarantees that the resource demands will be
    known and that performance can be evaluated with
    respect to real-time deadlines.
  • issues related to tasks such as structuring - to
    be discussed in the software design and modeling
    section
Task Management
  • Variations in the task models of concurrent
    programming languages are based on
  • structure
  • level of parallelism
  • granularity
  • initialization
  • termination
  • representation
  • Structure
  • static: the number of tasks is fixed and known
    before run time.
  • dynamic: tasks are created at any time. The
    number of extant tasks is determined only at run
    time. For example, Ada and C.
  • Level of parallelism
  • nested: tasks are defined at any level of the
    program text; in particular, they are allowed to
    be defined within other tasks. For example, Ada
    and C.
  • flat: tasks are defined only at the outermost
    level of the program text.

Task Management
  • Granularity
  • coarse grain: such a program contains relatively
    few big (long-lived) tasks, e.g., Ada.
  • fine grain: such a program contains a large
    number of simple tasks.
  • Initialization: when a task is created, it may
    need to be supplied with information pertinent to
    its execution. Two ways to do that:
  • pass information in the form of parameters to the
    task
  • communicate explicitly with the task after it has
    started its execution
  • Termination: under the following circumstances:
  • completion of execution of the task body
  • suicide, by execution of a self-terminate
    statement
  • abortion, through the explicit action of another
    task
  • occurrence of an untrapped error condition
  • never: tasks are assumed to execute
    non-terminating loops
  • when no longer needed

Task Management
  • Representation there are four basic mechanisms
    for expressing concurrent execution
  • coroutines
  • fork and join
  • cobegin
  • explicit task declaration

Task Management
  • Coroutines
  • like subroutines but allow control to pass
    explicitly between them in a symmetric rather
    than strictly hierarchical way
  • Control is passed from one coroutine to another
    by means of the resume statement which names
    the coroutine to be resumed.
  • When a coroutine executes a resume, it stops
    executing but retains local state information so
    that if another coroutine subsequently resumes
    it, it can and will continue its execution.
  • No run-time support system is needed as the
    coroutines themselves sort out their order of
    execution.
  • In this scheme, tasks can be written by
    independent parties, and the number of tasks need
    not be known in advance.
  • Certain languages such as Modula-2 have built-in
    support for coroutines.
  • Not adequate for true parallel processing as
    their semantics allow for execution of only one
    routine at a time.
  • error-prone due to the use of global variables
    for communication
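C has no built-in coroutines, but the resume semantics can be
sketched with the POSIX ucontext facility: swapcontext() saves the
caller's state and continues the named routine exactly where it last
left off, and only one routine runs at a time. The names, loop
counts, and stack size below are illustrative only:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, co_ctx;
    static char co_stack[16384];

    static void coroutine(void)
    {
        for (int i = 0; i < 3; i++) {
            printf("coroutine step %d\n", i);
            swapcontext(&co_ctx, &main_ctx);   /* "resume main" */
        }
    }

    int main(void)
    {
        getcontext(&co_ctx);
        co_ctx.uc_stack.ss_sp   = co_stack;
        co_ctx.uc_stack.ss_size = sizeof co_stack;
        co_ctx.uc_link          = &main_ctx;   /* where to go on exit */
        makecontext(&co_ctx, coroutine, 0);

        for (int i = 0; i < 3; i++) {
            swapcontext(&main_ctx, &co_ctx);   /* "resume coroutine" */
            printf("main resumed after step %d\n", i);
        }
        return 0;
    }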

Task Management
  • (put a picture of coroutine control flow from
    Burns et al., p. 164)

Task Management
  • Fork and Join
  • Fork specifies that a designated routine should
    start executing concurrently with the invoker of
    the fork.
  • Join allows the invoker to synchronize with the
    completion of the invoked routine.
  • Fork and join allow for dynamic task creation and
    provide a means of passing information to the
    child task via parameters. Usually only a single
    value is returned by the child on its completion.
  • flexible but error-prone in use because they do
    not provide a structured approach to task
    creation
  • available in Unix (see the sketch below).
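A minimal Unix sketch: fork() starts the child running concurrently
with its parent, waitpid() is the join, and the child's exit status
is the single value returned on completion (42 is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t child = fork();        /* fork: child starts here too */
        if (child == 0) {
            /* child task: do some work, return one value */
            exit(42);
        }
        int status;
        waitpid(child, &status, 0);  /* join: wait for completion */
        printf("child returned %d\n", WEXITSTATUS(status));
        return 0;
    }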

Task Management
  • (put a picture of fork-join control flow from
    Burns et al., p. 165)

Task Management
  • Cobegin
  • a structured way of denoting the concurrent
    execution of a collection of statements
  • Tasks between a pair of cobegin and coend
    statements execute concurrently.
  • Can even support nesting of cobegins.
  • Occam-2 supports cobegins.
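Occam-2's PAR construct expresses this directly; as an approximation,
a cobegin/coend pair can be mimicked in C with POSIX threads, where
the joins at "coend" force every branch to complete before execution
continues (function names are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    static void *stmt_a(void *arg) { (void)arg; puts("statement A"); return NULL; }
    static void *stmt_b(void *arg) { (void)arg; puts("statement B"); return NULL; }

    int main(void)
    {
        pthread_t a, b;
        /* cobegin */
        pthread_create(&a, NULL, stmt_a, NULL);
        pthread_create(&b, NULL, stmt_b, NULL);
        /* coend: both statements must finish before we continue */
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        puts("after coend");
        return 0;
    }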

Task Management
  • (put a picture of cobegin from Burns et al., p. …)

Task Management
  • Explicit task declaration
  • Routines themselves state whether they will be
    executed concurrently.
  • Ada supports explicit task declaration by
    implicit task creation in that all tasks declared
    within a block start executing concurrently at
    the end of the declarative part of that block.
  • Ada also supports dynamic task creation using the
    new operator on a task type.
  • In Ada, initialization data cannot be given to
    tasks upon creation.

Task Management
  • Task management in Ada
  • (write some things from Burns et al.)

Task Interaction - Communication
  • Communication is based on
  • shared memory
  • message passing
  • Shared memory-based communication
  • Each task may access or update pieces of shared
    data.
  • A direct transfer of information occurs from one
    task to another.
  • Communication mechanisms
  • channels
  • pools

Task Interaction - Communication
  • Channels
  • provide the medium for items of information to be
    passed between one task and another
  • can hold more than one item at any time
  • usually have the items passing through in an
    ordered manner
  • Pools
  • make items of information available for reading
    and/or writing by a number of tasks in the system
  • act as a repository of information; information
    does not flow within a pool

Task Interaction - Communication
  • Implementing communication
  • Channel - The purpose of the channel is to
    provide a pipe of information passing from one
    task to another. For the tasks to run truly
    asynchronously, there must be some buffering of
    information; the larger the buffers, the greater
    the system flexibility.
  • queues
  • circular queues (or ring buffers or hoppers)
  • event flags
  • sockets and pipes
  • Pool - Pools usually take the form of system
    tables, shared data areas, and shared files.
    Since a pool is shared by more than one task, it
    is essential to control strictly the access to
    information in pools.
  • mailboxes (or ports)
  • monitors
  • In all cases involving a finite-sized structure,
    the size of the structure should be taken into
    account during the design phase of the system to
    prevent overflows.

Task Interaction - Communication
  • Queues
  • Items are placed on the tail of the queue by the
    sending task and removed from the head of the
    queue by the receiving task.
  • A common organization is First-In-First-Out
    (FIFO), in which the first item to come in will
    be the first to go out.
  • Items can have priorities and can be placed in
    the queue based on their priorities.
  • For large items such as arrays, it is better to
    place the address of the item in the queue rather
    than the item itself. In this case, the producer
    task allocates the memory and the consumer task
    releases or reuses it.
  • Circular queues
  • The underlying structure is a queue but the
    arrangement is like a ring in that items are
    placed into the slots in the queue which are
    considered to be arranged around a ring.
  • easier to manage than a FIFO queue
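A minimal circular-queue (ring buffer) sketch in C for one producer
and one consumer; the slot count is arbitrary, and one slot is kept
empty to distinguish a full ring from an empty one:

    #include <stddef.h>

    #define SLOTS 8

    typedef struct {
        int    items[SLOTS];
        size_t head;    /* next slot to read  (consumer side) */
        size_t tail;    /* next slot to write (producer side) */
    } ring_t;

    /* Returns 0 on success, -1 if the ring is full (would overflow). */
    static int ring_put(ring_t *r, int item)
    {
        size_t next = (r->tail + 1) % SLOTS;
        if (next == r->head) return -1;
        r->items[r->tail] = item;
        r->tail = next;
        return 0;
    }

    /* Returns 0 on success, -1 if the ring is empty. */
    static int ring_get(ring_t *r, int *item)
    {
        if (r->head == r->tail) return -1;
        *item = r->items[r->head];
        r->head = (r->head + 1) % SLOTS;
        return 0;
    }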

Task Interaction - Communication
  • Event flags
  • An event flag is associated with a set of related
    Boolean events. The flag maintains the state of
    the events and provides users access to read or
    modify the events. A task can wait for a
    particular event to change states.
  • In essence, they represent simulated interrupts,
    created by the programmer. Raising the event flag
    transfers control to the operating system, which
    can then invoke the corresponding handler.
    Examples are the raise() and signal() facilities
    in C.
  • Liked by designers because they enable Boolean
    logic to be applied to events, e.g., a task can
    wait on the conjunction and/or disjunction of
    discrete events.
  • poor mechanisms because they do not have content,
    and it is hard to decide who resets a flag's
    state and what to do if a flag indicates the
    event is already set (or cleared).
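A sketch of an event-flag group in C, assuming POSIX threads: the
flags live in a bitmask guarded by a mutex, and a condition variable
wakes tasks waiting on a conjunction of events. The type and function
names are hypothetical:

    #include <pthread.h>

    typedef struct {
        unsigned        flags;    /* one bit per Boolean event     */
        pthread_mutex_t lock;     /* init with pthread_mutex_init  */
        pthread_cond_t  changed;  /* init with pthread_cond_init   */
    } eventgroup_t;

    /* SIGNAL side: set the events in mask and wake all waiters. */
    void event_set(eventgroup_t *g, unsigned mask)
    {
        pthread_mutex_lock(&g->lock);
        g->flags |= mask;
        pthread_cond_broadcast(&g->changed);
        pthread_mutex_unlock(&g->lock);
    }

    /* WAIT side: block until all events in mask are set (conjunction). */
    void event_wait_all(eventgroup_t *g, unsigned mask)
    {
        pthread_mutex_lock(&g->lock);
        while ((g->flags & mask) != mask)
            pthread_cond_wait(&g->changed, &g->lock);
        pthread_mutex_unlock(&g->lock);
    }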

Task Interaction - Communication
  • Sockets and pipes
  • most often associated with network-based systems
    and provide a reliable communication path
  • should be used if portability is more important
    than performance
  • Mailboxes
  • A mailbox is a mutually agreed upon memory
    location that multiple tasks can use to
    communicate.
  • Each mailbox has a unique identification, and two
    tasks can communicate only if they have a shared
    mailbox.
  • uncommon in modern real-time systems
  • Monitors
  • A monitor is defined over a channel or a pool and
    hides their internal structure.
  • A monitor is used to enforce synchronization (via
    condition variables) and mutual exclusion under
    the control of the compiler.
  • provide information hiding
  • Java uses monitors.
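Monitors are a language-level construct (Java's synchronized methods,
for example); a hand-built approximation in C hides a one-slot
mailbox behind two procedures that enforce mutual exclusion with a
lock and synchronization with condition variables. Names are
illustrative:

    #include <pthread.h>

    typedef struct {
        int             slot, full;
        pthread_mutex_t lock;                /* monitor lock        */
        pthread_cond_t  not_full, not_empty; /* condition variables */
    } mailbox_t;  /* init members with pthread_*_init before use */

    void mbox_send(mailbox_t *m, int msg)
    {
        pthread_mutex_lock(&m->lock);
        while (m->full)
            pthread_cond_wait(&m->not_full, &m->lock);
        m->slot = msg;
        m->full = 1;
        pthread_cond_signal(&m->not_empty);
        pthread_mutex_unlock(&m->lock);
    }

    int mbox_receive(mailbox_t *m)
    {
        pthread_mutex_lock(&m->lock);
        while (!m->full)
            pthread_cond_wait(&m->not_empty, &m->lock);
        int msg = m->slot;
        m->full = 0;
        pthread_cond_signal(&m->not_full);
        pthread_mutex_unlock(&m->lock);
        return msg;
    }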

Task Interaction - Synchronization
  • Synchronization involves the ability of one task
    to stimulate or inhibit its own action or that of
    another task. In other words, in order to carry
    out the activities required of it, a task may
    need to have the ability to say "stop", "go", or
    "wait a moment" to itself or to another task.
  • Synchronization between two tasks centers around
    two significant events, wait and signal. One task
    must wait for the expected event to occur, and
    the other task will signal that the event has
    occurred.
  • Thus, synchronization can be implemented by
    assuming the existence of the following two
    procedures:
  • WAIT(event)
  • SIGNAL(event)
  • WAIT and SIGNAL procedures are indivisible
    operations in that once begun, they must be
    completed and the processor cannot be swapped
    while they are being executed.

Task Interaction - Synchronization
  • WAIT(event)
  • causes the task to suspend activity as soon as
    the WAIT operation is executed, and it will
    remain suspended until such time as notification
    of the occurrence of an event is received.
  • Should the event have already occurred, the task
    will resume immediately.
  • A waiting task can be thought of as being in the
    act of reading event information from a channel
    or pool. Once this information appears, it can
    proceed.
  • SIGNAL(event)
  • broadcasts the fact that an event has occurred.
    Its action is to place event information in a
    channel or pool. This in turn may enable a
    waiting task to continue.
  • Implementing synchronization via semaphores
  • Semaphore
  • a non-negative integer that can only be
    manipulated by WAIT and SIGNAL apart from the
    initialization routine
  • event in WAIT and SIGNAL above refers to a
    semaphore
  • also used to manage mutual exclusion
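Assuming POSIX semaphores are available, WAIT and SIGNAL map directly
onto the indivisible sem_wait() and sem_post() operations; an initial
count of 0 encodes "the event has not yet occurred":

    #include <semaphore.h>

    static sem_t event;

    /* initialization routine: 0 = event not yet signaled */
    void event_init(void) { sem_init(&event, 0, 0); }

    void WAIT(void)       { sem_wait(&event); }  /* block until SIGNAL */
    void SIGNAL(void)     { sem_post(&event); }  /* wake one waiter    */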

Task Interaction - Mutual Exclusion
  • Critical region
  • a sequence of statements that must appear to be
    executed indivisibly (or atomically)
  • Mutual exclusion
  • the synchronization required to protect a
    critical region
  • can be enforced using semaphores
  • Potential problems - due to improper use of
    mutual exclusion primitives
  • Deadlocks
  • Livelocks
  • Lockouts or starvation
  • Priority inversion (to be discussed in Scheduling)

Task Interaction - Mutual Exclusion
  • (give an example of mutual exclusion - Burns et
    al., p. 194)
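Pending that figure, a generic C sketch (not the example from Burns
et al.): the read-modify-write of a shared counter is the critical
region, and a mutex makes it appear atomic:

    #include <pthread.h>

    static long            balance;
    static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

    void deposit(long amount)
    {
        pthread_mutex_lock(&balance_lock);    /* enter critical region */
        balance += amount;                    /* protected statement   */
        pthread_mutex_unlock(&balance_lock);  /* leave critical region */
    }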

Task Interaction - Mutual Exclusion
  • Deadlock
  • Two or more tasks are waiting indefinitely for an
    event that can be caused by only one of the
    waiting tasks.
  • Livelock
  • Two or more tasks are busy waiting indefinitely
    for an event that can be caused by only one of
    the busy-waiting tasks.
  • Lockout or starvation
  • One task that wishes to gain access to a resource
    is never allowed to do so because there are
    always other tasks gaining access before it.

Task Interaction - Mutual Exclusion
  • (give an example of deadlock, livelock, and
    lockout)
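Pending those examples, a classic deadlock sketch in C: two tasks
acquire the same two locks in opposite orders, so each can end up
waiting forever for the lock the other already holds:

    #include <pthread.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *task1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);  /* blocks if task2 holds lock_b */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    void *task2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock_b);
        pthread_mutex_lock(&lock_a);  /* blocks if task1 holds lock_a */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }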

Task Interaction - Mutual Exclusion
  • If a task is free from livelocks, deadlocks, and
    lockouts, then it is said to possess liveness.
    This property implies that if a task wishes to
    perform some action, then it will, eventually, be
    allowed to do so. In particular, if a task
    requests access to a critical section, then it
    will gain access within a finite time.
  • Deadlocks are the most serious error condition
    among the three problems above. There are three
    possible approaches to address the issue of
    deadlocks:
  • deadlock prevention
  • deadlock avoidance
  • deadlock detection and recovery
  • For a thorough discussion of these issues, refer
    to standard operating systems books, e.g.,
    Silberschatz and Galvin, because real-time
    systems use the same techniques.

Memory Management
  • Two issues
  • Heap management
  • Stack management
  • Heap management
  • Classic heap
  • Priority heap
  • Fixed block heap
  • Classic heap
  • Usually found on Unix systems
  • The memory is collected into one giant heap and
    partitioned according to the demand from tasks.
  • There are several fit memory allocation
    algorithms, e.g., best-fit, first-fit, that also
    attempt to minimize the memory fragmentation.
  • Has a big management overhead, so it is not used
    in real-time systems.

Memory Management
  • Priority heap
  • partitions the memory along priority boundaries,
    e.g., a high and a low priority partition are
    created; the high priority partition is given
    adequate memory for its worst case, and the
    remaining memory is given to the low priority
    partition. In this scheme, only low priority
    tasks may need to wait due to insufficient
    memory.
  • Fixed block heap
  • partitions the memory into several pools of fixed
    block length and, upon a request, allocates a
    single block of memory from the pool with size
    equal to or larger than the requested amount.
  • Partitions can be used to keep multiple tasks in
    memory at the same time, provided that the number
    of tasks is fixed and known.
  • leads to fragmentation. One approach to minimize
    this is to divide memory into regions, where each
    region contains a collection of fixed-size
    partitions of different sizes (see the sketch
    below).
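A minimal fixed-block pool in C; the block size and count are
arbitrary, and a real kernel would guard the free list with a lock or
by disabling interrupts:

    #include <stddef.h>

    #define BLOCK_SIZE 64
    #define NUM_BLOCKS 32

    typedef union block {
        union block *next;                /* link while on the free list */
        char         payload[BLOCK_SIZE];
    } block_t;

    static block_t  pool[NUM_BLOCKS];
    static block_t *free_list;

    void pool_init(void)
    {
        for (int i = 0; i < NUM_BLOCKS - 1; i++)
            pool[i].next = &pool[i + 1];
        pool[NUM_BLOCKS - 1].next = NULL;
        free_list = &pool[0];
    }

    void *pool_alloc(void)                /* O(1); NULL when exhausted */
    {
        block_t *b = free_list;
        if (b) free_list = b->next;
        return b;
    }

    void pool_free(void *p)               /* O(1) push back onto the list */
    {
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }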

Memory Management
  • Other heap issues
  • Debug information: saving information with each
    block of memory can help characterize memory
    usage and performance. During the integration
    phase of a project, the additional information
    can be important to isolating such memory
    problems as memory leaks, overcommitted memory,
    and thrashing.
  • Keeping additional information in the memory
    blocks allocated: a memory block can be stamped
    with the id of the task (task id) or the address
    of the statement in a task (return address of the
    routine) that requests it. These can help
    identify the causes of the memory problems. One
    more technique is to use a time stamp with each
    block allocated. Time stamping in conjunction
    with task id or return address makes it possible
    to see how long a memory block has been
    allocated; if the time is long, it is usually a
    sign that the memory block was lost. During the
    development phase, it may be desirable to have a
    periodic task running that examines the time
    stamps to identify the blocks with potential
    memory leaks.
  • Low watermark: this alerts the developer whenever
    the number of unallocated blocks falls below a
    threshold and helps identify potential memory
    over-commitment problems.
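A sketch of the per-block debug information described above; the
field names and the leak threshold are hypothetical:

    #include <time.h>

    /* Prepended to each allocated block by the (not shown) allocator. */
    typedef struct {
        int     task_id;      /* which task requested the block      */
        void   *return_addr;  /* where in that task it was requested */
        time_t  alloc_time;   /* when it was handed out              */
    } alloc_header_t;

    #define LEAK_SUSPECT_SECS 600  /* "long" is application-specific */

    /* A periodic audit task could call this on every live block. */
    int looks_leaked(const alloc_header_t *h, time_t now)
    {
        return (now - h->alloc_time) > LEAK_SUSPECT_SECS;
    }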

Memory Management
  • Stack management: when multiple tasks share a
    single processor, their contexts (volatile
    information such as the contents of hardware
    registers, memory-management registers, and the
    program counter) need to be saved and restored so
    as to switch between them. This can be done using
  • task-control block model
  • one or more run-time stacks
  • Task-control block model
  • best for full-featured real-time operating
    systems
  • Context is kept in the control block of the task.
  • Having multiple tasks means multiple control
    blocks, which are maintained in a list.
  • This list can be fixed, if the number of tasks is
    fixed and known, or dynamic.
  • In the fixed case, the control block of a task is
    regarded as deleted even though it is not. In
    the dynamic case, it is actually deleted and the
    memory allocated for the task is returned to the
    heap.
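A sketch of what such a task-control block might hold; the register
count and state names are illustrative, since the real layout is
processor- and kernel-specific:

    typedef enum { DORMANT, READY, RUNNING, WAITING } task_state_t;

    typedef struct tcb {
        unsigned      regs[16];  /* saved general-purpose registers */
        void         *pc;        /* saved program counter           */
        void         *sp;        /* saved stack pointer             */
        int           priority;
        task_state_t  state;
        struct tcb   *next;      /* fixed or dynamic list of TCBs   */
    } tcb_t;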

Memory Management
  • Run-time stacks
  • used to keep context
  • may use only one run-time stack for all the tasks
    or one run-time stack in conjunction with several
    application stacks (or private stacks), one for
    each task in memory
  • The multiple-stack case allows tasks to interrupt
    themselves, thus helping handle transient
    overload conditions, reentrancy, and recursion.
  • Stack size must be known a priori. If recursion
    is not used, the stack size is determined by the
    number of tasks anticipated, plus some provision
    for transient overloading.
  • Operating system manages the stacks.
  • If it is necessary to maintain the context of an
    interrupt service routine over repeated
    interrupts, then an independent stack, rather than
    the stack that is used for all the interrupts, is
    needed.

Real-Time Kernel Issues
  • Kernel: the smallest portion of the operating
    system that provides task scheduling,
    dispatching, and intertask communication.
  • Nanokernel - the dispatcher
  • Microkernel - a nanokernel with task scheduling
  • Kernel - a microkernel with intertask
    communication
  • Executive - a kernel that includes privatized
    memory blocks, I/O services, and other complex
    features. Most commercial real-time kernels are
    in this category.
  • Operating system - an executive that also
    provides generalized user interface, security,
    file management system, etc.
  • Scheduler: the component that determines which
    task to run next.
  • Dispatcher: the component that gives control of
    the processor to the task selected by the
    scheduler.
  • should be as fast as possible as it is invoked
    during every task switch
  • can be implemented in hardware or software

Real-Time Kernel Issues
  • Kernel design strategies
  • Polled loop systems
  • Coroutines (already discussed)
  • Interrupt-driven systems
  • Foreground/background systems
  • (later from Laplante)

Real-Time Operating Systems
  • Three groups
  • Small, fast, proprietary kernels
  • Real-time extensions to commercial operating
    systems
  • Research operating systems
  • Small, fast, proprietary kernels
  • homegrown
  • commercial offerings
  • QNX, PDOS, pSOS, VCOS, VRTX32, VxWorks
  • To reduce the run-time overheads incurred by the
    kernel and to make the system fast, the kernel
  • has a fast context switch
  • has a small size
  • responds to external interrupts quickly
  • minimizes intervals during which interrupts are
    disabled
  • provides fixed or variable sized partitions for
    memory management as well as the ability to lock
    code and data in memory
  • provides special sequential files that can
    accumulate data at a fast rate

Real-Time Operating Systems
  • To deal with timing constraints, the kernel
  • provides bounded execution time for most
    primitives
  • maintains a real-time clock
  • provides for special alarms and timeouts
  • supports real-time queuing disciplines such as
    earliest deadline first and primitives for
    jamming a message into the front of a queue
  • provides primitives to delay processing by a
    fixed amount of time and to suspend/resume tasks
  • Also, the kernel
  • performs multitasking and intertask communication
    and synchronization via standard primitives such
    as mailboxes, events, signals, and semaphores.
  • For complex embedded systems, these kernels are
    inadequate as they are designed to be fast rather
    than to be predictable in every aspect.

Real-Time Operating Systems
  • Real-time extensions to commercial operating
    systems:
  • Unix to RT-Unix
  • CHORUS to its real-time version
  • generally slower and less predictable than the
    proprietary kernels but have greater
    functionality and better software development
    environments
  • based on a set of familiar interfaces that
    facilitate portability
  • not the correct approach because too many basic
    and inappropriate underlying assumptions still
    exist such as optimizing for the average case
    (rather than worst case), assigning resources on
    demand, ignoring most if not all semantic
    information about the application, and
    independent CPU scheduling and resource
    allocation possibly causing unbounded blocking.

Real-Time Operating Systems
  • Research operating systems
  • MARS
  • Spring
  • ARTS
  • DARK
  • (write home pages for the above systems)

References
  • [KrSh] Krishna and Shin, Real-Time Systems,
    McGraw-Hill, 1997.
  • [Ko] Kopetz, Real-Time Systems, Kluwer, 1997.
  • [Li] Liem, Retargetable Compilers for Embedded
    Core Processors, Kluwer, 1997.
  • [YeWo] Yen and Wolf, Hardware-Software
    Co-Synthesis of Distributed Embedded Systems,
    Kluwer, 1997.
  • [Ba] Balarin et al., Hardware-Software Co-Design
    of Embedded Systems, Kluwer, 1997.
  • [Gu] Gupta, Co-Synthesis of Hardware and Software
    for Digital Embedded Systems, Kluwer, 1995.
  • [Pr] Protopapas, Microcomputer Hardware Design,
    Prentice-Hall, 1988.
  • [La] Lapsley, DSP Processor Fundamentals,
    Berkeley Design Technology, 1996.
  • [Ly] Lyons, Understanding Digital Signal
    Processing, Addison-Wesley, 1997.
  • [St] Stallings, Data and Computer Communication,
    Macmillan, 1985.

References (contd)
  • [Cl] Clements, Microprocessor Systems Design,
    PWS, 1997.
  • [XuPa] Xu and Parnas, On Satisfying Timing
    Constraints in Hard-Real-Time Systems, IEEE
    Trans. Software Engineering, Jan. 1993.
  • [El] Ellison, Developing Real-Time Embedded
    Software, Wiley, 1994.
  • [Al] Allworth, Introduction to Real-Time Software
    Design, Springer-Verlag, 1984.
  • [La] Laplante, Real-Time Systems Design and
    Analysis, IEEE, 1997.
  • [Kl] Klein, RTOS Design: How is Your Application
    Affected?, Wind River Systems, 1997.
  • [Go1] Gomaa, Software Design Methods for
    Concurrent and Real-Time Systems, Addison-Wesley.
  • [Go2] Gomaa, Software Design Methods for
    Real-Time Systems, CMU SEI Technical Report
    SEI-CM-22-1.0, 1989.
  • [Bu] Budgen, Introduction to Software Design, CMU
    SEI Technical Report SEI-CM-2-2.1, 1989.
  • [Kle] Klein et al., A Practitioner's Handbook for
    Real-Time Analysis, Kluwer, 1993.

References (contd)
  • [Re] Rechtin, Systems Architecting,
    Prentice-Hall, 1991.
  • [Ree] Rechtin et al., The Art of Systems
    Architecting, CRC Press, 1996.
  • [Ha] Hatley et al., Strategies for Real-Time
    System Specification, Dorset House, 1987.
  • WWW Pointers
  • Carnegie Mellon Univ. Engineering Design Research
    Center: http://www.edrc.cmu.edu
  • Embedded Systems Conference East/West (Miller
    Freeman Publishing): http://www.mfi.com
  • Embedded Systems: http://www.compapp.dcu.ie/cdaly
  • Safety-Critical Systems: http://www.comlab.ox.ac.u
  • Design Automation for Embedded Systems journal
  • Embedded Systems Programming trade magazine
  • Embedded Hardware/Software Codesign