Intel XScale® Core Developer’s Manual
January, 2004

10 Performance Considerations
This chapter describes performance considerations that compiler writers, application programmers, and system designers need to be aware of to use the Intel XScale® core efficiently. Topics discussed here include interrupt latency, branch prediction, and instruction latencies.
10.1 Interrupt Latency
Minimum Interrupt Latency is defined as the minimum number of cycles from the assertion of any
interrupt signal (IRQ or FIQ) to the execution of the instruction at the vector for that interrupt. The
point at which the assertion begins depends on the ASSP. This number assumes best case
conditions exist when the interrupt is asserted, e.g., the system isn’t waiting on the completion of
some other operation.
A more useful number to work with is often the Maximum Interrupt Latency. This is typically a complex calculation that depends on what else is going on in the system when the interrupt is asserted. Some examples of conditions that can adversely affect interrupt latency are:
• the instruction currently executing could be a 16-register LDM,
• the processor could fault just as the interrupt arrives,
• the processor could be waiting for data from a load, performing a page table walk, etc., and
• high core-to-system (bus) clock ratios.
Maximum Interrupt Latency can be reduced by:
• ensuring that the interrupt vector and interrupt service routine are resident in the instruction cache. This can be accomplished by locking them down into the cache.
• removing or reducing the occurrences of hardware page table walks. This can also be accomplished by locking down the application’s page table entries into the TLBs, along with the page table entry for the interrupt service routine.
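The locking steps above might be sketched as follows. This is an untested, hardware-specific sketch: `lock_isr_resources` and the `isr_start`/`isr_end` linker symbols are hypothetical names, and the CP15 encodings shown are the core’s cache and TLB lock-down functions (coprocessor registers 9 and 10) as this author understands them; verify them against the Configuration chapter before use.

```c
/*
 * Sketch (assumed, untested): lock the ISR's code and its translation
 * down so a pending interrupt never misses in the I-cache or triggers
 * a page table walk. lock_isr_resources() and isr_start/isr_end are
 * hypothetical. Must run in a privileged mode with interrupts off.
 */
#define CACHE_LINE 32  /* XScale I-cache line size in bytes */

extern char isr_start[], isr_end[];  /* hypothetical linker symbols */

static void lock_isr_resources(void)
{
    unsigned long addr;

    /* Lock the ISR's page table entry into the instruction TLB:
     * "translate and lock I-TLB entry", MCR p15, 0, Rd, c10, c4, 0. */
    addr = (unsigned long)isr_start;
    __asm__ volatile("mcr p15, 0, %0, c10, c4, 0" : : "r"(addr));

    /* Fetch and lock each line of the ISR into the I-cache:
     * "fetch and lock I-cache line", MCR p15, 0, Rd, c9, c1, 0. */
    for (addr = (unsigned long)isr_start;
         addr < (unsigned long)isr_end;
         addr += CACHE_LINE)
        __asm__ volatile("mcr p15, 0, %0, c9, c1, 0" : : "r"(addr));
}
```

The interrupt vector itself would be locked the same way; lock-down capacity is limited, so only latency-critical code should be pinned.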
Refer to the Intel XScale® core implementation option section of the ASSP architecture specification for more information on interrupt latency.