
ARM Cortex-A9 memory access


I want to know the sequence in which an ARM core (Cortex-A series processor) accesses memory, from the virtual address generated by the core to the instruction/data transferred from memory back to the core. Suppose the core has generated a virtual address for some data/instruction and there is a miss in the TLBs; how does the address reach main memory (DRAM, if I am not wrong), and how does the data come back to the core through the L2 and L1 caches?

What if required data/instruction is already in L1 cache?

What if required data/instruction is already in L2 cache?

I am confused regarding cache and MMU communications.


Solution

  • tl;dr - Whatever you want. ARM is highly flexible, and the SoC vendor and/or the system programmer may make the memory sub-systems do a great many different things depending on the end device's features and needs.

    First, the MMU has fields that explicitly dictate how the cache is to be used. I recommend reading Chapter 9 (Caches) and Chapter 10 (Memory Management Unit) of the Cortex-A Series Programmer's Guide.

    Some relevant terms are:

    1. PoC - point of coherency.
    2. PoU - point of unification.
    3. Strongly-ordered.
    4. Device.
    5. Normal.

    Many MMU properties and caching behaviors can be affected by different CP15 and configuration registers. For instance, an 'exclusive' configuration, where data in the L1 cache is never also in the L2, can make it particularly difficult to cleanly write self-modifying code and perform other dynamic updates. So, even for a particular Cortex-A model, the system configuration may change things (write-back/write-through, write-allocate/no-write-allocate, bufferable, non-cacheable, etc.).
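    To illustrate why self-modifying code is delicate here, a sketch of the maintenance an ARMv7-A core typically needs after writing instructions to memory: clean the data cache to the PoU, invalidate the instruction cache to the PoU, then barrier. The CP15 encodings below (DCCMVAU, ICIMVAU) are the architectural ones, but they are privileged operations and the cache-line size is implementation-defined; on a non-ARM host this compiles to a no-op.

    ```c
    #include <stddef.h>

    /* Make newly written code visible to the instruction side.
     * begin/len: the modified region; line: cache-line size in bytes
     * (implementation-defined; often 32 or 64 on Cortex-A cores). */
    void sync_code(char *begin, size_t len, size_t line) {
    #if defined(__arm__)
        for (char *p = begin; p < begin + len; p += line) {
            /* DCCMVAU: clean data-cache line by MVA to PoU */
            __asm__ volatile("mcr p15, 0, %0, c7, c11, 1" :: "r"(p));
            /* ICIMVAU: invalidate instruction-cache line by MVA to PoU */
            __asm__ volatile("mcr p15, 0, %0, c7, c5, 1" :: "r"(p));
        }
        __asm__ volatile("dsb"); /* wait for the maintenance to complete */
        __asm__ volatile("isb"); /* flush the pipeline */
    #else
        (void)begin; (void)len; (void)line; /* no-op off-target */
    #endif
    }
    ```

    With an exclusive L1/L2 configuration, the clean-to-PoU step is what forces the dirty line out to where instruction fetches can see it; skipping it can leave the core executing stale code.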

    A typical sequence for general DDR core memory is,

    1. Resolve virt -> phys
      1. Micro TLB hit? Yes: have `phys`.
      2. Main TLB hit? Yes: have `phys`.
      3. Table walk. Have `phys` or fault.
    2. Access marked cacheable? Yes: go to 2.1. No: go to step 4.
      1. In L1 cache? Yes: go to 2.2.
      2. If read, return data. If write, fill data and mark dirty (write-back).
    3. In L2 cache? Yes: go to 3.1.
      1. If read, return data. If write, fill data and mark dirty (write-back).
    4. Run a physical cycle on the AXI bus (may route to a sub-bus).
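    The lookup order above can be sketched as a toy model. The struct fields and names here are hypothetical (real hardware does all of this in parallel pipelines); the point is just the fall-through order: translation first, then L1, then L2, then the external bus.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of the access sequence (hypothetical, not real hardware). */
    typedef struct {
        bool in_micro_tlb, in_main_tlb; /* translation state */
        bool cacheable, in_l1, in_l2;   /* attribute and cache state */
    } access_t;

    const char *resolve(access_t a) {
        /* 1. Virtual -> physical translation */
        if (!a.in_micro_tlb && !a.in_main_tlb) {
            /* hardware table walk here, or a translation fault */
        }
        /* 2-4. Cache lookup, then the external AXI bus */
        if (!a.cacheable) return "AXI bus cycle";
        if (a.in_l1)      return "L1 hit";
        if (a.in_l2)      return "L2 hit";
        return "AXI bus cycle"; /* miss in both levels */
    }

    int main(void) {
        access_t a = { true, true, true, true, false };
        printf("%s\n", resolve(a)); /* prints "L1 hit" */
        return 0;
    }
    ```

    Note that a non-cacheable access skips both cache levels entirely, which is exactly why the MMU attributes matter so much for device memory.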

    What if required data/instruction is already in L1 cache?

    What if required data/instruction is already in L2 cache?

    For normal memory these are just cache hits. If the region is 'write-through' and the access is a write, the value is updated in the cache and also written to memory. If it is 'write-back', the value is updated in the cache and marked dirty.Note1 If it is a read, the cache memory is used in both cases.
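    The difference between the two write-hit policies can be shown on a one-line toy cache (a hypothetical model, not real hardware): both update the cache, but only write-through touches memory immediately.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* One cache line: the cached value and its dirty flag. */
    typedef struct { int data; bool dirty; } line_t;

    void write_hit(line_t *line, int *memory, int value, bool write_back) {
        line->data = value;      /* the cache is updated under both policies */
        if (write_back)
            line->dirty = true;  /* memory is updated later, on eviction */
        else
            *memory = value;     /* write-through: memory is updated now */
    }

    int main(void) {
        int memory = 0;
        line_t line = { 0, false };
        write_hit(&line, &memory, 42, false); /* write-through */
        printf("wt: cache=%d mem=%d\n", line.data, memory);
        write_hit(&line, &memory, 7, true);   /* write-back */
        printf("wb: cache=%d dirty=%d mem=%d\n", line.data, line.dirty, memory);
        return 0;
    }
    ```

    After the write-back store, memory still holds the old value until the dirty line is evicted or explicitly cleaned, which is the cost the policy trades for fewer bus cycles.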

    The system may be set up completely differently for device memory (i.e., memory-mapped USB registers, world-shareable memory, multi-core/CPU buffers, etc.). Often the setup will depend on system cost, performance, and power consumption. For example, a write-through cache is easier to implement (lower power and lower cost) but often gives lower performance.

    I am confused regarding cache and MMU communications.

    Mainly, the MMU provides information that the caches use when resolving an address. The MMU may say to use or not use the cache. It may tell the cache it can 'gang' writes together (write-bufferable) but should not store them indefinitely, etc. So many of the MMU specifiers can selectively alter the behavior of the cache. As the Cortex-A cache parameters are not defined by the architecture (they are up to each SoC manufacturer), it is often the case that particular MMU bits have alternate behavior on different systems.
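    As a concrete example of those page-table attribute bits, here is a decoder for the C and B bits of an ARMv7-A short-descriptor entry when TEX[2:0] = 0b000 (the classic encodings; the full TEX remap and long-descriptor schemes add more cases, which are omitted here):

    ```c
    #include <stdio.h>

    /* Memory type for a short-descriptor entry with TEX = 0b000,
     * from the C (cacheable) and B (bufferable) bits. */
    const char *attr_tex0(int c, int b) {
        if (!c && !b) return "Strongly-ordered";
        if (!c &&  b) return "Shareable Device";
        if ( c && !b) return "Normal, write-through, no write-allocate";
        return          "Normal, write-back, no write-allocate";
    }

    int main(void) {
        /* C=0, B=1 is the usual choice for memory-mapped peripherals */
        printf("%s\n", attr_tex0(0, 1)); /* prints "Shareable Device" */
        return 0;
    }
    ```

    This is the mechanism behind "the MMU may say to use/not use the cache": the table entry's type, not the cache itself, decides whether an access is cacheable at all.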

    Note1: The 'dirty cache' may additionally 'broadcast' exclusive-monitor information for strex- and ldrex-type accesses.