author	Antonio Borneo <borneo.antonio@gmail.com>	2019-04-27 15:52:52 +0200
committer	Andreas Fritiofson <andreas.fritiofson@gmail.com>	2019-10-18 09:20:58 +0100
commit	5dc5ed571435c63f274f94ef89e20face012df08 (patch)
tree	822c6849e57cc9951fa1257b74f015d554cc5d60 /src
parent	5f42124a40e8ccbc1b5db3a8898d24408be22c62 (diff)
target/cortex_a: use aligned accesses for read/write cpu memory slow
Armv7a can read and write memory at unaligned addresses, but only
when the bit SCTLR.A (Alignment check enable) is zero and the
address belongs to a memory region with the "Normal" attribute (see
[1] chapter A3.2.1 "Unaligned data access"). In all other cases an
unaligned memory access triggers an alignment fault data abort
exception.
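As a minimal host-side model of this rule (the helper name
access_faults and the reduction to a single LDR/STR-class access
are illustrative, not taken from [1] or from the patch):

	#include <stdbool.h>
	#include <stdint.h>

	/* Sketch of [1] A3.2.1 for a single load/store access: an
	 * unaligned access is tolerated only when SCTLR.A == 0 and
	 * the location has the "Normal" memory attribute. */
	static bool access_faults(uint32_t addr, uint32_t size,
			bool sctlr_a, bool is_normal_memory)
	{
		if (addr % size == 0)
			return false;	/* aligned accesses never fault */
		return sctlr_a || !is_normal_memory;
	}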
Memory attributes are explained in [1] chapter A3.5 "Memory types
and attributes and the memory order model".
Disabling the MMU causes a change in the memory attributes, as
explained in [1] chapter B3.2 "The effects of disabling MMUs on
VMSA behavior".
This can cause several issues, e.g. a SW breakpoint on an unaligned
4-byte Thumb instruction, set while the MMU is on, can be
impossible to remove after the MMU turns off.
While it is possible to check all these conditions before every
unaligned memory access, it is clearly more maintainable to skip
such complexity and perform only aligned accesses.
Check the alignment and, if necessary, reduce the data size before
calling the functions cortex_a_{read,write}_cpu_memory_slow().
Update the comment in these two functions to match the new
behaviour.
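In isolation, the adjustment amounts to the stand-alone sketch
below; the function adjust_for_alignment() and the values in main()
are illustrative, not part of the patch. Note that for
address % 4 == 2 a size of 1 or 2 is already aligned, so only
size == 4 needs splitting:

	#include <assert.h>
	#include <inttypes.h>
	#include <stdio.h>

	/* Shrink the access size so every single access is naturally
	 * aligned, keeping size * count (total bytes) unchanged. */
	static void adjust_for_alignment(uint32_t address,
			uint32_t *size, uint32_t *count)
	{
		switch (address % 4) {
		case 1:
		case 3:
			/* Odd address: fall back to byte accesses. */
			*count *= *size;
			*size = 1;
			break;
		case 2:
			/* Halfword aligned: split words into halfwords. */
			if (*size == 4) {
				*count *= 2;
				*size = 2;
			}
			break;
		case 0:
		default:
			break;
		}
	}

	int main(void)
	{
		uint32_t size = 4, count = 10;

		adjust_for_alignment(0x20000002, &size, &count);
		assert(size == 2 && count == 20);	/* same 40 bytes */
		printf("size=%" PRIu32 " count=%" PRIu32 "\n", size, count);
		return 0;
	}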
[1] ARM DDI 0406C.d - "ARM Architecture Reference Manual, ARMv7-A
and ARMv7-R edition"
Change-Id: I57b4c11e7fa7e78aaaaee4406a5734b48db740ae
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Reviewed-on: http://openocd.zylin.com/5138
Tested-by: jenkins
Reviewed-by: Matthias Welwarsky <matthias@welwarsky.de>
Diffstat (limited to 'src')
-rw-r--r--	src/target/cortex_a.c	41
1 file changed, 37 insertions, 4 deletions
diff --git a/src/target/cortex_a.c b/src/target/cortex_a.c
index b3a8a41d..3ed2481b 100644
--- a/src/target/cortex_a.c
+++ b/src/target/cortex_a.c
@@ -1893,7 +1893,8 @@ static int cortex_a_write_cpu_memory_slow(struct target *target,
 {
 	/* Writes count objects of size size from *buffer. Old value of DSCR must
 	 * be in *dscr; updated to new value. This is slow because it works for
-	 * non-word-sized objects and (maybe) unaligned accesses. If size == 4 and
+	 * non-word-sized objects. Avoid unaligned accesses as they do not work
+	 * on memory address space without "Normal" attribute. If size == 4 and
 	 * the address is aligned, cortex_a_write_cpu_memory_fast should be
 	 * preferred.
 	 * Preconditions:
@@ -2050,7 +2051,22 @@ static int cortex_a_write_cpu_memory(struct target *target,
 		/* We are doing a word-aligned transfer, so use fast mode. */
 		retval = cortex_a_write_cpu_memory_fast(target, count, buffer, &dscr);
 	} else {
-		/* Use slow path. */
+		/* Use slow path. Adjust size for aligned accesses */
+		switch (address % 4) {
+			case 1:
+			case 3:
+				count *= size;
+				size = 1;
+				break;
+			case 2:
+				if (size == 4) {
+					count *= 2;
+					size = 2;
+				}
+			case 0:
+			default:
+				break;
+		}
 		retval = cortex_a_write_cpu_memory_slow(target, size, count,
 			buffer, &dscr);
 	}
@@ -2136,7 +2152,8 @@ static int cortex_a_read_cpu_memory_slow(struct target *target,
 {
 	/* Reads count objects of size size into *buffer. Old value of DSCR must be
 	 * in *dscr; updated to new value. This is slow because it works for
-	 * non-word-sized objects and (maybe) unaligned accesses. If size == 4 and
+	 * non-word-sized objects. Avoid unaligned accesses as they do not work
+	 * on memory address space without "Normal" attribute. If size == 4 and
 	 * the address is aligned, cortex_a_read_cpu_memory_fast should be
 	 * preferred.
 	 * Preconditions:
@@ -2352,7 +2369,23 @@ static int cortex_a_read_cpu_memory(struct target *target,
 		/* We are doing a word-aligned transfer, so use fast mode. */
 		retval = cortex_a_read_cpu_memory_fast(target, count, buffer, &dscr);
 	} else {
-		/* Use slow path. */
+		/* Use slow path. Adjust size for aligned accesses */
+		switch (address % 4) {
+			case 1:
+			case 3:
+				count *= size;
+				size = 1;
+				break;
+			case 2:
+				if (size == 4) {
+					count *= 2;
+					size = 2;
+				}
+				break;
+			case 0:
+			default:
+				break;
+		}
 		retval = cortex_a_read_cpu_memory_slow(target, size, count,
 			buffer, &dscr);
 	}