author    | Petar Jovanovic <petar.jovanovic@rt-rk.com> | 2013-10-11 03:20:45 +0200
committer | Petar Jovanovic <petar.jovanovic@rt-rk.com> | 2013-10-11 03:20:45 +0200
commit    | 3ebbc156690e2510a21287c7ece988905a5c2e28 (patch)
tree      | 841b287f6430ef9318e4267b371bb0d4328c2bf2 /test
parent    | 1b783c13dd573e2611f7fde92e3e66475bdb8918 (diff)
Apply upstream: Add missing ATOMIC_CMP_SWAP case.
Cherry-pick r185186 from upstream.
Original commit message:
Author: Lang Hames <lhames@gmail.com>
Date: Fri Jun 28 18:36:42 2013 +0000
Add missing case to switch statement - DAGTypeLegalizer::ExpandIntegerResult
should expand ATOMIC_CMP_SWAP nodes the same way that it does for ATOMIC_SWAP.
Since ATOMIC_LOADs on some targets (e.g. older ARM variants) get legalized to
ATOMIC_CMP_SWAPs, the missing case had been causing i64 atomic loads to crash
during isel.
This has to be cherry-picked, as we have experienced the same bug described
in the original message. The missing case caused MIPS 64-bit atomics to crash.
TBR= mseaborn@chromium.org, dschuff@chromium.org
BUG= crash for MIPS atomics
Review URL: https://codereview.chromium.org/26958002
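For reference, the MIPS failure described above follows the same path as the ARM one: a 64-bit atomic load on a 32-bit MIPS target is expanded during type legalization into an ATOMIC_CMP_SWAP whose i64 result must itself be expanded, and before this fix that hit the missing case in DAGTypeLegalizer::ExpandIntegerResult. The cherry-pick only adds an ARM regression test; a MIPS reproducer would look roughly like the sketch below. The mipsel triple, the test name, and the expected __sync_val_compare_and_swap_8 libcall are illustrative assumptions, not part of this commit.

; Hypothetical MIPS reproducer (not part of this commit). Assumes a 32-bit
; mipsel triple, where i64 is not a legal type, so the seq_cst i64 load is
; legalized via ATOMIC_CMP_SWAP and ends up as a __sync_* libcall.
; RUN: llc < %s -mtriple=mipsel-linux-gnu | FileCheck %s

define i64 @test_mips_load_64bit(i64* %p) {
; CHECK: test_mips_load_64bit
; CHECK: __sync_val_compare_and_swap_8
  %1 = load atomic i64* %p seq_cst, align 8
  ret i64 %1
}

On a tree without this cherry-pick, the llc invocation in the RUN line above crashes during isel instead of emitting the libcall.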
Diffstat (limited to 'test')
-rw-r--r-- | test/CodeGen/ARM/atomic-load-store.ll | 15
1 file changed, 15 insertions, 0 deletions
diff --git a/test/CodeGen/ARM/atomic-load-store.ll b/test/CodeGen/ARM/atomic-load-store.ll
index 12a8fe4cd8..66916a7c2e 100644
--- a/test/CodeGen/ARM/atomic-load-store.ll
+++ b/test/CodeGen/ARM/atomic-load-store.ll
@@ -2,6 +2,7 @@
 ; RUN: llc < %s -mtriple=armv7-apple-ios -O0 | FileCheck %s -check-prefix=ARM
 ; RUN: llc < %s -mtriple=thumbv7-apple-ios | FileCheck %s -check-prefix=THUMBTWO
 ; RUN: llc < %s -mtriple=thumbv6-apple-ios | FileCheck %s -check-prefix=THUMBONE
+; RUN: llc < %s -mtriple=armv4-apple-ios | FileCheck %s -check-prefix=ARMV4
 
 define void @test1(i32* %ptr, i32 %val1) {
 ; ARM: test1
@@ -54,3 +55,17 @@ define void @test4(i8* %ptr1, i8* %ptr2) {
   store atomic i8 %val, i8* %ptr2 seq_cst, align 1
   ret void
 }
+
+define i64 @test_old_load_64bit(i64* %p) {
+; ARMV4: test_old_load_64bit
+; ARMV4: ___sync_val_compare_and_swap_8
+  %1 = load atomic i64* %p seq_cst, align 8
+  ret i64 %1
+}
+
+define void @test_old_store_64bit(i64* %p, i64 %v) {
+; ARMV4: test_old_store_64bit
+; ARMV4: ___sync_lock_test_and_set_8
+  store atomic i64 %v, i64* %p seq_cst, align 8
+  ret void
+}