author    | Dan Gohman <gohman@apple.com> | 2008-10-15 06:50:19 +0000
committer | Dan Gohman <gohman@apple.com> | 2008-10-15 06:50:19 +0000
commit    | 3358629380b56fb5abb5dc3037c22c3b8c7701fb
tree      | 31a72b9193cd8b4335ca1bea9055318640eea6be /lib/Target/X86/X86InstrSSE.td
parent    | 607ec1f608d31667412603d9efb3511ef6e18448
Now that predicates can be composed, simplify several of
the predicates by building more complex predicates on top
of simple ones instead of duplicating the simple
predicates' logic.
This doesn't yet reduce much redundancy in
DAGISelEmitter.cpp's generated source; that will require
improving DAGISelEmitter.cpp's instruction sorting so that
it groups nodes with similar predicates together more
effectively.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@57565 91177308-0d34-0410-b5e6-96231b3b80d8
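For context on what "composed" means here: a PatFrag's base pattern may itself be a PatFrag, and TableGen ANDs the C++ code fragments along the chain. A minimal sketch of the base predicates involved, assuming the simplified shape of the definitions in include/llvm/Target/TargetSelectionDAG.td at the time (exact wording may differ):

def unindexedload : PatFrag<(ops node:$ptr), (ld node:$ptr), [{
  // 'ld' matches any load node; this layer adds the addressing-mode check.
  return cast<LoadSDNode>(N)->getAddressingMode() == ISD::UNINDEXED;
}]>;
def load : PatFrag<(ops node:$ptr), (unindexedload node:$ptr), [{
  // Composes on unindexedload, so 'load' implies both checks.
  return cast<LoadSDNode>(N)->getExtensionType() == ISD::NON_EXTLOAD;
}]>;

Any PatFrag built on 'load', such as the alignedload in the diff below, therefore inherits both checks and only needs to state its own.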
Diffstat (limited to 'lib/Target/X86/X86InstrSSE.td')
-rw-r--r-- | lib/Target/X86/X86InstrSSE.td | 28
1 file changed, 8 insertions(+), 20 deletions(-)
diff --git a/lib/Target/X86/X86InstrSSE.td b/lib/Target/X86/X86InstrSSE.td
index 6d0d768f8f..1ae9935473 100644
--- a/lib/Target/X86/X86InstrSSE.td
+++ b/lib/Target/X86/X86InstrSSE.td
@@ -98,19 +98,13 @@ def loadv2i64 : PatFrag<(ops node:$ptr), (v2i64 (load node:$ptr))>;
 
 // Like 'store', but always requires vector alignment.
 def alignedstore : PatFrag<(ops node:$val, node:$ptr),
-                           (st node:$val, node:$ptr), [{
-  StoreSDNode *ST = cast<StoreSDNode>(N);
-  return !ST->isTruncatingStore() &&
-         ST->getAddressingMode() == ISD::UNINDEXED &&
-         ST->getAlignment() >= 16;
+                           (store node:$val, node:$ptr), [{
+  return cast<StoreSDNode>(N)->getAlignment() >= 16;
 }]>;
 
 // Like 'load', but always requires vector alignment.
-def alignedload : PatFrag<(ops node:$ptr), (ld node:$ptr), [{
-  LoadSDNode *LD = cast<LoadSDNode>(N);
-  return LD->getExtensionType() == ISD::NON_EXTLOAD &&
-         LD->getAddressingMode() == ISD::UNINDEXED &&
-         LD->getAlignment() >= 16;
+def alignedload : PatFrag<(ops node:$ptr), (load node:$ptr), [{
+  return cast<LoadSDNode>(N)->getAlignment() >= 16;
 }]>;
 
 def alignedloadfsf32 : PatFrag<(ops node:$ptr), (f32 (alignedload node:$ptr))>;
@@ -125,11 +119,8 @@ def alignedloadv2i64 : PatFrag<(ops node:$ptr), (v2i64 (alignedload node:$ptr))>
 // be naturally aligned on some targets but not on others.
 // FIXME: Actually implement support for targets that don't require the
 // alignment. This probably wants a subtarget predicate.
-def memop : PatFrag<(ops node:$ptr), (ld node:$ptr), [{
-  LoadSDNode *LD = cast<LoadSDNode>(N);
-  return LD->getExtensionType() == ISD::NON_EXTLOAD &&
-         LD->getAddressingMode() == ISD::UNINDEXED &&
-         LD->getAlignment() >= 16;
+def memop : PatFrag<(ops node:$ptr), (load node:$ptr), [{
+  return cast<LoadSDNode>(N)->getAlignment() >= 16;
 }]>;
 
 def memopfsf32 : PatFrag<(ops node:$ptr), (f32 (memop node:$ptr))>;
@@ -143,11 +134,8 @@ def memopv16i8 : PatFrag<(ops node:$ptr), (v16i8 (memop node:$ptr))>;
 // SSSE3 uses MMX registers for some instructions. They aren't aligned on a
 // 16-byte boundary.
 // FIXME: 8 byte alignment for mmx reads is not required
-def memop64 : PatFrag<(ops node:$ptr), (ld node:$ptr), [{
-  LoadSDNode *LD = cast<LoadSDNode>(N);
-  return LD->getExtensionType() == ISD::NON_EXTLOAD &&
-         LD->getAddressingMode() == ISD::UNINDEXED &&
-         LD->getAlignment() >= 8;
+def memop64 : PatFrag<(ops node:$ptr), (unindexedload node:$ptr), [{
+  return cast<LoadSDNode>(N)->getAlignment() >= 8;
 }]>;
 
 def memopv8i8 : PatFrag<(ops node:$ptr), (v8i8 (memop64 node:$ptr))>;
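For comparison, assuming the composition chain sketched above, the one-line alignedload predicate matches the same loads as the hand-expanded version the patch deletes; each removed check is now contributed by a layer of the chain. An illustrative expansion (alignedload_expanded is a hypothetical name, not a definition in the tree):

def alignedload_expanded : PatFrag<(ops node:$ptr), (ld node:$ptr), [{
  // Hand-written equivalent of the composed alignedload; this duplication
  // is exactly what composition removes.
  LoadSDNode *LD = cast<LoadSDNode>(N);
  return LD->getExtensionType() == ISD::NON_EXTLOAD &&   // from 'load'
         LD->getAddressingMode() == ISD::UNINDEXED &&    // from 'unindexedload'
         LD->getAlignment() >= 16;                       // alignedload's own check
}]>;

Note that memop64 composes from unindexedload rather than load, so only the addressing-mode check, plus its own 8-byte alignment check, applies there.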