author    | Evan Cheng <evan.cheng@apple.com> | 2011-01-07 19:35:30 +0000
committer | Evan Cheng <evan.cheng@apple.com> | 2011-01-07 19:35:30 +0000
commit    | a5e1362f968568d66d76ddcdcff4ab98e203a48c (patch)
tree      | 53e266c315432b49be8ad6f3a2d2a5873265ab53 /lib
parent    | 1434f66b2e132a707e2c8ccb3350ea13fb5aa051 (diff)
Revert r122955. It seems using movups to lower memcpy can cause massive regressions (even on Nehalem) in edge cases. I also didn't see any real performance benefit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@123015 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'lib')
-rw-r--r-- | lib/Target/X86/X86ISelLowering.cpp | 6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/lib/Target/X86/X86ISelLowering.cpp b/lib/Target/X86/X86ISelLowering.cpp
index ddec78bfff..f871b5a770 100644
--- a/lib/Target/X86/X86ISelLowering.cpp
+++ b/lib/Target/X86/X86ISelLowering.cpp
@@ -1063,8 +1063,12 @@ X86TargetLowering::getOptimalMemOpType(uint64_t Size,
   // linux. This is because the stack realignment code can't handle certain
   // cases like PR2962. This should be removed when PR2962 is fixed.
   const Function *F = MF.getFunction();
-  if (NonScalarIntSafe && !F->hasFnAttr(Attribute::NoImplicitFloat)) {
+  if (NonScalarIntSafe &&
+      !F->hasFnAttr(Attribute::NoImplicitFloat)) {
     if (Size >= 16 &&
+        (Subtarget->isUnalignedMemAccessFast() ||
+         ((DstAlign == 0 || DstAlign >= 16) &&
+          (SrcAlign == 0 || SrcAlign >= 16))) &&
         Subtarget->getStackAlignment() >= 16) {
       if (Subtarget->hasSSE2())
         return MVT::v4i32;
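
For readers skimming the hunk, here is a minimal standalone sketch of the heuristic this revert restores. The function name useWideVectorForMemcpy and its flattened parameter list are hypothetical; in the real code these values come from getOptimalMemOpType()'s arguments and the X86 Subtarget. Only the boolean condition mirrors the diff.

#include <cstdint>

// Hypothetical restatement of the restored check (names are illustrative,
// not LLVM's API).
bool useWideVectorForMemcpy(uint64_t Size, unsigned DstAlign, unsigned SrcAlign,
                            bool UnalignedAccessFast, unsigned StackAlign) {
  return Size >= 16 &&
         // Either unaligned 16-byte accesses are fast on this subtarget...
         (UnalignedAccessFast ||
          // ...or both operands are known 16-byte aligned (an alignment of
          // 0 means it is unconstrained, which the check also accepts).
          ((DstAlign == 0 || DstAlign >= 16) &&
           (SrcAlign == 0 || SrcAlign >= 16))) &&
         // Guard on stack realignment support (see the PR2962 comment).
         StackAlign >= 16;
}

In other words, after this revert the backend only returns a 16-byte vector type (MVT::v4i32 under SSE2) when alignment is proven or the subtarget advertises fast unaligned access; r122955 had dropped the alignment clause to lower memcpy with movups regardless, which is the behavior the commit message identifies as regressing.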