author    Nadav Rotem <nadav.rotem@intel.com>  2012-02-12 15:05:31 +0000
committer Nadav Rotem <nadav.rotem@intel.com>  2012-02-12 15:05:31 +0000
commit    2ee746b87d9471d2dc024827cacdc46114ed3708 (patch)
tree      d1e71067eb2c76203cec7b0f6a9759d4995b5613 /test/CodeGen/CellSPU/rotate_ops.ll
parent    61c2c128f7ebed525f153fb518d362f52b710ee8 (diff)
This patch addresses the problem of poor code generation for the zext
v8i8 -> v8i32 on AVX machines. The codegen often scalarizes ANY_EXTEND nodes.

The DAGCombiner has two optimizations that can mitigate the problem. First, if all of the operands of a BUILD_VECTOR node are extracted from ZEXT/ANYEXT nodes, then it is possible to create a new, simplified BUILD_VECTOR which uses UNDEF/ZERO values to eliminate the scalar ZEXT/ANYEXT nodes. Second, another DAG combine optimization lowers BUILD_VECTOR into a shuffle vector instruction. In the case of zext v8i8 -> v8i32 on AVX, a value in an XMM register is to be shuffled into a wide YMM register.

This patch modifies the second optimization so that shuffle vectors can be created even when the newly generated vector and the original vector from which we extract the values are of different types.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@150340 91177308-0d34-0410-b5e6-96231b3b80d8
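For reference, a minimal .ll sketch of the pattern this combine targets; the function name is hypothetical and not part of the patch:

; Zero-extend a v8i8 to v8i32. Before this change, the extension was
; often scalarized; with it, the BUILD_VECTOR of extracted elements can
; be lowered to a single shuffle from the narrower source vector, even
; though the source (XMM-sized) and result (YMM-sized) types differ.
define <8 x i32> @zext_v8i8_to_v8i32(<8 x i8> %v) {
  %r = zext <8 x i8> %v to <8 x i32>
  ret <8 x i32> %r
}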
Diffstat (limited to 'test/CodeGen/CellSPU/rotate_ops.ll')
-rw-r--r--  test/CodeGen/CellSPU/rotate_ops.ll  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/CodeGen/CellSPU/rotate_ops.ll b/test/CodeGen/CellSPU/rotate_ops.ll
index 8b7af20b4a..9770935276 100644
--- a/test/CodeGen/CellSPU/rotate_ops.ll
+++ b/test/CodeGen/CellSPU/rotate_ops.ll
@@ -1,5 +1,5 @@
; RUN: llc < %s -march=cellspu -o %t1.s
-; RUN: grep rot %t1.s | count 85
+; RUN: grep rot %t1.s | count 86
; RUN: grep roth %t1.s | count 8
; RUN: grep roti.*5 %t1.s | count 1
; RUN: grep roti.*27 %t1.s | count 1