path: root/lib/Target/PowerPC/PPCISelLowering.cpp
Age    Commit message    Author
2006-06-27  Implement a bunch of 64-bit cleanliness work. With this, treeadd builds (but  [Chris Lattner]
doesn't work right). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28921 91177308-0d34-0410-b5e6-96231b3b80d8
2006-06-26  Improve PPC64 calling convention support  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28919 91177308-0d34-0410-b5e6-96231b3b80d8
2006-06-21  Correct returns of 64-bit values, though they seemed to work before...  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28892 91177308-0d34-0410-b5e6-96231b3b80d8
2006-06-16  fix some assumptions that pointers can only be 32 bits. With this, we can  [Chris Lattner]
now compile:

  static unsigned long X;
  void test1() { X = 0; }

into:

_test1:
        lis r2, ha16(_X)
        li r3, 0
        stw r3, lo16(_X)(r2)
        blr

Totally amazing :)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28839 91177308-0d34-0410-b5e6-96231b3b80d8
2006-06-16  Rename some subtarget features. A CPU now can *have* 64-bit instructions,  [Chris Lattner]
and in 32-bit mode we can choose to optionally *use* 64-bit registers. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28824 91177308-0d34-0410-b5e6-96231b3b80d8
2006-06-15  Type of extract_element index operand should be iPTR.  [Evan Cheng]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28797 91177308-0d34-0410-b5e6-96231b3b80d8
2006-06-10  Fix a problem exposed by the local allocator. CALL instructions are not marked  [Chris Lattner]
as using incoming argument registers, so the local allocator would clobber them between their set and use. To fix this, we give the call instructions a variable number of uses in the CALL MachineInstr itself, so live variables understands the live ranges of these register arguments. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28744 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-30  Always reserve space for 8 spilled GPRs. GCC apparently assumes that this  [Chris Lattner]
space will be available, even if the callee isn't varargs. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28571 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-26  Change RET node to include signedness information of the return values, i.e.  [Evan Cheng]
RET chain, value1, sign1, value2, sign2, ... git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28510 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-25  CALL node change (arg / sign pairs instead of just arguments).  [Evan Cheng]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28462 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-24  Patches to make the LLVM sources more -pedantic clean. Patch provided  [Chris Lattner]
by Anton Korobeynikov! This is a step towards closing PR786. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28447 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-24  Fix CodeGen/Generic/vector.ll:test_div with altivec.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28445 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-24  Handle SETO* like we handle SET*, restoring behavior after Evan's setcc  [Chris Lattner]
change. This fixes PowerPC/fnegsel.ll. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28443 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-17  Make PPC call lowering more aggressive, making the isel matching code simple  [Chris Lattner]
enough to be autogenerated. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28354 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-17  Switch PPC over to a call-selection model where the lowering code creates  [Chris Lattner]
the copyto/fromregs instead of making the PPCISD::CALL selection code create them. This vastly simplifies the selection code, and moves the ABI handling parts into one place. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28346 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-17  3 changes, 2 of which are cleanup, one of which changes codegen:  [Chris Lattner]
1. Rearrange code a bit so that the special case doesn't require indenting
   lots of code.
2. Add comments describing the PPC calling convention.
3. Only round up to 56 bytes of stack space for an outgoing call if the
   callee is varargs.  This saves a bit of stack space.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28342 91177308-0d34-0410-b5e6-96231b3b80d8
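A note on the 56-byte figure, inferred from the Darwin PPC32 ABI rather than stated in the commit: it appears to be the 24-byte linkage area plus a 32-byte parameter save area covering the eight GPR argument registers, i.e. 24 + 8 * 4 = 56.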
2006-05-16  implement passing/returning vector regs to calls, at least non-varargs calls.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28341 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-16  Instead of implementing LowerCallTo directly, let the default impl produce an  [Chris Lattner]
ISD::CALL node, then custom lower that. This means that we only have to handle LEGAL call operands/results, not every possible type. This allows us to simplify the call code, shrinking it by about 1/3. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28339 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-16  Simplify the argument counting logic by only incrementing the index.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28335 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-16  Simplify the dead argument handling code.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28334 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-16  Vector args passed in registers don't reserve stack space.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28333 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-16  Switch the PPC backend over to using FORMAL_ARGUMENTS for formal argument  [Chris Lattner]
handling. This makes the lower argument code significantly simpler (we only need to handle legal argument types). Incidentally, this also implements support for vector argument registers, so long as they are not on the stack. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28331 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-16  Fit in 80 cols  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28311 91177308-0d34-0410-b5e6-96231b3b80d8
2006-05-12  Remove dead var, fix bad override.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28264 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-28  Fix CodeGen/Generic/2006-04-28-Sign-extend-bool.ll  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@28017 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-22  JumpTable support! What this represents is working asm and jit support for  [Nate Begeman]
x86 and ppc for 100% dense switch statements when relocations are non-PIC. This support will be extended and enhanced in the coming days to support PIC, and less dense forms of jump tables. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27947 91177308-0d34-0410-b5e6-96231b3b80d8
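A minimal C illustration of the kind of switch this targets (the function and values are made up for illustration, not taken from the commit): when every case in a contiguous range is populated, the switch is 100% dense and can be lowered to one bounds check plus an indexed jump through a table.

  /* Fully dense switch: cases cover 0..4 with no gaps, so a backend can
     lower it to a single bounds check and a jump-table dispatch. */
  int classify(int x) {
    switch (x) {
    case 0: return 10;
    case 1: return 11;
    case 2: return 12;
    case 3: return 13;
    case 4: return 14;
    default: return -1;
    }
  }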
2006-04-18  Fix a crash on:  [Chris Lattner]
void foo2(vector float *A, vector float *B) {
  vector float C = (vector float)vec_cmpeq(*A, *B);
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
  *A = C;
}

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27808 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18  pretty print node name  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27806 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18  Implement an important entry from README_ALTIVEC:  [Chris Lattner]
If an altivec predicate compare is used immediately by a branch, don't use a
(serializing) MFCR instruction to read the CR6 register, which requires a
compare to get it back to CR's.  Instead, just branch on CR6 directly. :)

For example, for:

void foo2(vector float *A, vector float *B) {
  if (!vec_any_eq(*A, *B))
    *B = (vector float){0,0,0,0};
}

We now generate:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        bne cr6, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

instead of:

_foo2:
        mfspr r2, 256
        oris r5, r2, 12288
        mtspr 256, r5
        lvx v2, 0, r4
        lvx v3, 0, r3
        vcmpeqfp. v2, v3, v2
        mfcr r3, 2
        rlwinm r3, r3, 27, 31, 31
        cmpwi cr0, r3, 0
        beq cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; cond_true
        vxor v2, v2, v2
        stvx v2, 0, r4
        mtspr 256, r2
        blr
LBB1_2: ; UnifiedReturnBlock
        mtspr 256, r2
        blr

This implements CodeGen/PowerPC/vec_br_cmp.ll.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27804 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18  Use vmladduhm to do v8i16 multiplies which is faster and simpler than doing  [Chris Lattner]
even/odd halves. Thanks to Nate for telling me what's what. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27793 91177308-0d34-0410-b5e6-96231b3b80d8
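A scalar C sketch of why vmladduhm suffices here (an illustration, not from the commit): vmladduhm computes a*b + c per unsigned halfword modulo 2^16, so a zero addend turns it into a plain element-wise v8i16 multiply.

  #include <stdint.h>

  /* Scalar model of vmladduhm: per 16-bit element, (a*b + c) mod 2^16.
     With c == 0 this is exactly an element-wise v8i16 multiply. */
  void vmladduhm_model(uint16_t d[8], const uint16_t a[8],
                       const uint16_t b[8], const uint16_t c[8]) {
    for (int i = 0; i < 8; ++i)
      d[i] = (uint16_t)((uint32_t)a[i] * b[i] + c[i]);
  }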
2006-04-18  Implement v16i8 multiply with this code:  [Chris Lattner]
        vmuloub v5, v3, v2
        vmuleub v2, v3, v2
        vperm v2, v2, v5, v4

This implements CodeGen/PowerPC/vec_mul.ll.  With this, v16i8 multiplies are
6.79x faster than before.

Overall, UnitTests/Vector/multiplies.c is now 2.45x faster with LLVM than with
GCC.

Remove the 'integer multiplies' todo from the README file.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27792 91177308-0d34-0410-b5e6-96231b3b80d8
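A scalar C sketch of the even/odd multiply trick above (an illustration, not from the commit): vmuloub and vmuleub produce 16-bit products for the odd and even byte lanes, and the vperm mask keeps only the low byte of each product, which is exactly a modulo-256 byte multiply.

  #include <stdint.h>

  /* Scalar model of the vmuloub/vmuleub/vperm sequence: each byte pair is
     multiplied into a 16-bit product, then only the low 8 bits are kept
     (that is what the permute selects), i.e. a v16i8 multiply mod 256. */
  void v16i8_mul_model(uint8_t d[16], const uint8_t a[16], const uint8_t b[16]) {
    for (int i = 0; i < 16; ++i) {
      uint16_t prod = (uint16_t)(a[i] * b[i]); /* vmuloub / vmuleub lane */
      d[i] = (uint8_t)prod;                    /* the byte vperm keeps   */
    }
  }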
2006-04-18  Lower v8i16 multiply into this code:  [Chris Lattner]
        li r5, lo16(LCPI1_0)
        lis r6, ha16(LCPI1_0)
        lvx v4, r6, r5
        vmulouh v5, v3, v2
        vmuleuh v2, v3, v2
        vperm v2, v2, v5, v4

where v4 is:

LCPI1_0:        ; <16 x ubyte>
        .byte 2
        .byte 3
        .byte 18
        .byte 19
        .byte 6
        .byte 7
        .byte 22
        .byte 23
        .byte 10
        .byte 11
        .byte 26
        .byte 27
        .byte 14
        .byte 15
        .byte 30
        .byte 31

This is 5.07x faster on the G5 (measured) than lowering to scalar code +
loads/stores.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27789 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-18  Custom lower v4i32 multiplies into a cute sequence, instead of having legalize  [Chris Lattner]
scalarize the sequence into 4 mullw's and a bunch of load/store traffic. This speeds up v4i32 multiplies 4.1x (measured) on a G5. This implements PowerPC/vec_mul.ll git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27788 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17  Make sure to check splats of every constant we can, handle splat(31) by  [Chris Lattner]
being a bit more clever, add support for odd splats from -31 to -17. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27764 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17  Teach the ppc backend to use rol and vsldoi to generate splatted constants.  [Chris Lattner]
This implements vec_constants.ll:test_vsldoi and test_rol git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27760 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17  Make some code more general, adding support for constant formation of several  [Chris Lattner]
new patterns. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27754 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17  Learn how to make odd splatted constants in range [17,29]. This implements  [Chris Lattner]
PowerPC/vec_constants.ll:test_29. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27752 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-17  Pull some code out into a helper function.  [Chris Lattner]
Efficiently codegen even splats in the range [-32,30].  This allows us to
codegen <30,30,30,30> as:

        vspltisw v0, 15
        vadduwm v2, v0, v0

instead of as a cp load.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27750 91177308-0d34-0410-b5e6-96231b3b80d8
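A small C sketch of the decomposition being used (an illustration, not from the commit): vspltisw only encodes a signed 5-bit immediate in [-16,15], so an even splat value in [-32,30] is produced as half plus half.

  #include <assert.h>

  /* An even splat value v in [-32,30] can be built as
     vspltisw vX, v/2  followed by  vadduwm vD, vX, vX,
     since v/2 fits vspltisw's signed 5-bit immediate range [-16,15]. */
  int even_splat_imm(int v) {
    assert(v >= -32 && v <= 30 && v % 2 == 0);
    int half = v / 2;
    assert(half >= -16 && half <= 15);
    return half;
  }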
2006-04-17  Implement a TODO: for any shuffle that can be viewed as a v4[if]32 shuffle,  [Chris Lattner]
if it can be implemented in 3 or fewer discrete altivec instructions, codegen it as such. This implements Regression/CodeGen/PowerPC/vec_perf_shuffle.ll git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27748 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-16  Implement a TODO: have the legalizer canonicalize a bunch of operations to  [Chris Lattner]
one type (v4i32) so that we don't have to write patterns for each type, and so that more CSE opportunities are exposed. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27731 91177308-0d34-0410-b5e6-96231b3b80d8
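A short C check of why this canonicalization is safe (an illustration, not from the commit): bitwise operations act bit by bit, so running them on the same 128 bits viewed as 4 x i32 instead of 16 x i8 gives identical results, which is what lets every type be funnelled through v4i32 and the nodes CSE'd.

  #include <stdint.h>
  #include <string.h>

  /* AND the same 16 bytes once as bytes and once as 32-bit words; the
     results are identical, so the element type of a bitwise op is moot. */
  int and_is_type_agnostic(const uint8_t a[16], const uint8_t b[16]) {
    uint8_t r8[16];
    uint32_t a32[4], b32[4], r32[4];
    for (int i = 0; i < 16; ++i)
      r8[i] = a[i] & b[i];
    memcpy(a32, a, 16);
    memcpy(b32, b, 16);
    for (int i = 0; i < 4; ++i)
      r32[i] = a32[i] & b32[i];
    return memcmp(r8, r32, 16) == 0; /* always 1 */
  }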
2006-04-16  Make the BUILD_VECTOR lowering code much more aggressive w.r.t constant vectors.  [Chris Lattner]
Remove some done items from the todo list. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27729 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-15  Fix a crash when faced with a shuffle vector that has an undef in its mask.  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27726 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-14  Allow undef in a shuffle mask  [Chris Lattner]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27714 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-14  Move the rest of the PPCTargetLowering::LowerOperation cases out into  [Chris Lattner]
separate functions, for simplicity and code clarity. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27693 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-14  Pull the VECTOR_SHUFFLE and BUILD_VECTOR lowering code out into separate  [Chris Lattner]
functions, which makes the code much cleaner :) git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27692 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-13  Force non-darwin targets to use a static relo model. This fixes PR734,  [Chris Lattner]
tested by CodeGen/Generic/vector.ll git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27657 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-12  Add a new way to match vector constants, which makes it easier to bang bits of  [Chris Lattner]
different types.  Codegen spltw(0x7FFFFFFF) and spltw(0x80000000) without a
constant pool load, implementing PowerPC/vec_constants.ll:test1.

This compiles:

typedef float vf __attribute__ ((vector_size (16)));
typedef int vi __attribute__ ((vector_size (16)));
void test(vi *P1, vi *P2, vf *P3) {
  *P1 &= (vi){0x80000000,0x80000000,0x80000000,0x80000000};
  *P2 &= (vi){0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF,0x7FFFFFFF};
  *P3 = vec_abs((vector float)*P3);
}

to:

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        vspltisw v0, -1
        vslw v0, v0, v0
        lvx v1, 0, r3
        vand v1, v1, v0
        stvx v1, 0, r3
        lvx v1, 0, r4
        vandc v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vandc v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

instead of (with two constant pool entries):

_test:
        mfspr r2, 256
        oris r6, r2, 49152
        mtspr 256, r6
        li r6, lo16(LCPI1_0)
        lis r7, ha16(LCPI1_0)
        li r8, lo16(LCPI1_1)
        lis r9, ha16(LCPI1_1)
        lvx v0, r7, r6
        lvx v1, 0, r3
        vand v0, v1, v0
        stvx v0, 0, r3
        lvx v0, r9, r8
        lvx v1, 0, r4
        vand v1, v1, v0
        stvx v1, 0, r4
        lvx v1, 0, r5
        vand v0, v1, v0
        stvx v0, 0, r5
        mtspr 256, r2
        blr

GCC produces (with 2 cp entries):

_test:
        mfspr r0,256
        stw r0,-4(r1)
        oris r0,r0,0xc00c
        mtspr 256,r0
        lis r2,ha16(LC0)
        lis r9,ha16(LC1)
        la r2,lo16(LC0)(r2)
        lvx v0,0,r3
        lvx v1,0,r5
        la r9,lo16(LC1)(r9)
        lwz r12,-4(r1)
        lvx v12,0,r2
        lvx v13,0,r9
        vand v0,v0,v12
        stvx v0,0,r3
        vspltisw v0,-1
        vslw v12,v0,v0
        vandc v1,v1,v12
        stvx v1,0,r5
        lvx v0,0,r4
        vand v0,v0,v13
        stvx v0,0,r4
        mtspr 256,r12
        blr

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27624 91177308-0d34-0410-b5e6-96231b3b80d8
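A C sketch of the all-ones/shift trick in the first listing (an illustration, not from the commit): vspltisw v0, -1 fills every word with 0xFFFFFFFF, and vslw shifts each word left by the low 5 bits of the corresponding word of the shift operand, here 31, leaving only the sign bit; complementing that mask (the vandc use) gives 0x7FFFFFFF.

  #include <stdint.h>

  /* Per 32-bit element: all ones shifted left by (0xFFFFFFFF & 31) == 31
     leaves only the sign bit, so no constant pool entry is needed. */
  uint32_t sign_bit_mask(void) {
    uint32_t all_ones = 0xFFFFFFFFu;        /* vspltisw v0, -1 */
    return all_ones << (all_ones & 31);     /* vslw v0, v0, v0 -> 0x80000000 */
  }

  uint32_t abs_mask(void) {
    return ~sign_bit_mask();                /* 0x7FFFFFFF */
  }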
2006-04-12  Rename get_VSPLI_elt -> get_VSPLTI_elt  [Chris Lattner]
Canonicalize BUILD_VECTOR's that match VSPLTI's into a single type for each form, eliminating a bunch of Pat patterns in the .td file and allowing us to CSE stuff more aggressively. This implements PowerPC/buildvec_canonicalize.ll:VSPLTI git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27614 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-12  Ensure that zero vectors are always v4i32, which forces them to CSE with  [Chris Lattner]
each other. This implements CodeGen/PowerPC/vxor-canonicalize.ll git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27609 91177308-0d34-0410-b5e6-96231b3b80d8
2006-04-11  Vector function results go into V2 according to GCC. The darwin ABI doc  [Chris Lattner]
doesn't say where they go :-/ git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@27579 91177308-0d34-0410-b5e6-96231b3b80d8