path: root/lib/CodeGen
Date    Commit message    Author
2007-11-02  One more extract_subreg coalescing bug.  (Evan Cheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43644 91177308-0d34-0410-b5e6-96231b3b80d8
2007-11-02  Fix a thinko.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43639 91177308-0d34-0410-b5e6-96231b3b80d8
2007-11-01  Executive summary: getTypeSize -> getTypeStoreSize / getABITypeSize.  (Duncan Sands)
The meaning of getTypeSize was not clear - clarifying it is important now that we have x86 long double and arbitrary precision integers. The issue with long double is that it requires 80 bits, and this is not a multiple of its alignment. This gives a primitive type for which getTypeSize differed from getABITypeSize. For arbitrary precision integers it is even worse: there is the minimum number of bits needed to hold the type (e.g. 36 for an i36), the maximum number of bits that will be overwritten when storing the type (40 bits for i36) and the ABI size (i.e. the storage size rounded up to a multiple of the alignment; 64 bits for i36).

This patch removes getTypeSize (not really - it is still there but deprecated to allow for a gradual transition). Instead there is:

(1) getTypeSizeInBits - a number of bits that suffices to hold all values of the type. For a primitive type, this is the minimum number of bits. For an i36 this is 36 bits. For x86 long double it is 80. This corresponds to gcc's TYPE_PRECISION.

(2) getTypeStoreSizeInBits - the maximum number of bits that is written when storing the type (or read when reading it). For an i36 this is 40 bits, for an x86 long double it is 80 bits. This is the size alias analysis is interested in (getTypeStoreSize returns the number of bytes). There doesn't seem to be anything corresponding to this in gcc.

(3) getABITypeSizeInBits - this is getTypeStoreSizeInBits rounded up to a multiple of the alignment. For an i36 this is 64, for an x86 long double this is 96 or 128 depending on the OS. This is the spacing between consecutive elements when you form an array out of this type (getABITypeSize returns the number of bytes). This is TYPE_SIZE in gcc.

Since successive elements in a SequentialType (arrays, pointers and vectors) need to be aligned, the spacing between them will be given by getABITypeSize. This means that the size of an array is the length times the getABITypeSize. It also means that GEP computations need to use getABITypeSize when computing offsets. Furthermore, if an alloca allocates several elements at once then these too need to be aligned, so the size of the alloca has to be the number of elements multiplied by getABITypeSize. Logically speaking this doesn't have to be the case when allocating just one element, but it is simpler to also use getABITypeSize in this case. So allocas and mallocs should use getABITypeSize. Finally, since gcc's only notion of size is that given by getABITypeSize, if you want to output assembler etc. the same as gcc then getABITypeSize is the size you want.

Since a store will overwrite no more than getTypeStoreSize bytes, and a read will read no more than that many bytes, this is the notion of size appropriate for alias analysis calculations.

In this patch I have corrected all type size uses except some of those in ScalarReplAggregates, lib/CodeGen, lib/Target (the hard cases). I will get around to auditing these too at some point, but I could do with some help.

Finally, I made one change which I think wise but others might consider pointless and suboptimal: in an unpacked struct the amount of space allocated for a field is now given by the ABI size rather than getTypeStoreSize. I did this because every other place that reserves memory for a type (e.g. alloca) now uses getABITypeSize, and I didn't want to make an exception for unpacked structs, i.e. I did it to make things more uniform. This only affects structs containing long doubles and arbitrary precision integers. If someone wants to pack these types more tightly they can always use a packed struct. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43620 91177308-0d34-0410-b5e6-96231b3b80d8
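To make the three sizes concrete, here is a minimal, self-contained C++ sketch of the arithmetic described above for the i36 example. The helper names and the 8-byte alignment assumed for i36 are illustrative only; in LLVM the real numbers come from the TargetData methods named in this commit.

    #include <cassert>
    #include <cstdint>

    // Bits needed to hold any value of an iN type (cf. getTypeSizeInBits).
    uint64_t typeSizeInBits(uint64_t NumBits) { return NumBits; }

    // Maximum bits touched when storing the type: rounded up to whole bytes
    // (cf. getTypeStoreSizeInBits).
    uint64_t typeStoreSizeInBits(uint64_t NumBits) {
      return 8 * ((NumBits + 7) / 8);
    }

    // Store size rounded up to a multiple of the ABI alignment, in bits
    // (cf. getABITypeSizeInBits).  The alignment is a target property; the
    // 8-byte alignment used below for i36 is an assumption for this example.
    uint64_t abiTypeSizeInBits(uint64_t NumBits, uint64_t AlignBytes) {
      uint64_t AlignBits = 8 * AlignBytes;
      uint64_t StoreBits = typeStoreSizeInBits(NumBits);
      return AlignBits * ((StoreBits + AlignBits - 1) / AlignBits);
    }

    int main() {
      assert(typeSizeInBits(36) == 36);       // minimum bits for an i36
      assert(typeStoreSizeInBits(36) == 40);  // bits a store may overwrite
      assert(abiTypeSizeInBits(36, 8) == 64); // array stride / alloca size
      return 0;
    }

An array of i36, a GEP over it, or an alloca of several i36s would all use the 64-bit figure, while alias analysis would use the 40-bit store size, as described above.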
2007-11-01  - Coalesce extract_subreg when both intervals are relatively small.  (Evan Cheng)
- Some code clean up. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43606 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-31  Promotion of sdiv/srem/udiv/urem.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43551 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-31  Add a newline at the end of the file.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43550 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-31  Add the skeleton of a better PHI elimination pass.  (Owen Anderson)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43542 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-31  Some fixes to get MachineDomTree working better.  (Owen Anderson)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43541 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-31  Make i64=expand_vector_elt(v2i64) work in 32-bit mode.  (Dale Johannesen)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43535 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-30  Typo.  (Evan Cheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43511 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-30  Add support for expanding trunc stores.  (Duncan Sands)
Consider storing an i170 on a 32 bit machine. This is first promoted to a trunc-i170 store of an i256. On a little-endian machine this expands to a store of an i128 and a trunc-i42 store of an i128. The trunc-i42 store is further expanded to a trunc-i42 store of an i64, then to a store of an i32 and a trunc-i10 store of an i32. At this point the operand type is legal (i32) and expansion stops (legalization of the trunc-i10 needs to be handled in LegalizeDAG.cpp). On big-endian machines the high bits are stored first, and some bit-fiddling is needed in order to generate aligned stores. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43499 91177308-0d34-0410-b5e6-96231b3b80d8
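The following is a rough, self-contained C++ sketch of the little-endian splitting described above. It only reproduces the arithmetic of the i170 example (the power-of-two container and the 32-bit legal width are taken from the commit message); it is not the legalizer's actual code.

    #include <cstdint>
    #include <iostream>

    // Little-endian expansion of a trunc store of ValueBits bits held in a
    // ContainerBits-wide integer, on a target whose widest legal integer is
    // LegalBits wide.  Prints the stores that would be emitted.
    void expandTruncStore(uint64_t ValueBits, uint64_t ContainerBits,
                          uint64_t LegalBits) {
      if (ContainerBits <= LegalBits) {
        // Operand type is legal; the remaining trunc store is left for
        // LegalizeDAG to handle.
        std::cout << "trunc-i" << ValueBits << " store of an i" << ContainerBits
                  << "\n";
        return;
      }
      uint64_t Half = ContainerBits / 2;
      if (ValueBits > Half) {
        // The low half is stored whole; the leftover high bits become a
        // narrower trunc store (e.g. i170 -> store i128 + trunc-i42 of i128).
        std::cout << "store of an i" << Half << "\n";
        expandTruncStore(ValueBits - Half, Half, LegalBits);
      } else {
        // All significant bits fit in the low half; just narrow the container
        // (e.g. trunc-i42 of i128 -> trunc-i42 of i64).
        expandTruncStore(ValueBits, Half, LegalBits);
      }
    }

    int main() {
      // i170 on a 32-bit machine, first promoted to a trunc-i170 store of i256.
      expandTruncStore(170, 256, 32);
      return 0;
    }

For the i170 example this prints a store of an i128, a store of an i32 and a trunc-i10 store of an i32, matching the sequence described in the commit message.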
2007-10-30  If a call to getTruncStore is for a normal store, offload to getStore rather than trying to handle both cases at once (the assertions for example assume the store really is truncating).  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43498 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-29  Fix a DAGCombiner abort on a bitcast from a scalar to a vector.  (Dan Gohman)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43470 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-29  Enable the (sext (load x)) -> (sext (truncate (sextload x))) fold in more cases.  (Evan Cheng)
Previously, the transformation was restricted by requiring that the load have only one use. Now the restriction is loosened by also allowing setcc uses to be "extended" (e.g. setcc x, c, eq -> setcc sext(x), sext(c), eq). git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43465 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-29  Add explicit keywords.  (Dan Gohman)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43464 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-28  The guaranteed alignment of ptr+offset is only the minimum of offset and the alignment of ptr, if these are both powers of 2.  (Duncan Sands)
While the ptr alignment is guaranteed to be a power of 2, there is no reason to think that offset is. For example, if offset is 12 (the size of a long double on x86-32 Linux) and the alignment of ptr is 8, then the alignment of ptr+offset will in general be 4, not 8. Introduce a function MinAlign, lifted from gcc, for computing the minimum guaranteed alignment. I've tried to fix up everywhere under lib/CodeGen/SelectionDAG/. I also changed some places that weren't wrong (because both values were a power of 2), as a defensive change against people copying and pasting the code. Hopefully someone who cares about alignment will review the rest of LLVM and fix up the remaining places. Since I'm on x86 I'm not very motivated to do this myself... git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43421 91177308-0d34-0410-b5e6-96231b3b80d8
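As an illustration, here is one way such a MinAlign helper can be written (a sketch, not necessarily the implementation lifted from gcc): the largest power of 2 dividing both values is the lowest set bit of A | B.

    #include <cassert>
    #include <cstdint>

    // Minimum guaranteed alignment of A and B: the largest power of 2 that
    // divides both values.  Illustrative sketch only.
    static inline uint64_t MinAlign(uint64_t A, uint64_t B) {
      return (A | B) & (~(A | B) + 1);  // isolate the lowest set bit of A | B
    }

    int main() {
      // The example from the commit message: a pointer aligned to 8 plus an
      // offset of 12 (sizeof(long double) on x86-32 Linux) is only 4-aligned.
      assert(MinAlign(8, 12) == 4);
      assert(MinAlign(8, 8) == 8);
      assert(MinAlign(16, 4) == 4);
      return 0;
    }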
2007-10-26  - Remove the hacky code that forces a memcpy. Alignment is taken care of in the FE.  (Bill Wendling)
- Explicitly pass in the alignment of the load & store.
- XFAIL 2007-10-23-UnalignedMemcpy.ll because llc has a bug that crashes on unaligned pointers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43398 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-25  Changed XXX to FIXME, and added comment to the README file.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43359 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-25  Added comment explaining why we are doing this check.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43353 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-25  Small formatting changes. Add a sanity check.  (Duncan Sands)
Use NVT rather than looking it up, since we have it to hand. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43341 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-25  Promote SETCC operands.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43340 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-25  Correctly extract the ValueType from a VTSDNode.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43339 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-24  Another expansion for i64 multiply, suitable for PPC.  (Dale Johannesen)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43314 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-23  Fix comment and use the "Size" variable that's already provided.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43271 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-23  If there's an unaligned memcpy to/from the stack, don't lower it. Just call the memcpy library function instead.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43270 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-23  This broke lots. Reverting.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43264 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-23  Lowering a memcpy to the stack is killing PPC. The ARM and X86 backends already have their own custom memcpy lowering code.  (Bill Wendling)
This code needs to be factored out into a target-independent lowering method with hooks to the backend. In the meantime, just call memcpy if we're trying to copy onto a stack. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43262 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-23  It's possible to commute instructions with more than 3 operands.  (Evan Cheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43256 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-23  isSubRegOf() is a dup of isSubRegister.  (Evan Cheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43249 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-22  Add missing parentheses.  (Evan Cheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43227 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-22  Support for expanding extending loads of integers with funky bit-widths.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43225 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-22  Fix up the logic for result expanding the various extension operations so they work right for integers with funky bit-widths.  (Duncan Sands)
For example, consider extending i48 to i64 on a 32 bit machine. The i64 result is expanded to 2 x i32. We know that the i48 operand will be promoted to i64, then also expanded to 2 x i32. If we had the expanded promoted operand to hand, then expanding the result would be trivial. Unfortunately at this stage we can only get hold of the promoted operand. So instead we kind of hand-expand, doing explicit shifting and truncating to get the top and bottom halves of the i64 operand into 2 x i32, which are then used to expand the result. This is harmless, because when the promoted operand is finally expanded all this bit fiddling turns into trivial operations which are eliminated either by the expansion code itself or the DAG combiner. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43223 91177308-0d34-0410-b5e6-96231b3b80d8
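A small scalar sketch of the hand-expansion described above, for the i48 -> i64 sign-extension case on a 32-bit target. Plain C++ integer arithmetic stands in for the DAG nodes; this only shows the shifting and truncation, not the legalizer itself, and the sample value is made up for the example.

    #include <cassert>
    #include <cstdint>

    int main() {
      // A negative i48 value that has already been promoted to i64 by
      // sign-extension (the only form available at this stage).
      int64_t Promoted = -0x123456789ABLL;

      // Hand-expand the i64 result into 2 x i32 with explicit shifting and
      // truncation, mirroring what the result expansion does with DAG nodes.
      uint32_t Lo = (uint32_t)(uint64_t)Promoted;          // bottom half
      uint32_t Hi = (uint32_t)((uint64_t)Promoted >> 32);  // top half

      // Gluing the halves back together recovers the sign-extended result,
      // so the extra bit fiddling is semantically a no-op that later folds away.
      int64_t Reassembled = (int64_t)(((uint64_t)Hi << 32) | Lo);
      assert(Reassembled == Promoted);
      (void)Reassembled;
      return 0;
    }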
2007-10-22  - Only perform the unfolding optimization when the folding in question is modref.  (Evan Cheng)
- Remove a bogus assertion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43211 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-20  Add promote operand support for [su]int_to_fp.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43204 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-20  Add result promotion of FP_TO_*INT, fixing CodeGen/X86/trunc-to-bool.ll with the new legalizer.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43199 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-20  Simplify some code.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43198 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-20  Implement promote and expand for operands of memcpy and friends.  (Chris Lattner)
This fixes CodeGen/X86/mem*.ll. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43197 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-20  Added missing curly braces; their absence rendered the if clause useless in debug builds.  (Evan Cheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43196 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-20  Fix a few places where vector operations were not getting the operand's type from the right place.  (Dale Johannesen)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43195 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Local spiller optimization:  (Evan Cheng)
Turn a store folding instruction into a load folding instruction. e.g.
    xorl %edi, %eax
    movl %eax, -32(%ebp)
    movl -36(%ebp), %eax
    orl %eax, -32(%ebp)
=>
    xorl %edi, %eax
    orl -36(%ebp), %eax
    mov %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will also eliminate the newly introduced store instruction. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43192 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Don't branch fold inline asm statements.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43191 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Add support for a few more nodes.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43190 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Redo "last ppc long double fix" as Chris wants.  (Dale Johannesen)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43189 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Fix a really nasty vector miscompilation Bill recently introduced.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43181 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Rename ExpandOperation to ExpandOperationResult, as suggested by Duncan.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43177 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Support for expanding ADDE and SUBE.  (Duncan Sands)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43175 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  If the value types are equal then this routine asserts in later checks rather than producing the ordinary load it is supposed to.  (Duncan Sands)
Avoid all such hassles by directly returning an ordinary load in this case. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43174 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Add support for byval function arguments that are not 32-bit aligned.  (Rafael Espindola)
To do this it is necessary to add an "always inline" argument to the memcpy node. For completeness I have also added this argument to the memmove and memset nodes. I have also added getMem* functions, because the extra argument makes it cumbersome to use getNode and because I get confused by it :-) git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43172 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Implement a few new operations.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43171 91177308-0d34-0410-b5e6-96231b3b80d8
2007-10-19  Implement expansion of SINT_TO_FP and UINT_TO_FP operands.  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@43170 91177308-0d34-0410-b5e6-96231b3b80d8