path: root/lib/Transforms
2012-04-13  Add support to BBVectorize for vectorizing selects.  (Hal Finkel)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154700 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13  Add some comments, and fix a few places that missed setting Changed.  (Dan Gohman)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154687 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13  Consider ObjC runtime calls objc_storeWeak and others which make a copy of their argument as "escape" points for objc_retainBlock optimization.  (Dan Gohman)
This fixes rdar://11229925.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154682 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13  By default, use Early-CSE instead of GVN for vectorization cleanup.  (Hal Finkel)
As has been suggested by Duncan and others, Early-CSE and GVN should do similar redundancy elimination, but Early-CSE is much less expensive. Most of my autovectorization benchmarks show a performance regression, but all of these are < 0.1%, and so I think that it is still worth using the less expensive pass.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154673 91177308-0d34-0410-b5e6-96231b3b80d8
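A minimal sketch of the pipeline choice this describes. The helper and the UseGVNAfterVectorization flag are hypothetical stand-ins for whatever option actually controls the cleanup pass; createEarlyCSEPass and createGVNPass are the standard factory functions from llvm/Transforms/Scalar.h of that era.

    #include "llvm/PassManager.h"
    #include "llvm/Transforms/Scalar.h"

    // Add the redundancy-elimination pass that runs after BBVectorize.
    // 'UseGVNAfterVectorization' is a hypothetical flag for illustration only.
    static void addVectorizeCleanup(llvm::PassManagerBase &PM,
                                    bool UseGVNAfterVectorization) {
      if (UseGVNAfterVectorization)
        PM.add(llvm::createGVNPass());      // more thorough, but expensive
      else
        PM.add(llvm::createEarlyCSEPass()); // cheaper redundancy elimination
    }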
2012-04-13  Use the new Use-aware dominates method to apply the objc runtime library return value optimization for phi uses.  (Dan Gohman)
Even when the phi itself is not dominated, the specific use may be dominated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154647 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13  Code-gen may inject code into the IR before it emits the ASM.  (Bill Wendling)
The linker obviously cannot know that this code is present, let alone used. So prevent the internalize pass from internalizing those global values which code-gen may insert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154645 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-13  Don't move objc_autorelease calls past autorelease pool boundaries when optimizing autorelease calls on phi nodes with null operands.  (Dan Gohman)
This fixes rdar://11207070.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154642 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-11  Typo.  (Chad Rosier)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154522 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-11  Add two statistics to help track how we are computing the inline cost.  (Chandler Carruth)
Yea, 'NumCallerCallersAnalyzed' isn't a great name, suggestions welcome. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154492 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-10  [tsan] two more compile-time optimizations:  (Kostya Serebryany)
- don't instrument reads from constant globals. Saves ~1.5% of instrumented instructions on CPU2006 (counting static instructions, not their execution).
- don't instrument reads from vtables (a vtable is a global constant too). Saves ~5%.
I did not measure the run-time impact of this, but it is certainly non-negative.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154444 91177308-0d34-0410-b5e6-96231b3b80d8
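A hedged sketch of the check this describes: a read whose address is a constant global (a vtable is one too) can never race with a write, so it need not be instrumented. The function is simplified relative to the actual ThreadSanitizer pass; header paths follow the LLVM layout of that era.

    #include "llvm/GlobalVariable.h"
    #include "llvm/Value.h"
    #include "llvm/Support/Casting.h"

    using namespace llvm;

    // Return true if Addr points into constant data that can never be written,
    // in which case a racy read is impossible and instrumentation is skipped.
    static bool addrPointsToConstantData(Value *Addr) {
      if (GlobalVariable *GV = dyn_cast<GlobalVariable>(Addr))
        return GV->isConstant();   // covers vtables, string constants, etc.
      return false;
    }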
2012-04-10  [tsan] compile-time instrumentation: do not instrument a read if a write to the same temp follows in the same BB.  (Kostya Serebryany)
Also add stats printing. On Spec CPU2006 this optimization saves roughly 4% of instrumented reads (which is 3% of all instrumented accesses):
  Writes             : 161216
  Reads              : 446458
  Reads-before-write : 18295
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154418 91177308-0d34-0410-b5e6-96231b3b80d8
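A simplified sketch of the read-before-write filter described above: within one basic block, a load whose address is also stored to later in that block is already covered by instrumenting the store, so the load can be skipped. Names and details are illustrative rather than the exact ThreadSanitizer.cpp code.

    #include "llvm/Instructions.h"
    #include "llvm/ADT/SmallPtrSet.h"
    #include "llvm/ADT/SmallVector.h"

    using namespace llvm;

    // LocalAccesses holds one basic block's loads and stores in program order.
    static void chooseInstructionsToInstrument(
        const SmallVectorImpl<Instruction *> &LocalAccesses,
        SmallVectorImpl<Instruction *> &ToInstrument) {
      SmallPtrSet<Value *, 8> WriteTargets;
      // Walk in reverse so a store is seen before earlier loads of the same
      // address.
      for (unsigned i = LocalAccesses.size(); i != 0; --i) {
        Instruction *I = LocalAccesses[i - 1];
        if (StoreInst *SI = dyn_cast<StoreInst>(I)) {
          WriteTargets.insert(SI->getPointerOperand());
          ToInstrument.push_back(SI);
        } else if (LoadInst *LI = dyn_cast<LoadInst>(I)) {
          if (!WriteTargets.count(LI->getPointerOperand()))
            ToInstrument.push_back(LI); // no later write: instrument the read
          // else: a write to the same address follows in this BB, skip it
        }
      }
    }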
2012-04-10  Fix 12513: Loop unrolling breaks with indirect branches.  (Andrew Trick)
Take this opportunity to generalize the indirectbr bailout logic for loop transformations. CFG transformations will never get indirectbr right, and there's no point trying. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154386 91177308-0d34-0410-b5e6-96231b3b80d8
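The generalized bailout can be pictured as a small predicate over the loop, as in the sketch below; the helper name is illustrative and not necessarily the utility this commit added.

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Instructions.h"

    using namespace llvm;

    // Refuse to transform a loop if any of its blocks (or its preheader) is
    // terminated by an indirectbr: such edges cannot be split or redirected
    // safely, so CFG-restructuring loop transforms must bail out.
    static bool loopContainsIndirectBr(Loop *L) {
      if (BasicBlock *Preheader = L->getLoopPreheader())
        if (isa<IndirectBrInst>(Preheader->getTerminator()))
          return true;
      for (Loop::block_iterator I = L->block_begin(), E = L->block_end();
           I != E; ++I)
        if (isa<IndirectBrInst>((*I)->getTerminator()))
          return true;
      return false;
    }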
2012-04-10  whitespace  (Andrew Trick)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154385 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-08  Teach InstCombine to nuke a common alloca pattern -- an alloca which has GEPs, bit casts, and stores reaching it but no other instructions.  (Chandler Carruth)
These often show up during the iterative processing of the inliner, SROA, and DCE. Once we hit this point, we can completely remove the alloca. These were actually showing up in the final, fully optimized code in a bunch of inliner tests I've been working on, and notably they show up after LLVM finishes optimizing away all function calls involved in hash_combine(a, b).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154285 91177308-0d34-0410-b5e6-96231b3b80d8
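A simplified sketch of the pattern being matched, under a hypothetical helper name: walk the alloca's transitive users and give up on anything other than GEPs, bitcasts, and stores into the memory. If the walk succeeds, nothing ever reads the alloca and the whole tree is dead.

    #include "llvm/Instructions.h"
    #include "llvm/ADT/SmallVector.h"

    using namespace llvm;

    // Return true if AI is only ever written to (directly or through GEPs and
    // bitcasts), i.e. its contents are never read and never escape.
    static bool isOnlyStoredTo(AllocaInst *AI) {
      SmallVector<Instruction *, 8> Worklist(1, AI);
      while (!Worklist.empty()) {
        Instruction *I = Worklist.pop_back_val();
        for (Value::use_iterator UI = I->use_begin(), UE = I->use_end();
             UI != UE; ++UI) {
          Instruction *User = cast<Instruction>(*UI);
          if (isa<GetElementPtrInst>(User) || isa<BitCastInst>(User)) {
            Worklist.push_back(User);        // keep walking the address tree
          } else if (StoreInst *SI = dyn_cast<StoreInst>(User)) {
            if (SI->getValueOperand() == I)
              return false;                  // the address itself is stored away
          } else {
            return false;                    // any other use, e.g. a load
          }
        }
      }
      return true;
    }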
2012-04-07  Refactor: Use positive field names in VectorizeConfig.  (Hongbin Zheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154249 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06  Sink the collection of return instructions until after *all* simplification has been performed.  (Chandler Carruth)
This is a bit less efficient (requires another ilist walk of the basic blocks) but shouldn't matter in practice. More importantly, it's just too much work to keep track of all the various ways the return instructions can be mutated while simplifying them. This fixes yet another crasher, reported by Daniel Dunbar.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154179 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06  Make GVN's propagateEquality non-recursive. No intended functionality change.  (Duncan Sands)
The modifications are a lot more trivial than they appear to be in the diff! git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154174 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-06  Sink the return instruction collection until after we're done deleting dead code, including dead return instructions in some cases.  (Chandler Carruth)
Otherwise, we end up having a bogus pointer to a return instruction that blows up much further down the road. It turns out that this pattern is both simpler to code, easier to update in the face of enhancements to the inliner cleanup, and likely cheaper given that it won't add dead instructions to the list. Thanks to John Regehr's numerous test cases for teasing this out.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154157 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-05  Fix accidentally inverted logic from r152803, and make the testcase slightly less trivial.  (Dan Gohman)
This fixes rdar://11171718.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154118 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-05  BBVectorize: Add the const modifier to the VectorizeConfig because we won't modify it.  (Hongbin Zheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154098 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-05  Introduce the VectorizeConfig class, with which we can control the behavior of the BBVectorizePass without using command line options.  (Hongbin Zheng)
As pointed out by Hal, we can ask the TargetLoweringInfo for the architecture-specific VectorizeConfig to perform vectorizing with architecture-specific information.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154096 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-05  Add the function "vectorizeBasicBlock", which allows users to vectorize a BasicBlock from other passes.  (Hongbin Zheng)
For example, we can call vectorizeBasicBlock in the loop unroll pass right after the loop is unrolled.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154089 91177308-0d34-0410-b5e6-96231b3b80d8
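A hedged usage sketch: another pass (here, imagined to be loop unrolling) tries to vectorize a freshly unrolled block. It assumes the entry point takes the calling pass, the block, and an optional VectorizeConfig; the exact signature in llvm/Transforms/Vectorize.h may differ slightly.

    #include "llvm/Pass.h"
    #include "llvm/BasicBlock.h"
    #include "llvm/Transforms/Vectorize.h"

    using namespace llvm;

    // Attempt basic-block vectorization with the default, target-independent
    // configuration; returns true if the block was changed.
    static bool tryVectorizeUnrolledBlock(Pass *P, BasicBlock &BB) {
      VectorizeConfig Config;   // default settings; a target could supply its own
      return vectorizeBasicBlock(P, BB, Config);
    }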
2012-04-05  Pass the right sign to TLI->isLegalICmpImmediate.  (Jakob Stoklund Olesen)
LSR can fold three addressing modes into its ICmpZero node:
  ICmpZero BaseReg + Offset      => ICmp BaseReg, -Offset
  ICmpZero -1*ScaleReg + Offset  => ICmp ScaleReg, Offset
  ICmpZero BaseReg + -1*ScaleReg => ICmp BaseReg, ScaleReg
The first two cases are only used if TLI->isLegalICmpImmediate() likes the offset. Make sure the right Offset sign is passed to this method in the second case. The ARM version is not symmetric.
<rdar://problem/11184260>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154079 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-04  Always compute all the bits in ComputeMaskedBits.  (Rafael Espindola)
This allows us to keep passing reduced masks to SimplifyDemandedBits, but know about all the bits if SimplifyDemandedBits fails. This allows instcombine to simplify cases like the one in the included testcase. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154011 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-04  LoopUnrollPass: Use variable "Threshold" instead of "CurrentThreshold" when reducing the unroll count; otherwise the reduced unroll count does not take the "OptimizeForSize" attribute into account.  (Hongbin Zheng)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@154007 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-02  Add an option to turn off the expensive GVN load PRE part of GVN.  (Bill Wendling)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153902 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-02  Fast fix for PR12343: http://llvm.org/bugs/show_bug.cgi?id=12343  (Stepan Dyatkovskiy)
We have no trivial way to split edges that come from an indirect branch. We can do it with some tricks, but that should be discussed further, and it is still dangerous because indirect branches are difficult to control. The fix forbids this case for unswitching.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153879 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01  Belatedly address some code review from Chris.  (Chandler Carruth)
As a side note, I really dislike array_pod_sort... Do we really still care about any STL implementations that get this so wrong? Does libc++? git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153834 91177308-0d34-0410-b5e6-96231b3b80d8
2012-04-01  Fix a pretty scary bug I introduced into the always inliner with a single missing character.  (Chandler Carruth)
Somehow, this had gone untested. I've added tests for returns-twice logic specifically with the always-inliner that would have caught this, and fixed the bug. Thanks to Matt for the careful review and spotting this!!! =D
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153832 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31  Give the always-inliner its own custom filter.  (Chandler Carruth)
It shouldn't have to pay the very high overhead of the complex inline cost analysis when all it wants to do is detect three patterns which must not be inlined. Comment the code, clean it up, and leave some hints about possible performance improvements if this ever shows up on a profile. Moving this off of the (now more expensive) inline cost analysis is particularly important because we have to run this inliner even at -O0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153814 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31  Remove a bunch of empty, dead, and no-op methods from all of these interfaces.  (Chandler Carruth)
These methods were used in the old inline cost system where there was a persistent cache that had to be updated, invalidated, and cleared. We're now doing more direct computations that don't require this intricate dance. Even if we resume some level of caching, it would almost certainly have a simpler and more narrow interface than this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153813 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31  Initial commit for the rewrite of the inline cost analysis to operate on a per-callsite walk of the called function's instructions, in breadth-first order over the potentially reachable set of basic blocks.  (Chandler Carruth)

This is a major shift in how inline cost analysis works to improve the accuracy and rationality of inlining decisions. A brief outline of the algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for breadth-first ordering) and walk its instructions using a custom InstVisitor.
- For each instruction's operands, re-map them based on the simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the cost metric.
- When the terminator is reached, replace any conditional value in the terminator with any simplifications from the mapping we have, and add any successors which are not proven to be dead from these simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code paths. We do not want any code in trivially dead code paths to impact inlining decisions. The previous metric was *extremely* flawed here, and would always subtract the average cost of two successors of a conditional branch when it was proven to become an unconditional branch at the callsite. There was no handling of wildly different costs between the two successors, which would cause inlining when the path actually taken was too large, and no inlining when the path actually taken was trivially simple. There was also no handling of the code *path*, only the immediate successors. These problems vanish completely now. See the added regression tests for the shiny new features -- we skip recursive function calls, SROA-killing instructions, and high cost complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost interface to accept the actual threshold rather than simply returning a single cost. The resulting interface is pretty bad, and I'm planning to do lots of interface cleanup after this patch. Several other refactorings fell out of this, but I've tried to minimize them for this patch. =/ There is still more cleanup that can be done here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of the previous heuristics in the new model. It's not clear that they are all correct any more, but I wanted to minimize the change in this single patch, it's already a bit ridiculous. One heuristic that is *not* yet mirrored is to allow inlining of functions with a dynamic alloca *if* the caller has a dynamic alloca. I will add this back, but I think the most reasonable way requires changes to the inliner itself rather than just the cost metric, and so I've deferred this for a subsequent patch. The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1% to 2% faster in -O0, but makes its binary size grow by just under 4%. I've looked into the 4% growth, and it can be fixed, but requires changes to other parts of the inliner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153812 91177308-0d34-0410-b5e6-96231b3b80d8
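A heavily simplified sketch of the threshold-bounded, breadth-first walk outlined above. The simplification mapping, per-instruction bonuses, and terminator folding are elided; analyzeBlockCost and isWorthInlining are invented names standing in for the new per-callsite analysis, not its actual code.

    #include "llvm/BasicBlock.h"
    #include "llvm/Function.h"
    #include "llvm/Instructions.h"
    #include "llvm/ADT/SmallPtrSet.h"
    #include <deque>

    using namespace llvm;

    // Placeholder for the custom InstVisitor: charge a flat cost per instruction.
    static int analyzeBlockCost(BasicBlock *BB) {
      int Cost = 0;
      for (BasicBlock::iterator I = BB->begin(), E = BB->end(); I != E; ++I)
        Cost += 5;
      return Cost;
    }

    static bool isWorthInlining(Function &Callee, int Threshold) {
      std::deque<BasicBlock *> Worklist;           // FIFO => breadth-first order
      SmallPtrSet<BasicBlock *, 16> Visited;
      Worklist.push_back(&Callee.getEntryBlock());
      Visited.insert(&Callee.getEntryBlock());

      int Cost = 0;
      while (!Worklist.empty()) {
        BasicBlock *BB = Worklist.front();
        Worklist.pop_front();
        Cost += analyzeBlockCost(BB);
        if (Cost > Threshold)
          return false;                            // bound the analysis cost
        // Enqueue successors not yet visited. The real analysis first folds the
        // terminator's condition with the callsite's argument simplifications
        // and skips successors it can prove dead.
        TerminatorInst *TI = BB->getTerminator();
        for (unsigned i = 0, e = TI->getNumSuccessors(); i != e; ++i) {
          BasicBlock *Succ = TI->getSuccessor(i);
          if (!Visited.count(Succ)) {
            Visited.insert(Succ);
            Worklist.push_back(Succ);
          }
        }
      }
      return Cost <= Threshold;
    }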
2012-03-31  Internalize: Remove the reference to @llvm.noinline; it was replaced with the noinline attribute a long time ago.  (Benjamin Kramer)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153806 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-31  Correctly vectorize powi.  (Hal Finkel)
The powi intrinsic requires special handling because it always takes a single integer power regardless of the result type. As a result, we can vectorize only if the powers are equal. Fixes PR12364. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153797 91177308-0d34-0410-b5e6-96231b3b80d8
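A sketch of the check this implies, under an illustrative helper name: two llvm.powi calls can only be fused into a single vector powi when their scalar power operands are identical.

    #include "llvm/Instructions.h"
    #include "llvm/Intrinsics.h"
    #include "llvm/IntrinsicInst.h"

    using namespace llvm;

    // llvm.powi takes a scalar i32 power even when the result is a vector, so
    // pairing two calls requires the powers to match exactly.
    static bool canPairPowiCalls(CallInst *A, CallInst *B) {
      IntrinsicInst *IA = dyn_cast<IntrinsicInst>(A);
      IntrinsicInst *IB = dyn_cast<IntrinsicInst>(B);
      if (!IA || !IB ||
          IA->getIntrinsicID() != Intrinsic::powi ||
          IB->getIntrinsicID() != Intrinsic::powi)
        return false;
      // Operand 0 is the value being raised; operand 1 is the integer power.
      return IA->getArgOperand(1) == IB->getArgOperand(1);
    }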
2012-03-29  Don't PRE compares.  (Jakob Stoklund Olesen)
CodeGenPrepare sinks compare instructions down to their uses to prevent live flags and predicate registers across basic blocks. PRE of a compare instruction prevents that, forcing the i1 compare result into a general purpose register. That is usually more expensive than the redundant compare PRE was trying to eliminate in the first place. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153657 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-28  GlobalOpt: If we have an inbounds GEP from a ConstantAggregateZero global that we just determined to be constant, replace all loads from it with a zero value.  (Benjamin Kramer)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153576 91177308-0d34-0410-b5e6-96231b3b80d8
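A simplified sketch of the rewrite, not the verbatim GlobalOpt code: any load through an inbounds GEP into an all-zero constant global must read zero, so its uses can be replaced with a null value of the loaded type (erasing the now-dead load is omitted here).

    #include "llvm/Constants.h"
    #include "llvm/GlobalVariable.h"
    #include "llvm/Instructions.h"
    #include "llvm/Operator.h"

    using namespace llvm;

    static void zapLoadsFromZeroGlobal(GlobalVariable *GV) {
      if (!GV->hasInitializer() ||
          !isa<ConstantAggregateZero>(GV->getInitializer()))
        return;
      for (Value::use_iterator UI = GV->use_begin(), UE = GV->use_end();
           UI != UE; ++UI) {
        GEPOperator *GEP = dyn_cast<GEPOperator>(*UI);
        if (!GEP || !GEP->isInBounds())
          continue;
        // Every load through this GEP reads from zero-initialized memory.
        for (Value::use_iterator GI = GEP->use_begin(), GE = GEP->use_end();
             GI != GE; ++GI)
          if (LoadInst *LI = dyn_cast<LoadInst>(*GI))
            LI->replaceAllUsesWith(Constant::getNullValue(LI->getType()));
      }
    }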
2012-03-28  Switch to WeakVHs in the value mapper, and aggressively prune dead basic blocks in the function cloner.  (Chandler Carruth)
This removes the last case of trivially dead code that I've been seeing in the wild getting inlined, analyzed, re-inlined, optimized, only to be deleted. Nukes a FIXME from the cleanup tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153572 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-28  Fix 80-column violation.  (Chad Rosier)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153556 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-27  Make a seemingly tiny change to the inliner and fix the generated code size bloat.  (Chandler Carruth)

Unfortunately, I expect this to disable the majority of the benefit from r152737. I'm hopeful at least that it will fix PR12345. To explain this requires... quite a bit of backstory I'm afraid.

TL;DR: The change in r152737 actually did The Wrong Thing for linkonce-odr functions. This change makes it do the right thing. The benefits we saw were simple luck, not any actual strategy. Benchmark numbers after a mini-blog-post so that I've written down my thoughts on why all of this works and doesn't work...

To understand what's going on here, you have to understand how the "bottom-up" inliner actually works. There are two fundamental modes to the inliner:

1) Standard fixed-cost bottom-up inlining. This is the mode we usually think about. It walks from the bottom of the CFG up to the top, looking at callsites, taking information about the callsite and the called function and computing the expected cost of inlining into that callsite. If the cost is under a fixed threshold, it inlines. It's a touch more complicated than that due to all the bonuses, weights, etc. Inlining the last callsite to an internal function gets higher weight, etc. But essentially, this is the mode of operation.

2) Deferred bottom-up inlining (a term I just made up). This is the interesting mode for this patch and r152737. Initially, this works just like mode #1, but once we have the cost of inlining into the callsite, we don't just compare it with a fixed threshold. First, we check something else. Let's give some names to the entities at this point, or we'll end up hopelessly confused. We're considering inlining a function 'A' into its callsite within a function 'B'. We want to check whether 'B' has any callers, and whether it might be inlined into those callers. If so, we also check whether inlining 'A' into 'B' would block any of the opportunities for inlining 'B' into its callers. We take the sum of the costs of inlining 'B' into its callers where that inlining would be blocked by inlining 'A' into 'B', and if that cost is less than the cost of inlining 'A' into 'B', then we skip inlining 'A' into 'B'.

Now, in order for #2 to make sense, we have to have some confidence that we will actually have the opportunity to inline 'B' into its callers when cheaper, *and* that we'll be able to revisit the decision and inline 'A' into 'B' if that ever becomes the correct tradeoff. This often isn't true for external functions -- we can see very few of their callers, and we won't be able to re-consider inlining 'A' into 'B' if 'B' is external when we finally see more callers of 'B'. There are two cases where we believe this to be true for C/C++ code: functions local to a translation unit, and functions with an inline definition in every translation unit which uses them. These are represented as internal linkage and linkonce-odr (resp.) in LLVM. I enabled this logic for linkonce-odr in r152737.

Unfortunately, when I did that, I also introduced a subtle bug. There was an implicit assumption that the last caller of the function within the TU was the last caller of the function in the program. We want to bonus the last caller of the function in the program by a huge amount for inlining because inlining that callsite has very little cost. Unfortunately, the last caller in the TU of a linkonce-odr function is *not* the last caller in the program, and so we don't want to apply this bonus. If we do, we can apply it to one callsite *per-TU*. Because of the way deferred inlining works, when it sees this bonus applied to one callsite in the TU for 'B', it decides that inlining 'B' is of the *utmost* importance just so we can get that final bonus. It then proceeds to essentially force deferred inlining regardless of the actual cost tradeoff. The result? PR12345: code bloat, code bloat, code bloat. Another result is getting *damn* lucky on a few benchmarks, and the over-inlining exposing critically important optimizations.

I would very much like a list of benchmarks that regress after this change goes in, with bitcode before and after. This will help me greatly understand what opportunities the current cost analysis is missing. Initial benchmark numbers look very good. WebKit files that exhibited the worst of PR12345 went from growing to shrinking compared to Clang with r152737 reverted.
- Bootstrapped Clang is 3% smaller with this change.
- Bootstrapped Clang -O0 over a single-source-file of lib/Lex is 4% faster with this change.
Please let me know about any other performance impact you see. Thanks to Nico for reporting and urging me to actually fix this, and to Richard Smith, Duncan Sands, Manuel Klimek, and Benjamin Kramer for talking through the issues today.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153506 91177308-0d34-0410-b5e6-96231b3b80d8
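The deferral rule in mode #2 reduces to a small comparison. The sketch below uses invented names and plain integers purely to restate that rule; it is not the inliner's actual code.

    #include <vector>
    #include <cstddef>

    // Considering inlining callee 'A' into caller 'B': defer if the total cost
    // of the inlines of 'B' into its own callers that this would block is
    // smaller than the cost of inlining 'A' into 'B'.
    static bool shouldDeferInlining(int CostOfAIntoB,
                                    const std::vector<int> &BlockedCostsOfBIntoCallers) {
      int TotalBlocked = 0;
      for (std::size_t i = 0, e = BlockedCostsOfBIntoCallers.size(); i != e; ++i)
        TotalBlocked += BlockedCostsOfBIntoCallers[i];
      return TotalBlocked < CostOfAIntoB;  // the blocked opportunities are cheaper
    }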
2012-03-26  153465 was incorrect. In this code we wanted to check that the pointer operand is of pointer type (and not vector type).  (Nadav Rotem)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153468 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  PR12357: The pointer was used before it was checked.  (Nadav Rotem)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153465 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  LSR ivchain bug fix: corner case with ConstantExpr.  (Andrew Trick)
Fixes PR11950. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153463 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  comment typo  (Andrew Trick)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153462 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  eliminate an unneeded branch, part of PR12357  (Chris Lattner)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153458 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  Tidy.  (Eric Christopher)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153456 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  Tidy.  (Eric Christopher)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153455 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  LSR cleanup: potential bug caught by PVS-Studio.  (Andrew Trick)
Thanks Andrey. git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153451 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  [tsan] treat vtable pointer updates in a special way (requires tbaa); fix a bug (forgot to return true after instrumenting); make sure the tsan tests are run  (Kostya Serebryany)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153448 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-26  Prune some includes and forward declarations.  (Craig Topper)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153429 91177308-0d34-0410-b5e6-96231b3b80d8
2012-03-25  Teach the function cloner (and thus the inliner) to simplify PHINodes aggressively.  (Chandler Carruth)
There are lots of dire warnings about this being expensive that seem to predate switching to the TrackingVH-based value remapper that is automatically updated on RAUW. This makes it easy to not just prune single-entry PHIs, but to fully simplify PHIs, and to recursively simplify the newly inlined code to propagate PHINode simplifications.

This introduces a bit of a thorny problem though. We may end up simplifying a branch condition to a constant when we fold PHINodes, and we would like to nuke any dead blocks resulting from this so that time isn't wasted continually analyzing them, but this isn't easy. Deleting basic blocks *after* they are fully cloned and mapped into the new function currently requires manually updating the value map. The last piece of the simplification-during-inlining puzzle will require either switching to WeakVH mappings or some other piece of refactoring. I've left a FIXME in the testcase about this.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@153410 91177308-0d34-0410-b5e6-96231b3b80d8