Patch by Luis Felipe Strano Moraes!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129558 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129419 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129403 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we have a first-class way to represent unaligned loads, the unaligned
load intrinsics are superfluous.
First part of <rdar://problem/8460511>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@129401 91177308-0d34-0410-b5e6-96231b3b80d8
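A minimal sketch of the idea, assuming today's IRBuilder API (the helper name and calls are illustrative, not code from this commit): an unaligned SSE load no longer needs a target intrinsic such as llvm.x86.sse.loadu.ps and can be written as an ordinary load with alignment 1.

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/Support/Alignment.h"

  // Hypothetical helper: emit a <4 x float> load from a possibly unaligned
  // pointer as a first-class load with align 1 instead of an intrinsic call.
  llvm::Value *emitUnalignedV4F32Load(llvm::IRBuilder<> &B, llvm::Value *Ptr) {
    auto *VecTy = llvm::FixedVectorType::get(B.getFloatTy(), 4);
    return B.CreateAlignedLoad(VecTy, Ptr, llvm::Align(1), "uload");
  }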
regs. This is the only change in this checkin that may affect the
default scheduler. With better register tracking and heuristics, it
doesn't make sense to artificially lower the register limit so much.
Added -sched-high-latency-cycles and X86InstrInfo::isHighLatencyDef to
give the scheduler a way to account for div and sqrt on targets that
don't have an itinerary. It currently defaults to 10 (the actual
number doesn't matter much), but only takes effect on non-default
schedulers: list-hybrid and list-ilp.
Added several heuristics that can be individually disabled for the
non-default sched=list-ilp mode. This helps us determine how much
better we can do on a given benchmark than the default
scheduler. Certain compute intensive loops run much faster in this
mode with the right set of heuristics, and it doesn't seem to have
much negative impact elsewhere. Not all of the heuristics are needed,
but we still need to experiment to decide which should be disabled by
default for sched=list-ilp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@127067 91177308-0d34-0410-b5e6-96231b3b80d8
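A minimal sketch of what such a hook boils down to (the helper name and opcode list are illustrative and assume the generated X86 opcode enum is in scope; this is not the committed implementation):

  // Treat divide and square root as "high latency" definitions so the
  // scheduler can keep their consumers further away even without an itinerary.
  static bool isHighLatencyDefSketch(unsigned Opcode) {
    switch (Opcode) {
    default:
      return false;
    case X86::DIVSSrr:
    case X86::DIVSDrr:
    case X86::SQRTSSr:
    case X86::SQRTSDr:
      return true;
    }
  }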
operands starts at index 2, not 1.
rdar://9045024
PR9305
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@126359 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124272 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@124270 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@121415 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@120228 91177308-0d34-0410-b5e6-96231b3b80d8
erased the instruction during LICM so UpdateRegPressureAfter() should not
reference it afterwards.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116845 91177308-0d34-0410-b5e6-96231b3b80d8
is", which breaks some nightly tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116816 91177308-0d34-0410-b5e6-96231b3b80d8
"long latency" enough to hoist even if it may increase spilling. Reloading
a value from spill slot is often cheaper than performing an expensive
computation in the loop. For X86, that means machine LICM will hoist
SQRT, DIV, etc. ARM will be somewhat aggressive with VFP and NEON
instructions.
- Enable register pressure aware machine LICM by default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116781 91177308-0d34-0410-b5e6-96231b3b80d8
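The trade-off can be summarized with a small hypothetical predicate (all names here are invented for illustration): a reload from a spill slot is usually cheaper than redoing a DIV or SQRT on every iteration, so long-latency invariants are hoisted even when that raises register pressure.

  // Hypothetical decision helper mirroring the heuristic described above.
  bool shouldHoistSketch(bool IsLoopInvariant, bool IsLongLatency,
                         bool IncreasesRegPressure) {
    if (!IsLoopInvariant)
      return false;
    if (IsLongLatency)
      return true;                    // e.g. DIV, SQRT: hoist regardless
    return !IncreasesRegPressure;     // cheap ops: hoist only if pressure allows
  }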
The reg-reg copies were no longer being generated since copyPhysReg copies
physical registers only.
The loads and stores are not necessary: the TC constraint is imposed by the
TAILJMP and TCRETURN instructions, so there should be no need for constrained
loads and stores.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116314 91177308-0d34-0410-b5e6-96231b3b80d8
reapply: reimplement the second half of the or/add optimization. We should now
with no changes. Turns out that one missing "Defs = [EFLAGS]" can upset things
a bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116040 91177308-0d34-0410-b5e6-96231b3b80d8
"Reimplement (part of) the or -> add optimization. Matching 'or' into 'add'"
With a critical fix: the add pseudos clobber EFLAGS.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116039 91177308-0d34-0410-b5e6-96231b3b80d8
'add'", which seems to have broken just about everything.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116033 91177308-0d34-0410-b5e6-96231b3b80d8
on r116007, which I am about to revert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116032 91177308-0d34-0410-b5e6-96231b3b80d8
which depends on r116007, which I am about to revert.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116031 91177308-0d34-0410-b5e6-96231b3b80d8
only end up emitting LEA instead of OR. If we aren't able to promote
something into an LEA, we should never be emitting it as an ADD.
Add some testcases checking that we emit "or" in cases where we used to produce
an "add".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116026 91177308-0d34-0410-b5e6-96231b3b80d8
casing FsMOVAPDrr/FsMOVAPSrr.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116016 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116014 91177308-0d34-0410-b5e6-96231b3b80d8
is general goodness because it allows ORs to be converted to LEA to avoid
inserting copies. However, this is bad because it makes the generated .s
file less obvious and gives valgrind heartburn (tons of false positives in
bitfield code).
While the general fix should be in valgrind, we can at least try to avoid
emitting ADD instructions that *don't* get promoted to LEA. This is more
work because it requires introducing pseudo instructions to represent
"add that knows the bits are disjoint", but hey, people really love valgrind.
This fixes this testcase:
https://bugs.kde.org/show_bug.cgi?id=242137#c20
the add r/i cases are coming next.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116007 91177308-0d34-0410-b5e6-96231b3b80d8
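The condition behind the or/add equivalence, as a self-contained sketch (plain 64-bit known-zero masks are my simplification here; the real code works from the DAG's known-bits analysis):

  #include <cstdint>

  // OR computes the same value as ADD when no bit position can be set in
  // both operands, i.e. every bit is known zero in at least one of them,
  // so no carries can occur.
  bool orIsEquivalentToAdd(uint64_t KnownZeroLHS, uint64_t KnownZeroRHS) {
    return (KnownZeroLHS | KnownZeroRHS) == ~UINT64_C(0);
  }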
with the right types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@116001 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115997 91177308-0d34-0410-b5e6-96231b3b80d8
instructions.
This unbreaks the machine code verifier and fixes PR8317.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115879 91177308-0d34-0410-b5e6-96231b3b80d8
before
(e.g. CMOVBE16rr instead of CMOVBErr16).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115705 91177308-0d34-0410-b5e6-96231b3b80d8
21 insertions(+), 53 deletions(-)
Moar change coming before I switch the rest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115697 91177308-0d34-0410-b5e6-96231b3b80d8
operands.
With this done, we can remove the _Int suffixes from the round instructions
without the disassembler blowing up. This allows the assembler to support
them, implementing rdar://8456376 - llvm-mc rejects 'roundss'
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115019 91177308-0d34-0410-b5e6-96231b3b80d8
Clean up cvttps2dq by removing some redundant implementations of the
same instruction. rdar://8456382
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115018 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@115017 91177308-0d34-0410-b5e6-96231b3b80d8
Teaching the code generator about CR8-15, how to rex them up, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114533 91177308-0d34-0410-b5e6-96231b3b80d8
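For context, the encoding issue in a generic sketch (this is textbook x86-64 encoding, not the LLVM emitter itself, and the struct and function names are mine): register numbers 8-15, CR8-CR15 included, don't fit in the 3-bit ModRM reg field, so the fourth bit has to travel in the REX prefix.

  #include <cstdint>

  struct RegFieldEncoding {
    uint8_t ModRMReg;  // low 3 bits, placed in ModRM.reg
    bool NeedsREXR;    // high bit, placed in REX.R
  };

  RegFieldEncoding encodeRegField(unsigned RegNo) {  // hardware number 0-15
    return { static_cast<uint8_t>(RegNo & 7), (RegNo & 8) != 0 };
  }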
This fixes rdar://8396318.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@114201 91177308-0d34-0410-b5e6-96231b3b80d8
value should be copied to the corresponding shadow reg as well.
Patch by Cameron Esfahani!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@112262 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111291 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Cameron Esfahani!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@111288 91177308-0d34-0410-b5e6-96231b3b80d8
Here we guess that AVX will have domain issues, so implement them now for consistency; we can remove them later if that proves unnecessary.
- Make foldMemoryOperandImpl aware of folding 256-bit zero vectors, and support the 128-bit AVX counterparts too.
- Make sure MOV[AU]PS instructions are only selected when SSE1 is enabled, and duplicate the patterns to match AVX.
- Add a testcase for a simple 128-bit zero vector creation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110946 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110898 91177308-0d34-0410-b5e6-96231b3b80d8
When a register is defined by a partial load:
%reg1234:sub_32 = MOV32rm <fi#-1>; GR64:%reg1234
That load cannot be folded into an instruction using the full 64-bit register.
It would become a 64-bit load.
This is related to the recent change to have isLoadFromStackSlot return false on
a sub-register load.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110874 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110460 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110410 91177308-0d34-0410-b5e6-96231b3b80d8
address of the static
ID member as the sole unique type identifier. Clean up APIs related to this change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@110396 91177308-0d34-0410-b5e6-96231b3b80d8
We do sometimes load from a too-small stack slot when dealing with x86 arguments
(varargs and smaller-than-32-bit args). It looks like we know what we are doing
in those cases, so I am going to remove the assert instead of artificially
enlarging stack slot sizes.
The assert in storeRegToStackSlot stays in. We don't want to write beyond the
bounds of a stack slot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109764 91177308-0d34-0410-b5e6-96231b3b80d8
subregister operands like this:
%reg1040:sub_32bit<def> = MOV32rm <fi#-2>, 1, %reg0, 0, %reg0, %reg1040<imp-def>; mem:LD4[FixedStack-2](align=8)
Make them return false when subreg operands are present. VirtRegRewriter is
making bad assumptions otherwise.
This fixes PR7713.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109489 91177308-0d34-0410-b5e6-96231b3b80d8
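The guard amounts to something like the following sketch (it uses current MachineOperand names and an invented helper, not the literal patch): if the def operand carries a sub-register index, the instruction only writes part of the virtual register, so it must not be reported as a plain stack-slot load.

  #include "llvm/CodeGen/MachineInstr.h"

  // Returns true only when operand 0 defines the full register,
  // i.e. there is no sub-register index on the def.
  static bool definesFullRegister(const llvm::MachineInstr &MI) {
    const llvm::MachineOperand &Def = MI.getOperand(0);
    return Def.isReg() && Def.isDef() && Def.getSubReg() == 0;
  }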
with a too-big register class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109488 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109167 91177308-0d34-0410-b5e6-96231b3b80d8
rip out the implementation of X86InstrInfo::GetInstSizeInBytes.
The code being ripped out just implemented a copied and hacked-up
version of the (old) instruction encoder, and is buggy and
terrible in other ways. Since "GetInstSizeInBytes" is really
only there to support the JIT's "NeedsExactSize" hook (which
no one is using), just rip out the code. I will rip out the
NeedsExactSize hook next.
This resolves rdar://7617809 - switch X86InstrInfo::GetInstSizeInBytes to use X86MCCodeEmitter
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109149 91177308-0d34-0410-b5e6-96231b3b80d8
and then forced every register to be a vr128 on win64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109060 91177308-0d34-0410-b5e6-96231b3b80d8
1) all registers were spilled as xmm, regardless of actual size
2) win64 abi doesn't do the varargs-size-in-%al thing
Still to look into:
xmm6-15 are marked as clobbered by call instructions on win64 even though they aren't.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@109035 91177308-0d34-0410-b5e6-96231b3b80d8
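The %al point in a nutshell (the predicate and parameter names are illustrative, not the exact lowering code): only the SysV x86-64 calling convention passes the number of vector registers used by a varargs call in AL; the Win64 ABI has no such convention.

  // Hypothetical predicate for whether a call needs the AL vector-count setup.
  bool needsALVectorCount(bool Is64Bit, bool IsVarArg, bool IsWin64) {
    return Is64Bit && IsVarArg && !IsWin64;
  }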
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@108567 91177308-0d34-0410-b5e6-96231b3b80d8