path: root/lib/CodeGen/BackendUtil.cpp
author    Chandler Carruth <chandlerc@gmail.com>    2012-12-09 10:08:22 +0000
committer Chandler Carruth <chandlerc@gmail.com>    2012-12-09 10:08:22 +0000
commit    5588a6903eb644e51c5e9b77939de16c61894fa8 (patch)
tree      4e4bbb82568e321c3c3f90174b918ad399a7ab80 /lib/CodeGen/BackendUtil.cpp
parent    ef8d516e7e015a905ca72ce99590803ca7c7e8f3 (diff)
Add a test case that I've been using to clarify the bitfield layout for
both LE and BE targets.

AFAICT, Clang gets this correct for PPC64. I've compared it to GCC 4.8 output for PPC64 (thanks Roman!) and, to my limited ability to read Power assembly, it looks functionally equivalent. It would be really good to fill in the assertions on this test case for x86-32, PPC32, ARM, etc., but I've reached the limit of my time and energy... Hopefully other folks can chip in, as it would be good to have this in place to test any subsequent changes.

To those who care about PPC64 performance, a side note: there is some *obnoxiously* bad code generated for these test cases. It would be worth someone's time to sit down and teach the PPC backend to pattern match these IR constructs better. It appears that things like '(shr %foo, <imm>)' turn into 'rldicl R, R, 64-<imm>, <imm>' or some such. They don't even get combined with other 'rldicl' instructions *immediately adjacent*. I'll add a couple of these patterns to the README, but I think it would be better to look at all the patterns produced by this and other bitfield access code, and systematically build up a collection of patterns that efficiently reduce them to the minimal code.

git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@169693 91177308-0d34-0410-b5e6-96231b3b80d8
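As a rough sketch (not the actual test case added by this commit), the kind of bitfield access under discussion looks something like the following; the struct name, field widths, and function name are illustrative only:

    /* Hypothetical bitfield layout example; widths and names are
       illustrative, not taken from the committed test. */
    struct S {
      unsigned a : 3;
      unsigned b : 5;
      unsigned c : 24;
    };

    unsigned read_b(struct S *s) {
      /* Extracting 'b' lowers to IR roughly of the form
         (and (lshr %load, <imm>), <mask>); per the note above, the PPC
         backend currently emits separate 'rldicl' instructions for such
         patterns instead of one combined rotate-and-mask. */
      return s->b;
    }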
Diffstat (limited to 'lib/CodeGen/BackendUtil.cpp')
0 files changed, 0 insertions, 0 deletions