//===- README.txt - Notes for improving PowerPC-specific code gen ---------===//
TODO:
* gpr0 allocation
* implement do-loop -> bdnz transform
===-------------------------------------------------------------------------===
We only produce the rlwnm instruction for plain rotates. We should at least
also match rotate-and-mask patterns like:
unsigned rot_and(unsigned X, int Y) {
unsigned T = (X << Y) | (X >> (32-Y));
T &= 127;
return T;
}
_foo3:
rlwnm r2, r3, r4, 0, 31
rlwinm r3, r2, 0, 25, 31
blr
... which is the basic rotate-and-mask pattern that the instruction definition
should match directly. It may also be useful for cases like:
long long foo2(long long X, int C) {
return X << (C&~32);
}
which currently produces:
_foo2:
rlwinm r2, r5, 0, 27, 25
subfic r5, r2, 32
slw r3, r3, r2
srw r5, r4, r5
or r3, r3, r5
slw r4, r4, r2
blr
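For reference, a C model of what a single rlwnm computes (hypothetical helper
name): rotate left by a register amount, then AND with the mask. rot_and above
is exactly this with mask = 127.
unsigned rlwnm_model(unsigned rs, unsigned rb, unsigned mask) {
  unsigned n = rb & 31;                 /* rlwnm only uses the low 5 bits */
  return ((rs << n) | (rs >> ((32 - n) & 31))) & mask;
}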
===-------------------------------------------------------------------------===
Support 'update' load/store instructions. These are cracked on the G5, but are
still a codesize win.
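For instance (an illustrative sketch, not from the original note),
pointer-bumping loops like this map naturally onto the update forms:
int sum_from_second(int *p, int n) {
  int s = 0;
  int i;
  for (i = 0; i < n; ++i)
    s += *++p;                          /* pre-increment load -> lwzu */
  return s;
}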
===-------------------------------------------------------------------------===
Teach the .td file to pattern match PPC::BR_COND to appropriate bc variant, so
we don't have to always run the branch selector for small functions.
===-------------------------------------------------------------------------===
Lump the constant pool for each function into ONE PIC object, and reference
pieces of it as offsets from the start. For functions like this (contrived,
obviously, to have lots of constants):
double X(double Y) { return (Y*1.23 + 4.512)*2.34 + 14.38; }
We generate:
_X:
lis r2, ha16(.CPI_X_0)
lfd f0, lo16(.CPI_X_0)(r2)
lis r2, ha16(.CPI_X_1)
lfd f2, lo16(.CPI_X_1)(r2)
fmadd f0, f1, f0, f2
lis r2, ha16(.CPI_X_2)
lfd f1, lo16(.CPI_X_2)(r2)
lis r2, ha16(.CPI_X_3)
lfd f2, lo16(.CPI_X_3)(r2)
fmadd f1, f0, f1, f2
blr
It would be better to materialize .CPI_X into a register, then use immediates
off of the register to avoid the lis's. This is even more important in PIC
mode.
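Something like this (a sketch, assuming the four pool entries are laid out
contiguously at a single .CPI_X):
_X:
lis r2, ha16(.CPI_X)
la r2, lo16(.CPI_X)(r2)
lfd f0, 0(r2)
lfd f2, 8(r2)
fmadd f0, f1, f0, f2
lfd f1, 16(r2)
lfd f2, 24(r2)
fmadd f1, f0, f1, f2
blr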
Note that this (and the static variable version) is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
===-------------------------------------------------------------------------===
PIC Code Gen IPO optimization:
Squish small scalar globals together into a single global struct, allowing the
address of the struct to be CSE'd, avoiding PIC accesses (also reduces the size
of the GOT on targets with one).
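That is, roughly (a sketch):
/* before: each global needs its own PIC address computation */
static int a, b, c;
/* after: one CSE'd address plus constant offsets */
static struct { int a, b, c; } merged;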
Note that this is discussed here for GCC:
http://gcc.gnu.org/ml/gcc-patches/2006-02/msg00133.html
===-------------------------------------------------------------------------===
Implement the Newton-Raphson method for refining estimate instructions to the
required accuracy, and implement divide as multiply by reciprocal when the
divisor has more than one use. Itanium will want this too.
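For reference, one Newton-Raphson step for a reciprocal estimate e of 1/d is
e' = e * (2 - d*e), and each step roughly doubles the number of correct bits;
a sketch (assuming e comes from a hardware estimate such as fres):
double refine_recip(double d, double e) {
  e = e * (2.0 - d * e);                /* one refinement step */
  e = e * (2.0 - d * e);                /* one more, for extra bits */
  return e;                             /* then x/d => x * refine_recip(d, e) */
}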
===-------------------------------------------------------------------------===
Compile this:
int %f1(int %a, int %b) {
%tmp.1 = and int %a, 15 ; <int> [#uses=1]
%tmp.3 = and int %b, 240 ; <int> [#uses=1]
%tmp.4 = or int %tmp.3, %tmp.1 ; <int> [#uses=1]
ret int %tmp.4
}
without a copy. We currently generate:
_f1:
rlwinm r2, r4, 0, 24, 27
rlwimi r2, r3, 0, 28, 31
or r3, r2, r2
blr
The two-addr pass or RA needs to learn when it is profitable to commute an
instruction to avoid a copy AFTER the two-addr instruction. The two-addr pass
currently only commutes to avoid inserting a copy BEFORE the two-addr instr.
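The copy-free sequence would be something like (a sketch):
_f1:
rlwinm r3, r3, 0, 28, 31
rlwimi r3, r4, 0, 24, 27
blr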
===-------------------------------------------------------------------------===
Compile offsets from allocas:
int *%test() {
%X = alloca { int, int }
%Y = getelementptr {int,int}* %X, int 0, uint 1
ret int* %Y
}
into a single add, not two:
_test:
addi r2, r1, -8
addi r3, r2, 4
blr
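That is, the desired output is something like (a sketch):
_test:
addi r3, r1, -4
blr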
--> important for C++.
===-------------------------------------------------------------------------===
int test3(int a, int b) { return (a < 0) ? a : 0; }
should be branch-free code. LLVM currently turns the compare into '< 1'
because of the constant RHS.
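A branch-free equivalent, for reference (a sketch; relies on arithmetic
right shift of a negative int, which PPC provides via srawi):
int test3_nobranch(int a, int b) {
  return a & (a >> 31);                 /* a >> 31 is all ones iff a < 0 */
}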
===-------------------------------------------------------------------------===
No loads or stores of the constants should be needed:
struct foo { double X, Y; };
void xxx(struct foo F);
void bar() { struct foo R = { 1.0, 2.0 }; xxx(R); }
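Since 1.0 and 2.0 have trivial bit patterns (0x3FF0000000000000 and
0x4000000000000000), the argument registers could be built directly. A sketch,
assuming the by-value struct lands in r3-r6 under the Darwin ABI:
_bar:
lis r3, 16368 ; 0x3FF0, high half of 1.0
li r4, 0
lis r5, 16384 ; 0x4000, high half of 2.0
li r6, 0
b _xxx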
===-------------------------------------------------------------------------===
Darwin Stub LICM optimization:
Loops like this:
for (...) bar();
Have to go through an indirect stub if bar is external or linkonce. It would
be better to compile it as:
fp = &bar;
for (...) fp();
which only computes the address of bar once (instead of each time through the
stub). This is Darwin specific and would have to be done in the code generator.
Probably not a win on x86.
===-------------------------------------------------------------------------===
PowerPC i1/setcc stuff (depends on subreg stuff):
Check out the PPC code we get for 'compare' in this testcase:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19672
Oof. On top of not using a logical crnand instead of (mfcr, mfcr,
invert, invert, or), we then have to compare the result against zero instead
of using the value already in a CR!
that should be something like
cmpw cr7, r8, r5
cmpw cr0, r7, r3
crnand cr0, cr0, cr7
bne cr0, LBB_compare_4
instead of
cmpw cr7, r8, r5
cmpw cr0, r7, r3
mfcr r7, 1
mcrf cr7, cr0
mfcr r8, 1
rlwinm r7, r7, 30, 31, 31
rlwinm r8, r8, 30, 31, 31
xori r7, r7, 1
xori r8, r8, 1
addi r2, r2, 1
or r7, r8, r7
cmpwi cr0, r7, 0
bne cr0, LBB_compare_4 ; loopexit
FreeBench/mason has a basic block that looks like this:
%tmp.130 = seteq int %p.0__, 5 ; <bool> [#uses=1]
%tmp.134 = seteq int %p.1__, 6 ; <bool> [#uses=1]
%tmp.139 = seteq int %p.2__, 12 ; <bool> [#uses=1]
%tmp.144 = seteq int %p.3__, 13 ; <bool> [#uses=1]
%tmp.149 = seteq int %p.4__, 14 ; <bool> [#uses=1]
%tmp.154 = seteq int %p.5__, 15 ; <bool> [#uses=1]
%bothcond = and bool %tmp.134, %tmp.130 ; <bool> [#uses=1]
%bothcond123 = and bool %bothcond, %tmp.139 ; <bool>
%bothcond124 = and bool %bothcond123, %tmp.144 ; <bool>
%bothcond125 = and bool %bothcond124, %tmp.149 ; <bool>
%bothcond126 = and bool %bothcond125, %tmp.154 ; <bool>
br bool %bothcond126, label %shortcirc_next.5, label %else.0
This is a particularly important case where handling CRs better will help.
===-------------------------------------------------------------------------===
Simple IPO for argument passing, change:
void foo(int X, double Y, int Z) -> void foo(int X, int Z, double Y)
the Darwin ABI specifies that any integer arguments in the first 32 bytes worth
of arguments get assigned to r3 through r10. That is, if you have a function
foo(int, double, int) you get r3, f1, r6, since the 64-bit double ate up the
argument bytes for r4 and r5. The trick then would be to shuffle the argument
order for functions we can internalize so that the maximum number of
integers/pointers get passed in regs before you see any of the fp arguments.
Instead of implementing this, it would actually probably be easier to just
implement a PPC fastcc, where we could do whatever we wanted to the CC,
including having this work sanely.
===-------------------------------------------------------------------------===
Fix Darwin FP-In-Integer Registers ABI
Darwin passes doubles in structures in integer registers, which is very, very
bad. Add something like a BIT_CONVERT to LLVM, then do an interprocedural
transformation that percolates these things out of functions.
Check out how horrible this is:
http://gcc.gnu.org/ml/gcc/2005-10/msg01036.html
This is an extension of "interprocedural CC unmunging" that can't be done with
just fastcc.
===-------------------------------------------------------------------------===
Compile this:
int foo(int a) {
int b = (a < 8);
if (b) {
return b * 3; // ignore the fact that this is always 3.
} else {
return 2;
}
}
into something not this:
_foo:
1) cmpwi cr7, r3, 8
mfcr r2, 1
rlwinm r2, r2, 29, 31, 31
1) cmpwi cr0, r3, 7
bgt cr0, LBB1_2 ; UnifiedReturnBlock
LBB1_1: ; then
rlwinm r2, r2, 0, 31, 31
mulli r3, r2, 3
blr
LBB1_2: ; UnifiedReturnBlock
li r3, 2
blr
In particular, the two compares (marked 1) could be shared by reversing one.
This could be done in the dag combiner, by swapping a BR_CC when a SETCC of the
same operands (but backwards) exists. In this case, this wouldn't save us
anything though, because the compares still wouldn't be shared.
===-------------------------------------------------------------------------===
The legalizer should lower this:
bool %test(ulong %x) {
%tmp = setlt ulong %x, 4294967296
ret bool %tmp
}
into "if x.high == 0", not:
_test:
addi r2, r3, -1
cntlzw r2, r2
cntlzw r3, r3
srwi r2, r2, 5
srwi r4, r3, 5
li r3, 0
cmpwi cr0, r2, 0
bne cr0, LBB1_2 ;
LBB1_1:
or r3, r4, r4
LBB1_2:
blr
noticed in 2005-05-11-Popcount-ffs-fls.c.
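A branch-free lowering would be something like this sketch (on PPC32 the
high word of %x arrives in r3):
_test:
cntlzw r2, r3
srwi r3, r2, 5
blr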
===-------------------------------------------------------------------------===
We should custom expand setcc instead of pretending that we have it. That
would allow us to expose the access of the crbit after the mfcr, allowing
that access to be trivially folded into other ops. A simple example:
int foo(int a, int b) { return (a < b) << 4; }
compiles into:
_foo:
cmpw cr7, r3, r4
mfcr r2, 1
rlwinm r2, r2, 29, 31, 31
slwi r3, r2, 4
blr
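With the crbit access exposed, the extract and the shift could then fold into
a single rlwinm (a sketch):
_foo:
cmpw cr7, r3, r4
mfcr r2, 1
rlwinm r3, r2, 1, 27, 27
blr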
===-------------------------------------------------------------------------===
Fold add and sub with constant into non-extern, non-weak addresses, so that
this:
static int a;
void bar(int b) { a = b; }
void foo(unsigned char *c) {
*c = a;
}
which currently compiles to:
_foo:
lis r2, ha16(_a)
la r2, lo16(_a)(r2)
lbz r2, 3(r2)
stb r2, 0(r3)
blr
instead becomes:
_foo:
lis r2, ha16(_a+3)
lbz r2, lo16(_a+3)(r2)
stb r2, 0(r3)
blr
===-------------------------------------------------------------------------===
We generate really bad code for this:
void f(signed char *a, _Bool b, _Bool c) {
signed char t = 0;
if (b) t = *a;
if (c) *a = t;
}
===-------------------------------------------------------------------------===
This:
int test(unsigned *P) { return *P >> 24; }
Should compile to (the high byte of a big-endian word is at offset 0):
_test:
lbz r3,0(r3)
blr
not:
_test:
lwz r2, 0(r3)
srwi r3, r2, 24
blr
===-------------------------------------------------------------------------===
On the G5, logical CR operations are more expensive in their three
address form: ops that read/write the same register are half as expensive as
those that read from two registers that are different from their destination.
We should model this with two separate instructions. The isel should generate
the "two address" form of the instructions. When the register allocator
detects that it needs to insert a copy due to the two-addressness of the CR
logical op, it will invoke PPCInstrInfo::convertToThreeAddress. At this point
we can convert to the "three address" instruction, to save code space.
This only matters when we start generating cr logical ops.
===-------------------------------------------------------------------------===
We should compile these two functions to the same thing:
#include <stdlib.h>
void f(int a, int b, int *P) {
*P = (a-b)>=0?(a-b):(b-a);
}
void g(int a, int b, int *P) {
*P = abs(a-b);
}
Further, they should compile to something better than:
_g:
subf r2, r4, r3
subfic r3, r2, 0
cmpwi cr0, r2, -1
bgt cr0, LBB2_2 ; entry
LBB2_1: ; entry
mr r2, r3
LBB2_2: ; entry
stw r2, 0(r5)
blr
GCC produces:
_g:
subf r4,r4,r3
srawi r2,r4,31
xor r0,r2,r4
subf r0,r2,r0
stw r0,0(r5)
blr
... which is much nicer.
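For reference, the gcc sequence is the classic branch-free abs; in C (a
sketch, assuming arithmetic right shift):
int iabs(int x) {
  int m = x >> 31;                      /* 0 if x >= 0, -1 if x < 0 */
  return (x ^ m) - m;                   /* conditionally negate */
}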
This theoretically may help improve twolf slightly (used in dimbox.c:142?).
===-------------------------------------------------------------------------===
int foo(int N, int ***W, int **TK, int X) {
int t, i;
for (t = 0; t < N; ++t)
for (i = 0; i < 4; ++i)
W[t / X][i][t % X] = TK[i][t];
return 5;
}
We generate relatively atrocious code for this loop compared to gcc.
We could also strength reduce the rem and the div:
http://www.lcs.mit.edu/pubs/pdf/MIT-LCS-TM-600.pdf
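The strength-reduced form would maintain the quotient and remainder
incrementally rather than dividing on every iteration; a sketch (assuming
X > 0):
int foo_sr(int N, int ***W, int **TK, int X) {
  int t, i, q = 0, r = 0;               /* invariant: q == t / X, r == t % X */
  for (t = 0; t < N; ++t) {
    for (i = 0; i < 4; ++i)
      W[q][i][r] = TK[i][t];
    if (++r == X) { r = 0; ++q; }       /* restore the invariant for t+1 */
  }
  return 5;
}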
===-------------------------------------------------------------------------===
float foo(float X) { return (int)(X); }
Currently produces:
_foo:
fctiwz f0, f1
stfd f0, -8(r1)
lwz r2, -4(r1)
extsw r2, r2
std r2, -16(r1)
lfd f0, -16(r1)
fcfid f0, f0
frsp f1, f0
blr
We could use a target dag combine to turn the lwz/extsw into an lwa when the
lwz has a single use. Since LWA is cracked anyway, this would be a codesize
win only.
===-------------------------------------------------------------------------===
We generate ugly code for this:
void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
unsigned code = 0;
if(dx < -dw) code |= 1;
if(dx > dw) code |= 2;
if(dy < -dw) code |= 4;
if(dy > dw) code |= 8;
if(dz < -dw) code |= 16;
if(dz > dw) code |= 32;
*ret = code;
}
===-------------------------------------------------------------------------===
Complete the "signed i32 to FP conversion using 64-bit registers"
transformation (good for PI). See PPCISelLowering.cpp, this comment:
// FIXME: disable this lowered code. This generates 64-bit register values,
// and we don't model the fact that the top part is clobbered by calls. We
// need to flag these together so that the value isn't live across a call.
//setOperationAction(ISD::SINT_TO_FP, MVT::i32, Custom);
Also, if the registers are spilled to the stack, we have to ensure that all 64
bits of them are saved and restored, otherwise we will miscompile the code. It
sounds like we need to get the 64-bit register classes going.
===-------------------------------------------------------------------------===
%struct.B = type { ubyte, [3 x ubyte] }
void %foo(%struct.B* %b) {
entry:
%tmp = cast %struct.B* %b to uint* ; <uint*> [#uses=1]
%tmp = load uint* %tmp ; <uint> [#uses=1]
%tmp3 = cast %struct.B* %b to uint* ; <uint*> [#uses=1]
%tmp4 = load uint* %tmp3 ; <uint> [#uses=1]
%tmp8 = cast %struct.B* %b to uint* ; <uint*> [#uses=2]
%tmp9 = load uint* %tmp8 ; <uint> [#uses=1]
%tmp4.mask17 = shl uint %tmp4, ubyte 1 ; <uint> [#uses=1]
%tmp1415 = and uint %tmp4.mask17, 2147483648 ; <uint> [#uses=1]
%tmp.masked = and uint %tmp, 2147483648 ; <uint> [#uses=1]
%tmp11 = or uint %tmp1415, %tmp.masked ; <uint> [#uses=1]
%tmp12 = and uint %tmp9, 2147483647 ; <uint> [#uses=1]
%tmp13 = or uint %tmp12, %tmp11 ; <uint> [#uses=1]
store uint %tmp13, uint* %tmp8
ret void
}
We emit:
_foo:
lwz r2, 0(r3)
slwi r4, r2, 1
or r4, r4, r2
rlwimi r2, r4, 0, 0, 0
stw r2, 0(r3)
blr
We could collapse a bunch of those ORs and ANDs and generate the following
equivalent code:
_foo:
lwz r2, 0(r3)
rlwinm r4, r2, 1, 0, 0
or r2, r2, r4
stw r2, 0(r3)
blr
===-------------------------------------------------------------------------===
On PPC64, this results in a truncate followed by a truncstore. These should
be folded together.
unsigned short G;
void foo(unsigned long H) { G = H; }
===-------------------------------------------------------------------------===
We compile:
unsigned test6(unsigned x) {
return ((x & 0x00FF0000) >> 16) | ((x & 0x000000FF) << 16);
}
into:
_test6:
lis r2, 255
rlwinm r3, r3, 16, 0, 31
ori r2, r2, 255
and r3, r3, r2
blr
GCC gets it down to:
_test6:
rlwinm r0,r3,16,8,15
rlwinm r3,r3,16,24,31
or r3,r3,r0
blr