.. _faq:

================================
Frequently Asked Questions (FAQ)
================================

.. contents::
   :local:


License
=======

Does the University of Illinois Open Source License really qualify as an "open source" license?
-----------------------------------------------------------------------------------------------
Yes, the license is `certified
<http://www.opensource.org/licenses/UoI-NCSA.php>`_ by the Open Source
Initiative (OSI).


Can I modify LLVM source code and redistribute the modified source?
-------------------------------------------------------------------
Yes.  The modified source distribution must retain the copyright notice and
follow the three bulleted conditions listed in the `LLVM license
<http://llvm.org/svn/llvm-project/llvm/trunk/LICENSE.TXT>`_.


Can I modify the LLVM source code and redistribute binaries or other tools based on it, without redistributing the source?
--------------------------------------------------------------------------------------------------------------------------
Yes. This is why we distribute LLVM under a less restrictive license than GPL,
as explained in the first question above.


Source Code
===========

In what language is LLVM written?
---------------------------------
All of the LLVM tools and libraries are written in C++ with extensive use of
the STL.


How portable is the LLVM source code?
-------------------------------------
The LLVM source code should be portable to most modern Unix-like operating
systems.  Most of the code is written in standard C++ with operating system
services abstracted to a support library.  The tools required to build and
test LLVM have been ported to a plethora of platforms.

Some porting problems may exist in the following areas:

* The autoconf/makefile build system relies heavily on UNIX shell tools,
  like the Bourne Shell and sed.  Porting to systems without these tools
  (MacOS 9, Plan 9) will require more effort.

What API do I use to store a value to one of the virtual registers in LLVM IR's SSA representation?
---------------------------------------------------------------------------------------------------

In short: you can't. It's actually kind of a silly question once you grok
what's going on. Basically, in code like:

.. code-block:: llvm

    %result = add i32 %foo, %bar

``%result`` is just a name given to the ``Value`` of the ``add``
instruction. In other words, ``%result`` *is* the add instruction. The
"assignment" doesn't explicitly "store" anything to any "virtual register";
the "``=``" is more like the mathematical sense of equality.

Longer explanation: In order to generate a textual representation of the
IR, some kind of name has to be given to each instruction so that other
instructions can textually reference it. However, the isomorphic in-memory
representation that you manipulate from C++ has no such restriction since
instructions can simply keep pointers to any other ``Value``\s that they
reference. In fact, the names of dummy numbered temporaries like ``%1`` are
not explicitly represented in the in-memory representation at all (see
``Value::getName()``).
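
The same point shows up in the builder APIs.  Whether you use the C++
``IRBuilder`` or the C API in ``include/llvm-c``, the call that creates the
``add`` *returns* the instruction as a value, and later calls simply take
that pointer; nothing is ever "stored into" a named register.  A minimal
sketch using the C API (the calls below are from ``llvm-c/Core.h``):

.. code-block:: c

   #include <llvm-c/Core.h>

   /* Build "%result = add i32 %foo, %bar" followed by "ret i32 %result". */
   void emit_add_and_ret(LLVMBuilderRef builder, LLVMValueRef foo,
                         LLVMValueRef bar) {
     /* The returned LLVMValueRef *is* the add instruction; "result" is only
        the name used when the IR is printed. */
     LLVMValueRef result = LLVMBuildAdd(builder, foo, bar, "result");

     /* Later instructions refer to the add through this pointer, not by
        its name. */
     LLVMBuildRet(builder, result);
   }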

Build Problems
==============

When I run configure, it finds the wrong C compiler.
----------------------------------------------------
The ``configure`` script first attempts to locate ``gcc`` and then ``cc``,
unless it finds compiler paths set in ``CC`` and ``CXX`` for the C and C++
compilers, respectively.

If ``configure`` finds the wrong compiler, either adjust your ``PATH``
environment variable or set ``CC`` and ``CXX`` explicitly, for example
``CC=/path/to/gcc CXX=/path/to/g++ ./configure``.


The ``configure`` script finds the right C compiler, but it uses the LLVM tools from a previous build.  What do I do?
---------------------------------------------------------------------------------------------------------------------
The ``configure`` script uses the ``PATH`` to find executables, so if it's
grabbing the wrong linker/assembler/etc., there are two ways to fix it:

#. Adjust your ``PATH`` environment variable so that the correct program
   appears first in the ``PATH``.  This may work, but may not be convenient
   if you want the previously built tools to stay first in your ``PATH`` for
   other work.

#. Run ``configure`` with an alternative ``PATH`` that is correct. In a
   Bourne-compatible shell, the syntax would be:

.. code-block:: console

   % PATH=[the path without the bad program] ./configure ...

This is still somewhat inconvenient, but it allows ``configure`` to do its
work without having to adjust your ``PATH`` permanently.


When creating a dynamic library, I get a strange GLIBC error.
-------------------------------------------------------------
Under some operating systems (e.g. Linux), libtool does not work correctly if
GCC was compiled with the ``--disable-shared`` option.  To work around this,
install your own version of GCC that has shared libraries enabled by default.


I've updated my source tree from Subversion, and now my build is trying to use a file/directory that doesn't exist.
-------------------------------------------------------------------------------------------------------------------
You need to re-run configure in your object directory.  When new Makefiles
are added to the source tree, they have to be copied over to the object tree
in order to be used by the build.


I've modified a Makefile in my source tree, but my build tree keeps using the old version.  What do I do?
---------------------------------------------------------------------------------------------------------
If the Makefile already exists in your object tree, you can just run the
following command in the top level directory of your object tree:

.. code-block:: console

   % ./config.status <relative path to Makefile>

If the Makefile is new, you will have to modify the configure script to copy
it over.


I've upgraded to a new version of LLVM, and I get strange build errors.
-----------------------------------------------------------------------
Sometimes, changes to the LLVM source code alter how the build system works.
Changes in ``libtool``, ``autoconf``, or header file dependencies are
especially prone to this sort of problem.

The best thing to try is to remove the old files and re-build.  In most cases,
this takes care of the problem.  To do this, just type ``make clean`` and then
``make`` in the directory that fails to build.


I've built LLVM and am testing it, but the tests freeze.
--------------------------------------------------------
This most likely occurs because you built a profile or release (optimized)
build of LLVM and have not specified the same information on the ``gmake``
command line.

For example, if you built LLVM with the command:

.. code-block:: console

   % gmake ENABLE_PROFILING=1

...then you must run the tests with the following commands:

.. code-block:: console

   % cd llvm/test
   % gmake ENABLE_PROFILING=1

Why do test results differ when I perform different types of builds?
--------------------------------------------------------------------
The LLVM test suite depends upon several features of the LLVM tools and
libraries.

First, the debugging assertions in code are not enabled in optimized or
profiling builds.  Hence, tests that used to fail may pass.

Second, some tests may rely upon debugging options or behavior that is only
available in the debug build.  These tests will fail in an optimized or
profile build.


Compiling LLVM with GCC 3.3.2 fails, what should I do?
------------------------------------------------------
This is `a bug in GCC <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=13392>`_,
and affects projects other than LLVM.  Try upgrading or downgrading your GCC.


Compiling LLVM with GCC succeeds, but the resulting tools do not work, what can be wrong?
-----------------------------------------------------------------------------------------
Several versions of GCC are known to miscompile the LLVM codebase.  Please
check your compiler version (``gcc --version``) to find out whether it is
`broken <GettingStarted.html#brokengcc>`_.  If so, your only option is to
upgrade GCC to a known-good version.


After Subversion update, rebuilding gives the error "No rule to make target".
-----------------------------------------------------------------------------
If the error is of the form:

.. code-block:: console

   gmake[2]: *** No rule to make target `/path/to/somefile',
                 needed by `/path/to/another/file.d'.
   Stop.

then files have probably been moved within the Subversion repository or
removed entirely.  In this case, the best solution is to erase all ``.d``
files, which list dependencies for source files, and rebuild:

.. code-block:: console

   % cd $LLVM_OBJ_DIR
   % rm -f `find . -name \*\.d`
   % gmake

In other cases, it may be necessary to run ``make clean`` before rebuilding.


Source Languages
================

What source languages are supported?
------------------------------------
LLVM currently has full support for C and C++ source languages. These are
available through both `Clang <http://clang.llvm.org/>`_ and `DragonEgg
<http://dragonegg.llvm.org/>`_.

The PyPy developers are working on integrating LLVM into the PyPy backend so
that the PyPy language can be translated to LLVM.


I'd like to write a self-hosting LLVM compiler. How should I interface with the LLVM middle-end optimizers and back-end code generators?
----------------------------------------------------------------------------------------------------------------------------------------
Your compiler front-end will communicate with LLVM by creating a module in the
LLVM intermediate representation (IR) format. Assuming you want to write your
language's compiler in the language itself (rather than C++), there are three
major ways to tackle generating LLVM IR from a front-end:

1. **Call into the LLVM libraries using your language's FFI (foreign
   function interface).**

  * *for:* best tracks changes to the LLVM IR, .ll syntax, and .bc format

  * *for:* enables running LLVM optimization passes without an emit/parse
    overhead

  * *for:* adapts well to a JIT context

  * *against:* lots of ugly glue code to write

2. **Emit LLVM assembly from your compiler's native language.**

  * *for:* very straightforward to get started

  * *against:* the .ll parser is slower than the bitcode reader when
    interfacing to the middle end

  * *against:* it may be harder to track changes to the IR

3. **Emit LLVM bitcode from your compiler's native language.**

  * *for:* can use the more-efficient bitcode reader when interfacing to the
    middle end

  * *against:* you'll have to re-engineer the LLVM IR object model and bitcode
    writer in your language

  * *against:* it may be harder to track changes to the IR

If you go with the first option, the C bindings in ``include/llvm-c`` should help
a lot, since most languages have strong support for interfacing with C. The
most common hurdle with calling C from managed code is interfacing with the
garbage collector. The C interface was designed to require very little memory
management, and so is straightforward in this regard.
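
To give a feel for what that glue code looks like, here is a minimal sketch
(using calls from ``llvm-c/Core.h``) of the sort of sequence a front-end, or
an FFI binding generated from those headers, ends up making to build a
trivial function:

.. code-block:: c

   #include <llvm-c/Core.h>

   /* Build a module containing:  define i32 @add(i32, i32) */
   LLVMModuleRef build_module(void) {
     LLVMModuleRef mod = LLVMModuleCreateWithName("my_front_end");

     LLVMTypeRef params[] = { LLVMInt32Type(), LLVMInt32Type() };
     LLVMTypeRef fnty = LLVMFunctionType(LLVMInt32Type(), params, 2, 0);
     LLVMValueRef fn = LLVMAddFunction(mod, "add", fnty);

     LLVMBuilderRef b = LLVMCreateBuilder();
     LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
     LLVMValueRef sum = LLVMBuildAdd(b, LLVMGetParam(fn, 0),
                                     LLVMGetParam(fn, 1), "sum");
     LLVMBuildRet(b, sum);
     LLVMDisposeBuilder(b);

     /* The module can now be handed to the optimizers or a code generator. */
     return mod;
   }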

What support is there for higher-level source language constructs for building a compiler?
--------------------------------------------------------------------------------------------
Currently, there isn't much. LLVM supports an intermediate representation
which is useful for code representation but does not support the high-level
(abstract syntax tree) representation needed by most compilers. There are no
facilities for lexical or semantic analysis.


I don't understand the ``GetElementPtr`` instruction. Help!
-----------------------------------------------------------
See `The Often Misunderstood GEP Instruction <GetElementPtr.html>`_.


Using the C and C++ Front Ends
==============================

Can I compile C or C++ code to platform-independent LLVM bitcode?
-----------------------------------------------------------------
No. C and C++ are inherently platform-dependent languages. The most obvious
example of this is the preprocessor. A very common way that C code is made
portable is by using the preprocessor to include platform-specific code. In
practice, information about other platforms is lost after preprocessing, so
the result is inherently dependent on the platform that the preprocessing was
targeting.

Another example is ``sizeof``. It's common for ``sizeof(long)`` to vary
between platforms. In most C front-ends, ``sizeof`` is expanded to a
constant immediately, thus hard-wiring a platform-specific detail.
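
For instance, both constructs in the small program below are resolved by the
front-end before any LLVM IR exists, so the emitted IR already encodes one
particular target:

.. code-block:: c

   #include <stdio.h>

   int main(void) {
     /* sizeof(long) is folded to a target-specific constant (e.g. 4 or 8)
        by the front-end; the IR contains the number, not "sizeof". */
     printf("sizeof(long) = %zu\n", sizeof(long));

   #ifdef __linux__
     /* Platform #ifdefs are resolved at preprocessing time; the code for
        every other platform never reaches the IR at all. */
     printf("compiled for Linux\n");
   #endif
     return 0;
   }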

Also, since many platforms define their ABIs in terms of C, and since LLVM is
lower-level than C, front-ends currently must emit platform-specific IR in
order to have the result conform to the platform ABI.


Questions about code generated by the demo page
===============================================

What is this ``llvm.global_ctors`` and ``_GLOBAL__I_a...`` stuff that happens when I ``#include <iostream>``?
-------------------------------------------------------------------------------------------------------------
If you ``#include`` the ``<iostream>`` header into a C++ translation unit,
the file will probably use the ``std::cin``/``std::cout``/... global objects.
However, C++ does not guarantee an order of initialization between static
objects in different translation units, so if a static ctor/dtor in your .cpp
file used ``std::cout``, for example, the object would not necessarily be
automatically initialized before your use.

To make ``std::cout`` and friends work correctly in these scenarios, the STL
that we use declares a static object that gets created in every translation
unit that includes ``<iostream>``.  This object has a static constructor
and destructor that initialize and destroy the global iostream objects
before they could possibly be used in the file.  The code that you see in the
``.ll`` file corresponds to the constructor and destructor registration code.

If you would like to make it easier to *understand* the LLVM code generated
by the compiler in the demo page, consider using ``printf()`` instead of
``iostream``\s to print values.
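
For example, a program like the following produces much simpler output on the
demo page, because ``printf()`` needs no per-translation-unit initialization
object:

.. code-block:: c

   #include <stdio.h>

   int main(void) {
     /* No <iostream> here, so no static initialization guard object is
        emitted and the .ll output contains no llvm.global_ctors machinery. */
     printf("the answer is %d\n", 6 * 7);
     return 0;
   }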


Where did all of my code go??
-----------------------------
If you are using the LLVM demo page, you may often wonder what happened to
all of the code that you typed in.  Remember that the demo script is running
the code through the LLVM optimizers, so if your code doesn't actually do
anything useful, it might all be deleted.

To prevent this, make sure that the code is actually needed.  For example, if
you are computing some expression, return the value from the function instead
of leaving it in a local variable.  If you really want to constrain the
optimizer, you can read from and assign to ``volatile`` global variables.
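
For example, the loop below survives optimization because its result is both
returned and stored to a ``volatile`` global; if the result were simply left
in a dead local variable, the optimizer would be free to delete the whole
loop:

.. code-block:: c

   /* Reads and writes of a volatile global cannot be optimized away, so the
      computation feeding them has to be kept. */
   volatile int sink;

   int compute(int n) {
     int total = 0;
     for (int i = 0; i < n; ++i)
       total += i * i;

     sink = total;   /* keeps the loop alive even if the caller ignores it */
     return total;   /* returning the value also makes the work observable */
   }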


What is this "``undef``" thing that shows up in my code?
--------------------------------------------------------
``undef`` is the LLVM way of representing a value that is not defined.  You
can get these if you do not initialize a variable before you use it.  For
example, the C function:

.. code-block:: c

   int X() { int i; return i; }

Is compiled to "``ret i32 undef``" because "``i``" never has a value specified
for it.


Why does instcombine + simplifycfg turn a call to a function with a mismatched calling convention into "unreachable"? Why not make the verifier reject it?
----------------------------------------------------------------------------------------------------------------------------------------------------------
This is a common problem for authors of front-ends that use custom calling
conventions: you need to make sure to set the same calling convention on both
the function and on each call to the function.  For example, this code:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @bar() {
       call void @foo()
       ret void
   }

Is optimized to:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @bar() {
       unreachable
   }

... with "``opt -instcombine -simplifycfg``".  This often bites people because
"all their code disappears".  Setting the calling convention on the caller and
callee is required for indirect calls to work, so people often ask why not
make the verifier reject this sort of thing.

The answer is that this code has undefined behavior, but it is not illegal.
If we made it illegal, then every transformation that could potentially create
this would have to ensure that it doesn't, and there is valid code that can
create this sort of construct (in dead code).  The sorts of things that can
cause this to happen are fairly contrived, but we still need to accept them.
Here's an example:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define internal void @bar(void()* %FP, i1 %cond) {
       br i1 %cond, label %T, label %F
   T:
       call void %FP()
       ret void
   F:
       call fastcc void %FP()
       ret void
   }
   define void @test() {
       %X = or i1 false, false
       call void @bar(void()* @foo, i1 %X)
       ret void
   }

In this example, ``@test`` always passes ``@foo``/``false`` into ``@bar``,
which ensures that it is dynamically called with the right calling convention
(thus, the code is perfectly well defined).  If you run this through the
inliner, you get this (the explicit "or" is there so that the inliner doesn't
dead-code eliminate a bunch of stuff):

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @test() {
       %X = or i1 false, false
       br i1 %X, label %T.i, label %F.i
   T.i:
       call void @foo()
       br label %bar.exit
   F.i:
       call fastcc void @foo()
       br label %bar.exit
   bar.exit:
       ret void
   }

Here you can see that the inlining pass made an undefined call to ``@foo``
with the wrong calling convention.  We really don't want to make the inliner
have to know about this sort of thing, so it needs to be valid code.  In this
case, dead code elimination can trivially remove the undefined code.  However,
if ``%X`` were an input argument to ``@test``, the inliner would produce this:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }

   define void @test(i1 %X) {
       br i1 %X, label %T.i, label %F.i
   T.i:
       call void @foo()
       br label %bar.exit
   F.i:
       call fastcc void @foo()
       br label %bar.exit
   bar.exit:
       ret void
   }

The interesting thing about this is that ``%X`` *must* be false for the
code to be well-defined, but no amount of dead code elimination will be able
to delete the broken call as unreachable.  However, since
``instcombine``/``simplifycfg`` turns the undefined call into unreachable, we
end up with a branch on a condition that goes to unreachable: a branch to
unreachable can never happen, so "``-inline -instcombine -simplifycfg``" is
able to produce:

.. code-block:: llvm

   define fastcc void @foo() {
       ret void
   }
   define void @test(i1 %X) {
   F.i:
       call fastcc void @foo()
       ret void
   }