From 1f3405800a99939005fd5d27b712e08ecceb5975 Mon Sep 17 00:00:00 2001
From: Bill Wendling

Written by the LLVM Team

Introduction

The LLVM 3.1 distribution currently consists of code from the core LLVM repository (which roughly includes the LLVM optimizers, code generators and supporting tools), and the Clang repository. In addition to this code, the LLVM Project includes other sub-projects that are in development. Here we include updates on these sub-projects.
In the LLVM 3.1 time-frame, the Clang team has made many improvements. Highlights include:
For more details about the changes to Clang since the 3.0 release, see the Clang release notes.

If Clang rejects your code but another compiler accepts it, please take a look at the language compatibility guide to make sure this is not intentional or a known issue.
DragonEgg is a gcc plugin that replaces GCC's optimizers and code generators with LLVM's. It works with gcc-4.5 and gcc-4.6.
The 3.1 release has the following notable changes:
As of 3.1, compiler-rt includes the helper functions for atomic operations, allowing atomic operations on arbitrary-sized quantities to work. These functions follow the specification defined by gcc and are used by clang.
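For illustration only (this example is not part of the release notes; the type and function names are made up): when a C++ program compiled with clang applies the gcc-style __atomic builtins to an object that is too large for the target's native lock-free instructions, the operations are lowered to library calls such as __atomic_load and __atomic_store, which these compiler-rt helpers provide.

    #include <cstdint>

    // A 32-byte value: wider than the largest natively lock-free atomic on
    // typical targets, so the generic builtins below become library calls
    // into compiler-rt's atomic helpers.
    struct Counters {
      std::uint64_t a, b, c, d;
    };

    Counters shared;

    Counters loadCounters() {
      Counters snapshot;
      __atomic_load(&shared, &snapshot, __ATOMIC_SEQ_CST);
      return snapshot;
    }

    void storeCounters(Counters value) {
      __atomic_store(&shared, &value, __ATOMIC_SEQ_CST);
    }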
LLDB is a ground-up implementation of a command line debugger, as well as a debugger API that can be used from other applications. LLDB makes use of the Clang parser to provide high-fidelity expression parsing (particularly for C++) and uses the LLVM JIT for target support.
...
Within the LLVM 3.1 time-frame there were the following highlights:
The <atomic> header is now passing all tests, when compiling with clang and linking against the support code from compiler-rt.
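As a rough sketch (not from the release notes; names and values are illustrative), a C++11 translation unit exercising the <atomic> header can be built with clang against libc++; operations on atomic objects that are not lock-free on the target may be routed to the compiler-rt support code described above.

    #include <atomic>
    #include <cstdint>

    // A lock-free counter plus a larger aggregate; operations on the
    // aggregate may fall back to the compiler-rt atomic helpers when the
    // target has no wide enough lock-free instruction.
    struct Pair {
      std::uint64_t lo;
      std::uint64_t hi;
    };

    std::atomic<std::uint64_t> hits(0);
    std::atomic<Pair> bounds;

    int main() {
      hits.fetch_add(1, std::memory_order_relaxed);
      Pair p = {1, 2};
      bounds.store(p);
      Pair copy = bounds.load();
      return static_cast<int>(copy.lo + hits.load());
    }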
The VMKit project is an implementation of a Java Virtual Machine (Java VM or JVM) that uses LLVM for static and just-in-time compilation.
In the LLVM 3.1 time-frame, VMKit has had significant improvements on both runtime and startup performance.
Polly is an experimental optimizer for data locality and parallelism. It currently provides high-level loop optimizations and automatic parallelisation (using the OpenMP run time). Work in the area of automatic SIMD and accelerator code generation was started.
Within the LLVM 3.1 time-frame there were the following highlights:
Crack aims to provide the ease of development of a scripting language with the performance of a compiled language. The language derives concepts from C++, Java and Python, incorporating object-oriented programming, operator overloading and strong typing.
FAUST is a compiled language for real-time audio signal processing. The name FAUST stands for Functional AUdio STream. Its programming model combines two approaches: functional programming and block diagram composition. In addition to the C, C++, Java and JavaScript output formats, the Faust compiler can generate LLVM bitcode, and works with LLVM 2.7-3.1.
GHC is an open source compiler and programming suite for Haskell, a lazy functional programming language. It includes an optimizing static compiler generating good code for a variety of platforms, together with an interactive system for convenient, quick development.

GHC 7.0 and onwards include an LLVM code generator, supporting LLVM 2.8 and later.
Julia is a high-level, high-performance dynamic language for technical computing. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The compiler uses type inference to generate fast code without any type declarations, and uses LLVM's optimization passes and JIT compiler. The Julia Language is designed around multiple dispatch, giving programs a large degree of flexibility. It is ready for use on many kinds of problems.
LLVM D Compiler (LDC) is a compiler for the D programming language. It is based on the DMD frontend and uses LLVM as its backend.
Open Shading Language (OSL) is a small but rich language for programmable shading in advanced global illumination renderers and other applications, ideal for describing materials, lights, displacement, and pattern generation. It uses LLVM to JIT complex shader networks to x86 code at runtime.

OSL was developed by Sony Pictures Imageworks for use in its in-house renderer used for feature film animation and visual effects, and is distributed as open source software with the "New BSD" license.
In addition to producing an easily portable open source OpenCL implementation, another major goal of pocl is improving performance portability of OpenCL programs with compiler optimizations, reducing the need for target-dependent manual optimizations. An important part of pocl is a set of LLVM passes used to statically parallelize multiple work-items with the kernel compiler, even in the presence of work-group barriers. This enables static parallelization of the fine-grained static concurrency in the work groups in multiple ways (SIMD, VLIW, superscalar, ...).
Pure (http://pure-lang.googlecode.com/) is an algebraic/functional programming language based on term rewriting. Programs are collections of equations which are used to evaluate expressions in a symbolic fashion. The interpreter uses LLVM as a backend to JIT-compile Pure programs to fast native code. Pure offers dynamic typing, eager and lazy evaluation, lexical closures, a hygienic macro system (also based on term rewriting), built-in list and matrix support (including list and matrix comprehensions) and an easy-to-use interface to C and other programming languages (including the ability to load LLVM bitcode modules, and inline C, C++, Fortran and Faust code in Pure programs if the corresponding LLVM-enabled compilers are installed).
Pure version 0.54 has been tested and is known to work with LLVM 3.1 (and continues to work with older LLVM releases >= 2.5).

TCE is a toolset for designing application-specific processors (ASP) based on the Transport triggered architecture (TTA). The toolset provides a complete co-design flow from C/C++ programs down to synthesizable VHDL/Verilog and parallel program binaries. Processor customization points include the register files, function units, supported operations, and the interconnection network.
TCE uses Clang and LLVM for C/C++ language support, target independent optimizations and also for parts of code generation. It generates new LLVM-based code generators "on the fly" for the designed TTA processors and loads them into the compiler backend as runtime libraries to avoid per-target recompilation of larger parts of the compiler chain.
LLVM IR has several new features that provide better support for new targets and expose new optimization opportunities:
We added new TableGen infrastructure to support bundling for Very Long Instruction Word (VLIW) architectures. Branch probabilities used by the code generator are now computed from static heuristics as well as source code annotations such as __builtin_expect.
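As a brief illustration (not part of the release notes; the function is made up), __builtin_expect is the kind of source annotation referred to above. Clang lowers it to the llvm.expect intrinsic, which the optimizer converts into branch weight metadata:

    // Hint that 'err' is almost always zero, biasing block placement and
    // other probability-driven decisions toward the fast path.
    int process(int err) {
      if (__builtin_expect(err != 0, 0)) {
        return -1;   // cold error path
      }
      return 0;      // hot path
    }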
New features and major changes in the X86 target include:
New features and major changes in the MIPS target include:

Support for Qualcomm's Hexagon VLIW processor has been added.
An outstanding conditional inversion bug was fixed in this release.
NOTE: LLVM 3.1 marks the last release of the PTX back-end in its current form. The back-end is being replaced by the NVPTX back-end, currently in SVN ToT.
llvm::getTrapFunctionName()

llvm::EnableSegmentedStacks

The MDBuilder class has been added to simplify the creation of metadata.
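As a minimal sketch (not from the release notes; the header paths and helper function are assumptions, and createBranchWeights is simply one of the MDBuilder factory methods), MDBuilder can be used along these lines to build branch weight metadata:

    // Hypothetical helper; the MDBuilder header location has moved between
    // releases, so adjust the include path to your tree.
    #include "llvm/LLVMContext.h"
    #include "llvm/Support/MDBuilder.h"

    llvm::MDNode *makeBranchWeights(llvm::LLVMContext &Ctx) {
      llvm::MDBuilder MDB(Ctx);
      // Weight the true edge 20x heavier than the false edge; the result
      // can be attached to a conditional branch as 'prof' metadata.
      return MDB.createBranchWeights(20, 1);
    }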
Officially supported Python bindings have been added! Feature support is far from complete. The current bindings support interfaces to:

Using the Object File Interface, it is possible to inspect binary object files. Think of it as a Python version of readelf or llvm-objdump.
Support for additional features is currently being developed by community contributors. If you are interested in shaping the direction of the Python bindings, please express your intent on IRC or the developers list.
Known problem areas include: