author    Chris Lattner <sabre@nondot.org>  2007-11-15 04:51:31 +0000
committer Chris Lattner <sabre@nondot.org>  2007-11-15 04:51:31 +0000
commit    b7e6b1ab7029b45f0be81f3026e571f9977dc5c3 (patch)
tree      67d3aab22f5289fefc51254e93ae435d13f4143f /docs/tutorial/LangImpl7.html
parent    5b8318a1a4819131decb95b9b2be844d678d7a9e (diff)
many edits, patch by Kelly Wilson!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44157 91177308-0d34-0410-b5e6-96231b3b80d8
Diffstat (limited to 'docs/tutorial/LangImpl7.html')
-rw-r--r--  docs/tutorial/LangImpl7.html  50
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/docs/tutorial/LangImpl7.html b/docs/tutorial/LangImpl7.html
index 523d24fcde..4ad3b9456b 100644
--- a/docs/tutorial/LangImpl7.html
+++ b/docs/tutorial/LangImpl7.html
@@ -49,11 +49,11 @@ respectable, albeit simple, <a
href="http://en.wikipedia.org/wiki/Functional_programming">functional
programming language</a>. In our journey, we learned some parsing techniques,
how to build and represent an AST, how to build LLVM IR, and how to optimize
-the resultant code and JIT compile it.</p>
+the resultant code as well as JIT compile it.</p>
-<p>While Kaleidoscope is interesting as a functional language, this makes it
-"too easy" to generate LLVM IR for it. In particular, a functional language
-makes it very easy to build LLVM IR directly in <a
+<p>While Kaleidoscope is interesting as a functional language, the fact that it
+is functional makes it "too easy" to generate LLVM IR for it. In particular, a
+functional language makes it very easy to build LLVM IR directly in <a
href="http://en.wikipedia.org/wiki/Static_single_assignment_form">SSA form</a>.
Since LLVM requires that the input code be in SSA form, this is a very nice
property and it is often unclear to newcomers how to generate code for an
@@ -124,13 +124,13 @@ the LLVM IR, and they live in the then/else branches of the if statement
(cond_true/cond_false). In order to merge the incoming values, the X.2 phi node
in the cond_next block selects the right value to use based on where control
flow is coming from: if control flow comes from the cond_false block, X.2 gets
-the value of X.1. Alternatively, if control flow comes from cond_tree, it gets
+the value of X.1. Alternatively, if control flow comes from cond_true, it gets
the value of X.0. The intent of this chapter is not to explain the details of
SSA form. For more information, see one of the many <a
href="http://en.wikipedia.org/wiki/Static_single_assignment_form">online
references</a>.</p>
-<p>The question for this article is "who places phi nodes when lowering
+<p>The question for this article is "who places the phi nodes when lowering
assignments to mutable variables?". The issue here is that LLVM
<em>requires</em> that its IR be in SSA form: there is no "non-ssa" mode for it.
However, SSA construction requires non-trivial algorithms and data structures,
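The cond_true/cond_false merge this hunk describes can be sketched as a minimal hand-written LLVM IR fragment (illustrative only; the function name and constants are hypothetical, and it uses the assembly syntax of this era):

```llvm
define i32 @test(i1 %Condition) {
entry:
	br i1 %Condition, label %cond_true, label %cond_false

cond_true:		; X is assigned 1 on this path
	br label %cond_next

cond_false:		; X is assigned 2 on this path
	br label %cond_next

cond_next:
	; X.2 takes 1 if control came from cond_true, 2 if from cond_false
	%X.2 = phi i32 [ 1, %cond_true ], [ 2, %cond_false ]
	ret i32 %X.2
}
```

Using literal constants in the phi operands avoids the X.0/X.1 names from the text but shows the same merge of incoming values.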
@@ -162,12 +162,12 @@ represents stack variables.
</p>
<p>In LLVM, all memory accesses are explicit with load/store instructions, and
-it is carefully designed to not have (or need) an "address-of" operator. Notice
+it is carefully designed not to have (or need) an "address-of" operator. Notice
how the type of the @G/@H global variables is actually "i32*" even though the
variable is defined as "i32". What this means is that @G defines <em>space</em>
for an i32 in the global data area, but its <em>name</em> actually refers to the
-address for that space. Stack variables work the same way, but instead of being
-declared with global variable definitions, they are declared with the
+address for that space. Stack variables work the same way, except that instead of
+being declared with global variable definitions, they are declared with the
<a href="../LangRef.html#i_alloca">LLVM alloca instruction</a>:</p>
<div class="doc_code">
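The explicit load/store pattern this hunk refers to can be sketched as follows (an illustrative fragment, not the tutorial's elided listing; typed-pointer syntax of this era):

```llvm
define i32 @example() {
entry:
	%X = alloca i32			; %X is the *address* of the slot: type i32*
	store i32 42, i32* %X		; write through the address
	%tmp = load i32* %X		; read the current value back
	ret i32 %tmp
}
```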
@@ -259,10 +259,10 @@ cond_next:
</pre>
</div>
-<p>The mem2reg pass implements the standard "iterated dominator frontier"
+<p>The mem2reg pass implements the standard "iterated dominance frontier"
algorithm for constructing SSA form and has a number of optimizations that speed
-up (very common) degenerate cases. mem2reg is the answer for dealing with
-mutable variables, and we highly recommend that you depend on it. Note that
+up (very common) degenerate cases. The mem2reg optimization pass is the answer to dealing
+with mutable variables, and we highly recommend that you depend on it. Note that
mem2reg only works on variables in certain circumstances:</p>
<ol>
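As a sketch of the main restriction (a hypothetical fragment, era syntax): mem2reg can promote an alloca only when every use is a direct load or store of the whole slot; once the address escapes, the variable has to stay in memory:

```llvm
define i32 @promotable() {
entry:
	%X = alloca i32		; used only by load/store:
	store i32 1, i32* %X	; mem2reg rewrites %X into an SSA value
	%v = load i32* %X
	ret i32 %v
}

define i32* @escapes() {
entry:
	%Y = alloca i32		; the address is returned, so it escapes:
	store i32 1, i32* %Y	; mem2reg must leave %Y in memory
	ret i32* %Y
}
```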
@@ -288,10 +288,10 @@ more powerful and can promote structs, "unions", and arrays in many cases.</li>
<p>
All of these properties are easy to satisfy for most imperative languages, and
-we'll illustrate this below with Kaleidoscope. The final question you may be
+we'll illustrate it below with Kaleidoscope. The final question you may be
asking is: should I bother with this nonsense for my front-end? Wouldn't it be
better if I just did SSA construction directly, avoiding use of the mem2reg
-optimization pass? In short, we strongly recommend that use you this technique
+optimization pass? In short, we strongly recommend that you use this technique
for building SSA form, unless there is an extremely good reason not to. Using
this technique is:</p>
@@ -309,8 +309,8 @@ assignment point, good heuristics to avoid insertion of unneeded phi nodes, etc.
<li>Needed for debug info generation: <a href="../SourceLevelDebugging.html">
Debug information in LLVM</a> relies on having the address of the variable
-exposed to attach debug info to it. This technique dovetails very naturally
-with this style of debug info.</li>
+exposed so that debug info can be attached to it. This technique dovetails
+very naturally with this style of debug info.</li>
</ul>
<p>If nothing else, this makes it much easier to get your front-end up and
@@ -337,7 +337,7 @@ add two features:</p>
</ol>
<p>While the first item is really what this is about, we only have variables
-for incoming arguments and for induction variables, and redefining those only
+for incoming arguments as well as for induction variables, and redefining those only
goes so far :). Also, the ability to define new variables is a
useful thing regardless of whether you will be mutating them. Here's a
motivating example that shows how we could use these:</p>
@@ -403,8 +403,8 @@ locations.
</p>
<p>To start our transformation of Kaleidoscope, we'll change the NamedValues
-map to map to AllocaInst* instead of Value*. Once we do this, the C++ compiler
-will tell use what parts of the code we need to update:</p>
+map so that it maps to AllocaInst* instead of Value*. Once we do this, the C++
+compiler will tell us what parts of the code we need to update:</p>
<div class="doc_code">
<pre>
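With NamedValues mapping to AllocaInst*, every variable reference lowers to a load from its stack slot before mem2reg runs. For a function that simply returns its argument, the pre-optimization IR looks roughly like this (illustrative, era syntax):

```llvm
define double @id(double %x) {
entry:
	%x1 = alloca double		; slot created in the entry block
	store double %x, double* %x1	; argument value spilled into the slot
	%tmp = load double* %x1		; VariableExprAST::Codegen emits this load
	ret double %tmp
}
```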
@@ -452,7 +452,7 @@ Value *VariableExprAST::Codegen() {
</pre>
</div>
-<p>As you can see, this is pretty straight-forward. Next we need to update the
+<p>As you can see, this is pretty straightforward. Now we need to update the
things that define the variables to set up the alloca. We'll start with
<tt>ForExprAST::Codegen</tt> (see the <a href="#code">full code listing</a> for
the unabridged code):</p>
@@ -518,7 +518,7 @@ into the alloca, and register the alloca as the memory location for the
argument. This method gets invoked by <tt>FunctionAST::Codegen</tt> right after
it sets up the entry block for the function.</p>
-<p>The final missing piece is adding the 'mem2reg' pass, which allows us to get
+<p>The final missing piece is adding the mem2reg pass, which allows us to get
good codegen once again:</p>
<div class="doc_code">
@@ -537,7 +537,7 @@ good codegen once again:</p>
<p>It is interesting to see what the code looks like before and after the
mem2reg optimization runs. For example, this is the before/after code for our
-recursive fib. Before the optimization:</p>
+recursive fib function. Before the optimization:</p>
<div class="doc_code">
<pre>
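For a flavor of the before/after, here is a smaller hypothetical example than the tutorial's fib (two separate snapshots of one function, era syntax):

```llvm
; Before mem2reg: the argument round-trips through a stack slot.
define double @twice(double %x) {
entry:
	%x1 = alloca double
	store double %x, double* %x1
	%tmp = load double* %x1
	%r = add double %tmp, %tmp
	ret double %r
}

; After mem2reg: the alloca, store, and load are all gone.
define double @twice(double %x) {
entry:
	%r = add double %x, %x
	ret double %r
}
```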
@@ -709,7 +709,7 @@ have "(x+1) = expr" - only things like "x = expr" are allowed.
</pre>
</div>
-<p>Once it has the variable, codegen'ing the assignment is straight-forward:
+<p>Once we have the variable, codegen'ing the assignment is straightforward:
we emit the RHS of the assignment, create a store, and return the computed
value. Returning a value allows for chained assignments like "X = (Y = Z)".</p>
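The emit-RHS / store / return-value sequence described above produces IR along these lines for "x = y + 1" (hypothetical names, era syntax):

```llvm
	%y.val = load double* %y		; codegen of the RHS...
	%rhs = add double %y.val, 1.000000e+00
	store double %rhs, double* %x		; ...then a single store into x's slot
	; %rhs is also returned as the value of the whole assignment
	; expression, which is what makes "X = (Y = Z)" work
```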
@@ -799,7 +799,7 @@ optionally have an initializer value. As such, we capture this information in
the VarNames vector. Also, var/in has a body, this body is allowed to access
the variables defined by the var/in.</p>
-<p>With this ready, we can define the parser pieces. First thing we do is add
+<p>With this in place, we can define the parser pieces. The first thing we do is add
it as a primary expression:</p>
<div class="doc_code">
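A var/in expression of the sort being parsed here looks like this in Kaleidoscope source (a hypothetical example, not taken from the tutorial):

```
def foo(x)
  # 'a' and 'b' are in scope only within the 'in' body
  var a = x, b = a * 2 in
    b + a
```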
@@ -972,7 +972,7 @@ definitions, and we even (trivially) allow mutation of them :).</p>
<p>With this, we completed what we set out to do. Our nice iterative fib
example from the intro compiles and runs just fine. The mem2reg pass optimizes
all of our stack variables into SSA registers, inserting PHI nodes where needed,
-and our front-end remains simple: no iterated dominator frontier computation
+and our front-end remains simple: no "iterated dominance frontier" computation
anywhere in sight.</p>
</div>