These notes were not written by me; they are excerpted from an older version of the LLVM documentation. The old docs get little attention these days, but this write-up is very good and still useful today, so I am reposting this part of it. It is only a portion of the original document.
Original document: http://llvm.org/releases/1.1/docs/Stacker.html
Main content:
Although I knew that LLVM uses a Single Static Assignment (SSA) format, it wasn't obvious to me how prevalent this idea was in LLVM until I really started using it. Reading the Programmer's Manual and Language Reference, I noted that most of the important LLVM IR (Intermediate Representation) C++ classes were derived from the Value class. The full power of that simple design only became apparent once I started constructing executable expressions for Stacker.
This really makes your programming go faster. Think about compiling code for the following C/C++ expression: (a|b)*((x+1)/(y+1)). Assuming the values are on the stack in the order a, b, x, y, this could be expressed in Stacker as: 1 + SWAP 1 + / ROT2 OR *. You could write a function using LLVM that computes this expression like this:
Value* expression( BasicBlock* bb, Value* a, Value* b, Value* x, Value* y )
{
    Instruction* tail = bb->getTerminator();
    ConstantSInt* one = ConstantSInt::get( Type::IntTy, 1 );
    BinaryOperator* or1   = BinaryOperator::create( Instruction::Or,  a,    b,    "", tail );  // a|b
    BinaryOperator* add1  = BinaryOperator::create( Instruction::Add, x,    one,  "", tail );  // x+1
    BinaryOperator* add2  = BinaryOperator::create( Instruction::Add, y,    one,  "", tail );  // y+1
    BinaryOperator* div1  = BinaryOperator::create( Instruction::Div, add1, add2, "", tail );  // (x+1)/(y+1)
    BinaryOperator* mult1 = BinaryOperator::create( Instruction::Mul, or1,  div1, "", tail );  // (a|b)*((x+1)/(y+1))
    return mult1;
}
"Okay, big deal," you say. It is a big deal. Here's why. Note that I didn'thave to tell this function which kinds of Values are being passed in. They could beInstruction
s, Constant
s, GlobalVariable
s, etc. Furthermore, if you specify Values that are incorrect for this sequence of operations, LLVM will either notice right away (at compilation time) or the LLVM Verifier will pick up the inconsistency when the compiler runs. In no case will you make a type error that gets passed through to the generated program. This really helps you write a compiler that always generates correct code!ssh
The second point is that we don't have to worry about branching, registers, stack variables, saving partial results, etc. The instructions we create are the values we use. Note that all that was created in the above code is a Constant value and five operators. Each of the instructions is the resulting value of that instruction. This saves a lot of time.
The lesson is this: SSA form is very powerful: there is no difference between a value and the instruction that created it. This is fully enforced by the LLVM IR. Use it to your best advantage.
I had to learn about terminating blocks the hard way: using the debugger to figure out what the LLVM verifier was trying to tell me and begging for help on the LLVMdev mailing list. I hope you avoid this experience.
Emblazon this rule in your mind: BasicBlocks in your compiler must be terminated with a terminating instruction (branch, return, etc.). Terminating instructions are a semantic requirement of the LLVM IR. There is no facility for implicitly chaining together blocks placed into a function in the order they occur. Indeed, in the general case, blocks will not be added to the function in the order of execution because of the recursive way compilers are written.
Furthermore, if you don't terminate your blocks, your compiler code will compile just fine. You won't find out about the problem until you're running the compiler and the module you just created fails on the LLVM Verifier.
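As a minimal sketch of one way to honor this rule, using only the constructors that appear later in this document (makeBlock and continueAt are hypothetical names, not part of the original code): give every block its terminator the moment it is created, so the rest of the compiler never has to remember to add one.

// Hypothetical helper: create a block that is already terminated with a
// branch to 'continueAt'. Later code then inserts instructions before the
// terminator, as described below.
BasicBlock* makeBlock( BasicBlock* continueAt )
{
    BasicBlock* bb = new BasicBlock();
    bb->getInstList().push_back( new BranchInst( continueAt ) );
    return bb;
}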
After a little initial fumbling around, I quickly caught on to how blocks should be constructed. In general, here's what I learned:
The instruction constructors take an insert_before argument. At first, I thought this was a mistake because clearly the normal mode of inserting instructions would be one at a time after some other instruction, not before. However, if you hold on to your terminating instruction (or use the handy dandy getTerminator() method on a BasicBlock), it can always be used as the insert_before argument to your instruction constructors. This causes the instruction to automatically be inserted in the Right Place™, just before the terminating instruction. The nice thing about this design is that you can pass blocks around and insert new instructions into them without ever knowing what instructions came before. This makes for some very clean compiler design. The foregoing is such an important principle, it's worth making an idiom:
BasicBlock* bb = new BasicBlock();
bb->getInstList().push_back( new BranchInst( ... ) );
new Instruction( ..., bb->getTerminator() );
To make this clear, consider the typical if-then-else statement (see the StackerCompiler::handle_if() method). We can set this up in a single function using LLVM in the following way:
using namespace llvm;
BasicBlock*
MyCompiler::handle_if( BasicBlock* bb, SetCondInst* condition )
{
    // Create the blocks to contain code in the structure of if/then/else
    BasicBlock* then = new BasicBlock();
    BasicBlock* else_bb = new BasicBlock();  // "else" is a C++ keyword, so use another name
    BasicBlock* exit = new BasicBlock();

    // Insert the branch instruction for the "if"
    bb->getInstList().push_back( new BranchInst( then, else_bb, condition ) );

    // Set up the terminating instructions
    then->getInstList().push_back( new BranchInst( exit ) );
    else_bb->getInstList().push_back( new BranchInst( exit ) );

    // Fill in the then part .. details excised for brevity
    this->fill_in( then );

    // Fill in the else part .. details excised for brevity
    this->fill_in( else_bb );

    // Return a block to the caller that can be filled in with the code
    // that follows the if/then/else construct.
    return exit;
}
Presumably in the foregoing, the calls to the "fill_in" method would add the instructions for the "then" and "else" parts. They would use the third part of the idiom almost exclusively (inserting new instructions before the terminator). Furthermore, they could even recurse back to handle_if should they encounter another if/then/else statement, and it will just work.
Note how cleanly this all works out. In particular, note the use of the push_back methods on the BasicBlocks' instruction lists. These are lists of type Instruction, which also happen to be Values. To create the "if" branch, we merely instantiate a BranchInst that takes as arguments the blocks to branch to and the condition to branch on. The blocks act like branch labels! This new BranchInst terminates the BasicBlock provided as an argument. To give the caller a way to keep inserting after calling handle_if, we create an "exit" block which is returned to the caller. Note that both the "then" and the "else" blocks are terminated with a branch to the "exit" block. This guarantees that no matter what else "handle_if" or "fill_in" does, they end up at the "exit" block.
One of the first things I noticed is the frequent use of the "push_back" method on the various lists. This is so common that it is worth mentioning. The "push_back" method inserts a value at the end of an STL list, vector, array, etc. The method might also have been named "insert_tail" or "append". Although I've used STL quite frequently, my use of push_back wasn't very high in other programs. In LLVM, you'll use it all the time.
It took a little getting used to and several rounds of postings to the LLVM mailing list to wrap my head around this instruction correctly. Even though I had read the Language Reference and Programmer's Manual a couple times each, I still missed a few very key points:
This means that when you look up an element in the global variable (assuming it's a struct or array), you must dereference the pointer first! For many things, this leads to the idiom:
std::vector<Value*> index_vector;
index_vector.push_back( ConstantSInt::get( Type::LongTy, 0 ) );
// ... push other indices ...
GetElementPtrInst* gep = new GetElementPtrInst( ptr, index_vector );
For example, suppose we have a global variable whose type is [24 x int]. The variable itself represents a pointer to that array. To subscript the array, we need two indices, not just one. The first index (0) dereferences the pointer. The second index subscripts the array. If you're a "C" programmer, this will run against your grain because you'll naturally think of the global array variable and the address of its first element as the same. That tripped me up for a while until I realized that they really do differ .. by type. Remember that LLVM is a strongly typed language itself. Everything has a type. The "type" of the global variable is [24 x int]*. That is, it's a pointer to an array of 24 ints. When you dereference that global variable with a single (0) index, you now have a "[24 x int]" type. Although the pointer value of the dereferenced global and the address of the zero'th element in the array will be the same, they differ in their type. The zero'th element has type "int" while the pointer value has type "[24 x int]".
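To make the two-index rule concrete, here is a minimal sketch built on the idiom above; the name array_global is hypothetical and stands for the [24 x int] global just described, and the instruction would then be inserted into a block like any other (e.g. before its terminator):

// Index 0 dereferences the [24 x int]* held by the global;
// index 5 then subscripts the array, yielding a pointer to element 5.
std::vector<Value*> indices;
indices.push_back( ConstantSInt::get( Type::LongTy, 0 ) );  // dereference the pointer
indices.push_back( ConstantSInt::get( Type::LongTy, 5 ) );  // subscript the array
GetElementPtrInst* elem5 = new GetElementPtrInst( array_global, indices );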
Get this one aspect of LLVM right in your head, and you'll save yourselfa lot of compiler writing headaches down the road.
Linkage types in LLVM can be a little confusing, especially if your compiler-writing mind has affixed very hard concepts to particular words like "weak", "external", "global", "linkonce", etc. LLVM does not use the precise definitions of, say, ELF or GCC, even though they share common terms. To be fair, the concepts are related and similar but not precisely the same. This can lead you to think you know what a linkage type represents when in fact it is slightly different. I recommend you read the Language Reference on this topic very carefully. Then, read it again.
Here are some handy tips that I discovered along the way:
Constants in LLVM took a little getting used to until I discovered a few utilityfunctions in the LLVM IR that make things easier. Here's what I learned:
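For concreteness, here is a minimal sketch of the constant-creation calls that appear in the code above; it uses only ConstantSInt::get, and the variable names are illustrative:

// Constants are obtained through static get() factory methods.
ConstantSInt* one  = ConstantSInt::get( Type::IntTy,  1 );  // as in expression() above
ConstantSInt* zero = ConstantSInt::get( Type::LongTy, 0 );  // as in the GEP idiom above
// Like instructions, these are Values and can be passed directly as operands.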
I strongly recommend reading these reflections carefully. With my current understanding of LLVM I could not yet write notes at this level, but I will try to write a piece of my own; doing so will deepen my understanding and expose my own weak spots.