## compilers – How to handle L-attributed SDDs with LR parsers

I’ve read Chapter 5, Syntax Directed Translation, of the Dragon Book but am still unable to understand the following related points:

1. How to handle L-attributed SDDs in an LR grammar when using a tool such as Bison? I believe the book says that inherited attributes can be used with an LR parser (via marker symbols?) provided the grammar is LL, or is (partly?) transformed into LL. Is this true? What if the grammar is LR and/or converting it to LL is not easy? How do you then use inherited attributes?

2. If LR grammars are a proper superset of LL grammars and therefore more powerful, then how come LR grammars/parsers won’t out-of-the-box allow L-attributed SDDs which, being a superset of S-attributed SDDs, are more powerful?

3. Avoiding the SDD style – Parsing style mismatch: Is it true that no matter the grammar (LL, LR, …) or the parsing tool (Antlr, bison, yacc…), I can always first build the AST and then traverse it post-/pre-order any number of times to get whatever SDTs I want? (Ignore the time inefficiency here.)
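For what it's worth, point 3 can be sketched in a few lines of Python (all names here are illustrative, not from any particular tool): build the AST once, then run any number of independent traversals over it.

```python
# Minimal sketch: one AST, two unrelated traversals (a post-order
# evaluation using synthesized attributes, and a pre-order printer).

class Num:
    def __init__(self, value):
        self.value = value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

def evaluate(node):
    # Post-order: children first, then the node (synthesized attributes).
    if isinstance(node, Num):
        return node.value
    return evaluate(node.left) + evaluate(node.right)

def to_prefix(node):
    # Pre-order: node first, then children.
    if isinstance(node, Num):
        return str(node.value)
    return "(+ %s %s)" % (to_prefix(node.left), to_prefix(node.right))

tree = Add(Num(1), Add(Num(2), Num(3)))
print(evaluate(tree))   # 6
print(to_prefix(tree))  # (+ 1 (+ 2 3))
```

Nothing ties the number or order of traversals to the parsing strategy that produced the tree, which is the heart of point 3.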

Is there a text or online resource that covers topics 1 and 2 better or in more detail?

## array – Do C++ compilers optimize/pack small data types (e.g. boolean values)?

This occurred to me when looking at stencil computations in numpy. This Python library compiles compute-intensive components, so I believe it's a valid example. Here, making selections on an array is done with small offsets (0, 1, -1) a lot of times. Would a C++ compiler take into account that it doesn't need a full integer/char/bool type and “pack” bits into bytes?

For example, could selecting every second element from 0 to 7 be represented as a single byte instead of 8 bytes:

``````
0,1,0,1,0,1,0,1
0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,1
0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,1
0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,1
0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,1
``````
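For context: C++ compilers do not perform this packing automatically, since each `bool` in an array must keep its own addressable byte; packing is instead a library-level choice (e.g. `std::vector<bool>` or `std::bitset`). The packing itself is simple; a Python sketch of turning eight flags into one byte:

```python
# Sketch: pack a list of 0/1 flags into bytes, 8 flags per byte,
# the way std::vector<bool>/std::bitset do at the library level.

def pack_bits(flags):
    out = bytearray()
    for i in range(0, len(flags), 8):
        byte = 0
        for j, flag in enumerate(flags[i:i + 8]):
            if flag:
                byte |= 1 << j  # least-significant bit = first flag
        out.append(byte)
    return bytes(out)

# The "every second element" mask from the question: 0,1,0,1,0,1,0,1
mask = [0, 1, 0, 1, 0, 1, 0, 1]
packed = pack_bits(mask)
print(len(packed), packed[0])  # 1 byte instead of 8; value 0b10101010 = 170
```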

## optimization – Do compilers optimise in concurrency?

> Assuming `expensive_calc_one` and `expensive_calc_two` are pure functions

Unfortunately, determining whether a function is pure is equivalent to solving the Halting Problem in the general case. So, you cannot have an Ahead-of-Time compiler which can in the general case decide whether a function is pure or not.

You have to help the compiler by explicitly designing the language in such a way that the compiler has a chance to decide purity. You have to force the programmer to explicitly annotate such functions, for example, or do something like Haskell or Clean, where you clearly isolate side effects using the type system.

> but also computationally quite expensive

Unfortunately, determining in an Ahead-of-Time compiler whether a function is “computationally quite expensive” is also equivalent to solving the Halting Problem. So, you would need to force the programmer to explicitly annotate computationally expensive functions for the compiler to parallelize.

Now, if you have to force the programmer to explicitly annotate pure and computationally expensive functions as candidates for parallelization, then is it really automatic parallelization? Where is the difference to simply annotating functions for parallelization?

Note that some of those problems could be addressed by performing the automatic parallelization at runtime. At runtime, you can simply benchmark a function and see how long it runs, for example. Then, the next time it is called, you evaluate it in parallel. (Of course, if the function performs memoization, then your guess will be wrong.)
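To make the annotation point concrete, here is roughly what programmer-directed parallelism looks like with Python's standard library; the expensive functions are placeholders standing in for real work:

```python
# Sketch: the programmer, not the compiler, decides that these two
# pure calls may run concurrently. The function bodies are stand-ins.
from concurrent.futures import ThreadPoolExecutor

def expensive_calc_one():
    return sum(range(100_000))  # placeholder for real work

def expensive_calc_two():
    return sum(range(200_000))

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(expensive_calc_one)
    f2 = pool.submit(expensive_calc_two)
    total = f1.result() + f2.result()
```

The `submit` calls are exactly the kind of explicit "execute this in parallel" annotation the answer is describing; nothing here was inferred automatically.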

> Are there any compilers which would introduce concurrency into previously non-concurrent code?

Not really. Auto-parallelization has been (one of) the holy grail(s) of compiler research for over half a century, and is still as far away today as it was 50–70 years ago. Some compilers perform parallelization at a very small scale, by auto-vectorization, e.g. performing multiple arithmetic operations in parallel by compiling them to vector instructions (MMX/SSE on AMD64, for example). However, this is generally done on a scale of only a handful of instructions, not entire functions.

There are, however, languages where the language constructs themselves have been designed for parallelism. For example, in Fortress, a `for` loop executes all its iterations in parallel. That means, of course, that you are not allowed to write `for` loops where different iterations depend on each other. Another example is Go, which has the `go` keyword for spawning a goroutine.

However, in this case, you either have the programmer explicitly telling the compiler “execute this in parallel”, or you have the language explicitly telling the programmer “this language construct will be executed in parallel”. So, it’s really the same as, say, Java, except it is much better integrated into the language.

But doing it fully automatically is near impossible unless the language has been specifically designed with it in mind. As an example: Excel (conceptually) evaluates all cells in parallel.

## compilers – What is the name of this type of program optimization?

In an imperative programming language, let us consider the following program:

``````
for i in 0..N { // N is the length of the arrays A, B, C.
    A[i] = A[i] + B[i];
}
for i in 0..N {
    A[i] = A[i] + C[i];
}
``````

This program just sums the three arrays $$A + B + C$$ component-wise and stores the result in $$A$$.

We can easily transform this program into the following equivalent one:

``````
for i in 0..N {
    let tmp = A[i] + B[i];
    A[i] = tmp + C[i];
}
``````

I think the latter code is more efficient than the former because we can decrease the number of memory accesses.

Now I have a question.

What is the name of this type of program transform or program optimization?
Can we also call this deforestation?
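For what it's worth, this transformation is usually called loop fusion (or loop jamming); deforestation usually refers to the analogous elimination of intermediate data structures in functional programs. A quick sanity check that the two forms agree, sketched in Python with invented array contents:

```python
# Sketch: the unfused and fused versions compute the same A.

def unfused(A, B, C):
    A = list(A)
    for i in range(len(A)):
        A[i] = A[i] + B[i]
    for i in range(len(A)):
        A[i] = A[i] + C[i]
    return A

def fused(A, B, C):
    A = list(A)
    for i in range(len(A)):
        tmp = A[i] + B[i]  # A[i] is read once, held in a temporary,
        A[i] = tmp + C[i]  # and written once per iteration
    return A

A, B, C = [1, 2], [10, 20], [100, 200]
print(unfused(A, B, C))  # [111, 222]
print(fused(A, B, C))    # [111, 222]
```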

## static analysis – Is my compilers assignment irrational?

In our assignment we are given a grammar (called minipython) and we are asked to implement some phases of compilation (parsing, some semantic analysis).

One of the requirements is to be able to check types in arithmetic expressions (arithmetic expressions are only defined for integers). For example, in the code below:

``````
def add(x, y):
    return "hello world"
``````

We are expected to detect such errors. However, the language, like Python, seems dynamic to me, meaning that the return type of a function depends on runtime values (for example, the function `def f(x): return x` returns whatever type it is given).

When we showed the instructor the code below:

``````
a = 0
b = 1
if b > 2:
    a = 'a'
c = a + b
``````

The instructor told us we are supposed to detect such errors (in the arithmetic expression, one operand is a string and the other is a number). He also said we should run the code, tracking the value of each variable.

Well… I haven’t heard of a compiler that runs the program in order to detect errors. That would be impossible if the language is Turing-complete, right? (I’m not sure whether minipython is, though.)

One of my classmates is attempting to do this by putting a limit on the number of iterations of any loop and setting a maximum recursion depth. But I don’t think this is right; I mean, what compiler does this? I don’t know what magic people do in static analysis tools, but I don’t expect that anyone runs the code in order to check for type errors… unit tests are for that.

My question: are my assignment’s tasks irrational? Can this language have a totally inferred type system? I’m really confused and I don’t have much experience with compilers. Please correct me at any point if I have said anything wrong.

I can also provide minipython’s BNF if it’s necessary.
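For context, a checker does not have to execute the program: it can propagate sets of possible types through the control flow and union them at merge points (a simple abstract interpretation over types). A hand-rolled Python sketch of that idea on the instructor's example, with all representations invented for illustration:

```python
# Sketch: flow-sensitive type analysis, no execution. We track the SET
# of possible types of each variable and union the sets after an `if`.
# Program analysed:  a = 0; b = 1; if b > 2: a = 'a'; c = a + b

env = {"a": {int}, "b": {int}}   # state after  a = 0; b = 1

branch_env = dict(env)
branch_env["a"] = {str}          # then-branch:  a = 'a'

# Merge point after the if: either path may have run, so union the sets.
merged = {v: env[v] | branch_env[v] for v in env}

# c = a + b: arithmetic is only defined when both operands must be int.
errors = []
if merged["a"] != {int} or merged["b"] != {int}:
    errors.append("possible type error in a + b: a may be %s" % merged["a"])
print(errors)
```

No loop bounds or recursion limits are needed, because the analysis runs over the program's structure rather than its executions; the price is that it reports *possible* errors (here the branch is never taken at runtime, yet the error is still flagged).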

## optimization – What x86 instructions do today's best compilers use and/or ignore?

I am working on a compiler and am interested in x86 machine code statistics.

But what about all the rest of the instructions? How much work does a team put into a compiler to get more optimization out of it by using different optimized instructions?

Specifically, there are instructions for crypto operations (e.g. `CRC32`). There are advanced bit manipulation instructions (e.g. `BEXTR`). There are MMX and SSE instructions (tons and tons of these), and floating point instructions. Etc.

Other than these top 20, what do compilers typically take advantage of? If it’s all over the place, that would be good to know, but I’m mainly looking for a sense of what the cutting edge is in terms of today’s compiler implementations: what instructions or groups of instructions they use, and what they ignore.

## compilers – Difference between g++ and MinGW

For some reason, compiling the code

`int x=6;`

`int z=x + ++x;`

yields different results on online compilers and with g++ in VS Code on the native terminal.

g++ in VS Code yields 13, while most online compilers and MinGW yield 14.
I know the reasoning for both, but I assumed that g++ is the macOS equivalent of MinGW. Is it not?
If so, my course follows MinGW, so which compiler should I use?

## compilers – How is code executed on a computer

I have started to learn about compilers and interpreters, but I still can’t understand how program execution happens under the hood.

This is what I have understood so far.

Compiler

The compiler does not run the code; instead, it just converts the code into another (lower-level) language. This part is clear to me.

Interpreter

At first I thought the interpreter converts high-level code or bytecode into machine language and then executes it.
But then I realized that interpreters do not necessarily convert code into machine code; instead, each instruction becomes an input to the interpreter application and activates a particular function of the interpreter, which has already been compiled to machine code.

Refer to this pseudocode. Imagine I create a programming language that is interpreted by an interpreter written in Python.

``````
add 5 + 3
sub 5 - 3
``````

Interpreter

``````
def add(a, b):
    print(a + b)

def sub(a, b):
    print(a - b)

# Parsing
code = []  # list of parsed instructions

for instruction in code:
    if instruction.command == "add":
        add(instruction.a, instruction.b)
    elif instruction.command == "sub":
        sub(instruction.a, instruction.b)
    else:
        print("incorrect instruction")
``````

I am not sure whether my understanding of the interpreter is correct. If it is not, I want to know how the interpreter works internally, that is, how the interpreter executes code without translating it into machine code.

## compilers – Calculating worklist with live variable analysis

I am struggling with the calculation of the worklist algorithm; I do not want to implement the iterative algorithm because of the many redundant steps it takes.

The algorithm I am following to calculate the worklist for live variables is as below

Can anybody explain to me, for my example given below, what my initial worklist would be and how the worklist algorithm would be applied to it?

``````
x = 1     /* block 1 */
y = 23    /* block 2 */
x = 100   /* block 3 */
print x+y /* block 4 */
``````

For simplicity I have taken this code. I have calculated the equations below for the in(n) sets only, as I am not able to understand how this algorithm works for out(n).

``````
in(4) = use(4) U (out(4) - def(4))      out(4) = {}
      = {x, y} U ({} - {})       = {x, y}
in(3) = use(3) U (out(3) - def(3))      out(3) = in(4) = {x, y}
      = {} U ({x, y} - {x})      = {y}
in(2) = use(2) U (out(2) - def(2))      out(2) = in(3) = {y}
      = {} U ({y} - {y})         = {}
in(1) = use(1) U (out(1) - def(1))      out(1) = in(2) = {}
      = {} U ({} - {x})          = {}
``````

Now, even though I have calculated these equations, I am not sure how to use them in the worklist. And, in the end, how does the worklist become empty? (Algorithms – Nielson book, slide 15)
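A Python sketch of the worklist algorithm on exactly these four blocks may help; the use/def sets come from the code above, everything else is illustrative:

```python
# Worklist live-variable analysis for:
#   1: x = 1;  2: y = 23;  3: x = 100;  4: print x + y
use = {1: set(), 2: set(), 3: set(), 4: {"x", "y"}}
defs = {1: {"x"}, 2: {"y"}, 3: {"x"}, 4: set()}
succ = {1: [2], 2: [3], 3: [4], 4: []}
pred = {1: [], 2: [1], 3: [2], 4: [3]}

live_in = {n: set() for n in use}
live_out = {n: set() for n in use}

worklist = [1, 2, 3, 4]    # initial worklist: every block
while worklist:
    n = worklist.pop()     # take any block off the worklist
    # out(n) = union of in(s) over successors s of n (backward analysis)
    live_out[n] = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
    new_in = use[n] | (live_out[n] - defs[n])
    if new_in != live_in[n]:        # in(n) changed:
        live_in[n] = new_in
        worklist.extend(pred[n])    # predecessors must be re-examined

# The loop ends when popping blocks no longer changes any in-set,
# so nothing is pushed back and the worklist drains to empty.
print(live_in)  # in(4) = {x, y}, in(3) = {y}, in(2) = {}, in(1) = {}
```

The key point is the termination condition: a block's predecessors are re-added only when its in-set actually grows, and since the sets can only grow and are drawn from a finite set of variables, the worklist must eventually empty.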

## formal languages – Use of graph grammars/rewriting systems in compilers?

A(n imperative) program – in a higher-level language and, more importantly, in assembly language or intermediate representations like LLVM’s – can be formalized as a directed “port graph”, in which vertices are instructions and ports correspond to input/output operands/arguments. An optimization applied to a program’s code therefore corresponds to a rewrite applied to its port digraph.

Now, graph rewriting is a small but somewhat active area in itself. What I’m wondering is whether these kinds of systems have actually been put to explicit use in the context of a compiler; that is, representing the optimization phase as a rewriting process, or as a derivation using a port-graph grammar.

I’m not much of a “compilers guy” – I took a basic course on them in my undergraduate degree, and I know about LLVM and its IR – but it seems to me that graph rewriting systems are the “obvious” formalism to use; and at the same time, I see almost no FOSS projects involving such systems, nor do the papers about them discuss their use in compilers.

Note: I’m more interested in practical-use, popular-language compilers than in academia-only systems, but I’ll take what I can get.