diff --git a/docs/_docs/arch/commit_stage.md b/docs/_docs/arch/commit_stage.md
index 8cb85815a..b6fe9e4d6 100644
--- a/docs/_docs/arch/commit_stage.md
+++ b/docs/_docs/arch/commit_stage.md
@@ -3,3 +3,39 @@ title: Commit
permalink: /docs/commit_stage/
---
+The commit stage is the last stage in the processor's pipeline. Its
+purpose is to take incoming instructions and update the architectural
+state. This includes writing CSR registers, committing stores and
+writing back data to the register file. The golden rule is that no other
+pipeline stage is allowed to update the architectural state under any
+circumstances. If a stage keeps internal state, that state must be
+re-settable (e.g., by a flush signal).
+
+We can distinguish two categories of retiring instructions. The first
+category just writes the architectural register file. The second may
+also write the register file but additionally needs some further
+business logic to happen. At the time of this writing the only two
+places where this is necessary are the store unit, which the commit
+stage needs to tell to actually commit the store to memory, and the CSR
+buffer, which needs to be freed as soon as the corresponding CSR
+instruction retires.
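The two retirement categories above can be sketched in a few lines of Python. This is a hypothetical behavioral model, not the actual RTL interface: the names `ScoreboardEntry`, `store_unit_commits` and `csr_buffer` are illustrative.

```python
# Toy model of the commit stage's two retirement categories:
# plain register write-back vs. extra business logic for stores/CSRs.
from dataclasses import dataclass

@dataclass
class ScoreboardEntry:
    fu: str        # functional unit that produced the result
    rd: int        # destination register
    result: int    # value to write back

def commit(entry, regfile, store_unit_commits, csr_buffer):
    """Retire one instruction: write the register file, and trigger
    extra business logic for stores and CSR instructions."""
    if entry.rd != 0:                      # x0 is hard-wired to zero
        regfile[entry.rd] = entry.result
    if entry.fu == "store":
        store_unit_commits.append(entry)   # tell store unit to commit
    elif entry.fu == "csr":
        csr_buffer.clear()                 # free the CSR buffer slot

regfile = [0] * 32
stores, csr_buf = [], [0x300]
commit(ScoreboardEntry(fu="alu", rd=5, result=42), regfile, stores, csr_buf)
commit(ScoreboardEntry(fu="store", rd=0, result=0), regfile, stores, csr_buf)
```

The single-commit-point property is captured by the fact that only `commit` ever mutates `regfile`.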
+
+In addition to retiring instructions the commit stage also manages the
+various exception sources. In particular, at the time of commit,
+exceptions can arise from three different sources. First, an exception
+has occurred in any of the previous four pipeline stages (only four, as
+PC Gen can't throw an exception). Second, an exception happened during
+commit; the only source of an exception during commit is the CS
+register file. Third, an interrupt occurred.
+
+To allow precise interrupts, they are considered during commit only and
+associated with that particular instruction. Because we need a
+particular PC to associate the interrupt with, it can be the case that
+an interrupt needs to be deferred until another valid instruction is in
+the commit stage.
+
+Furthermore, the commit stage controls the overall stalling of the
+processor. If the halt signal is asserted it will not commit any new
+instructions, which generates back-pressure and eventually stalls the
+pipeline. The commit stage also communicates heavily with the
+controller to execute fence instructions (cache flushes) and other
+pipeline resets.
diff --git a/docs/_docs/arch/ex_stage.md b/docs/_docs/arch/ex_stage.md
index 06a1d5992..c0f8fcbf9 100644
--- a/docs/_docs/arch/ex_stage.md
+++ b/docs/_docs/arch/ex_stage.md
@@ -3,3 +3,312 @@ title: Execute
permalink: /docs/ex_stage/
---
+The execute stage is a logical stage which encapsulates all the
+functional units (FUs). For the moment, the FUs are not supposed to
+have inter-unit dependencies, i.e., every FU must be able to perform
+its operation independently of every other unit. Each functional unit
+maintains a valid signal with which it signals valid output data and a
+ready signal which tells the issue logic whether it is able to accept a
+new request or not. Furthermore, as briefly explained in the section
+about instruction issue, the FUs also receive a unique transaction ID.
+The functional unit is supposed to return this transaction ID together
+with the valid signal and the result. At the time of this writing the
+execute stage houses an ALU, a branch unit, a load store unit (LSU), a
+CSR buffer and a multiply/divide unit.
+
+#### ALU {#ssub:alu}
+
+The arithmetic logic unit (ALU) is a small piece of hardware which
+performs 32 and 64-bit subtraction, addition, shifts and comparisons.
+It always completes its operation in a single cycle and therefore does
+not contain any stateful elements. Its ready signal is always asserted
+and it simply passes the transaction ID from its input to its output.
+Together with the two operands it also receives an operator which tells
+it which operation to perform.
+
+#### Branch Unit {#ssub:branch_unit}
+
+The branch unit's purpose is to manage all kinds of control flow
+changes, i.e., conditional and unconditional jumps. It does so by
+providing an adder to calculate the target address and some comparison
+logic to decide whether to take the branch or not. Furthermore it also
+decides whether a branch was mis-predicted and reports corrective
+actions to the PC Gen stage. Corrective actions include updating the
+BHT and setting the PC if necessary. Since jumps can be predicted on
+any instruction (including instructions which are no jumps at all - see
+the aliasing problem in the PC Gen section) it needs to know whenever
+an instruction gets issued to a functional unit and monitor the branch
+prediction information. If a branch was accidentally predicted on a
+non-branch instruction it also takes corrective action and resets the
+PC to the correct address (PC `+ 2` or PC `+ 4`, depending on whether
+the instruction was compressed or not).
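The resolution logic described above can be sketched as follows. This is an illustrative model, not the RTL: it only shows how the mispredict decision and the corrective PC relate to the compressed-instruction flag.

```python
# Sketch of branch resolution: decide mispredict and compute the next
# PC. A compressed instruction falls through to PC + 2, a normal one
# to PC + 4; a taken branch goes to the computed target.
def resolve_branch(pc, target, predicted_taken, actually_taken,
                   is_compressed):
    """Return (mispredict, next_pc) for a resolved branch."""
    fallthrough = pc + (2 if is_compressed else 4)
    next_pc = target if actually_taken else fallthrough
    mispredict = predicted_taken != actually_taken
    return mispredict, next_pc
```

A prediction on a non-branch instruction is simply the `actually_taken=False` case: the corrective action is to reset the PC to the fall-through address.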
+
+As briefly mentioned in the section about instruction re-aligning, the
+branch unit places the PC from an unaligned 32-bit instruction on the
+upper 16 bit (i.e., on a new word boundary). Moreover, if an
+instruction is compressed this also influences the reported prediction,
+as a bit needs to be set if the prediction occurred on the lower 16 bit
+(i.e., the lower compressed instruction).
+
+As can be seen, this all adds a lot of costly operations to this stage,
+mostly comparisons and additions. Therefore the branch unit is on the
+critical path of the overall design. Nevertheless, it was our design
+choice to keep branches a single cycle operation. Still, in a future
+version it might make sense to split this path. This would have some
+costly IPC implications for the overall design, mainly because of the
+current restriction that the scoreboard only admits new instructions if
+there are no unresolved branches. With a single cycle operation all
+branches are resolved in the cycle they are issued, which doesn't
+introduce any pipeline stalls.
+
+#### Load Store Unit (LSU) {#ssub:load_store_unit}
+
+
+
+The load store unit is similar to every other functional unit. In
+addition, it has to manage the interface to the data memory (D\$). In
+particular, it houses the DTLB (Data Translation Lookaside Buffer), the
+hardware page table walker (PTW) and the memory management unit (MMU).
+It also arbitrates the access to data memory between loads, stores and
+the PTW - giving precedence to PTW lookups. This is done in order to
+resolve TLB misses as soon as possible.
+
+The LSU can issue load requests immediately, while stores need to be
+held back as long as the scoreboard does not issue a commit signal.
+This is done because the whole processor is designed to have only a
+single commit point. Because issuing loads to the memory hierarchy does
+not have any semantic side effects the LSU can issue them immediately,
+in total contrast to the nature of a store. Stores alter the
+architectural state and are therefore placed in a store buffer, only to
+be committed in a later step by the commit stage. This is sometimes
+called a *posted store* because the store request is posted to the
+store queue and waits to enter the memory hierarchy as soon as the
+commit signal goes high and the memory interface is not in use.
+
+Therefore, upon a load, the LSU also needs to check the store buffer for
+potential aliasing. Should it find uncommitted data it stalls, since it
+can't satisfy the current request.
+
+This means:
+
+- Two loads to the same address are allowed. They will return in issue
+ order.
+- Two stores to the same address are allowed. They are issued in-order
+ by the scoreboard and stored in-order in the store buffer as long as
+ the scoreboard didn't give the signal to commit them.
+- A store followed by a load to the same address can only be satisfied
+ if the store has already been committed (marked as committed in the
+ store buffer). Otherwise the LSU stalls until the scoreboard commits
+ the instruction. We cannot guarantee that the store will eventually
+ be committed (e.g.: an exception occurred).
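The ordering rules above boil down to one check on the load path. The sketch below is a simplified model (the buffer is just a list of `(address, committed)` pairs, not the real store buffer structure):

```python
# A load may proceed only if no *uncommitted* store to the same
# address sits in the store buffer; otherwise the LSU stalls.
def can_satisfy_load(addr, store_buffer):
    return all(committed for (a, committed) in store_buffer if a == addr)

buf = [(0x100, True), (0x200, False)]
assert can_satisfy_load(0x100, buf)      # store already committed -> ok
assert not can_satisfy_load(0x200, buf)  # uncommitted store -> stall
assert can_satisfy_load(0x300, buf)      # no aliasing store -> ok
```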
+
+For the time being, the LSU does not handle misaligned accesses, that
+is: accesses which are not aligned to a 64-bit boundary for double word
+accesses, accesses which are not aligned to a 32-bit boundary for word
+accesses, and accesses which are not aligned to a 16-bit boundary for
+half word accesses. If it encounters such a load or store it will throw
+a misaligned exception and let the exception handler resolve the load
+or store. In addition to misaligned exceptions it can also throw page
+fault exceptions.
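The alignment rule reduces to a single modulo check per access size, as this minimal sketch shows:

```python
# An access is misaligned when the address is not a multiple of the
# access size in bytes (8 = double word, 4 = word, 2 = half word).
def is_misaligned(addr: int, size_bytes: int) -> bool:
    return addr % size_bytes != 0

assert not is_misaligned(0x1000, 8)   # double word on 64-bit boundary
assert is_misaligned(0x1004, 8)       # double word, only 32-bit aligned
assert not is_misaligned(0x1002, 2)   # half word on 16-bit boundary
```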
+
+To ease the design of the LSU it is split in 6 major parts of which each
+is described in more detail in the upcoming paragraphs:
+
+1. **LSU Bypass**
+2. **D\$ Arbiter**
+3. **Load Unit**
+4. **Store Unit**
+5. **MMU (including TLBs and PTW)**
+6. **Non-blocking data cache**
+
+##### LSU Bypass {#par:lsu_bypass}
+
+The LSU bypass module is an auxiliary module which manages the LSU
+status information (full flag etc.) which it presents to the issue
+stage. This is necessary for the following reason: the design of the
+LSU is critical in most aspects as it directly interfaces the
+relatively slow SRAMs. It additionally needs to do some costly
+operations in sequence, the most costly (in terms of timing) being
+address generation, address translation and checking the store buffer
+for potential aliasing. Therefore it is only known very late whether
+the current load/store can go to memory or whether additional cycles
+are needed - aliasing on the store buffer and TLB misses being the most
+prominent causes. As the issue stage relies on the ready signal to
+dispatch new instructions, this would result in an overly long path
+which would considerably slow down the whole design because of some
+corner cases.
+
+To mitigate this problem a FIFO is added which can hold another request
+from the issue stage. Therefore the ready flag of the functional unit
+can be delayed by one cycle, which eases timing. The LSU bypass module
+further decouples the functional unit from the issue stage. This is
+mostly necessary as the issue stage can't stall as soon as it has
+issued an instruction. The LSU bypass is called that way because it is
+either bypassed or serves the load or store unit from its internal
+FIFO until they signal completion to the LSU bypass module.
+
+##### Load Unit {#par:load_unit}
+
+The load unit takes care of all loads. Loads are issued as soon as
+possible as they do not have any side effects. Before issuing a load
+the load unit needs to check the store buffer for stores which are not
+yet committed into the memory hierarchy in order to avoid loading stale
+data. As a full comparison is quite costly, only the lower 12 bit (the
+page offset, where physical and virtual addresses are the same) are
+compared. This has two major advantages: the comparison is only 12 bit
+instead of 64 bit and therefore faster when done on the whole buffer,
+and the physical address is not needed, which implies that we don't
+need to wait for address translation to finish. If the page offset
+matches one of the outstanding stores the load unit simply stalls and
+waits until the store buffer is drained. As an improvement one could do
+some more elaborate data forwarding, as the data in the store buffer is
+the most up-to-date. This is not done at the moment.
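The 12-bit page-offset comparison can be sketched as follows. Note that this check is deliberately conservative: two different pages sharing the same offset also trigger a stall, which is safe but not necessary.

```python
# Since virtual and physical addresses share the low 12 bits (the page
# offset), aliasing can be detected before translation finishes.
PAGE_OFFSET_MASK = (1 << 12) - 1

def may_alias(load_vaddr, outstanding_store_addrs):
    off = load_vaddr & PAGE_OFFSET_MASK
    return any((s & PAGE_OFFSET_MASK) == off
               for s in outstanding_store_addrs)

outstanding = [0x8000_1234]
assert may_alias(0x0000_1234, outstanding)      # same offset -> stall
assert not may_alias(0x0000_1238, outstanding)  # different offset -> go
```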
+
+Furthermore the load unit needs to perform address translation. It
+makes use of a virtually indexed and physically tagged D\$ access
+scheme in order to reduce the number of cycles needed for load
+accesses. As it can happen that a load blocks the D\$, it has to kill
+the current request on the memory interface to give way to the
+hardware PTW on the cache side. Some more advanced caching
+infrastructure (like a non-blocking cache) would alleviate this
+problem.
+
+##### Store Unit {#par:store_unit}
+
+The store unit manages all stores. It does so by calculating the target
+address and setting the appropriate byte enable bits. Furthermore it
+also performs address translation and communicates with the load unit
+to see if any load matches an outstanding store in one of its buffers.
+Most of the store unit's business logic resides in the store buffer,
+which is described in detail in the next section.
+
+##### Store Buffer {#par:store_buffer}
+
+The store buffer keeps track of all stores. It actually consists of two
+buffers: one for already committed instructions and one for outstanding
+instructions which are still speculative. On a flush only the
+instructions which are already committed are persisted, while the
+speculative queue is completely emptied. To prevent buffer overflows
+the two queues maintain a full flag. The full flag of the speculative
+queue goes directly to the store unit, which will stall the LSU bypass
+module and therefore not receive any more requests. In contrast, the
+full signal of the commit queue goes to the commit stage. The commit
+stage will stall if the commit queue can't accept any new data items.
+On every committed store the commit stage also asserts the
+`lsu_commit` signal, which moves the particular entry from the
+speculative queue into the non-speculative (commit) queue.
+
+Once a store is in the commit queue, the queue automatically tries to
+commit its oldest store to memory as soon as the cache grants the
+request.
+
+The store buffer only works with physical addresses. At the time they
+are committed the translation is already correct. For stores in the
+speculative queue the addresses are potentially not correct, but this
+resolves itself: instructions which update the address translation data
+structures will also automatically flush the whole speculative buffer.
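The two-queue behavior can be captured in a toy model. This is a sketch only: the real buffer uses full flags instead of silently bounded queues, and entries carry address/data/byte-enable fields rather than strings.

```python
# Toy model of the two-queue store buffer: a flush empties only the
# speculative queue; `lsu_commit` moves the oldest speculative entry
# into the commit queue, which then drains to memory.
from collections import deque

class StoreBuffer:
    def __init__(self):
        self.speculative = deque()    # not yet committed, flushable
        self.committed = deque()      # architectural, persisted

    def push(self, store):            # from the store unit
        self.speculative.append(store)

    def lsu_commit(self):             # from the commit stage
        self.committed.append(self.speculative.popleft())

    def flush(self):                  # pipeline flush
        self.speculative.clear()      # committed stores survive

sb = StoreBuffer()
sb.push("st A"); sb.push("st B")
sb.lsu_commit()                       # "st A" is now architectural
sb.flush()                            # "st B" was speculative -> gone
```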
+
+##### Memory Management Unit (MMU) {#par:mmu}
+
+
+
+The memory management unit (MMU) takes care of address translation and
+memory accesses in general. Address translation needs to be separately
+activated by writing the corresponding control and status register and
+switching to a lower privilege mode than machine mode. As soon as
+address translation is enabled it will also handle page faults. The MMU
+contains an ITLB, a DTLB and a hardware page table walker (HPTW).
+Although logically not really entangled, the fetch interface is also
+routed through the MMU. In general the fetch and data interfaces are
+handled differently; they only share the HPTW with each other.
+
+There are mainly two fundamentally different paths through the MMU: one
+from the instruction fetch stage and the other from the LSU. Let's
+begin with the instruction fetch interface: the IF stage makes a
+request to get the memory content at a specific address. Instruction
+fetch will always ask for virtual addresses. Depending on whether
+address translation is enabled, the MMU will either transparently let
+the request go directly to the I\$ or perform address translation.
+
+In case address translation is activated, the request to the
+instruction cache is delayed until a valid translation can be found. If
+no valid translation can be found the MMU will signal this with an
+exception. Furthermore, if an address translation can be performed with
+a hit on the ITLB it is a purely combinational path. The TLB is
+implemented as a fully set-associative cache made out of flip-flops.
+This in turn means that the request path to memory is quite long and
+may become critical quite easily.
+
+If an exception occurred, the exception is returned to the instruction
+fetch stage together with the valid signal, not the grant signal. This
+has the implication that we need to support multiple outstanding
+transactions on the exception path as well. The MMU has a dedicated
+buffer (FIFO) which stores those exceptions and returns them as soon
+as the answer is valid.
+
+The MMU's interface on the data memory side (D\$) is entirely
+different. It has a simple request-response interface guarded by
+handshaking signals. Either the load unit or the store unit asks the
+MMU to perform address translation. However, the address translation
+process is not combinational as it is for the fetch interface: an
+additional bank of registers delays the MMU's answer (on a TLB hit) by
+an additional cycle. As already mentioned in the previous paragraph,
+address translation is a quite critical process in terms of timing.
+The particular problem on the data interface is the fact that the LSU
+needs to generate the address beforehand. Address generation involves
+another costly addition; together with address translation this path
+definitely becomes critical. As the data cache is virtually indexed
+and physically tagged, this additional cycle does not cause any loss
+in IPC. But it makes the process of memory requests a little bit more
+complicated, as we might need to abort memory accesses because of
+exceptions. If an exception occurred on a load request the load unit
+needs to kill the memory request it sent the cycle earlier. An
+excepting load (or store) will never go to memory.
+
+Both TLBs are fully set-associative and configurable in size. The size
+of the address space identifier (ASID) can also be changed. The ASID
+can prevent flushing of certain regions in the TLB (for example when
+switching applications). This is currently **not implemented**.
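A fully set-associative, ASID-tagged lookup can be sketched as below. The dict stands in for the flop-based CAM; page sizes, entry counts and replacement policy are left out.

```python
# Sketch of a TLB lookup keyed by (ASID, virtual page number). A miss
# returns None, which in hardware would start the page table walker.
PAGE_SHIFT = 12

class Tlb:
    def __init__(self):
        self.entries = {}               # (asid, vpn) -> ppn

    def lookup(self, asid, vaddr):
        vpn = vaddr >> PAGE_SHIFT
        ppn = self.entries.get((asid, vpn))
        if ppn is None:
            return None                 # TLB miss -> start the PTW
        return (ppn << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))

tlb = Tlb()
tlb.entries[(1, 0x80000)] = 0x40000
assert tlb.lookup(1, 0x8000_0abc) == 0x4000_0abc
assert tlb.lookup(2, 0x8000_0abc) is None   # other ASID: no hit
```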
+
+##### Page Table Walker (PTW) {#par:page_table_walker_ptw}
+
+The purpose of a page table walker has already been introduced. The
+page table walker listens on both the ITLB and the DTLB for incoming
+translation requests. If it sees that either one of the requests misses
+in the TLB it saves the virtual address and starts its page table walk.
+If the page table walker encounters any error state it will throw a
+page fault exception, which in turn is caught by the MMU and propagated
+to either the fetch interface or the LSU.
+
+The page table walker gives precedence to DTLB misses. The page table
+walking process is described in more detail in the RISC-V Privileged
+Architecture.
+
+#### Multiplier {#ssub:multiplier}
+
+The multiplier contains a division and multiplication unit. Multiplication
+is performed in two cycles and is fully pipelined (re-timing needed). The
+division is a simple serial divider which needs 64 cycles in the worst case.
+
+#### CSR Buffer {#ssub:csr_buffer}
+
+The CSR buffer is a functional unit whose only purpose is to store the
+address of the CSR register the instruction is going to read/write.
+There are two reasons why we need to do this. The first reason is that
+a CSR instruction alters the architectural state; hence this
+instruction has to be buffered and can only be executed as soon as the
+commit stage decides to commit the instruction. The second reason is
+the way the scoreboard entry is structured: it has only one result
+field, but for any CSR instruction we need to keep both the data we
+want to write and the address of the CSR which this instruction is
+going to alter. In order to not clutter the scoreboard with special
+case bit fields, the CSR buffer comes into play. It simply holds the
+address, and if the CSR instruction executes it uses the stored
+address.
+
+The clear disadvantage is that with the buffer being just one element we
+can't execute more than one CSR instruction back to back without a
+pipeline stall. Since CSR instructions are quite rare this is not too
+much of a problem. Some CSR instructions will cause a pipeline flush
+anyway.
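The one-element buffer and the resulting back-to-back stall can be modeled in a few lines. This is a behavioral sketch with illustrative names, not the RTL interface:

```python
# Toy one-element CSR buffer: `ready` drops while an address is held,
# which is what forces the stall on back-to-back CSR instructions.
class CsrBuffer:
    def __init__(self):
        self.addr = None

    def ready(self):
        return self.addr is None

    def issue(self, csr_addr):
        assert self.ready()             # issue stage must wait otherwise
        self.addr = csr_addr

    def commit(self):
        addr, self.addr = self.addr, None
        return addr                     # address used at commit time

buf = CsrBuffer()
buf.issue(0x305)            # e.g. mtvec
assert not buf.ready()      # a second CSR op would have to wait
assert buf.commit() == 0x305
assert buf.ready()          # slot freed when the instruction retires
```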
diff --git a/docs/_docs/arch/id_stage.md b/docs/_docs/arch/id_stage.md
index beeaba693..43e6b375a 100644
--- a/docs/_docs/arch/id_stage.md
+++ b/docs/_docs/arch/id_stage.md
@@ -3,3 +3,92 @@ title: Instruction Decode
permalink: /docs/id_stage/
---
+Instruction decode is the first pipeline stage of the processor's
+back-end. Its main purpose is to distill instructions from the data
+stream it gets from the IF stage, decode them and send them to the
+issue stage.
+
+With the introduction of compressed instructions (in general, variable
+length instructions) the ID stage gets a little bit more complicated:
+it has to search the incoming data stream for potential instructions,
+re-align them and (in the case of compressed instructions) decompress
+them. Furthermore, since we know at the end of this stage whether the
+decoded instruction is a branch instruction, this information is
+passed on to the issue stage.
+
+#### Instruction Re-aligner {#ssub:instruction_re_aligner}
+
+
+
+As mentioned above, the instruction re-aligner checks the incoming data
+stream for compressed instructions. Compressed instructions have their
+lowest two bits unequal to 11, while normal 32-bit instructions have
+their lowest two bits set to 11. The main complication arises from the
+fact that a compressed instruction can make a normal instruction
+unaligned (e.g., the instruction starts at a half word boundary). This
+can (in the worst case) mandate two memory accesses before the
+instruction can be fully decoded. We therefore need to make sure that
+the fetch FIFO has enough space to keep the second part of the
+instruction. The instruction re-aligner hence needs to keep track of
+whether the previous instruction was unaligned or compressed to
+correctly decide what to do with the upcoming instruction.
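The length rule used by the re-aligner is a single check on the lowest two bits, as this minimal sketch shows:

```python
# RISC-V length rule: the lowest two bits distinguish compressed
# (!= 0b11) from normal 32-bit (== 0b11) encodings.
def insn_length(halfword: int) -> int:
    return 4 if (halfword & 0b11) == 0b11 else 2

assert insn_length(0x0001) == 2   # low bits 01 -> compressed
assert insn_length(0x0013) == 4   # low bits 11 -> 32-bit instruction
```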
+
+Furthermore, the branch prediction information is used to output only
+the correct instruction to the issue stage. As we only predict on
+word-aligned PCs, the passed-on branch prediction information needs to
+be investigated to determine which instruction we actually need, in
+case there are two instructions (compressed or unaligned) present.
+This means that we potentially have to discard one of the two
+instructions (the instruction before the branch target). For that
+reason the instruction re-aligner also needs to check whether this
+fetch entry contains a valid and taken branch. Depending on whether
+the prediction is on the upper 16 bit, it has to discard the lower 16
+bit accordingly.
+
+#### Compressed Decoder {#ssub:compressed_decoder}
+
+As mentioned earlier, we also need to decompress all the compressed
+instructions. This is done by a small combinational circuit which
+takes a 16-bit compressed instruction and expands it to its 32-bit
+equivalent. All compressed instructions have a 32-bit equivalent.
+
+#### Decoder {#ssub:decoder}
+
+The decoder either takes the raw instruction data or the uncompressed
+equivalent of the 16-bit instruction and decodes them accordingly. It
+transforms the raw bits to the most fundamental control structure in
+Ariane, a scoreboard entry:
+
+- **PC**: PC of instruction
+- **FU**: functional unit to use
+- **OP**: operation to perform in each functional unit
+- **RS1**: register source address 1
+- **RS2**: register source address 2
+- **RD**: register destination address
+- **Result**: for unfinished instructions this field also holds the
+ immediate
+- **Valid**: is the result valid
+- **Use I Immediate**: should we use the immediate as operand b?
+- **Use Z Immediate**: use zimm as operand a
+- **Use PC**: set if we need to use the PC as operand a, PC from
+ exception
+- **Exception**: exception has occurred
+- **Branch predict**: branch predict scoreboard data structure
+- **Is compressed**: signals a compressed instruction; we need this
+  information at the commit stage if we want to jump accordingly,
+  e.g., `+4` or `+2`
+
+The scoreboard entry gets incrementally processed further down the
+pipeline. It controls operand selection, dispatch and execution.
+Furthermore it contains an exception entry which strongly ties the
+particular instruction to its potential exception. As the first place
+an exception could have occurred is the IF stage, the decoder also
+makes sure that such an exception finds its way into the scoreboard
+entry. A potential illegal instruction exception can occur during
+decoding. If this is the case and no previous exception has happened,
+the decoder will set the corresponding exception field along with the
+faulting bits (in `[s|m]tval`). As this is not the only point at which
+an illegal instruction exception can happen, and an illegal
+instruction exception always asks for the faulting bits in the
+`[s|m]tval` field, this field gets set here anyway - but only if
+instruction fetch didn't throw an exception for this instruction yet.
diff --git a/docs/_docs/arch/if_stage.md b/docs/_docs/arch/if_stage.md
index 71fb9a301..c96aa9cdb 100644
--- a/docs/_docs/arch/if_stage.md
+++ b/docs/_docs/arch/if_stage.md
@@ -3,3 +3,47 @@ title: Instruction Fetch
permalink: /docs/if_stage/
---
+The Instruction Fetch (IF) stage gets its information from the PC Gen
+stage. This includes branch prediction information (was it a predicted
+branch? what is the target address? was it predicted to be taken?),
+the current PC (word-aligned if it was a consecutive fetch) and
+whether this request is valid. The IF stage asks the MMU to do address
+translation on the requested PC and controls the I\$ (or just an
+instruction memory) interface.
+
+The delicate part of instruction fetch is that it is very timing
+critical. This fact prevents us from implementing a more elaborate
+handshake protocol (as round-trip times would be too large). Therefore
+the IF stage signals the I\$ interface that it wants to do a fetch
+request to memory. Depending on the cache's state this request may or
+may not be granted. If it was granted, the instruction fetch stage
+puts the request in an internal FIFO. It needs to do so as it has to
+know at any point in time how many transactions are outstanding. This
+is mostly due to the fact that instruction fetch happens on a very
+speculative basis because of branch prediction. It can always be the
+case that the controller decides to flush the instruction fetch stage,
+in which case it needs to discard all outstanding transactions.
+
+The current implementation allows for a maximum of two outstanding
+transactions. If there are more than two, the IF stage will simply not
+acknowledge any new request from PC Gen. As soon as a valid answer
+from memory returns (and the request is not considered outdated
+because of a flush) the answer is put into a FIFO together with the
+fetch address and the branch prediction information.
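The outstanding-transaction bookkeeping can be sketched as a small tracker. This is a toy model with illustrative names; the real implementation tracks requests in hardware FIFOs:

```python
# At most two fetch requests may be in flight; a flush marks them all
# as outdated so their answers are discarded when they return.
from collections import deque

class FetchTracker:
    MAX_OUTSTANDING = 2

    def __init__(self):
        self.inflight = deque()

    def can_request(self):
        return len(self.inflight) < self.MAX_OUTSTANDING

    def request(self, pc):
        assert self.can_request()       # else PC Gen is not acked
        self.inflight.append({"pc": pc, "outdated": False})

    def flush(self):                    # controller flushes IF
        for t in self.inflight:
            t["outdated"] = True

    def answer(self):                   # memory returns in order
        t = self.inflight.popleft()
        return None if t["outdated"] else t["pc"]

ft = FetchTracker()
ft.request(0x8000_0000); ft.request(0x8000_0004)
assert not ft.can_request()     # a third request is not acknowledged
ft.flush()
assert ft.answer() is None      # outdated answer is dropped
```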
+
+Together with the answer from memory the MMU will also signal potential
+exceptions. Therefore this is the first place where exceptions can
+potentially happen (bus errors, invalid accesses and instruction page
+faults).
+
+#### Fetch FIFO {#ssub:fetch_fifo}
+
+The fetch FIFO contains all requested (valid) fetches from instruction
+memory. The FIFO currently has one write port and two read ports (of
+which only one is used). In a future implementation the second read port
+could potentially be used to implement macro-op fusion or widen the
+issue interface to cover two instructions.
+
+The fetch FIFO also fully decouples the processor's front-end and its
+back-end. On a flush request the whole fetch FIFO is reset.
diff --git a/docs/_docs/arch/issue_stage.md b/docs/_docs/arch/issue_stage.md
index d856346ee..a89be1b43 100644
--- a/docs/_docs/arch/issue_stage.md
+++ b/docs/_docs/arch/issue_stage.md
@@ -3,3 +3,93 @@ title: Issue
permalink: /docs/issue_stage/
---
+The issue stage's purpose is to receive the decoded instructions and
+issue them to the various functional units. Furthermore the issue
+stage keeps track of all issued instructions and the functional unit
+status, and receives the write-back data from the execute stage. It
+also contains the CPU's register file. By using a data structure
+called a scoreboard it knows exactly which instructions are issued,
+which functional unit they are in and which register they will write
+back to. As previously mentioned, you can roughly divide execution
+into four parts: **1. issue**, **2. read operands**, **3. execute**
+and **4. write-back**. The issue stage handles steps one, two and
+four.
+
+
+
+#### Issue {#ssub:issue}
+
+When the issue stage gets a new decoded instruction it checks whether
+the required functional unit is free or will be free in the next
+cycle. Then it checks whether its source operands are available and
+whether no other, currently issued, instruction will write the same
+destination register. Furthermore it makes sure that no unresolved
+branch gets issued. The latter is mainly needed to simplify the
+hardware design: by only allowing one branch in flight we can easily
+back-track if we later find out that we've mis-predicted on it.
+
+By ensuring that the scoreboard only allows one instruction to write a
+certain destination register, the design of the forwarding path is
+eased significantly. The scoreboard has a combinational circuit which
+outputs the status of all 32 destination registers together with which
+functional unit will produce the outcome. This signal is called
+`rd_clobber`.
+
+The issue stage communicates with the various functional units
+independently. In particular this means that it has to monitor their
+ready and valid signals, and receive and store their write-back data
+unconditionally. It will always have enough space as it allocates a
+slot in the scoreboard for every issued instruction. This solves the
+potential structural hazards of smaller microprocessors. This modular
+design also allows exploring more advanced issuing techniques like
+out-of-order issue.
+
+The issuing of instructions happens in order, which means the order of
+program flow is naturally maintained. What can happen out of order is
+the write-back of each functional unit. Consider, for example, that
+the issue stage issues a multiplication which takes $n$ clock cycles
+to produce a valid result. In the next cycle the issue stage issues an
+ALU instruction like an addition. The addition will just take one
+clock cycle and therefore returns before the multiplication's result
+is ready. Because of this we need to assign IDs to the issued
+instructions. The ID represents the (unique) position in which the
+scoreboard will store the result of this instruction. The ID (called
+transaction ID) has enough bits to uniquely represent each slot in the
+scoreboard and is passed along with the other data to the
+corresponding functional unit.
+
+This scheme allows the functional units to operate in complete
+independence of the issue logic. They can return different transactions
+in different order. The scoreboard will know where to put them as long
+as the corresponding ID is signaled alongside the result. This scheme
+even allows the functional unit to buffer results and process them
+entirely out-of-order if it makes sense to them. This is a further
+example of how to efficiently decouple the different modules of a
+processor.
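The transaction-ID scheme can be sketched in a few lines. This is a simplified model (a 4-slot scoreboard, slot index used directly as the ID); the real scoreboard also tracks validity, operands and exceptions:

```python
# Out-of-order write-back into in-order scoreboard slots: the
# transaction ID is the slot index, so results can return in any
# order and still land in the right place.
scoreboard = [None] * 4                      # 4 slots -> 2-bit trans ID

def issue(free_slot_id, insn):
    return {"trans_id": free_slot_id, "insn": insn}

def write_back(trans_id, result):
    scoreboard[trans_id] = result            # slot chosen at issue time

mul = issue(0, "mul x3,x1,x2")               # long-latency
add = issue(1, "add x4,x1,x2")               # single-cycle
write_back(add["trans_id"], 30)              # ALU returns first...
write_back(mul["trans_id"], 200)             # ...multiplier later
assert scoreboard[:2] == [200, 30]           # both in the right slot
```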
+
+#### Read Operands {#ssub:read_operands}
+
+Reading operands physically happens in the same cycle as the issuing
+of instructions, but can conceptually be thought of as another stage.
+As the scoreboard knows which registers are getting written it can
+handle the forwarding of those operands if necessary. The design goal
+was to execute two ALU instructions back to back (i.e., with no bubble
+in between). The operands either come from the register file (if no
+other instruction currently in the scoreboard will write that
+register) or are forwarded by the scoreboard (by looking at the
+`rd_clobber` signal).
+
+The operand selection logic is a classical priority selection giving
+precedence to results from the scoreboard over the register file, as
+the functional unit will always produce the more up-to-date result. To
+obtain the right register value we need to poll the scoreboard for
+both source operands.
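The priority selection can be sketched as below. The dict-based `rd_clobber` and `sb_results` stand in for the scoreboard's combinational outputs and are illustrative only:

```python
# Operand selection: a pending result in the scoreboard (tracked via
# rd_clobber) beats the register file, since the functional unit holds
# the more up-to-date value.
def read_operand(rs, regfile, rd_clobber, sb_results):
    if rd_clobber.get(rs):              # a scheduled insn will write rs
        return sb_results[rs]           # forward from the scoreboard
    return regfile[rs]                  # otherwise the committed value

regfile = {5: 10}
assert read_operand(5, regfile, {}, {}) == 10          # from regfile
assert read_operand(5, regfile, {5: True}, {5: 99}) == 99  # forwarded
```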
+
+#### Scoreboard {#ssub:scoreboard}
+
+The scoreboard is implemented as a FIFO with one read and one write
+port, with valid and acknowledge signals. In addition, it provides the
+aforementioned signals which tell the rest of the CPU which registers
+are going to be clobbered by a previously scheduled instruction.
+Instruction decode writes directly to the scoreboard if it is not
+already full. The commit stage looks for finished instructions and
+updates the architectural state, which means either taking an
+exception or updating the register or CSR file.
+
diff --git a/docs/_docs/arch/pcgen_stage.md b/docs/_docs/arch/pcgen_stage.md
index ae9dc6a8f..7e7f88f1e 100644
--- a/docs/_docs/arch/pcgen_stage.md
+++ b/docs/_docs/arch/pcgen_stage.md
@@ -3,3 +3,119 @@ title: PC Generation
permalink: /docs/pcgen_stage/
---
+PC gen is responsible for generating the next program counter. All
+program counters are logical (virtual) addresses. If the
+logical-to-physical mapping changes, a `fence.vm` instruction must be
+used to flush the pipeline and the TLBs.
+
+This stage speculates on the branch target address as well as on
+whether the branch is taken or not. In addition, it houses the branch
+target buffer (BTB) and a branch history table (BHT).
+
+If the BTB recognizes a certain PC as a branch, the BHT decides
+whether the branch is taken or not. Because of the various stateful
+memory components, this stage is split into two pipeline stages. PC
+Gen communicates with the IF stage via a handshake: instruction fetch
+signals its readiness with an asserted ready signal, while PC Gen
+signals a valid request by asserting the `fetch_valid` signal.
+
+The next PC can originate from the following sources (listed in order of
+precedence):
+
+1. **Default assignment**: The default assignment is to fetch PC + 4.
+ PC Gen always fetches on a word boundary (32-bit). Compressed
+ instructions are handled in a later pipeline step.
+
+2. **Branch Predict**: If the BHT and BTB predict a branch on a certain
+   PC, PC Gen sets the next PC to the predicted address and also
+   informs the IF stage that it performed a prediction on this PC.
+   This information is needed in various places further down the
+   pipeline (for example to correct a mis-prediction). Branch
+   information passed down the pipeline is encapsulated in a structure
+   called `branchpredict_sbe_t`, in contrast to branch prediction
+   information passed up the pipeline, which is simply called
+   `branchpredict_t` and is used for corrective actions (see the next
+   bullet point). This naming convention should make it easy to follow
+   the flow of branch information in the source code.
+
+3. **Control flow change request**: A control flow change request
+   arises when the branch predictor mis-predicted. This can either be
+   a 'real' mis-prediction or a branch which was not recognized as
+   one. In either case we need to take corrective action and start
+   fetching from the correct address.
+
+4. **Return from environment call**: A return from an environment call
+   sets the next PC to the address stored in the `[m|s]epc` register.
+
+5. **Exception/Interrupt**: If an exception (or interrupt, which is
+   quite similar in the context of RISC-V systems) occurs, PC Gen sets
+   the next PC to the trap vector base address. The trap vector base
+   address can differ depending on whether the exception traps to
+   S-mode or M-mode (user-mode exceptions are currently not
+   supported). It is the purpose of the CSR unit to figure out which
+   mode to trap to and to present the correct address to PC Gen.
+
+6. **Pipeline Flush because of CSR side effects**: When a CSR with
+   side effects is written, we need to flush the whole pipeline and
+   start fetching from the next instruction again in order to take the
+   updated information into account (for example when the virtual
+   memory base pointer changes).
+
+7. **Debug**: Debug has the highest precedence as it can interrupt any
+   control flow request. It is also the only source of control flow
+   change that can happen simultaneously with any of the other forced
+   control flow changes. The debug unit reports both the request to
+   change the PC and the PC the CPU should change to.
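
The precedence above amounts to a priority multiplexer. A compact behavioural sketch, with illustrative signal names (not the RTL's actual ports):

```python
def next_pc(pc, btb_hit, btb_target, mispredict, correct_target,
            eret, epc, exception, trap_vector_base,
            flush_csr, commit_pc, debug_req, debug_pc):
    """Select the next PC; each later check overrides the earlier
    ones, so the last condition tested (debug) has highest priority."""
    npc = pc + 4                # 1. default: fetch the next word
    if btb_hit:
        npc = btb_target        # 2. predicted taken branch
    if mispredict:
        npc = correct_target    # 3. corrective control flow change
    if eret:
        npc = epc               # 4. return from environment call
    if exception:
        npc = trap_vector_base  # 5. trap to the vector base address
    if flush_csr:
        npc = commit_pc + 4     # 6. re-fetch after a CSR side effect
    if debug_req:
        npc = debug_pc          # 7. debug wins over everything
    return npc
```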
+
+This unit also takes care of a signal called `fetch_enable`, whose
+purpose is to prevent fetching when it is not asserted. Also note that
+no flushing takes place in this unit; all flush information is
+distributed by the controller. In fact, the controller's only purpose
+is to flush the different pipeline stages.
+
+#### Branch Prediction {#ssub:branch_prediction}
+
+All branch prediction data structures reside in a single
+register-file-like data structure. It is indexed with the appropriate
+number of bits of the PC and contains information about the predicted
+target address as well as the state of a configurable-width saturating
+counter (two bits by default). The prediction result is used in the
+subsequent stage to jump (or not).
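
The default two-bit saturating counter behaves as follows. This is a generic sketch of the standard scheme; Ariane's RTL may differ in detail:

```python
def update_counter(counter, taken, bits=2):
    """Saturating increment on a taken branch, decrement otherwise."""
    max_val = (1 << bits) - 1
    if taken:
        return min(counter + 1, max_val)
    return max(counter - 1, 0)

def predict_taken(counter, bits=2):
    """The MSB of the counter is the taken/not-taken prediction."""
    return counter >= (1 << (bits - 1))
```

Because the counter saturates, a single anomalous outcome in a long run of taken branches does not immediately flip the prediction.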
+
+In addition to providing the prediction result, the BTB also updates
+its information on mis-predictions. It either corrects the saturating
+counter or clears the branch prediction entry. The latter is done when
+the branch unit sees that the predicted PC did not match, or when an
+instruction with privilege-changing side effects commits.
+
+The branch outcome and the branch target address are calculated in the
+same functional unit, so a mis-prediction on the target address is as
+costly as a mis-prediction on the branch decision. As the branch unit
+(the functional unit which does all the branch handling) is already
+quite critical in terms of timing, this is a potential area for
+improvement.
+
+As Ariane fully implements the compressed instruction set, branches
+can also occur on 16-bit (half-word) instructions. As supporting
+half-word granularity would significantly increase the size of the
+BTB, the BTB is indexed with a word-aligned PC. This brings the
+potential drawback that branch prediction always mis-predicts on an
+instruction fetch word which contains two compressed branches.
+However, such a case should be rare in practice.
+
+A trick we play here is to index the BTB of an un-aligned instruction
+with the next word-aligned PC (i.e., the word-aligned PC of the upper
+16 bit of this instruction). This naturally allows the IF stage to
+fetch all necessary instruction data. It will actually fetch two
+additional unused bytes, which are then discarded by the instruction
+re-aligner. For that reason we also need to keep an additional bit
+indicating whether the instruction lies in the lower or upper 16 bit
+of the fetch word.
+
+For branch prediction, a potential source of unnecessary pipeline
+bubbles is aliasing. To prevent aliasing (or at least make it less
+likely), a couple of tag bits (the upper bits of the indexed PC) are
+stored and compared on every access. This trade-off is necessary
+because we lack sufficiently fast SRAMs which could host the BTB;
+instead we are forced to use registers, which have a significantly
+larger impact on overall area and power consumption.
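
Putting the word-aligned indexing and the tag check together, a sketch of the lookup might look like this. The index width and field split are example choices, not Ariane's exact parameters:

```python
INDEX_BITS = 6  # example: a 64-entry BTB

def btb_fields(pc):
    """Split a PC into the word-aligned BTB index and the tag used to
    reduce aliasing. PC bit [1] records upper/lower half-word placement."""
    word_pc = pc >> 2                        # word-aligned: drop bits [1:0]
    index = word_pc & ((1 << INDEX_BITS) - 1)
    tag = word_pc >> INDEX_BITS
    is_upper_16 = bool(pc & 0b10)            # compressed branch in upper half?
    return index, tag, is_upper_16

def btb_lookup(btb, pc):
    """Only report a hit when the stored tag matches, so two PCs that
    alias to the same index cannot steal each other's prediction."""
    index, tag, _ = btb_fields(pc)
    entry = btb.get(index)
    if entry is not None and entry["tag"] == tag:
        return entry["target"]
    return None

btb = {}
idx, tag, upper = btb_fields(0x80000042)     # compressed branch, upper half
btb[idx] = {"tag": tag, "target": 0x80001000}
```

Note that both half-words of one fetch word map to the same entry, while a PC that merely shares the index bits is rejected by the tag compare.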
diff --git a/docs/img/branch_prediction.pdf b/docs/img/branch_prediction.pdf
new file mode 100644
index 000000000..b4399f5ee
--- /dev/null
+++ b/docs/img/branch_prediction.pdf