docs update

Blaise Tine 2021-10-19 16:32:31 -04:00
parent fe862f64b1
commit 956f3d1880
14 changed files with 19 additions and 15 deletions

[Binary diffs: four image files changed, each with identical width, height, and size before and after (60 KiB, 77 KiB, 67 KiB, and 517 KiB), consistent with the ./images/ to ./assets/img/ relocation in the text diffs below.]

@@ -8,7 +8,7 @@ The Vortex Cache Sub-system has the following main properties:
 ### Cache Hierarchy
-![Image of Cache Hierarchy](./images/cache_hierarchy.png)
+![Image of Cache Hierarchy](./assets/img/cache_hierarchy.png)
 - Cache can be configured to be any level in the hierarchy
 - Caches communicate via snooping
@@ -18,7 +18,7 @@ The Vortex Cache Sub-system has the following main properties:
 VX_cache.v is the top module of the cache Verilog code, located in the `/hw/rtl/cache` directory.
-![Image of Vortex Cache](./images/vortex_cache_top_module.png)
+![Image of Vortex Cache](./assets/img/vortex_cache_top_module.png)
 - Configurable (cache size, number of banks, bank line size, etc.)
 - I/O signals
@@ -44,7 +44,7 @@ VX_cache.v is the top module of the cache Verilog code, located in the `/hw/rtl/cache` directory.
 VX_bank.v is the Verilog code that handles cache bank functionality, located in the `/hw/rtl/cache` directory.
-![Image of Vortex Cache Bank](./images/vortex_bank.png)
+![Image of Vortex Cache Bank](./assets/img/vortex_bank.png)
 - Allows for high throughput
 - Each bank contains queues to hold requests to the cache

@@ -19,7 +19,7 @@ OPAE Build Configuration
 Within the `/hw/syn/opae` directory, there are source text files for each core option of the FPGA build (the 32 and 64 core options are not currently implemented), with the following configurable parameters:
 - NUM_CORES: the number of cores per cluster
 - NUM_CLUSTERS: the number of clusters allotted to the processor
 - L3_ENABLE: enable the use of the L3 cache
 - L2_ENABLE: enable the use of the L2 cache
 - PERF_ENABLE: enable the use of all profile counters
 To enable the L3 cache and profile counters for a build, simply uncomment the corresponding definition within the respective source file.
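
The "uncomment the definition" step above can be sketched concretely. This is a hypothetical illustration only: the file name (sources_4c.txt), the comment marker, and the `+define+` macro syntax are assumptions, not taken from the repository; check the actual source text files under `/hw/syn/opae`:

    $ cd hw/syn/opae
    $ grep 'ENABLE' sources_4c.txt      # hypothetical file name
    # +define+L3_ENABLE                 <- remove the leading '#' to enable the L3 cache
    # +define+PERF_ENABLE               <- remove the leading '#' to enable profile counters

After uncommenting, rerun the build so the new definitions take effect.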
@@ -33,41 +33,45 @@ The FPGA has the following configuration options:
 - 4 cores fpga (fpga-4c)
 - 8 cores fpga (fpga-8c)
 - 16 cores fpga (fpga-16c)
 - 32 cores fpga (fpga-32c)
 - 64 cores fpga (fpga-64c)
 Command line:
 $ cd hw/syn/opae
-$ make fpga- *# of cores* c
+$ make fpga-<num-of-cores>c
 Example: `make fpga-4c`
-A new folder (ex: `build_fpga_4c`) will be created and the build will start and take ~30-45 min to complete.
+A new folder (ex: `build_fpga_4c`) will be created and the build will start and take ~30-480 min to complete.
 OPAE Build Progress
 -------------------
 You can check the last 10 lines in the build log for possible errors until build completion.
-$ tail -n 10 ./build_fpga_4c/build.log
+$ tail -n 10 ./build_fpga_<num-of-cores>c/build.log
 Check if the build is still running by looking for the quartus_sh, quartus_syn, or quartus_fit programs.
-$ ps -u *username*
+$ ps -u <username>
 If the build fails and you need to restart it, clean up the build folder using the following command:
-$ make clean-fpga- *# of cores* c
+$ make clean-fpga-<num-of-cores>c
 Example: `make clean-fpga-4c`
 The file `vortex_afu.gbs` should exist when the build is done:
-$ ls -lsa ./build_fpga_ *# of cores* c/vortex_afu.gbs
+$ ls -lsa ./build_fpga_<num-of-cores>c/vortex_afu.gbs
 Signing the bitstream and Programming the FPGA
 ----------------------------------------------
-$ cd ./build_fpga_`# of cores`c/
+$ cd ./build_fpga_<num-of-cores>c
 $ PACSign PR -t UPDATE -H openssl_manager -i vortex_afu.gbs -o vortex_afu_unsigned_ssl.gbs
 $ fpgasupdate vortex_afu_unsigned_ssl.gbs
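
Collecting the commands from the hunk above, an end-to-end pass for the 4-core option looks like this (only the command sequence comes from the docs; the inline comments are editorial):

    $ cd hw/syn/opae
    $ make fpga-4c                             # starts the ~30-480 min build
    $ tail -n 10 ./build_fpga_4c/build.log     # spot-check for errors while it runs
    $ ls -lsa ./build_fpga_4c/vortex_afu.gbs   # bitstream present => build finished
    $ cd ./build_fpga_4c
    $ PACSign PR -t UPDATE -H openssl_manager -i vortex_afu.gbs -o vortex_afu_unsigned_ssl.gbs
    $ fpgasupdate vortex_afu_unsigned_ssl.gbs  # program the FPGA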

@@ -21,10 +21,10 @@
 Running Vortex simulators with different configurations:
 - Run a basic driver test with the rtlsim driver and a Vortex config of 2 clusters, 2 cores, 2 warps, 4 threads
-$ ./ci/blackbox.sh --clusters=2 --cores=2 --warps=2 --threads=4 --driver=rtlsim --app=basic
+$ ./ci/blackbox.sh --driver=rtlsim --clusters=2 --cores=2 --warps=2 --threads=4 --app=basic
 - Run a demo driver test with the vlsim driver and a Vortex config of 1 cluster, 4 cores, 4 warps, 2 threads
-$ ./ci/blackbox.sh --clusters=1 --cores=4 --warps=4 --threads=2 --driver=vlsim --app=demo
+$ ./ci/blackbox.sh --driver=vlsim --clusters=1 --cores=4 --warps=4 --threads=2 --app=demo
 - Run a dogfood driver test with the simx driver and a Vortex config of 4 clusters, 4 cores, 8 warps, 6 threads
-$ ./ci/blackbox.sh --clusters=4 --cores=4 --warps=8 --threads=6 --driver=simx --app=dogfood
+$ ./ci/blackbox.sh --driver=simx --clusters=4 --cores=4 --warps=8 --threads=6 --app=dogfood

@@ -32,7 +32,7 @@ Vortex uses the SIMT (Single Instruction, Multiple Threads) execution model with
 ### Vortex Pipeline/Datapath
-![Image of Vortex Microarchitecture](./images/vortex_microarchitecture_v2.png)
+![Image of Vortex Microarchitecture](./assets/img/vortex_microarchitecture_v2.png)
 Vortex has a 5-stage pipeline: FI | ID | Issue | EX | WB.