Merge branch 'develop' of https://github.com/vortexgpgpu/vortex into develop

Blaise Tine 2024-01-28 00:36:41 -08:00
commit eac6a485fa
14 changed files with 200 additions and 85 deletions

@@ -33,6 +33,7 @@ Vortex is a full-stack open-source RISC-V GPGPU.
- `miscs`: Miscellaneous resources.
## Build Instructions
More detailed build instructions can be found [here](docs/install_vortex.md).
### Supported OS Platforms
- Ubuntu 18.04
- CentOS 7

@@ -1,71 +1,45 @@
# Environment Setup
These instructions apply to the development vortex repo using the updated toolchain. The updated toolchain is considered to be any commit of `master` pulled from July 2, 2023 onwards. The toolchain update in question can be viewed in this [commit](https://github.com/vortexgpgpu/vortex-dev/commit/0048496ba28d7b9a209a0e569d52d60f2b68fc04). If you are unsure whether you are using the new toolchain, check the `ci` folder for the existence of the `toolchain_prebuilt.sh` script; you should also notice that the `toolchain_install.sh` script has the legacy `llvm()` split into `llvm-vortex()` and `llvm-pocl()`.
> Note: As it stands right now, there are a few test suites that are not working due to this toolchain migration. We are working to determine an exact list of which ones work and which do not. For now, if the repo builds at a minimum, you can consider all of these steps to have worked successfully.
## Choosing a Development Environment
There are four primary environments you can use. Each has its own pros and cons. Refer to this section to help you determine which environment best suits your needs.
1. Volvo
2. Nio
3. Local
4. Docker
## Set Up on Your Own System
The toolchain binaries provided with Vortex are built on Ubuntu-based systems. To install Vortex on your own system, [follow these instructions](install_vortex.md).
## Servers for Georgia Tech Students and Collaborators
### Volvo
Volvo is a 64-core server provided by HPArch at Georgia Tech. It provides high-performance compute, but you need valid credentials to access it. If you don't already have access, get in contact with your mentor about setting up an account.
Setup on Volvo:
1. Connect to Georgia Tech's VPN or ssh into another machine on campus
2. `ssh volvo.cc.gatech.edu`
3. Clone Vortex to your home directory: `git clone --recursive https://github.com/vortexgpgpu/vortex.git`
4. `source /nethome/software/set_vortex_env.sh` to set up the necessary environment variables.
5. `make -s` in the `vortex` root directory
6. Run a test program: `./ci/blackbox.sh --cores=2 --app=dogfood`
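Putting the steps together, a typical first session looks like this (a sketch, assuming you already have Volvo credentials and an active VPN connection):
```
# from your local machine, on the Georgia Tech VPN
ssh volvo.cc.gatech.edu

# on Volvo: fetch the sources, set up the environment, build, and smoke-test
git clone --recursive https://github.com/vortexgpgpu/vortex.git
cd vortex
source /nethome/software/set_vortex_env.sh
make -s
./ci/blackbox.sh --cores=2 --app=dogfood
```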
Pros:
1. Native x86_64 architecture, AMD EPYC 7702P 64-core processor (*fast*)
2. Packages and difficult configuration steps are already done for you
3. The same environment as other contributors, allowing for easier troubleshooting
4. You only need to SSH into Volvo, so the impact on local computer resources is minimal
5. VS Code's remote development tools work phenomenally over SSH
Cons:
1. Volvo is accessed via the Georgia Tech VPN; external contributors might encounter issues with it, especially from other university networks
2. Account creation is not immediate and is subject to processing time
3. Volvo might have outages (*pretty uncommon*)
4. SSH development requires an internet connection and remote development tooling (*VS Code works!*)
### Nio
Nio is a 20-core desktop server provided by HPArch. If you have access to Volvo, you also have access to Nio.
Setup on Nio:
1. Connect to Georgia Tech's VPN or ssh into another machine on campus
2. `ssh nio.cc.gatech.edu`
3. Clone Vortex to your home directory: `git clone --recursive https://github.com/vortexgpgpu/vortex.git`
4. `source /opt/set_vortex_env_dev.sh` to set up the necessary environment variables.
5. `make -s` in the `vortex` root directory
6. Run a test program: `./ci/blackbox.sh --cores=2 --app=dogfood`
## Local
You can reverse engineer the dockerfile (see the Docker section below) and the setup scripts above to get a working environment locally. This option is for experienced users who have already weighed the pros and cons of the other environments.
## Docker (Experimental)
Docker allows isolated pre-built environments to be created, shared, and used. Containers are much more resource-efficient than a virtual machine, and good tooling and support are available. The main motivation for Docker is bringing a consistent development environment to your local computer, across all platforms. Note that the dockerfile is currently not included with the official vortex repository and is not actively maintained or supported.
Pros:
1. If you are on native x86_64, the container also runs natively, yielding better performance; if you have an aarch64 (ARM) processor, you can still run the container without configuration changes
2. The same environment as other contributors, allowing for easier troubleshooting
3. Works out of the box; you just need a working installation of Docker
4. Vortex uses a build system, so once you build the repo, only new code changes need to be recompiled
5. Docker offers helpful tools and extensions to monitor the performance of your container
Cons:
1. On an ARM processor the container runs in emulation mode, which is inherently slower since every x86_64 instruction must be translated; it is still usable on Apple Silicon, however
2. You are limited to your computer's performance, and Vortex is a large repo to build
3. It will use a few gigabytes of storage on your computer for the binaries needed to run the container
### Setup with Docker
1. Clone repo recursively onto your local machine: `git clone --recursive https://github.com/vortexgpgpu/vortex.git`
2. Download the dockerfile from [here](https://github.gatech.edu/gist/usubramanya3/f1bf3e953faa38a6372e1292ffd0b65c) and place it in the root of the repo.
3. Build the Dockerfile into an image: `docker build --platform=linux/amd64 -t vortex -f dockerfile .`
4. Run a container based on the image: `docker run --rm -v ./:/root/vortex/ -it --name vtx-dev --privileged=true --platform=linux/amd64 vortex`
5. Install the toolchain: `./ci/toolchain_install.sh --all` (once per container)
6. `make -s` in `vortex` root directory
7. Run a test program: `./ci/blackbox.sh --cores=2 --app=dogfood`
### Additional Docker Commands
You may exit from a container (exiting does not stop or remove it) and resume a container you have exited, or start a second terminal session, with `docker exec -it <container-name> bash`.
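For example, to open a second shell in the running container from the host (a sketch; `vtx-dev` is the container name used in the `docker run` command above):
```
# list running containers to confirm the name
docker ps
# open another bash session inside the running container
docker exec -it vtx-dev bash
```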

@@ -13,7 +13,8 @@
## Installation
- Refer to the build instructions in [README](../README.md).
- For the different environments Vortex supports, [read this document](environment_setup.md).
- To install on your own system, [follow this document](install_vortex.md).
## Quick Start Scenarios
@@ -28,4 +29,4 @@ Running Vortex simulators with different configurations:
- Run dogfood driver test with simx driver and a Vortex config of 4 clusters, 4 cores, 8 warps, 6 threads
$ ./ci/blackbox.sh --driver=simx --clusters=4 --cores=4 --warps=8 --threads=6 --app=dogfood

docs/install_vortex.md (new file)

@@ -0,0 +1,124 @@
# Installing and Setting Up the Vortex Environment
## Ubuntu 18.04, 20.04
1. Install the following dependencies:
```
sudo apt-get install build-essential zlib1g-dev libtinfo-dev libncurses5 uuid-dev libboost-serialization-dev libpng-dev libhwloc-dev
```
2. Upgrade gcc to 11:
```
sudo apt-get install gcc-11 g++-11
```
Multiple gcc versions on Ubuntu can be managed with update-alternatives, e.g.:
```
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 9
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 11
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 11
```
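With both versions registered, you can switch the active compiler interactively (a sketch using standard `update-alternatives` behavior):
```
# pick the default gcc/g++ among the registered alternatives
sudo update-alternatives --config gcc
sudo update-alternatives --config g++
# confirm the active version
gcc --version
```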
3. Download the Vortex codebase:
```
git clone --recursive https://github.com/vortexgpgpu/vortex.git
```
4. Install Vortex's prebuilt toolchain:
```
cd vortex
sudo ./ci/toolchain_install.sh -all
# By default, the toolchain installs to the /opt folder. This is recommended, but you can install it to a different directory by setting DESTDIR.
DESTDIR=$TOOLDIR ./ci/toolchain_install.sh -all
```
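For example, to install the toolchain under a prefix in your home directory instead of `/opt` (a sketch; `$HOME/tools` is an arbitrary choice):
```
# choose an install prefix and remember it for the environment setup below
export TOOLDIR=$HOME/tools
mkdir -p $TOOLDIR
DESTDIR=$TOOLDIR ./ci/toolchain_install.sh -all
```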
5. Set up environment:
```
export VORTEX_HOME=$TOOLDIR/vortex
export LLVM_VORTEX=$TOOLDIR/llvm-vortex
export LLVM_POCL=$TOOLDIR/llvm-pocl
export POCL_CC_PATH=$TOOLDIR/pocl/compiler
export POCL_RT_PATH=$TOOLDIR/pocl/runtime
export RISCV_TOOLCHAIN_PATH=$TOOLDIR/riscv-gnu-toolchain
export VERILATOR_ROOT=$TOOLDIR/verilator
export SV2V_PATH=$TOOLDIR/sv2v
export YOSYS_PATH=$TOOLDIR/yosys
export PATH=$YOSYS_PATH/bin:$SV2V_PATH/bin:$VERILATOR_ROOT/bin:$PATH
```
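A quick sanity check that the environment is wired up (a sketch, assuming the exports above and a completed toolchain install):
```
# these should print version strings from $TOOLDIR, not system defaults
$LLVM_VORTEX/bin/clang --version
verilator --version
yosys -V
```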
6. Build Vortex
```
make
```
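Once the build finishes, you can run the same smoke test used elsewhere in these docs to confirm everything works:
```
# run the dogfood regression with 2 cores on the default simulator
./ci/blackbox.sh --cores=2 --app=dogfood
```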
## RHEL 8
Note: depending on the system, some of the toolchain may need to be recompiled for non-Ubuntu Linux distributions. The sources for the tools can be found [here](https://github.com/vortexgpgpu/).
1. Install the following dependencies:
```
sudo yum install libpng-devel boost boost-devel boost-serialization libuuid-devel opencl-headers hwloc hwloc-devel gmp-devel compat-hwloc1
```
2. Upgrade gcc to 11:
```
sudo yum install gcc-toolset-11
```
Multiple gcc versions on Red Hat can be managed with `scl`.
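For example, to enter a shell with gcc 11 active (standard `scl` usage for the toolset installed above):
```
# start a new shell with gcc-toolset-11 on PATH
scl enable gcc-toolset-11 bash
gcc --version
```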
3. Install MPFR 4.2.0:
Download [the source](https://ftp.gnu.org/gnu/mpfr/) and follow [the installation documentation](https://www.mpfr.org/mpfr-current/mpfr.html#How-to-Install).
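A typical source build follows the standard autotools flow (a sketch; the tarball name assumes version 4.2.0 from the GNU mirror linked above):
```
# download, build, and install MPFR 4.2.0
wget https://ftp.gnu.org/gnu/mpfr/mpfr-4.2.0.tar.xz
tar xf mpfr-4.2.0.tar.xz
cd mpfr-4.2.0
./configure
make
make check   # optional but recommended
sudo make install
```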
4. Download the Vortex codebase:
```
git clone --recursive https://github.com/vortexgpgpu/vortex.git
```
5. Install Vortex's prebuilt toolchain:
```
cd vortex
sudo ./ci/toolchain_install.sh -all
# By default, the toolchain installs to the /opt folder. This is recommended, but you can install it to a different directory by setting DESTDIR.
DESTDIR=$TOOLDIR ./ci/toolchain_install.sh -all
```
6. Set up environment:
```
export VORTEX_HOME=$TOOLDIR/vortex
export LLVM_VORTEX=$TOOLDIR/llvm-vortex
export LLVM_POCL=$TOOLDIR/llvm-pocl
export POCL_CC_PATH=$TOOLDIR/pocl/compiler
export POCL_RT_PATH=$TOOLDIR/pocl/runtime
export RISCV_TOOLCHAIN_PATH=$TOOLDIR/riscv-gnu-toolchain
export VERILATOR_ROOT=$TOOLDIR/verilator
export SV2V_PATH=$TOOLDIR/sv2v
export YOSYS_PATH=$TOOLDIR/yosys
export PATH=$YOSYS_PATH/bin:$SV2V_PATH/bin:$VERILATOR_ROOT/bin:$PATH
export LD_LIBRARY_PATH=<path to mpfr>/src/.libs:$LD_LIBRARY_PATH
```
7. Build Vortex
```
make
```

@@ -191,6 +191,10 @@
`define STALL_TIMEOUT (100000 * (1 ** (`L2_ENABLED + `L3_ENABLED)))
`endif
`ifndef SV_DPI
`define DPI_DISABLE
`endif
`ifndef FPU_FPNEW
`ifndef FPU_DSP
`ifndef FPU_DPI

@@ -14,7 +14,7 @@
`ifndef VX_PLATFORM_VH
`define VX_PLATFORM_VH
`ifndef SYNTHESIS
`ifdef SV_DPI
`include "util_dpi.vh"
`endif

@@ -12,6 +12,7 @@
// limitations under the License.
`include "VX_define.vh"
`include "VX_trace.vh"
module VX_dispatch import VX_gpu_pkg::*; #(
parameter CORE_ID = 0

@@ -308,13 +308,20 @@ module VX_schedule import VX_gpu_pkg::*; #(
localparam GNW_WIDTH = `LOG2UP(`NUM_CLUSTERS * `NUM_CORES * `NUM_WARPS);
reg [`UUID_WIDTH-1:0] instr_uuid;
wire [GNW_WIDTH-1:0] g_wid = (GNW_WIDTH'(CORE_ID) << `NW_BITS) + GNW_WIDTH'(schedule_wid);
`ifdef SV_DPI
always @(posedge clk) begin
if (reset) begin
instr_uuid <= `UUID_WIDTH'(dpi_uuid_gen(1, 0, 0));
end else if (schedule_fire) begin
instr_uuid <= `UUID_WIDTH'(dpi_uuid_gen(0, 32'(g_wid), 64'(schedule_pc)));
end
end
`else
wire [GNW_WIDTH+16-1:0] w_uuid = {g_wid, 16'(schedule_pc)};
always @(*) begin
instr_uuid = `UUID_WIDTH'(w_uuid);
end
`endif
`else
wire [`UUID_WIDTH-1:0] instr_uuid = '0;
`endif

@@ -14,9 +14,7 @@
`ifndef VX_TRACE_VH
`define VX_TRACE_VH
`ifndef SYNTHESIS
`include "VX_define.vh"
`ifdef SIMULATION
task trace_ex_type(input int level, input [`EX_BITS-1:0] ex_type);
case (ex_type)

@@ -16,7 +16,7 @@
`include "VX_define.vh"
`ifndef SYNTHESIS
`ifdef SV_DPI
`include "float_dpi.vh"
`endif

@@ -56,17 +56,17 @@ TARGET=asesim make -C runtime/opae
PREFIX=build_base CONFIGS="-DEXT_F_DISABLE -DL1_DISABLE -DSM_DISABLE -DNUM_WARPS=2 -DNUM_THREADS=2" TARGET=asesim make
# ASE test runs
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/regression/basic/basic -n1 -t0
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/regression/basic/basic -n1 -t1
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/regression/basic/basic -n16
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/regression/demo/demo -n16
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/regression/dogfood/dogfood -n16
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/opencl/vecadd/vecadd
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/opencl/sgemm/sgemm -n4
# modify "vsim_run.tcl" to dump VCD trace
vcd file trace.vcd
vcd add -r /*/Vortex/hw/rtl/*
vcd add -r /*/afu/*
run -all
# compress FPGA output files

@@ -15,27 +15,27 @@
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
BUILD_DIR=$(realpath $1)
PROGRAM=$(basename "$2")
PROGRAM_DIR=`dirname $2`
POCL_RT_PATH=$TOOLDIR/pocl/runtime
VORTEX_RT_PATH=$SCRIPT_DIR/../../../../runtime
# Export ASE_WORKDIR variable
export ASE_WORKDIR=$BUILD_DIR/synth/work
# cleanup incomplete runs
rm -f $ASE_WORKDIR/.app_lock.pid
rm -f $ASE_WORKDIR/.ase_ready.pid
rm -f $BUILD_DIR/synth/nohup.out
# Start simulator in background (capture process group pid)
pushd $BUILD_DIR/synth
echo " [DBG] starting ASE simulator (stdout saved to '$BUILD_DIR/synth/nohup.out')"
setsid make sim &> /dev/null &
SIM_PID=$!
popd
# Wait for simulator readiness
@@ -47,6 +47,11 @@ done
# run application
pushd $PROGRAM_DIR
shift 2
echo " [DBG] running ./$PROGRAM $*"
ASE_LOG=0 LD_LIBRARY_PATH=$POCL_RT_PATH/lib:$VORTEX_RT_PATH/opae:$LD_LIBRARY_PATH ./$PROGRAM $*
popd
# stop the simulator (kill process group)
kill -- -$(ps -o pgid= $SIM_PID | grep -o '[0-9]*')
wait $SIM_PID 2> /dev/null
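For reference, the script takes the ASE build directory and a test binary followed by the test's own arguments, as in the examples from the README above:
```
# run the basic regression (1 thread, test 0) against the ASE build
./run_ase.sh build_base_arria10_asesim_1c ../../../../tests/regression/basic/basic -n1 -t0
```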

@@ -75,7 +75,7 @@ TOP = vortex_afu_shim
VL_FLAGS += --language 1800-2009 --assert -Wall -Wpedantic
VL_FLAGS += -Wno-DECLFILENAME -Wno-REDEFMACRO
VL_FLAGS += --x-initial unique --x-assign unique
VL_FLAGS += -DSIMULATION -DSV_DPI
VL_FLAGS += -DXLEN_$(XLEN)
VL_FLAGS += $(CONFIGS)
VL_FLAGS += verilator.vlt

@@ -56,7 +56,7 @@ VL_FLAGS += --language 1800-2009 --assert -Wall -Wpedantic
VL_FLAGS += -Wno-DECLFILENAME -Wno-REDEFMACRO
VL_FLAGS += --x-initial unique --x-assign unique
VL_FLAGS += verilator.vlt
VL_FLAGS += -DSIMULATION -DSV_DPI
VL_FLAGS += -DXLEN_$(XLEN)
VL_FLAGS += $(CONFIGS)
VL_FLAGS += $(RTL_INCLUDE)