Farhan Ali Shah 542fe39adc
Adding support for ZCMT Extension for Code-Size Reduction in CVA6 (#2659)
## Introduction
This PR implements the ZCMT extension in the CVA6 core, targeting 32-bit embedded-class platforms. ZCMT is a code-size reduction feature that uses compressed table-jump instructions (cm.jt and cm.jalt) to reduce code size in embedded systems.
**Note:** Due to implementation complexity, the ZCMT extension primarily targets embedded-class CPUs. It is also not compatible with architecture class profiles (Ref. [Unprivileged spec 27.20](https://drive.google.com/file/d/1uviu1nH-tScFfgrovvFCrj7Omv8tFtkp/view)).

## Key additions

- Added a zcmt_decoder module for the compressed table-jump instructions cm.jt (jump via table) and cm.jalt (jump-and-link via table)

- Implemented the Jump Vector Table (JVT) CSR in the csr_reg module to store the base address of the jump table

- Implemented a return address stack in the zcmt_decoder module: cm.jalt pushes the return address onto the stack, so it behaves like jal ra (jump-and-link with return address)
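
The link behaviour of cm.jalt can be sketched as follows. This is an illustrative Python model, not the RTL; the class and function names are hypothetical. The return address is pc + 2 because the instruction is 16 bits wide.

```python
class ReturnAddressStack:
    """Minimal model of the return address stack used for cm.jalt."""

    def __init__(self):
        self._stack = []

    def push(self, addr):
        self._stack.append(addr)

    def pop(self):
        return self._stack.pop() if self._stack else None


def execute_cm_jalt(pc, target, ras):
    """Model cm.jalt: link the next sequential address, then jump."""
    ras.push(pc + 2)  # cm.jalt is a 16-bit instruction, so the link is pc + 2
    return target     # next PC is the address read from the jump table
```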

## Implementation in CVA6
The implementation of the ZCMT extension involves the following major modifications:

### Compressed decoder
The compressed decoder scans for the cm.jt and cm.jalt instructions and generates signals indicating that the instruction is both compressed and a ZCMT instruction.
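
As a rough illustration (a Python sketch, not the RTL), the detection can be modelled from the ratified Zc encoding, in which cm.jt/cm.jalt occupy the C2-quadrant encoding with funct3 = 101 and bits [12:10] = 000, and the 8-bit table index sits in bits [9:2]; indices below 32 select cm.jt, the rest cm.jalt:

```python
def is_zcmt_instr(instr16: int) -> bool:
    """True if the 16-bit word matches the cm.jt/cm.jalt encoding.

    Fixed bits per the Zc spec: [15:13] = 101, [12:10] = 000, [1:0] = 10.
    """
    return (instr16 & 0xFC03) == 0xA002


def zcmt_mnemonic(instr16: int) -> str:
    """Classify a ZCMT instruction by its table index (bits [9:2])."""
    index = (instr16 >> 2) & 0xFF
    return "cm.jt" if index < 32 else "cm.jalt"
```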

### zcmt_decoder
A new zcmt_decoder module was introduced to decode the cm.jt and cm.jalt instructions, fetch the base address of the jump table from the JVT CSR, extract the index, and construct a jump instruction, ensuring efficient integration of the ZCMT extension on embedded platforms. Table 1 shows the IO ports of the zcmt_decoder module; a high-level block diagram of the ZCMT implementation in CVA6 is shown in Figure 1.

_Table 1: IO ports of the zcmt_decoder module_
Signals | IO | Description | Connection | Type
-- | -- | -- | -- | --
clk_i | in | Subsystem clock | SUBSYSTEM | logic
rst_ni | in | Asynchronous reset, active low | SUBSYSTEM | logic
instr_i | in | Instruction in | compressed_decoder | logic [31:0]
pc_i | in | Current PC | PC from FRONTEND | logic [CVA6Cfg.VLEN-1:0]
is_zcmt_instr_i | in | Instruction is a ZCMT instruction | compressed_decoder | logic
illegal_instr_i | in | Instruction is an illegal instruction | compressed_decoder | logic
is_compressed_i | in | Instruction is a compressed instruction | compressed_decoder | logic
jvt_i | in | JVT struct from CSR | CSR | jvt_t
req_port_i | in | Handshake between CACHE and FRONTEND (fetch) | Cache | dcache_req_o_t
instr_o | out | Instruction out | cvxif_compressed_if_driver | logic [31:0]
illegal_instr_o | out | Instruction is illegal | cvxif_compressed_if_driver | logic
is_compressed_o | out | Instruction is compressed | cvxif_compressed_if_driver | logic
fetch_stall_o | out | Stall signal | cvxif_compressed_if_driver | logic
req_port_o | out | Handshake between CACHE and FRONTEND (fetch) | Cache | dcache_req_i_t

### Branch unit condition
A condition is implemented in the branch unit to ensure that ZCMT instructions always cause a misprediction, forcing the program to jump to the calculated address of the newly constructed jump instruction.

### JVT CSR
A new JVT CSR, which holds the base address of the jump table, is implemented in the csr_reg module. The base address is read from the JVT CSR and combined with the index to calculate the effective address.
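
The effective-address calculation can be sketched as follows (an illustrative Python model, not the RTL). It assumes the 64-byte-aligned jvt.base field defined by the Zc spec, with one XLEN-wide table entry per index, i.e. 4 bytes per entry on RV32:

```python
def jvt_entry_address(jvt_csr: int, index: int, xlen: int = 32) -> int:
    """Address of the jump-table entry fetched for a given ZCMT index."""
    base = jvt_csr & ~0x3F             # jvt.base occupies bits [XLEN-1:6]
    return base + index * (xlen // 8)  # one XLEN-wide entry per index
```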

### No MMU
Embedded platforms do not utilize an MMU, so the zcmt_decoder is connected to the cache through port 0 of the Dcache module for implicit read access to memory.

![zcmt_block drawio](https://github.com/user-attachments/assets/ac7bba75-4f56-42f4-9f5e-0c18f00d4dae)
_Figure 1: High-level block diagram of the ZCMT extension implementation_

## Known Limitations
The implementation targets 32-bit embedded-class platforms without an MMU. Since the core does not use an MMU, the freed Dcache port 0 is leveraged to connect the zcmt_decoder to the cache.

## Testing and Verification

- Developed directed test cases to validate cm.jt and cm.jalt instruction functionality
- Verified correct initialization and updates of JVT CSR

### Test Plan 
A test plan was developed to verify the functionality of the ZCMT extension along with the JVT CSR. Directed assembly tests were executed to check the functionality.

_Table 2: Test plan_
S.no | Features | Description | Pass/Fail Criteria | Test Type | Test status
-- | -- | -- | -- | ---- | --
1 | cm.jt | Simple assembly test to validate the working of the cm.jt instruction in CV32A60x. | Check against Spike's ref. model | Directed | Pass
2 | cm.jalt | Simple assembly test to validate the working of the cm.jalt instruction in CV32A60x. | Check against Spike's ref. model | Directed | Pass
3 | cm.jalt with return address stack | Simple assembly test to validate the working of the cm.jalt instruction with the return address stack in CV32A60x. It works as jump-and-link (jal ra, imm). | Check against Spike's ref. model | Directed | Pass
4 | JVT CSR | Read and write the base address of the jump table to the JVT CSR | Check against Spike's ref. model | Directed | Pass


**Note**: The tests can be found under CVA6_REPO_DIR/verif/tests/custom/zcmt
2025-01-27 13:23:26 +01:00
# GitLab CI for CVA6

This document describes the different steps performed automatically when a branch is pushed to a repository. It is not meant to be a complete description; it is an entry point to help understand the structure of the pipelines and to find the information you are looking for or the part of the CI you want to edit. Please refer to the mentioned files for more details.

Only the GitLab-related tasks are described here.

## Before the branch reaches GitLab

The CVA6 repository is mirrored into a GitLab instance owned by Thales to perform regression tests on pull requests and the master branch.

## Pipeline boot

When a branch is pushed, the entry point of the CI is the .gitlab-ci.yml file at the repository root.

See .gitlab-ci.yml

It includes a file from a setup-ci project (to locate tools etc.), defines workflow rules and tests.

## Running the tests

Stages are defined as below (order matters):

  • build tools: pub_build_tools builds Spike and pub_check_env prints environment variables for debugging.
  • smoke tests: pub_smoke and pub_gen_smoke jobs run smoke tests.
  • verif tests: many jobs run different verif tests. The template for them is described later in this document.
  • backend tests: jobs which use results of verif tests, often synthesis results.
  • report: merge reports merges all reports into a single yaml file.

## Adding a verif test

A simple test looks like this:

```yaml
pub_<name>:
  extends:
    - .verif_test
    - .template_job_short_ci
  variables:
    DASHBOARD_JOB_TITLE: "<title for dashboard>"
    DASHBOARD_JOB_DESCRIPTION: "<description for dashboard>"
    DASHBOARD_SORT_INDEX: <index to sort jobs in dashboard>
    DASHBOARD_JOB_CATEGORY: "<job category for dashboard>"
  script:
    - source verif/regress/<my-script>.sh
    - python3 .gitlab-ci/scripts/report_<kind>.py <args...>
```
  • .verif_test tells that:
    • The job goes in verif tests stage
    • Before running the script part, in addition to the global before_script:
      • Spike is retrieved from pub_build_tools
      • Artifacts are cleaned, artifacts/reports/ and artifacts/logs/ are created
      • A "failure" report is created by default (in case the script exits early)
      • $SYN_VCS_BASHRC is sourced
    • All the contents of the artifacts/ folder will be considered as artifacts (even if the job fails)
  • .template_job_short_ci tells under which conditions the job should run
  • variables defines environment variables. The 4 variables above are needed to generate the report for the dashboard.
  • script defines the script to run:
    1. Run the test, for instance sourcing a script in verif/regress/
    2. Generate a report by running a script from .gitlab-ci/scripts/report_*.py

Notes:

You can add more environment variables such as:

```yaml
variables:
  DV_SIMULATORS: "veri-testharness,spike"
  DV_TESTLISTS: "../tests/testlist_riscv-tests-$DV_TARGET-p.yaml"
```

You can also have several jobs running in parallel with variables taking different values:

```yaml
parallel:
  matrix:
    - DV_TARGET: [cv64a6_imafdc_sv39, cv32a60x]
```

## Adding a backend test

```yaml
pub_<name>:
  needs:
    - pub_<other_job>
    - <...>
  extends:
    - .backend_test
    - .template_job_always_manual
  variables:
    <same as for verif tests>
  script:
    - <mv spike from artifacts if you need it>
    - <your script>
    - python3 .gitlab-ci/scripts/report_<kind>.py <args...>
```

Backend tests are like verif tests; the differences are:

  • needs list is needed to specify when the test is run. Without a needs list, all jobs from all previous stages are considered as needed. However, when a needs list is declared, all useful dependencies must be specified by hand, which is more complex. It contains:
    • pub_build_tools if you need spike (don't forget to mv it from the artifacts or it will be re-built!)
    • The jobs you need artifacts from
  • .backend_test indicates that:
    • The job goes in backend tests stage
    • It performs the same steps as .verif_test, except that:
      • it does not source VCS (so you have to do it if you need it)
      • it does not move spike (so you have to do it if you need it)

## Generating a report

You might want to use .gitlab-ci/scripts/report_simu.py.

If it does not suit your needs, below are snippets to help you write a report generator using our python library.

```python
import report_builder as rb

# Create metrics
metric = rb.TableMetric('Metric name')
metric.add_value('column 1', 'column 2', 'etc')

# Gather them into a report
report = rb.Report('report label')
report.add_metric(metric)

# Create the report file in the artifacts
report.dump()
```

There are 3 kinds of metrics:

```python
# A simple table
metric = rb.TableMetric('Metric name, actually not displayed yet')
metric.add_value('column 1', 'column 2', 'etc')

# A table with a pass/fail label on each line
metric = rb.TableStatusMetric('Metric name, actually not displayed yet')
metric.add_pass('column 1', 'column 2', 'etc')
metric.add_fail('column 1', 'column 2', 'etc')

# A log
metric = rb.LogMetric('Metric name, actually not displayed yet')
metric.add_value("one line (no need to add a backslash n)")
metric.values += ["one line (no need to add a backslash n)"] # same as above
metric.values = ["line1", "line2", "etc"] # also works

# You can fail a metric of any kind at any moment
metric.fail()
```

Failures are propagated:

  • one fail in a TableStatusMetric fails the whole metric
  • one failed metric fails the whole report
  • one failed report fails the whole pipeline report
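
The propagation rule above can be modelled with a self-contained sketch (hypothetical Python, not the actual report_builder implementation):

```python
class Metric:
    """Minimal stand-in for a report_builder metric."""

    def __init__(self, name):
        self.name = name
        self.failed = False

    def fail(self):
        self.failed = True


class Report:
    """A report fails as soon as any of its metrics has failed."""

    def __init__(self, label):
        self.label = label
        self.metrics = []

    def add_metric(self, metric):
        self.metrics.append(metric)

    @property
    def failed(self):
        # One failed metric fails the whole report
        return any(m.failed for m in self.metrics)
```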

## Dashboard

The merge reports job merges the reports from all jobs of the pipeline into a single file. It pushes this file to a repository. This repository has a CI which produces HTML dashboard pages from the latest files. These HTML pages are published on https://riscv-ci.pages.thales-invia.fr/dashboard/

  • Main page index.html gathers results from all processed pipelines.
  • Each page dashboard_cva6_<PR id>.html gathers results from all pipelines of one PR.

## PR comment

The merge reports job gets the list of open PRs. It compares the name of the current branch with the name of each PR branch to find the PR. If a PR matches, it triggers the GitHub workflow dashboard-done.yml in this repository, providing the PR number and the success/fail status.
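
The matching step amounts to a simple branch-name comparison, which can be sketched as follows (a hypothetical helper; the real job queries the list of open PRs from GitHub):

```python
def find_matching_pr(current_branch, open_prs):
    """Return the number of the open PR whose head branch matches, else None.

    open_prs is assumed to be a list of dicts such as
    {"number": 2659, "branch": "feature/zcmt"}.
    """
    for pr in open_prs:
        if pr["branch"] == current_branch:
            return pr["number"]
    return None
```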

See .github/workflows/dashboard-done.yml

This GitHub workflow creates a comment in the PR with the success/fail status and a link to the dashboard page.

However, the dashboard page may not be available at that moment, as page generation happens later and takes time.