Update code from upstream repository
https://github.com/lowRISC/opentitan to revision
7e131447da6d5f3044666a17974e15df44f0328b

Updates to Ibex code to match this import:
* Include str_utils in the imported code.
* List new source files in dv/uvm/core_ibex/ibex_dv.f
* Update patches to resolve merge conflicts.
* Update tb_cs_registers.cc and ibex_riscv_compliance.cc to match the
  new return code of simctrl.Exec().

Imported updates:
* Do not require pyyaml >= 5.1 (Philipp Wagner)
* [prim_edn_req] Forward fips signal to consumer (Pirmin Vogel)
* [prim_edn_req] Use prim_sync_reqack_data primitive (Pirmin Vogel)
* [prim_edn_req] De-assert EDN request if packer FIFO has data
  available (Pirmin Vogel)
* [cleanup] Mass replace tabs with spaces (Srikrishna Iyer)
* [lc_ctrl] Add script to generate the LC state based on the ECC poly
  (Michael Schaffner)
* [dvsim] Use list for rsync command (Eunchan Kim)
* [verilator] Only control the reset line when necessary (Rupert
  Swarbrick)
* [dv/csr_utils] Add debug msg for UVM_NOT_OK err (Cindy Chen)
* [dvsim] Add exclude hidden files when needed (Eunchan Kim)
* [prim_sync_reqack] Add variant with associated data and optional
  data reg (Pirmin Vogel)
* [DV, Xcelium] Fix for lowRISC/opentitan#4690 (Srikrishna Iyer)
* [dvsim] Remote copy update (Srikrishna Iyer)
* [prim_edn_req] Add EDN sync and packer gadget primitive (Michael
  Schaffner)
* [prim] Add hamming code as ECC option (Timothy Chen)
* [DV] Cleanup lint warnings with Verible lint (Srikrishna Iyer)
* [prim_ram] Rearrange parity bit packing and fix wrong wmask settings
  (Michael Schaffner)
* [lc_sync/lc_sender] Absorb flops within lc_sender (Michael
  Schaffner)
* [prim_otp_pkg] Move prim interface constants into separate package
  (Michael Schaffner)
* [sram_ctrl] Pull scr macro out of sram_ctrl (Michael Schaffner)
* [top] Move alert handler to periphs and attach escalation clock to
  ibex (Michael Schaffner)
* [prim_esc_rxtx/rv_core_ibex] Add default values and NMI
  synchronization (Michael Schaffner)
* [dvsim] Fix regression publish result link with --remote switch
  (Cindy Chen)
* [vendor/ibex] Remove duplicate check tool requirements files
  (Michael Schaffner)
* [prim_ram_1p_scr] Fix sequencing bug in scrambling logic (Michael
  Schaffner)
* [prim_ram*_adv] Qualify error output signals with rvalid (Michael
  Schaffner)
* [dvsim] Fix purge not delete remote repo_top (Cindy Chen)
* [lc/otp/alerts] Place size-only buffers on all multibit signals
  (Michael Schaffner)
* [prim_buf] Add generic and Xilinx buffer primitive (Michael
  Schaffner)
* [prim] Packer to add byte hint assertion (Eunchan Kim)
* [dvsim] Logic to copy repo to scratch area (Srikrishna Iyer)
* [dv/lc_ctrl] enable lc_ctrl alert_test (Cindy Chen)
* [prim] documentation update for flash (Timothy Chen)
* [flash_ctrl] Add additional interface support (Timothy Chen)
* [dvsim] Fix publish report path (Weicai Yang)
* [top_earlgrey] Instantiate LC controller in toplevel (Michael
  Schaffner)
* [doc] Fix checklist items in V1 (Michael Schaffner)
* [dv/csr_excl] Fix VCS warning (Cindy Chen)
* [dv/doc] cleaned up checklist alignment (Rasmus Madsen)
* [doc/dv] cleanup (Rasmus Madsen)
* [dv/doc] updated dv_plan links to new location (Rasmus Madsen)
* [dv/doc] changed testplan to dv_plan in markdown files (Rasmus
  Madsen)
* [dv/doc] changed dv plan to dv doc (Rasmus Madsen)
* Remove redundant ascentlint options (Olof Kindgren)
* Add ascentlint default options for all cores depending on
  lint:common (Olof Kindgren)
* [flash] documentation update (Timothy Chen)
* [flash / top] Add info_sel to flash interface (Timothy Chen)
* [otp] lci interface assertion related fix (Cindy Chen)
* [dv/uvmdvgen] Add switch to auto-gen edn (Cindy Chen)
* [util] Rejig how we load hjson configurations for dvsim.py (Rupert
  Swarbrick)
* added changes required by sriyerg (Dawid Zimonczyk)
* update riviera.hjson (Dawid Zimonczyk)
* [flash_ctrl] Add high endurance region attribute (Timothy Chen)
* Change VerilatorSimCtrl::Exec to handle --help properly (Rupert
  Swarbrick)
* Simplify handling of exit_app in VerilatorSimCtrl::ParseCommandArgs
  (Rupert Swarbrick)
* [sram_ctrl] Rtl lint fix (Michael Schaffner)
* [keymgr] Add edn support (Timothy Chen)
* [dv] Make width conversion explicit in dv_base_env_cfg::initialize
  (Rupert Swarbrick)
* [dvsim] Allow dvsim.py to be run under Make (Rupert Swarbrick)
* [dvsim] Rename revision_string to revision (Srikrishna Iyer)
* [dvsim] Update log messages (Srikrishna Iyer)
* [dvsim] fix for full verbosity (Srikrishna Iyer)
* [dv] Fix Questa warning and remove unused var (Weicai Yang)
* [dvsim] Add alias for --run-only (Weicai Yang)
* [keymgr] Hook-up random compile time constants (Timothy Chen)
* [dvsim] Add support for UVM_FULL over cmd line (Srikrishna Iyer)
* [dv common] Enable DV macros in non-UVM components (Srikrishna Iyer)
* [DVsim] Add support for Verilator (Srikrishna Iyer)
* [DVSim] Fix how sw_images is treated (Srikrishna Iyer)
* [DV common] Fixes in sim.mk for Verilator (Srikrishna Iyer)
* [DV Common] Split DV test status reporting logic (Srikrishna Iyer)
* [prim_arbiter_ppc] Fix lint error (Philipp Wagner)
* [DV common] Factor `sim_tops` out of build_opts (Srikrishna Iyer)
* [dvsim] run yapf to fix style (Weicai Yang)
* [dv/common] VCS UNR flow (Weicai Yang)
* [dv] Add get_max_offset function in dv_base_reg_block (Weicai Yang)
* [otp_ctrl] Fix warnings from VCS (Cindy Chen)
* [lint] Change unused_ waiver (Eunchan Kim)
* [dv/alert_test] Add alert_test IP level automation test (Cindy Chen)
* [DV] Update the way SW is built for DV (Srikrishna Iyer)
* [dvsim] Replace `sw_test` with `sw_images` (Srikrishna Iyer)
* [chip dv] Move sw build directory (Srikrishna Iyer)
* [dv common] Update dv_utils to use str_utils_pkg (Srikrishna Iyer)
* [DVSim] Method to add pre/post build/run steps (Srikrishna Iyer)

Signed-off-by: Philipp Wagner <phw@lowrisc.org>
commit b1daf9e44e (parent 0199bbae66)
Philipp Wagner 2021-01-07 15:10:23 +00:00, committed by Philipp Wagner
112 changed files with 3412 additions and 1247 deletions


@@ -13,10 +13,14 @@ int main(int argc, char **argv) {
// Get pass / fail from Verilator
int retcode = simctrl.Exec(argc, argv);
if (!retcode) {
// Get pass / fail from testbench
retcode = !top.test_passed_o;
auto pr = simctrl.Exec(argc, argv);
int ret_code = pr.first;
bool ran_simulation = pr.second;
if (ret_code != 0 || !ran_simulation) {
return ret_code;
}
return retcode;
// Get pass / fail from testbench
return !top.test_passed_o;
}
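The new calling convention can be sketched in standalone C++. `Exec` below is a hypothetical stand-in for `VerilatorSimCtrl::Exec`, which now returns a (return code, ran-simulation) pair instead of a bare int, so that flows like `--help` (which exit successfully without simulating) are distinguishable from a completed run:

```cpp
#include <cassert>
#include <utility>

// Hypothetical stand-in for VerilatorSimCtrl::Exec(): returns
// (exit code, whether a simulation was actually run).
std::pair<int, bool> Exec(bool help_requested) {
  if (help_requested) {
    return {0, false};  // success, but no simulation ran
  }
  return {0, true};  // simulation ran to completion
}

// Mirrors the updated tb_cs_registers.cc logic: only consult the
// testbench pass/fail signal when a simulation actually ran.
int RunMain(bool help_requested, bool test_passed_o) {
  auto pr = Exec(help_requested);
  int ret_code = pr.first;
  bool ran_simulation = pr.second;
  if (ret_code != 0 || !ran_simulation) {
    return ret_code;
  }
  // Get pass / fail from testbench: 0 on pass, 1 on fail.
  return !test_passed_o;
}
```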


@@ -18,5 +18,5 @@ int main(int argc, char **argv) {
"TOP.ibex_riscv_compliance.u_ram.u_ram.gen_generic.u_impl_generic");
simctrl.RegisterExtension(&memutil);
return simctrl.Exec(argc, argv);
return simctrl.Exec(argc, argv).first;
}


@@ -67,9 +67,12 @@ ${PRJ_DIR}/vendor/google_riscv-dv/src/riscv_signature_pkg.sv
+incdir+${PRJ_DIR}/dv/uvm/core_ibex/common/irq_agent
+incdir+${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/mem_model
+incdir+${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/dv_utils
+incdir+${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/str_utils
${PRJ_DIR}/dv/uvm/bus_params_pkg/bus_params_pkg.sv
${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/common_ifs/clk_rst_if.sv
${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/common_ifs/pins_if.sv
${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/str_utils/str_utils_pkg.sv
${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/dv_utils/dv_test_status_pkg.sv
${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/dv_utils/dv_utils_pkg.sv
${PRJ_DIR}/vendor/lowrisc_ip/dv/sv/mem_model/mem_model_pkg.sv
${PRJ_DIR}/dv/uvm/core_ibex/common/ibex_mem_intf_agent/ibex_mem_intf.sv


@@ -9,6 +9,6 @@
upstream:
{
url: https://github.com/lowRISC/opentitan
rev: e619fc60c6b9c755043eba65a41dc47815612834
rev: 7e131447da6d5f3044666a17974e15df44f0328b
}
}


@@ -15,6 +15,7 @@
{from: "hw/dv/sv/csr_utils", to: "dv/sv/csr_utils"},
{from: "hw/dv/sv/dv_base_reg", to: "dv/sv/dv_base_reg"},
{from: "hw/dv/sv/mem_model", to: "dv/sv/mem_model"},
{from: "hw/dv/sv/str_utils", to: "dv/sv/str_utils"},
// We apply a patch to fix the bus_params_pkg core file name when
// vendoring in dv_lib and dv_utils. This allows us to have an


@@ -72,7 +72,7 @@ interface pins_if #(
endfunction
// make connections
for (genvar i = 0; i < Width; i++) begin : each_pin
for (genvar i = 0; i < Width; i++) begin : gen_each_pin
`ifdef VERILATOR
assign pins[i] = pins_oe[i] ? pins_o[i] :
pins_pu[i] ? 1'b1 :
@@ -91,7 +91,7 @@ interface pins_if #(
// between 'value to be driven out' and the external driver's value.
assign pins[i] = pins_oe[i] ? pins_o[i] : 1'bz;
`endif
end
end : gen_each_pin
endinterface
`endif


@@ -213,7 +213,9 @@ package csr_utils_pkg;
// when reset occurs, all items will be dropped immediately. This may end up getting
// d_error = 1 from previous item on the bus. Skip checking it during reset
if (check == UVM_CHECK && !under_reset) begin
`DV_CHECK_EQ(status, UVM_IS_OK, "", error, msg_id)
`DV_CHECK_EQ(status, UVM_IS_OK,
$sformatf("trying to update csr %0s", csr.get_full_name()),
error, msg_id)
end
decrement_outstanding_access();
end
@@ -274,7 +276,9 @@ package csr_utils_pkg;
csr.write(.status(status), .value(value), .path(path), .map(map), .prior(100));
csr_post_write_sub(csr, en_shadow_wr);
if (check == UVM_CHECK && !under_reset) begin
`DV_CHECK_EQ(status, UVM_IS_OK, "", error, msg_id)
`DV_CHECK_EQ(status, UVM_IS_OK,
$sformatf("trying to write csr %0s", csr.get_full_name()),
error, msg_id)
end
// Only update the predicted value if status is ok (otherwise the write isn't completed
// successfully and the design shouldn't have accepted the written value)
@@ -397,7 +401,9 @@ package csr_utils_pkg;
.prior(100));
end
if (check == UVM_CHECK && !under_reset) begin
`DV_CHECK_EQ(status, UVM_IS_OK, "", error, msg_id)
`DV_CHECK_EQ(status, UVM_IS_OK,
$sformatf("trying to read csr/field %0s", ptr.get_full_name()),
error, msg_id)
end
decrement_outstanding_access();
end
@@ -630,7 +636,8 @@ package csr_utils_pkg;
increment_outstanding_access();
ptr.read(.status(status), .offset(offset), .value(data), .map(map), .prior(100));
if (check == UVM_CHECK && !under_reset) begin
`DV_CHECK_EQ(status, UVM_IS_OK, "", error, msg_id)
`DV_CHECK_EQ(status, UVM_IS_OK,
$sformatf("trying to read mem %0s", ptr.get_full_name()), error, msg_id)
end
decrement_outstanding_access();
end
@@ -679,7 +686,9 @@ package csr_utils_pkg;
increment_outstanding_access();
ptr.write(.status(status), .offset(offset), .value(data), .map(map), .prior(100));
if (check == UVM_CHECK && !under_reset) begin
`DV_CHECK_EQ(status, UVM_IS_OK, "", error, msg_id)
`DV_CHECK_EQ(status, UVM_IS_OK,
$sformatf("trying to write mem %0s", ptr.get_full_name()),
error, msg_id)
end
decrement_outstanding_access();
end


@@ -27,10 +27,12 @@ class csr_excl_item extends uvm_object;
csr_test_type_e csr_test_type = CsrAllTests);
bit [2:0] val = CsrNoExcl;
bit [NUM_CSR_TESTS-1:0] test = CsrInvalidTest;
csr_excl_s csr_excl_item;
if (csr_test_type == CsrInvalidTest) begin
`uvm_fatal(`gfn, $sformatf("add %s exclusion without a test", obj))
end
if (!exclusions.exists(obj)) exclusions[obj] = '{default:CsrNoExcl};
val = csr_excl_type | exclusions[obj].csr_excl_type;
test = csr_test_type | exclusions[obj].csr_test_type;
exclusions[obj].csr_excl_type = csr_excl_type_e'(val);


@@ -78,10 +78,8 @@ class dv_base_reg_block extends uvm_reg_block;
// Use below to get the addr map size #3317
// max2(biggest_reg_offset+reg_size, biggest_mem_offset+mem_size) and then round up to 2**N
protected function void compute_addr_mask(uvm_reg_map map);
uvm_reg_addr_t max_addr, max_offset;
uvm_reg_addr_t max_offset;
uvm_reg_block blocks[$];
uvm_reg regs[$];
uvm_mem mems[$];
int unsigned alignment;
// TODO: assume IP only contains 1 reg block, find a better way to handle chip-level and IP
@@ -92,26 +90,7 @@ class dv_base_reg_block extends uvm_reg_block;
return;
end
get_registers(regs);
get_memories(mems);
`DV_CHECK_GT_FATAL(regs.size() + mems.size(), 0)
// Walk the known registers and memories, calculating the largest byte address visible. Note
// that the get_offset() calls will return absolute addresses, including any base address in the
// default register map.
max_addr = 0;
foreach (regs[i]) begin
max_addr = max2(regs[i].get_offset(map) + regs[i].get_n_bytes() - 1, max_addr);
end
foreach (mems[i]) begin
uvm_reg_addr_t mem_size;
mem_size = mems[i].get_offset(.map(map)) + mems[i].get_size() * mems[i].get_n_bytes() - 1;
max_addr = max2(mem_size, max_addr);
end
// Subtract the base address in the default register map to get the maximum relative offset.
max_offset = max_addr - map.get_base_addr();
max_offset = get_max_offset(map);
// Set alignment to be ceil(log2(biggest_offset))
alignment = 0;
@@ -131,6 +110,35 @@ class dv_base_reg_block extends uvm_reg_block;
`DV_CHECK_FATAL(addr_mask[map])
endfunction
// Return the offset of the highest byte contained in either a register or a memory
function uvm_reg_addr_t get_max_offset(uvm_reg_map map = null);
uvm_reg_addr_t max_offset;
uvm_reg regs[$];
uvm_mem mems[$];
if (map == null) map = get_default_map();
get_registers(regs);
get_memories(mems);
`DV_CHECK_GT_FATAL(regs.size() + mems.size(), 0)
// Walk the known registers and memories, calculating the largest byte address visible. Note
// that the get_offset() calls will return absolute addresses, including any base address in the
// specified register map.
max_offset = 0;
foreach (regs[i]) begin
max_offset = max2(regs[i].get_offset(map) + regs[i].get_n_bytes() - 1, max_offset);
end
foreach (mems[i]) begin
uvm_reg_addr_t mem_size;
mem_size = mems[i].get_offset(.map(map)) + mems[i].get_size() * mems[i].get_n_bytes() - 1;
max_offset = max2(mem_size, max_offset);
end
return max_offset;
endfunction
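The `get_max_offset` result feeds the alignment computation in `compute_addr_mask` ("ceil(log2(biggest_offset))", i.e. round the largest visible byte offset up to the next power of two). A minimal C++ sketch of that mask derivation, with illustrative names:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the mask computation compute_addr_mask() performs on the
// result of get_max_offset(): round the largest visible byte offset up
// to the next power of two, then subtract one to form an address mask.
// (Names are illustrative; the real code is SystemVerilog UVM.)
uint64_t addr_mask_from_max_offset(uint64_t max_offset) {
  unsigned alignment = 0;
  while ((uint64_t{1} << alignment) <= max_offset) {
    alignment++;
  }
  return (uint64_t{1} << alignment) - 1;
}
```

For example, a block whose highest byte sits at offset 0x20 gets an address mask of 0x3F.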
// Get the address mask. This should only be called after locking the model (because it depends on
// the layout of registers and memories in the block).
function uvm_reg_addr_t get_addr_mask(uvm_reg_map map = null);


@@ -41,6 +41,8 @@ class dv_base_env_cfg #(type RAL_T = dv_base_reg_block) extends uvm_object;
`uvm_object_new
virtual function void initialize(bit [bus_params_pkg::BUS_AW-1:0] csr_base_addr = '1);
import bus_params_pkg::*;
// build the ral model
if (has_ral) begin
uvm_reg_addr_t base_addr;
@@ -56,7 +58,13 @@ class dv_base_env_cfg #(type RAL_T = dv_base_reg_block) extends uvm_object;
// Now the model is locked, we know its layout. Set the base address for the register block.
// The function internally picks a random one if we pass '1 to it, and performs an integrity
// check on the set address.
ral.set_base_addr(csr_base_addr);
//
// The definition of base_addr explicitly casts from a bus address to a uvm_reg_addr_t (to
// correctly handle the case where a bus address is narrower than a uvm_reg_addr_t).
base_addr = (&csr_base_addr ?
{`UVM_REG_ADDR_WIDTH{1'b1}} :
{{(`UVM_REG_ADDR_WIDTH - BUS_AW){1'b0}}, csr_base_addr});
ral.set_base_addr(base_addr);
// Get list of valid csr addresses (useful in seq to randomize addr as well as in scb checks)
get_csr_addrs(ral, csr_addrs);
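The explicit widening above can be sketched in C++. The widths and function name here are illustrative (the real code uses `BUS_AW` and `UVM_REG_ADDR_WIDTH` parameters): an all-ones bus address is the "pick a random base" sentinel and must stay all-ones at the wider width, while any other value is simply zero-extended.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative widening of a 32-bit bus address to a 64-bit register
// address, mirroring the base_addr cast in dv_base_env_cfg::initialize.
uint64_t widen_base_addr(uint32_t csr_base_addr) {
  if (csr_base_addr == UINT32_MAX) {
    return ~uint64_t{0};  // keep the "randomize" sentinel at full width
  }
  return uint64_t{csr_base_addr};  // zero-extend everything else
}
```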


@@ -11,8 +11,13 @@
// UVM specific macros
`ifndef gfn
`ifdef UVM
// verilog_lint: waive macro-name-style
`define gfn get_full_name()
`else
// verilog_lint: waive macro-name-style
`define gfn $sformatf("%m")
`endif
`endif
`ifndef gtn
@@ -37,9 +42,9 @@
`define downcast(EXT_, BASE_, MSG_="", SEV_=fatal, ID_=`gfn) \
begin \
if (!$cast(EXT_, BASE_)) begin \
`uvm_``SEV_(ID_, $sformatf({"Cast failed: base class variable %0s ", \
"does not hold extended class %0s handle %s"}, \
`"BASE_`", `"EXT_`", MSG_)) \
`dv_``SEV_($sformatf({"Cast failed: base class variable %0s ", \
"does not hold extended class %0s handle %s"}, \
`"BASE_`", `"EXT_`", MSG_), ID_) \
end \
end
`endif
@@ -80,7 +85,7 @@
`define DV_CHECK(T_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!(T_)) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed (%s) %s ", `"T_`", MSG_)) \
`dv_``SEV_($sformatf("Check failed (%s) %s ", `"T_`", MSG_), ID_) \
end \
end
`endif
@@ -89,8 +94,8 @@
`define DV_CHECK_EQ(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) == (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s == %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s == %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -99,8 +104,8 @@
`define DV_CHECK_NE(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) != (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s != %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s != %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -109,8 +114,8 @@
`define DV_CHECK_CASE_EQ(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) === (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s === %s (0x%0h [%0b] vs 0x%0h [%0b]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s === %s (0x%0h [%0b] vs 0x%0h [%0b]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -119,8 +124,8 @@
`define DV_CHECK_CASE_NE(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) !== (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s !== %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s !== %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -129,8 +134,8 @@
`define DV_CHECK_LT(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) < (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s < %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s < %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -139,8 +144,8 @@
`define DV_CHECK_GT(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) > (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s > %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s > %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -149,8 +154,8 @@
`define DV_CHECK_LE(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) <= (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s <= %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s <= %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -159,8 +164,8 @@
`define DV_CHECK_GE(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
begin \
if (!((ACT_) >= (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed %s >= %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_)) \
`dv_``SEV_($sformatf("Check failed %s >= %s (%0d [0x%0h] vs %0d [0x%0h]) %s", \
`"ACT_`", `"EXP_`", ACT_, ACT_, EXP_, EXP_, MSG_), ID_) \
end \
end
`endif
@@ -168,14 +173,14 @@
`ifndef DV_CHECK_STREQ
`define DV_CHECK_STREQ(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
if (!((ACT_) == (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed \"%s\" == \"%s\" %s", ACT_, EXP_, MSG_)); \
`dv_``SEV_($sformatf("Check failed \"%s\" == \"%s\" %s", ACT_, EXP_, MSG_), ID_) \
end
`endif
`ifndef DV_CHECK_STRNE
`define DV_CHECK_STRNE(ACT_, EXP_, MSG_="", SEV_=error, ID_=`gfn) \
if (!((ACT_) != (EXP_))) begin \
`uvm_``SEV_(ID_, $sformatf("Check failed \"%s\" != \"%s\" %s", ACT_, EXP_, MSG_)); \
`dv_``SEV_($sformatf("Check failed \"%s\" != \"%s\" %s", ACT_, EXP_, MSG_), ID_) \
end
`endif
@@ -268,7 +273,7 @@
`define DV_PRINT_ARR_CONTENTS(ARR_, V_=UVM_MEDIUM, ID_=`gfn) \
begin \
foreach (ARR_[i]) begin \
`uvm_info(ID_, $sformatf("%s[%0d] = 0x%0d[0x%0h]", `"ARR_`", i, ARR_[i], ARR_[i]), V_) \
`dv_info($sformatf("%s[%0d] = 0x%0d[0x%0h]", `"ARR_`", i, ARR_[i], ARR_[i]), V_, ID_) \
end \
end
`endif
@@ -280,7 +285,7 @@
while (!FIFO_.is_empty()) begin \
TYP_ item; \
void'(FIFO_.try_get(item)); \
`uvm_``SEV_(ID_, $sformatf("%s item uncompared:\n%s", `"FIFO_`", item.sprint())) \
`dv_``SEV_($sformatf("%s item uncompared:\n%s", `"FIFO_`", item.sprint()), ID_) \
end \
end
`endif
@@ -293,7 +298,7 @@
while (!FIFO_[i].is_empty()) begin \
TYP_ item; \
void'(FIFO_[i].try_get(item)); \
`uvm_``SEV_(ID_, $sformatf("%s[%0d] item uncompared:\n%s", `"FIFO_`", i, item.sprint())) \
`dv_``SEV_($sformatf("%s[%0d] item uncompared:\n%s", `"FIFO_`", i, item.sprint()), ID_) \
end \
end \
end
@@ -305,7 +310,7 @@
begin \
while (Q_.size() != 0) begin \
TYP_ item = Q_.pop_front(); \
`uvm_``SEV_(ID_, $sformatf("%s item uncompared:\n%s", `"Q_`", item.sprint())) \
`dv_``SEV_($sformatf("%s item uncompared:\n%s", `"Q_`", item.sprint()), ID_) \
end \
end
`endif
@@ -317,7 +322,7 @@
foreach (Q_[i]) begin \
while (Q_[i].size() != 0) begin \
TYP_ item = Q_[i].pop_front(); \
`uvm_``SEV_(ID_, $sformatf("%s[%0d] item uncompared:\n%s", `"Q_`", i, item.sprint())) \
`dv_``SEV_($sformatf("%s[%0d] item uncompared:\n%s", `"Q_`", i, item.sprint()), ID_) \
end \
end \
end
@@ -330,7 +335,7 @@
while (MAILBOX_.num() != 0) begin \
TYP_ item; \
void'(MAILBOX_.try_get(item)); \
`uvm_``SEV_(ID_, $sformatf("%s item uncompared:\n%s", `"MAILBOX_`", item.sprint())) \
`dv_``SEV_($sformatf("%s item uncompared:\n%s", `"MAILBOX_`", item.sprint()), ID_) \
end \
end
`endif
@@ -359,7 +364,7 @@
begin \
EXIT_ \
if (MSG_ != "") begin \
`uvm_info(ID_, MSG_, UVM_HIGH) \
`dv_info(MSG_, UVM_HIGH, ID_) \
end \
end \
join_any \
@@ -428,47 +433,64 @@
// Macros for logging (info, warning, error and fatal severities).
//
// These are meant to be invoked in modules and interfaces that are shared between DV and Verilator
// testbenches.
// testbenches. We waive the lint requirement for these to be in uppercase, since they are
// UVM-adjacent.
`ifdef UVM
`ifndef DV_INFO
`define DV_INFO(MSG_, VERBOSITY_ = UVM_LOW, ID_ = $sformatf("%m")) \
`uvm_info(ID_, MSG_, VERBOSITY_)
`ifndef dv_info
// verilog_lint: waive macro-name-style
`define dv_info(MSG_, VERBOSITY_ = UVM_LOW, ID_ = $sformatf("%m")) \
if (uvm_pkg::uvm_report_enabled(VERBOSITY_, uvm_pkg::UVM_INFO, ID_)) begin \
uvm_pkg::uvm_report_info(ID_, MSG_, VERBOSITY_, `uvm_file, `uvm_line, "", 1); \
end
`endif
`ifndef DV_WARNING
`define DV_WARNING(MSG_, ID_ = $sformatf("%m")) \
`uvm_warning(ID_, MSG_)
`ifndef dv_warning
// verilog_lint: waive macro-name-style
`define dv_warning(MSG_, ID_ = $sformatf("%m")) \
if (uvm_pkg::uvm_report_enabled(uvm_pkg::UVM_NONE, uvm_pkg::UVM_WARNING, ID_)) begin \
uvm_pkg::uvm_report_warning(ID_, MSG_, uvm_pkg::UVM_NONE, `uvm_file, `uvm_line, "", 1); \
end
`endif
`ifndef DV_ERROR
`define DV_ERROR(MSG_, ID_ = $sformatf("%m")) \
`uvm_error(ID_, MSG_)
`ifndef dv_error
// verilog_lint: waive macro-name-style
`define dv_error(MSG_, ID_ = $sformatf("%m")) \
if (uvm_pkg::uvm_report_enabled(uvm_pkg::UVM_NONE, uvm_pkg::UVM_ERROR, ID_)) begin \
uvm_pkg::uvm_report_error(ID_, MSG_, uvm_pkg::UVM_NONE, `uvm_file, `uvm_line, "", 1); \
end
`endif
`ifndef DV_FATAL
`define DV_FATAL(MSG_, ID_ = $sformatf("%m")) \
`uvm_fatal(ID_, MSG_)
`ifndef dv_fatal
// verilog_lint: waive macro-name-style
`define dv_fatal(MSG_, ID_ = $sformatf("%m")) \
if (uvm_pkg::uvm_report_enabled(uvm_pkg::UVM_NONE, uvm_pkg::UVM_FATAL, ID_)) begin \
uvm_pkg::uvm_report_fatal(ID_, MSG_, uvm_pkg::UVM_NONE, `uvm_file, `uvm_line, "", 1); \
end
`endif
`else // UVM
`ifndef DV_INFO
`define DV_INFO(MSG_, VERBOSITY = DUMMY_, ID_ = $sformatf("%m")) \
`ifndef dv_info
// verilog_lint: waive macro-name-style
`define dv_info(MSG_, VERBOSITY = DUMMY_, ID_ = $sformatf("%m")) \
$display("%0t: (%0s:%0d) [%0s] %0s", $time, `__FILE__, `__LINE__, ID_, MSG_);
`endif
`ifndef DV_WARNING
`define DV_WARNING(MSG_, ID_ = $sformatf("%m")) \
`ifndef dv_warning
// verilog_lint: waive macro-name-style
`define dv_warning(MSG_, ID_ = $sformatf("%m")) \
$warning("%0t: (%0s:%0d) [%0s] %0s", $time, `__FILE__, `__LINE__, ID_, MSG_);
`endif
`ifndef DV_ERROR
`define DV_ERROR(MSG_, ID_ = $sformatf("%m")) \
`ifndef dv_error
// verilog_lint: waive macro-name-style
`define dv_error(MSG_, ID_ = $sformatf("%m")) \
$error("%0t: (%0s:%0d) [%0s] %0s", $time, `__FILE__, `__LINE__, ID_, MSG_);
`endif
`ifndef DV_FATAL
`define DV_FATAL(MSG_, ID_ = $sformatf("%m")) \
`ifndef dv_fatal
// verilog_lint: waive macro-name-style
`define dv_fatal(MSG_, ID_ = $sformatf("%m")) \
$fatal("%0t: (%0s:%0d) [%0s] %0s", $time, `__FILE__, `__LINE__, ID_, MSG_);
`endif
@@ -511,7 +533,7 @@
/* The #1 delay below allows any part of the tb to control the conditions first at t = 0. */ \
#1; \
if ((en_``__CG_NAME) || (__COND)) begin \
`DV_INFO({"Creating covergroup ", `"__CG_NAME`"}, UVM_MEDIUM) \
`dv_info({"Creating covergroup ", `"__CG_NAME`"}, UVM_MEDIUM) \
__CG_NAME``_inst = new``__CG_ARGS; \
end \
end


@@ -33,24 +33,7 @@ class dv_report_server extends uvm_default_report_server;
// Print final test pass-fail - external tool can use this signature for test status
// Treat UVM_WARNINGs as a sign of test failure since it could silently result in false pass
if ((num_uvm_warning + num_uvm_error + num_uvm_fatal) == 0) begin
$display("\nTEST PASSED CHECKS");
$display(" _____ _ _ _ ");
$display("|_ _|__ ___| |_ _ __ __ _ ___ ___ ___ __| | |");
$display(" | |/ _ \\/ __| __| | '_ \\ / _` / __/ __|/ _ \\/ _` | |");
$display(" | | __/\\__ \\ |_ | |_) | (_| \\__ \\__ \\ __/ (_| |_|");
$display(" |_|\\___||___/\\__| | .__/ \\__,_|___/___/\\___|\\__,_(_)");
$display(" |_| \n");
end
else begin
$display("\nTEST FAILED CHECKS");
$display(" _____ _ __ _ _ _ _ ");
$display("|_ _|__ ___| |_ / _| __ _(_) | ___ __| | |");
$display(" | |/ _ \\/ __| __| | |_ / _` | | |/ _ \\/ _` | |");
$display(" | | __/\\__ \\ |_ | _| (_| | | | __/ (_| |_|");
$display(" |_|\\___||___/\\__| |_| \\__,_|_|_|\\___|\\__,_(_)\n");
end
dv_test_status_pkg::dv_test_status((num_uvm_warning + num_uvm_error + num_uvm_fatal) == 0);
endfunction
// Override default messaging format to standard "pretty" format for all testbenches
@@ -69,7 +52,7 @@ class dv_report_server extends uvm_default_report_server;
string file_line;
if (show_file_line && filename != "") begin
if (!show_file_path) filename = get_no_hier_filename(filename);
if (!show_file_path) filename = str_utils_pkg::str_path_basename(filename);
file_line = $sformatf("(%0s:%0d) ", filename, line);
end
obj_name = {obj_name, ((obj_name != "") ? " " : "")};
@@ -79,12 +62,4 @@
end
endfunction
// get we don't really want the full path to the filename
// this should be reasonably lightweight
local function string get_no_hier_filename(string filename);
int idx;
for (idx = filename.len() - 1; idx >= 0; idx--) if (filename[idx] == "/") break;
return (filename.substr(idx + 1, filename.len() - 1));
endfunction
endclass


@@ -0,0 +1,17 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:dv:dv_test_status"
description: "DV test status reporting utilities"
filesets:
files_dv:
files:
- dv_test_status_pkg.sv
file_type: systemVerilogSource
targets:
default:
filesets:
- files_dv


@@ -0,0 +1,32 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
package dv_test_status_pkg;
// Prints the test status signature & banner.
//
// This function takes a boolean arg indicating whether the test passed or failed and prints the
// signature along with a banner. The signature can be used by external scripts to determine if
// the test passed or failed.
function automatic void dv_test_status(bit passed);
if (passed) begin
$display("\nTEST PASSED CHECKS");
$display(" _____ _ _ _ ");
$display("|_ _|__ ___| |_ _ __ __ _ ___ ___ ___ __| | |");
$display(" | |/ _ \\/ __| __| | '_ \\ / _` / __/ __|/ _ \\/ _` | |");
$display(" | | __/\\__ \\ |_ | |_) | (_| \\__ \\__ \\ __/ (_| |_|");
$display(" |_|\\___||___/\\__| | .__/ \\__,_|___/___/\\___|\\__,_(_)");
$display(" |_| \n");
end
else begin
$display("\nTEST FAILED CHECKS");
$display(" _____ _ __ _ _ _ _ ");
$display("|_ _|__ ___| |_ / _| __ _(_) | ___ __| | |");
$display(" | |/ _ \\/ __| __| | |_ / _` | | |/ _ \\/ _` | |");
$display(" | | __/\\__ \\ |_ | _| (_| | | | __/ (_| |_|");
$display(" |_|\\___||___/\\__| |_| \\__,_|_|_|\\___|\\__,_(_)\n");
end
endfunction
endpackage


@@ -12,6 +12,8 @@ filesets:
- lowrisc:dv:common_ifs
- lowrisc:prim:assert:0.1
- lowrisc:ibex:bus_params_pkg
- lowrisc:dv:str_utils
- lowrisc:dv:dv_test_status
files:
- dv_utils_pkg.sv
- dv_report_server.sv: {is_include_file: true}


@@ -9,7 +9,9 @@ package dv_utils_pkg;
// macro includes
`include "dv_macros.svh"
`ifdef UVM
`include "uvm_macros.svh"
`endif
// common parameters used across all benches
parameter int NUM_MAX_INTERRUPTS = 32;


@@ -0,0 +1,6 @@
# str_utils_pkg
This package provides some basic string and path manipulation utility functions that
can be used across the project. It can be imported in non-UVM testbenches as
well. Please see the package for the list of available functions and their
descriptions.


@@ -0,0 +1,19 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:dv:str_utils"
description: "String manipulation utilities"
filesets:
files_dv:
depend:
- lowrisc:dv:dv_macros
files:
- str_utils_pkg.sv
file_type: systemVerilogSource
targets:
default:
filesets:
- files_dv


@@ -0,0 +1,174 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
package str_utils_pkg;

  `include "dv_macros.svh"

  // Returns 1 if string 's' has substring 'sub' within the given index range, 0 otherwise.
  function automatic bit str_has_substr(string s, string sub, int range_lo = 0, int range_hi = -1);
    if (range_hi < 0 || range_hi >= s.len()) range_hi = s.len() - 1;
    for (int i = range_lo; i <= (range_hi - sub.len() + 1); i++) begin
      if (s.substr(i, i + sub.len() - 1) == sub) begin
        return 1;
      end
    end
    return 0;
  endfunction : str_has_substr

  // Returns the index of the first occurrence of string 'sub' within the given index range of
  // string 's'. Returns -1 otherwise.
  function automatic int str_find(string s, string sub, int range_lo = 0, int range_hi = -1);
    if (range_hi < 0 || range_hi >= s.len()) range_hi = s.len() - 1;
    for (int i = range_lo; i <= (range_hi - sub.len() + 1); i++) begin
      if (s.substr(i, i + sub.len() - 1) == sub) begin
        return i;
      end
    end
    return -1;
  endfunction : str_find

  // Returns the index of the last occurrence of string 'sub' within the given index range of
  // string 's'. Returns -1 otherwise.
  function automatic int str_rfind(string s, string sub, int range_lo = 0, int range_hi = -1);
    if (range_hi < 0 || range_hi >= s.len()) range_hi = s.len() - 1;
    for (int i = (range_hi - sub.len() + 1); i >= range_lo; i--) begin
      if (s.substr(i, i + sub.len() - 1) == sub) begin
        return i;
      end
    end
    return -1;
  endfunction : str_rfind

  // Strips a given set of characters from string 's'.
  //
  // The set of characters to strip is provided as a string. If not set, all whitespace characters
  // are stripped by default. Stripping is done at both ends, unless the user turns off the
  // stripping from one of the ends.
  function automatic string str_strip(string s,
                                      string chars = " \t\n",
                                      bit lstrip = 1'b1,
                                      bit rstrip = 1'b1);
    byte chars_q[$];
    if (chars == "") return s;
    foreach (chars[i]) chars_q.push_back(chars.getc(i));
    if (lstrip) begin
      int i = 0;
      while (s.getc(i) inside {chars_q}) i++;
      s = s.substr(i, s.len() - 1);
    end
    if (rstrip) begin
      int i = s.len() - 1;
      while (s.getc(i) inside {chars_q}) i--;
      s = s.substr(0, i);
    end
    return s;
  endfunction : str_strip

  // Splits the input string 's' on the given single-character delimiter 'delim'.
  //
  // The split tokens are pushed into the 'result' queue. The whitespace on each split token is
  // stripped by default, which can be turned off.
  // TODO: allow arbitrary length delimiter.
  function automatic void str_split(input string s,
                                    output string result[$],
                                    input byte delim = " ",
                                    input bit strip_whitespaces = 1'b1);
    string sub;
    bit in_quotes;
    result = {};
    foreach (s[i]) begin
      if (s[i] == "\"") begin
        in_quotes = !in_quotes;
      end
      if ((s.getc(i) == delim) && !in_quotes) begin
        if (strip_whitespaces) sub = str_strip(sub);
        if (sub != "") result.push_back(sub);
        sub = "";
      end else begin
        sub = {sub, s[i]};
      end
      if (i == s.len() - 1) begin
        if (strip_whitespaces) sub = str_strip(sub);
        if (sub != "") result.push_back(sub);
      end
    end
  endfunction : str_split
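As a cross-check of the quote-aware splitting behavior described above, here is a hypothetical Python model of `str_split` (same algorithm, Python types; not part of the imported code). Note that tokens inside double quotes are not split, and the quote characters themselves are kept:

```python
def str_split(s, delim=" ", strip_whitespaces=True):
    """Python model of str_utils_pkg::str_split (illustrative only)."""
    result, sub, in_quotes = [], "", False
    for ch in s:
        if ch == '"':
            in_quotes = not in_quotes  # toggle quoting state
        if ch == delim and not in_quotes:
            tok = sub.strip() if strip_whitespaces else sub
            if tok:
                result.append(tok)
            sub = ""
        else:
            sub += ch
    # Flush the trailing token, mirroring the i == s.len() - 1 case above.
    tok = sub.strip() if strip_whitespaces else sub
    if tok:
        result.append(tok)
    return result
```

For example, `str_split('a b "c d" e')` keeps the quoted region as one token.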
  // Returns a string concatenated from the provided queue of strings 's'.
  //
  // The concatenation is performed using the 'delim' arg as the delimiter.
  function automatic string str_join(string s[$], string delim = " ");
    string str;
    foreach (s[i]) begin
      str = {str, s[i], delim};
    end
    if (str != "") begin
      str = str.substr(0, str.len() - delim.len() - 1);
    end
    return str;
  endfunction : str_join

  // Converts a string to an array of bytes.
  function automatic void str_to_bytes(string s, output byte bytes[]);
    bytes = new[s.len()];
    foreach (bytes[i]) begin
      bytes[i] = s.getc(i);
    end
  endfunction : str_to_bytes

  /************************/
  /* File path functions. */
  /************************/

  // Returns the dirname of the file.
  //
  // Examples:
  // path/to/foo.bar => path/to
  // path/to/foo/bar => path/to/foo
  // path/to/foo/bar/ => path/to/foo
  // path/to/foo/bar/. => path/to/foo/bar
  // / => /
  function automatic string str_path_dirname(string filename);
    int idx;
    string dirname;
    if (filename == "/") return filename;
    filename = str_strip(.s(filename), .chars("/"), .lstrip(1'b0));
    idx = str_rfind(.s(filename), .sub("/"));
    if (idx == -1) idx = filename.len();
    if (idx == 0) idx++;
    dirname = filename.substr(0, idx - 1);
    return dirname;
  endfunction : str_path_dirname

  // Returns the basename of the file.
  //
  // Optionally, it takes a bit flag to drop the extension from the basename if desired.
  // Examples:
  // path/to/foo.bar => (foo.bar, foo)
  // path/to/foo/bar => (bar, bar)
  // path/to/foo/bar/ => (bar, bar)
  // path/to/foo/bar/. => (., .)
  // / => (/, /)
  function automatic string str_path_basename(string filename, bit drop_extn = 1'b0);
    int idx;
    string basename;
    if (filename == "/") return filename;
    filename = str_strip(.s(filename), .chars("/"), .lstrip(1'b0));
    idx = str_rfind(.s(filename), .sub("/"));
    basename = filename.substr(idx + 1, filename.len() - 1);
    if (basename == ".") return basename;
    if (drop_extn) begin
      idx = str_find(.s(basename), .sub("."));
      if (idx == -1) idx = basename.len();
      basename = basename.substr(0, idx - 1);
    end
    return basename;
  endfunction : str_path_basename

endpackage
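The documented dirname/basename examples can be sanity-checked with an equivalent Python model (hypothetical, mirroring the SystemVerilog algorithm above rather than Python's own `os.path` semantics):

```python
def str_path_dirname(path):
    # Mirrors str_utils_pkg::str_path_dirname: strip trailing '/', then cut
    # at the last remaining '/'.
    if path == "/":
        return path
    path = path.rstrip("/")
    idx = path.rfind("/")
    if idx == -1:
        idx = len(path)
    if idx == 0:
        idx = 1  # keep the leading '/' for paths like "/foo"
    return path[:idx]


def str_path_basename(path, drop_extn=False):
    # Mirrors str_utils_pkg::str_path_basename.
    if path == "/":
        return path
    path = path.rstrip("/")
    base = path[path.rfind("/") + 1:]
    if base == ".":
        return base
    if drop_extn:
        dot = base.find(".")
        if dot != -1:
            base = base[:dot]
    return base
```

The assertions below follow the examples listed in the source comments.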


@ -2,10 +2,9 @@
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
flow: sim
// Where to find DV code
dv_root: "{proj_root}/vendor/lowrisc_ip/dv"
flow: sim
flow_makefile: "{dv_root}/tools/dvsim/sim.mk"
import_cfgs: ["{proj_root}/dv/uvm/common_project_cfg.hjson",
@ -17,7 +16,7 @@
build_dir: "{scratch_path}/{build_mode}"
run_dir_name: "{index}.{test}"
run_dir: "{scratch_path}/{run_dir_name}/out"
sw_build_dir: "{scratch_path}/sw"
sw_build_dir: "{scratch_path}"
sw_root_dir: "{proj_root}/sw"
// pass and fail patterns
@ -41,6 +40,7 @@
expand_uvm_verbosity_l: UVM_LOW
expand_uvm_verbosity_m: UVM_MEDIUM
expand_uvm_verbosity_h: UVM_HIGH
expand_uvm_verbosity_f: UVM_FULL
expand_uvm_verbosity_d: UVM_DEBUG
// Default simulation verbosity (l => UVM_LOW). Can be overridden by
@ -53,12 +53,10 @@
dut_instance: "{tb}.dut"
// Top level simulation entities.
sim_tops: ["-top {tb}"]
sim_tops: ["{tb}"]
// Default build and run opts
build_opts: [// List multiple tops for the simulation
"{sim_tops}",
// Standard UVM defines
build_opts: [// Standard UVM defines
"+define+UVM",
"+define+UVM_NO_DEPRECATED",
"+define+UVM_REGEX_NO_DPI",
@ -71,7 +69,7 @@
// Default list of things to export to shell
exports: [
{ TOOL_SRCS_DIR: "{tool_srcs_dir}" },
{ dv_root: "{dv_root}" },
{ SIMULATOR: "{tool}" },
{ WAVES: "{waves}" },
{ DUT_TOP: "{dut}" },
@ -104,7 +102,7 @@
// By default, two regressions are made available - "all" and "nightly". Both
// run all available tests for the DUT. "nightly" enables coverage as well.
// The 'tests' key is set to an empty list, which indicates "run everything".
// Test sets can enable sim modes, which are a set of build_opts and run_opts
// Regressions can enable sim modes, which are a set of build_opts and run_opts
// that are grouped together. These are appended to the build modes used by the
// tests.
regressions: [
@ -129,26 +127,20 @@
}
]
// Add waves.tcl to the set of sources to be copied over to
// {tool_srcs_dir}. This can be sourced by the tool-specific TCL
// script to set up wave dumping.
tool_srcs: ["{dv_root}/tools/sim.tcl",
"{dv_root}/tools/common.tcl",
"{dv_root}/tools/waves.tcl"]
// Project defaults for VCS
vcs_cov_cfg_file: "{{build_mode}_vcs_cov_cfg_file}"
vcs_cov_excl_files: ["{tool_srcs_dir}/common_cov_excl.el"]
vcs_unr_cfg_file: "{dv_root}/tools/vcs/unr.cfg"
vcs_cov_excl_files: ["{dv_root}/tools/vcs/common_cov_excl.el"]
// Build-specific coverage cfg files for VCS.
default_vcs_cov_cfg_file: "-cm_hier {tool_srcs_dir}/cover.cfg"
cover_reg_top_vcs_cov_cfg_file: "-cm_hier {tool_srcs_dir}/cover_reg_top.cfg"
default_vcs_cov_cfg_file: "-cm_hier {dv_root}/tools/vcs/cover.cfg"
cover_reg_top_vcs_cov_cfg_file: "-cm_hier {dv_root}/tools/vcs/cover_reg_top.cfg"
// Project defaults for Xcelium
// xcelium_cov_cfg_file: "{{build_mode}_xcelium_cov_cfg_file}"
// xcelium_cov_refine_files: ["{tool_srcs_dir}/common_cov.vRefine"]
// xcelium_cov_refine_files: ["{dv_root}/tools/xcelium/common_cov.vRefine"]
// Build-specific coverage cfg files for Xcelium.
// default_xcelium_cov_cfg_file: "-covfile {tool_srcs_dir}/cover.ccf"
// cover_reg_top_xcelium_cov_cfg_file: "-covfile {tool_srcs_dir}/cover_reg_top.ccf"
// default_xcelium_cov_cfg_file: "-covfile {dv_root}/tools/xcelium/cover.ccf"
// cover_reg_top_xcelium_cov_cfg_file: "-covfile {dv_root}/tools/xcelium/cover_reg_top.ccf"
}


@ -5,12 +5,6 @@
build_cmd: "{job_prefix} dsim"
run_cmd: "{job_prefix} dsim"
// Indicate the tool specific helper sources - these are copied over to the
// {tool_srcs_dir} before running the simulation.
// TODO, there is no dsim tool file, point to vcs for now to avoid error from script
// tool_srcs: ["{dv_root}/tools/dsim/*"]
build_opts: ["-work {build_dir}/dsim_out",
"-genimage image",
"-sv",
@ -23,6 +17,8 @@
"-c-opts -I{DSIM_HOME}/include",
"-timescale 1ns/1ps",
"-f {sv_flist}",
// List multiple tops for the simulation. Prepend each top level with `-top`.
"{eval_cmd} echo {sim_tops} | sed -E 's/(\\S+)/-top \\1/g'",
"+incdir+{build_dir}",
// Suppress following DSim errors and warnings:
// EnumMustBePositive - UVM 1.2 violates this
@ -57,13 +53,13 @@
// Merging coverage.
// "cov_db_dirs" is a special variable that appends all build directories in use.
// It is constructed by the tool itself.
cov_merge_dir: "{scratch_base_path}/cov_merge"
cov_merge_dir: "{scratch_path}/cov_merge"
cov_merge_db_dir: "{cov_merge_dir}/merged.vdb"
cov_merge_cmd: "{job_prefix} urg"
cov_merge_opts: []
// Generate coverage reports in text as well as html.
cov_report_dir: "{scratch_base_path}/cov_report"
cov_report_dir: "{scratch_path}/cov_report"
cov_report_cmd: "{job_prefix} urg"
cov_report_opts: []
cov_report_txt: "{cov_report_dir}/dashboard.txt"
@ -71,7 +67,7 @@
// Analyzing coverage - this is done by invoking --cov-analyze switch. It opens up the
// GUI for visual analysis.
cov_analyze_dir: "{scratch_base_path}/cov_analyze"
cov_analyze_dir: "{scratch_path}/cov_analyze"
cov_analyze_cmd: "{job_prefix} verdi"
cov_analyze_opts: ["-cov",
"-covdir {cov_merge_db_dir}",
@ -89,8 +85,8 @@
cov_metrics: ""
// pass and fail patterns
build_fail_patterns: ["^Error-.*$"]
run_fail_patterns: ["^Error-.*$"] // Null pointer error
build_fail_patterns: ["^=E:"]
run_fail_patterns: ["^=E:"]
// waveform
probe_file: "dsim.probe"
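The `{eval_cmd} echo {sim_tops} | sed -E 's/(\S+)/-top \1/g'` idiom added to several tool configs in this change prepends `-top` to every whitespace-separated top-level name. Its effect can be sketched in Python for illustration (`prepend_top` is a hypothetical helper, not part of dvsim):

```python
import re

def prepend_top(sim_tops):
    # Equivalent of: echo {sim_tops} | sed -E 's/(\S+)/-top \1/g'
    # Each run of non-whitespace characters gets a "-top " prefix.
    return re.sub(r"(\S+)", r"-top \1", sim_tops)
```

This is what lets `sim_tops` hold plain module names (`["{tb}"]`) while each tool still receives the `-top`-prefixed form it expects.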


@ -5,14 +5,13 @@
build_cmd: "vlib work && {job_prefix} vlog"
run_cmd: "{job_prefix} vsim"
// Indicate the tool specific helper sources - these are copied over to the
// {tool_srcs_dir} before running the simulation.
tool_srcs: ["{dv_root}/tools/riviera/*"]
build_opts: ["-sv",
"-timescale 1ns/1ps",
"-uvmver 1.2",
"-f {sv_flist}"]
build_opts: ["-timescale 1ns/1ps",
"+incdir+\"{RIVIERA_HOME}/vlib/uvm-1.2/src\"",
"\"{RIVIERA_HOME}/vlib/uvm-1.2/src/uvm_pkg.sv\"",
"-f {sv_flist}",
// List multiple tops for the simulation. Prepend each top level with `-top`.
"{eval_cmd} echo {sim_tops} | sed -E 's/(\\S+)/-top \\1/g'",
]
run_opts: ["-sv_seed={seed}",
"-c",
@ -20,6 +19,7 @@
"-lib {sv_flist_gen_dir}/work",
"+UVM_TESTNAME={uvm_test}",
"+UVM_TEST_SEQ={uvm_test_seq}",
"-sv_lib \"{RIVIERA_HOME}/bin/uvm_1_2_dpi\"",
"-do {run_script}"]
@ -27,7 +27,7 @@
supported_wave_formats: []
// Default tcl script used when running the sim. Override if needed.
run_script: "{tool_srcs_dir}/riviera_run.do"
run_script: "{dv_root}/tools/riviera/riviera_run.do"
// Coverage related.
// TODO: These options have to be filled in.
@ -41,20 +41,20 @@
// Merging coverage.
// "cov_db_dirs" is a special variable that appends all build directories in use.
// It is constructed by the tool itself.
cov_merge_dir: "{scratch_base_path}/cov_merge"
cov_merge_dir: "{scratch_path}/cov_merge"
cov_merge_db_dir: ""
cov_merge_cmd: ""
cov_merge_opts: []
// Generate coverage reports in text as well as html.
cov_report_dir: "{scratch_base_path}/cov_report"
cov_report_dir: "{scratch_path}/cov_report"
cov_report_cmd: ""
cov_report_opts: []
cov_report_dashboard: ""
// Analyzing coverage - this is done by invoking --cov-analyze switch. It opens up the
// GUI for visual analysis.
cov_analyze_dir: "{scratch_base_path}/cov_analyze"
cov_analyze_dir: "{scratch_path}/cov_analyze"
cov_analyze_cmd: ""
cov_analyze_opts: []
@ -86,5 +86,12 @@
is_sim_mode: 1
build_opts: []
}
// TODO: Add build and run options to enable zero delay loop detection.
{
name: riviera_loopdetect
is_sim_mode: 1
build_opts: []
run_opts: []
}
]
}


@ -4,108 +4,100 @@
.DEFAULT_GOAL := all
LOCK_TOOL_SRCS_DIR ?= flock --timeout 3600 ${tool_srcs_dir} --command
LOCK_SW_BUILD ?= flock --timeout 3600 ${sw_build_dir} --command
LOCK_SW_BUILD_DIR ?= flock --timeout 3600 ${sw_build_dir} --command
all: build run
###############################
## sim build and run targets ##
###############################
build: compile_result
build: build_result
prep_tool_srcs:
@echo "[make]: prep_tool_srcs"
mkdir -p ${tool_srcs_dir}
${LOCK_TOOL_SRCS_DIR} "cp -Ru ${tool_srcs} ${tool_srcs_dir}/."
pre_compile: prep_tool_srcs
@echo "[make]: pre_compile"
pre_build:
@echo "[make]: pre_build"
mkdir -p ${build_dir}
ifneq (${pre_build_cmds},)
cd ${build_dir} && ${pre_build_cmds}
endif
gen_sv_flist: pre_compile
gen_sv_flist: pre_build
@echo "[make]: gen_sv_flist"
ifneq (${sv_flist_gen_cmd},)
cd ${build_dir} && ${sv_flist_gen_cmd} ${sv_flist_gen_opts}
endif
compile: gen_sv_flist
@echo "[make]: compile"
do_build: gen_sv_flist
@echo "[make]: build"
cd ${sv_flist_gen_dir} && ${build_cmd} ${build_opts}
post_compile: compile
@echo "[make]: post_compile"
post_build: do_build
@echo "[make]: post_build"
ifneq (${post_build_cmds},)
cd ${build_dir} && ${post_build_cmds}
endif
compile_result: post_compile
@echo "[make]: compile_result"
build_result: post_build
@echo "[make]: build_result"
run: run_result
pre_run: prep_tool_srcs
pre_run:
@echo "[make]: pre_run"
mkdir -p ${run_dir}
ifneq (${sw_test},)
mkdir -p ${sw_build_dir}
ifneq (${pre_run_cmds},)
cd ${run_dir} && ${pre_run_cmds}
endif
.ONESHELL:
sw_build: pre_run
@echo "[make]: sw_build"
ifneq (${sw_test},)
ifneq (${sw_images},)
set -e
mkdir -p ${sw_build_dir}
# Initialize meson build system.
${LOCK_SW_BUILD} "cd ${proj_root} && \
${LOCK_SW_BUILD_DIR} "cd ${proj_root} && \
BUILD_ROOT=${sw_build_dir} ${proj_root}/meson_init.sh"
# Compile boot rom code and generate the image.
${LOCK_SW_BUILD} "ninja -C ${sw_build_dir}/build-out \
sw/device/boot_rom/boot_rom_export_${sw_build_device}"
# Extract the boot rom logs.
${proj_root}/util/device_sw_utils/extract_sw_logs.py \
-e "${sw_build_dir}/build-out/sw/device/boot_rom/boot_rom_${sw_build_device}.elf" \
-f .logs.fields -r .rodata .chip_info \
-n "rom" -o "${run_dir}"
# Copy over the boot rom image to the run_dir.
cp ${sw_build_dir}/build-out/sw/device/boot_rom/boot_rom_${sw_build_device}.32.vmem \
${run_dir}/rom.vmem
cp ${sw_build_dir}/build-out/sw/device/boot_rom/boot_rom_${sw_build_device}.elf \
${run_dir}/rom.elf
ifeq (${sw_test_is_prebuilt},1)
# Copy over the sw test image and related sources to the run_dir.
cp ${proj_root}/${sw_test}.64.vmem ${run_dir}/sw.vmem
# Optionally, assume that ${sw_test}_logs.txt exists and copy over to the run_dir.
# Ignore copy error if it actually doesn't exist. Likewise for ${sw_test}_rodata.txt.
-cp ${proj_root}/${sw_test}_logs.txt ${run_dir}/sw_logs.txt
-cp ${proj_root}/${sw_test}_rodata.txt ${run_dir}/sw_rodata.txt
else
# Compile the sw test code and generate the image.
${LOCK_SW_BUILD} "ninja -C ${sw_build_dir}/build-out \
${sw_test}_export_${sw_build_device}"
# Convert sw image to frame format
# TODO only needed for loading sw image through SPI. Can enhance this later
${LOCK_SW_BUILD} "ninja -C ${sw_build_dir}/build-out sw/host/spiflash/spiflash_export"
${LOCK_SW_BUILD} "${sw_build_dir}/build-bin/sw/host/spiflash/spiflash --input \
${sw_build_dir}/build-bin/${sw_test}_${sw_build_device}.bin \
--dump-frames=${run_dir}/sw.frames.bin"
${LOCK_SW_BUILD} "srec_cat ${run_dir}/sw.frames.bin --binary \
--offset 0x0 --byte-swap 4 --fill 0xff -within ${run_dir}/sw.frames.bin -binary -range-pad 4 \
--output ${run_dir}/sw.frames.vmem --vmem"
# Extract the sw test logs.
${proj_root}/util/device_sw_utils/extract_sw_logs.py \
-e "${sw_build_dir}/build-out/${sw_test}_${sw_build_device}.elf" \
-f .logs.fields -r .rodata \
-n "sw" -o "${run_dir}"
# Copy over the sw test image to the run_dir.
cp ${sw_build_dir}/build-out/${sw_test}_${sw_build_device}.64.vmem ${run_dir}/sw.vmem
cp ${sw_build_dir}/build-out/${sw_test}_${sw_build_device}.elf ${run_dir}/sw.elf
# Loop through the list of sw_images and invoke meson on each item.
# `sw_images` is a space-separated list of tests to be built into an image.
# Optionally, each item in the list can have additional metadata / flags using
# the delimiter ':'. The format is as follows:
# <path-to-sw-test>:<index>:<flag1>:<flag2>
#
# If no delimiter is detected, then the full string is considered to be the
# <path-to-sw-test>. If 1 delimiter is detected, then it must be <path-to-sw-
# test> followed by <index>. The <flag> is considered optional.
@for sw_image in ${sw_images}; do \
image=`echo $$sw_image | cut -d: -f 1`; \
index=`echo $$sw_image | cut -d: -f 2`; \
flags=(`echo $$sw_image | cut -d: -f 3- --output-delimiter " "`); \
if [[ -z $$image ]]; then \
echo "ERROR: SW image \"$$sw_image\" is malformed."; \
echo "Expected format: path-to-sw-test:index:optional-flags."; \
exit 1; \
fi; \
if [[ $${flags[@]} =~ "prebuilt" ]]; then \
echo "SW image \"$$image\" is prebuilt - copying sources."; \
target_dir=`dirname ${sw_build_dir}/build-bin/$$image`; \
mkdir -p $$target_dir; \
cp ${proj_root}/$$image* $$target_dir/.; \
else \
echo "Building SW image \"$$image\"."; \
target="$$image""_export_${sw_build_device}"; \
${LOCK_SW_BUILD_DIR} "ninja -C ${sw_build_dir}/build-out $$target"; \
fi; \
done;
endif
endif
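The `<path-to-sw-test>:<index>:<flag1>:<flag2>` metadata format documented in the comment above could be parsed as follows (a hypothetical Python sketch of what the shell `cut` pipeline does, not part of the Makefile):

```python
def parse_sw_image(sw_image):
    # Split <path-to-sw-test>[:<index>[:<flag>...]] into its components.
    fields = sw_image.split(":")
    image = fields[0]
    if not image:
        # Mirrors the "SW image is malformed" error in the Makefile loop.
        raise ValueError("malformed sw image: " + sw_image)
    index = fields[1] if len(fields) > 1 else None
    flags = fields[2:]
    return image, index, flags
```

A `prebuilt` flag, for example, ends up in the `flags` list and selects the copy-sources branch instead of the ninja build.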
simulate: sw_build
@echo "[make]: simulate"
cd ${run_dir} && ${run_cmd} ${run_opts}
post_run: simulate
@echo "[make]: post_run"
ifneq (${post_run_cmds},)
cd ${run_dir} && ${post_run_cmds}
endif
run_result: post_run
@echo "[make]: run_result"
@ -119,29 +111,42 @@ debug_waves:
############################
## coverage related targets ##
############################
cov_unr_build: gen_sv_flist
@echo "[make]: cov_unr_build"
cd ${sv_flist_gen_dir} && ${cov_unr_build_cmd} ${cov_unr_build_opts}
cov_unr: cov_unr_build
@echo "[make]: cov_unr"
cd ${sv_flist_gen_dir} && ${cov_unr_run_cmd} ${cov_unr_run_opts}
# Merge coverage if there are multiple builds.
cov_merge:
@echo "[make]: cov_merge"
${cov_merge_cmd} ${cov_merge_opts}
# Open coverage tool to review and create report or exclusion file.
cov_analyze: prep_tool_srcs
@echo "[make]: cov_analyze"
${cov_analyze_cmd} ${cov_analyze_opts}
# Generate coverage reports.
cov_report:
@echo "[make]: cov_report"
${cov_report_cmd} ${cov_report_opts}
# Open coverage tool to review and create report or exclusion file.
cov_analyze:
@echo "[make]: cov_analyze"
${cov_analyze_cmd} ${cov_analyze_opts}
.PHONY: build \
run \
reg \
pre_compile \
compile \
post_compile \
compile_result \
pre_run \
simulate \
post_run \
run_result
pre_build \
gen_sv_flist \
do_build \
post_build \
build_result \
run \
pre_run \
sw_build \
simulate \
post_run \
run_result \
debug_waves \
cov_merge \
cov_analyze \
cov_report


@ -0,0 +1,28 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
entries: [
{
name: alert_test
desc: '''
Verify common `alert_test` CSR that allows SW to mock-inject alert requests.
- Enable a random set of alert requests by writing random value to
alert_test CSR.
- Check each `alert_tx.alert_p` pin to verify that only the requested alerts
are triggered.
- During alert handshakes, write the `alert_test` CSR again to verify that:
  If `alert_test` targets an alert with a handshake already in progress, the
  `alert_test` request is ignored.
  If `alert_test` targets an idle alert, a new alert handshake is triggered.
- Wait for the alert handshakes to finish and verify that all `alert_tx.alert_p`
  pins are de-asserted back to 0.
- Repeat the above steps several times.
'''
milestone: V2
tests: ["{name}{intf}_alert_test"]
}
]
}


@ -0,0 +1,20 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
build_modes: [
{
name: cover_reg_top
}
]
tests: [
{
name: "{name}_alert_test"
build_mode: "cover_reg_top"
uvm_test_seq: "{name}_common_vseq"
run_opts: ["+run_alert_test", "+en_scb=0"]
reseed: 50
}
]
}


@ -6,15 +6,13 @@
build_ex: "{build_dir}/simv"
run_cmd: "{job_prefix} {build_ex}"
// Indicate the tool specific helper sources - these are copied over to the
// {tool_srcs_dir} before running the simulation.
tool_srcs: ["{dv_root}/tools/vcs/*"]
build_opts: ["-sverilog -full64 -licqueue -kdb -ntb_opts uvm-1.2",
"-timescale=1ns/1ps",
"-Mdir={build_ex}.csrc",
"-o {build_ex}",
"-f {sv_flist}",
// List multiple tops for the simulation. Prepend each top level with `-top`.
"{eval_cmd} echo {sim_tops} | sed -E 's/(\\S+)/-top \\1/g'",
"+incdir+{build_dir}",
// Turn on warnings for non-void functions called with return values ignored
"+warn=SV-NFIVC",
@ -107,7 +105,7 @@
supported_wave_formats: ["fsdb", "vpd"]
// Default tcl script used when running the sim. Override if needed.
run_script: "{tool_srcs_dir}/sim.tcl"
run_script: "{dv_root}/tools/sim.tcl"
// Coverage related.
cov_db_dir: "{scratch_path}/coverage/{build_mode}.vdb"
@ -120,7 +118,7 @@
// Merging coverage.
// "cov_db_dirs" is a special variable that appends all build directories in use.
// It is constructed by the tool itself.
cov_merge_dir: "{scratch_base_path}/cov_merge"
cov_merge_dir: "{scratch_path}/cov_merge"
cov_merge_db_dir: "{cov_merge_dir}/merged.vdb"
cov_merge_cmd: "{job_prefix} urg"
cov_merge_opts: ["-full64",
@ -132,14 +130,11 @@
"-parallel",
"-parallel_split 20",
// Use cov_db_dirs var for dir args; append -dir in front of each
'''{eval_cmd} dirs=`echo {cov_db_dirs}`; dir_args=; \
for d in $dirs; do dir_args="$dir_args -dir $d"; done; \
echo $dir_args
''',
"{eval_cmd} echo {cov_db_dirs} | sed -E 's/(\\S+)/-dir \\1/g'",
"-dbname {cov_merge_db_dir}"]
// Generate coverage reports in text as well as html.
cov_report_dir: "{scratch_base_path}/cov_report"
cov_report_dir: "{scratch_path}/cov_report"
cov_report_cmd: "{job_prefix} urg"
cov_report_opts: ["-full64",
"+urg+lic+wait",
@ -152,9 +147,43 @@
cov_report_txt: "{cov_report_dir}/dashboard.txt"
cov_report_page: "dashboard.html"
// UNR related.
// All code coverage metrics; assertion coverage isn't supported.
cov_unr_metrics: "line+cond+fsm+tgl+branch"
cov_unr_dir: "{scratch_path}/cov_unr"
cov_unr_common_build_opts: ["-sverilog -full64 -licqueue -ntb_opts uvm-1.2",
"-timescale=1ns/1ps"]
// Use the recommended UUM (Unified Usage Model) 3-step flow. The other flow defines the
// macro "SYNTHESIS", which is already used in the design.
cov_unr_build_cmd: [// Step 1
"{job_prefix} vlogan {cov_unr_common_build_opts} &&",
// Step 2
"{job_prefix} vlogan {cov_unr_common_build_opts}",
// grep all defines from {build_opts} from step 2
'''{eval_cmd} opts=`echo {build_opts}`; defines=; d=; \
for o in $opts; \
do \
d=`echo $o | grep -o '+define+.*'`; \
defines="$defines $d"; \
done; \
echo $defines
''',
"-f {sv_flist} &&",
// Step 3
"{job_prefix} vcs {cov_unr_common_build_opts}"]
cov_unr_build_opts: ["-cm {cov_unr_metrics}",
"{vcs_cov_cfg_file}",
"-unr={vcs_unr_cfg_file}",
"{dut}"]
cov_unr_run_cmd: ["{job_prefix} ./unrSimv"]
cov_unr_run_opts: ["-unr"]
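The embedded shell loop in step 2 above collects every `+define+` option out of `{build_opts}` so the UNR compile sees the same defines as the regular build. A simplified Python model of that filtering (hypothetical helper, for illustration):

```python
def extract_defines(build_opts):
    # Keep only the +define+ tokens from a space-separated option string,
    # approximating the grep -o '+define+.*' loop in the UNR build command.
    return [opt for opt in build_opts.split() if opt.startswith("+define+")]
```

Non-define switches such as `-full64` are dropped, leaving just the preprocessor defines to pass along.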
// Analyzing coverage - this is done by invoking --cov-analyze switch. It opens up the
// GUI for visual analysis.
cov_analyze_dir: "{scratch_base_path}/cov_analyze"
cov_analyze_dir: "{scratch_path}/cov_analyze"
cov_analyze_cmd: "{job_prefix} verdi"
cov_analyze_opts: ["-cov",
"-covdir {cov_merge_db_dir}",
@ -228,7 +257,7 @@
{
name: vcs_xprop
is_sim_mode: 1
build_opts: ["-xprop={tool_srcs_dir}/xprop.cfg"]
build_opts: ["-xprop={dv_root}/tools/vcs/xprop.cfg"]
}
{
name: vcs_profile


@ -0,0 +1,139 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
// Replicate settings from `common_sim_cfg.hjson`.
//
// Unfortunately, that file assumes that the tools are invoked natively as
// opposed to via FuseSoC. Verilator is setup to be invoked via FuseSoC,
// which causes FuseSoC to fail due to unknown {build_opts}. Hence, contents
// from `common_sim_cfg.hjson` are replicated here as appropriate.
// -- START --
dv_root: "{proj_root}/hw/dv"
flow: sim
flow_makefile: "{dv_root}/tools/dvsim/sim.mk"
import_cfgs: ["{proj_root}/hw/data/common_project_cfg.hjson",
"{dv_root}/tools/dvsim/common_modes.hjson",
"{dv_root}/tools/dvsim/fusesoc.hjson",
]
// Default directory structure for the output
build_dir: "{scratch_path}/{build_mode}"
run_dir_name: "{index}.{test}"
run_dir: "{scratch_path}/{run_dir_name}/out"
sw_build_dir: "{scratch_path}"
sw_root_dir: "{proj_root}/sw"
regressions: [
{
name: smoke
reseed: 1
}
{
name: all
}
{
name: all_once
reseed: 1
}
{
name: nightly
}
]
// -- END --
build_cmd: "fusesoc {fusesoc_cores_root_dirs} run"
ex_name: "{eval_cmd} echo \"{fusesoc_core}\" | cut -d: -f3"
run_cmd: "{build_dir}/sim-verilator/V{ex_name}"
// TODO: Verilator has a few useful build switches. Need to figure out how to
// pass them via FuseSoC.
build_opts: ["--flag=fileset_{design_level}",
"--target=sim",
"--build-root={build_dir}",
"--setup",
"--build",
"{fusesoc_core}"
// "--timescale 1ns/1ps",
// Enable all assertions.
// "--assert",
// Flush streams immediately after all $displays.
// "--autoflush",
// Enable multi-threading.
// "--threads 4",
// Randomize all 2-state vars if driven to unknown 'X'.
// "--x-assign unique",
// "--x-initial unique",
]
run_opts: [// Set random seed.
// "+verilator+seed+{seed}",
]
// Supported wave dumping formats (in order of preference).
supported_wave_formats: ["fst"]
// Vars that need to exported to the env.
exports: [
]
// pass and fail patterns
build_pass_patterns: []
build_fail_patterns: [// Verilator compile error.
"^%Error.*?:",
// FuseSoC build error.
"^ERROR:.*$",
]
run_pass_patterns: ["^TEST PASSED CHECKS$"]
run_fail_patterns: [// $warning/error/fatal messages.
"^\\[[0-9]+\\] %Warning.*?: ",
"^\\[[0-9]+\\] %Error.*?: ",
"^\\[[0-9]+\\] %Fatal.*?: ",
// Ninja / SW compile failure.
"^FAILED: ",
" error: ",
// Failure signature pattern.
"^TEST FAILED CHECKS$"
]
// Coverage related.
cov_db_dir: ""
cov_db_test_dir: ""
build_modes: [
{
name: verilator_waves
is_sim_mode: 1
// build_opts: ["--trace",
// "--trace-fst"
// "--trace-structs",
// "--trace-params",
// "-CFLAGS \"-DVM_TRACE_FMT_FST\"",
// ]
run_opts: ["--trace"]
}
// TODO: These need to be fine-tuned.
{
name: verilator_cov
is_sim_mode: 1
// build_opts: ["--coverage"]
}
{
name: verilator_xprop
is_sim_mode: 1
}
{
name: verilator_profile
is_sim_mode: 1
// build_opts: ["--prof-cfuncs",
// "--prof-threads",
// ]
}
{
name: verilator_loopdetect
is_sim_mode: 1
}
]
}


@ -5,10 +5,6 @@
build_cmd: "{job_prefix} xrun"
run_cmd: "{job_prefix} xrun"
// Indicate the tool specific helper sources - these are copied over to the
// {tool_srcs_dir} before running the simulation.
tool_srcs: ["{dv_root}/tools/xcelium/*"]
build_opts: ["-elaborate -64bit -access +r -sv",
"-licqueue",
// TODO: duplicate primitives between OT and Ibex #1231
@ -18,6 +14,11 @@
"-f {sv_flist}",
"-uvmhome CDNS-1.2",
"-xmlibdirname {build_dir}/xcelium.d",
// List multiple tops for the simulation. Prepend each top level with `-top`.
"{eval_cmd} echo {sim_tops} | sed -E 's/(\\S+)/-top \\1/g'",
// Set the top level elaborated entity (snapshot name) correctly since there are
// multiple tops.
"-snapshot {tb}",
// for uvm_hdl_* used by csr backdoor
"-access +rw",
// Use this to conditionally compile for Xcelium (example: LRM interpretations differ
@ -33,7 +34,9 @@
run_opts: ["-input {run_script}",
"-licqueue",
"-64bit -xmlibdirname {build_dir}/xcelium.d -R",
"-64bit -xmlibdirname {build_dir}/xcelium.d",
// Use the same snapshot name set during the build step.
"-r {tb}",
"+SVSEED={seed}",
"+UVM_TESTNAME={uvm_test}",
"+UVM_TEST_SEQ={uvm_test_seq}",
@ -63,7 +66,7 @@
supported_wave_formats: ["shm", "fsdb", "vcd"]
// Default tcl script used when running the sim. Override if needed.
run_script: "{tool_srcs_dir}/sim.tcl"
run_script: "{dv_root}/tools/sim.tcl"
// Coverage related.
// By default, collect all coverage metrics: block:expr:fsm:toggle:functional.
@ -88,26 +91,26 @@
// Merging coverage.
// It is constructed by the tool itself.
cov_merge_dir: "{scratch_base_path}/cov_merge"
cov_merge_dir: "{scratch_path}/cov_merge"
cov_merge_db_dir: "{cov_merge_dir}/merged"
cov_merge_cmd: "{job_prefix} imc"
cov_merge_opts: ["-64bit",
"-licqueue",
"-exec {tool_srcs_dir}/cov_merge.tcl"]
"-exec {dv_root}/tools/xcelium/cov_merge.tcl"]
// Generate coverage reports in text as well as html.
cov_report_dir: "{scratch_base_path}/cov_report"
cov_report_dir: "{scratch_path}/cov_report"
cov_report_cmd: "{job_prefix} imc"
cov_report_opts: ["-64bit",
"-licqueue",
"-exec {tool_srcs_dir}/cov_report.tcl",
"-exec {dv_root}/tools/xcelium/cov_report.tcl",
"{xcelium_cov_refine_files}"]
cov_report_txt: "{cov_report_dir}/cov_report.txt"
cov_report_page: "index.html"
// Analyzing coverage - this is done by invoking --cov-analyze switch. It opens up the
// GUI for visual analysis.
cov_analyze_dir: "{scratch_base_path}/cov_analyze"
cov_analyze_dir: "{scratch_path}/cov_analyze"
cov_analyze_cmd: "{job_prefix} imc"
cov_analyze_opts: ["-gui",
"-64bit",


@ -99,9 +99,7 @@ def main():
f.write("CAPI=2:\n")
yaml.dump(ral_pkg_core_text,
f,
encoding="utf-8",
default_flow_style=False,
sort_keys=False)
encoding="utf-8")
print("RAL core file written to {}".format(ral_pkg_core_file))


@ -6,16 +6,16 @@
# VCS syntax: -ucli -do <this file>
# Xcelium syntax: -input <this file>
set tool_srcs_dir ""
if {[info exists ::env(TOOL_SRCS_DIR)]} {
set tool_srcs_dir "$::env(TOOL_SRCS_DIR)"
set dv_root ""
if {[info exists ::env(dv_root)]} {
set dv_root "$::env(dv_root)"
} else {
puts "ERROR: Script run without TOOL_SRCS_DIR environment variable."
puts "ERROR: Script run without dv_root environment variable."
quit
}
source "${tool_srcs_dir}/common.tcl"
source "${tool_srcs_dir}/waves.tcl"
source "${dv_root}/tools/common.tcl"
source "${dv_root}/tools/waves.tcl"
run
quit

vendor/lowrisc_ip/dv/tools/vcs/unr.cfg

@ -0,0 +1,23 @@
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
-covInput $SCRATCH_PATH/cov_merge/merged.vdb
-covDUT $dut_instance
# Provide the clock specification
-clock clk 100
# Provide the reset specification: signal_name, active_value, num clk cycles reset to be active
-reset rst_n 0 20
# Black box some of the modules
# -blackBoxes -type design *
# Include common el file, so that it doesn't generate reviewed common exclusions
-covEL $dv_root/tools/vcs/common_cov_excl.el
# Name of the generated exclusion file
-save_exclusion $SCRATCH_PATH/cov_unr/unr_exclude.el
# Enables verbose reporting in addition to summary reporting.
-verboseReport


@ -47,22 +47,17 @@ void VerilatorSimCtrl::SetTop(VerilatedToplevel *top, CData *sig_clk,
flags_ = flags;
}
int VerilatorSimCtrl::Exec(int argc, char **argv) {
std::pair<int, bool> VerilatorSimCtrl::Exec(int argc, char **argv) {
bool exit_app = false;
if (!ParseCommandArgs(argc, argv, exit_app)) {
return 1;
}
bool good_cmdline = ParseCommandArgs(argc, argv, exit_app);
if (exit_app) {
// Successful exit requested by command argument parsing
return 0;
return std::make_pair(good_cmdline ? 0 : 1, false);
}
RunSimulation();
if (!WasSimulationSuccessful()) {
return 1;
}
return 0;
int retcode = WasSimulationSuccessful() ? 0 : 1;
return std::make_pair(retcode, true);
}
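The new return convention is an (exit_code, simulation_ran) pair, with argument parsing now always setting `exit_app` on failure. The resulting control flow can be modeled in Python (hypothetical sketch, for illustration only):

```python
def exec_flow(good_cmdline, exit_app, sim_ok):
    # Model of the rewritten Exec(): if argument parsing requests an early
    # exit (e.g. --help, or a bad option), report whether parsing succeeded
    # and flag that the simulation did not run. Otherwise run the simulation
    # and report its result.
    if exit_app:
        return (0 if good_cmdline else 1, False)
    return (0 if sim_ok else 1, True)
```

Callers such as tb_cs_registers.cc use the boolean to decide whether post-simulation steps (e.g. result checks) are meaningful.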
bool VerilatorSimCtrl::ParseCommandArgs(int argc, char **argv, bool &exit_app) {
@ -88,6 +83,7 @@ bool VerilatorSimCtrl::ParseCommandArgs(int argc, char **argv, bool &exit_app) {
if (!tracing_possible_) {
std::cerr << "ERROR: Tracing has not been enabled at compile time."
<< std::endl;
exit_app = true;
return false;
}
TraceOn();
@ -101,6 +97,7 @@ bool VerilatorSimCtrl::ParseCommandArgs(int argc, char **argv, bool &exit_app) {
break;
case ':': // missing argument
std::cerr << "ERROR: Missing argument." << std::endl << std::endl;
exit_app = true;
return false;
case '?':
default:;
@ -115,6 +112,7 @@ bool VerilatorSimCtrl::ParseCommandArgs(int argc, char **argv, bool &exit_app) {
// Parse arguments for all registered extensions
for (auto it = extension_array_.begin(); it != extension_array_.end(); ++it) {
if (!(*it)->ParseCLIArguments(argc, argv, exit_app)) {
exit_app = true;
return false;
if (exit_app) {
return true;
@ -291,11 +289,16 @@ void VerilatorSimCtrl::Run() {
time_begin_ = std::chrono::steady_clock::now();
UnsetReset();
Trace();
unsigned long start_reset_cycle_ = initial_reset_delay_cycles_;
unsigned long end_reset_cycle_ = start_reset_cycle_ + reset_duration_cycles_;
while (1) {
if (time_ / 2 >= initial_reset_delay_cycles_) {
unsigned long cycle_ = time_ / 2;
if (cycle_ == start_reset_cycle_) {
SetReset();
}
if (time_ / 2 >= reset_duration_cycles_ + initial_reset_delay_cycles_) {
} else if (cycle_ == end_reset_cycle_) {
UnsetReset();
}
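The rewritten loop above asserts reset exactly at cycle `initial_reset_delay_cycles_` and de-asserts it `reset_duration_cycles_` later, with each clock cycle spanning two `time_` steps (hence `time_ / 2`). The resulting reset window can be modeled as follows (hypothetical Python sketch):

```python
def reset_active(time_step, initial_reset_delay_cycles, reset_duration_cycles):
    # Each clock cycle spans two time steps, hence the time_ / 2 in the loop.
    cycle = time_step // 2
    start = initial_reset_delay_cycles
    end = start + reset_duration_cycles
    # Reset is asserted at 'start' and released at 'end' (half-open window).
    return start <= cycle < end
```

For example, with a 5-cycle delay and a 20-cycle duration, reset is active for cycles 5 through 24 inclusive.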


@ -49,15 +49,18 @@ class VerilatorSimCtrl {
* 1. Parses a C-style set of command line arguments (see ParseCommandArgs())
* 2. Runs the simulation (see RunSimulation())
*
* @return a main()-compatible process exit code: 0 for success, 1 in case
* of an error.
* @return a pair with main()-compatible process exit code (0 for success, 1
* in case of an error) and a boolean flag telling the calling
* function whether the simulation actually ran.
*/
int Exec(int argc, char **argv);
std::pair<int, bool> Exec(int argc, char **argv);
/**
* Parse command line arguments
*
* Process all recognized command-line arguments from argc/argv.
* Process all recognized command-line arguments from argc/argv. If a command
* line argument implies that we should exit immediately (like --help), sets
* exit_app. On failure, sets exit_app as well as returning false.
*
* @param argc, argv Standard C command line arguments
* @param exit_app Indicate that program should terminate


@ -0,0 +1,157 @@
---
title: "Primitive Component: Flash Wrapper"
---
# Overview
`prim_flash` is a wrapper interface for technology specific flash modules.
As the exact details of each technology can be different, this document mainly describes the interface requirements and their functions.
The wrapper, however, does assume that all page sizes are the same (they cannot differ between the data and info partitions, or between different types of info partitions).
## Parameters
Name | type | Description
---------------|--------|----------------------------------------------------------
NumBanks | int | Number of flash banks. Flash banks are assumed to be identical, asymmetric flash banks are not supported
InfosPerBank | int | Maximum number of info pages in the info partition. Since info partitions can have multiple types, this is max among all types.
InfoTypes | int | The number of info partition types; this number can range from 1 to N.
InfoTypesWidth | int | The number of bits needed to represent the info types.
PagesPerBank | int | The number of pages per bank for data partition.
WordsPerPage | int | The number of words per page per bank for both information and data partition.
DataWidth | int | The full data width of a flash word (inclusive of metadata)
MetaDataWidth | int | The metadata width of a flash word
TestModeWidth | int | The number of test modes for a bank of flash
## Signal Interfaces
### Overall Interface Signals
Name | In/Out | Description
------------------------|--------|---------------------------------
clk_i | input | Clock input
rst_n_i | input | Reset input
flash_req_i | input | Inputs from flash protocol and physical controllers
flash_rsp_o | output | Outputs to flash protocol and physical controllers
prog_type_avail_o | output | Available program types in this flash wrapper: Currently there are only two types, program normal and program repair
init_busy_o | output | The flash wrapper is undergoing initialization
tck_i | input | jtag tck
tdi_i | input | jtag tdi
tms_i | input | jtag tms
tdo_o | output | jtag tdo
scanmode_i | input | dft scanmode input
scan_rst_n_i | input | dft scanmode reset
flash_power_ready_h_io | inout | flash power is ready (high voltage connection)
flash_power_down_h_io | inout | flash wrapper is powering down (high voltage connection)
flash_test_mode_a_io | inout | flash test mode values (analog connection)
flash_test_voltage_h_io | inout | flash test mode voltage (high voltage connection)
### Flash Request/Response Signals
Name | In/Out | Description
-------------------|--------|---------------------------------
rd | input | read request
prog | input | program request
prog_last | input | last program beat
prog_type | input | type of program requested: currently there are only two types, program normal and program repair
pg_erase | input | page erase request
bk_erase | input | bank erase request
erase_suspend | input | erase suspend request
addr | input | requested transaction address
part | input | requested transaction partition
info_sel | input | if requested transaction is information partition, the type of information partition accessed
he | output | high endurance enable for requested address
prog_data | input | program data
ack | output | transaction acknowledge
rd_data | output | transaction read data
done | output | transaction done
erase_suspend_done | output | erase suspend done
# Theory of Operations
## Transactions
Transactions into the flash wrapper follow a req / ack / done format.
A request is issued by raising one of `rd`, `prog`, `pg_erase` or `bk_erase` to 1.
When the flash wrapper accepts the transaction, `ack` is returned.
When the transaction fully completes, a `done` is returned as well.
Depending on the type of transaction, there may be a significant gap between `ack` and `done`.
For example, a read may have only 1 or 2 cycles between transaction acknowledgement and transaction completion.
A program or erase, on the other hand, may have a gap extending to microseconds or even milliseconds.
It is the flash wrapper's decision how many outstanding transactions to accept.
The following are examples for read, program and erase transactions.
### Read
{{< wavejson >}}
{signal: [
{name: 'clk_i', wave: 'p................'},
{name: 'rd_i', wave: '011..0.1..0......'},
{name: 'addr_i', wave: 'x22..x.2..x......'},
{name: 'ack_o', wave: '1.0.10...10......'},
{name: 'done_o', wave: '0...10...10....10'},
{name: 'rd_data_o', wave: 'x...2x...2x....2x'},
]}
{{< /wavejson >}}
### Program
{{< wavejson >}}
{signal: [
{name: 'clk_i', wave: 'p................'},
{name: 'prog_i', wave: '011...0.1....0...'},
{name: 'prog_type_i', wave: 'x22...x.2....x...'},
{name: 'prog_data_i', wave: 'x22...x.2....x...'},
{name: 'prog_last_i', wave: '0.......1....0...'},
{name: 'ack_o', wave: '010..10.....10...'},
{name: 'done_o', wave: '0..............10'},
]}
{{< /wavejson >}}
### Erase
{{< wavejson >}}
{signal: [
{name: 'clk_i', wave: 'p................'},
{name: '*_erase_i', wave: '01.0.........1.0.'},
{name: 'ack_o', wave: '0.10..........10.'},
{name: 'done_o', wave: '0.....10.........'},
]}
{{< /wavejson >}}
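The req / ack / done ordering in the waveforms above can be captured in a small behavioral model. This is a minimal sketch, not part of the flash wrapper RTL: the `FlashTxnModel` name, the per-transaction cycle latencies, and the outstanding-transaction limit of two are all illustrative assumptions.

```cpp
#include <cassert>
#include <cstddef>
#include <queue>

// Hypothetical software model of the req / ack / done protocol described
// above. Latencies and the outstanding limit are illustrative only.
class FlashTxnModel {
 public:
  // Present a request. Returns true if the wrapper acks (accepts) it.
  bool Request(int done_latency_cycles) {
    if (outstanding_.size() >= kMaxOutstanding) {
      return false;  // no ack until an earlier transaction completes
    }
    outstanding_.push(done_latency_cycles);
    return true;  // ack coincides with acceptance
  }

  // Advance one clock cycle. Returns true if `done` fires this cycle.
  bool Step() {
    if (outstanding_.empty()) {
      return false;
    }
    if (--outstanding_.front() <= 0) {
      outstanding_.pop();
      return true;  // transaction fully complete
    }
    return false;
  }

 private:
  static constexpr std::size_t kMaxOutstanding = 2;  // wrapper's own choice
  std::queue<int> outstanding_;
};
```

As in the waveforms, `ack` and `done` may nearly coincide for a short read but are widely separated for a program or erase.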
## Initialization
The flash wrapper may undergo technology-specific initializations when it is first powered up.
During this state, it asserts `init_busy_o` to inform the outside world that it is not ready for transactions.
During this time, if a transaction is issued towards the flash wrapper, the transaction is not acknowledged until the initialization is complete.
## Program Beats
Since flash programs can take a significant amount of time, certain flash wrappers employ methods to optimize the program operation.
This optimization may place an upper limit on how many flash words can be handled at a time.
The purpose of `prog_last` is thus to indicate when a program burst has completed.
Assume the flash wrapper can handle 16 words per program operation.
Assume a program burst has only 15 words to program and thus will not fill up the full program resolution.
On the 15th word, the `prog_last` signal asserts and informs the flash wrapper that it should not expect a 16th word and should proceed to complete the program operation.
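As a concrete sketch of this rule, the following hypothetical helper commits buffered words either when an assumed 16-word window fills or when `prog_last` arrives early; the function name and window size are assumptions, not part of the wrapper interface.

```cpp
#include <cassert>

// Hypothetical model of program beats: the wrapper buffers words up to an
// assumed 16-word window, and prog_last forces an early commit.
constexpr int kProgWindow = 16;

// Returns the total number of words committed for a burst of burst_len words.
int ProgramBurst(int burst_len) {
  int buffered = 0;
  int committed = 0;
  for (int word = 0; word < burst_len; ++word) {
    bool prog_last = (word == burst_len - 1);  // asserted with the final word
    ++buffered;
    // Commit when the window fills, or when prog_last ends the burst early.
    if (buffered == kProgWindow || prog_last) {
      committed += buffered;
      buffered = 0;
    }
  }
  return committed;
}
```

A 15-word burst, as in the example above, commits on `prog_last` rather than waiting for a 16th word that never arrives.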
## Program Type
The `prog_type` input informs the flash wrapper what type of program operation it should perform.
A program type not supported by the wrapper, as indicated through `prog_type_avail_o`, shall never be issued to the flash wrapper.
## Erase Suspend
Since erase operations can take a significant amount of time, sometimes it is necessary for software or other components to suspend the operation.
The suspend operation follows a similar request (`erase_suspend`) and done (`erase_suspend_done`) interface.
When the erase suspend completes, the flash wrapper circuitry also asserts `done` for the ongoing erase transaction to ensure all hardware gracefully completes.
The following is an example diagram:
{{< wavejson >}}
{signal: [
{name: 'clk_i', wave: 'p................'},
{name: 'pg_erase_i', wave: '01............0..'},
{name: 'ack_o', wave: '1.0..............'},
{name: 'erase_suspend_i', wave: '0.....1.......0..'},
{name: 'done_o', wave: '0............10..'},
{name: 'erase_suspend_done_o', wave: '0............10..'},
]}
{{< /wavejson >}}
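The timing in the diagram can be summarized with a hedged sketch; `RunErase`, the cycle counts, and the assumption that a suspend completes in the cycle it is observed are all illustrative (a real wrapper may need extra cycles to wind the erase down gracefully).

```cpp
#include <cassert>

// Hypothetical model of erase suspend: if a suspend request arrives while an
// erase is in flight, the erase terminates early and both `done` (for the
// ongoing erase) and `erase_suspend_done` assert on the same cycle.
struct EraseResult {
  int done_cycle;     // cycle on which done asserts
  bool suspend_done;  // whether erase_suspend_done also asserts
};

EraseResult RunErase(int erase_cycles, int suspend_req_cycle /* -1: none */) {
  for (int cycle = 0; cycle < erase_cycles; ++cycle) {
    if (suspend_req_cycle >= 0 && cycle >= suspend_req_cycle) {
      return {cycle, true};  // graceful early completion
    }
  }
  return {erase_cycles, false};  // erase ran to natural completion
}
```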


@ -72,9 +72,9 @@ round_keys = key_derivation(key_i, idx_i);
state = data_i;
for (int i=0; i < NumRounds; i++) {
state = state ^ round_keys[i];
state = sbox4_layer(state);
state = perm_layer(state);
state = state ^ round_keys[i];
state = sbox4_layer(state);
state = perm_layer(state);
}
data_o = state ^ round_keys[NumRounds-1];


@ -23,13 +23,8 @@
// Default iterations for all tests - each test entry can override this.
reseed: 50
// Add the coverage configuration and exclusion files so they get copied
// over to the scratch area.
tool_srcs: ["{proj_root}/hw/ip/prim/dv/prim_lfsr/data/prim_lfsr_cover.cfg",
"{proj_root}/hw/ip/prim/dv/prim_lfsr/data/prim_lfsr_cover_assert.cfg",
"{proj_root}/hw/ip/prim/dv/prim_lfsr/data/prim_lfsr_cov_excl.el"]
vcs_cov_excl_files: ["{tool_srcs_dir}/prim_lfsr_cov_excl.el"]
// Add PRIM_LFSR specific exclusion files.
vcs_cov_excl_files: ["{proj_root}/hw/ip/prim/dv/prim_lfsr/data/prim_lfsr_cov_excl.el"]
build_modes: [
{
@ -44,8 +39,8 @@
// dw_8 is only used for "smoke" sims, so coverage collection is not needed.
prim_lfsr_dw_8_vcs_cov_cfg_file: ""
prim_lfsr_dw_24_vcs_cov_cfg_file: "-cm_hier {tool_srcs_dir}/prim_lfsr_cover.cfg"
vcs_cov_assert_cfg_file: "-cm_assert_hier {tool_srcs_dir}/prim_lfsr_cover_assert.cfg"
prim_lfsr_dw_24_vcs_cov_cfg_file: "-cm_hier {proj_root}/hw/ip/prim/dv/prim_lfsr/data/prim_lfsr_cover.cfg"
vcs_cov_assert_cfg_file: "-cm_assert_hier {proj_root}/hw/ip/prim/dv/prim_lfsr/data/prim_lfsr_cover_assert.cfg"
prim_lfsr_dw_8_xcelium_cov_cfg_file: ""
prim_lfsr_dw_24_xcelium_cov_cfg_file: ""


@ -26,13 +26,10 @@
// Default iterations for all tests - each test entry can override this.
reseed: 50
// Add these to tool_srcs so that they get copied over.
tool_srcs: ["{proj_root}/hw/ip/prim/dv/prim_present/data/prim_present_cover.cfg"]
overrides: [
{
name: vcs_cov_cfg_file
value: "-cm_hier {tool_srcs_dir}/prim_present_cover.cfg"
value: "-cm_hier {proj_root}/hw/ip/prim/dv/prim_present/data/prim_present_cover.cfg"
}
]


@ -23,12 +23,10 @@
// Default iterations for all tests - each test entry can override this.
reseed: 50
tool_srcs: ["{proj_root}/hw/ip/prim/dv/prim_prince/data/prim_prince_cover.cfg"]
overrides: [
{
name: vcs_cov_cfg_file
value: "-cm_hier {tool_srcs_dir}/prim_prince_cover.cfg"
value: "-cm_hier {proj_root}/hw/ip/prim/dv/prim_prince/data/prim_prince_cover.cfg"
}
]


@ -0,0 +1,8 @@
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
#
# waiver file for prim_buf
waive -rules {STAR_PORT_CONN_USE} -location {prim_buf.sv} -regexp {.*wild card port connection encountered on instance.*} \
-comment "Generated prims may have wildcard connections."


@ -2,15 +2,14 @@
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
#include "Vprim_sync_reqack_tb.h"
#include "verilated_toplevel.h"
#include "verilator_sim_ctrl.h"
#include <functional>
#include <iostream>
#include <signal.h>
#include "Vprim_sync_reqack_tb.h"
#include "sim_ctrl_extension.h"
#include "verilated_toplevel.h"
#include "verilator_sim_ctrl.h"
class PrimSyncReqAckTB : public SimCtrlExtension {
using SimCtrlExtension::SimCtrlExtension;
@ -56,7 +55,7 @@ int main(int argc, char **argv) {
<< std::endl;
// Get pass / fail from Verilator
ret_code = simctrl.Exec(argc, argv);
ret_code = simctrl.Exec(argc, argv).first;
return ret_code;
}


@ -17,6 +17,8 @@ module prim_sync_reqack_tb #(
localparam int unsigned NumTransactions = 8;
localparam logic FastToSlow = 1'b1; // Select 1'b0 for SlowToFast
localparam int unsigned Ratio = 4; // must be even and greater than or equal to 2
localparam bit DataSrc2Dst = 1'b1; // Select 1'b0 for Dst2Src
localparam bit DataReg = 1'b0; // Select 1'b1 if data flows from Dst2Src
// Derivation of parameters
localparam int unsigned Ticks = Ratio/2;
@ -55,7 +57,12 @@ module prim_sync_reqack_tb #(
logic rst_done;
// Instantiate DUT
prim_sync_reqack prim_sync_reqack (
logic [WidthTrans-1:0] out_data, unused_out_data;
prim_sync_reqack_data #(
.Width ( WidthTrans ),
.DataSrc2Dst ( DataSrc2Dst ),
.DataReg ( DataReg )
) u_prim_sync_reqack_data (
.clk_src_i (clk_src),
.rst_src_ni (rst_slow_n),
.clk_dst_i (clk_dst),
@ -64,8 +71,12 @@ module prim_sync_reqack_tb #(
.src_req_i (src_req),
.src_ack_o (src_ack),
.dst_req_o (dst_req),
.dst_ack_i (dst_ack)
.dst_ack_i (dst_ack),
.data_i (dst_count_q),
.data_o (out_data)
);
assign unused_out_data = out_data;
// Make sure we do not apply stimuli before the reset.
always_ff @(posedge clk_slow or negedge rst_slow_n) begin


@ -15,6 +15,7 @@ filesets:
- lowrisc:prim:pad_wrapper
- lowrisc:prim:prim_pkg
- lowrisc:prim:clock_mux2
- lowrisc:prim:buf
- lowrisc:prim:flop
- lowrisc:prim:flop_2sync
files:
@ -33,6 +34,7 @@ filesets:
- rtl/prim_fifo_sync.sv
- rtl/prim_slicer.sv
- rtl/prim_sync_reqack.sv
- rtl/prim_sync_reqack_data.sv
- rtl/prim_keccak.sv
- rtl/prim_packer.sv
- rtl/prim_packer_fifo.sv

vendor/lowrisc_ip/ip/prim/prim_buf.core vendored Normal file

@ -0,0 +1,50 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:prim:buf"
description: "Generic buffer"
filesets:
primgen_dep:
depend:
- lowrisc:prim:prim_pkg
- lowrisc:prim:primgen
files_verilator_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
file_type: vlt
files_ascentlint_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
- lint/prim_buf.waiver
file_type: waiver
files_veriblelint_waiver:
depend:
# common waivers
- lowrisc:lint:common
- lowrisc:lint:comportable
generate:
impl:
generator: primgen
parameters:
prim_name: buf
targets:
default:
filesets:
- tool_verilator ? (files_verilator_waiver)
- tool_ascentlint ? (files_ascentlint_waiver)
- tool_veriblelint ? (files_veriblelint_waiver)
- primgen_dep
generate:
- impl


@ -0,0 +1,21 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:prim:edn_req:0.1"
description: "EDN synchronization and word packing IP."
filesets:
files_rtl:
depend:
- lowrisc:prim:all
- lowrisc:prim:assert
- lowrisc:ip:edn_pkg
files:
- rtl/prim_edn_req.sv
file_type: systemVerilogSource
targets:
default:
filesets:
- files_rtl


@ -0,0 +1,56 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:prim:lc_sender:0.1"
description: "Sender primitive for life cycle control signals."
filesets:
files_rtl:
depend:
- lowrisc:prim:assert
- lowrisc:prim:flop
- lowrisc:ip:lc_ctrl_pkg
files:
- rtl/prim_lc_sender.sv
file_type: systemVerilogSource
files_verilator_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
file_type: vlt
files_ascentlint_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
# - lint/prim_lc_sender.waiver
file_type: waiver
files_veriblelint_waiver:
depend:
# common waivers
- lowrisc:lint:common
- lowrisc:lint:comportable
targets:
default: &default_target
filesets:
- tool_verilator ? (files_verilator_waiver)
- tool_ascentlint ? (files_ascentlint_waiver)
- tool_veriblelint ? (files_veriblelint_waiver)
- files_rtl
lint:
<<: *default_target
default_tool: verilator
parameters:
- SYNTHESIS=true
tools:
verilator:
mode: lint-only
verilator_options:
- "-Wall"


@ -10,7 +10,7 @@ filesets:
depend:
- lowrisc:prim:assert
- lowrisc:prim:flop_2sync
- lowrisc:prim:clock_buf
- lowrisc:prim:buf
- lowrisc:ip:lc_ctrl_pkg
files:
- rtl/prim_lc_sync.sv
@ -51,10 +51,6 @@ targets:
parameters:
- SYNTHESIS=true
tools:
ascentlint:
ascentlint_options:
- "-wait_license"
- "-stop_on_error"
verilator:
mode: lint-only
verilator_options:


@ -48,10 +48,6 @@ targets:
parameters:
- SYNTHESIS=true
tools:
ascentlint:
ascentlint_options:
- "-wait_license"
- "-stop_on_error"
verilator:
mode: lint-only
verilator_options:


@ -0,0 +1,51 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:prim:otp_pkg:0.1"
description: "Package with common interface definitions for OTP primitives."
filesets:
files_rtl:
files:
- rtl/prim_otp_pkg.sv
file_type: systemVerilogSource
files_verilator_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
file_type: vlt
files_ascentlint_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
file_type: waiver
files_veriblelint_waiver:
depend:
# common waivers
- lowrisc:lint:common
- lowrisc:lint:comportable
targets:
default: &default_target
filesets:
- tool_verilator ? (files_verilator_waiver)
- tool_ascentlint ? (files_ascentlint_waiver)
- tool_veriblelint ? (files_veriblelint_waiver)
- files_rtl
lint:
<<: *default_target
default_tool: verilator
parameters:
- SYNTHESIS=true
tools:
verilator:
mode: lint-only
verilator_options:
- "-Wall"


@ -3,6 +3,7 @@
// SPDX-License-Identifier: Apache-2.0
package prim_alert_pkg;
typedef struct packed {
logic alert_p;
logic alert_n;
@ -14,4 +15,13 @@ package prim_alert_pkg;
logic ack_p;
logic ack_n;
} alert_rx_t;
endpackage
parameter alert_tx_t ALERT_TX_DEFAULT = '{alert_p: 1'b0,
alert_n: 1'b1};
parameter alert_rx_t ALERT_RX_DEFAULT = '{ping_p: 1'b0,
ping_n: 1'b1,
ack_p: 1'b0,
ack_n: 1'b1};
endpackage : prim_alert_pkg


@ -63,13 +63,13 @@ module prim_alert_receiver
) i_decode_alert (
.clk_i,
.rst_ni,
.diff_pi ( alert_tx_i.alert_p ),
.diff_ni ( alert_tx_i.alert_n ),
.level_o ( alert_level ),
.rise_o ( ),
.fall_o ( ),
.event_o ( ),
.sigint_o ( alert_sigint )
.diff_pi ( alert_tx_i.alert_p ),
.diff_ni ( alert_tx_i.alert_n ),
.level_o ( alert_level ),
.rise_o ( ),
.fall_o ( ),
.event_o ( ),
.sigint_o ( alert_sigint )
);
/////////////////////////////////////////////////////
@ -78,7 +78,8 @@ module prim_alert_receiver
typedef enum logic [1:0] {Idle, HsAckWait, Pause0, Pause1} state_e;
state_e state_d, state_q;
logic ping_rise;
logic ping_tog_d, ping_tog_q, ack_d, ack_q;
logic ping_tog, ping_tog_dp, ping_tog_qp, ping_tog_dn, ping_tog_qn;
logic ack, ack_dp, ack_qp, ack_dn, ack_qn;
logic ping_req_d, ping_req_q;
logic ping_pending_d, ping_pending_q;
@ -86,7 +87,25 @@ module prim_alert_receiver
// signalling is performed by a level change event on the diff output
assign ping_req_d = ping_req_i;
assign ping_rise = ping_req_i && !ping_req_q;
assign ping_tog_d = (ping_rise) ? ~ping_tog_q : ping_tog_q;
assign ping_tog = (ping_rise) ? ~ping_tog_qp : ping_tog_qp;
// This prevents further tool optimizations of the differential signal.
prim_buf u_prim_buf_ack_p (
.in_i(ack),
.out_o(ack_dp)
);
prim_buf u_prim_buf_ack_n (
.in_i(~ack),
.out_o(ack_dn)
);
prim_buf u_prim_buf_ping_p (
.in_i(ping_tog),
.out_o(ping_tog_dp)
);
prim_buf u_prim_buf_ping_n (
.in_i(~ping_tog),
.out_o(ping_tog_dn)
);
// the ping pending signal is used in the FSM to distinguish whether the
// incoming handshake shall be treated as an alert or a ping response.
@ -96,10 +115,11 @@ module prim_alert_receiver
assign ping_pending_d = ping_rise | ((~ping_ok_o) & ping_req_i & ping_pending_q);
// diff pair outputs
assign alert_rx_o.ack_p = ack_q;
assign alert_rx_o.ack_n = ~ack_q;
assign alert_rx_o.ping_p = ping_tog_q;
assign alert_rx_o.ping_n = ~ping_tog_q;
assign alert_rx_o.ack_p = ack_qp;
assign alert_rx_o.ack_n = ack_qn;
assign alert_rx_o.ping_p = ping_tog_qp;
assign alert_rx_o.ping_n = ping_tog_qn;
// this FSM receives the four phase handshakes from the alert receiver
// note that the latency of the alert_p/n input diff pair is at least one
@ -108,7 +128,7 @@ module prim_alert_receiver
always_comb begin : p_fsm
// default
state_d = state_q;
ack_d = 1'b0;
ack = 1'b0;
ping_ok_o = 1'b0;
integ_fail_o = 1'b0;
alert_o = 1'b0;
@ -118,7 +138,7 @@ module prim_alert_receiver
// wait for handshake to be initiated
if (alert_level) begin
state_d = HsAckWait;
ack_d = 1'b1;
ack = 1'b1;
// signal either an alert or ping received on the output
if (ping_pending_q) begin
ping_ok_o = 1'b1;
@ -132,7 +152,7 @@ module prim_alert_receiver
if (!alert_level) begin
state_d = Pause0;
end else begin
ack_d = 1'b1;
ack = 1'b1;
end
end
// pause cycles between back-to-back handshakes
@ -144,7 +164,7 @@ module prim_alert_receiver
// override in case of sigint
if (alert_sigint) begin
state_d = Idle;
ack_d = 1'b0;
ack = 1'b0;
ping_ok_o = 1'b0;
integ_fail_o = 1'b1;
alert_o = 1'b0;
@ -154,14 +174,18 @@ module prim_alert_receiver
always_ff @(posedge clk_i or negedge rst_ni) begin : p_reg
if (!rst_ni) begin
state_q <= Idle;
ack_q <= 1'b0;
ping_tog_q <= 1'b0;
ack_qp <= 1'b0;
ack_qn <= 1'b1;
ping_tog_qp <= 1'b0;
ping_tog_qn <= 1'b1;
ping_req_q <= 1'b0;
ping_pending_q <= 1'b0;
end else begin
state_q <= state_d;
ack_q <= ack_d;
ping_tog_q <= ping_tog_d;
ack_qp <= ack_dp;
ack_qn <= ack_dn;
ping_tog_qp <= ping_tog_dp;
ping_tog_qn <= ping_tog_dn;
ping_req_q <= ping_req_d;
ping_pending_q <= ping_pending_d;
end


@ -58,13 +58,13 @@ module prim_alert_sender
) i_decode_ping (
.clk_i,
.rst_ni,
.diff_pi ( alert_rx_i.ping_p ),
.diff_ni ( alert_rx_i.ping_n ),
.level_o ( ),
.rise_o ( ),
.fall_o ( ),
.event_o ( ping_event ),
.sigint_o ( ping_sigint )
.diff_pi ( alert_rx_i.ping_p ),
.diff_ni ( alert_rx_i.ping_n ),
.level_o ( ),
.rise_o ( ),
.fall_o ( ),
.event_o ( ping_event ),
.sigint_o ( ping_sigint )
);
logic ack_sigint, ack_level;
@ -74,13 +74,13 @@ module prim_alert_sender
) i_decode_ack (
.clk_i,
.rst_ni,
.diff_pi ( alert_rx_i.ack_p ),
.diff_ni ( alert_rx_i.ack_n ),
.level_o ( ack_level ),
.rise_o ( ),
.fall_o ( ),
.event_o ( ),
.sigint_o ( ack_sigint )
.diff_pi ( alert_rx_i.ack_p ),
.diff_ni ( alert_rx_i.ack_n ),
.level_o ( ack_level ),
.rise_o ( ),
.fall_o ( ),
.event_o ( ),
.sigint_o ( ack_sigint )
);
@ -98,11 +98,12 @@ module prim_alert_sender
Pause1
} state_e;
state_e state_d, state_q;
logic alert_pq, alert_nq, alert_pd, alert_nd;
logic alert_p, alert_n, alert_pq, alert_nq, alert_pd, alert_nd;
logic sigint_detected;
assign sigint_detected = ack_sigint | ping_sigint;
// diff pair output
assign alert_tx_o.alert_p = alert_pq;
assign alert_tx_o.alert_n = alert_nq;
@ -127,8 +128,8 @@ module prim_alert_sender
always_comb begin : p_fsm
// default
state_d = state_q;
alert_pd = 1'b0;
alert_nd = 1'b1;
alert_p = 1'b0;
alert_n = 1'b1;
ping_clr = 1'b0;
alert_clr = 1'b0;
@ -137,8 +138,8 @@ module prim_alert_sender
// alert always takes precedence
if (alert_req_i || alert_set_q || ping_event || ping_set_q) begin
state_d = (alert_req_i || alert_set_q) ? AlertHsPhase1 : PingHsPhase1;
alert_pd = 1'b1;
alert_nd = 1'b0;
alert_p = 1'b1;
alert_n = 1'b0;
end
end
// waiting for ack from receiver
@ -146,8 +147,8 @@ module prim_alert_sender
if (ack_level) begin
state_d = AlertHsPhase2;
end else begin
alert_pd = 1'b1;
alert_nd = 1'b0;
alert_p = 1'b1;
alert_n = 1'b0;
end
end
// wait for deassertion of ack
@ -162,8 +163,8 @@ module prim_alert_sender
if (ack_level) begin
state_d = PingHsPhase2;
end else begin
alert_pd = 1'b1;
alert_nd = 1'b0;
alert_p = 1'b1;
alert_n = 1'b0;
end
end
// wait for deassertion of ack
@ -192,8 +193,8 @@ module prim_alert_sender
state_d = Idle;
if (sigint_detected) begin
state_d = SigInt;
alert_pd = ~alert_pq;
alert_nd = ~alert_pq;
alert_p = ~alert_pq;
alert_n = ~alert_pq;
end
end
// catch parasitic states
@ -202,13 +203,23 @@ module prim_alert_sender
// bail out if a signal integrity issue has been detected
if (sigint_detected && (state_q != SigInt)) begin
state_d = SigInt;
alert_pd = 1'b0;
alert_nd = 1'b0;
alert_p = 1'b0;
alert_n = 1'b0;
ping_clr = 1'b0;
alert_clr = 1'b0;
end
end
// This prevents further tool optimizations of the differential signal.
prim_buf u_prim_buf_p (
.in_i(alert_p),
.out_o(alert_pd)
);
prim_buf u_prim_buf_n (
.in_i(alert_n),
.out_o(alert_nd)
);
always_ff @(posedge clk_i or negedge rst_ni) begin : p_reg
if (!rst_ni) begin
state_q <= Idle;


@ -118,7 +118,7 @@ module prim_arbiter_ppc #(
always_comb begin
idx_o = '0;
for (int i = 0 ; i < N ; i++) begin
for (int unsigned i = 0 ; i < N ; i++) begin
if (winner[i]) begin
idx_o = i[IdxW-1:0];
end
@ -222,4 +222,3 @@ end
`endif
endmodule : prim_arbiter_ppc


@ -0,0 +1,93 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
//
// This module can be used as a "gadget" to adapt the native 32bit width of the EDN network
// locally to the width needed by the consuming logic. For example, if the local consumer
// needs 128bit, this module would request four 32 bit words from EDN and stack them accordingly.
//
// The module also uses a req/ack synchronizer to synchronize the EDN data over to the local
// clock domain. Note that this assumes that the EDN data bus remains stable between subsequent
// requests.
//
`include "prim_assert.sv"
module prim_edn_req
import prim_alert_pkg::*;
#(
parameter int OutWidth = 32
) (
// Design side
input clk_i,
input rst_ni,
input req_i,
output logic ack_o,
output logic [OutWidth-1:0] data_o,
output logic fips_o,
// EDN side
input clk_edn_i,
input rst_edn_ni,
output edn_pkg::edn_req_t edn_o,
input edn_pkg::edn_rsp_t edn_i
);
// Stop requesting words from EDN once desired amount of data is available.
logic word_req, word_ack;
assign word_req = req_i & ~ack_o;
logic [edn_pkg::ENDPOINT_BUS_WIDTH-1:0] word_data;
logic word_fips;
prim_sync_reqack_data #(
.Width(edn_pkg::ENDPOINT_BUS_WIDTH),
.DataSrc2Dst(1'b0),
.DataReg(1'b0)
) u_prim_sync_reqack_data (
.clk_src_i ( clk_i ),
.rst_src_ni ( rst_ni ),
.clk_dst_i ( clk_edn_i ),
.rst_dst_ni ( rst_edn_ni ),
.src_req_i ( word_req ),
.src_ack_o ( word_ack ),
.dst_req_o ( edn_o.edn_req ),
.dst_ack_i ( edn_i.edn_ack ),
.data_i ( {edn_i.edn_fips, edn_i.edn_bus} ),
.data_o ( {word_fips, word_data} )
);
prim_packer_fifo #(
.InW(edn_pkg::ENDPOINT_BUS_WIDTH),
.OutW(OutWidth)
) u_prim_packer_fifo (
.clk_i,
.rst_ni,
.clr_i ( 1'b0 ), // not needed
.wvalid_i ( word_ack ),
.wdata_i ( word_data ),
// no need for backpressure since we're always ready to
// sink data at this point.
.wready_o ( ),
.rvalid_o ( ack_o ),
.rdata_o ( data_o ),
// we're always ready to receive the packed output word
// at this point.
.rready_i ( 1'b1 ),
.depth_o ( )
);
// Need to track if any of the packed words has been generated with a pre-FIPS seed, i.e., has
// fips == 1'b0.
logic fips_d, fips_q;
assign fips_d = (req_i && ack_o) ? 1'b1 : // clear
(word_ack) ? fips_q & word_fips : // accumulate
fips_q; // keep
always_ff @(posedge clk_i or negedge rst_ni) begin
if (!rst_ni) begin
fips_q <= 1'b1;
end else begin
fips_q <= fips_d;
end
end
assign fips_o = fips_q;
endmodule : prim_edn_req


@ -3,6 +3,7 @@
// SPDX-License-Identifier: Apache-2.0
package prim_esc_pkg;
typedef struct packed {
logic esc_p;
logic esc_n;
@ -12,4 +13,11 @@ package prim_esc_pkg;
logic resp_p;
logic resp_n;
} esc_rx_t;
endpackage
parameter esc_tx_t ESC_TX_DEFAULT = '{esc_p: 1'b0,
esc_n: 1'b1};
parameter esc_rx_t ESC_RX_DEFAULT = '{resp_p: 1'b0,
resp_n: 1'b1};
endpackage : prim_esc_pkg


@ -57,17 +57,27 @@ module prim_esc_receiver
typedef enum logic [2:0] {Idle, Check, PingResp, EscResp, SigInt} state_e;
state_e state_d, state_q;
logic resp_pd, resp_pq, resp_nd, resp_nq;
logic resp_p, resp_pd, resp_pq;
logic resp_n, resp_nd, resp_nq;
// This prevents further tool optimizations of the differential signal.
prim_buf u_prim_buf_p (
.in_i(resp_p),
.out_o(resp_pd)
);
prim_buf u_prim_buf_n (
.in_i(resp_n),
.out_o(resp_nd)
);
assign esc_rx_o.resp_p = resp_pq;
assign esc_rx_o.resp_n = resp_nq;
always_comb begin : p_fsm
// default
state_d = state_q;
resp_pd = 1'b0;
resp_nd = 1'b1;
resp_p = 1'b0;
resp_n = 1'b1;
esc_en_o = 1'b0;
unique case (state_q)
@ -75,8 +85,8 @@ module prim_esc_receiver
Idle: begin
if (esc_level) begin
state_d = Check;
resp_pd = 1'b1;
resp_nd = 1'b0;
resp_p = 1'b1;
resp_n = 1'b0;
end
end
// we decide here whether this is only a ping request or
@ -92,8 +102,8 @@ module prim_esc_receiver
// we got an escalation signal (pings cannot occur back to back)
PingResp: begin
state_d = Idle;
resp_pd = 1'b1;
resp_nd = 1'b0;
resp_p = 1'b1;
resp_n = 1'b0;
if (esc_level) begin
state_d = EscResp;
esc_en_o = 1'b1;
@ -105,8 +115,8 @@ module prim_esc_receiver
state_d = Idle;
if (esc_level) begin
state_d = EscResp;
resp_pd = ~resp_pq;
resp_nd = resp_pq;
resp_p = ~resp_pq;
resp_n = resp_pq;
esc_en_o = 1'b1;
end
end
@ -119,8 +129,8 @@ module prim_esc_receiver
state_d = Idle;
if (sigint_detected) begin
state_d = SigInt;
resp_pd = ~resp_pq;
resp_nd = ~resp_pq;
resp_p = ~resp_pq;
resp_n = ~resp_pq;
end
end
default : state_d = Idle;
@ -129,8 +139,8 @@ module prim_esc_receiver
// bail out if a signal integrity issue has been detected
if (sigint_detected && (state_q != SigInt)) begin
state_d = SigInt;
resp_pd = 1'b0;
resp_nd = 1'b0;
resp_p = 1'b0;
resp_n = 1'b0;
end
end


@ -71,8 +71,18 @@ module prim_esc_sender
// ping enable is 1 cycle pulse
// escalation pulse is always longer than 2 cycles
assign esc_tx_o.esc_p = esc_req_i | esc_req_q | (ping_req_d & ~ping_req_q);
assign esc_tx_o.esc_n = ~esc_tx_o.esc_p;
logic esc_p;
assign esc_p = esc_req_i | esc_req_q | (ping_req_d & ~ping_req_q);
// This prevents further tool optimizations of the differential signal.
prim_buf u_prim_buf_p (
.in_i(esc_p),
.out_o(esc_tx_o.esc_p)
);
prim_buf u_prim_buf_n (
.in_i(~esc_p),
.out_o(esc_tx_o.esc_n)
);
//////////////
// RX Logic //


@ -0,0 +1,35 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
//
// Multibit life cycle signal sender module.
//
// This module instantiates a hand-picked flop cell
// for each bit in the life cycle control signal such that tools do not
// optimize the multibit encoding.
`include "prim_assert.sv"
module prim_lc_sender (
input clk_i,
input rst_ni,
input lc_ctrl_pkg::lc_tx_t lc_en_i,
output lc_ctrl_pkg::lc_tx_t lc_en_o
);
logic [lc_ctrl_pkg::TxWidth-1:0] lc_en, lc_en_out;
assign lc_en = lc_ctrl_pkg::TxWidth'(lc_en_i);
prim_generic_flop #(
.Width(lc_ctrl_pkg::TxWidth),
.ResetValue(lc_ctrl_pkg::TxWidth'(lc_ctrl_pkg::Off))
) u_prim_generic_flop (
.clk_i,
.rst_ni,
.d_i ( lc_en ),
.q_o ( lc_en_out )
);
assign lc_en_o = lc_ctrl_pkg::lc_tx_t'(lc_en_out);
endmodule : prim_lc_sender


@ -36,23 +36,43 @@ module prim_lc_sync #(
.q_o(lc_en)
);
logic [NumCopies-1:0][lc_ctrl_pkg::TxWidth-1:0] lc_en_copies;
for (genvar j = 0; j < NumCopies; j++) begin : gen_buffs
logic [lc_ctrl_pkg::TxWidth-1:0] lc_en_out;
for (genvar k = 0; k < lc_ctrl_pkg::TxWidth; k++) begin : gen_bits
// TODO: replace this with a normal buffer primitive, once available.
prim_clock_buf u_prim_clock_buf (
.clk_i(lc_en[k]),
.clk_o(lc_en_copies[j][k])
prim_buf u_prim_buf (
.in_i(lc_en[k]),
.out_o(lc_en_out[k])
);
end
assign lc_en_o[j] = lc_ctrl_pkg::lc_tx_t'(lc_en_out);
end
assign lc_en_o = lc_en_copies;
////////////////
// Assertions //
////////////////
// TODO: add more assertions
// The outputs should be known at all times.
`ASSERT_KNOWN(OutputsKnown_A, lc_en_o)
// If the multibit signal is in a transient state, we expect it
// to be stable again within one clock cycle.
`ASSERT(CheckTransients_A,
!(lc_en_i inside {lc_ctrl_pkg::On, lc_ctrl_pkg::Off})
|=>
(lc_en_i inside {lc_ctrl_pkg::On, lc_ctrl_pkg::Off}))
// If a signal departs from passive state, we expect it to move to the active state
// with only one transient cycle in between.
`ASSERT(CheckTransients0_A,
$past(lc_en_i == lc_ctrl_pkg::Off) &&
!(lc_en_i inside {lc_ctrl_pkg::On, lc_ctrl_pkg::Off})
|=>
(lc_en_i == lc_ctrl_pkg::On))
`ASSERT(CheckTransients1_A,
$past(lc_en_i == lc_ctrl_pkg::On) &&
!(lc_en_i inside {lc_ctrl_pkg::On, lc_ctrl_pkg::Off})
|=>
(lc_en_i == lc_ctrl_pkg::Off))
endmodule : prim_lc_sync


@ -6,7 +6,7 @@
//
// This module is only meant to be used in special cases where a handshake synchronizer
// is not viable (this is for instance the case for the multibit life cycle signals).
// For handshake-based synchronization, consider using prim_sync_reqack.
// For handshake-based synchronization, consider using prim_sync_reqack_data.
//
//
// Description:

View file

@ -0,0 +1,26 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
// Common interface definitions for OTP primitives.
package prim_otp_pkg;
parameter int CmdWidth = 2;
parameter int ErrWidth = 3;
typedef enum logic [CmdWidth-1:0] {
Read = 2'b00,
Write = 2'b01,
Init = 2'b11
} cmd_e;
typedef enum logic [ErrWidth-1:0] {
NoError = 3'h0,
MacroError = 3'h1,
MacroEccCorrError = 3'h2,
MacroEccUncorrError = 3'h3,
MacroWriteBlankError = 3'h4
} err_e;
endpackage : prim_otp_pkg
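Host-side tooling that talks to an OTP model could mirror these command and error encodings; a minimal sketch (the Python class and member names are my own, only the numeric values come from the package):

```python
from enum import IntEnum

# Mirrors the prim_otp_pkg encodings (CmdWidth = 2, ErrWidth = 3).
class OtpCmd(IntEnum):
    READ  = 0b00
    WRITE = 0b01
    INIT  = 0b11

class OtpErr(IntEnum):
    NO_ERROR                = 0
    MACRO_ERROR             = 1
    MACRO_ECC_CORR_ERROR    = 2
    MACRO_ECC_UNCORR_ERROR  = 3
    MACRO_WRITE_BLANK_ERROR = 4
```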

View file

@ -8,7 +8,8 @@
module prim_packer #(
parameter int InW = 32,
parameter int OutW = 32
parameter int OutW = 32,
parameter int HintByteData = 0 // If 1, the input/output data has byte granularity
) (
input clk_i ,
input rst_ni,
@ -275,4 +276,22 @@ module prim_packer #(
|=> ($past(mask_i) >>
($past(lod_idx)+OutW-$countones($past(stored_mask))))
== stored_mask)
// Assertions for byte hint enabled
if (HintByteData != 0) begin : g_byte_assert
`ASSERT_INIT(InputDividedBy8_A, InW % 8 == 0)
`ASSERT_INIT(OutputDividedBy8_A, OutW % 8 == 0)
// mask[8*i +: 8] should be all zeros or all ones
for (genvar i = 0 ; i < InW/8 ; i++) begin : g_byte_input_masking
`ASSERT(InputMaskContiguous_A,
valid_i |-> (|mask_i[8*i+:8] == 1'b 0)
|| (&mask_i[8*i+:8] == 1'b 1))
end
for (genvar i = 0 ; i < OutW/8 ; i++) begin : g_byte_output_masking
`ASSERT(OutputMaskContiguous_A,
valid_o |-> (|mask_o[8*i+:8] == 1'b 0)
|| (&mask_o[8*i+:8] == 1'b 1))
end
end
endmodule
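The byte-granularity condition the new assertions enforce, namely that every 8-bit lane of the mask is either all zeros or all ones, can be checked in software as a small sketch:

```python
def mask_is_byte_granular(mask, width_bits):
    """True if every 8-bit lane of mask is all zeros or all ones,
    mirroring the InputMaskContiguous_A / OutputMaskContiguous_A checks."""
    assert width_bits % 8 == 0  # mirrors InputDividedBy8_A / OutputDividedBy8_A
    for i in range(width_bits // 8):
        lane = (mask >> (8 * i)) & 0xFF
        if lane not in (0x00, 0xFF):
            return False
    return True
```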

View file

@ -5,8 +5,8 @@
// Single-Port SRAM Wrapper
//
// Supported configurations:
// - ECC for 32b wide memories with no write mask
// (Width == 32 && DataBitsPerMask == 32).
// - ECC for 32b and 64b wide memories with no write mask
// (Width == 32 or Width == 64, DataBitsPerMask is ignored).
// - Byte parity if Width is a multiple of 8 bit and write masks have Byte
// granularity (DataBitsPerMask == 8).
//
@ -51,11 +51,6 @@ module prim_ram_1p_adv #(
`ASSERT_INIT(CannotHaveEccAndParity_A, !(EnableParity && EnableECC))
// While we require DataBitsPerMask to be per Byte (8) at the interface in case Byte parity is
// enabled, we need to switch this to a per-bit mask locally such that we can individually enable
// the parity bits to be written alongside the data.
localparam int LocalDataBitsPerMask = (EnableParity) ? 1 : DataBitsPerMask;
// Calculate ECC width
localparam int ParWidth = (EnableParity) ? Width/8 :
(!EnableECC) ? 0 :
@ -66,6 +61,13 @@ module prim_ram_1p_adv #(
(Width <= 120) ? 8 : 8 ;
localparam int TotalWidth = Width + ParWidth;
// If byte parity is enabled, the write enable bits are used to write memory columns
// with 8 + 1 = 9 bit width (data plus corresponding parity bit).
// If ECC is enabled, the DataBitsPerMask is ignored.
localparam int LocalDataBitsPerMask = (EnableParity) ? 9 :
(EnableECC) ? TotalWidth :
DataBitsPerMask;
////////////////////////////
// RAM Primitive Instance //
////////////////////////////
@ -75,7 +77,7 @@ module prim_ram_1p_adv #(
logic [Aw-1:0] addr_q, addr_d ;
logic [TotalWidth-1:0] wdata_q, wdata_d ;
logic [TotalWidth-1:0] wmask_q, wmask_d ;
logic rvalid_q, rvalid_d, rvalid_sram ;
logic rvalid_q, rvalid_d, rvalid_sram_q ;
logic [Width-1:0] rdata_q, rdata_d ;
logic [TotalWidth-1:0] rdata_sram ;
logic [1:0] rerror_q, rerror_d ;
@ -99,9 +101,9 @@ module prim_ram_1p_adv #(
always_ff @(posedge clk_i or negedge rst_ni) begin
if (!rst_ni) begin
rvalid_sram <= 1'b0;
rvalid_sram_q <= 1'b0;
end else begin
rvalid_sram <= req_q & ~write_q;
rvalid_sram_q <= req_q & ~write_q;
end
end
@ -154,21 +156,21 @@ module prim_ram_1p_adv #(
always_comb begin : p_parity
rerror_d = '0;
wmask_d[0+:Width] = wmask_i;
wdata_d[0+:Width] = wdata_i;
for (int i = 0; i < Width/8; i ++) begin
// parity generation (odd parity)
wdata_d[Width + i] = ~(^wdata_i[i*8 +: 8]);
wmask_d[Width + i] = &wmask_i[i*8 +: 8];
// parity decoding (errors are always uncorrectable)
rerror_d[1] |= ~(^{rdata_sram[i*8 +: 8], rdata_sram[Width + i]});
end
// tie to zero if the read data is not valid
rerror_d &= {2{rvalid_sram}};
end
// Data mapping. We have to make 8+1 = 9 bit groups
// that have the same write enable such that FPGA tools
// can map this correctly to BRAM resources.
wmask_d[i*9 +: 8] = wmask_i[i*8 +: 8];
wdata_d[i*9 +: 8] = wdata_i[i*8 +: 8];
rdata_d[i*8 +: 8] = rdata_sram[i*9 +: 8];
assign rdata_d = rdata_sram[0+:Width];
// parity generation (odd parity)
wdata_d[i*9 + 8] = ~(^wdata_i[i*8 +: 8]);
wmask_d[i*9 + 8] = &wmask_i[i*8 +: 8];
// parity decoding (errors are always uncorrectable)
rerror_d[1] |= ~(^{rdata_sram[i*9 +: 8], rdata_sram[i*9 + 8]});
end
end
end else begin : gen_nosecded_noparity
assign wmask_d = wmask_i;
assign wdata_d = wdata_i;
@ -177,7 +179,7 @@ module prim_ram_1p_adv #(
assign rerror_d = '0;
end
assign rvalid_d = rvalid_sram;
assign rvalid_d = rvalid_sram_q;
/////////////////////////////////////
// Input/Output Pipeline Registers //
@ -218,13 +220,15 @@ module prim_ram_1p_adv #(
end else begin
rvalid_q <= rvalid_d;
rdata_q <= rdata_d;
rerror_q <= rerror_d;
// tie to zero if the read data is not valid
rerror_q <= rerror_d & {2{rvalid_d}};
end
end
end else begin : gen_dirconnect_output
assign rvalid_q = rvalid_d;
assign rdata_q = rdata_d;
assign rerror_q = rerror_d;
// tie to zero if the read data is not valid
assign rerror_q = rerror_d & {2{rvalid_d}};
end
endmodule : prim_ram_1p_adv
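The rearranged parity packing above stores each byte together with its odd parity bit as one 9-bit column, and flags an uncorrectable error when a read-back group has even population. A toy Python model of just that packing and check (not of the RTL pipeline):

```python
def pack_with_odd_parity(data_bytes):
    """Pack each byte into a 9-bit group: byte in bits [7:0], odd-parity
    bit in bit 8 (parity = ~(^byte), so each group has an odd popcount)."""
    groups = []
    for b in data_bytes:
        parity = (bin(b).count("1") + 1) % 2
        groups.append(b | (parity << 8))
    return groups

def check_odd_parity(groups):
    """Return (data_bytes, uncorrectable_error); an even popcount in any
    9-bit group marks the read as uncorrectably corrupted."""
    data, error = [], False
    for g in groups:
        if bin(g & 0x1FF).count("1") % 2 == 0:
            error = True
        data.append(g & 0xFF)
    return data, error
```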

View file

@ -10,9 +10,9 @@
//
// The currently implemented architecture uses a reduced-round PRINCE cipher primitive in CTR mode
// in order to (weakly) scramble the data written to the memory macro. Plain CTR mode does not
// diffuse the data since the keystream is just XOR'ed onto it, hence we also we perform Byte-wise
// diffuse the data since the keystream is just XOR'ed onto it, hence we also perform byte-wise
// diffusion using a (shallow) substitution/permutation network layer in order to provide a limited
// avalanche effect within a Byte.
// avalanche effect within a byte.
//
// In order to break the linear addressing space, the address is passed through a bijective
// scrambling function constructed using a (shallow) substitution/permutation and a nonce. Due to
@ -24,16 +24,15 @@
`include "prim_assert.sv"
module prim_ram_1p_scr #(
parameter int Depth = 512, // Needs to be a power of 2 if NumAddrScrRounds > 0.
parameter int Width = 256, // Needs to be Byte aligned for parity
parameter int DataBitsPerMask = 8, // Currently only 8 is supported
parameter int Depth = 16*1024, // Needs to be a power of 2 if NumAddrScrRounds > 0.
parameter int Width = 32, // Needs to be byte aligned for parity
parameter int CfgWidth = 8, // WTC, RTC, etc
// Scrambling parameters. Note that this needs to be low-latency, hence we have to keep the
// amount of cipher rounds low. PRINCE has 5 half rounds in its original form, which corresponds
// to 2*5 + 1 effective rounds. Setting this to 2 halves this to approximately 5 effective rounds.
parameter int NumPrinceRoundsHalf = 2, // Number of PRINCE half rounds, can be [1..5]
// Number of extra intra-Byte diffusion rounds. Setting this to 0 disables intra-Byte diffusion.
// Number of extra intra-byte diffusion rounds. Setting this to 0 disables intra-byte diffusion.
parameter int NumByteScrRounds = 2,
// Number of address scrambling rounds. Setting this to 0 disables address scrambling.
parameter int NumAddrScrRounds = 2,
@ -57,19 +56,22 @@ module prim_ram_1p_scr #(
input clk_i,
input rst_ni,
// Key interface. Memory requests will not be granted if key_valid is set to 0.
input key_valid_i,
input [DataKeyWidth-1:0] key_i,
input [NonceWidth-1:0] nonce_i,
// Interface to TL-UL SRAM adapter
input req_i,
output logic gnt_o,
input write_i,
input [AddrWidth-1:0] addr_i,
input [Width-1:0] wdata_i,
input [Width-1:0] wmask_i, // Needs to be Byte-aligned for parity
input [Width-1:0] wmask_i, // Needs to be byte-aligned for parity
output logic [Width-1:0] rdata_o,
output logic rvalid_o, // Read response (rdata_o) is valid
output logic [1:0] rerror_o, // Bit1: Uncorrectable, Bit0: Correctable
output logic [AddrWidth-1:0] raddr_o, // Read address for error reporting.
output logic [31:0] raddr_o, // Read address for error reporting.
// config
input [CfgWidth-1:0] cfg_i
@ -86,41 +88,40 @@ module prim_ram_1p_scr #(
// Pending Write and Address Registers //
/////////////////////////////////////////
// Read / write strobes
logic read_en, write_en;
assign read_en = req_i & ~write_i;
assign write_en = req_i & write_i;
// Writes are delayed by one cycle, such that the same keystream generation primitive (prim_prince) can
// be reused among reads and writes. Note however that with this arrangement, we have to introduce
// a mechanism to hold a pending write transaction in cases where that transaction is immediately
// followed by a read. The pending write transaction is written to memory as soon as there is no
// new read transaction incoming. The latter is a special case, and if that happens, we return the
// data from the write holding register.
logic macro_write;
logic write_pending_d, write_pending_q;
assign write_pending_d =
(write_en) ? 1'b1 : // Set new write request
(macro_write) ? 1'b0 : // Clear pending request when writing to memory
write_pending_q; // Keep pending write request alive
// new read transaction incoming. The latter can be a special case if the incoming read goes to
// the same address as the pending write. To that end, we detect the address collision and return
// the data from the write holding register.
logic collision_d, collision_q;
// Read / write strobes
logic read_en, write_en_d, write_en_q;
assign gnt_o = req_i & key_valid_i;
assign read_en = gnt_o & ~write_i;
assign write_en_d = gnt_o & write_i;
logic write_pending_q;
logic addr_collision_d, addr_collision_q;
logic [AddrWidth-1:0] waddr_q;
assign collision_d = read_en & write_pending_q & (addr_i == waddr_q);
assign addr_collision_d = read_en & (write_en_q | write_pending_q) & (addr_i == waddr_q);
// Macro requests and write strobe
logic macro_req;
assign macro_req = read_en | write_pending_q;
assign macro_req = read_en | write_en_q | write_pending_q;
// We are allowed to write a pending write transaction to the memory if there is no incoming read
assign macro_write = write_pending_q & ~read_en;
logic macro_write;
assign macro_write = (write_en_q | write_pending_q) & ~read_en;
// New read write collision
logic rw_collision;
assign rw_collision = write_en_q & read_en;
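The pending-write scheme described in the comments above (writes held for a cycle so reads keep priority, with collision forwarding from the holding register) can be illustrated with a toy Python model; this is a behavioral sketch only and ignores scrambling, keys, and the two-deep write buffering of the RTL:

```python
class DelayedWriteMem:
    """Toy model: writes are delayed one cycle; a pending write drains
    only on a cycle with no incoming read; a read to the pending write's
    address is served from the holding register (collision forwarding)."""
    def __init__(self):
        self.mem = {}
        self.pending = None  # (addr, data) waiting to be committed

    def step(self, op=None):
        """op is None (idle), ('r', addr) or ('w', addr, data).
        Returns read data for reads, else None."""
        result = None
        if op is not None and op[0] == 'r':
            addr = op[1]
            if self.pending and self.pending[0] == addr:
                result = self.pending[1]     # forward held write data
            else:
                result = self.mem.get(addr, 0)
        elif self.pending:                   # no read: pending write drains
            self.mem[self.pending[0]] = self.pending[1]
            self.pending = None
        if op is not None and op[0] == 'w':
            self.pending = (op[1], op[2])    # new write is held for a cycle
        return result
```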
////////////////////////
// Address Scrambling //
////////////////////////
// TODO: check whether this is good enough for our purposes, or whether we should go for something
// else. Also, we may want to input some secret key material into this function as well.
// We only select the pending write address in case there is no incoming read transaction.
logic [AddrWidth-1:0] addr_mux;
assign addr_mux = (read_en) ? addr_i : waddr_q;
@ -132,7 +133,7 @@ module prim_ram_1p_scr #(
.DataWidth ( AddrWidth ),
.NumRounds ( NumAddrScrRounds ),
.Decrypt ( 0 )
) i_prim_subst_perm (
) u_prim_subst_perm (
.data_i ( addr_mux ),
// Since the counter mode concatenates {nonce_i[NonceWidth-1-AddrWidth:0], addr_i} to form
// the IV, the upper AddrWidth bits of the nonce are not used and can be used for address
@ -146,9 +147,8 @@ module prim_ram_1p_scr #(
end
// We latch the non-scrambled address for error reporting.
logic [AddrWidth-1:0] raddr_d, raddr_q;
assign raddr_d = addr_mux;
assign raddr_o = raddr_q;
logic [AddrWidth-1:0] raddr_q;
assign raddr_o = 32'(raddr_q);
//////////////////////////////////////////////
// Keystream Generation for Data Scrambling //
@ -169,7 +169,7 @@ module prim_ram_1p_scr #(
) u_prim_prince (
.clk_i,
.rst_ni,
.valid_i ( req_i ),
.valid_i ( gnt_o ),
// The IV is composed of a nonce and the row address
.data_i ( {nonce_i[k * (64 - AddrWidth) +: (64 - AddrWidth)], addr_i} ),
// All parallel scramblers use the same key
@ -198,13 +198,13 @@ module prim_ram_1p_scr #(
/////////////////////
// Data scrambling is a two-step process. First, we XOR the write data with the keystream obtained
// by operating a reduced-round PRINCE cipher in CTR-mode. Then, we diffuse data within each Byte
// in order to get a limited "avalanche" behavior in case parts of the Bytes are flipped as a
// by operating a reduced-round PRINCE cipher in CTR-mode. Then, we diffuse data within each byte
// in order to get a limited "avalanche" behavior in case parts of the bytes are flipped as a
// result of a malicious attempt to tamper with the data in memory. We perform the diffusion only
// within Bytes in order to maintain the ability to write individual Bytes. Note that the
// within bytes in order to maintain the ability to write individual bytes. Note that the
// keystream XOR is performed first for the write path such that it can be performed last for the
// read path. This allows us to hide a part of the combinational delay of the PRINCE primitive
// behind the propagation delay of the SRAM macro and the per-Byte diffusion step.
// behind the propagation delay of the SRAM macro and the per-byte diffusion step.
// Write path. Note that since this does not fan out into the interconnect, the write path is not
// as critical as the read path below in terms of timing.
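The two-step scramble and its reversed ordering on the read path can be sketched with a toy model; the substitution table below is an arbitrary byte bijection standing in for prim_subst_perm, and the keystream stands in for the PRINCE CTR output:

```python
# Toy two-step scramble per byte: XOR a keystream byte, then diffuse via a
# bijective substitution. Descrambling reverses the order: undo the
# diffusion first and apply the keystream XOR last, matching the read path
# where the XOR is deferred to hide the PRINCE latency.
SBOX = [(17 * x + 5) & 0xFF for x in range(256)]  # arbitrary stand-in bijection
INV_SBOX = [0] * 256
for x, y in enumerate(SBOX):
    INV_SBOX[y] = x

def scramble(data, keystream):
    return [SBOX[b ^ k] for b, k in zip(data, keystream)]

def descramble(scrambled, keystream):
    return [INV_SBOX[b] ^ k for b, k in zip(scrambled, keystream)]
```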
@ -214,12 +214,12 @@ module prim_ram_1p_scr #(
logic [7:0] wdata_xor;
assign wdata_xor = wdata_q[k*8 +: 8] ^ keystream_repl[k*8 +: 8];
// Byte aligned diffusion using a substitution / permutation network
// byte aligned diffusion using a substitution / permutation network
prim_subst_perm #(
.DataWidth ( 8 ),
.NumRounds ( NumByteScrRounds ),
.Decrypt ( 0 )
) i_prim_subst_perm (
) u_prim_subst_perm (
.data_i ( wdata_xor ),
.key_i ( '0 ),
.data_o ( wdata_scr_d[k*8 +: 8] )
@ -228,7 +228,7 @@ module prim_ram_1p_scr #(
// Read path. This is timing critical. The keystream XOR operation is performed last in order to
// hide the combinational delay of the PRINCE primitive behind the propagation delay of the
// SRAM and the Byte diffusion.
// SRAM and the byte diffusion.
logic [Width-1:0] rdata_scr, rdata;
for (genvar k = 0; k < Width/8; k++) begin : gen_undiffuse_rdata
// Reverse diffusion first
@ -237,7 +237,7 @@ module prim_ram_1p_scr #(
.DataWidth ( 8 ),
.NumRounds ( NumByteScrRounds ),
.Decrypt ( 1 )
) i_prim_subst_perm (
) u_prim_subst_perm (
.data_i ( rdata_scr[k*8 +: 8] ),
.key_i ( '0 ),
.data_o ( rdata_xor )
@ -265,24 +265,28 @@ module prim_ram_1p_scr #(
// need an additional holding register that can buffer the scrambled data of the first write in
// cycle 1.
// Clear this if we can write the memory in this cycle, otherwise set if there is a pending write
logic write_scr_pending_d, write_scr_pending_q;
assign write_scr_pending_d = (macro_write) ? 1'b0 : write_pending_q;
// Clear this if we can write the memory in this cycle. Set only if the current write cannot
// proceed due to an incoming read operation.
logic write_scr_pending_d;
assign write_scr_pending_d = (macro_write) ? 1'b0 :
(rw_collision) ? 1'b1 :
write_pending_q;
// Select the correct scrambled word to be written, based on whether the word in the scrambled
// data holding register is valid or not. Note that the wdata_scr_q register could in theory be
// combined with the wdata_q register. We don't do that here for timing reasons, since that would
// require another read data mux to inject the scrambled data into the read descrambling path.
logic [Width-1:0] wdata_scr;
assign wdata_scr = (write_scr_pending_q) ? wdata_scr_q : wdata_scr_d;
assign wdata_scr = (write_pending_q) ? wdata_scr_q : wdata_scr_d;
// Output read valid strobe
logic rvalid_q;
assign rvalid_o = rvalid_q;
// In case of a collision, we forward the write data from the unscrambled holding register
assign rdata_o = (collision_q) ? wdata_q : // forward pending (unscrambled) write data
(rvalid_q) ? rdata : // regular reads
'0; // tie to zero otherwise
assign rdata_o = (addr_collision_q) ? wdata_q : // forward pending (unscrambled) write data
(rvalid_q) ? rdata : // regular reads
'0; // tie to zero otherwise
///////////////
// Registers //
@ -292,26 +296,29 @@ module prim_ram_1p_scr #(
always_ff @(posedge clk_i or negedge rst_ni) begin : p_wdata_buf
if (!rst_ni) begin
write_pending_q <= 1'b0;
write_scr_pending_q <= 1'b0;
collision_q <= 1'b0;
addr_collision_q <= 1'b0;
rvalid_q <= 1'b0;
write_en_q <= 1'b0;
raddr_q <= '0;
waddr_q <= '0;
wmask_q <= '0;
wdata_q <= '0;
wdata_scr_q <= '0;
wmask_q <= '0;
raddr_q <= '0;
end else begin
write_scr_pending_q <= write_scr_pending_d;
write_pending_q <= write_pending_d;
collision_q <= collision_d;
write_pending_q <= write_scr_pending_d;
addr_collision_q <= addr_collision_d;
rvalid_q <= read_en;
raddr_q <= raddr_d;
if (write_en) begin
write_en_q <= write_en_d;
if (read_en) begin
raddr_q <= addr_i;
end
if (write_en_d) begin
waddr_q <= addr_i;
wmask_q <= wmask_i;
wdata_q <= wdata_i;
end
if (write_scr_pending_d) begin
if (rw_collision) begin
wdata_scr_q <= wdata_scr_d;
end
end
@ -324,10 +331,10 @@ module prim_ram_1p_scr #(
prim_ram_1p_adv #(
.Depth(Depth),
.Width(Width),
.DataBitsPerMask(DataBitsPerMask),
.DataBitsPerMask(8),
.CfgW(CfgWidth),
.EnableECC(1'b0),
.EnableParity(1'b1), // We are using Byte parity
.EnableParity(1'b1), // We are using byte parity
.EnableInputPipeline(1'b0),
.EnableOutputPipeline(1'b0)
) u_prim_ram_1p_adv (

View file

@ -5,8 +5,8 @@
// Dual-Port SRAM Wrapper
//
// Supported configurations:
// - ECC for 32b wide memories with no write mask
// (Width == 32 && DataBitsPerMask == 32).
// - ECC for 32b and 64b wide memories with no write mask
// (Width == 32 or Width == 64, DataBitsPerMask is ignored).
// - Byte parity if Width is a multiple of 8 bit and write masks have Byte
// granularity (DataBitsPerMask == 8).
//

View file

@ -5,8 +5,8 @@
// Asynchronous Dual-Port SRAM Wrapper
//
// Supported configurations:
// - ECC for 32b wide memories with no write mask
// (Width == 32 && DataBitsPerMask == 32).
// - ECC for 32b and 64b wide memories with no write mask
// (Width == 32 or Width == 64, DataBitsPerMask is ignored).
// - Byte parity if Width is a multiple of 8 bit and write masks have Byte
// granularity (DataBitsPerMask == 8).
//
@ -62,11 +62,6 @@ module prim_ram_2p_async_adv #(
`ASSERT_INIT(CannotHaveEccAndParity_A, !(EnableParity && EnableECC))
// While we require DataBitsPerMask to be per Byte (8) at the interface in case Byte parity is
// enabled, we need to switch this to a per-bit mask locally such that we can individually enable
// the parity bits to be written alongside the data.
localparam int LocalDataBitsPerMask = (EnableParity) ? 1 : DataBitsPerMask;
// Calculate ECC width
localparam int ParWidth = (EnableParity) ? Width/8 :
(!EnableECC) ? 0 :
@ -77,6 +72,13 @@ module prim_ram_2p_async_adv #(
(Width <= 120) ? 8 : 8 ;
localparam int TotalWidth = Width + ParWidth;
// If byte parity is enabled, the write enable bits are used to write memory columns
// with 8 + 1 = 9 bit width (data plus corresponding parity bit).
// If ECC is enabled, the DataBitsPerMask is ignored.
localparam int LocalDataBitsPerMask = (EnableParity) ? 9 :
(EnableECC) ? TotalWidth :
DataBitsPerMask;
////////////////////////////
// RAM Primitive Instance //
////////////////////////////
@ -86,7 +88,7 @@ module prim_ram_2p_async_adv #(
logic [Aw-1:0] a_addr_q, a_addr_d ;
logic [TotalWidth-1:0] a_wdata_q, a_wdata_d ;
logic [TotalWidth-1:0] a_wmask_q, a_wmask_d ;
logic a_rvalid_q, a_rvalid_d, a_rvalid_sram ;
logic a_rvalid_q, a_rvalid_d, a_rvalid_sram_q ;
logic [Width-1:0] a_rdata_q, a_rdata_d ;
logic [TotalWidth-1:0] a_rdata_sram ;
logic [1:0] a_rerror_q, a_rerror_d ;
@ -96,7 +98,7 @@ module prim_ram_2p_async_adv #(
logic [Aw-1:0] b_addr_q, b_addr_d ;
logic [TotalWidth-1:0] b_wdata_q, b_wdata_d ;
logic [TotalWidth-1:0] b_wmask_q, b_wmask_d ;
logic b_rvalid_q, b_rvalid_d, b_rvalid_sram ;
logic b_rvalid_q, b_rvalid_d, b_rvalid_sram_q ;
logic [Width-1:0] b_rdata_q, b_rdata_d ;
logic [TotalWidth-1:0] b_rdata_sram ;
logic [1:0] b_rerror_q, b_rerror_d ;
@ -128,16 +130,16 @@ module prim_ram_2p_async_adv #(
always_ff @(posedge clk_a_i or negedge rst_a_ni) begin
if (!rst_a_ni) begin
a_rvalid_sram <= 1'b0;
a_rvalid_sram_q <= 1'b0;
end else begin
a_rvalid_sram <= a_req_q & ~a_write_q;
a_rvalid_sram_q <= a_req_q & ~a_write_q;
end
end
always_ff @(posedge clk_b_i or negedge rst_b_ni) begin
if (!rst_b_ni) begin
b_rvalid_sram <= 1'b0;
b_rvalid_sram_q <= 1'b0;
end else begin
b_rvalid_sram <= b_req_q & ~b_write_q;
b_rvalid_sram_q <= b_req_q & ~b_write_q;
end
end
@ -197,28 +199,27 @@ module prim_ram_2p_async_adv #(
always_comb begin : p_parity
a_rerror_d = '0;
b_rerror_d = '0;
a_wmask_d[0+:Width] = a_wmask_i;
b_wmask_d[0+:Width] = b_wmask_i;
a_wdata_d[0+:Width] = a_wdata_i;
b_wdata_d[0+:Width] = b_wdata_i;
for (int i = 0; i < Width/8; i ++) begin
// parity generation (odd parity)
a_wdata_d[Width + i] = ~(^a_wdata_i[i*8 +: 8]);
b_wdata_d[Width + i] = ~(^b_wdata_i[i*8 +: 8]);
a_wmask_d[Width + i] = &a_wmask_i[i*8 +: 8];
b_wmask_d[Width + i] = &b_wmask_i[i*8 +: 8];
// parity decoding (errors are always uncorrectable)
a_rerror_d[1] |= ~(^{a_rdata_sram[i*8 +: 8], a_rdata_sram[Width + i]});
b_rerror_d[1] |= ~(^{b_rdata_sram[i*8 +: 8], b_rdata_sram[Width + i]});
end
// tie to zero if the read data is not valid
a_rerror_d &= {2{a_rvalid_sram}};
b_rerror_d &= {2{b_rvalid_sram}};
end
// Data mapping. We have to make 8+1 = 9 bit groups
// that have the same write enable such that FPGA tools
// can map this correctly to BRAM resources.
a_wmask_d[i*9 +: 8] = a_wmask_i[i*8 +: 8];
a_wdata_d[i*9 +: 8] = a_wdata_i[i*8 +: 8];
a_rdata_d[i*8 +: 8] = a_rdata_sram[i*9 +: 8];
b_wmask_d[i*9 +: 8] = b_wmask_i[i*8 +: 8];
b_wdata_d[i*9 +: 8] = b_wdata_i[i*8 +: 8];
b_rdata_d[i*8 +: 8] = b_rdata_sram[i*9 +: 8];
assign a_rdata_d = a_rdata_sram[0+:Width];
assign b_rdata_d = b_rdata_sram[0+:Width];
// parity generation (odd parity)
a_wdata_d[i*9 + 8] = ~(^a_wdata_i[i*8 +: 8]);
a_wmask_d[i*9 + 8] = &a_wmask_i[i*8 +: 8];
b_wdata_d[i*9 + 8] = ~(^b_wdata_i[i*8 +: 8]);
b_wmask_d[i*9 + 8] = &b_wmask_i[i*8 +: 8];
// parity decoding (errors are always uncorrectable)
a_rerror_d[1] |= ~(^{a_rdata_sram[i*9 +: 8], a_rdata_sram[i*9 + 8]});
b_rerror_d[1] |= ~(^{b_rdata_sram[i*9 +: 8], b_rdata_sram[i*9 + 8]});
end
end
end else begin : gen_nosecded_noparity
assign a_wmask_d = a_wmask_i;
assign b_wmask_d = b_wmask_i;
@ -230,8 +231,8 @@ module prim_ram_2p_async_adv #(
assign b_rerror_d = '0;
end
assign a_rvalid_d = a_rvalid_sram;
assign b_rvalid_d = b_rvalid_sram;
assign a_rvalid_d = a_rvalid_sram_q;
assign b_rvalid_d = b_rvalid_sram_q;
/////////////////////////////////////
// Input/Output Pipeline Registers //
@ -293,7 +294,8 @@ module prim_ram_2p_async_adv #(
end else begin
a_rvalid_q <= a_rvalid_d;
a_rdata_q <= a_rdata_d;
a_rerror_q <= a_rerror_d;
// tie to zero if the read data is not valid
a_rerror_q <= a_rerror_d & {2{a_rvalid_d}};
end
end
always_ff @(posedge clk_b_i or negedge rst_b_ni) begin
@ -304,17 +306,20 @@ module prim_ram_2p_async_adv #(
end else begin
b_rvalid_q <= b_rvalid_d;
b_rdata_q <= b_rdata_d;
b_rerror_q <= b_rerror_d;
// tie to zero if the read data is not valid
b_rerror_q <= b_rerror_d & {2{b_rvalid_d}};
end
end
end else begin : gen_dirconnect_output
assign a_rvalid_q = a_rvalid_d;
assign a_rdata_q = a_rdata_d;
assign a_rerror_q = a_rerror_d;
// tie to zero if the read data is not valid
assign a_rerror_q = a_rerror_d & {2{a_rvalid_d}};
assign b_rvalid_q = b_rvalid_d;
assign b_rdata_q = b_rdata_d;
assign b_rerror_q = b_rerror_d;
// tie to zero if the read data is not valid
assign b_rerror_q = b_rerror_d & {2{b_rvalid_d}};
end
endmodule : prim_ram_2p_async_adv

View file

@ -0,0 +1,125 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
//
// SECDED Decoder generated by secded_gen.py
module prim_secded_hamming_72_64_dec (
input [71:0] in,
output logic [63:0] d_o,
output logic [7:0] syndrome_o,
output logic [1:0] err_o
);
logic single_error;
// Syndrome calculation
assign syndrome_o[0] = in[64] ^ in[0] ^ in[1] ^ in[3] ^ in[4] ^ in[6] ^ in[8] ^ in[10] ^ in[11]
^ in[13] ^ in[15] ^ in[17] ^ in[19] ^ in[21] ^ in[23] ^ in[25] ^ in[26]
^ in[28] ^ in[30] ^ in[32] ^ in[34] ^ in[36] ^ in[38] ^ in[40] ^ in[42]
^ in[44] ^ in[46] ^ in[48] ^ in[50] ^ in[52] ^ in[54] ^ in[56] ^ in[57]
^ in[59] ^ in[61] ^ in[63];
assign syndrome_o[1] = in[65] ^ in[0] ^ in[2] ^ in[3] ^ in[5] ^ in[6] ^ in[9] ^ in[10] ^ in[12]
^ in[13] ^ in[16] ^ in[17] ^ in[20] ^ in[21] ^ in[24] ^ in[25] ^ in[27]
^ in[28] ^ in[31] ^ in[32] ^ in[35] ^ in[36] ^ in[39] ^ in[40] ^ in[43]
^ in[44] ^ in[47] ^ in[48] ^ in[51] ^ in[52] ^ in[55] ^ in[56] ^ in[58]
^ in[59] ^ in[62] ^ in[63];
assign syndrome_o[2] = in[66] ^ in[1] ^ in[2] ^ in[3] ^ in[7] ^ in[8] ^ in[9] ^ in[10] ^ in[14]
^ in[15] ^ in[16] ^ in[17] ^ in[22] ^ in[23] ^ in[24] ^ in[25] ^ in[29]
^ in[30] ^ in[31] ^ in[32] ^ in[37] ^ in[38] ^ in[39] ^ in[40] ^ in[45]
^ in[46] ^ in[47] ^ in[48] ^ in[53] ^ in[54] ^ in[55] ^ in[56] ^ in[60]
^ in[61] ^ in[62] ^ in[63];
assign syndrome_o[3] = in[67] ^ in[4] ^ in[5] ^ in[6] ^ in[7] ^ in[8] ^ in[9] ^ in[10] ^ in[18]
^ in[19] ^ in[20] ^ in[21] ^ in[22] ^ in[23] ^ in[24] ^ in[25] ^ in[33]
^ in[34] ^ in[35] ^ in[36] ^ in[37] ^ in[38] ^ in[39] ^ in[40] ^ in[49]
^ in[50] ^ in[51] ^ in[52] ^ in[53] ^ in[54] ^ in[55] ^ in[56];
assign syndrome_o[4] = in[68] ^ in[11] ^ in[12] ^ in[13] ^ in[14] ^ in[15] ^ in[16] ^ in[17]
^ in[18] ^ in[19] ^ in[20] ^ in[21] ^ in[22] ^ in[23] ^ in[24] ^ in[25]
^ in[41] ^ in[42] ^ in[43] ^ in[44] ^ in[45] ^ in[46] ^ in[47] ^ in[48]
^ in[49] ^ in[50] ^ in[51] ^ in[52] ^ in[53] ^ in[54] ^ in[55] ^ in[56];
assign syndrome_o[5] = in[69] ^ in[26] ^ in[27] ^ in[28] ^ in[29] ^ in[30] ^ in[31] ^ in[32]
^ in[33] ^ in[34] ^ in[35] ^ in[36] ^ in[37] ^ in[38] ^ in[39] ^ in[40]
^ in[41] ^ in[42] ^ in[43] ^ in[44] ^ in[45] ^ in[46] ^ in[47] ^ in[48]
^ in[49] ^ in[50] ^ in[51] ^ in[52] ^ in[53] ^ in[54] ^ in[55] ^ in[56];
assign syndrome_o[6] = in[70] ^ in[57] ^ in[58] ^ in[59] ^ in[60] ^ in[61] ^ in[62] ^ in[63];
assign syndrome_o[7] = in[71] ^ in[0] ^ in[1] ^ in[2] ^ in[3] ^ in[4] ^ in[5] ^ in[6] ^ in[7]
^ in[8] ^ in[9] ^ in[10] ^ in[11] ^ in[12] ^ in[13] ^ in[14] ^ in[15]
^ in[16] ^ in[17] ^ in[18] ^ in[19] ^ in[20] ^ in[21] ^ in[22] ^ in[23]
^ in[24] ^ in[25] ^ in[26] ^ in[27] ^ in[28] ^ in[29] ^ in[30] ^ in[31]
^ in[32] ^ in[33] ^ in[34] ^ in[35] ^ in[36] ^ in[37] ^ in[38] ^ in[39]
^ in[40] ^ in[41] ^ in[42] ^ in[43] ^ in[44] ^ in[45] ^ in[46] ^ in[47]
^ in[48] ^ in[49] ^ in[50] ^ in[51] ^ in[52] ^ in[53] ^ in[54] ^ in[55]
^ in[56] ^ in[57] ^ in[58] ^ in[59] ^ in[60] ^ in[61] ^ in[62] ^ in[63];
// Corrected output calculation
assign d_o[0] = (syndrome_o == 8'h83) ^ in[0];
assign d_o[1] = (syndrome_o == 8'h85) ^ in[1];
assign d_o[2] = (syndrome_o == 8'h86) ^ in[2];
assign d_o[3] = (syndrome_o == 8'h87) ^ in[3];
assign d_o[4] = (syndrome_o == 8'h89) ^ in[4];
assign d_o[5] = (syndrome_o == 8'h8a) ^ in[5];
assign d_o[6] = (syndrome_o == 8'h8b) ^ in[6];
assign d_o[7] = (syndrome_o == 8'h8c) ^ in[7];
assign d_o[8] = (syndrome_o == 8'h8d) ^ in[8];
assign d_o[9] = (syndrome_o == 8'h8e) ^ in[9];
assign d_o[10] = (syndrome_o == 8'h8f) ^ in[10];
assign d_o[11] = (syndrome_o == 8'h91) ^ in[11];
assign d_o[12] = (syndrome_o == 8'h92) ^ in[12];
assign d_o[13] = (syndrome_o == 8'h93) ^ in[13];
assign d_o[14] = (syndrome_o == 8'h94) ^ in[14];
assign d_o[15] = (syndrome_o == 8'h95) ^ in[15];
assign d_o[16] = (syndrome_o == 8'h96) ^ in[16];
assign d_o[17] = (syndrome_o == 8'h97) ^ in[17];
assign d_o[18] = (syndrome_o == 8'h98) ^ in[18];
assign d_o[19] = (syndrome_o == 8'h99) ^ in[19];
assign d_o[20] = (syndrome_o == 8'h9a) ^ in[20];
assign d_o[21] = (syndrome_o == 8'h9b) ^ in[21];
assign d_o[22] = (syndrome_o == 8'h9c) ^ in[22];
assign d_o[23] = (syndrome_o == 8'h9d) ^ in[23];
assign d_o[24] = (syndrome_o == 8'h9e) ^ in[24];
assign d_o[25] = (syndrome_o == 8'h9f) ^ in[25];
assign d_o[26] = (syndrome_o == 8'ha1) ^ in[26];
assign d_o[27] = (syndrome_o == 8'ha2) ^ in[27];
assign d_o[28] = (syndrome_o == 8'ha3) ^ in[28];
assign d_o[29] = (syndrome_o == 8'ha4) ^ in[29];
assign d_o[30] = (syndrome_o == 8'ha5) ^ in[30];
assign d_o[31] = (syndrome_o == 8'ha6) ^ in[31];
assign d_o[32] = (syndrome_o == 8'ha7) ^ in[32];
assign d_o[33] = (syndrome_o == 8'ha8) ^ in[33];
assign d_o[34] = (syndrome_o == 8'ha9) ^ in[34];
assign d_o[35] = (syndrome_o == 8'haa) ^ in[35];
assign d_o[36] = (syndrome_o == 8'hab) ^ in[36];
assign d_o[37] = (syndrome_o == 8'hac) ^ in[37];
assign d_o[38] = (syndrome_o == 8'had) ^ in[38];
assign d_o[39] = (syndrome_o == 8'hae) ^ in[39];
assign d_o[40] = (syndrome_o == 8'haf) ^ in[40];
assign d_o[41] = (syndrome_o == 8'hb0) ^ in[41];
assign d_o[42] = (syndrome_o == 8'hb1) ^ in[42];
assign d_o[43] = (syndrome_o == 8'hb2) ^ in[43];
assign d_o[44] = (syndrome_o == 8'hb3) ^ in[44];
assign d_o[45] = (syndrome_o == 8'hb4) ^ in[45];
assign d_o[46] = (syndrome_o == 8'hb5) ^ in[46];
assign d_o[47] = (syndrome_o == 8'hb6) ^ in[47];
assign d_o[48] = (syndrome_o == 8'hb7) ^ in[48];
assign d_o[49] = (syndrome_o == 8'hb8) ^ in[49];
assign d_o[50] = (syndrome_o == 8'hb9) ^ in[50];
assign d_o[51] = (syndrome_o == 8'hba) ^ in[51];
assign d_o[52] = (syndrome_o == 8'hbb) ^ in[52];
assign d_o[53] = (syndrome_o == 8'hbc) ^ in[53];
assign d_o[54] = (syndrome_o == 8'hbd) ^ in[54];
assign d_o[55] = (syndrome_o == 8'hbe) ^ in[55];
assign d_o[56] = (syndrome_o == 8'hbf) ^ in[56];
assign d_o[57] = (syndrome_o == 8'hc1) ^ in[57];
assign d_o[58] = (syndrome_o == 8'hc2) ^ in[58];
assign d_o[59] = (syndrome_o == 8'hc3) ^ in[59];
assign d_o[60] = (syndrome_o == 8'hc4) ^ in[60];
assign d_o[61] = (syndrome_o == 8'hc5) ^ in[61];
assign d_o[62] = (syndrome_o == 8'hc6) ^ in[62];
assign d_o[63] = (syndrome_o == 8'hc7) ^ in[63];
// err_o calc. bit0: single error, bit1: double error
assign single_error = ^syndrome_o;
assign err_o[0] = single_error;
assign err_o[1] = ~single_error & (|syndrome_o);
endmodule
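The error classification at the end of the decoder relies on the overall-parity check bit folded into syndrome_o[7]: an odd-parity syndrome indicates a single (correctable) error, while a nonzero syndrome with even parity indicates a double (uncorrectable) error. A direct Python mirror of that final logic:

```python
def classify_error(syndrome):
    """Mirror the decoder's err_o logic for an 8-bit SECDED syndrome:
    err[0] = ^syndrome, err[1] = ~err[0] & |syndrome."""
    single = bin(syndrome & 0xFF).count("1") % 2 == 1
    double = (not single) and (syndrome & 0xFF) != 0
    return single, double
```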

View file

@ -0,0 +1,109 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
//
// SECDED Encoder generated by secded_gen.py
module prim_secded_hamming_72_64_enc (
input [63:0] in,
output logic [71:0] out
);
assign out[0] = in[0] ;
assign out[1] = in[1] ;
assign out[2] = in[2] ;
assign out[3] = in[3] ;
assign out[4] = in[4] ;
assign out[5] = in[5] ;
assign out[6] = in[6] ;
assign out[7] = in[7] ;
assign out[8] = in[8] ;
assign out[9] = in[9] ;
assign out[10] = in[10] ;
assign out[11] = in[11] ;
assign out[12] = in[12] ;
assign out[13] = in[13] ;
assign out[14] = in[14] ;
assign out[15] = in[15] ;
assign out[16] = in[16] ;
assign out[17] = in[17] ;
assign out[18] = in[18] ;
assign out[19] = in[19] ;
assign out[20] = in[20] ;
assign out[21] = in[21] ;
assign out[22] = in[22] ;
assign out[23] = in[23] ;
assign out[24] = in[24] ;
assign out[25] = in[25] ;
assign out[26] = in[26] ;
assign out[27] = in[27] ;
assign out[28] = in[28] ;
assign out[29] = in[29] ;
assign out[30] = in[30] ;
assign out[31] = in[31] ;
assign out[32] = in[32] ;
assign out[33] = in[33] ;
assign out[34] = in[34] ;
assign out[35] = in[35] ;
assign out[36] = in[36] ;
assign out[37] = in[37] ;
assign out[38] = in[38] ;
assign out[39] = in[39] ;
assign out[40] = in[40] ;
assign out[41] = in[41] ;
assign out[42] = in[42] ;
assign out[43] = in[43] ;
assign out[44] = in[44] ;
assign out[45] = in[45] ;
assign out[46] = in[46] ;
assign out[47] = in[47] ;
assign out[48] = in[48] ;
assign out[49] = in[49] ;
assign out[50] = in[50] ;
assign out[51] = in[51] ;
assign out[52] = in[52] ;
assign out[53] = in[53] ;
assign out[54] = in[54] ;
assign out[55] = in[55] ;
assign out[56] = in[56] ;
assign out[57] = in[57] ;
assign out[58] = in[58] ;
assign out[59] = in[59] ;
assign out[60] = in[60] ;
assign out[61] = in[61] ;
assign out[62] = in[62] ;
assign out[63] = in[63] ;
assign out[64] = in[0] ^ in[1] ^ in[3] ^ in[4] ^ in[6] ^ in[8] ^ in[10] ^ in[11] ^ in[13] ^ in[15]
^ in[17] ^ in[19] ^ in[21] ^ in[23] ^ in[25] ^ in[26] ^ in[28] ^ in[30] ^ in[32]
^ in[34] ^ in[36] ^ in[38] ^ in[40] ^ in[42] ^ in[44] ^ in[46] ^ in[48] ^ in[50]
^ in[52] ^ in[54] ^ in[56] ^ in[57] ^ in[59] ^ in[61] ^ in[63];
assign out[65] = in[0] ^ in[2] ^ in[3] ^ in[5] ^ in[6] ^ in[9] ^ in[10] ^ in[12] ^ in[13] ^ in[16]
^ in[17] ^ in[20] ^ in[21] ^ in[24] ^ in[25] ^ in[27] ^ in[28] ^ in[31] ^ in[32]
^ in[35] ^ in[36] ^ in[39] ^ in[40] ^ in[43] ^ in[44] ^ in[47] ^ in[48] ^ in[51]
^ in[52] ^ in[55] ^ in[56] ^ in[58] ^ in[59] ^ in[62] ^ in[63];
assign out[66] = in[1] ^ in[2] ^ in[3] ^ in[7] ^ in[8] ^ in[9] ^ in[10] ^ in[14] ^ in[15] ^ in[16]
^ in[17] ^ in[22] ^ in[23] ^ in[24] ^ in[25] ^ in[29] ^ in[30] ^ in[31] ^ in[32]
^ in[37] ^ in[38] ^ in[39] ^ in[40] ^ in[45] ^ in[46] ^ in[47] ^ in[48] ^ in[53]
^ in[54] ^ in[55] ^ in[56] ^ in[60] ^ in[61] ^ in[62] ^ in[63];
assign out[67] = in[4] ^ in[5] ^ in[6] ^ in[7] ^ in[8] ^ in[9] ^ in[10] ^ in[18] ^ in[19] ^ in[20]
^ in[21] ^ in[22] ^ in[23] ^ in[24] ^ in[25] ^ in[33] ^ in[34] ^ in[35] ^ in[36]
^ in[37] ^ in[38] ^ in[39] ^ in[40] ^ in[49] ^ in[50] ^ in[51] ^ in[52] ^ in[53]
^ in[54] ^ in[55] ^ in[56];
assign out[68] = in[11] ^ in[12] ^ in[13] ^ in[14] ^ in[15] ^ in[16] ^ in[17] ^ in[18] ^ in[19]
^ in[20] ^ in[21] ^ in[22] ^ in[23] ^ in[24] ^ in[25] ^ in[41] ^ in[42] ^ in[43]
^ in[44] ^ in[45] ^ in[46] ^ in[47] ^ in[48] ^ in[49] ^ in[50] ^ in[51] ^ in[52]
^ in[53] ^ in[54] ^ in[55] ^ in[56];
assign out[69] = in[26] ^ in[27] ^ in[28] ^ in[29] ^ in[30] ^ in[31] ^ in[32] ^ in[33] ^ in[34]
^ in[35] ^ in[36] ^ in[37] ^ in[38] ^ in[39] ^ in[40] ^ in[41] ^ in[42] ^ in[43]
^ in[44] ^ in[45] ^ in[46] ^ in[47] ^ in[48] ^ in[49] ^ in[50] ^ in[51] ^ in[52]
^ in[53] ^ in[54] ^ in[55] ^ in[56];
assign out[70] = in[57] ^ in[58] ^ in[59] ^ in[60] ^ in[61] ^ in[62] ^ in[63];
assign out[71] = in[0] ^ in[1] ^ in[2] ^ in[3] ^ in[4] ^ in[5] ^ in[6] ^ in[7] ^ in[8] ^ in[9]
^ in[10] ^ in[11] ^ in[12] ^ in[13] ^ in[14] ^ in[15] ^ in[16] ^ in[17] ^ in[18]
^ in[19] ^ in[20] ^ in[21] ^ in[22] ^ in[23] ^ in[24] ^ in[25] ^ in[26] ^ in[27]
^ in[28] ^ in[29] ^ in[30] ^ in[31] ^ in[32] ^ in[33] ^ in[34] ^ in[35] ^ in[36]
^ in[37] ^ in[38] ^ in[39] ^ in[40] ^ in[41] ^ in[42] ^ in[43] ^ in[44] ^ in[45]
^ in[46] ^ in[47] ^ in[48] ^ in[49] ^ in[50] ^ in[51] ^ in[52] ^ in[53] ^ in[54]
^ in[55] ^ in[56] ^ in[57] ^ in[58] ^ in[59] ^ in[60] ^ in[61] ^ in[62] ^ in[63];
endmodule
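The parity equations above follow the textbook Hamming layout produced by the `secded_gen.py` changes further down in this import: parity bit p covers every 1-based code position whose index has bit p set, the parity bits themselves occupy the power-of-two positions, and the final position is an overall parity over all data bits (out[71]). The short Python check below is illustrative only (the helper names are not from the generator):

```python
# Sketch: check two generated parity equations against the classic Hamming
# rule (parity bit p covers code positions whose 1-based index has bit p set).

def data_positions(n):
    """1-based code positions holding data bits: everything except the
    power-of-two parity positions and the final overall-parity position n."""
    return [pos for pos in range(1, n) if pos & (pos - 1) != 0]

pos = data_positions(72)      # 64 data positions for the (72, 64) code
assert len(pos) == 64

def covered(p):
    """Data-bit indices XORed into parity bit p."""
    return [i for i, q in enumerate(pos) if (q >> p) & 1]

# out[64] (parity 0) starts in[0] ^ in[1] ^ in[3] ^ in[4] ^ in[6] ^ in[8] ...
assert covered(0)[:6] == [0, 1, 3, 4, 6, 8]
# out[70] (parity 6) is exactly in[57] ^ ... ^ in[63]
assert covered(6) == list(range(57, 64))
```

The overall parity out[71] is not captured by the bit test; it is appended unconditionally by the generator.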

@@ -36,6 +36,7 @@ module prim_subst_perm #(
always_comb begin : p_dec
data_state_sbox = data_state[r] ^ key_i;
// Reverse odd/even grouping
data_state_flipped = data_state_sbox;
for (int k = 0; k < DataWidth/2; k++) begin
data_state_flipped[k * 2] = data_state_sbox[k];
data_state_flipped[k * 2 + 1] = data_state_sbox[k + DataWidth/2];
@@ -53,7 +54,7 @@ module prim_subst_perm #(
////////////////////////////////
// encryption pass
end else begin : gen_enc
always_comb begin : p_dec
always_comb begin : p_enc
data_state_sbox = data_state[r] ^ key_i;
// This SBox layer is aligned to nibbles, so the uppermost bits may not be affected by this.
// However, the permutation below ensures that these bits get shuffled to a different
@@ -68,6 +69,7 @@ module prim_subst_perm #(
// Regroup bits such that all even indices are stacked up first, followed by all odd
// indices. Note that if the Width is odd, this is still ok, since
// the uppermost bit just stays in place in that case.
data_state_sbox = data_state_flipped;
for (int k = 0; k < DataWidth/2; k++) begin
data_state_sbox[k] = data_state_flipped[k * 2];
data_state_sbox[k + DataWidth/2] = data_state_flipped[k * 2 + 1];
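The odd/even regrouping in the encryption pass and its reversal in the decryption pass are exact inverses, and with an odd width the uppermost bit is untouched by both, as the comments state. The Python model below is illustrative only (the names `regroup`/`ungroup` are not from the RTL):

```python
# Model of the bit regrouping in prim_subst_perm: encrypt stacks even indices
# first, then odd indices; decrypt interleaves the two halves back together.

def regroup(bits):
    """Encrypt direction: even indices first, then odd indices."""
    w = len(bits)
    out = list(bits)
    for k in range(w // 2):
        out[k] = bits[2 * k]
        out[k + w // 2] = bits[2 * k + 1]
    return out

def ungroup(bits):
    """Decrypt direction: undo the regrouping."""
    w = len(bits)
    out = list(bits)
    for k in range(w // 2):
        out[2 * k] = bits[k]
        out[2 * k + 1] = bits[k + w // 2]
    return out

assert regroup(list(range(8))) == [0, 2, 4, 6, 1, 3, 5, 7]
assert ungroup(regroup(list(range(9)))) == list(range(9))  # odd width: top bit stays put
```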

@@ -150,9 +150,9 @@ module prim_sync_reqack (
end
// Source domain cannot de-assert REQ while waiting for ACK.
`ASSERT(ReqAckSyncHoldReq, $fell(src_req_i) |-> (src_fsm_cs != HANDSHAKE), clk_src_i, rst_src_ni)
`ASSERT(ReqAckSyncHoldReq, $fell(src_req_i) |-> (src_fsm_cs != HANDSHAKE), clk_src_i, !rst_src_ni)
// Destination domain cannot assert ACK without REQ.
`ASSERT(ReqAckSyncAckNeedsReq, dst_ack_i |-> dst_req_o, clk_dst_i, rst_dst_ni)
`ASSERT(ReqAckSyncAckNeedsReq, dst_ack_i |-> dst_req_o, clk_dst_i, !rst_dst_ni)
endmodule

@@ -0,0 +1,103 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
//
// REQ/ACK synchronizer with associated data.
//
// This module synchronizes a REQ/ACK handshake with associated data across a clock domain
// crossing (CDC). Both domains will see a handshake with the duration of one clock cycle. By
// default, the data itself is not registered. The main purpose of feeding the data through this
module is to have an anchor point for waiving CDC violations. If the data is configured to
// flow from the destination (DST) to the source (SRC) domain, an additional register stage can be
// inserted for data buffering.
//
// Under the hood, this module uses a prim_sync_reqack primitive for synchronizing the
// REQ/ACK handshake. See prim_sync_reqack.sv for more details.
`include "prim_assert.sv"
module prim_sync_reqack_data #(
parameter int unsigned Width = 1,
parameter bit DataSrc2Dst = 1'b1, // Direction of data flow: 1'b1 = SRC to DST,
// 1'b0 = DST to SRC
parameter bit DataReg = 1'b0 // Enable optional register stage for data,
// only usable with DataSrc2Dst == 1'b0.
) (
input clk_src_i, // REQ side, SRC domain
input rst_src_ni, // REQ side, SRC domain
input clk_dst_i, // ACK side, DST domain
input rst_dst_ni, // ACK side, DST domain
input logic src_req_i, // REQ side, SRC domain
output logic src_ack_o, // REQ side, SRC domain
output logic dst_req_o, // ACK side, DST domain
input logic dst_ack_i, // ACK side, DST domain
input logic [Width-1:0] data_i,
output logic [Width-1:0] data_o
);
////////////////////////////////////
// REQ/ACK synchronizer primitive //
////////////////////////////////////
prim_sync_reqack u_prim_sync_reqack (
.clk_src_i,
.rst_src_ni,
.clk_dst_i,
.rst_dst_ni,
.src_req_i,
.src_ack_o,
.dst_req_o,
.dst_ack_i
);
/////////////////////////
// Data register stage //
/////////////////////////
// Optional - Only relevant if the data flows from DST to SRC. In this case, it must be ensured
// that the data remains stable until the ACK becomes visible in the SRC domain.
//
// Note that for larger data widths, it is recommended to adjust the data sender to hold the data
// stable until the next REQ in order to save the cost of this register stage.
if (DataSrc2Dst == 1'b0 && DataReg == 1'b1) begin : gen_data_reg
logic data_we;
logic [Width-1:0] data_d, data_q;
// Sample the data when seeing the REQ/ACK handshake in the DST domain.
assign data_we = dst_req_o & dst_ack_i;
assign data_d = data_i;
always_ff @(posedge clk_dst_i or negedge rst_dst_ni) begin
if (!rst_dst_ni) begin
data_q <= '0;
end else if (data_we) begin
data_q <= data_d;
end
end
assign data_o = data_q;
end else begin : gen_no_data_reg
// Just feed through the data.
assign data_o = data_i;
end
////////////////
// Assertions //
////////////////
if (DataSrc2Dst == 1'b1) begin : gen_assert_data_src2dst
// SRC domain cannot change data while waiting for ACK.
`ASSERT(ReqAckSyncDataHoldSrc2Dst, !$stable(data_i) |->
!(src_req_i == 1'b1 && u_prim_sync_reqack.src_fsm_cs == u_prim_sync_reqack.HANDSHAKE),
clk_src_i, !rst_src_ni)
// Register stage cannot be used.
`ASSERT_INIT(ReqAckSyncDataReg, DataSrc2Dst && !DataReg)
end else if (DataSrc2Dst == 1'b0 && DataReg == 1'b0) begin : gen_assert_data_dst2src
// DST domain cannot change data while waiting for SRC domain to receive the ACK.
`ASSERT(ReqAckSyncDataHoldDst2Src, !$stable(data_i) |->
(u_prim_sync_reqack.dst_fsm_cs != u_prim_sync_reqack.SYNC),
clk_dst_i, !rst_dst_ni)
end
endmodule

@@ -40,6 +40,25 @@ def _get_random_data_hex_literal(width):
return literal_str
def _blockify(s, size, limit):
""" Make sure the output does not exceed a certain size per line"""
str_idx = 2
remain = size % (limit * 4)
numbits = remain if remain else limit * 4
s_list = []
remain = size
while remain > 0:
s_incr = int(numbits / 4)
s_list.append("{}'h{}".format(numbits, s[str_idx: str_idx + s_incr]))
str_idx += s_incr
remain -= numbits
numbits = limit * 4
return(",\n ".join(s_list))
def _get_random_perm_hex_literal(numel):
""" Compute a random permutation of 'numel' elements and
return as packed hex literal"""
@@ -52,8 +71,7 @@ def _get_random_perm_hex_literal(numel):
literal_str += format(k, '0' + str(width) + 'b')
# convert to hex for space efficiency
literal_str = hex(int(literal_str, 2))
literal_str = str(width * numel) + "'h" + literal_str[2:]
return literal_str
return _blockify(literal_str, width * numel, 64)
def _wrapped_docstring():
@@ -126,8 +144,9 @@ parameter int {}LfsrWidth = {};
typedef logic [{}LfsrWidth-1:0] {}lfsr_seed_t;
typedef logic [{}LfsrWidth-1:0][$clog2({}LfsrWidth)-1:0] {}lfsr_perm_t;
parameter {}lfsr_seed_t RndCnst{}LfsrSeedDefault = {};
parameter {}lfsr_perm_t RndCnst{}LfsrPermDefault =
{};
parameter {}lfsr_perm_t RndCnst{}LfsrPermDefault = {{
{}
}};
'''.format(args.width, args.seed, args.prefix,
args.prefix, args.width,
args.prefix, type_prefix,
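The `_blockify` helper introduced above keeps generated hex literals within a line-length budget: it splits a `0x`-prefixed literal covering `size` bits into sized SystemVerilog chunks of at most `limit * 4` bits, emitting any remainder first so the following lines are full width. A standalone mirror of its logic (the sample literal is made up):

```python
def blockify(s, size, limit):
    """Mirror of _blockify: chop a '0x'-prefixed hex literal of `size` bits
    into sized SystemVerilog literals of at most limit*4 bits each."""
    str_idx = 2                    # skip the '0x' prefix
    remain = size % (limit * 4)
    numbits = remain if remain else limit * 4
    s_list = []
    remain = size
    while remain > 0:
        s_incr = numbits // 4      # hex digits consumed for this chunk
        s_list.append("{}'h{}".format(numbits, s[str_idx:str_idx + s_incr]))
        str_idx += s_incr
        remain -= numbits
        numbits = limit * 4        # all subsequent chunks are full width
    return ",\n ".join(s_list)

# An 80-bit literal with limit=16 (64 bits per line): 16-bit remainder first.
assert blockify("0x" + "a" * 20, 80, 16) == "16'haaaa,\n 64'h" + "a" * 16
```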

@@ -378,8 +378,6 @@ def _generate_abstract_impl(gapi):
yaml.dump(abstract_prim_core,
f,
encoding="utf-8",
default_flow_style=False,
sort_keys=False,
Dumper=YamlDumper)
print("Core file written to %s" % (abstract_prim_core_filepath, ))

@@ -25,7 +25,7 @@ COPYRIGHT = """// Copyright lowRISC contributors.
// SPDX-License-Identifier: Apache-2.0
//
"""
CODE_OPTIONS = ['hsiao', 'hamming']
def min_paritysize(k):
# SECDED --> Hamming distance 'd': 4
@@ -123,90 +123,20 @@ def print_enc(n, k, m, codes):
def calc_syndrome(code):
log.info("in syndrome {}".format(code))
return sum(map((lambda x: 2**x), code))
# return whether an integer is a power of 2
def is_pow2(n):
return (n & (n-1) == 0) and n != 0
def print_dec(n, k, m, codes):
outstr = ""
outstr += " logic single_error;\n"
outstr += "\n"
outstr += " // Syndrome calculation\n"
for i in range(m):
# Print combination
outstr += print_comb(n, k, m, i, codes, 1, 100,
" assign syndrome_o[%d] = in[%d] ^" % (i, k + i),
len(" in[%d] ^" % (k + i)) + 2)
outstr += "\n"
outstr += " // Corrected output calculation\n"
for i in range(k):
synd_v = calc_syndrome(codes[i])
outstr += " assign d_o[%d] = (syndrome_o == %d'h%x) ^ in[%d];\n" % (
i, m, calc_syndrome(codes[i]), i)
outstr += "\n"
outstr += " // err_o calc. bit0: single error, bit1: double error\n"
outstr += " assign single_error = ^syndrome_o;\n"
outstr += " assign err_o[0] = single_error;\n"
outstr += " assign err_o[1] = ~single_error & (|syndrome_o);\n"
return outstr
def main():
parser = argparse.ArgumentParser(
prog="secded_gen",
description='''This tool generates Single Error Correction Double Error
Detection (SECDED) encoder and decoder modules in SystemVerilog.
''')
parser.add_argument(
'-m',
type=int,
default=7,
help=
'parity length. If fan-in is too big, increasing m helps. (default: %(default)s)'
)
parser.add_argument(
'-k',
type=int,
default=32,
help=
'code length. Minimum \'m\' is calculated by the tool (default: %(default)s)'
)
parser.add_argument(
'--outdir',
default='../rtl',
help=
'output directory. The output file will be named `prim_secded_<n>_<k>_enc/dec.sv` (default: %(default)s)'
)
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose')
args = parser.parse_args()
if (args.verbose):
log.basicConfig(format="%(levelname)s: %(message)s", level=log.DEBUG)
else:
log.basicConfig(format="%(levelname)s: %(message)s")
# Error checking
if (args.k <= 1 or args.k > 120):
log.error("Current tool doesn't support the value k (%d)", args.k)
k = args.k
if (args.m <= 1 or args.m > 20):
log.error("Current tool doesn't support the value m (%d)", args.m)
# Calculate 'm' (parity size)
min_m = min_paritysize(k)
if (args.m < min_m):
log.error("given \'m\' argument is smaller than minimum requirement")
m = min_m
else:
m = args.m
n = m + k
log.info("n(%d), k(%d), m(%d)", n, k, m)
random.seed(time.time())
def is_odd(n):
return (n % 2) > 0
# k = data bits
# m = parity bits
# generate hsiao code
def hsiao_code(k, m):
# using itertools combinations, generate odd number of 1 in a row
required_row = k # k rows are needed; decremented each time a row is acquired
@@ -285,13 +215,153 @@ def main():
# Found everything!
break
log.info(codes)
log.info("Hsiao codes {}".format(codes))
return codes
# n = total bits
# k = data bits
# m = parity bits
# generate hamming code
def hamming_code(n, k, m):
# construct a list of code tuples.
# Tuple corresponds to each bit position and shows which parity bit it participates in
# Only the data bits are shown, the parity bits are not.
codes = []
for pos in range(1, n+1):
# this is a valid parity bit position or the final parity bit
if (is_pow2(pos) or pos == n):
continue
else:
code = ()
for p in range(m):
# this is the starting parity position
parity_pos = 2**p
# back-track to the closest parity bit multiple and see if it is even or odd
# If even, we are in the skip phase, do not include
# If odd, we are in the include phase
parity_chk = int((pos - (pos % parity_pos)) / parity_pos)
log.debug("At position {} parity value {}, {}" \
.format(pos, parity_pos, parity_chk))
# valid for inclusion or final parity bit that includes everything
if is_odd(parity_chk) or p == m-1:
code = code + (p,)
log.info("add {} to tuple {}".format(p, code))
codes.append(code)
log.info("Hamming codes {}".format(codes))
return codes
def print_dec(n, k, m, codes):
outstr = ""
outstr += " logic single_error;\n"
outstr += "\n"
outstr += " // Syndrome calculation\n"
for i in range(m):
# Print combination
outstr += print_comb(n, k, m, i, codes, 1, 100,
" assign syndrome_o[%d] = in[%d] ^" % (i, k + i),
len(" in[%d] ^" % (k + i)) + 2)
outstr += "\n"
outstr += " // Corrected output calculation\n"
for i in range(k):
synd_v = calc_syndrome(codes[i])
outstr += " assign d_o[%d] = (syndrome_o == %d'h%x) ^ in[%d];\n" % (
i, m, calc_syndrome(codes[i]), i)
outstr += "\n"
outstr += " // err_o calc. bit0: single error, bit1: double error\n"
outstr += " assign single_error = ^syndrome_o;\n"
outstr += " assign err_o[0] = single_error;\n"
outstr += " assign err_o[1] = ~single_error & (|syndrome_o);\n"
return outstr
def main():
parser = argparse.ArgumentParser(
prog="secded_gen",
description='''This tool generates Single Error Correction Double Error
Detection (SECDED) encoder and decoder modules in SystemVerilog.
''')
parser.add_argument(
'-m',
type=int,
default=7,
help=
'parity length. If fan-in is too big, increasing m helps. (default: %(default)s)'
)
parser.add_argument(
'-k',
type=int,
default=32,
help=
'code length. Minimum \'m\' is calculated by the tool (default: %(default)s)'
)
parser.add_argument(
'-c',
default='hsiao',
help=
'ECC code used. Options: hsiao / hamming (default: %(default)s)'
)
parser.add_argument(
'--outdir',
default='../rtl',
help=
'output directory. The output file will be named `prim_secded_<n>_<k>_enc/dec.sv` (default: %(default)s)'
)
parser.add_argument('--verbose', '-v', action='store_true', help='Verbose')
args = parser.parse_args()
if (args.verbose):
log.basicConfig(format="%(levelname)s: %(message)s", level=log.DEBUG)
else:
log.basicConfig(format="%(levelname)s: %(message)s")
# Error checking
if (args.k <= 1 or args.k > 120):
log.error("Current tool doesn't support the value k (%d)", args.k)
k = args.k
if (args.m <= 1 or args.m > 20):
log.error("Current tool doesn't support the value m (%d)", args.m)
# Calculate 'm' (parity size)
min_m = min_paritysize(k)
if (args.m < min_m):
log.warning("given \'m\' argument is smaller than minimum requirement, " +
"using calculated minimum")
m = min_m
else:
m = args.m
n = m + k
log.info("n(%d), k(%d), m(%d)", n, k, m)
random.seed(time.time())
# Error check code selection
codes = []
name = ''
if (args.c == 'hsiao'):
codes = hsiao_code(k, m)
elif (args.c == 'hamming'):
name = '_hamming'
codes = hamming_code(n, k, m)
else:
log.error("Invalid code {} selected, use one of {}".format(args.c, CODE_OPTIONS))
return
# Print Encoder
enc_out = print_enc(n, k, m, codes)
#log.info(enc_out)
module_name = "prim_secded_%d_%d" % (n, k)
module_name = "prim_secded%s_%d_%d" % (name, n, k)
with open(args.outdir + "/" + module_name + "_enc.sv", "w") as f:
f.write(COPYRIGHT)
@@ -319,6 +389,5 @@ def main():
f.write(dec_out)
f.write("endmodule\n\n")
if __name__ == "__main__":
main()
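The generator pieces above can be exercised end to end: `hamming_code()` builds the per-data-bit coverage tuples, an encoder XORs each covered data bit into its parity bits, and a decoder recomputes the syndrome and flips the data bit whose `calc_syndrome()` value matches, exactly as the generated `d_o` assignments do. In the sketch below, `encode`/`decode` are illustrative re-implementations of the generated RTL, not functions from `secded_gen.py`:

```python
import random

def is_pow2(x):
    return x != 0 and (x & (x - 1)) == 0

def hamming_code(n, k, m):
    """Per data bit, the tuple of parity bits covering it (mirrors secded_gen.py;
    the back-tracking parity check equals testing bit p of the 1-based position)."""
    codes = []
    for pos in range(1, n + 1):
        if is_pow2(pos) or pos == n:
            continue  # parity positions and the final overall-parity position
        codes.append(tuple(p for p in range(m)
                           if ((pos >> p) & 1) or p == m - 1))
    return codes

def calc_syndrome(code):
    return sum(2 ** x for x in code)

def encode(data, codes, m):
    word = list(data) + [0] * m
    for i, code in enumerate(codes):
        for p in code:
            word[len(data) + p] ^= data[i]
    return word

def decode(word, codes, m):
    k = len(word) - m
    synd = 0
    for p in range(m):
        bit = word[k + p]
        for i, code in enumerate(codes):
            if p in code:
                bit ^= word[i]
        synd |= bit << p
    # d_o[i] = (syndrome_o == calc_syndrome(codes[i])) ^ in[i]
    return [word[i] ^ (synd == calc_syndrome(codes[i])) for i in range(k)], synd

k, m = 64, 8
codes = hamming_code(k + m, k, m)
rng = random.Random(7)
data = [rng.randint(0, 1) for _ in range(k)]

word = encode(data, codes, m)
assert decode(word, codes, m) == (data, 0)   # clean word: zero syndrome

word[17] ^= 1                                # flip one data bit
fixed, synd = decode(word, codes, m)
assert fixed == data and synd == calc_syndrome(codes[17])
```

The generated decoder additionally classifies single versus double errors from the syndrome's parity (`single_error = ^syndrome_o`); the round trip above only exercises correction.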

@@ -55,6 +55,7 @@ RUST_INSTRUCTIONS = """
"""
# TODO: Consolidate the subfunctions below in a shared utility package.
def _wrapped_docstring():
'''Return a text-wrapped version of the module docstring'''
paras = []

@@ -0,0 +1,40 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:prim_generic:buf"
description: "buffer"
filesets:
files_rtl:
files:
- rtl/prim_generic_buf.sv
file_type: systemVerilogSource
files_verilator_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
file_type: vlt
files_ascentlint_waiver:
depend:
# common waivers
- lowrisc:lint:common
files:
file_type: waiver
files_veriblelint_waiver:
depend:
# common waivers
- lowrisc:lint:common
- lowrisc:lint:comportable
targets:
default:
filesets:
- tool_verilator ? (files_verilator_waiver)
- tool_ascentlint ? (files_ascentlint_waiver)
- tool_veriblelint ? (files_veriblelint_waiver)
- files_rtl

@@ -11,6 +11,7 @@ filesets:
- lowrisc:prim:all
- lowrisc:prim:util
- lowrisc:prim:ram_1p_adv
- lowrisc:prim:otp_pkg
files:
- rtl/prim_generic_otp.sv
file_type: systemVerilogSource

@@ -0,0 +1,14 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
`include "prim_assert.sv"
module prim_generic_buf (
input in_i,
output logic out_o
);
assign out_o = in_i;
endmodule : prim_generic_buf

@@ -6,19 +6,22 @@
//
module prim_generic_flash #(
parameter int NumBanks = 2, // number of banks
parameter int InfosPerBank = 1, // info pages per bank
parameter int PagesPerBank = 256, // data pages per bank
parameter int WordsPerPage = 256, // words per page
parameter int DataWidth = 32, // bits per word
parameter int MetaDataWidth = 12, // metadata such as ECC
parameter int TestModeWidth = 2
parameter int NumBanks = 2, // number of banks
parameter int InfosPerBank = 1, // info pages per bank
parameter int InfoTypes = 1, // different info types
parameter int InfoTypesWidth = 1, // different info types
parameter int PagesPerBank = 256,// data pages per bank
parameter int WordsPerPage = 256,// words per page
parameter int DataWidth = 32, // bits per word
parameter int MetaDataWidth = 12, // metadata such as ECC
parameter int TestModeWidth = 2
) (
input clk_i,
input rst_ni,
input flash_phy_pkg::flash_phy_prim_flash_req_t [NumBanks-1:0] flash_req_i,
output flash_phy_pkg::flash_phy_prim_flash_rsp_t [NumBanks-1:0] flash_rsp_o,
output logic [flash_phy_pkg::ProgTypes-1:0] prog_type_avail_o,
input init_i,
output init_busy_o,
input tck_i,
input tdi_i,
@@ -40,8 +43,14 @@ module prim_generic_flash #(
assign prog_type_avail_o[flash_ctrl_pkg::FlashProgRepair] = 1'b1;
for (genvar bank = 0; bank < NumBanks; bank++) begin : gen_prim_flash_banks
logic erase_suspend_req;
assign erase_suspend_req = flash_req_i[bank].erase_suspend_req &
(flash_req_i[bank].pg_erase_req | flash_req_i[bank].bk_erase_req);
prim_generic_flash_bank #(
.InfosPerBank(InfosPerBank),
.InfoTypes(InfoTypes),
.InfoTypesWidth(InfoTypesWidth),
.PagesPerBank(PagesPerBank),
.WordsPerPage(WordsPerPage),
.DataWidth(DataWidth),
@@ -55,13 +64,18 @@ module prim_generic_flash #(
.prog_type_i(flash_req_i[bank].prog_type),
.pg_erase_i(flash_req_i[bank].pg_erase_req),
.bk_erase_i(flash_req_i[bank].bk_erase_req),
.erase_suspend_req_i(erase_suspend_req),
.he_i(flash_req_i[bank].he),
.addr_i(flash_req_i[bank].addr),
.part_i(flash_req_i[bank].part),
.info_sel_i(flash_req_i[bank].info_sel),
.prog_data_i(flash_req_i[bank].prog_full_data),
.ack_o(flash_rsp_o[bank].ack),
.done_o(flash_rsp_o[bank].done),
.rd_data_o(flash_rsp_o[bank].rdata),
.init_i,
.init_busy_o(init_busy[bank]),
.erase_suspend_done_o(flash_rsp_o[bank].erase_suspend_done),
.flash_power_ready_h_i,
.flash_power_down_h_i
);

@@ -6,11 +6,13 @@
//
module prim_generic_flash_bank #(
parameter int InfosPerBank = 1, // info pages per bank
parameter int PagesPerBank = 256, // data pages per bank
parameter int WordsPerPage = 256, // words per page
parameter int DataWidth = 32, // bits per word
parameter int MetaDataWidth = 12, // this is a temporary parameter to work around ECC issues
parameter int InfosPerBank = 1, // info pages per bank
parameter int InfoTypes = 1, // different info types
parameter int InfoTypesWidth = 1, // different info types
parameter int PagesPerBank = 256, // data pages per bank
parameter int WordsPerPage = 256, // words per page
parameter int DataWidth = 32, // bits per word
parameter int MetaDataWidth = 12, // this is a temporary parameter to work around ECC issues
// Derived parameters
localparam int PageW = $clog2(PagesPerBank),
@@ -26,12 +28,17 @@ module prim_generic_flash_bank #(
input flash_ctrl_pkg::flash_prog_e prog_type_i,
input pg_erase_i,
input bk_erase_i,
input erase_suspend_req_i,
input he_i,
input [AddrW-1:0] addr_i,
input flash_ctrl_pkg::flash_part_e part_i,
input [InfoTypesWidth-1:0] info_sel_i,
input [DataWidth-1:0] prog_data_i,
output logic ack_o,
output logic done_o,
output logic erase_suspend_done_o,
output logic [DataWidth-1:0] rd_data_o,
input init_i,
output logic init_busy_o,
input flash_power_ready_h_i,
input flash_power_down_h_i
@@ -72,6 +79,7 @@ module prim_generic_flash_bank #(
logic [DataWidth-1:0] mem_wdata;
logic [AddrW-1:0] mem_addr;
flash_ctrl_pkg::flash_part_e mem_part;
logic [InfoTypesWidth-1:0] mem_info_sel;
// insert a fifo here to break the large fanout from inputs to memories on reads
typedef struct packed {
@@ -83,6 +91,7 @@ module prim_generic_flash_bank #(
logic bk_erase;
logic [AddrW-1:0] addr;
flash_ctrl_pkg::flash_part_e part;
logic [InfoTypesWidth-1:0] info_sel;
logic [DataWidth-1:0] prog_data;
} cmd_payload_t;
@@ -100,6 +109,7 @@ module prim_generic_flash_bank #(
bk_erase: bk_erase_i,
addr: addr_i,
part: part_i,
info_sel: info_sel_i,
prog_data: prog_data_i
};
@@ -145,6 +155,7 @@ module prim_generic_flash_bank #(
assign mem_rd_d = mem_req & ~mem_wr;
assign mem_addr = cmd_q.addr + index_cnt[AddrW-1:0];
assign mem_part = cmd_q.part;
assign mem_info_sel = cmd_q.info_sel;
always_ff @(posedge clk_i or negedge rst_ni) begin
if (!rst_ni) st_q <= StReset;
@@ -177,11 +188,14 @@ module prim_generic_flash_bank #(
// latch partition being read since the command fifo is popped early
flash_ctrl_pkg::flash_part_e rd_part_q;
logic [InfoTypesWidth-1:0] info_sel_q;
always_ff @(posedge clk_i or negedge rst_ni) begin
if (!rst_ni) begin
rd_part_q <= flash_ctrl_pkg::FlashPartData;
info_sel_q <= '0;
end else if (mem_rd_d) begin
rd_part_q <= cmd_q.part;
info_sel_q <= cmd_q.info_sel;
end
end
@@ -230,11 +244,12 @@ module prim_generic_flash_bank #(
init_busy_o = '0;
pop_cmd = '0;
done_o = '0;
erase_suspend_done_o = '0;
unique case (st_q)
StReset: begin
init_busy_o = 1'b1;
if (flash_power_ready_h_i && !flash_power_down_h_i) begin
if (init_i && flash_power_ready_h_i && !flash_power_down_h_i) begin
st_d = StInit;
end
end
@@ -315,7 +330,14 @@ module prim_generic_flash_bank #(
StErase: begin
// Actual erasing of the page
if (index_cnt < index_limit_q || time_cnt < time_limit_q) begin
if (erase_suspend_req_i) begin
st_d = StIdle;
pop_cmd = 1'b1;
done_o = 1'b1;
erase_suspend_done_o = 1'b1;
time_cnt_clr = 1'b1;
index_cnt_clr = 1'b1;
end else if (index_cnt < index_limit_q || time_cnt < time_limit_q) begin
mem_req = 1'b1;
mem_wr = 1'b1;
mem_wdata = {DataWidth{1'b1}};
@@ -345,8 +367,10 @@ module prim_generic_flash_bank #(
localparam int MemWidth = DataWidth - MetaDataWidth;
logic [DataWidth-1:0] rd_data_main, rd_data_info;
logic [MemWidth-1:0] rd_nom_data_main, rd_nom_data_info;
logic [MetaDataWidth-1:0] rd_meta_data_main, rd_meta_data_info;
logic [MemWidth-1:0] rd_nom_data_main;
logic [MetaDataWidth-1:0] rd_meta_data_main;
logic [InfoTypes-1:0][MemWidth-1:0] rd_nom_data_info;
logic [InfoTypes-1:0][MetaDataWidth-1:0] rd_meta_data_info;
prim_ram_1p #(
.Width(MemWidth),
@@ -376,40 +400,51 @@ module prim_generic_flash_bank #(
.rdata_o (rd_meta_data_main)
);
prim_ram_1p #(
.Width(MemWidth),
.Depth(WordsPerInfoBank),
.DataBitsPerMask(MemWidth)
) u_info_mem (
.clk_i,
.req_i (mem_req & (mem_part == flash_ctrl_pkg::FlashPartInfo)),
.write_i (mem_wr),
.addr_i (mem_addr[0 +: InfoAddrW]),
.wdata_i (mem_wdata[MemWidth-1:0]),
.wmask_i ({MemWidth{1'b1}}),
.rdata_o (rd_nom_data_info)
);
for (genvar info_type = 0; info_type < InfoTypes; info_type++) begin : gen_info_types
logic info_mem_req;
assign info_mem_req = mem_req &
(mem_part == flash_ctrl_pkg::FlashPartInfo) &
(mem_info_sel == info_type);
prim_ram_1p #(
.Width(MemWidth),
.Depth(WordsPerInfoBank),
.DataBitsPerMask(MemWidth)
) u_info_mem (
.clk_i,
.req_i (info_mem_req),
.write_i (mem_wr),
.addr_i (mem_addr[0 +: InfoAddrW]),
.wdata_i (mem_wdata[MemWidth-1:0]),
.wmask_i ({MemWidth{1'b1}}),
.rdata_o (rd_nom_data_info[info_type])
);
prim_ram_1p #(
.Width(MetaDataWidth),
.Depth(WordsPerInfoBank),
.DataBitsPerMask(MetaDataWidth)
) u_info_mem_meta (
.clk_i,
.req_i (info_mem_req),
.write_i (mem_wr),
.addr_i (mem_addr[0 +: InfoAddrW]),
.wdata_i (mem_wdata[MemWidth +: MetaDataWidth]),
.wmask_i ({MetaDataWidth{1'b1}}),
.rdata_o (rd_meta_data_info[info_type])
);
end
prim_ram_1p #(
.Width(MetaDataWidth),
.Depth(WordsPerInfoBank),
.DataBitsPerMask(MetaDataWidth)
) u_info_mem_meta (
.clk_i,
.req_i (mem_req & (mem_part == flash_ctrl_pkg::FlashPartInfo)),
.write_i (mem_wr),
.addr_i (mem_addr[0 +: InfoAddrW]),
.wdata_i (mem_wdata[MemWidth +: MetaDataWidth]),
.wmask_i ({MetaDataWidth{1'b1}}),
.rdata_o (rd_meta_data_info)
);
assign rd_data_main = {rd_meta_data_main, rd_nom_data_main};
assign rd_data_info = {rd_meta_data_info, rd_nom_data_info};
assign rd_data_info = {rd_meta_data_info[info_sel_q], rd_nom_data_info[info_sel_q]};
assign rd_data_d = rd_part_q == flash_ctrl_pkg::FlashPartData ? rd_data_main : rd_data_info;
flash_ctrl_pkg::flash_prog_e unused_prog_type;
assign unused_prog_type = cmd_q.prog_type;
logic unused_he;
assign unused_he = he_i;
endmodule // prim_generic_flash_bank

@@ -2,16 +2,17 @@
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
module prim_generic_otp #(
module prim_generic_otp
import prim_otp_pkg::*;
#(
// Native OTP word size. This determines the size_i granule.
parameter int Width = 16,
parameter int Depth = 1024,
parameter int CmdWidth = otp_ctrl_pkg::OtpCmdWidth,
// This determines the maximum number of native words that
// can be transferred across the interface in one cycle.
parameter int SizeWidth = otp_ctrl_pkg::OtpSizeWidth,
parameter int SizeWidth = 2,
// Width of the power sequencing signal.
parameter int PwrSeqWidth = otp_ctrl_pkg::OtpPwrSeqWidth,
parameter int PwrSeqWidth = 2,
// Number of Test TL-UL words
parameter int TlDepth = 16,
// Derived parameters
@@ -32,13 +33,13 @@ module prim_generic_otp #(
output logic ready_o,
input valid_i,
input [SizeWidth-1:0] size_i, // #(Native words)-1, e.g. size == 0 for 1 native word.
input [CmdWidth-1:0] cmd_i, // 00: read command, 01: write command, 11: init command
input cmd_e cmd_i, // 00: read command, 01: write command, 11: init command
input [AddrWidth-1:0] addr_i,
input [IfWidth-1:0] wdata_i,
// Response channel
output logic valid_o,
output logic [IfWidth-1:0] rdata_o,
output otp_ctrl_pkg::otp_err_e err_o
output err_e err_o
);
// Not supported in open-source emulation model.
@@ -131,7 +132,7 @@ module prim_generic_otp #(
} state_e;
state_e state_d, state_q;
otp_ctrl_pkg::otp_err_e err_d, err_q;
err_e err_d, err_q;
logic valid_d, valid_q;
logic req, wren, rvalid;
logic [1:0] rerror;
@@ -154,7 +155,7 @@ module prim_generic_otp #(
state_d = state_q;
ready_o = 1'b0;
valid_d = 1'b0;
err_d = otp_ctrl_pkg::NoError;
err_d = NoError;
req = 1'b0;
wren = 1'b0;
cnt_clr = 1'b0;
@@ -165,12 +166,12 @@ module prim_generic_otp #(
ResetSt: begin
ready_o = 1'b1;
if (valid_i) begin
if (cmd_i == otp_ctrl_pkg::OtpInit) begin
if (cmd_i == Init) begin
state_d = InitSt;
end else begin
// Invalid commands get caught here
valid_d = 1'b1;
err_d = otp_ctrl_pkg::MacroError;
err_d = MacroError;
end
end
end
@@ -184,14 +185,14 @@ module prim_generic_otp #(
ready_o = 1'b1;
if (valid_i) begin
cnt_clr = 1'b1;
err_d = otp_ctrl_pkg::NoError;
err_d = NoError;
unique case (cmd_i)
otp_ctrl_pkg::OtpRead: state_d = ReadSt;
otp_ctrl_pkg::OtpWrite: state_d = WriteCheckSt;
Read: state_d = ReadSt;
Write: state_d = WriteCheckSt;
default: begin
// Invalid commands get caught here
valid_d = 1'b1;
err_d = otp_ctrl_pkg::MacroError;
err_d = MacroError;
end
endcase // cmd_i
end
@@ -209,7 +210,7 @@ module prim_generic_otp #(
if (rerror[1]) begin
state_d = IdleSt;
valid_d = 1'b1;
err_d = otp_ctrl_pkg::MacroEccUncorrError;
err_d = MacroEccUncorrError;
end else begin
if (cnt_q == size_q) begin
state_d = IdleSt;
@@ -219,7 +220,7 @@ module prim_generic_otp #(
end
// Correctable error, carry on but signal back.
if (rerror[0]) begin
err_d = otp_ctrl_pkg::MacroEccCorrError;
err_d = MacroEccCorrError;
end
end
end
@@ -239,7 +240,7 @@ module prim_generic_otp #(
if (rerror[1] || (rdata_d & wdata_q[cnt_q]) != rdata_d) begin
state_d = IdleSt;
valid_d = 1'b1;
err_d = otp_ctrl_pkg::MacroWriteBlankError;
err_d = MacroWriteBlankError;
end else begin
if (cnt_q == size_q) begin
cnt_clr = 1'b1;
@@ -280,7 +281,7 @@ module prim_generic_otp #(
.EnableECC (1'b1),
.EnableInputPipeline (1),
.EnableOutputPipeline (1)
) i_prim_ram_1p_adv (
) u_prim_ram_1p_adv (
.clk_i,
.rst_ni,
.req_i ( req ),
@@ -295,7 +296,7 @@ module prim_generic_otp #(
);
// Currently it is assumed that no wrap arounds can occur.
`ASSERT(NoWrapArounds_A, addr >= addr_q)
`ASSERT(NoWrapArounds_A, req |-> (addr >= addr_q))
//////////
// Regs //
@@ -318,7 +319,7 @@ module prim_generic_otp #(
always_ff @(posedge clk_i or negedge rst_ni) begin : p_regs
if (!rst_ni) begin
valid_q <= '0;
err_q <= otp_ctrl_pkg::NoError;
err_q <= NoError;
addr_q <= '0;
wdata_q <= '0;
rdata_q <= '0;

@@ -0,0 +1,38 @@
CAPI=2:
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
name: "lowrisc:prim_xilinx:buf"
description: "buffer"
filesets:
files_rtl:
files:
- rtl/prim_xilinx_buf.sv
file_type: systemVerilogSource
files_verilator_waiver:
depend:
# common waivers
- lowrisc:lint:common
file_type: vlt
files_ascentlint_waiver:
depend:
# common waivers
- lowrisc:lint:common
file_type: waiver
files_veriblelint_waiver:
depend:
# common waivers
- lowrisc:lint:common
- lowrisc:lint:comportable
targets:
default:
filesets:
- tool_verilator ? (files_verilator_waiver)
- tool_ascentlint ? (files_ascentlint_waiver)
- tool_veriblelint ? (files_veriblelint_waiver)
- files_rtl

@@ -0,0 +1,12 @@
// Copyright lowRISC contributors.
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
module prim_xilinx_buf (
input in_i,
(* keep = "true" *) output logic out_o
);
assign out_o = in_i;
endmodule : prim_xilinx_buf

@@ -30,3 +30,8 @@ targets:
- tool_ascentlint ? (files_ascentlint)
- tool_veriblelint ? (files_veriblelint)
- files_check_tool_requirements
tools:
ascentlint:
ascentlint_options:
- "-wait_license"
- "-stop_on_error"

@@ -63,10 +63,6 @@ targets:
parameters:
- SYNTHESIS=true
tools:
ascentlint:
ascentlint_options:
- "-wait_license"
- "-stop_on_error"
verilator:
mode: lint-only
verilator_options:

@@ -6,6 +6,6 @@
# waiver for unused_* signals for HIER_* rules (note that our policy file has a
# similar exception list for rule NOT_READ)
waive -rules {HIER_NET_NOT_READ HIER_BRANCH_NOT_READ} -pattern {unused_*}
waive -rules {HIER_NET_NOT_READ HIER_BRANCH_NOT_READ} -pattern {gen_*.unused_*}
waive -rules {HIER_NET_NOT_READ HIER_BRANCH_NOT_READ} -regexp {unused_.*}
waive -rules {HIER_NET_NOT_READ HIER_BRANCH_NOT_READ} -regexp {gen_.*\.unused_.*}
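The waiver file moves from ascentlint's glob-style `-pattern` matching to `-regexp`. Outside the tool, the two new regular expressions can be sanity-checked in Python against representative net names; the net names below are made up for illustration, and Python's `re.search` only approximates ascentlint's matching semantics:

```python
import re

# The two regular expressions from the updated waiver file.
waivers = [re.compile(r"unused_.*"), re.compile(r"gen_.*\.unused_.*")]

# Hypothetical net names: the first two should be waived, the third should not.
nets = ["unused_clk", "gen_loop[0].unused_sig", "core_i.data_valid"]

waived = [any(w.search(net) for w in waivers) for net in nets]
print(waived)  # → [True, True, False]
```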


@@ -2,8 +2,10 @@
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
// Ascentlint-specific results parsing script that is called after running lint
report_cmd: "{proj_root}/hw/lint/tools/{tool}/parse-lint-report.py "
report_opts: ["--repdir={build_dir}/lint-{tool}",
"--outdir={build_dir}"]
tool: "ascentlint"
// Ascentlint-specific results parsing script that is called after running lint
report_cmd: "{lint_root}/tools/{tool}/parse-lint-report.py "
report_opts: ["--repdir={build_dir}/lint-{tool}",
"--outdir={build_dir}"]
}


@@ -3,12 +3,13 @@
// SPDX-License-Identifier: Apache-2.0
{
flow: lint
flow_makefile: "{proj_root}/hw/lint/tools/dvsim/lint.mk"
lint_root: "{proj_root}/hw/lint"
flow_makefile: "{lint_root}/tools/dvsim/lint.mk"
import_cfgs: [// common server configuration for results upload
"{proj_root}/hw/data/common_project_cfg.hjson"
// tool-specific configuration
"{proj_root}/hw/lint/tools/dvsim/{tool}.hjson"]
"{lint_root}/tools/dvsim/{tool}.hjson"]
// Name of the DUT / top-level to be run through lint
dut: "{name}"
@@ -18,7 +19,7 @@
build_log: "{build_dir}/lint.log"
// We rely on fusesoc to run lint for us
build_cmd: "fusesoc"
build_opts: ["--cores-root {proj_root}/hw",
build_opts: ["--cores-root {proj_root}",
"run",
"--flag=fileset_{design_level}",
"--target={flow}",
@@ -30,5 +31,4 @@
sv_flist_gen_cmd: ""
sv_flist_gen_opts: []
sv_flist_gen_dir: ""
tool_srcs: []
}


@@ -1,6 +1,7 @@
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
.DEFAULT_GOAL := all
all: build
@@ -8,30 +9,31 @@ all: build
###################
## build targets ##
###################
build: compile_result
build: build_result
pre_compile:
@echo "[make]: pre_compile"
mkdir -p ${build_dir} && env | sort > ${build_dir}/env_vars
mkdir -p ${tool_srcs_dir}
-cp -Ru ${tool_srcs} ${tool_srcs_dir}
pre_build:
@echo "[make]: pre_build"
mkdir -p ${build_dir}
ifneq (${pre_build_cmds},)
cd ${build_dir} && ${pre_build_cmds}
endif
compile: pre_compile
@echo "[make]: compile"
# we check the status in the parse script below
do_build: pre_build
@echo "[make]: do_build"
-cd ${build_dir} && ${build_cmd} ${build_opts} 2>&1 | tee ${build_log}
post_compile: compile
@echo "[make]: post_compile"
post_build: do_build
@echo "[make]: post_build"
ifneq (${post_build_cmds},)
cd ${build_dir} && ${post_build_cmds}
endif
# Parse out result
compile_result: post_compile
@echo "[make]: compile_result"
build_result: post_build
@echo "[make]: build_result"
${report_cmd} ${report_opts}
.PHONY: build \
run \
pre_compile \
compile \
post_compile \
compile_result
pre_build \
do_build \
post_build \
build_result


@@ -2,12 +2,14 @@
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
// TODO(#1342): switch over to native structured tool output, once supported by Verible
// Verible lint-specific results parsing script that is called after running lint
report_cmd: "{proj_root}/hw/lint/tools/{tool}/parse-lint-report.py "
report_opts: ["--repdir={build_dir}",
"--outdir={build_dir}"]
tool: "veriblelint"
// This customizes the report format for style lint
is_style_lint: True
// TODO(#1342): switch over to native structured tool output, once supported by Verible
// Verible lint-specific results parsing script that is called after running lint
report_cmd: "{lint_root}/tools/{tool}/parse-lint-report.py "
report_opts: ["--repdir={build_dir}",
"--outdir={build_dir}"]
// This customizes the report format for style lint
is_style_lint: True
}


@@ -2,8 +2,10 @@
// Licensed under the Apache License, Version 2.0, see LICENSE for details.
// SPDX-License-Identifier: Apache-2.0
{
// Verilator lint-specific results parsing script that is called after running lint
report_cmd: "{proj_root}/hw/lint/tools/{tool}/parse-lint-report.py "
report_opts: ["--logpath={build_dir}/lint.log",
"--reppath={build_dir}/results.hjson"]
tool: "verilator"
// Verilator lint-specific results parsing script that is called after running lint
report_cmd: "{lint_root}/tools/{tool}/parse-lint-report.py "
report_opts: ["--logpath={build_dir}/lint.log",
"--reppath={build_dir}/results.hjson"]
}


@@ -0,0 +1,100 @@
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
import logging as log
import sys
from CfgJson import load_hjson
import FpvCfg
import LintCfg
import SimCfg
import SynCfg
def _load_cfg(path, initial_values):
'''Worker function for make_cfg.
initial_values is passed to load_hjson (see documentation there).
Returns a pair (cls, hjson_data) on success or raises a RuntimeError on
failure.
'''
# Start by loading up the hjson file and any included files
hjson_data = load_hjson(path, initial_values)
# Look up the value of flow in the loaded data. This is a required field,
# and tells us what sort of FlowCfg to make.
flow = hjson_data.get('flow')
if flow is None:
raise RuntimeError('{!r}: No value for the "flow" key. Are you sure '
'this is a dvsim configuration file?'
.format(path))
classes = [
LintCfg.LintCfg,
SynCfg.SynCfg,
FpvCfg.FpvCfg,
SimCfg.SimCfg
]
found_cls = None
known_types = []
for cls in classes:
assert cls.flow is not None
known_types.append(cls.flow)
if cls.flow == flow:
found_cls = cls
break
if found_cls is None:
raise RuntimeError('{}: Configuration file sets "flow" to {!r}, but '
'this is not a known flow (known: {}).'
.format(path, flow, ', '.join(known_types)))
return (found_cls, hjson_data)
def _make_child_cfg(path, args, initial_values):
try:
cls, hjson_data = _load_cfg(path, initial_values)
except RuntimeError as err:
log.error(str(err))
sys.exit(1)
# Since this is a child configuration (from some primary configuration),
# make sure that we aren't ourselves a primary configuration. We don't need
# multi-level hierarchies and this avoids circular dependencies.
if 'use_cfgs' in hjson_data:
raise RuntimeError('{}: Configuration file has use_cfgs, but is '
'itself included from another configuration.'
.format(path))
# Call cls as a constructor. Note that we pass None as the mk_config
# argument: this is not supposed to load anything else.
return cls(path, hjson_data, args, None)
def make_cfg(path, args, proj_root):
'''Make a flow config by loading the config file at path
args is the arguments passed to the dvsim.py tool and proj_root is the top
of the project.
'''
initial_values = {'proj_root': proj_root}
if args.tool is not None:
initial_values['tool'] = args.tool
try:
cls, hjson_data = _load_cfg(path, initial_values)
except RuntimeError as err:
log.error(str(err))
sys.exit(1)
def factory(child_path):
child_ivs = initial_values.copy()
child_ivs['flow'] = hjson_data['flow']
return _make_child_cfg(child_path, args, child_ivs)
return cls(path, hjson_data, args, factory)
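`make_cfg` dispatches on the `flow` key of the loaded hjson to pick a `FlowCfg` subclass. A condensed sketch of that dispatch; the stub classes stand in for the real `LintCfg`/`SynCfg`/`FpvCfg`/`SimCfg` imports, and the `syn`/`sim` flow strings are assumptions (only `lint` and `fpv` are visible in this diff):

```python
# Stub classes standing in for the real FlowCfg subclasses; each declares the
# flow string that selects it (the 'syn'/'sim' values are assumed here).
class LintCfg:
    flow = "lint"

class SynCfg:
    flow = "syn"

class FpvCfg:
    flow = "fpv"

class SimCfg:
    flow = "sim"

def pick_flow_class(hjson_data):
    """Pick the config class whose `flow` matches the hjson's 'flow' key."""
    flow = hjson_data.get("flow")
    if flow is None:
        raise RuntimeError('No value for the "flow" key.')
    for cls in (LintCfg, SynCfg, FpvCfg, SimCfg):
        if cls.flow == flow:
            return cls
    raise RuntimeError("Unknown flow {!r}.".format(flow))

print(pick_flow_class({"flow": "lint"}).__name__)  # → LintCfg
```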

vendor/lowrisc_ip/util/dvsim/CfgJson.py

@@ -0,0 +1,172 @@
# Copyright lowRISC contributors.
# Licensed under the Apache License, Version 2.0, see LICENSE for details.
# SPDX-License-Identifier: Apache-2.0
'''A wrapper for loading hjson files as used by dvsim's FlowCfg'''
from utils import parse_hjson, subst_wildcards
# A set of fields that can be overridden on the command line and shouldn't be
# loaded from the hjson in that case.
_CMDLINE_FIELDS = {'tool'}
def load_hjson(path, initial_values):
'''Load an hjson file and any includes
Combines them all into a single dictionary, which is then returned. This
does wildcard substitution on include names (since it might be needed to
find included files), but not otherwise.
initial_values is a starting point for the dictionary to be returned (which
is not modified). It needs to contain values for anything needed to resolve
include files (typically, this is 'proj_root' and 'tool' (if set)).
'''
worklist = [path]
seen = {path}
ret = initial_values.copy()
is_first = True
# Figure out a list of fields that had a value from the command line. These
# should have been passed in as part of initial_values and we need to know
# that we can safely ignore updates.
arg_keys = _CMDLINE_FIELDS & initial_values.keys()
while worklist:
next_path = worklist.pop()
new_paths = _load_single_file(ret, next_path, is_first, arg_keys)
if set(new_paths) & seen:
raise RuntimeError('{!r}: The file {!r} appears more than once '
'when processing includes.'
.format(path, next_path))
seen |= set(new_paths)
worklist += new_paths
is_first = False
return ret
def _load_single_file(target, path, is_first, arg_keys):
'''Load a single hjson file, merging its keys into target
Returns a list of further includes that should be loaded.
'''
hjson = parse_hjson(path)
if not isinstance(hjson, dict):
raise RuntimeError('{!r}: Top-level hjson object is not a dictionary.'
.format(path))
import_cfgs = []
for key, dict_val in hjson.items():
# If this key got set at the start of time and we want to ignore any
# updates: ignore them!
if key in arg_keys:
continue
# If key is 'import_cfgs', this should be a list. Add each item to the
# list of cfgs to process
if key == 'import_cfgs':
if not isinstance(dict_val, list):
raise RuntimeError('{!r}: import_cfgs value is {!r}, but '
'should be a list.'
.format(path, dict_val))
import_cfgs += dict_val
continue
# 'use_cfgs' is a bit like 'import_cfgs', but is only used for primary
# config files (where it is a list of the child configs). This
# shouldn't be used except at top-level (the first configuration file
# to be loaded).
#
# If defined, check that it's a list, but then allow it to be set in
# the target dictionary as usual.
if key == 'use_cfgs':
if not is_first:
raise RuntimeError('{!r}: File is included by another one, '
'but defines "use_cfgs".'
.format(path))
if not isinstance(dict_val, list):
raise RuntimeError('{!r}: use_cfgs must be a list. Saw {!r}.'
.format(path, dict_val))
# Otherwise, update target with this attribute
set_target_attribute(path, target, key, dict_val)
# Expand the names of imported configuration files as we return them
return [subst_wildcards(cfg_path,
target,
ignored_wildcards=[],
ignore_error=False)
for cfg_path in import_cfgs]
def set_target_attribute(path, target, key, dict_val):
'''Set an attribute on the target dictionary
This performs checks for conflicting values and merges lists /
dictionaries.
'''
old_val = target.get(key)
if old_val is None:
# A new attribute (or the old value was None, in which case it's
# just a placeholder and needs writing). Set it and return.
target[key] = dict_val
return
if isinstance(old_val, list):
if not isinstance(dict_val, list):
raise RuntimeError('{!r}: Conflicting types for key {!r}: was '
'{!r}, a list, but loaded value is {!r}, '
'of type {}.'
.format(path, key, old_val, dict_val,
type(dict_val).__name__))
# Lists are merged by concatenation
target[key] += dict_val
return
# The other types we support are "scalar" types.
scalar_types = [(str, [""]), (int, [0, -1]), (bool, [False])]
defaults = None
for st_type, st_defaults in scalar_types:
if isinstance(dict_val, st_type):
defaults = st_defaults
break
if defaults is None:
raise RuntimeError('{!r}: Value for key {!r} is {!r}, of '
'unknown type {}.'
.format(path, key, dict_val,
type(dict_val).__name__))
if not isinstance(old_val, st_type):
raise RuntimeError('{!r}: Value for key {!r} is {!r}, but '
'we already had the value {!r}, of an '
'incompatible type.'
.format(path, key, dict_val, old_val))
# The types are compatible. If the values are equal, there's nothing more
# to do
if old_val == dict_val:
return
old_is_default = old_val in defaults
new_is_default = dict_val in defaults
# Similarly, if new value looks like a default, ignore it (regardless
# of whether the current value looks like a default).
if new_is_default:
return
# If the existing value looks like a default and the new value doesn't,
# take the new value.
if old_is_default:
target[key] = dict_val
return
# Neither value looks like a default. Raise an error.
raise RuntimeError('{!r}: Value for key {!r} is {!r}, but '
'we already had a conflicting value of {!r}.'
.format(path, key, dict_val, old_val))
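The merge rules that `set_target_attribute` applies can be restated as a self-contained sketch (a simplified re-implementation for illustration, not the dvsim code itself — the real function also reports the offending file path): lists concatenate, equal scalars are idempotent, a value that looks like a type default never overrides a concrete value, and genuinely conflicting scalars raise an error.

```python
# Simplified sketch of the merge semantics described above.
_DEFAULTS = {str: [""], int: [0, -1], bool: [False]}

def merge_attr(target, key, new_val):
    old_val = target.get(key)
    if old_val is None:                  # new key (or a None placeholder)
        target[key] = new_val
        return
    if isinstance(old_val, list):        # lists merge by concatenation
        target[key] = old_val + list(new_val)
        return
    if old_val == new_val:               # equal values: nothing to do
        return
    defaults = _DEFAULTS.get(type(new_val), [])
    if new_val in defaults:              # incoming default never overrides
        return
    if old_val in defaults:              # a stored default loses to a real value
        target[key] = new_val
        return
    raise RuntimeError("conflicting values for {!r}: {!r} vs {!r}"
                       .format(key, old_val, new_val))

cfg = {"seen_cfgs": []}
merge_attr(cfg, "tool", "")              # default-looking placeholder
merge_attr(cfg, "tool", "ascentlint")    # real value replaces the default
merge_attr(cfg, "tool", "")              # ignored: default cannot override
merge_attr(cfg, "seen_cfgs", ["a.hjson"])
merge_attr(cfg, "seen_cfgs", ["b.hjson"])
print(cfg)
```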


@@ -39,6 +39,11 @@ class Deploy():
# Max jobs dispatched in one go.
slot_limit = 20
# List of variable names that are to be treated as "list of commands".
# This tells `construct_cmd` that these vars are lists that need to
# be joined with '&&' instead of a space.
cmds_list_vars = []
def __self_str__(self):
if log.getLogger().isEnabledFor(VERBOSE):
return pprint.pformat(self.__dict__)
@@ -197,7 +202,10 @@ class Deploy():
pretty_value = []
for item in value:
pretty_value.append(item.strip())
value = " ".join(pretty_value)
# Join attributes that are list of commands with '&&' to chain
# them together when executed as a Make target's recipe.
separator = " && " if attr in self.cmds_list_vars else " "
value = separator.join(pretty_value)
if type(value) is bool:
value = int(value)
if type(value) is str:
@@ -248,6 +256,14 @@ class Deploy():
exports = os.environ.copy()
exports.update(self.exports)
# Clear the magic MAKEFLAGS variable from exports if necessary. This
# variable is used by recursive Make calls to pass variables from one
# level to the next. Here, self.cmd is a call to Make but it's
# logically a top-level invocation: we don't want to pollute the flow's
# Makefile with Make variables from any wrapper that called dvsim.
if 'MAKEFLAGS' in exports:
del exports['MAKEFLAGS']
args = shlex.split(self.cmd)
try:
# If renew_odir flag is True - then move it.
@@ -626,6 +642,8 @@ class CompileSim(Deploy):
# Register all builds with the class
items = []
cmds_list_vars = ["pre_build_cmds", "post_build_cmds"]
def __init__(self, build_mode, sim_cfg):
# Initialize common vars.
super().__init__(sim_cfg)
@@ -636,8 +654,7 @@ class CompileSim(Deploy):
self.mandatory_cmd_attrs.update({
# tool srcs
"tool_srcs": False,
"tool_srcs_dir": False,
"proj_root": False,
# Flist gen
"sv_flist_gen_cmd": False,
@@ -646,8 +663,10 @@
# Build
"build_dir": False,
"pre_build_cmds": False,
"build_cmd": False,
"build_opts": False
"build_opts": False,
"post_build_cmds": False,
})
self.mandatory_misc_attrs.update({
@@ -697,8 +716,7 @@ class CompileOneShot(Deploy):
self.mandatory_cmd_attrs.update({
# tool srcs
"tool_srcs": False,
"tool_srcs_dir": False,
"proj_root": False,
# Flist gen
"sv_flist_gen_cmd": False,
@@ -707,8 +725,10 @@
# Build
"build_dir": False,
"pre_build_cmds": False,
"build_cmd": False,
"build_opts": False,
"post_build_cmds": False,
"build_log": False,
# Report processing
@@ -743,6 +763,8 @@ class RunTest(Deploy):
# Register all runs with the class
items = []
cmds_list_vars = ["pre_run_cmds", "post_run_cmds"]
def __init__(self, index, test, sim_cfg):
# Initialize common vars.
super().__init__(sim_cfg)
@@ -753,20 +775,17 @@
self.mandatory_cmd_attrs.update({
# tool srcs
"tool_srcs": False,
"tool_srcs_dir": False,
"proj_root": False,
"uvm_test": False,
"uvm_test_seq": False,
"run_opts": False,
"sw_test": False,
"sw_test_is_prebuilt": False,
"sw_images": False,
"sw_build_device": False,
"sw_build_dir": False,
"run_dir": False,
"pre_run_cmds": False,
"run_cmd": False,
"run_opts": False
"run_opts": False,
"post_run_cmds": False,
})
self.mandatory_misc_attrs.update({
@@ -830,6 +849,54 @@ class RunTest(Deploy):
return RunTest.seeds.pop(0)
class CovUnr(Deploy):
"""
Abstraction for coverage UNR flow.
"""
# Register all builds with the class
items = []
def __init__(self, sim_cfg):
# Initialize common vars.
super().__init__(sim_cfg)
self.target = "cov_unr"
self.mandatory_cmd_attrs.update({
# tool srcs
"proj_root": False,
# Need to generate filelist based on build mode
"sv_flist_gen_cmd": False,
"sv_flist_gen_dir": False,
"sv_flist_gen_opts": False,
"build_dir": False,
"cov_unr_build_cmd": False,
"cov_unr_build_opts": False,
"cov_unr_run_cmd": False,
"cov_unr_run_opts": False
})
self.mandatory_misc_attrs.update({
"cov_unr_dir": False,
"build_fail_patterns": False
})
super().parse_dict(sim_cfg.__dict__)
self.__post_init__()
self.pass_patterns = []
# Reuse fail_patterns from sim build
self.fail_patterns = self.build_fail_patterns
# Start fail message construction
self.fail_msg = "\n**COV_UNR:** {}<br>\n".format(self.name)
log_sub_path = self.log.replace(self.sim_cfg.scratch_path + '/', '')
self.fail_msg += "**LOG:** $scratch_path/{}<br>\n".format(log_sub_path)
CovUnr.items.append(self)
class CovMerge(Deploy):
"""
Abstraction for merging coverage databases. An item of this class is created AFTER
@@ -996,8 +1063,7 @@ class CovAnalyze(Deploy):
self.mandatory_cmd_attrs.update({
# tool srcs
"tool_srcs": False,
"tool_srcs_dir": False,
"proj_root": False,
"cov_analyze_cmd": False,
"cov_analyze_opts": False
})


@@ -12,25 +12,39 @@ import sys
import hjson
from CfgJson import set_target_attribute
from Deploy import Deploy
from utils import VERBOSE, md_results_to_html, parse_hjson, subst_wildcards
# A set of fields that can be overridden on the command line.
_CMDLINE_FIELDS = {'tool', 'verbosity'}
from utils import (VERBOSE, md_results_to_html,
subst_wildcards, find_and_substitute_wildcards)
# Interface class for extensions.
class FlowCfg():
'''Base class for the different flows supported by dvsim.py
The constructor expects some parsed hjson data. Create these objects with
the factory function in CfgFactory.py, which loads the hjson data and picks
a subclass of FlowCfg based on its contents.
'''
# Set in subclasses. This is the key that must be used in an hjson file to
# tell dvsim.py which subclass to use.
flow = None
# Can be overridden in subclasses to configure which wildcards to ignore
# when expanding hjson.
ignored_wildcards = []
def __str__(self):
return pprint.pformat(self.__dict__)
def __init__(self, flow_cfg_file, proj_root, args):
def __init__(self, flow_cfg_file, hjson_data, args, mk_config):
# Options set from command line
self.items = args.items
self.list_items = args.list
self.select_cfgs = args.select_cfgs
self.flow_cfg_file = flow_cfg_file
self.proj_root = proj_root
self.args = args
self.scratch_root = args.scratch_root
self.branch = args.branch
@@ -40,9 +54,6 @@ class FlowCfg():
self.project = ""
self.scratch_path = ""
# Imported cfg files using 'import_cfgs' keyword
self.imported_cfg_files = [flow_cfg_file]
# Add exports using 'exports' keyword - these are exported to the child
# process' environment.
self.exports = []
@@ -72,7 +83,7 @@
self.errors_seen = False
self.rel_path = ""
self.results_title = ""
self.revision_string = ""
self.revision = ""
self.results_server_prefix = ""
self.results_server_url_prefix = ""
self.results_server_cmd = ""
@@ -95,7 +106,68 @@
self.email_summary_md = ""
self.results_summary_md = ""
def __post_init__(self):
# Merge in the values from the loaded hjson file. If subclasses want to
# add other default parameters that depend on the parameters above,
# they can override _merge_hjson and add their parameters at the start
# of that.
self._merge_hjson(hjson_data)
# Is this a primary config? If so, we need to load up all the child
# configurations at this point. If not, we place ourselves into
# self.cfgs and consider ourselves a sort of "degenerate primary
# configuration".
self.is_primary_cfg = 'use_cfgs' in hjson_data
if not self.is_primary_cfg:
self.cfgs.append(self)
else:
for entry in self.use_cfgs:
self._load_child_cfg(entry, mk_config)
if self.rel_path == "":
self.rel_path = os.path.dirname(self.flow_cfg_file).replace(
self.proj_root + '/', '')
# Process overrides before substituting wildcards
self._process_overrides()
# Expand wildcards. If subclasses need to mess around with parameters
# after merging the hjson but before expansion, they can override
# _expand and add the code at the start.
self._expand()
# Run any final checks
self._post_init()
def _merge_hjson(self, hjson_data):
'''Take hjson data and merge it into self.__dict__
Subclasses that need to do something just before the merge should
override this method and call super()._merge_hjson(..) at the end.
'''
for key, value in hjson_data.items():
set_target_attribute(self.flow_cfg_file,
self.__dict__,
key,
value)
def _expand(self):
'''Called to expand wildcards after merging hjson
Subclasses can override this to do something just before expansion.
'''
# If this is a primary configuration, it doesn't matter if we don't
# manage to expand everything.
partial = self.is_primary_cfg
self.__dict__ = find_and_substitute_wildcards(self.__dict__,
self.__dict__,
self.ignored_wildcards,
ignore_error=partial)
def _post_init(self):
# Run some post init checks
if not self.is_primary_cfg:
# Check if self.cfgs is a list of exactly 1 item (self)
@@ -103,11 +175,27 @@
log.error("Parse error!\n%s", self.cfgs)
sys.exit(1)
def create_instance(self, flow_cfg_file):
def create_instance(self, mk_config, flow_cfg_file):
'''Create a new instance of this class for the given config file.
mk_config is a factory method (passed explicitly to avoid a circular
dependency between this file and CfgFactory.py).
'''
return type(self)(flow_cfg_file, self.proj_root, self.args)
new_instance = mk_config(flow_cfg_file)
# Sanity check to make sure the new object is the same class as us: we
# don't yet support heterogeneous primary configurations.
if type(self) is not type(new_instance):
log.error("{}: Loading child configuration at {!r}, but the "
"resulting flow types don't match: ({} vs. {})."
.format(self.flow_cfg_file,
flow_cfg_file,
type(self).__name__,
type(new_instance).__name__))
sys.exit(1)
return new_instance
def kill(self):
'''kill running processes and jobs gracefully
@@ -115,187 +203,38 @@
for item in self.deploy:
item.kill()
def _parse_cfg(self, path, is_entry_point):
'''Load an hjson config file at path and update self accordingly.
def _load_child_cfg(self, entry, mk_config):
'''Load a child configuration for a primary cfg'''
if type(entry) is str:
# Treat this as a file entry. Substitute wildcards in cfg_file
# files since we need to process them right away.
cfg_file = subst_wildcards(entry,
self.__dict__,
ignore_error=True)
self.cfgs.append(self.create_instance(mk_config, cfg_file))
If is_entry_point is true, this is the top-level configuration file, so
it's possible that this is a primary config.
elif type(entry) is dict:
# Treat this as a cfg expanded in-line
temp_cfg_file = self._conv_inline_cfg_to_hjson(entry)
if not temp_cfg_file:
return
self.cfgs.append(self.create_instance(mk_config, temp_cfg_file))
'''
hjson_dict = parse_hjson(path)
# Delete the temp_cfg_file once the instance is created
try:
log.log(VERBOSE, "Deleting temp cfg file:\n%s",
temp_cfg_file)
os.system("/bin/rm -rf " + temp_cfg_file)
except IOError:
log.error("Failed to remove temp cfg file:\n%s",
temp_cfg_file)
# Check if this is the primary cfg, if this is the entry point cfg file
if is_entry_point:
self.is_primary_cfg = self.check_if_primary_cfg(hjson_dict)
# If not a primary cfg, then register self with self.cfgs
if self.is_primary_cfg is False:
self.cfgs.append(self)
# Resolve the raw hjson dict to build this object
self.resolve_hjson_raw(path, hjson_dict)
def _parse_flow_cfg(self, path):
'''Parse the flow's hjson configuration.
This is a private API which should be called by the __init__ method of
each subclass.
'''
self._parse_cfg(path, True)
if self.rel_path == "":
self.rel_path = os.path.dirname(self.flow_cfg_file).replace(
self.proj_root + '/', '')
def check_if_primary_cfg(self, hjson_dict):
# This is a primary cfg only if it has a single key called "use_cfgs"
# which contains a list of actual flow cfgs.
hjson_cfg_dict_keys = hjson_dict.keys()
return ("use_cfgs" in hjson_cfg_dict_keys and type(hjson_dict["use_cfgs"]) is list)
def _set_attribute(self, path, key, dict_val):
'''Set an attribute from an hjson file
The path argument is the path for the hjson file that we're reading.
'''
# Is this value overridden on the command line? If so, use the override
# instead.
args_val = None
if key in _CMDLINE_FIELDS:
args_val = getattr(self.args, key, None)
override_msg = ''
if args_val is not None:
dict_val = args_val
override_msg = ' from command-line override'
self_val = getattr(self, key, None)
if self_val is None:
# A new attribute (or the old value was None, in which case it's
# just a placeholder and needs writing). Set it and return.
setattr(self, key, dict_val)
return
# This is already an attribute. Firstly, we need to make sure the types
# are compatible.
if type(dict_val) != type(self_val):
log.error("Conflicting types for key {!r} when loading {!r}. "
"Cannot override value {!r} with {!r} (which is of "
"type {}{})."
.format(key, path, self_val, dict_val,
type(dict_val).__name__, override_msg))
else:
log.error(
"Type of entry \"%s\" in the \"use_cfgs\" key is invalid: %s",
entry, str(type(entry)))
sys.exit(1)
# Looks like the types are compatible. If they are lists, concatenate
# them.
if isinstance(self_val, list):
setattr(self, key, self_val + dict_val)
return
# Otherwise, check whether this is a type we know how to deal with.
scalar_types = {str: [""], int: [0, -1], bool: [False]}
defaults = scalar_types.get(type(dict_val))
if defaults is None:
log.error("When loading {!r} and setting key {!r}, found a value "
"of {!r}{} with unsupported type {}."
.format(path, key, dict_val,
override_msg, type(dict_val).__name__))
sys.exit(1)
# If the values are equal, there's nothing more to do
if self_val == dict_val:
return
old_is_default = self_val in defaults
new_is_default = dict_val in defaults
# Similarly, if new value looks like a default, ignore it (regardless
# of whether the current value looks like a default).
if new_is_default:
return
# If the existing value looks like a default and the new value doesn't,
# take the new value.
if old_is_default:
setattr(self, key, dict_val)
return
# Neither value looks like a default. Raise an error.
log.error("When loading {!r}, key {!r} is given a value of "
"{!r}{}, but the key is already set to {!r}."
.format(path, key, dict_val, override_msg, self_val))
sys.exit(1)
def resolve_hjson_raw(self, path, hjson_dict):
import_cfgs = []
use_cfgs = []
for key, dict_val in hjson_dict.items():
# If key is 'import_cfgs' then add to the list of cfgs to process
if key == 'import_cfgs':
import_cfgs.extend(dict_val)
continue
# If the key is 'use_cfgs', we're only allowed to take effect for a
# primary config list. If we are in a primary config list, add it.
if key == 'use_cfgs':
if not self.is_primary_cfg:
log.error("Key 'use_cfgs' encountered in the non-primary "
"cfg file list {!r}."
.format(path))
sys.exit(1)
use_cfgs.extend(dict_val)
continue
# Otherwise, set an attribute on self.
self._set_attribute(path, key, dict_val)
# Parse imported cfgs
for cfg_file in import_cfgs:
if cfg_file not in self.imported_cfg_files:
self.imported_cfg_files.append(cfg_file)
# Substitute wildcards in cfg_file files since we need to process
# them right away.
cfg_file = subst_wildcards(cfg_file, self.__dict__)
self._parse_cfg(cfg_file, False)
else:
log.error("Cfg file \"%s\" has already been parsed", cfg_file)
# Parse primary cfg files
if self.is_primary_cfg:
for entry in use_cfgs:
if type(entry) is str:
# Treat this as a file entry
# Substitute wildcards in cfg_file files since we need to process
# them right away.
cfg_file = subst_wildcards(entry,
self.__dict__,
ignore_error=True)
self.cfgs.append(self.create_instance(cfg_file))
elif type(entry) is dict:
# Treat this as a cfg expanded in-line
temp_cfg_file = self._conv_inline_cfg_to_hjson(entry)
if not temp_cfg_file:
continue
self.cfgs.append(self.create_instance(temp_cfg_file))
# Delete the temp_cfg_file once the instance is created
try:
log.log(VERBOSE, "Deleting temp cfg file:\n%s",
temp_cfg_file)
os.system("/bin/rm -rf " + temp_cfg_file)
except IOError:
log.error("Failed to remove temp cfg file:\n%s",
temp_cfg_file)
else:
log.error(
"Type of entry \"%s\" in the \"use_cfgs\" key is invalid: %s",
entry, str(type(entry)))
sys.exit(1)
def _conv_inline_cfg_to_hjson(self, idict):
'''Dump a temp hjson file in the scratch space from input dict.
This method is to be called only by a primary cfg'''
@@ -457,11 +396,10 @@ class FlowCfg():
def gen_results(self):
'''Public facing API for _gen_results().
'''
results = []
for item in self.cfgs:
result = item._gen_results()
log.info("[results]: [%s]:\n%s\n\n", item.name, result)
results.append(result)
log.info("[results]: [%s]:\n%s\n", item.name, result)
log.info("[scratch_path]: [%s] [%s]", item.name, item.scratch_path)
self.errors_seen |= item.errors_seen
if self.is_primary_cfg:
@@ -492,7 +430,7 @@ class FlowCfg():
f = open(results_html_file, 'w')
f.write(results_html)
f.close()
log.info("[results summary]: %s [%s]", "generated for email purpose", results_html_file)
log.info("[results:email]: [%s]", results_html_file)
def _publish_results(self):
'''Publish results to the opentitan web server.


@@ -9,20 +9,20 @@ import hjson
from tabulate import tabulate
from OneShotCfg import OneShotCfg
from utils import subst_wildcards
from utils import VERBOSE, subst_wildcards
class FpvCfg(OneShotCfg):
"""Derivative class for FPV purposes.
"""
def __init__(self, flow_cfg_file, proj_root, args):
super().__init__(flow_cfg_file, proj_root, args)
flow = 'fpv'
def __init__(self, flow_cfg_file, hjson_data, args, mk_config):
super().__init__(flow_cfg_file, hjson_data, args, mk_config)
self.header = ["name", "errors", "warnings", "proven", "cex", "undetermined",
"covered", "unreachable", "pass_rate", "cov_rate"]
self.summary_header = ["name", "pass_rate", "stimuli_cov", "coi_cov", "prove_cov"]
def __post_init__(self):
super().__post_init__()
self.results_title = self.name.upper() + " FPV Results"
def parse_dict_to_str(self, input_dict, excl_keys = []):
@@ -126,8 +126,9 @@ class FpvCfg(OneShotCfg):
results_str = "## " + self.results_title + " (Summary)\n\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
results_str += "\n"
colalign = ("center", ) * len(self.summary_header)
@@ -219,8 +220,9 @@
# }
results_str = "## " + self.results_title + "\n\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
results_str += "### FPV Tool: " + self.tool.upper() + "\n"
results_str += "### LogFile dir: " + self.scratch_path + "/default\n\n"
@@ -266,13 +268,12 @@
with open(results_file, 'w') as f:
f.write(self.results_md)
log.info("[results page]: [%s] [%s]", self.name, results_file)
# Generate result summary
if not self.cov:
summary += ["N/A", "N/A", "N/A"]
self.result_summary[self.name] = summary
log.log(VERBOSE, "[results page]: [%s] [%s]", self.name, results_file)
return self.results_md
def _publish_results(self):


@@ -12,19 +12,19 @@ from pathlib import Path
from tabulate import tabulate
from OneShotCfg import OneShotCfg
from utils import print_msg_list, subst_wildcards
from utils import VERBOSE, print_msg_list, subst_wildcards
class LintCfg(OneShotCfg):
"""Derivative class for linting purposes.
"""
def __init__(self, flow_cfg_file, proj_root, args):
flow = 'lint'
def __init__(self, flow_cfg_file, hjson_data, args, mk_config):
# This is a lint-specific attribute
self.is_style_lint = ""
super().__init__(flow_cfg_file, proj_root, args)
def __post_init__(self):
super().__post_init__()
super().__init__(flow_cfg_file, hjson_data, args, mk_config)
# Convert to boolean
if self.is_style_lint == "True":
@@ -48,11 +48,11 @@ class LintCfg(OneShotCfg):
results_str = "## " + self.results_title + " (Summary)\n\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
results_str += "\n"
header = [
"Name", "Tool Warnings", "Tool Errors", "Lint Warnings",
"Lint Errors"
@@ -114,8 +114,9 @@ class LintCfg(OneShotCfg):
# Generate results table for runs.
results_str = "## " + self.results_title + "\n\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
results_str += "### Lint Tool: " + self.tool.upper() + "\n\n"
header = [
@@ -137,7 +138,7 @@ class LintCfg(OneShotCfg):
result_data = Path(
subst_wildcards(self.build_dir, {"build_mode": mode.name}) +
'/results.hjson')
log.info("looking for result data file at %s", result_data)
log.info("[results:hjson]: [%s]: [%s]", self.name, result_data)
try:
with result_data.open() as results_file:
@@ -184,7 +185,6 @@ class LintCfg(OneShotCfg):
("Lint Warnings", "lint_warnings"),
("Lint Errors", "lint_errors")]
# Lint fails if any warning or error message has occurred
self.errors_seen = False
for _, key in hdr_key_pairs:
@@ -219,9 +219,9 @@ class LintCfg(OneShotCfg):
self.publish_results_md = self.results_md
# Write results to the scratch area
self.results_file = self.scratch_path + "/results_" + self.timestamp + ".md"
with open(self.results_file, 'w') as f:
results_file = self.scratch_path + "/results_" + self.timestamp + ".md"
with open(results_file, 'w') as f:
f.write(self.results_md)
log.info("[results page]: [%s] [%s]", self.name, self.results_file)
log.log(VERBOSE, "[results page]: [%s] [%s]", self.name, results_file)
return self.results_md


@@ -261,9 +261,14 @@ class BuildModes(Modes):
if not hasattr(self, "mname"):
self.mname = "mode"
self.is_sim_mode = 0
self.build_opts = []
self.run_opts = []
self.pre_build_cmds = []
self.post_build_cmds = []
self.en_build_modes = []
self.build_opts = []
self.pre_run_cmds = []
self.post_run_cmds = []
self.run_opts = []
self.sw_images = []
super().__init__(bdict)
self.en_build_modes = list(set(self.en_build_modes))
@@ -287,13 +292,14 @@ class RunModes(Modes):
if not hasattr(self, "mname"):
self.mname = "mode"
self.reseed = None
self.pre_run_cmds = []
self.post_run_cmds = []
self.en_run_modes = []
self.run_opts = []
self.uvm_test = ""
self.uvm_test_seq = ""
self.build_mode = ""
self.en_run_modes = []
self.sw_test = ""
self.sw_test_is_prebuilt = ""
self.sw_images = []
self.sw_build_device = ""
super().__init__(rdict)
@@ -319,8 +325,7 @@ class Tests(RunModes):
"uvm_test": "",
"uvm_test_seq": "",
"build_mode": "",
"sw_test": "",
"sw_test_is_prebuilt": "",
"sw_images": [],
"sw_build_device": "",
}
@@ -408,20 +413,30 @@ class Tests(RunModes):
test_obj.name, test_obj.build_mode.name)
sys.exit(1)
# Merge build_mode's run_opts with self
# Merge build_mode's params with self
test_obj.pre_run_cmds.extend(test_obj.build_mode.pre_run_cmds)
test_obj.post_run_cmds.extend(test_obj.build_mode.post_run_cmds)
test_obj.run_opts.extend(test_obj.build_mode.run_opts)
test_obj.sw_images.extend(test_obj.build_mode.sw_images)
# Return the list of tests
return tests_objs
@staticmethod
def merge_global_opts(tests, global_build_opts, global_run_opts):
def merge_global_opts(tests, global_pre_build_cmds, global_post_build_cmds,
global_build_opts, global_pre_run_cmds,
global_post_run_cmds, global_run_opts, global_sw_images):
processed_build_modes = []
for test in tests:
if test.build_mode.name not in processed_build_modes:
test.build_mode.pre_build_cmds.extend(global_pre_build_cmds)
test.build_mode.post_build_cmds.extend(global_post_build_cmds)
test.build_mode.build_opts.extend(global_build_opts)
processed_build_modes.append(test.build_mode.name)
test.pre_run_cmds.extend(global_pre_run_cmds)
test.post_run_cmds.extend(global_post_run_cmds)
test.run_opts.extend(global_run_opts)
test.sw_images.extend(global_sw_images)
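The updated `merge_global_opts` above applies the build-side globals (pre/post build cmds and build opts) only once per unique build mode, since several tests may share one build mode, while the run-side globals are applied to every test. A minimal standalone sketch of that deduplication pattern, with hypothetical stand-in classes (not the actual dvsim `Modes` types):

```python
# Sketch of the merge pattern in Tests.merge_global_opts: build-side params
# are merged once per unique build mode; run-side params go to every test.
# BuildMode/Test here are illustrative stand-ins, not the dvsim classes.
class BuildMode:
    def __init__(self, name):
        self.name = name
        self.pre_build_cmds = []
        self.build_opts = []

class Test:
    def __init__(self, name, build_mode):
        self.name = name
        self.build_mode = build_mode
        self.pre_run_cmds = []
        self.run_opts = []

def merge_global_opts(tests, global_pre_build_cmds, global_build_opts,
                      global_pre_run_cmds, global_run_opts):
    processed_build_modes = set()
    for test in tests:
        # Only touch each build mode once, even if many tests share it.
        if test.build_mode.name not in processed_build_modes:
            test.build_mode.pre_build_cmds.extend(global_pre_build_cmds)
            test.build_mode.build_opts.extend(global_build_opts)
            processed_build_modes.add(test.build_mode.name)
        # Run-side params apply to every test individually.
        test.pre_run_cmds.extend(global_pre_run_cmds)
        test.run_opts.extend(global_run_opts)

# Two tests sharing one build mode: the build opts are merged exactly once.
bm = BuildMode("default")
tests = [Test("smoke", bm), Test("stress", bm)]
merge_global_opts(tests, ["make prep"], ["+define+FOO"],
                  ["echo run"], ["+UVM_VERBOSITY=UVM_LOW"])
print(bm.build_opts)       # → ['+define+FOO']
print(tests[1].run_opts)   # → ['+UVM_VERBOSITY=UVM_LOW']
```

Without the `processed_build_modes` guard, a shared build mode would accumulate one copy of the global build opts per test that uses it.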
class Regressions(Modes):
@@ -454,6 +469,10 @@ class Regressions(Modes):
self.excl_tests = [] # TODO: add support for this
self.en_sim_modes = []
self.en_run_modes = []
self.pre_build_cmds = []
self.post_build_cmds = []
self.pre_run_cmds = []
self.post_run_cmds = []
self.build_opts = []
self.run_opts = []
super().__init__(regdict)
@@ -515,8 +534,8 @@ class Regressions(Modes):
sys.exit(1)
# Check if sim_mode_obj's sub-modes are a part of regressions's
# sim modes- if yes, then it will cause duplication of opts
# Throw an error and exit.
# sim modes- if yes, then it will cause duplication of cmds &
# opts. Throw an error and exit.
for sim_mode_obj_sub in sim_mode_obj.en_build_modes:
if sim_mode_obj_sub in regression_obj.en_sim_modes:
log.error(
@@ -531,21 +550,31 @@ class Regressions(Modes):
if sim_mode_obj.name in sim_cfg.en_build_modes:
continue
# Merge the build and run opts from the sim modes
# Merge the build and run cmds & opts from the sim modes
regression_obj.pre_build_cmds.extend(
sim_mode_obj.pre_build_cmds)
regression_obj.post_build_cmds.extend(
sim_mode_obj.post_build_cmds)
regression_obj.build_opts.extend(sim_mode_obj.build_opts)
regression_obj.pre_run_cmds.extend(sim_mode_obj.pre_run_cmds)
regression_obj.post_run_cmds.extend(sim_mode_obj.post_run_cmds)
regression_obj.run_opts.extend(sim_mode_obj.run_opts)
# Unpack the run_modes
# TODO: If there are other params other than run_opts throw an error and exit
# TODO: If there are other params other than run_opts throw an
# error and exit
found_run_mode_objs = Modes.find_and_merge_modes(
regression_obj, regression_obj.en_run_modes, run_modes, False)
# Only merge the run_opts from the run_modes enabled
# Only merge the pre_run_cmds, post_run_cmds & run_opts from the
# run_modes enabled
for run_mode_obj in found_run_mode_objs:
# Check if run_mode_obj is also passed on the command line, in
# which case, skip
if run_mode_obj.name in sim_cfg.en_run_modes:
continue
regression_obj.pre_run_cmds.extend(run_mode_obj.pre_run_cmds)
regression_obj.post_run_cmds.extend(run_mode_obj.post_run_cmds)
regression_obj.run_opts.extend(run_mode_obj.run_opts)
# Unpack tests
@@ -578,8 +607,12 @@ class Regressions(Modes):
processed_build_modes = []
for test in self.tests:
if test.build_mode.name not in processed_build_modes:
test.build_mode.pre_build_cmds.extend(self.pre_build_cmds)
test.build_mode.post_build_cmds.extend(self.post_build_cmds)
test.build_mode.build_opts.extend(self.build_opts)
processed_build_modes.append(test.build_mode.name)
test.pre_run_cmds.extend(self.pre_run_cmds)
test.post_run_cmds.extend(self.post_run_cmds)
test.run_opts.extend(self.run_opts)
# Override reseed if available.


@@ -13,18 +13,17 @@ from collections import OrderedDict
from Deploy import CompileOneShot
from FlowCfg import FlowCfg
from Modes import BuildModes, Modes
from utils import find_and_substitute_wildcards
class OneShotCfg(FlowCfg):
"""Simple one-shot build flow for non-simulation targets like
linting, synthesis and FPV.
"""
def __init__(self, flow_cfg_file, proj_root, args):
super().__init__(flow_cfg_file, proj_root, args)
assert args.tool is not None
ignored_wildcards = (FlowCfg.ignored_wildcards +
['build_mode', 'index', 'test'])
def __init__(self, flow_cfg_file, hjson_data, args, mk_config):
# Options set from command line
self.tool = args.tool
self.verbose = args.verbose
@@ -75,27 +74,23 @@ class OneShotCfg(FlowCfg):
self.build_list = []
self.deploy = []
self.cov = args.cov
# Parse the cfg_file file tree
self._parse_flow_cfg(flow_cfg_file)
super().__init__(flow_cfg_file, hjson_data, args, mk_config)
def _merge_hjson(self, hjson_data):
# If build_unique is set, then add current timestamp to uniquify it
if self.build_unique:
self.build_dir += "_" + self.timestamp
# Process overrides before substituting the wildcards.
self._process_overrides()
super()._merge_hjson(hjson_data)
# Make substitutions, while ignoring the following wildcards
# TODO: Find a way to set these in sim cfg instead
ignored_wildcards = ["build_mode", "index", "test"]
self.__dict__ = find_and_substitute_wildcards(self.__dict__,
self.__dict__,
ignored_wildcards)
def _expand(self):
super()._expand()
# Stuff below only pertains to individual cfg (not primary cfg).
if not self.is_primary_cfg:
# Print info
log.info("[scratch_dir]: [%s]: [%s]", self.name, self.scratch_path)
# Print scratch_path at the start:
log.info("[scratch_path]: [%s] [%s]", self.name, self.scratch_path)
# Set directories with links for ease of debug / triage.
self.links = {
@@ -113,13 +108,6 @@ class OneShotCfg(FlowCfg):
# tests and regressions, only if not a primary cfg obj
self._create_objects()
# Post init checks
self.__post_init__()
def __post_init__(self):
# Run some post init checks
super().__post_init__()
# Purge the output directories. This operates on self.
def _purge(self):
if self.scratch_path:


@@ -12,12 +12,14 @@ import subprocess
import sys
from collections import OrderedDict
from Deploy import CompileSim, CovAnalyze, CovMerge, CovReport, Deploy, RunTest
from Deploy import (CompileSim, CovAnalyze, CovMerge, CovReport, CovUnr,
Deploy, RunTest)
from FlowCfg import FlowCfg
from Modes import BuildModes, Modes, Regressions, RunModes, Tests
from tabulate import tabulate
from utils import VERBOSE
from testplanner import class_defs, testplan_utils
from utils import VERBOSE, find_and_substitute_wildcards
def pick_wave_format(fmts):
@@ -46,8 +48,16 @@ class SimCfg(FlowCfg):
A simulation configuration class holds key information required for building a DV
regression framework.
"""
def __init__(self, flow_cfg_file, proj_root, args):
super().__init__(flow_cfg_file, proj_root, args)
flow = 'sim'
# TODO: Find a way to set these in sim cfg instead
ignored_wildcards = [
"build_mode", "index", "test", "seed", "uvm_test", "uvm_test_seq",
"cov_db_dirs", "sw_images", "sw_build_device"
]
def __init__(self, flow_cfg_file, hjson_data, args, mk_config):
# Options set from command line
self.tool = args.tool
self.build_opts = []
@@ -71,10 +81,7 @@ class SimCfg(FlowCfg):
self.profile = args.profile or '(cfg uses profile without --profile)'
self.xprop_off = args.xprop_off
self.no_rerun = args.no_rerun
# Single-character verbosity setting (n, l, m, h, d). args.verbosity
# might be None, in which case we'll pick up a default value from
# configuration files.
self.verbosity = args.verbosity
self.verbosity = None # set in _expand
self.verbose = args.verbose
self.dry_run = args.dry_run
self.map_full_testplan = args.map_full_testplan
@@ -97,9 +104,14 @@ class SimCfg(FlowCfg):
self.project = ""
self.flow = ""
self.flow_makefile = ""
self.pre_build_cmds = []
self.post_build_cmds = []
self.build_dir = ""
self.pre_run_cmds = []
self.post_run_cmds = []
self.run_dir = ""
self.sw_build_dir = ""
self.sw_images = []
self.pass_patterns = []
self.fail_patterns = []
self.name = ""
@@ -132,9 +144,9 @@ class SimCfg(FlowCfg):
# Maintain an array of those in cov_deploys.
self.cov_deploys = []
# Parse the cfg_file file tree
self._parse_flow_cfg(flow_cfg_file)
super().__init__(flow_cfg_file, hjson_data, args, mk_config)
def _expand(self):
# Choose a wave format now. Note that this has to happen after parsing
# the configuration format because our choice might depend on the
# chosen tool.
@@ -144,19 +156,15 @@ class SimCfg(FlowCfg):
if self.build_unique:
self.build_dir += "_" + self.timestamp
# Process overrides before substituting the wildcards.
self._process_overrides()
# If the user specified a verbosity on the command line then
# self.args.verbosity will be n, l, m, h or d. Set self.verbosity now.
# We will actually have loaded some other verbosity level from the
# config file, but that won't have any effect until expansion so we can
# safely switch it out now.
if self.args.verbosity is not None:
self.verbosity = self.args.verbosity
# Make substitutions, while ignoring the following wildcards
# TODO: Find a way to set these in sim cfg instead
ignored_wildcards = [
"build_mode", "index", "test", "seed", "uvm_test", "uvm_test_seq",
"cov_db_dirs", "sw_test", "sw_test_is_prebuilt", "sw_build_device"
]
self.__dict__ = find_and_substitute_wildcards(self.__dict__,
self.__dict__,
ignored_wildcards,
self.is_primary_cfg)
super()._expand()
# Set the title for simulation results.
self.results_title = self.name.upper() + " Simulation Results"
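The hunk above defers verbosity resolution: `__init__` now leaves `self.verbosity` as `None`, the config file may load a value, and the command-line setting (if any) wins only during `_expand`, before wildcard substitution. A minimal sketch of that precedence rule (the function name and default are illustrative, not from dvsim):

```python
# Sketch of the deferred verbosity precedence: the command-line value, if
# given, overrides whatever the config file loaded; a fallback default is
# used when neither is set. Single-character levels: n, l, m, h, d.
def resolve_verbosity(cli_verbosity, cfg_verbosity, default="l"):
    """Return the verbosity character to use for this run."""
    if cli_verbosity is not None:
        return cli_verbosity            # --verbosity on the command line wins
    if cfg_verbosity is not None:
        return cfg_verbosity            # otherwise use the config file value
    return default                      # neither given: fall back

print(resolve_verbosity(None, "m"))     # → m  (config file value used)
print(resolve_verbosity("h", "m"))      # → h  (command line overrides)
```

Doing this at expansion time rather than in `__init__` means the config-file value is fully parsed before the override is applied, so the two sources can never interleave.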
@@ -176,8 +184,8 @@ class SimCfg(FlowCfg):
'and there was no --tool argument on the command line.')
sys.exit(1)
# Print info:
log.info("[scratch_dir]: [%s]: [%s]", self.name, self.scratch_path)
# Print scratch_path at the start:
log.info("[scratch_path]: [%s] [%s]", self.name, self.scratch_path)
# Set directories with links for ease of debug / triage.
self.links = {
@@ -195,13 +203,6 @@ class SimCfg(FlowCfg):
# tests and regressions, only if not a primary cfg obj
self._create_objects()
# Post init checks
self.__post_init__()
def __post_init__(self):
# Run some post init checks
super().__post_init__()
def _resolve_waves(self):
'''Choose and return a wave format, if waves are enabled.
@@ -269,8 +270,13 @@ class SimCfg(FlowCfg):
for en_build_mode in self.en_build_modes:
build_mode_obj = Modes.find_mode(en_build_mode, self.build_modes)
if build_mode_obj is not None:
self.pre_build_cmds.extend(build_mode_obj.pre_build_cmds)
self.post_build_cmds.extend(build_mode_obj.post_build_cmds)
self.build_opts.extend(build_mode_obj.build_opts)
self.pre_run_cmds.extend(build_mode_obj.pre_run_cmds)
self.post_run_cmds.extend(build_mode_obj.post_run_cmds)
self.run_opts.extend(build_mode_obj.run_opts)
self.sw_images.extend(build_mode_obj.sw_images)
else:
log.error(
"Mode \"%s\" enabled on the command line is not defined",
@@ -281,7 +287,10 @@ class SimCfg(FlowCfg):
for en_run_mode in self.en_run_modes:
run_mode_obj = Modes.find_mode(en_run_mode, self.run_modes)
if run_mode_obj is not None:
self.pre_run_cmds.extend(run_mode_obj.pre_run_cmds)
self.post_run_cmds.extend(run_mode_obj.post_run_cmds)
self.run_opts.extend(run_mode_obj.run_opts)
self.sw_images.extend(run_mode_obj.sw_images)
else:
log.error(
"Mode \"%s\" enabled on the command line is not defined",
@@ -376,7 +385,10 @@ class SimCfg(FlowCfg):
items_list = prune_items(items_list, marked_items)
# Merge the global build and run opts
Tests.merge_global_opts(self.run_list, self.build_opts, self.run_opts)
Tests.merge_global_opts(self.run_list, self.pre_build_cmds,
self.post_build_cmds, self.build_opts,
self.pre_run_cmds, self.post_run_cmds,
self.run_opts, self.sw_images)
# Check if all items have been processed
if items_list != []:
@@ -495,8 +507,6 @@ class SimCfg(FlowCfg):
if self.cov:
self.cov_merge_deploy = CovMerge(self)
self.cov_report_deploy = CovReport(self)
# Generate reports only if merge was successful; add it as a dependency
# of merge.
self.cov_merge_deploy.sub.append(self.cov_report_deploy)
# Create initial set of directories before kicking off the regression.
@@ -529,6 +539,9 @@ class SimCfg(FlowCfg):
'''Use the last regression coverage data to open up the GUI tool to
analyze the coverage.
'''
# Create initial set of directories, such as dispatched, passed etc.
self._create_dirs()
cov_analyze_deploy = CovAnalyze(self)
self.deploy = [cov_analyze_deploy]
@@ -538,6 +551,26 @@ class SimCfg(FlowCfg):
for item in self.cfgs:
item._cov_analyze()
def _cov_unr(self):
'''Use the last regression coverage data to generate unreachable
coverage exclusions.
'''
# TODO, Only support VCS
if self.tool != 'vcs':
log.error("Currently only support VCS for coverage UNR")
sys.exit(1)
# Create initial set of directories, such as dispatched, passed etc.
self._create_dirs()
cov_unr_deploy = CovUnr(self)
self.deploy = [cov_unr_deploy]
def cov_unr(self):
'''Public facing API for generating UNR coverage exclusions.
'''
for item in self.cfgs:
item._cov_unr()
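The new `cov_unr`/`_cov_unr` pair follows the dispatch pattern used throughout these cfg classes: a public method on the (possibly primary) cfg fans out over `self.cfgs`, calling the private per-cfg implementation on each child. A minimal sketch with stand-in names (not the real `FlowCfg` hierarchy):

```python
# Sketch of the primary-cfg fan-out pattern: the public API iterates
# self.cfgs and delegates to a private per-cfg implementation. Cfg is an
# illustrative stand-in for the dvsim FlowCfg/SimCfg classes.
class Cfg:
    def __init__(self, name, cfgs=None):
        self.name = name
        # A primary cfg aggregates child cfgs; a standalone cfg holds itself.
        self.cfgs = cfgs if cfgs is not None else [self]
        self.deploy = []

    def _cov_unr(self):
        # Per-cfg work: queue up the UNR job for this cfg only.
        self.deploy = ["cov_unr({})".format(self.name)]

    def cov_unr(self):
        # Public API: fan out to every cfg (children for a primary cfg,
        # just itself for a standalone one).
        for item in self.cfgs:
            item._cov_unr()

primary = Cfg("top", cfgs=[Cfg("uart"), Cfg("gpio")])
primary.cov_unr()
print([c.deploy for c in primary.cfgs])
# → [['cov_unr(uart)'], ['cov_unr(gpio)']]
```

The same shape works unchanged for a standalone cfg, since `self.cfgs` then contains only the cfg itself.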
def _gen_results(self):
'''
The function is called after the regression has completed. It collates the
@@ -597,8 +630,9 @@ class SimCfg(FlowCfg):
# Generate results table for runs.
results_str = "## " + self.results_title + "\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
# Add path to testplan, only if it has entries (i.e., its not dummy).
if self.testplan.entries:
@@ -654,12 +688,10 @@ class SimCfg(FlowCfg):
# Write results to the scratch area
results_file = self.scratch_path + "/results_" + self.timestamp + ".md"
f = open(results_file, 'w')
f.write(self.results_md)
f.close()
with open(results_file, 'w') as f:
f.write(self.results_md)
# Return only the tables
log.info("[results page]: [%s] [%s]", self.name, results_file)
log.log(VERBOSE, "[results page]: [%s] [%s]", self.name, results_file)
return results_str
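Several hunks in this import downgrade the "[results page]" messages from `log.info` to `log.log(VERBOSE, ...)`, using a custom `VERBOSE` level imported from `utils`. A plausible sketch of how such a level can be registered (the value 15, between DEBUG and INFO, is an assumption here, as is the format string):

```python
import logging

# Sketch of a custom VERBOSE logging level sitting between DEBUG (10) and
# INFO (20), so "[results page]" lines only appear when extra verbosity is
# requested. The value 15 and the format are illustrative assumptions.
VERBOSE = 15
logging.addLevelName(VERBOSE, "VERBOSE")
logging.basicConfig(level=VERBOSE, format="%(levelname)s: %(message)s")

log = logging.getLogger(__name__)
log.log(VERBOSE, "[results page]: [%s] [%s]", "ip_cfg", "/scratch/results.md")
log.info("shown at the default INFO threshold as well")
```

With the root level left at the default `WARNING` (or set to `INFO`), the `VERBOSE`-level records are filtered out, which is exactly the point of the downgrade: routine per-cfg result paths stop cluttering normal runs.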
def gen_results_summary(self):
@@ -679,8 +711,9 @@ class SimCfg(FlowCfg):
table.append(row)
self.results_summary_md = "## " + self.results_title + " (Summary)\n"
self.results_summary_md += "### " + self.timestamp_long + "\n"
if self.revision_string:
self.results_summary_md += "### " + self.revision_string + "\n"
if self.revision:
self.results_summary_md += "### " + self.revision + "\n"
self.results_summary_md += "### Branch: " + self.branch + "\n"
self.results_summary_md += tabulate(table,
headers="firstrow",
tablefmt="pipe",


@@ -12,17 +12,17 @@ import hjson
from tabulate import tabulate
from OneShotCfg import OneShotCfg
from utils import print_msg_list, subst_wildcards
from utils import VERBOSE, print_msg_list, subst_wildcards
class SynCfg(OneShotCfg):
"""Derivative class for synthesis purposes.
"""
def __init__(self, flow_cfg_file, proj_root, args):
super().__init__(flow_cfg_file, proj_root, args)
def __post_init__(self):
super().__post_init__()
flow = 'syn'
def __init__(self, flow_cfg_file, hjson_data, args, mk_config):
super().__init__(flow_cfg_file, hjson_data, args, mk_config)
# Set the title for synthesis results.
self.results_title = self.name.upper() + " Synthesis Results"
@@ -36,8 +36,9 @@ class SynCfg(OneShotCfg):
results_str = "## " + self.results_title + " (Summary)\n\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
results_str += "\n"
self.results_summary_md = results_str + "\nNot supported yet.\n"
@@ -145,8 +146,9 @@ class SynCfg(OneShotCfg):
# Generate results table for runs.
results_str = "## " + self.results_title + "\n\n"
results_str += "### " + self.timestamp_long + "\n"
if self.revision_string:
results_str += "### " + self.revision_string + "\n"
if self.revision:
results_str += "### " + self.revision + "\n"
results_str += "### Branch: " + self.branch + "\n"
results_str += "### Synthesis Tool: " + self.tool.upper() + "\n\n"
# TODO: extend this to support multiple build modes
@@ -389,9 +391,9 @@ class SynCfg(OneShotCfg):
# QoR history
# Write results to the scratch area
self.results_file = self.scratch_path + "/results_" + self.timestamp + ".md"
log.info("Detailed results are available at %s", self.results_file)
with open(self.results_file, 'w') as f:
results_file = self.scratch_path + "/results_" + self.timestamp + ".md"
with open(results_file, 'w') as f:
f.write(self.results_md)
log.log(VERBOSE, "[results page]: [%s] [%s]", self.name, results_file)
return self.results_md
