I have the following code (an extract of something I'm currently working on):
library ieee;
use ieee.std_logic_1164.all;

entity tb_min_example is
end entity tb_min_example;

architecture arch of tb_min_example is
  signal clk1, clk2, slow_clk : std_logic;
  signal y : std_logic;

  procedure drive_clk(signal clk : out std_logic; constant period : time) is
  begin
    loop
      clk <= '1';
      wait for period / 2;
      clk <= '0';
      wait for period / 2;
    end loop;
  end procedure drive_clk;
begin
  drive_clk(clk1, period => 4 ns);
  drive_clk(slow_clk, period => (119.0 / 8.0) * 4 ns);  -- 59.5 ns
  clk2 <= clk1;

  process(slow_clk) is
  begin
    if rising_edge(slow_clk) then
      y <= clk1 xor clk2;
    end if;
  end process;
end architecture arch;
I would expect the signal y to be low all the time, since clk2 is assigned to clk1 and x xor x == 0. However, I'm seeing this strange waveform:
It seems that clk2 (or clk1) is sampled just a bit earlier than the other, and therefore the result of clk1 xor clk2 is not zero. Why is that? Is there some weird delta-cycle stuff going on?
I'm using XSim with Vivado version 2020.2.
clk2 is a delayed version of clk1: the concurrent assignment clk2 <= clk1 delays it by one simulation delta cycle.
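You can make the one-delta lag visible by logging both signals whenever either one changes (a sketch to add to the testbench, not part of the original post). In the delta cycle in which clk1 has just changed, clk2 still reports the old value; one delta later, at the same simulation time, they match again:

  -- Sketch: report both clocks on every change of either one.
  -- Both reports carry the same value of 'now', showing that the
  -- mismatch lives entirely inside one simulation time step.
  monitor : process(clk1, clk2) is
  begin
    report "t=" & time'image(now) &
           "  clk1=" & std_logic'image(clk1) &
           "  clk2=" & std_logic'image(clk2);
  end process monitor;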
Because slow_clk and clk1 first rise at the same time (t = 0), their rising edges coincide again periodically: the period ratio is 119/8 = 14.875 clk1 periods per slow_clk period, so every 8th slow_clk rising edge (8 × 59.5 ns = 476 ns = 119 clk1 periods) lands exactly on a clk1 rising edge. In the simulation cycle in which both clocks rise, clk1 has already been updated to '1' while clk2 still holds '0' (it updates one delta later), so the process samples clk1 xor clk2 = '1'.
This is generally why re-assigning clocks in a design can cause simulation problems.
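One way to sidestep the copy entirely (a sketch, not from the original answer) is to make clk2 an alias of clk1 rather than a separate signal: an alias names the same signal object, so there is no second driver and no extra delta cycle:

  -- In the architecture declarative part, instead of declaring a
  -- separate clk2 signal and writing "clk2 <= clk1;" in the body:
  alias clk2 : std_logic is clk1;  -- same signal object, no delta lag

With this change, clk1 xor clk2 is '0' in every delta cycle, so y stays '0' regardless of when slow_clk samples it.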
If you simply delay the start of slow_clk so that its rising edges no longer coincide with clk1's, you should see y stay at '0'.
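A hypothetical variant of the question's drive_clk with a start offset could look like this (drive_clk_offset and its parameters are my naming, not from the post). Note the offset has to be chosen so the edges never line up again: with periods of 4 ns and 59.5 ns, slow_clk rising edges fall on multiples of 0.5 ns relative to clk1's edges, so any offset that is not a multiple of 0.5 ns (e.g. 0.25 ns) keeps the two rising edges from ever sharing a simulation cycle:

  -- Sketch: clock driver with an initial offset before the first edge.
  procedure drive_clk_offset(signal clk : out std_logic;
                             constant period : time;
                             constant offset : time) is
  begin
    wait for offset;  -- shift the whole edge pattern in time
    loop
      clk <= '1';
      wait for period / 2;
      clk <= '0';
      wait for period / 2;
    end loop;
  end procedure drive_clk_offset;

  -- In the architecture body:
  --   drive_clk_offset(slow_clk, period => 59.5 ns, offset => 0.25 ns);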