
16-bit to BCD conversion


I'm trying to make a 16-bit to BCD converter. I found this link for an 8-bit one and I'm trying to extend it to 16 bits: http://vhdlguru.blogspot.nl/2010/04/8-bit-binary-to-bcd-converter-double.html

I don't know what I'm doing wrong: rpm_1000 keeps changing and rpm_100 stays at 4. Does anyone have an idea what I did wrong?

process (Hex_Display_Data)
    variable i : integer := 0;
    variable bcd : std_logic_vector(19 downto 0) := (others => '0');
    variable bint : std_logic_vector(15 downto 0) := Hex_Display_Data;
begin
    for i in 0 to 15 loop                        -- repeating 16 times
        bcd(19 downto 1) := bcd(18 downto 0);    -- shifting the bits
        bcd(0) := bint(15);                      -- shift bit in
        bint(15 downto 1) := bint(14 downto 0);  -- removing msb
        bint(0) := '0';                          -- adding a '0'

        if(i < 15 and bcd(3 downto 0) > "0100") then -- add 3 if BCD digit is greater than 4
            bcd(3 downto 0) := bcd(3 downto 0) + "0011";
        end if;

        if(i < 15 and bcd(7 downto 4) > "0100") then
            bcd(7 downto 4) := bcd(7 downto 4) + "0011";
        end if;

        if(i < 15 and bcd(11 downto 8) > "0100") then
            bcd(11 downto 8) := bcd(11 downto 8) + "0011";
        end if;

        if(i < 15 and bcd(15 downto 12) > "0100") then
            bcd(15 downto 12) := bcd(15 downto 12) + "0011";
        end if;
    end loop;

    rpm_1000 <= bcd(15 downto 12);
    rpm_100  <= bcd(11 downto 8);
    rpm_10   <= bcd(7 downto 4);
    rpm_1    <= bcd(3 downto 0);

end process;

Solution

  • Note that four BCD digits can be wholly contained in 14 bits of input (your Hex_Display_Data), since 9999 < 2**14 = 16384. The unused bcd 'bits' (19 downto 16) will get eaten during synthesis, along with all the add 3's that can't occur because their upper two bits are '0's (not > 4).

    If you constrain your bcd value to 4 hex digits, and your loop iteration to 14 bits:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;
    
    entity bin2bcd is
        port ( 
            input:      in   std_logic_vector (15 downto 0);
            ones:       out  std_logic_vector (3 downto 0);
            tens:       out  std_logic_vector (3 downto 0);
            hundreds:   out  std_logic_vector (3 downto 0);
            thousands:  out  std_logic_vector (3 downto 0)
        );
    end entity;
    
    architecture fum of bin2bcd is
        alias Hex_Display_Data: std_logic_vector (15 downto 0) is input;
        alias rpm_1:    std_logic_vector (3 downto 0) is ones;
        alias rpm_10:   std_logic_vector (3 downto 0) is tens;
        alias rpm_100:  std_logic_vector (3 downto 0) is hundreds;
        alias rpm_1000: std_logic_vector (3 downto 0) is thousands;
    begin
        process (Hex_Display_Data)
            type fourbits is array (3 downto 0) of std_logic_vector(3 downto 0);
            -- variable i : integer := 0;  -- NOT USED
            -- variable bcd : std_logic_vector(15 downto 0) := (others => '0');
            variable bcd:   std_logic_vector (15 downto 0);
            -- variable bint : std_logic_vector(15 downto 0) := Hex_Display_Data;
            variable bint:  std_logic_vector (13 downto 0); -- SEE process body
        begin
            bcd := (others => '0');      -- ADDED for EVERY CONVERSION
            bint := Hex_Display_Data (13 downto 0); -- ADDED for EVERY CONVERSION
    
            for i in 0 to 13 loop
                bcd(15 downto 1) := bcd(14 downto 0);
                bcd(0) := bint(13);
                bint(13 downto 1) := bint(12 downto 0);
                bint(0) := '0';
    
                if i < 13 and bcd(3 downto 0) > "0100" then
                    bcd(3 downto 0) := 
                        std_logic_vector (unsigned(bcd(3 downto 0)) + 3);
                end if;
                if i < 13 and bcd(7 downto 4) > "0100" then
                    bcd(7 downto 4) := 
                        std_logic_vector(unsigned(bcd(7 downto 4)) + 3);
                end if;
                if i < 13 and bcd(11 downto 8) > "0100" then
                    bcd(11 downto 8) := 
                        std_logic_vector(unsigned(bcd(11 downto 8)) + 3);
                end if;
                if i < 13 and bcd(15 downto 12) > "0100" then
                bcd(15 downto 12) := 
                    std_logic_vector(unsigned(bcd(15 downto 12)) + 3);
                end if;
            end loop;
    
            (rpm_1000, rpm_100, rpm_10, rpm_1)  <= 
                      fourbits'( bcd (15 downto 12), bcd (11 downto 8), 
                                   bcd ( 7 downto  4), bcd ( 3 downto 0) );
        end process ;
    end architecture;
    

    Note the use of aliases, so that your names can be used in an otherwise compatible Minimal, Complete and Verifiable example, which your question did not provide.

    The aggregate signal assignment is also taken from the original; your assignment to the individual digits would work just as well.

    There are two changes besides limiting the conversion to 14 bits and the number of BCD digits to match the number of digits output.

    The bcd and bint variables are now cleared every time the process resumes (it is sensitive to updates of Hex_Display_Data). More than likely these stale values were causing your otherwise unverifiable errors.

    Extraneous parentheses have been removed.

    You didn't supply context clauses. The code shown uses package numeric_std as opposed to the -2008 package numeric_std_unsigned, offering compatibility with earlier revisions of the standard while still using IEEE-authored packages.
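    If a -2008 capable tool is at hand, the add 3 corrections can operate on std_logic_vector directly through package numeric_std_unsigned. A minimal sketch of such a variant follows; the entity name bin2bcd_2008 and the inner digit loop are my own additions, offered as an untested illustration rather than a drop-in replacement:

    ```vhdl
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std_unsigned.all;  -- VHDL-2008 only

    entity bin2bcd_2008 is
        port (
            input:     in  std_logic_vector (15 downto 0);
            ones:      out std_logic_vector (3 downto 0);
            tens:      out std_logic_vector (3 downto 0);
            hundreds:  out std_logic_vector (3 downto 0);
            thousands: out std_logic_vector (3 downto 0)
        );
    end entity;

    architecture sketch of bin2bcd_2008 is
    begin
        process (input)
            variable bcd:  std_logic_vector (15 downto 0);
            variable bint: std_logic_vector (13 downto 0);
        begin
            bcd  := (others => '0');
            bint := input (13 downto 0);
            for i in 0 to 13 loop
                bcd  := bcd(14 downto 0) & bint(13);  -- shift left, bring in MSB
                bint := bint(12 downto 0) & '0';
                for d in 0 to 3 loop                  -- correct each BCD digit
                    if i < 13 and bcd(4*d + 3 downto 4*d) > "0100" then
                        -- "+" on std_logic_vector from numeric_std_unsigned
                        bcd(4*d + 3 downto 4*d) := bcd(4*d + 3 downto 4*d) + 3;
                    end if;
                end loop;
            end loop;
            thousands <= bcd(15 downto 12);
            hundreds  <= bcd(11 downto 8);
            tens      <= bcd(7 downto 4);
            ones      <= bcd(3 downto 0);
        end process;
    end architecture;
    ```

    The inner loop over d replaces the four hand-written corrections; a synthesis tool unrolls it the same way it unrolls the outer conversion loop.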

    You'll get something that works, provable with a testbench:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;
    
    entity bin2bcd_tb is
    end entity;
    
    architecture foo of bin2bcd_tb is
        signal input:      std_logic_vector (15 downto 0) := (others => '0');
        signal ones:       std_logic_vector (3 downto 0);
        signal tens:       std_logic_vector (3 downto 0);
        signal hundreds:   std_logic_vector (3 downto 0);
        signal thousands:  std_logic_vector (3 downto 0);
    begin
    DUT:
        entity work.bin2bcd
            port map (
                input => input,
                ones => ones,
                tens => tens,
                hundreds => hundreds,
                thousands => thousands
            );
    STIMULUS:
        process
        begin
            for i in 0 to 1001 loop
                wait for 20 ns;
                input <= std_logic_vector(to_unsigned(9999 - i, 16));
            end loop;
            wait for 20 ns;
            wait;
        end process;
    end architecture;
    

    This testbench provides input values starting at 9999 and decrementing 1001 times to show all four digits transitioning:

    [waveform image: bin2bcd_tb.png]

    Some other stimulus scheme could be used to toggle BCD digit roll over of all four digits; the testbench can easily be modified to prove every transition of every BCD digit.

    In summary, the errors you were encountering appear to come from the difference in elaboration between variables in a subprogram and variables in a process: in a subprogram, bcd and bint would be dynamically elaborated and initialized on every call, while in a process they are elaborated and initialized only once.
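    To illustrate the subprogram alternative, the conversion could be wrapped in a function like the sketch below (the name to_bcd is my invention; it assumes the same std_logic_1164 and numeric_std context clauses as the code above). Its variables are elaborated and initialized anew on every call, so no explicit clearing is needed:

    ```vhdl
    -- assumes: use ieee.std_logic_1164.all; use ieee.numeric_std.all;
    function to_bcd (bin: std_logic_vector (13 downto 0))
        return std_logic_vector is
        -- initialized on every call, unlike process variables
        variable bcd:  std_logic_vector (15 downto 0) := (others => '0');
        variable bint: std_logic_vector (13 downto 0) := bin;
    begin
        for i in 0 to 13 loop
            bcd  := bcd(14 downto 0) & bint(13);  -- shift in the MSB
            bint := bint(12 downto 0) & '0';
            for d in 0 to 3 loop                  -- correct each BCD digit
                if i < 13 and unsigned(bcd(4*d + 3 downto 4*d)) > 4 then
                    bcd(4*d + 3 downto 4*d) :=
                        std_logic_vector(unsigned(bcd(4*d + 3 downto 4*d)) + 3);
                end if;
            end loop;
        end loop;
        return bcd;
    end function;
    ```

    The process body then reduces to a single call, e.g. bcd := to_bcd(Hex_Display_Data(13 downto 0));.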

    From examining Xilinx's User Guide 901, Vivado Design Suite User Guide: Synthesis (2015.3), Chapter 4: VHDL Support (Combinatorial Processes, case Statements, for-loop Statements), the for loop appears to be supported for synthesis. The remaining question would be support for repeated assignment to variables in a sequence of sequential statements, which should also be supported; at least one other double dabble question on Stack Overflow reports successful synthesis using such a for loop.

    Note that constraining the input value to 14 bits doesn't detect the effects of larger binary numbers (> 9999), which your process does not otherwise handle either, providing only 4 BCD output digits. You could deal with that by checking whether the input value is greater than 9999 (x"270F").
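    A sketch of such a guard, assuming an overflow signal of type std_logic declared in the architecture (my addition) and package numeric_std in scope:

    ```vhdl
    -- assumes: signal overflow: std_logic; use ieee.numeric_std.all;
    process (Hex_Display_Data)
    begin
        -- four BCD digits top out at 9999 (x"270F")
        if unsigned(Hex_Display_Data) > 9999 then
            overflow <= '1';  -- too big to display in four digits
        else
            overflow <= '0';
        end if;
    end process;
    ```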

    Each + 3 represents 1 LUT of depth in an FPGA (4-bit input, 4-bit output), and some number of them are layered in depth based on the size of the converted number (the range of i). The time needed for propagation through the ADD3's is offset by the rate at which the display can be visually interpreted: if you updated Hex_Display_Data in the millisecond range, you likely could not tell the difference visually.