
LLM-FSM: Advancing Finite-State Reasoning in RTL Code Generation

AIHardware DesignLarge Language ModelsFinite-State MachinesRTL

Executive Summary

LLM-FSM is a benchmark that evaluates large language models (LLMs) on translating natural-language specifications into correct RTL implementations, with a focus on finite-state machine (FSM) behavior. The work matters for hardware design because it provides an automated pipeline for assessing, and ultimately improving, how well LLMs handle this class of complex reasoning task.

The Architecture / Core Concept

At the heart of LLM-FSM lies an automated pipeline that first constructs FSMs with configurable state counts and constrained transitions. The core idea is to prompt LLMs to express these FSMs using a structured YAML format, contextualized within specific applications, and convert these into natural-language (NL) specifications. The same YAML descriptions are then employed to synthesize reference RTL and testbench code, ensuring correctness through LLM-based and SAT-solver-based verification methods.
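The generation stage described above can be sketched in Python. This is a minimal illustration under stated assumptions: the function name, the "every state handles every condition" constraint, and the dict layout are my own choices mirroring the YAML format shown later, not the benchmark's actual API.

```python
import random

def generate_fsm(num_states, conditions, seed=None):
    """Build a random FSM with a configurable state count and
    constrained transitions: here, every state handles every
    condition and every target is a valid state (an assumed
    constraint, chosen so the FSM is always well-formed)."""
    rng = random.Random(seed)
    states = [f"S{i}" for i in range(num_states)]
    fsm = {"title": f"Random {num_states}-state FSM", "states": []}
    for s in states:
        transitions = [
            {"condition": c, "next_state": rng.choice(states)}
            for c in conditions
        ]
        fsm["states"].append({"state": s, "transitions": transitions})
    return fsm

# A 4-state machine with two input conditions.
fsm = generate_fsm(4, ["go", "stop"], seed=0)
```

Dialing `num_states` up is how a pipeline like this would ratchet difficulty while keeping every generated machine verifiable.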

This architecture is designed to push the boundaries of what LLMs can achieve in finite-state reasoning by increasing FSM complexity gradually. Because difficulty scales in controlled steps, the pipeline can pinpoint where a model's accuracy begins to decline, identifying concrete areas for model improvement.

Implementation Details

The LLM-FSM pipeline starts with YAML format definitions. Here's an illustrative example of how a simple FSM could be encoded:

title: "Simple Traffic Light"
states:
  - state: "Red"
    transitions:
      - condition: "time_elapsed"
        next_state: "Green"
  - state: "Green"
    transitions:
      - condition: "time_elapsed"
        next_state: "Yellow"
  - state: "Yellow"
    transitions:
      - condition: "time_elapsed"
        next_state: "Red"

This YAML can serve as a basis for describing a basic traffic light FSM, which can then be transformed by an LLM into an NL specification and, subsequently, into an RTL implementation.
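The YAML-to-NL step can be approximated in Python. The dict below mirrors the YAML above; the sentence template is an assumption for illustration, not the benchmark's actual prompt format.

```python
# Python dict mirroring the "Simple Traffic Light" YAML above.
fsm = {
    "title": "Simple Traffic Light",
    "states": [
        {"state": "Red",
         "transitions": [{"condition": "time_elapsed", "next_state": "Green"}]},
        {"state": "Green",
         "transitions": [{"condition": "time_elapsed", "next_state": "Yellow"}]},
        {"state": "Yellow",
         "transitions": [{"condition": "time_elapsed", "next_state": "Red"}]},
    ],
}

def to_nl_spec(fsm):
    """Render a YAML-style FSM dict as a plain-English specification,
    one sentence per transition."""
    names = ", ".join(s["state"] for s in fsm["states"])
    lines = [f"The '{fsm['title']}' FSM has states: {names}."]
    for s in fsm["states"]:
        for t in s["transitions"]:
            lines.append(
                f"In state {s['state']}, when {t['condition']} occurs, "
                f"move to {t['next_state']}."
            )
    return "\n".join(lines)

spec = to_nl_spec(fsm)
```

In the actual benchmark this translation is performed by an LLM, which can also wrap the FSM in an application context (e.g., an intersection controller) rather than emitting a bare transition list.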

From this same YAML, the synthesis pipeline generates RTL descriptions. Here is a simple example in Verilog:

module TrafficLightFSM(
    input clk,
    input reset,
    output reg [1:0] state
);
    reg [1:0] next_state;
    
    parameter RED = 2'b00, GREEN = 2'b01, YELLOW = 2'b10;
    
    always @(posedge clk or posedge reset) begin
        if (reset)
            state <= RED;
        else
            state <= next_state;
    end
    
    always @(*) begin
        case (state)
            RED:     next_state = GREEN;
            GREEN:   next_state = YELLOW;
            YELLOW:  next_state = RED;
            default: next_state = RED;
        endcase
    end
endmodule
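The pipeline also synthesizes a testbench from the same YAML. The check such a testbench performs can be approximated in Python; this simulator is my own illustration, not the benchmark's generated testbench.

```python
# Next-state table for the traffic-light FSM above (single
# "time_elapsed" condition, so each state has one successor).
TRANSITIONS = {"Red": "Green", "Green": "Yellow", "Yellow": "Red"}

def simulate(start, ticks):
    """Advance the FSM one state per time_elapsed tick and
    return the full state trace, starting state included."""
    state, trace = start, [start]
    for _ in range(ticks):
        state = TRANSITIONS[state]
        trace.append(state)
    return trace

# After three ticks the light cycles back to Red.
trace = simulate("Red", 3)
```

A reference-vs-candidate comparison of such traces is one half of the verification story; the SAT-solver-based check mentioned earlier provides equivalence guarantees that trace sampling alone cannot.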

Engineering Implications

The ability of LLMs to handle FSM-to-RTL translation poses significant scalability challenges. As the complexity of FSMs increases, the compute demands skyrocket, particularly for test-time evaluations. There’s a trade-off between the granularity of FSM transitions and model performance—more complex FSMs push models to their limits, impacting latency and compute cost. Additionally, the reliance on LLM capabilities highlights the necessity for robust model training, possibly beyond current state-of-the-art supervised fine-tuning (SFT) techniques.
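To get a feel for why accuracy degrades with scale: a deterministic FSM with S states and I input conditions has a next-state table with S x I entries, each of which can point to any of S states, so the space of possible machines grows as S^(S*I). The quick calculation below is my own back-of-the-envelope illustration, not a figure from the benchmark.

```python
def fsm_space(num_states, num_conditions):
    """Count distinct deterministic next-state functions:
    each of the S*I table entries targets one of S states."""
    return num_states ** (num_states * num_conditions)

small = fsm_space(3, 1)   # 3 states, 1 condition: 3**3 = 27 machines
larger = fsm_space(8, 2)  # 8 states, 2 conditions: 8**16 machines
```

The combinatorics make exhaustive coverage hopeless almost immediately, which is why a benchmark that ramps state count gradually, rather than jumping straight to large machines, is the right instrument for locating a model's breaking point.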

My Take

LLM-FSM represents a crucial leap forward in using AI for hardware design automation. The benchmark challenges our current understanding of LLM capabilities, particularly in handling state-dependent logic. As model architectures evolve, we can expect LLM-FSM to scale, providing a consistent measure against which to test new innovations. The future impact of this will likely lead to more intelligent automated design and verification processes, potentially reducing development cycles and improving the quality and reliability of RTL implementations.



Written by James Geng

Software engineer passionate about building great products and sharing what I learn along the way.