HLIR NextGen blog post
drernie committed Dec 3, 2024
1 parent 828b3f1 commit c333181
Showing 2 changed files with 39 additions and 95 deletions.
46 changes: 8 additions & 38 deletions doc/HLIR/HCDL.md
# HCDL - A Homoiconic Co-Development Language for MLIR Dialects

## OBSOLETE: See RELIGN.md for the latest version.

Expanding on **[HLIR](https://ihack.us/2024/11/29/tsm-10-1-hlir-homoiconic-high-level-intermediate-representation/)** (or **[TableGen](https://ihack.us/2024/11/30/tsm-10-2-hlir-nextgen-a-tablegen-replacement-for-mlir/)**) into a **Homoiconic Co-Development Language (HCDL)** for hardware and software is an incredibly powerful idea. By leveraging the **homoiconicity** of HLIR and [MLIR](https://mlir.llvm.org), HCDL could act as both a high-level specification tool for dialects and a low-level hardware description language (HDL) for fully defining the architecture implementing those dialects. This would unify **software-level IR design** with **hardware architecture specification**, enabling a truly end-to-end system.

## 1. What Is HCDL?

HCDL (**Homoiconic Co-Development Language**) is envisioned as:

- A language to **specify dialects and their hardware implementations**.
- A **hardware/software co-design tool**, bridging software semantics and hardware architectures.

### 2.1 Dialect + Hardware Specification

HCDL would extend HLIR/TableGen not only to define MLIR dialect semantics but also to map those semantics directly to hardware constructs.

```sh
.toy {
  .add ^ (.input1 <[f32] <tensor>>, .input2 <[f32] <tensor>>) -> <[f32] <tensor>> {
    f32_tensor_add(input1, input2)
  }
}
```
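
For the software half of this mapping, the generated interpreter ultimately needs a concrete kernel behind `f32_tensor_add`. Here is a minimal Python sketch of what that kernel could reduce to (the flat-list tensor representation is an assumption of this sketch, not something HCDL prescribes):

```python
# Illustrative sketch only: models the semantics an HCDL backend might emit
# for .add. Tensors are flat Python lists of floats; names are hypothetical.
def f32_tensor_add(input1: list[float], input2: list[float]) -> list[float]:
    if len(input1) != len(input2):
        raise ValueError("shape mismatch in f32 tensor addition")
    return [a + b for a, b in zip(input1, input2)]

print(f32_tensor_add([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [5.0, 7.0, 9.0]
```
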
HCDL extends HLIR/TableGen to define:
- Semantics for interpreters.
- Hardware implementations.

```shell
.customAI {
  .matrix_mul {
    inputs: [matrix<f32>, matrix<f32>]
    outputs: [matrix<f32>]
    semantics: [
      // Abstract operation logic
      let result = matrix_a * matrix_b;
      return result;
    ]
    hardware: [
      // Hardware implementation in Verilog
      module matrix_mul(input wire [31:0] matrix_a, matrix_b, output wire [31:0] result);
        // Custom hardware for matrix multiplication
      endmodule
    ]
  }
}
```
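
Purely as an illustration of the "semantics for interpreters" bullet above, a Python reference implementation of `.matrix_mul` might look like the following (the nested-list matrix representation and function name are assumptions of this sketch):

```python
# Illustrative only: a reference implementation of the `semantics:` section
# above, suitable for an interpreter. Matrices are lists of rows.
def matrix_mul(matrix_a: list[list[float]], matrix_b: list[list[float]]) -> list[list[float]]:
    if len(matrix_a[0]) != len(matrix_b):
        raise ValueError("inner dimensions do not match")
    return [
        [sum(a * b for a, b in zip(row, col)) for col in zip(*matrix_b)]
        for row in matrix_a
    ]

print(matrix_mul([[1.0, 2.0]], [[3.0], [4.0]]))  # [[11.0]]
```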

### 4.2 Step 2: Software Generation

HCDL generates:
88 changes: 31 additions & 57 deletions doc/HLIR/TableGenNext.md
# HLIR NextGen - A TableGen Replacement for MLIR

The
**[HLIR](https://ihack.us/2024/11/29/tsm-10-1-hlir-homoiconic-high-level-intermediate-representation/)
(High-Level Intermediate Representation)** framework written in [Homoiconic
C](https://ihack.us/2024/09/19/tsm-5-homoiconic-c-hc-syntax-cheat-sheet/) could
also serve as a next-generation replacement ("HLIR-NG") for LLVM's
[TableGen](https://llvm.org/docs/TableGen/), especially if it's designed to
handle the kind of semantic richness and extensibility required for a dynamic,
multi-level execution framework like [MLIR](https://mlir.llvm.org).

## 1. How HLIR Could Replace TableGen

The current **TableGen** system is primarily declarative, focusing on describing:
- Traits and type constraints.
- Limited structural and semantic information.
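
To make that contrast concrete, the following Python sketch (not TableGen syntax, just an illustrative model) captures roughly what such a declarative record holds, and what it leaves out:

```python
# Illustrative only: a Python model of the static information a declarative
# operation record carries. Not a real MLIR or TableGen API.
from dataclasses import dataclass, field

@dataclass
class OpRecord:
    name: str
    operands: list[str]
    results: list[str]
    traits: list[str] = field(default_factory=list)
    # Note what is missing: no executable semantics, no rich typing rules,
    # nothing an interpreter could run directly.

toy_add = OpRecord("toy.add", operands=["tensor", "tensor"],
                   results=["tensor"], traits=["Commutative"])
```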

**HLIR** introduces concepts that go far beyond this. Here’s how it could serve as an enhanced replacement:

### 1.1 Structural Parity with TableGen

HLIR’s features align perfectly with the requirements of an MLIR interpreter:

### 3.1 Behavioral Semantics

HLIR could encode the execution rules for MLIR dialects directly, making it
possible to interpret high-level IR without lowering it to LLVM.

#### Example: Execution Rules for Tensor Addition

```shell
.toy {
  .add ^ (.input1 <tensor>, .input2 <tensor>) -> <tensor> {
    element_wise_add(input1, input2)
  }
}
```
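
A rough Python sketch of that idea, assuming a rule table keyed by operation name (the textual operation form and helper names here are illustrative, not MLIR's actual parser or API):

```python
# Illustrative only: execution rules encoded as a table that a tiny
# interpreter consults to run a high-level op directly, with no lowering.
def element_wise_add(a, b):
    return [x + y for x, y in zip(a, b)]

SEMANTICS = {"toy.add": element_wise_add}  # op name -> execution rule

def interpret(op_text: str, env: dict):
    op_name, args = op_text.split(" ", 1)
    values = [env[name.strip()] for name in args.split(",")]
    return SEMANTICS[op_name](*values)

env = {"%a": [1, 2, 3], "%b": [4, 5, 6]}
print(interpret("toy.add %a, %b", env))  # [5, 7, 9]
```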

### 3.2 Dynamic Typing and Inference
HLIR encodes typing rules and inference for operations, allowing for dynamic
checks during interpretation and static checks during compilation.


#### Example: Tensor Type Inference

    operation toy.add {
      inputs: [tensor<*>]   // Accept tensors of any shape
      outputs: [tensor<*>]
      typing_rule: [
        if input1.shape != input2.shape {
          error("Shape mismatch in tensor addition")
        }
      ]
    }
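
The same rule could run dynamically in an interpreter or statically in a compiler pass. A minimal Python sketch, assuming shapes are plain tuples (the function name is hypothetical):

```python
# Illustrative only: the typing_rule above as a Python check plus a trivial
# shape-inference step.
def infer_add_type(shape1: tuple, shape2: tuple) -> tuple:
    if shape1 != shape2:
        raise TypeError("Shape mismatch in tensor addition")
    return shape1  # the result shape equals the (matching) input shapes

print(infer_add_type((2, 3), (2, 3)))  # (2, 3)
```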

### 3.3 Interpreter Generation

HLIR could directly generate:


## 6. Example Workflow with HLIR

- Define a Dialect in HLIR
- Generate the MLIR Interpreter
- Execute Operations Dynamically (see the Python sketch below)

```shell
. <- .toy
.tensor_a <tensor> [1, 2, 3];
.tensor_b <tensor> [4, 5, 6];

.result toy.add(tensor_a, tensor_b);
result
```
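
In Python terms, the three steps might look like the sketch below. It is self-contained and hypothetical: a generated runtime (for example, a hypothetical `hlir.runtime` module) is assumed rather than imported, so a stand-in `Interpreter` class is defined inline to show the intended shape of the generated API.

```python
# Illustrative only: a self-contained stand-in for the kind of runtime HLIR
# tooling could generate. The Interpreter API is a sketch, not a real package.
class Interpreter:
    def __init__(self, dialect: str):
        self.dialect = dialect
        self.rules = {"toy.add": lambda a, b: [x + y for x, y in zip(a, b)]}

    def execute(self, op: str, operands: list):
        return self.rules[op](*operands)

interpreter = Interpreter("toy")          # load the toy dialect
tensor_a, tensor_b = [1, 2, 3], [4, 5, 6]
result = interpreter.execute("toy.add", [tensor_a, tensor_b])
print(result)                             # [5, 7, 9]
```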

---

