This is an interactive, web-based environment for exploring and sharing TLA+ specifications. The goal is to provide a better way to quickly interact with a TLA+ spec and to easily share results, for example by sharing counterexample traces in a convenient, portable, and repeatable manner.
A live version of the tool is currently hosted here, and below are some example specs to try out:
- Lock server
- Cabbage Goat Wolf Puzzle (animated)
- Two phase commit (animated)
- Paxos
- Raft (animated)
- EWD998 (animated)
- Snapshot Isolation
You can also explore some interesting (and infamous) protocol traces:
- (Cabbage Goat Wolf) Solution to the cabbage goat wolf puzzle
- (Raft) Log entry is written and later rolled back
- (Snapshot Isolation) Read-only anomaly under snapshot isolation
- (Snapshot Isolation) Write skew anomaly under snapshot isolation
The current version of the tool uses the TLA+ tree-sitter grammar to parse TLA+ specs and implements a TLA+ interpreter/executor on top of it in JavaScript. This allows the tool to interpret specs natively in the browser, without relying on an external language server. The JavaScript interpreter is likely much slower than TLC, but efficient model checking isn't currently a goal of the tool.
The current tool expects a specification to define its initial state predicate and next state relation as definitions named `Init` and `Next`, respectively. If your specification defines them under different names, they will not be recognized and no initial state or next state evaluation will occur. In that case, you can still use the tool in REPL mode.
Eventually this will be made configurable, but for now the tool looks for these hard-coded definition names. Also, support for user module imports is incomplete, so specs are largely expected to be written as a single module. The interpreter does, however, support most operators from the TLA+ standard modules by default.
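For reference, below is a minimal sketch of a spec in the shape the tool expects: a single module that extends a standard module and defines `Init` and `Next`. The module name, variable, and bound here are purely illustrative.

```tla
---------------------------- MODULE Counter ----------------------------
\* Minimal single-module spec in the shape the tool expects.
EXTENDS Naturals

VARIABLE count

\* Initial state predicate, which must be named 'Init'.
Init == count = 0

\* Next state relation, which must be named 'Next'.
Next == count < 3 /\ count' = count + 1

=========================================================================
```

A spec written in this form should let the tool compute the initial states and then step through the next-state relation interactively.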
You can also see a live demo of the tool and its features in this presentation, which gives a high-level overview of the tool's architecture and implementation details.
Currently, nearly all testing of the tool is done via conformance testing against TLC. That is, for a given specification, we generate its reachable state graph using TLC and check it for equivalence against the reachable state graph generated by the JavaScript interpreter. You can see the results of all currently run tests on this page, and the underlying test specs here.