This article introduces effective testing libraries and methods for those new to Clojure. We'll explore using the kaocha test runner in both the REPL and the terminal, along with configurations to enhance feedback. Then we will explain how tests as documentation can be done using rich-comment-tests. We will touch on how to do data validation, generation and instrumentation using malli. Finally, I will talk about how I manage integration tests when external services are involved.
## Test good code
### Pure functions

First of all, always remember that it is important to have as many pure functions as possible: the same input passed to a function always returns the same output. This will simplify testing and make your code more robust. Here is an example of unpredictable impure logic:
```clojure
(require '[clojure.edn :as edn])

(defn fib
  "Reads the Fibonacci list length to be returned from a file,
   so the output depends on the content of env.edn on disk."
  [env-key]
  ;; impure: the body below is a reconstruction for illustration,
  ;; reading the length from the env.edn file
  (when-let [n (some-> (slurp "env.edn") edn/read-string env-key :length)]
    (->> (iterate (fn [[a b]] [b (+ a b)]) [0 1])
         (map first)
         (take n))))

(comment
  ;; env.edn has the content {:FIB {:length 10}}
  (fib :FIB) ;=> (0 1 1 2 3 5 8 13 21 34)
  ;; env.edn is empty
  (fib :FIB) ;=> nil
  )
```
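By contrast, a pure version takes everything it needs as arguments, which makes the test trivial; a minimal sketch (the names are illustrative):

```clojure
(require '[clojure.test :refer [deftest is]])

(defn fib-seq
  "Returns the first `n` Fibonacci numbers."
  [n]
  (->> (iterate (fn [[a b]] [b (+ a b)]) [0 1])
       (map first)
       (take n)))

(deftest fib-seq-test
  ;; same input, same output: no file or env involved
  (is (= '(0 1 1 2 3) (fib-seq 5))))
```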
## Kaocha in terminal with options

```
clj -M:dev:test --watch --fail-fast
```

The `watch` mode makes Kaocha rerun the tests on file save. The `fail-fast` option makes Kaocha stop running the tests when it encounters a failing test. These 2 options are very convenient for unit testing. However, when a code base contains slower tests, if the slower tests are run first, the watch mode is not so convenient because it won't provide instant feedback. We saw that we can `focus` on tests with a specific metadata tag; we can also `skip` tests. Let's pretend our `system` test is slow and we want to skip it to only run the unit tests:

```
clj -M:dev:test --watch --fail-fast --skip-meta :system
```
Finally, I don't want to use the `plugins` (profiling and code coverage) in watch mode, as they clutter the terminal output, so I want to exclude them from the report. We can actually create another kaocha config file for our watch mode, `tests_watch.edn`:

```clojure
#kaocha/v1
{:tests [{:id :unit-watch :skip-meta [:system]}] ;; ignore system tests
 :watch? true ;; watch mode on
 :fail-fast? true} ;; stop running on first failure
```
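We can then point Kaocha at this alternate config file via its `--config-file` option:

```
clj -M:dev:test --config-file tests_watch.edn
```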
```clojure
  (:require [clojure.test :refer [deftest testing]]
            [com.mjdowney.rich-comment-tests.test-runner :as rctr]))

(deftest ^:rct rich-comment-tests
  (testing "all white box small tests"
    (rctr/run-tests-in-file-tree! :dirs #{"src"})))
```
And if we want to run just the `rct` tests, we can focus on the metadata (see the metadata in the deftest above):

```
clj -M:dev:test --focus-meta :rct
```
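For reference, a minimal sketch of what such a rich comment test looks like, assuming the rich-comment-tests `;=>` assertion syntax:

```clojure
(defn square [x] (* x x))

^:rct/test
(comment
  (square 4)  ;=> 16
  (square -2) ;=> 4
  )
```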
The `system` function is straightforward: it takes a config map and returns the fib sequence. Note the metadata of that function:

```clojure
{:malli/schema
 [:=> [:cat cfg/cfg-sch] [:sequential :int]]}
```
The arrow `:=>` means it is a function schema. So in this case, we expect a config as the unique argument, and we expect a sequence of ints as the returned value. When we `instrument` our namespace, we tell malli to check the given arguments and returned values and to throw an error if they do not respect the schema in the metadata. It is very convenient. To enable the instrumentation, we call `malli.dev/start!`, as you can see in the `core-test` namespace code snippet.
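Revisiting the pure `fib-seq` from earlier, a minimal self-contained sketch of this workflow (the schema is illustrative; `malli.dev/start!` picks up `:malli/schema` metadata):

```clojure
(require '[malli.dev :as mdev])

(defn fib-seq
  "Returns the first `n` Fibonacci numbers."
  {:malli/schema [:=> [:cat :int] [:sequential :int]]}
  [n]
  (->> (iterate (fn [[a b]] [b (+ a b)]) [0 1])
       (map first)
       (take n)))

(mdev/start!) ;; instruments every function carrying :malli/schema metadata

(fib-seq "3") ;; now throws an invalid-input error, since "3" is not an :int
```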
## When to use data validation/generation/instrumentation
Clojure is a dynamically typed language, allowing us to write functions without being constrained by rigid type definitions. This flexibility encourages rapid development, experimentation, and iteration. Thus, it makes testing a bliss, because we can easily mock function inputs or provide partial inputs. However, if we start adding type checks to all functions in all namespaces (in our case with malli metadata, for instance), we introduce strict typing to our entire code base, and therefore all the constraints that come with it.

Personally, I recommend adding validation for the entry point of the app only. For instance, if we develop a library, we will most likely have a top-level namespace called `my-app.core` or `my-app.main` with the different functions our clients can call. These functions are the ones we want to validate. All the internal logic, not supposed to be called by the clients (even though they can), does not need to be spec'ed, as we want to maintain the flexibility I mentioned earlier.

A second example could be that we develop an app that has a `-main` function that will be called to start our system. A system can be whatever our app needs to perform: it can start servers, connect to databases, perform batch jobs etc. Note that in that case the entry point of our program is the `-main` function. What we want to validate is that the proper params are passed to the system that our `-main` function will start. Going back to our Fib app example, our system is very simple: it just returns the Fib sequence given the length. The length is what needs to be validated in our case, as it is provided externally via an env variable. That is why we saw that the system function had malli metadata. However, our internal functions have tests but no spec, to keep the dynamic-language flexibility that Clojure offers.

Finally, note the distinction between `instrumentation`, which is used for development (the metadata with the function schemas), and data validation for production (the call to `cfg/validate-cfg`). For overhead reasons, we don't want to instrument our functions in production; it is a development tool. However, we do want our system to throw an error when wrong params are provided, hence the call to `cfg/validate-cfg`.
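A minimal sketch of what such a production-time check could look like (the `validate-cfg` name comes from the article; the body and schema are illustrative):

```clojure
(require '[malli.core :as m])

(def cfg-sch [:map [:FIB [:map [:length :int]]]]) ;; illustrative schema

(defn validate-cfg
  [cfg]
  (if (m/validate cfg-sch cfg)
    cfg
    (throw (ex-info "Invalid config" (m/explain cfg-sch cfg)))))
```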
## Load/stress/integration tests

In functional programming, and especially in Clojure, it is important to avoid side effects (mutations, external factors, etc.) as much as we can. Of course, we cannot avoid mutations entirely, as they are inevitable: starting a server, connecting to a database, IO, updating frontend web state and much more. What we can do is isolate these side effects so the rest of the code base remains pure and thus enjoys flexible, predictable behavior.
### Mocking data

Some might argue that we should never mock data. From my humble personal experience, this is impossible for complex apps. An app I worked on consumes messages from different kafka topics, reads from and writes to a datomic database, makes http calls to multiple remote servers and produces messages to several kafka topics. So if I don't mock anything, I need several remote http servers in a test cluster just for testing, a real datomic database with production-like data, and all the other apps that produce the kafka messages my consumers process. In other words, it is not possible. We can mock functions using with-redefs, which is very convenient for testing. Using the clojure.test use-fixtures is also great to start and tear down services after the tests are done.
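A minimal sketch of mocking with `with-redefs` (the function being redefined is hypothetical):

```clojure
(require '[clojure.test :refer [deftest is]])

(defn fetch-rate
  "Pretend this calls a remote server."
  [currency]
  (throw (ex-info "no network in tests" {:currency currency})))

(deftest conversion-test
  (with-redefs [fetch-rate (constantly 1.35)]
    (is (= 135.0 (* 100.0 (fetch-rate :sgd))))))
```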
### Integration tests

I mentioned above an app using datomic and kafka, for instance. In my integration tests, I want to be able to produce kafka messages, and I want to interact with an actual datomic db to ensure the proper behavior of my app. The common approach for this is to use `embedded` versions of these services. Our test fixtures can start/delete an embedded datomic database, and start/stop kafka consumers/producers as well.
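A sketch of such a fixture with `clojure.test/use-fixtures` (the start/stop helpers are hypothetical):

```clojure
(require '[clojure.test :refer [use-fixtures]])

(use-fixtures :once
  (fn [tests]
    (let [db        (start-embedded-db!)      ;; hypothetical helper
          consumers (start-consumers! db)]    ;; hypothetical helper
      (try
        (tests)
        (finally
          (stop-consumers! consumers)
          (delete-embedded-db! db))))))
```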
What about the http calls? We can `with-redefs` those to return some valid but randomly generated values. Integration tests aim at ensuring that all the components of our app work together as expected, and embedded versions of external services plus redefinitions of vars make the tests predictable and suitable for CI. I have not touched on running tests in the CI, but integration tests should be run in the CI, and if all services are embedded, there should be no difficulty in setting up a pipeline.

### Load/stress tests

To be sure an app performs well under heavy load, embedded services won't work, as they are limited in terms of performance, parallel processing etc. In our example above, if I want to start lots of kafka consumers and use a big datomic transactor to cater for lots of transactions, embedded datomic and embedded kafka won't suffice. So I have to run a datomic transactor on my machine (maybe I want the DB to be pre-populated with millions of entities as well), and I will need to run kafka on my machine as well (maybe using the confluent cp-all-in-one container setup). Let's get fancy and also run prometheus/grafana to monitor the performance of the stress tests. Your intuition is correct: it would be a nightmare for each developer of the project to set up all these services. One solution is to containerize all of them: a datomic transactor can run in docker, confluent provides a docker-compose setup to run kafka (zookeeper, broker, control center etc.), and a prometheus scraper can run in a container, as can grafana. So providing docker-compose files in our repo, so that each developer can just run `docker-compose up -d` to start all the necessary services, is the solution I recommend. Note that I do not containerize my clojure app, so I do not have to change anything in my workflow. I deal with load/stress tests the same way I deal with my unit tests: I just start the services in the containers and my Clojure REPL as per usual. This setup is not the only solution to load/stress tests, but it is the one I successfully implemented in my project, and it really helps us be efficient.

## Conclusion
I highlighted some common testing tools and methods that the Clojure community uses, and I explained how I personally incorporated these tools and methods into my projects. Tools are common to everybody, but how we use them is opinionated and will differ depending on the project and team decisions. If you are starting your journey as a Clojure developer, I hope you can appreciate the quality of the open-source testing libraries we have access to. Also, please remember that keeping things pure is the key to easy testing and debugging; a luxury not so common in the programming world. Inevitably, you will need to deal with side effects, but isolate them as much as you can to make your code robust and your tests straightforward. Finally, there are some tools I didn't mention to keep things short, so feel free to explore what the Clojure community has to offer. The last advice I would give is to not try to use too many tools, or only the shiny new ones you might find. Keep things simple and evaluate whether a library is worth being added to your deps.
It is always very confusing to deal with time in programming. In fact, there are so many time representations, for legacy reasons, that sticking to one is not possible, as our dependencies, databases or even programming languages might use different ways of representing time! You might have asked yourself the following questions:

- What are the differences between a `timestamp`, `date-time`, `offset-date-time`, `zoned-date-time`, `instant` and `inst`?
- What are `UTC` and `DST`?
- Why use `Instant` instead of Java `Date`?
- When should I use a `timestamp`?
- What distinguishes a `duration` from a `period`?

This article will answer these questions and will illustrate the answers with Clojure code snippets using the `juxt/tick` library.

## What is Tick?

juxt/tick is an excellent open-source Clojure library to deal with `date` and `time` as values. The documentation is of very good quality as well.

## Time since epoch (timestamp)

The `time since epoch`, or `timestamp`, is a way of measuring time by counting the number of time units that have elapsed since a specific point in time, called the epoch. It is often represented in either milliseconds or seconds, depending on the level of precision required for a particular application. So basically, it is just an `int` such as `1705752000000`. The obvious advantage is the universal simplicity of representing time. The disadvantage is the human readability. So we need to find a more human-friendly representation of time.
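To make that `int` concrete, plain `java.time` interop is enough to decode it (a sketch; this particular value happens to be noon UTC on 2024-01-20):

```clojure
(java.time.Instant/ofEpochMilli 1705752000000)
;=> #object[java.time.Instant 0x... "2024-01-20T12:00:00Z"]
```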
## Local time

Alice is having some fish and chips for her lunch in the UK. She checks the clock on the wall and it shows 12pm. She checks her calendar and it shows that the day is January the 20th. The local time is the time in a specific time zone, usually represented using a date and a time-of-day without any time zone information. In Java it is called `java.time.LocalDateTime`. However, `tick` mentions that when you ask someone the time, it is always going to be "local", so they prefer to call it `date-time`, as the "local" part is implicit. So if we ask Alice for the time and date, she will reply:

```clojure
(-> (t/time "12:00")
    (t/on "2024-01-20"))
```
(t/in "Europe/London")
(t/>> (t/new-duration 1 :days)))
#time/zoned-date-time "2024-03-31T09:00+01:00[Europe/London]"
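For comparison, a sketch of the same shift expressed as a `period` (date-based) instead of a `duration` (time-based), assuming tick's `t/new-period`:

```clojure
(-> (t/date-time "2024-03-30T08:00")
    (t/in "Europe/London")
    (t/>> (t/new-period 1 :days)))
;=> #time/zoned-date-time "2024-03-31T08:00+01:00[Europe/London]"
```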
We can see that with this specific DST change to summer time, the clocks in London jumped forward an hour on 03/31, so shifting by a `duration` of 24 hours lands on `09:00`. However, the `period`, taking into consideration the date in a calendar system, does not see a day as 24 hours (time-based) but as a calendar unit (date-based), and therefore the new time is still `08:00`. A Zone encapsulates the notion of UTC and DST.

## Conclusion

The time since epoch is the universal computer-friendly way of representing time, whereas the Instant is the universal human-friendly way of representing time. A `duration` measures an amount of time using time-based values, whereas a `period` uses date-based (calendar) values. Finally, for Clojure developers, I highly recommend using `juxt/tick`, as it allows us to handle time efficiently (conversions, operations) and elegantly (readable, as values), and I use it in several of my projects. It is also of course possible to interop with the `java.time.Instant` class directly if you prefer.
## Goal
If you are not familiar with fun-map, please refer to the doc Fun-Map: a solution to deps injection in Clojure. In this document, I will show you how we leverage `fun-map` to create different systems in the website flybot.sg: `prod-system`, `dev-system`, `test-system` and `figwheel-system`.

## Prod System

In our backend, we use `life-cycle-map` to manage the life cycle of all our stateful components.
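To give an idea of the shape of such a map, here is a minimal sketch (assuming the fun-map API from its README; the connect/start helpers are hypothetical):

```clojure
(require '[robertluo.fun-map :refer [life-cycle-map fnk closeable touch halt!]])

(def system
  (life-cycle-map
   {:db-conn (fnk []
               (let [conn (connect-db! "db-uri")]          ;; hypothetical helper
                 (closeable conn #(disconnect-db! conn)))) ;; hypothetical helper
    :server  (fnk [db-conn]
               (let [srv (start-server! db-conn)]          ;; hypothetical helper
                 (closeable srv #(stop-server! srv))))}))  ;; hypothetical helper

(touch system) ;; realizes the values, starting components in dependency order
(halt! system) ;; closes the components in reverse order
```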
### Describe the system

Here is the system we currently have for production:

```clojure
(defn system
  [{:keys [http-port db-uri google-creds oauth2-callback client-root-path]
```
```clojure
(-> figwheel-system
    touch
    :reitit-router)))
```
The `figwheel-handler` is the value of the key `:reitit-router` of our running system. So the system is started first via `touch`, and its handler is provided to the servers figwheel starts, which will be running while we work on our frontend.

## Goal
If you are not familiar with lasagna-pull, please refer to the doc Lasagna Pull: Precisely select from deep nested data. In this document, I will show you how we leverage `lasagna-pull` in the flybot app to define a pure data API.

### Defines API as pure data

A good use case of the pattern is as a parameter in a post request. In our backend, we have a structure representing all our endpoints:

```clojure
;; BACKEND data structure
(defn pullable-data
```
{:response ('&? resp)
:effects-desc effects
:session (merge session sessions)}))
You can also notice that the data is being validated via `pull/with-data-schema`. In case of a validation error, since we do not have any side effects during the pulling, an error will be thrown and no mutations will be done. Having no side effects at all makes it way easier to test and debug, and it is more predictable. Finally, the `ring-handler` will be the component responsible for executing all the side effects at once. So the `saturn-handler` purpose was to be sure the data is pulled properly, validated using malli, and that the side-effect descriptions are gathered in one place to be executed later on.
Our app skydread1/flybot.sg is a full-stack Clojure web and mobile app. We opted for a mono-repo to host:

- the `server`: Clojure app
- the `web` client: Reagent (React) app using Re-Frame
- the `mobile` client: Reagent Native (React Native) app using Re-Frame

Note that the web app does not use NPM at all. However, the React Native mobile app does use NPM, and the `node_modules` need to be generated. By using only one `deps.edn`, we can easily start the different parts of the app.

## Goal
The goal of this document is to highlight the mono-repo structure and how to run the different parts (dev, test, build etc).

## Repo structure

```
├── client
│   ├── common
...
│   │   └── flybot.server
│   └── test
│       └── flybot.server
```
The `server` dir contains the `.clj` files, the `common` dir the `.cljc` files, and the `clients` dir the `.cljs` files.

## Deps Management

You can have a look at the deps.edn. We can use namespaced aliases in `deps.edn` to make the process clearer. I will go through the different aliases, explain their purpose, and show how I used them to develop the app.

### Common libraries

First, the root deps of the deps.edn, inherited by all aliases: the clj and cljc deps. They are used in both frontend and backend.

### Backend

The backend paths are `server/src` and `common/src` (clj and cljc files). So every time you start a `deps` REPL or a `deps+figwheel` REPL, these deps will be loaded.

### Sample data

In the common/test/flybot/common/test_sample_data.cljc namespace, we have sample data that can be loaded in both the backend and frontend dev systems. This is made possible by reader conditionals clj/cljs.

### IDE integration

I use the `calva` extension in VSCode to jack-in deps and figwheel REPLs, but you can use Emacs if you prefer, for instance. What is important to remember is that, when you work on the backend only, you just need a `deps` REPL. There is no need for figwheel since we do not modify the cljs content. So in this scenario, the frontend is fixed (the main.js is generated and not being reloaded) but the backend changes (the `clj` files and `cljc` files). However, when you work on the frontend, you need to load the backend deps to have your server running, but you also need to recompile the js when a cljs file is saved. Therefore you need both: a `deps+figwheel` REPL. So in this scenario, the backend is fixed and running but the frontend changes (the `cljs` files and `cljc` files). You can see that the common `cljc` files are being watched in both scenarios, which makes sense since they "become" clj or cljs code depending on what REPL type you are currently working in.

### Server aliases

Following are the aliases used for the server:

- `:jvm-base`: JVM options to make datalevin work with java version > java8
- `:server/dev`: clj paths for the backend systems and tests
- `:server/test`: Run clj tests
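As an illustration, such an alias could look like this in deps.edn (a sketch, not the repo's exact content):

```clojure
{:aliases
 {:server/test
  {:extra-paths ["server/test" "common/test"]
   :extra-deps  {lambdaisland/kaocha {:mvn/version "1.87.1366"}}
   :main-opts   ["-m" "kaocha.runner"]}}}
```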
### Client common aliases

Following is the alias used for both web and mobile clients:

- `:client`: deps for frontend libraries common to web and react native

The extra-paths contain the `cljs` files. We can note the `client/common/src` path, which contains most of the `re-frame` logic, because most subscriptions and events work on both web and react native right away! The main differences between the re-frame logic for Reagent and Reagent Native have to do with how to deal with Navigation and oauth2 redirection. That is the reason we have most of the logic in a common dir in `client`.

### Mobile Client

Following are the aliases used for the mobile client:

- `:mobile/rn`: contains the cljs deps only used for react native. They are added on top of the client deps.
- `:mobile/ios`: starts the figwheel REPL to work on iOS

### Web Client

Following are the aliases used for the web client:

- `:web/dev`: starts the dev REPL
- `:web/prod`: generates the optimized js bundle main.js
- `:web/test`: runs the cljs tests
- `:web/test-headless`: runs the headless cljs tests (for GitHub CI)

### CI/CD aliases

Following is the alias used to build the js bundle or an uberjar; the build.clj contains the different build functions:

- `:build`: clojure/tools.build is used to build the main.js and also an uber jar for local testing. We use `clj -T:build js-bundle`, `clj -T:build uber` or `clj -T:build uber+js`.

Following is the alias used to build an image and push it to local docker or AWS ECR (Jibbit):

- `:jib`: build image and push to image repo

Following is the alias used to point out outdated dependencies (Antq):

- `:outdated`: prints the outdated deps and their last available version

### Notes on Mobile CD

We have not released the mobile app yet; that is why there are no aliases related to CD for react native yet.

## Conclusion

This is one solution to handle server and clients in the same repo. Feel free to consult the complete deps.edn content. It is important to have a clear directory structure to only load the required namespaces and avoid errors. Using `:extra-paths` and `:extra-deps` in deps.edn is important because it prevents deploying unnecessary namespaces and libraries on the server and client. Adding namespaces to the aliases makes the distinction between backend, common and client (web and mobile) clearer. Using `deps` jack-in for server-only work and `deps+figwheel` for frontend work is made easy using `calva` in VSCode (it works in other editors as well).
This project is stored alongside the backend and the web frontend in the mono-repo: skydread1/flybot.sg. The codebase is a full-stack Clojure(Script) app. The backend is written in Clojure, and the web and mobile clients are written in ClojureScript. For the web app, we use reagent, a ClojureScript interface for React. For the mobile app, we use reagent-react-native, a ClojureScript interface for React Native. The mono-repo structure is as follows:

```
├── client
│   ├── common
...
```
```clojure
 (fn [{:keys [db]} [_ cookie-value]]
   {:db (assoc db :user/cookie cookie-value)
    :fx [[:dispatch [:evt.app/initialize]]]}))
```
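For context, a complete re-frame registration of that shape would look like this (the event name is hypothetical):

```clojure
(require '[re-frame.core :as rf])

(rf/reg-event-fx
 :evt.user/set-cookie
 (fn [{:keys [db]} [_ cookie-value]]
   {:db (assoc db :user/cookie cookie-value)
    :fx [[:dispatch [:evt.app/initialize]]]}))
```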
## Styling

As for now, the styling is directly done in the `:style` keys of the RN components' hiccups. Some more complex components have some styling that takes functions, inside or outside the `:style` keyword.

## Conclusion

I hope that this unusual mobile app stack made you want to consider ClojureScript as a good alternative to build mobile apps. It is important to note that the state management logic (re-frame) is 90% the same for both the web app and the mobile app, which is very convenient. Finally, the web app is deployed but not the mobile app. All the codebase is open-source, so feel free to take inspiration.
I will use the flybot.sg website as an example of an app to deploy. We use `datalevin` as the embedded database, which resides alongside the Clojure code inside the container. Instead of using datomic pro and having the burden of separate containers for the app and the transactor, we decided to use juji-io/datalevin and its embedded storage on disk. Thus, we only need to deploy one container with the app.

To do so, we can use the library atomisthq/jibbit, based on GoogleContainerTools/jib (Build container images for Java applications). It does not use docker to generate the image, so there is no need to have docker installed to generate images.
jibbit can be added as an alias in deps.edn:

```clojure
:jib
{:deps       {io.github.atomisthq/jibbit {:git/tag "v0.1.14" :git/sha "ca4f7d3"}}
 :ns-default jibbit.core} ;; entry namespace, per jibbit's README
```
```
docker run \
  -e ADMIN_USER="secret" \
  -e SYSTEM="{:http-port 8123, :db-uri \"/datalevin/prod/flybotdb\", :oauth2-callback \"https://www.flybot.sg/oauth/google/callback\"}" \
  acc.dkr.ecr.region.amazonaws.com/flybot-website:test
```
Even if we have one single EC2 instance running, there are several benefits we can get from AWS load balancers. In our case, we have an Application Load Balancer (ALB) as the target of a Network Load Balancer (NLB). Adding an ALB as the target of an NLB is a recent AWS feature that allows us to combine the strengths of both LBs.

The internal ALB purposes: path redirection, and TLS termination with certificates from the AWS Certificate Manager (`ACM`). ACM allows us to request certificates for `www.flybot.sg` and `flybot.sg` and attach them to the ALB rules to perform path redirection in our case. This is convenient, as we do not need to install any ssl certificates or handle any redirects in the instance directly, nor change the code base.

Since the ALB has dynamic IPs, we cannot use it in our goDaddy `A` record for `flybot.sg`. One solution is to use AWS route53, because AWS added the possibility to register the ALB DNS name in an A record (which is not possible with external DNS managers). However, we already use goDaddy as our DNS host and we don't want to depend on route53 for that. Another solution is to place an internet-facing NLB in front of the ALB, because an NLB provides static IPs. The ALB works at layer 7, whereas the NLB works at layer 4.

Thus, we have for the NLB:

The target group is where the traffic from the load balancers is sent. We have 3 target groups.

Since the ELB is the internet-facing entry point, we use a `CNAME` record for `www` resolving to the ELB DNS name. For the root domain `flybot.sg`, we use an `A` record for `@` resolving to the static IP of the ELB (for the AZ where the EC2 resides).

You can have a look at the open-source repo: skydread1/flybot.sg
]]>Even if we have one single EC2 instance running, there are several benefits we can get from AWS load balancers.
In our case, we have an Application Load Balancer (ALB) as target of a Network Load Balancer (NLB). Easily adding an ALB as target of NLB is a recent feature in AWS that allows us to combine the strength of both LBs.
The internal ALB purposes:
ACM
)ACM allows us to requests certificates for www.flybot.sg
and flybot.sg
and attach them to the ALB rules to perform path redirection in our case. This is convenient as we do not need to install any ssl certificates or handle any redirects in the instance directly or change the code base.
Since the ALB has dynamic IPs, we cannot use it in our goDaddy A
record for flybot.sg
. One solution is to use AWS route53 because AWS added the possibility to register the ALB DNS name in a A record (which is not possible with external DNS managers). However, we already use goDaddy as DNS host and we don’t want to depend on route53 for that.
Another solution is to place an internet-facing NLB behind the ALB because NLB provides static IP.
ALB works at level 7 but NLB works at level 4.
Thus, we have for the NLB:
The target group is where the traffic from the load balancers is sent. We have 3 target groups.
Since the ELB is the internet-facing entry points, we use a CNAME
record for www
resolving to the ELB DNS name.
For the root domain flybot.sg
, we use a A
record for @
resolving to the static IP of the ELB (for the AZ where the EC2 resides).
You can have a look at the open-source repo: skydread1/flybot.sg
]]>While working on flybot.sg , I experimented with datomic-free
, datomic starter-pro
with Cassandra and datomic starter-pro with embedded storage.
You can read the rationale of Datomic from their on-prem documentation
Stuart Sierra explained very well how datomic works in the video Intro to Datomic.
Basically, Datomic works as a layer on top of your underlying storage (in this case, we will use Cassandra db).
Your application and a Datomic `transactor` are contained in a `peer`.
The transactor is the process that controls inbounds, and coordinates persistence to the storage services.
The process acts as a single authority for inbound transactions. A single transactor process allows the system to be ACID compliant and fully consistent.
The peer is the process that will query the persisted data.
Since Datomic leverages existing storage services, you can change persistent storage fairly easily.
Datomic is closed-source and commercial.
You can see the different pricing models in the page Get Datomic On-Prem.
There are a few ways to get started for free. The first one is to use the datomic-free version, which comes with in-mem database storage and a local-storage transactor. You don't need any license to use it, so it is a good choice to get familiar with the datomic Clojure API.
Then, there is `datomic pro starter`, renamed `datomic starter`, which is free and maintained for 1 year. After the one-year threshold, you won't benefit from support and you won't get new versions of Datomic. You need to register with Datomic to get the license key.
Datomic only supports Cassandra up to version 3.x.x. The Cassandra version supported by datomic starter-pro at the time of writing: 3.7.1. The closest stable version of Cassandra: 3.11.10.
Problem 1: Datomic does not support java 11, so we have to have a java 8 version on the machine. Solution: use jenv to manage multiple java versions:
```bash
# jenv to manage java version
brew install jenv
```
```properties
host=0.0.0.0
port=4334
alt-host=datomicdb
storage-access=remote
```
After updating the transactor properties, you should be able to see the app running on port 8123 and be able to perform transactions as expected.

Your Clojure library is assumed to be already compiled to dotnet.
To know how to do this, refer to the article: Port your Clojure lib to the CLR with MAGIC
In this article, I will show you:
Just use the command `nos dotnet/build` at the root of the Clojure project. The dlls are by default generated in a `/build` folder.

A `.csproj` file (XML) must be added at the root of the Clojure project. You can find an example here: clr.test.check.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
Once you have the required config files ready, you can use `Nostrand` to:

- build your dlls: `nos dotnet/build`
- pack your dlls in a nuget package and push it to a remote host: `nos dotnet/nuget-push`
- import your packages in Unity: `nuget restore`
`Magic.Unity` is the Magic runtime for Unity and is already nuget-packaged on its public repo.
The Lasagna stack library fun-map by @robertluo blurs the line between identity, state and function. As a result, it is a very convenient tool to define a `system` in your applications, by providing an elegant way to perform associative dependency injection.
In this document, I will show you the benefits of `fun-map`, and especially of the `life-cycle-map`, as a dependency injection system.
In any kind of programs, we need to manage the state. In Clojure, we want to keep the mutation parts of our code as isolated and minimum as possible. The different components of our application such as the db connections, queues or servers for instance are mutating the world and sometimes need each other to do so. The talk Components Just Enough Structure by Stuart Sierra explains this dependency injection problem very well and provides a Clojure solution to this problem with the library component.
fun-map is another way of dealing with inter-dependent components. In order to understand why `fun-map` is very convenient, it is interesting to look at other existing solutions first.
Let's first have a look at an existing solution to deal with the life cycle management of components in Clojure: the Component library, which is a very good library to define systems. In the Clojure world, we have stateful components (atom, channel etc.) and we don't want them scattered in our code without any clear way to link them, or to know the order in which to start these external resources.
The `component` of the component library is just a record that implements a `Lifecycle` protocol to properly start and stop the component. As a developer, you just implement the `start` and `stop` methods of the protocol for each of your components (DB, server or even domain model). A DB component could look like this, for instance:
```clojure
(defrecord Database [host port connection]
  component/Lifecycle
  (start [component] (assoc component :connection (connect host port)))        ;; illustrative helper
  (stop [component] (disconnect connection) (assoc component :connection nil))) ;; illustrative helper
```
```clojure
m
;=> b closed
; a closed v2
; nil
```
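For readers new to the library, a minimal sketch of the associative style (values are illustrative):

```clojure
(require '[robertluo.fun-map :refer [fun-map fnk]])

(def m (fun-map {:numbers [3 4]
                 :sum     (fnk [numbers] (apply + numbers))}))

(:sum m) ;=> 7, computed on first access from the :numbers value
```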
fun-map also supports other features, such as function call tracing, value caching or lookup for instance. More info in the readme. To see Fun-Map in action, refer to the doc Fun-Map applied to flybot.sg.
]]>flybot-sg/lasagna-pull by @robertluo aims at precisely select from deep data structure in Clojure.
In this document, I will show you the benefit of pull-pattern
in pulling nested data.
In Clojure, it is very common to have to precisely select data in nested maps. The Clojure core `select-keys` and `get-in` functions do not allow us to easily select in the deeper levels of the maps with custom filters or parameters.
One of the libraries of the `lasagna-stack` is flybot-sg/lasagna-pull. It takes inspiration from the datomic pull API and the library redplanetlabs/specter. `lasagna-pull` aims at providing a clearer pattern than the datomic pull API. It also allows the user to add options on the selected keys (filtering, providing params to values which are functions, etc.). It supports fewer features than the `specter` library, but the syntax is more intuitive and covers all the major use cases you might need to select the data you want. Finally, a metosin/malli schema can be provided to perform data validation directly using the provided pattern. This allows the client to prevent unnecessary pulling if the pattern does not match the expected shape (such as not providing the right params to a function, querying the wrong type etc.).
Selecting data in a nested structure is made intuitive via a pattern that describes the data to be pulled, following the shape of the data. Here are some simple cases to showcase the syntax:

```clojure
(require '[sg.flybot.pullable :as pull])
```
;=> {&? {:a (10 14)}}
Apply to sequence value of a query, useful for pagination:
((pull/query '[{:a ? :b ?} ? :seq [2 3]]) [{:a 0} {:a 1} {:a 2} {:a 3} {:a 4}])
;=> {&? ({:a 2} {:a 3} {:a 4})}
As you can see with the different options above, the transformations are specified within the selected keys. Unlike specter, however, we do not have a way to apply a transformation to all the keys, for instance.

We can optionally provide a metosin/malli schema to specify the shape of the data to be pulled. The client malli schema provided is actually internally "merged" into an internal schema that checks the pattern shape, so both the pattern syntax and the pattern shape are validated.

You can provide a context to the query: a `modifier` and a `finalizer`. This context can help you gather information from the query and apply a function to the results.

To see Lasagna Pull in action, refer to the doc Lasagna Pull applied to flybot.sg.
]]>Note: the steps for packing the code into nugget package, pushing it to remote github and fetching it in Unity are highlighted in another article.
Magic is a bootstrapped compiler written in Clojure that takes Clojure code as input and produces dotnet assemblies (.dll) as output.
Compiler Bootstrapping is the technique for producing a self-compiling compiler that is written in the same language it intends to compile. In our case, MAGIC is a Clojure compiler that compiles Clojure code to .NET assemblies (.dll and .exe files).
It means we need the old dlls of MAGIC to generate the new dlls of the MAGIC compiler. We repeat this process until the compiler is good enough.
The very first magic dlls were generated with the clojure/clojure-clr project which is also a Clojure compiler to CLR but written in C# with limitations over the dlls generated (the problem MAGIC is intended to solve).
There is an already existing Clojure-to-CLR compiler: clojure/clojure-clr. However, clojure-clr uses a technology called the DLR (dynamic language runtime) to optimize dynamic call sites, and it emits self-modifying code, which makes the assemblies unusable on mobile devices (IL2CPP in Unity). So we needed a compiler that emits assemblies that can target both desktop and mobile (IL2CPP), hence the Magic compiler.
We don’t want separate branches for JVM and CLR so we use reader conditionals.
You can find how to use the reader conditionals in this guide.
You will mainly need them for the `require` and `import` forms, as well as for function parameters. Don't forget to change the extension of your file from `.clj` to `.cljc`.
In `Emacs` (with the `spacemacs` distribution), you might encounter some lint issues if you are using reader conditionals, and some configuration might be needed.
The Clojure linter library clj-kondo/clj-kondo supports the reader conditionals.
All the instructions on how to integrate it into the editor of your choice can be found here.
To use clj-kondo with syl20bnr/spacemacs, you need the layer borkdude/flycheck-clj-kondo.
However, there is no way to add this configuration in the `.spacemacs` config file. The problem is that we need to set `:clj` as the default language to be checked.
In `VScode`, I did not need any config to make it work.
It has nothing to do with the `:default` reader conditional key, such as:

```clojure
#?(:clj (Clojure expression)
   :cljs (ClojureScript expression)
   :cljr (Clojure CLR expression)
   :default (fallthrough expression))
```
```clojure
    (require ns))
  (run-all-tests)))
```
To run the tests, just run the `nos` command at the root of your project:

```
nos dotnet/run-tests
```
An example of a Clojure library that has been ported to Magic is skydread1/clr.test.check, a fork of clojure/clr.test.check. My fork uses reader conditionals, so it can be run and tested on both the JVM and the CLR. Now that your library is compiled to dotnet, you can learn how to package it to nuget, push it to your host repo and import it in Unity in this article:
Now that your library is compiled to dotnet, you can learn how to package it to nuget, push it in to your host repo and import in Unity in this article:
]]>At Flybot Pte Ltd, we wanted to have a robot-player that can play several rounds of some of our card games (such as big-two
) at a decent level.
The main goal of this robot-player was to take over an AFK player, for instance. We are also considering using it for an offline mode with different levels of difficulty.
Vocabulary:

- `big-two`: popular Chinese card game (锄大地)
- `AI` or `robot`: refers to a robot-player in the card game

2 approaches were used:
The repositories are closed-source because they are private to Flybot Pte. Ltd. The approaches used are generic enough to be applied to any kind of game.
In this article, I will explain the general principle of MCTS applied to our specific case of `big-two`.
Monte Carlo Tree Search (MCTS) is an important algorithm behind many major successes of recent AI applications such as AlphaGo’s striking showdown in 2016.
Essentially, MCTS uses Monte Carlo simulation to accumulate value estimates to guide towards highly rewarding trajectories in the search tree. In other words, MCTS pays more attention to nodes that are more promising, so it avoids having to brute force all possibilities which is impractical to do.
At its core, MCTS consists of repeated iterations (ideally infinite, in practice constrained by computing time and resources) of 4 steps: `selection`, `expansion`, `simulation` and `update`.
For more information, this MCTS article explains the concept very well.
MCTS algorithm works very well on deterministic games with perfect information. In other words, games in which each player perfectly knows the current state of the game and there are no chance events (e.g. draw a card from a deck, dice rolling) during the game.
However, there are a lot of games missing one or both of these properties: such games are called stochastic (chance events) and/or games with imperfect information (partial observability of states).
Thus, in big-two, we don’t know the cards of the other players, so it is a game with imperfect information (more info in this paper).
So we can apply MCTS to big-two, but we will need to do at least 1 of the 2:
Our tree representation looks like this:

```clojure
{:S0 {::sut/visits 11 ::sut/score [7 3] ::sut/chldn [:S1 :S2]}
 :S1 {::sut/visits 5 ::sut/score [7 3] ::sut/chldn [:S3 :S4]}
```
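To make the selection step concrete, here is a minimal sketch of the standard UCT score MCTS uses to pick promising children (names are illustrative; `c` is the exploration constant):

```clojure
(defn uct-score
  "Upper Confidence Bound applied to trees (UCT)."
  [parent-visits {:keys [visits score]} c]
  (if (zero? visits)
    Double/POSITIVE_INFINITY ;; always try unexplored children first
    (+ (/ score visits)      ;; exploitation: average reward so far
       (* c (Math/sqrt (/ (Math/log parent-visits) visits)))))) ;; exploration bonus
```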
diff --git a/resources/public/main.js b/resources/public/main.js
index 1aaa289..59c3ce1 100644
--- a/resources/public/main.js
+++ b/resources/public/main.js
@@ -2447,7 +2447,7 @@ new S(null,1,5,T,[new S(null,2,5,T,["Flybot","https://github.com/skydread1/flybo
new S(null,1,5,T,[new S(null,2,5,T,["Flybot","https://github.com/skydread1/flybot.sg"],null)],null),"\nReagent React Native mobile app reusing re-frame logic from exiting web client.\n",su,"Reagent React Native Mobile App","reagent-native-app",new n(null,3,[dp,"/assets/loic-blog-logo.png",ap,"/assets/loic-blog-logo.png",Xl,"Logo referencing Aperture Science"],null),"blog-reagent-native"]),jj([Vl,Bm,Zn,Hp,Qp,Rp,vq,Lr,gt,Bt],[new S(null,4,5,T,["Clojure","Compiler","CLR","Unity"],null),new S(null,1,
5,T,["2022-04-08"],null),'\nIn this article, I will show you:\n\n1. how to handle CLR interop to prepare your Clojure code for the CLR\n2. how to use type hints to have your code more performant on the CLR\n3. how to manage dependencies\n4. how to compile to the CLR using Nostrand\n5. how to test in the CLR using Nostrand\n\nNote: the steps for packing the code into nugget package, pushing it to remote github and fetching it in Unity are highlighted in another article.\n\n## Rational\n\n### What is the Magic Compiler\n\nMagic is a bootsrapped compiler writhen in Clojure that take Clojure code as input and produces dotnet assemblies (.dll) as output.\n\nCompiler Bootstrapping is the technique for producing a self-compiling compiler that is written in the same language it intends to compile. In our case, MAGIC is a **Clojure** compiler that compiles **Clojure** code to .**NET** assemblies (.dll and .exe files).\n\nIt means we need the old dlls of MAGIC to generate the new dlls of the MAGIC compiler. We repeat this process until the compiler is good enough. \n\nThe very first magic dlls were generated with the [clojure/clojure-clr](https://github.com/clojure/clojure-clr) project which is also a Clojure compiler to CLR but written in **C#** with limitations over the dlls generated (the problem MAGIC is intended to solve).\n\n### Why the Magic Compiler\n\nThe already existing clojure-\x3eclr compiler [clojure/clojure-clr](https://github.com/clojure/clojure-clr). However, clojure-clr uses a technology called the DLR (dynamic language runtime) to optimize dynamic call sites but it emits self modifying code which make the assemblies not usable on mobile devices (IL2CPP in Unity). So we needed a way to have a compiler that emit assemblies that can target both Desktop and mobile (IL2CPP), hence the Magic compiler.\n\n## Step 1: Interop\n\n### Reader conditionals\n\nWe don’t want separate branches for JVM and CLR so we use reader conditionals.\n\nYou can find how to use the reader conditionals in this [guide](https://clojure.org/guides/reader_conditionals).\n\nYou will mainly need them for the `require` and `import` as well as the function parameters.\n\nDon’t forget to change the extension of your file from `.clj` to `.cljc`.\n\n### Clj-kondo Linter supporting reader conditionals\n\nIn `Emacs` (with `spacemacs` distribution), you might encounter some lint issues if you are using reader conditionals and some configuration might be needed.\n\nThe Clojure linter library [clj-kondo/clj-kondo](https://github.com/clj-kondo/clj-kondo) supports the reader conditionals.\n\nAll the instruction on how to integrate it to the editor you prefer [here](https://github.com/clj-kondo/clj-kondo/blob/master/doc/editor-integration.md).\n\nTo use [clj-kondo](https://github.com/clj-kondo/clj-kondo) with [syl20bnr/spacemacs](https://github.com/syl20bnr/spacemacs), you need the layer [borkdude/flycheck-clj-kondo](https://github.com/borkdude/flycheck-clj-kondo).\n\nHowever, there is no way to add configuration in the `.spacemacs` config file.\n\nThe problem is that we need to set `:clj` as the default language to be checked.\n\nIn `VScode` I did not need any config to make it work.\n\n### Setting up the default reader conditionals of the Clj-kondo linter\n\nIt has nothing to do with the `:default` reader conditional key such as:\n\n```clojure\n#?(:clj (Clojure expression)\n :cljs (ClojureScript expression)\n :cljr (Clojure CLR expression)\n :default (fallthrough expression))\n```\n\nIn the code above, the `:default` reader 
\n### Clj-kondo Linter supporting reader conditionals\n\nIn `Emacs` (with the `spacemacs` distribution), you might encounter some lint issues if you are using reader conditionals, and some configuration might be needed.\n\nThe Clojure linter library [clj-kondo/clj-kondo](https://github.com/clj-kondo/clj-kondo) supports the reader conditionals.\n\nAll the instructions on how to integrate it with your preferred editor are [here](https://github.com/clj-kondo/clj-kondo/blob/master/doc/editor-integration.md).\n\nTo use [clj-kondo](https://github.com/clj-kondo/clj-kondo) with [syl20bnr/spacemacs](https://github.com/syl20bnr/spacemacs), you need the layer [borkdude/flycheck-clj-kondo](https://github.com/borkdude/flycheck-clj-kondo).\n\nHowever, there is no way to add configuration in the `.spacemacs` config file.\n\nThe problem is that we need to set `:clj` as the default language to be checked.\n\nIn `VSCode`, I did not need any config to make it work.\n\n### Setting up the default reader conditionals of the Clj-kondo linter\n\nNote that this has nothing to do with the `:default` reader conditional key such as:\n\n```clojure\n#?(:clj (Clojure expression)\n :cljs (ClojureScript expression)\n :cljr (Clojure CLR expression)\n :default (fallthrough expression))\n```\n\nIn the code above, the `:default` reader is used if none of the other readers matches the platform the code is run on. There is no need to add the `:default` tag everywhere, as the code will only be run in 2 potential environments: `:clj` and `:cljr`.\n\nFor our linter, in your Clojure environment (in my case, Emacs with the [syl20bnr/spacemacs](https://github.com/syl20bnr/spacemacs) distribution), you can highlight the code for the `:clj` reader only.\n\nThe `:cljr` code will be displayed as comments.\n\nTo add the default `:clj` reader, we need to add it in the config file `~/.config/clj-kondo/config.edn` (to affect all our repos). It is possible to add config at project level as well, as stated [here](https://cljdoc.org/d/clj-kondo/clj-kondo/2020.09.09/doc/configuration).\n\nHere is the config to set up `:clj` as the default reader:\n\n```clojure\n{:cljc {:features #{:clj}}}\n```\n\nIf you don’t specify a default reader, `clj-kondo` will trigger lots of errors if you don’t provide the `:default` reader, because it assumes that you might run the code on a platform that doesn’t match any of the provided readers.\n\n## Step 2 (optional): Add type hints\n\nMagic supports the same shorthands as in Clojure: [Magic types shorthands](https://github.com/nasser/magic/blob/master/src/magic/analyzer/types.clj#L37).\n\n### Value Type hints\n\nWe want to add Magic type hints in our Clojure code to prevent slow argument boxing at run time.\n\nThe main places we want to add the type hints are the function arguments, such as in:\n\n```clojure\n(defn straights-n\n "Returns all possible straights with given length of cards."\n #?(:clj [n cards wheel?]\n :cljr [^int n cards ^Boolean wheel?])\n (...))\n```\n\nNote the reader conditionals here, so as not to affect our Clojure code and the tests run on the JVM.\n\nI did not remove the reader conditionals here (even though the shorthands are the same in both Clojure and Magic, so the code would run on the JVM too), because we don’t want our Clojure tests to be affected and we want to keep the dynamic idiom of Clojure. Also `wheel?` could very likely have the value `nil`, passed by one of the tests, which is in fact not a boolean.\n\nSo we want to keep our type hints in the `:cljr` reader to prevent Magic from doing slow reflection, but we don’t want to affect our `:clj` reader, which must remain dynamic and so type-free to not alter our tests.\n\n### Ref Type hints\n\nOne of the best benefits of type hinting for Magic is to type hint records and their fields.\n\nHere is an example of record field type hinting:\n\n```clojure\n(defrecord GameState #?(:clj [players next-pos game-over?]\n :cljr [players ^long next-pos ^boolean game-over?])\n(...))\n```\n\nAs you can see, not all fields are type hinted because for some, we don’t have a way to do so.\n\nThere is no way to type hint a collection parameter in Magic: `players` is a vector of `Player` records, and we don’t have a way to type hint such a type. 
In Clojure (Java), we can type hint a collection of a known type such as:\n\n```clojure\n;; Clojure file\nuser\x3e (defn f\n "`poker-cards` is a vector of `PokerCard`."\n [^"[Lmyproj.PokerCard;" poker-cards]\n (map :num poker-cards))\n;\x3d\x3e #\'myproj.combination/f\n\n;; Clojure REPL\nuser\x3e (f [(-\x3ePokerCard :d :3) (-\x3ePokerCard :c :4)])\n;\x3d\x3e (:3 :4)\n```\n\nHowever, in Magic, such a thing is not possible.\n\nParameters which are `maps` do not benefit much from type hinting, because a map could be a `PersistentArrayMap`, a `PersistentHashMap` or even a `PersistentTreeMap`, so we would need to use `^clojure.lang.APersistentMap` just to be generic, which is not really relevant.\n\nTo type hint a record as a parameter, it is advised to `import` it first to avoid having to write the fully qualified namespace:\n\n```clojure\n;; Import the Combination class so we can use the type hint format ^Combination\n#?(:cljr (:import [myproj.combination Combination]))\n```\n\nThen we can conveniently type hint a parameter which is a record, such as:\n\n```clojure\n(defn pass?\n "Returns true if the combi is a pass."\n #?(:clj [combi]\n :cljr [^Combination combi])\n (combi/empty-combi? combi))\n```\n\nA record field can also be of a known record type, such as:\n\n```clojure\n(defrecord Player #?(:clj [combi penalty?]\n :cljr [^Combination combi\n ^boolean penalty?]))\n```\n\n### Type hints and testing\n\nSince, in Clojure, we tend to use simplified parameters for our functions to isolate the logic being tested (a map instead of a record, nil instead of false, a namespaced keyword instead of a map, etc.), naturally lots of tests will fail in the CLR because of the type hints.\n\nWe don’t want to clutter our test suite with domain types, so we can just add reader conditionals to the tests affected by the type hints in the CLR.\n\n### Interop common cases\n\n#### Normal case\n\nFor interop, you can use the reader conditionals such as in:\n\n```clojure\n(defn round-perc\n "Rounds the given `number`."\n [number]\n #?(:clj (-\x3e number double Math/round)\n :cljr (-\x3e number double Math/Round long)))\n```\n\n#### Deftype equals methods override\n\nFor `deftype` to work in the CLR, we need to override different equality methods than the Java ones. In Java we use `hashCode` and `equals`, but in .NET we use `hasheq` and `equiv`.\n\nHere is an example of how to override such methods:\n\n```clojure\n(deftype MyRecord [f-conj m rm]\n ;; Override equals method to compare two MyRecord.\n #?@(:clj\n [Object\n (hashCode [_] (.hashCode m))\n (equals [_ other]\n (and (instance? MyRecord other) (\x3d m (.m other))))]\n :cljr\n [clojure.lang.IHashEq\n (hasheq [_] (hash m))\n clojure.lang.IPersistentCollection\n (equiv [_ other]\n (and (instance? 
MyRecord other) (\x3d m (.m other))))]))\n```\n\n#### Defrecord empty method override for IL2CPP\n\nFor `defrecord` to work in case we target **IL2CPP** (all our apps), you need to override the default implementation of the `empty` method, such as:\n\n```clojure\n(defrecord PokerCard [^clojure.lang.Keyword suit ^clojure.lang.Keyword num]\n #?@(:cljr\n [clojure.lang.IPersistentCollection\n (empty [_] nil)]))\n```\n\nNote the vector required with the **splicing** reader conditional `#?@`.\n\n## Step 3: Manage dependencies\n\nSince Magic was created before `tools.deps` or `leiningen`, it has its own deps management system, and the dedicated file for it is `project.edn`.\n\nHere is an example of a project.edn:\n\n```clojure\n{:name "My project"\n :source-paths ["src" "test"]\n :dependencies [[:github skydread1/clr.test.check "magic"\n :sha "a23fe55e8b51f574a63d6b904e1f1299700153ed"\n :paths ["src"]]\n [:gitlab my-private-lib1 "master"\n :paths ["src"]\n :sha "791ef67978796aadb9f7aa62fe24180a23480625"\n :token "r7TM52xnByEbL6mfXx2x"\n :domain "my.domain.sg"\n :project-id "777"]]}\n```\n\nRefer to the Nostrand [README](https://github.com/nasser/nostrand/blob/master/README.md) for more details.\n\nSo you need to add a `project.edn` at the root of your directory listing the other libraries.\n\n## Step 4: Compile to the CLR\n\n### Nostrand\n\n[nasser/nostrand](https://github.com/nasser/nostrand) is for Magic what [tools.deps](https://github.com/clojure/tools.deps.alpha) or [leiningen](https://github.com/technomancy/leiningen) are for a regular Clojure project. Magic has its own dependency manager and does not use tools.deps or leiningen because it was implemented before these dependency managers came out!\n\nYou can find all the information you need to build and test your libraries in .NET in the [README](https://github.com/nasser/nostrand/blob/master/README.md).\n\nIn short, you need to clone nostrand and create a dedicated Clojure namespace at the root of your project to run functions with Nostrand.\n\n### Build your Clojure project to .NET\n\nIn my case, I named my nostrand namespace `dotnet.clj`.\n\nYou can have a look at the [clr.test.check/dotnet.clj](https://github.com/skydread1/clr.test.check/blob/magic/dotnet.clj); it is a port of clojure/test.check that compiles on both the JVM and the CLR.\n\nWe have the following require:\n\n```clojure\n(:require [clojure.test :refer [run-all-tests]]\n [magic.flags :as mflags])\n```\n\nDon’t forget to set the 2 magic flags to true:\n\n```clojure\n(defn build\n "Compiles the project to dlls.\n This function is used by `nostrand` and is called from the terminal in the root folder as:\n nos dotnet/build"\n []\n (binding [*compile-path* "build"\n *unchecked-math* *warn-on-reflection*\n mflags/*strongly-typed-invokes* true\n mflags/*direct-linking* true\n mflags/*elide-meta* false]\n (println "Compile into DLL To : " *compile-path*)\n (doseq [ns prod-namespaces]\n (println (str "Compiling " ns))\n (compile ns))))\n```\n\nTo build to the `*compile-path*` folder, just run the `nos` command at the root of your project:\n\n```clojure\nnos dotnet/build\n```\n\n## Step 5: Test your Clojure project on .NET\n\nSame remark as for the build section:\n\n```clojure\n(defn run-tests\n "Run all the tests on the CLR.\n This function is used by `nostrand` and is called from the terminal in the root folder as:\n nos dotnet/run-tests"\n []\n (binding [*unchecked-math* *warn-on-reflection*\n mflags/*strongly-typed-invokes* true\n mflags/*direct-linking* true\n mflags/*elide-meta* false]\n (doseq [ns 
(concat prod-namespaces test-namespaces)]\n (require ns))\n (run-all-tests)))\n```\n\nTo run the tests, just run the `nos` command at the root of your project:\n\n```clojure\nnos dotnet/run-tests\n```\n\n## Example of a Clojure library ported to Magic\n\nAn example of a Clojure library that has been ported to Magic is [skydread1/clr.test.check](https://github.com/skydread1/clr.test.check/tree/magic), a fork of clojure/clr.test.check.\nMy fork uses reader conditionals so it can be run and tested on both the JVM and the CLR.\n\n## Learn more\n\nNow that your library is compiled to .NET, you can learn how to package it to NuGet, push it to your host repo, and import it in Unity in this article:\n- [Pack, Push and Import Clojure to Unity](https://www.loicblanchard.me/blog/clojure-in-unity).\n',
new S(null,2,5,T,[new S(null,2,5,T,["Magic","https://github.com/nasser/magic"],null),new S(null,2,5,T,["Nostrand","https://github.com/nasser/nostrand"],null)],null),"\nHow to port your Clojure lib to the CLR, then how to build it with the MAGIC compiler, allowing you to obtain DLLs compatible with Unity (no DLR used by MAGIC).\n",su,"Port your Clojure lib to the CLR with MAGIC","port-clj-lib-to-clr",new n(null,3,[dp,"/assets/loic-blog-logo.png",ap,"/assets/loic-blog-logo.png",Xl,"Logo referencing Aperture Science"],
-null),"blog-port-clj-lib"]),jj([Vl,Bm,Zn,Qp,Rp,vq,Lr,gt,Bt],[new S(null,6,5,T,"Clojure;Kaocha;Malli;Rich Comment Tests;Instrumentation;Data validation/generation".split(";"),null),new S(null,1,5,T,["2024-08-10"],null),'\n## Introduction\n\nThis article introduces effective testing libraries and methods for those new to Clojure.\n\nWe\'ll explore using the [kaocha](https://github.com/lambdaisland/kaocha) test runner in both REPL and terminal, along with configurations to enhance feedback. Then we will explain how tests as documentation can be done using [rich-comment-tests](https://github.com/matthewdowney/rich-comment-tests).\n\nWe will touch on how to do data validation, generation and instrumentation using [malli](https://github.com/metosin/malli).\n\nFinally, I will talk about how I manage integrations tests with eventual external services involved.\n\n## Test good code\n\n### Pure functions\n\nFirst of all, always remember that it is important to have as many pure functions as possible. It means, the same input passed to a function always returns the same output. This will simplify the testing and make your code more robust.\n\nHere is an example of unpredictable **impure** logic:\n\n```clojure\n(defn fib\n "Read the Fibonacci list length to be returned from a file,\n Return the Fibonacci sequence."\n [variable]\n (when-let [n (-\x3e (slurp "config/env.edn") edn/read-string (get variable) :length)]\n (-\x3e\x3e (iterate (fn [[a b]] [b (+\' a b)])\n [0 1])\n (map first)\n (take n))))\n\n(comment\n ;; env.edn has the content {:FIB 10}\n (fib :FIB) ;\x3d\x3e 10\n ;; env.edn is empty\n (fib :FIB) ;\x3d\x3e nil\n )\n```\n\nFor instance, reading the `length` value from a file before computing the Fibonacci sequence is **unpredictable** for several reasons:\n\n- the file could not have the expected value\n- the file could be missing\n- in prod, the env variable would be read from the system not a file so the function would always return `nil`\n- what if the FIB value from the file has the wrong format.\n\nWe would need to test too many cases unrelated to the Fibonacci logic itself, which is bad practice.\n\nThe solution is to **isolate** the impure code:\n\n```clojure\n(defn fib\n "Return the Fibonacci sequence with a lenght of `n`."\n [n]\n (-\x3e\x3e (iterate (fn [[a b]] [b (+\' a b)])\n [0 1])\n (map first)\n (take n)))\n\n^:rct/test\n(comment\n (fib 10) ;\x3d\x3e [0 1 1 2 3 5 8 13 21 34]\n (fib 0) ;\x3d\x3e []\n )\n\n(defn config\x3c-file\n "Reads the `config/env.edn` file, gets the value of the given key `variable`\n and returns it as clojure data."\n [variable]\n (-\x3e (slurp "config/env.edn") edn/read-string (get variable)))\n\n(comment\n ;; env.edn contains :FIB key with value {:length 10}\n (config\x3c-file :FIB) ;\x3d\x3e {:length 10}\n ;; env.edn is empty\n (config\x3c-file :FIB) ;\x3d\x3e {:length nil}\n )\n```\n\nThe `fib` function is now **pure** and the same input will always yield the same output. I can therefore write my unit tests and be confident of the result. 
You might have noticed I added `^:rct/test` above the comment block which is actually a unit test that can be run with RCT (more on this later).\n\nThe **impure** code is isolated in the `config\x3c-file` function, which handles reading the environment variable from a file.\n\nThis may seem basic, but it\'s the essential first step in testing: ensuring the code is as pure as possible for easier testing is one of the strengths of **data-oriented** programming!\n\n## Test runner: Kaocha\n\nFor all my personal and professional projects, I have used [kaocha](https://github.com/lambdaisland/kaocha) as a test-runner. \n\nThere are 2 main ways to run the tests that developers commonly use:\n\n- Within the **REPL** as we implement our features or fix bugs\n- In the **terminal**: to verify that all tests pass or to target a specific group of tests\n\nHere is the `deps.edn` I will use in this example:\n\n```clojure\n{:deps {org.clojure/clojure {:mvn/version "1.11.3"}\n org.slf4j/slf4j-nop {:mvn/version "2.0.15"}\n metosin/malli {:mvn/version "0.16.1"}}\n :paths ["src"]\n :aliases\n {:dev {:extra-paths ["config" "test" "dev"]\n :extra-deps {io.github.robertluo/rich-comment-tests {:git/tag "v1.1.1", :git/sha "3f65ecb"}}}\n :test {:extra-paths ["test"]\n :extra-deps {lambdaisland/kaocha {:mvn/version "1.91.1392"}\n lambdaisland/kaocha-cloverage {:mvn/version "1.1.89"}}\n :main-opts ["-m" "kaocha.runner"]}\n :jib {:paths ["jibbit" "src"]\n :deps {io.github.atomisthq/jibbit {:git/url "https://github.com/skydread1/jibbit.git"\n :git/sha "bd873e028c031dbbcb95fe3f64ff51a305f75b54"}}\n :ns-default jibbit.core\n :ns-aliases {jib jibbit.core}}\n :outdated {:deps {com.github.liquidz/antq {:mvn/version "RELEASE"}}\n :main-opts ["-m" "antq.core"]}\n :cljfmt {:deps {io.github.weavejester/cljfmt {:git/tag "0.12.0", :git/sha "434408f"}}\n :ns-default cljfmt.tool}}}\n```\n\n### Kaocha in REPL\n\nRegarding the bindings to run the tests From the REPL, refer to your IDE documentation. I have experience using both Emacs (spacemacs distribution) and VSCode and running my tests was always straight forward. If you are starting to learn Clojure, I recommend using VSCode, as the Clojure extension [calva](https://github.com/BetterThanTomorrow/calva) is of very good quality and well documented. I’ll use VSCode in the following example.\n\nLet’s say we have the following test namespace:\n\n```clojure\n(ns my-app.core.fib-test\n (:require [clojure.test :refer [deftest is testing]]\n [my-app.core :as sut]))\n\n(deftest fib-test\n (testing "The Fib sequence is returned."\n (is (\x3d [0 1 1 2 3 5 8 13 21 34]\n (sut/fib 10)))))\n```\n\nAfter I `jack-in` using my *dev* alias form the `deps.edn` file, I can load the `my-app.core-test` namespace and run the tests. Using Calva, the flow will be like this:\n\n1. *ctrl+alt+c* *ctrl+alt+j*: jack-in (select the `dev` alias in my case)\n2. *ctrl+alt+c* *enter* (in the `fib-test` namespace): load the ns in the REPL\n3. 
*ctrl+alt+c* *t* (in the `fib-test` namespace): run the tests\n\nIn the REPL, we see:\n\n```clojure\nclj꞉user꞉\x3e\n; Evaluating file: fib_test.clj\n#\'my-app.core.fib-test/system-test\nclj꞉my-app.core.fib-test꞉\x3e \n; Running tests for the following namespaces:\n; my-app.core.fib-test\n; my-app.core.fib\n\n; 1 tests finished, all passing \ud83d\udc4d, ns: 1, vars: 1\n```\n\n### Kaocha in terminal\n\nBefore committing code, it\'s crucial to run all project tests to ensure new changes haven\'t broken existing functionalities.\n\nI added a few other namespaces and some tests.\n\nLet’s run all the tests in the terminal:\n\n```clojure\nclj -M:dev:test\nLoading namespaces: (my-app.core.cfg my-app.core.env my-app.core.fib my-app.core)\nTest namespaces: (:system :unit)\nInstrumented my-app.core.cfg\nInstrumented my-app.core.env\nInstrumented my-app.core.fib\nInstrumented my-app.core\nInstrumented 4 namespaces in 0.4 seconds.\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)][(()(..)(..)(..))(.)(.)]\n4 tests, 9 assertions, 0 failures.\n```\n\nNote the `Test namespaces: (:system :unit)`. By default, Kaocha runs all tests. When no metadata is specified on the `deftest`, it is considered in the Kaocha `:unit` group. However, as the project grows, we might have slower tests that are system tests, load tests, stress tests etc. We can add metadata to their `deftest` in order to group them together. For instance:\n\n```clojure\n(ns my-app.core-test\n (:require [clojure.test :refer [deftest is testing]]\n [malli.dev :as dev]\n [malli.dev.pretty :as pretty]\n [my-app.core :as sut]))\n\n(dev/start! {:report (pretty/reporter)})\n\n(deftest ^:system system-test ;; metadata to add this test in the `system` kaocha test group \n (testing "The Fib sequence is returned."\n (is (\x3d [0 1 1 2 3 5 8 13 21 34]\n (sut/system #:cfg{:app #:app{:name "app" :version "1.0.0"}\n :fib #:fib{:length 10}})))))\n```\n\nWe need to tell Kaocha when and how to run the system test. Kaocha configurations are provided in a `tests.edn` file:\n\n```clojure\n#kaocha/v1\n {:tests [{:id :system :focus-meta [:system]} ;; only system tests\n {:id :unit}]} ;; all tests\n```\n\nThen in the terminal:\n\n```bash\nclj -M:dev:test --focus :system\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)]\n1 tests, 1 assertions, 0 failures.\n```\n\nWe can add a bunch of metrics on top of the tests results. 
These metrics can be added via the `:plugins` keys:\n\n```clojure\n#kaocha/v1\n {:tests [{:id :system :focus-meta [:system]}\n {:id :unit}]\n :plugins [:kaocha.plugin/profiling\n :kaocha.plugin/cloverage]}\n```\n\nIf I run the tests again:\n\n```clojure\nclj -M:dev:test --focus :system\nLoading namespaces: (my-app.core.cfg my-app.core.env my-app.core.fib my-app.core)\nTest namespaces: (:system :unit)\nInstrumented my-app.core.cfg\nInstrumented my-app.core.env\nInstrumented my-app.core.fib\nInstrumented my-app.core\nInstrumented 4 namespaces in 0.4 seconds.\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)]\n1 tests, 1 assertions, 0 failures.\n\nTop 1 slowest kaocha.type/clojure.test (0.02208 seconds, 97.0% of total time)\n system\n 0.02208 seconds average (0.02208 seconds / 1 tests)\n\nTop 1 slowest kaocha.type/ns (0.01914 seconds, 84.1% of total time)\n my-app.core-test\n 0.01914 seconds average (0.01914 seconds / 1 tests)\n\nTop 1 slowest kaocha.type/var (0.01619 seconds, 71.1% of total time)\n my-app.core-test/system-test\n 0.01619 seconds my_app/core_test.clj:9\nRan tests.\nWriting HTML report to: /Users/loicblanchard/workspaces/clojure-proj-template/target/coverage/index.html\n\n|-----------------+---------+---------|\n| Namespace | % Forms | % Lines |\n|-----------------+---------+---------|\n| my-app.core | 44.44 | 62.50 |\n| my-app.core.cfg | 69.57 | 74.07 |\n| my-app.core.env | 11.11 | 44.44 |\n| my-app.core.fib | 100.00 | 100.00 |\n|-----------------+---------+---------|\n| ALL FILES | 55.26 | 70.59 |\n|-----------------+---------+---------|\n```\n\n### Kaocha in terminal with options\n\nThere are a bunch of options to enhance the development experience such as:\n\n```bash\nclj -M:dev:test --watch --fail-fast\n```\n\n- `watch` mode makes Kaocha rerun the tests on file save.\n- `fail-fast` option makes Kaocha stop running the tests when it encounters a failing test\n\nThese 2 options are very convenient for unit testing.\n\nHowever, when a code base contains slower tests, if the slower tests are run first, the watch mode is not so convenient because it won’t provide instant feedback.\n\nWe saw that we can `focus` on tests with a specific metadata tag, we can also `skip` tests. Let’s pretend our `system` test is slow and we want to skip it to only run unit tests:\n\n```bash\n clj -M:dev:test --watch --fail-fast --skip-meta :system\n```\n\nFinally, I don’t want to use the `plugins` (profiling and code coverage) on watch mode as it clutter the space in the terminal, so I want to exclude them from the report.\n\nWe can actually create another kaocha config file for our watch mode.\n\n`tests-watch.edn`:\n\n```clojure\n#kaocha/v1\n {:tests [{:id :unit-watch :skip-meta [:system]}] ;; ignore system tests\n :watch? true ;; watch mode on\n :fail-fast? true} ;; stop running on first failure\n```\n\nNotice that there is no plugins anymore, and watch mode and fail fast options are enabled. Also, the `system` tests are skipped.\n\n```clojure\nclj -M:dev:test --config-file tests_watch.edn\nSLF4J(I): Connected with provider of type [org.slf4j.nop.NOPServiceProvider]\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)(()(..)(..)(..))]\n2 tests, 7 assertions, 0 failures.\n```\n\nWe can now leave the terminal always on, change a file and save it and the tests will be rerun using all the options mentioned above.\n\n## Documentation as unit tests: Rich Comment Tests\n\nAnother approach to unit testing is to enhance the `comment` blocks to contain tests. 
This means that we don’t need a test file, we can just write our tests right below our functions and it serves as both documentation and unit tests.\n\nGoing back to our first example:\n\n```clojure\n(ns my-app.core.fib)\n\n(defn fib\n "Return the Fibonacci sequence with a lenght of `n`."\n [n]\n (-\x3e\x3e (iterate (fn [[a b]] [b (+\' a b)])\n [0 1])\n (map first)\n (take n)))\n\n^:rct/test\n(comment\n (fib 10) ;\x3d\x3e [0 1 1 2 3 5 8 13 21 34]\n (fib 0) ;\x3d\x3e []\n )\n```\n\nThe `comment` block showcases example of what the `fib` could return given some inputs and the values after `;\x3d\x3e` are actually verified when the tests are run.\n\n### RC Tests in the REPL\n\nWe just need to evaluate `(com.mjdowney.rich-comment-tests/run-ns-tests! *ns*)` in the namespace we want to test:\n\n```clojure\nclj꞉my-app.core-test꞉\x3e \n; Evaluating file: fib.clj\nnil\nclj꞉my-app.core.fib꞉\x3e \n(com.mjdowney.rich-comment-tests/run-ns-tests! *ns*)\n; \n; Testing my-app.core.fib\n; \n; Ran 1 tests containing 2 assertions.\n; 0 failures, 0 errors.\n{:test 1, :pass 2, :fail 0, :error 0}\n```\n\n### RC Tests in the terminal\n\nYou might wonder how to run all the RC Tests of the project. Actually, we already did that, when we ran Kaocha unit tests in the terminal.\n\nThis is possible by wrapping the RC Tests in a deftest like so:\n\n```clojure\n(ns my-app.rc-test\n "Rich Comment tests"\n (:require [clojure.test :refer [deftest testing]]\n [com.mjdowney.rich-comment-tests.test-runner :as rctr]))\n\n(deftest ^rct rich-comment-tests\n (testing "all white box small tests"\n (rctr/run-tests-in-file-tree! :dirs #{"src"})))\n```\n\nAnd if we want to run just the `rct` tests, we can focus on the metadata (see the metadata in the deftest above).\n\n```clojure\nclj -M:dev:test --focus-meta :rct\n```\n\nIt is possible to run the RC Tests without using Kaocha of course, refer to their doc for that.\n\n## clojure.test vs RCT?\n\nI personally use a mix of both. When the function is not too complex and internal (not supposed to be called by the client), I would use RCT.\n\nFor system tests, which inevitably often involve side-effects, I have a dedicated test namespace. Using `fixture` is often handy and also the tests are way more verbose which would have polluted the src namespaces with a `comment` block.\n\nIn the short example I used in this article, the project tree is as follow:\n\n```bash\n├── README.md\n├── config\n│ └── env.edn\n├── deps.edn\n├── dev\n│ └── user.clj\n├── jib.edn\n├── project.edn\n├── src\n│ └── my_app\n│ ├── core\n│ │ ├── cfg.clj\n│ │ ├── env.clj\n│ │ └── fib.clj\n│ └── core.clj\n├── test\n│ └── my_app\n│ ├── core_test.clj\n│ └── rc_test.clj\n├── tests.edn\n└── tests_watch.edn\n```\n\n`cfg.clj`, `env.clj` and `fib.clj` have RCT and `core_test.clj` has regular deftest.\n\nA rule of thumb could be: use regular deftest if the tests require at least one of the following:\n\n- fixtures: start and tear down resources (db, kafka, entire system etc)\n- verbose setup (configs, logging etc)\n- side-effects (testing the entire system, load tests, stress tests etc)\n\nWhen the implementation is easy to test, using RCT is good for a combo doc+test.\n\n## Data Validation and Generative testing\n\nThere are 2 main libraries I personally used for data validation an generative testing: [clojure/spec.alpha](https://github.com/clojure/spec.alpha) and [malli](https://github.com/metosin/malli). I will not explain in details how both work because that could be a whole article on its own. 
However, you can guess which one I used in my example project as you might have noticed the `instrumentation` logs when I ran the Kaocha tests: Malli.\n\n### Malli: Data validation\n\nHere is the config namespace that is responsible to validate the env variables passed to our hypothetical app:\n\n```clojure\n(ns my-app.core.cfg\n (:require [malli.core :as m]\n [malli.registry :as mr]\n [malli.util :as mu]))\n\n;; ---------- Schema Registry ----------\n\n(def domain-registry\n "Registry for malli schemas."\n {::app\n [:map {:closed true}\n [:app/name :string]\n [:app/version :string]]\n ::fib\n [:map {:closed true}\n [:fib/length :int]]})\n\n;; ---------- Validation ----------\n\n(mr/set-default-registry!\n (mr/composite-registry\n (m/default-schemas)\n (mu/schemas)\n domain-registry))\n\n(def cfg-sch\n [:map {:closed true}\n [:cfg/app ::app]\n [:cfg/fib ::fib]])\n\n(defn validate\n "Validates the given `data` against the given `schema`.\n If the validation passes, returns the data.\n Else, returns the error data."\n [data schema]\n (let [validator (m/validator schema)]\n (if (validator data)\n data\n (throw\n (ex-info "Invalid Configs Provided"\n (m/explain schema data))))))\n\n(defn validate-cfg\n [cfg]\n (validate cfg cfg-sch))\n\n^:rct/test\n(comment\n (def cfg #:cfg{:app #:app{:name "my-app"\n :version "1.0.0-RC1"}\n :fib #:fib{:length 10}})\n\n (validate-cfg cfg) ;\x3d\x3e\x3e cfg\n (validate-cfg (assoc cfg :cfg/wrong 2)) ;throws\x3d\x3e\x3e some?\n )\n```\n\nNot going into too much details here but you can see that we define a `schema` that follows our data structure. In this case, my data structure I want to spec is my config map.\n\n### Malli: Data Generation\n\nLet’s have a look at a simple example of a test of our system which randomly generates a length and verifies that the result is indeed a sequence of numbers with `length` element:\n\n```clojure\n(ns my-app.core-test\n (:require [clojure.test :refer [deftest is testing]]\n [malli.dev :as dev]\n [malli.dev.pretty :as pretty]\n [malli.generator :as mg]\n [my-app.core :as sut]\n [my-app.core.cfg :as cfg]))\n\n(dev/start! {:report (pretty/reporter)})\n\n(deftest ^:system system-test\n (testing "The Fib sequence is returned."\n (is (\x3d [0 1 1 2 3 5 8 13 21 34]\n (sut/system #:cfg{:app #:app{:name "app" :version "1.0.0"}\n :fib #:fib{:length 10}}))))\n (testing "No matter the length of the sequence provided, the system returns the Fib sequence."\n (let [length (mg/generate pos-int? {:size 10})\n cfg #:cfg{:app #:app{:name "app" :version "1.0.0"}\n :fib #:fib{:length length}}\n rslt (sut/system cfg)]\n (is (cfg/validate\n rslt\n [:sequential {:min length :max length} :int])))))\n```\n\nThe second `testing` highlights both data generation (the `length`) and data validation (result must be a sequence of `int` with `length` elements).\n\nThe `dev/start!` starts malli instrumentation. It automatically detects functions which have malli specs and validate it. Let’s see what it does exactly in the next section.\n\n### Malli: Instrumentation\n\nEarlier, we saw tests for the `core/system` functions. 
Here is the core namespace:\n\n```clojure\n(ns my-app.core\n (:require [my-app.core.cfg :as cfg]\n [my-app.core.env :as env]\n [my-app.core.fib :as fib]))\n\n(defn system\n {:malli/schema\n [:\x3d\x3e [:cat cfg/cfg-sch] [:sequential :int]]}\n [cfg]\n (let [length (-\x3e cfg :cfg/fib :fib/length)]\n (fib/fib length)))\n\n(defn -main [\x26 _]\n (let [cfg (cfg/validate-cfg #:cfg{:app (env/config\x3c-env :APP)\n :fib (env/config\x3c-env :FIB)})]\n (system cfg)))\n```\n\nThe `system` function is straight forward. It takes a config map and returns the fib sequence.\n\nNote the metadata of that function:\n\n```clojure\n{:malli/schema\n [:\x3d\x3e [:cat cfg/cfg-sch] [:sequential :int]]}\n```\n\nThe arrow `:\x3d\x3e` means it is a function schema. So in this case, we expect a config as unique argument and we expect a sequence of int as returned value.\n\nWhen we `instrument` our namespace, we tell malli to check the given argument and returned value and to throw an error if they do not respect the schema in the metadata. It is very convenient.\n\nTo enable the instrumentation, we call `malli.dev/start!` as you can see in the `core-test` namespace code snippet.\n\n### When to use data validation/generation/instrumentation\n\nClojure is a dynamically typed language, allowing us to write functions without being constrained by rigid type definitions. This flexibility encourages rapid development, experimentation, and iteration. Thus, it makes testing a bliss because we can easily mock function inputs or provide partial inputs.\n\nHowever, if we start adding type check to all functions in all namespaces (in our case with malli metadata for instance), we introduce strict typing to our entire code base and therefore all the constraints that come with it.\n\nPersonally, I recommend adding validation for the entry point of the app only. For instance, if we develop a library, we will most likely have a top level namespace called `my-app.core` or `my-app.main` with the different functions our client can call. These functions are the ones we want to validate. All the internal logic, not supposed to be called by the clients, even though they can, do not need to be spec’ed as we want to maintain the flexibility I mentioned earlier.\n\nA second example could be that we develop an app that has a `-main` function that will be called to start our system. A system can be whatever our app needs to perform. It can start servers, connect to databases, perform batch jobs etc. Note that in that case the entry point of our program is the `-main` function. What we want to validate is that the proper params are passed to the system that our `-main` function will start. Going back to our Fib app example, our system is very simple, it just returns the Fib sequence given the length. The length is what need to be validated in our case as it is provided externally via env variable. That is why we saw that the system function had malli metadata. However, our internal function have tests but no spec to keep that dynamic language flexibility that Clojure offers.\n\nFinally, note the distinction between `instrumentation`, that is used for development (the metadata with the function schemas) and data validation for production (call to `cfg/validate-cfg`). For overhead reasons, we don\'t want to instrument our functions in production, it is a development tool. 
However, we do want to have our system throws an error when wrong params are provided to our system, hence the call to `cfg/validate-cfg`.\n\n## Load/stress/integration tests\n\nIn functional programming, and especially in Clojure, it is important to avoid side effects (mutations, external factors, etc) as much as we can. Of course, we cannot avoid mutations as they are inevitable: start a server, connect to a database, IOs, update frontend web state and much more. What we can do is isolate these side effects so the rest of the code base remains pure and can enjoy the flexibility and thus predictable behavior.\n\n### Mocking data\n\nSome might argue that we should never mock data. From my humble personal experience, this is impossible for complex apps. An app I worked on consumes messages from different kafka topics, does write/read from a datomic database, makes http calls to multiple remote servers and produces messages to several kafka topics. So if I don’t mock anything, I need to have several remote http servers in a test cluster just for testing. I need to have a real datomic database with production-like data. I need all the other apps that will produce kafka messages that my consumers will process. In other words, it is not possible.\n\nWe can mock functions using [with-redefs](https://clojuredocs.org/clojure.core/with-redefs) which is very convenient for testing. Using the clojure.test [use-fixtures](https://clojuredocs.org/clojure.test/use-fixtures) is also great to start and tear down services after the tests are done.\n\n### Integration tests\n\nI mentioned above, an app using datomic and kafka for instance. In my integration tests, I want to be able to produce kafka messages and I want to interact with an actual datomic db to ensure proper behavior of my app. The common approach for this is to use `embedded` versions of these services. Our test fixtures can start/delete an embedded datomic database and start/stop kafka consumers/producers as well.\n\nWhat about the http calls? We can `with-redefs` those to return some valid but randomly generated values. Integration tests aim at ensuring that all components of our app work together as expected and embedded versions of external services and redefinitions of vars can make the tests predictable and suitable for CI.\n\nI have not touch on running tests in the CI, but integration tests should be run in the CI and if all services are embedded, there should be no difficulty in setting up a pipeline.\n\n### Load/stress tests\n\nTo be sure an app performs well under heavy load, embedded services won’t work as they are limited in terms of performance, parallel processing etc. In our example above, If I want to start lots of kafka consumers and to use a big datomic transactor to cater lots of transactions, embedded datomic and embedded kafka won’t suffice. So I have to run a datomic transactor on my machine (maybe I want the DB to be pre-populated with millions or entities as well) and I will need to run kafka on my machine as well (maybe using confluent [cp-all-in-one](https://github.com/confluentinc/cp-all-in-one) container setup). Let’s get fancy, and also run prometheus/grafana to monitor the performance of the stress tests.\n\nYour intuition is correct, it would be a nightmare for each developer of the project to setup all services. One solution is to containerized all these services. 
a datomic transactor can be run in docker, confluent provides a docker-compose to run kafka zookeeper, broker, control center etc, prometheus scrapper can be run in a container as well as grafana. So providing docker-compose files in our repo so each developer can just run `docker-compose up -d` to start all necessary services is the solution I recommend.\n\nNote that I do not containerized my clojure app so I do not have to change anything in my workflow. I deal with load/stress tests the same way I deal with my unit tests. I just start the services in the containers and my Clojure REPL as per usual.\n\nThis setup is not the only solution to load/stress tests but it is the one I successfully implemented in my project and it really helps us being efficient.\n\n## Conclusion\n\nI highlighted some common testing tools and methods that the Clojure community use and I explained how I personally incorporated these tools and methods to my projects. Tools are common to everybody, but how we use them is considered opinionated and will differ depending on the projects and team decision.\n\nIf you are starting your journey as a Clojure developer, I hope you can appreciate the quality of open-source testing libraries we have access to. Also, please remember that keeping things pure is the key to easy testing and debugging; a luxury not so common in the programming world. Inevitably, you will need to deal with side effects but isolate them as much as you can to make your code robust and your tests straight forward.\n\nFinally, there are some tools I didn’t mention to keep things short so feel free to explore what the Clojure community has to offer. The last advice I would give is to not try to use too many tools or only the shiny new ones you might find. Keep things simple and evaluate if a library is worth being added to your deps.\n\n',
+null),"blog-port-clj-lib"]),jj([Vl,Bm,Zn,Qp,Rp,vq,Lr,gt,Bt],[new S(null,6,5,T,"Clojure;Kaocha;Malli;Rich Comment Tests;Instrumentation;Data validation/generation".split(";"),null),new S(null,1,5,T,["2024-08-10"],null),'\n## Introduction\n\nThis article introduces effective testing libraries and methods for those new to Clojure.\n\nWe\'ll explore using the [kaocha](https://github.com/lambdaisland/kaocha) test runner in both REPL and terminal, along with configurations to enhance feedback. Then we will explain how tests as documentation can be done using [rich-comment-tests](https://github.com/matthewdowney/rich-comment-tests).\n\nWe will touch on how to do data validation, generation and instrumentation using [malli](https://github.com/metosin/malli).\n\nFinally, I will talk about how I manage integrations tests with eventual external services involved.\n\n## Test good code\n\n### Pure functions\n\nFirst of all, always remember that it is important to have as many pure functions as possible. It means, the same input passed to a function always returns the same output. This will simplify the testing and make your code more robust.\n\nHere is an example of unpredictable **impure** logic:\n\n```clojure\n(defn fib\n "Read the Fibonacci list length to be returned from a file,\n Return the Fibonacci sequence."\n [variable]\n (when-let [n (-\x3e (slurp "config/env.edn") edn/read-string (get variable) :length)]\n (-\x3e\x3e (iterate (fn [[a b]] [b (+\' a b)])\n [0 1])\n (map first)\n (take n))))\n\n(comment\n ;; env.edn has the content {:FIB {:length 10}}\n (fib :FIB) ;\x3d\x3e 10\n ;; env.edn is empty\n (fib :FIB) ;\x3d\x3e nil\n )\n```\n\nFor instance, reading the `length` value from a file before computing the Fibonacci sequence is **unpredictable** for several reasons:\n\n- the file could not have the expected value\n- the file could be missing\n- in prod, the env variable would be read from the system not a file so the function would always return `nil`\n- what if the FIB value from the file has the wrong format.\n\nWe would need to test too many cases unrelated to the Fibonacci logic itself, which is bad practice.\n\nThe solution is to **isolate** the impure code:\n\n```clojure\n(defn fib\n "Return the Fibonacci sequence with a lenght of `n`."\n [n]\n (-\x3e\x3e (iterate (fn [[a b]] [b (+\' a b)])\n [0 1])\n (map first)\n (take n)))\n\n^:rct/test\n(comment\n (fib 10) ;\x3d\x3e [0 1 1 2 3 5 8 13 21 34]\n (fib 0) ;\x3d\x3e []\n )\n\n(defn config\x3c-file\n "Reads the `config/env.edn` file, gets the value of the given key `variable`\n and returns it as clojure data."\n [variable]\n (-\x3e (slurp "config/env.edn") edn/read-string (get variable)))\n\n(comment\n ;; env.edn contains :FIB key with value {:length 10}\n (config\x3c-file :FIB) ;\x3d\x3e {:length 10}\n ;; env.edn is empty\n (config\x3c-file :FIB) ;\x3d\x3e {:length nil}\n )\n```\n\nThe `fib` function is now **pure** and the same input will always yield the same output. I can therefore write my unit tests and be confident of the result. 
\nThis may seem basic, but it\'s the essential first step in testing: ensuring the code is as pure as possible for easier testing is one of the strengths of **data-oriented** programming!\n\n## Test runner: Kaocha\n\nFor all my personal and professional projects, I have used [kaocha](https://github.com/lambdaisland/kaocha) as a test runner.\n\nThere are 2 main ways to run the tests that developers commonly use:\n\n- Within the **REPL**, as we implement our features or fix bugs\n- In the **terminal**, to verify that all tests pass or to target a specific group of tests\n\nHere is the `deps.edn` I will use in this example:\n\n```clojure\n{:deps {org.clojure/clojure {:mvn/version "1.11.3"}\n org.slf4j/slf4j-nop {:mvn/version "2.0.15"}\n metosin/malli {:mvn/version "0.16.1"}}\n :paths ["src"]\n :aliases\n {:dev {:extra-paths ["config" "test" "dev"]\n :extra-deps {io.github.robertluo/rich-comment-tests {:git/tag "v1.1.1", :git/sha "3f65ecb"}}}\n :test {:extra-paths ["test"]\n :extra-deps {lambdaisland/kaocha {:mvn/version "1.91.1392"}\n lambdaisland/kaocha-cloverage {:mvn/version "1.1.89"}}\n :main-opts ["-m" "kaocha.runner"]}\n :jib {:paths ["jibbit" "src"]\n :deps {io.github.atomisthq/jibbit {:git/url "https://github.com/skydread1/jibbit.git"\n :git/sha "bd873e028c031dbbcb95fe3f64ff51a305f75b54"}}\n :ns-default jibbit.core\n :ns-aliases {jib jibbit.core}}\n :outdated {:deps {com.github.liquidz/antq {:mvn/version "RELEASE"}}\n :main-opts ["-m" "antq.core"]}\n :cljfmt {:deps {io.github.weavejester/cljfmt {:git/tag "0.12.0", :git/sha "434408f"}}\n :ns-default cljfmt.tool}}}\n```\n\n### Kaocha in REPL\n\nRegarding the bindings to run the tests from the REPL, refer to your IDE documentation. I have experience using both Emacs (spacemacs distribution) and VSCode, and running my tests was always straightforward. If you are starting to learn Clojure, I recommend using VSCode, as the Clojure extension [calva](https://github.com/BetterThanTomorrow/calva) is of very good quality and well documented. I’ll use VSCode in the following example.\n\nLet’s say we have the following test namespace:\n\n```clojure\n(ns my-app.core.fib-test\n (:require [clojure.test :refer [deftest is testing]]\n [my-app.core :as sut]))\n\n(deftest fib-test\n (testing "The Fib sequence is returned."\n (is (\x3d [0 1 1 2 3 5 8 13 21 34]\n (sut/fib 10)))))\n```\n\nAfter I `jack-in` using my *dev* alias from the `deps.edn` file, I can load the `my-app.core.fib-test` namespace and run the tests. Using Calva, the flow will be like this:\n\n1. *ctrl+alt+c* *ctrl+alt+j*: jack-in (select the `dev` alias in my case)\n2. *ctrl+alt+c* *enter* (in the `fib-test` namespace): load the ns in the REPL\n3. 
*ctrl+alt+c* *t* (in the `fib-test` namespace): run the tests\n\nIn the REPL, we see:\n\n```clojure\nclj꞉user꞉\x3e\n; Evaluating file: fib_test.clj\n#\'my-app.core.fib-test/fib-test\nclj꞉my-app.core.fib-test꞉\x3e \n; Running tests for the following namespaces:\n; my-app.core.fib-test\n; my-app.core.fib\n\n; 1 tests finished, all passing \ud83d\udc4d, ns: 1, vars: 1\n```\n\n### Kaocha in terminal\n\nBefore committing code, it\'s crucial to run all project tests to ensure new changes haven\'t broken existing functionalities.\n\nI added a few other namespaces and some tests.\n\nLet’s run all the tests in the terminal:\n\n```clojure\nclj -M:dev:test\nLoading namespaces: (my-app.core.cfg my-app.core.env my-app.core.fib my-app.core)\nTest namespaces: (:system :unit)\nInstrumented my-app.core.cfg\nInstrumented my-app.core.env\nInstrumented my-app.core.fib\nInstrumented my-app.core\nInstrumented 4 namespaces in 0.4 seconds.\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)][(()(..)(..)(..))(.)(.)]\n4 tests, 9 assertions, 0 failures.\n```\n\nNote the `Test namespaces: (:system :unit)`. By default, Kaocha runs all tests. When no metadata is specified on a `deftest`, it is considered part of the Kaocha `:unit` group. However, as the project grows, we might have slower tests such as system tests, load tests, stress tests etc. We can add metadata to their `deftest` in order to group them together. For instance:\n\n```clojure\n(ns my-app.core-test\n (:require [clojure.test :refer [deftest is testing]]\n [malli.dev :as dev]\n [malli.dev.pretty :as pretty]\n [my-app.core :as sut]))\n\n(dev/start! {:report (pretty/reporter)})\n\n(deftest ^:system system-test ;; metadata to add this test to the `system` kaocha test group\n (testing "The Fib sequence is returned."\n (is (\x3d [0 1 1 2 3 5 8 13 21 34]\n (sut/system #:cfg{:app #:app{:name "app" :version "1.0.0"}\n :fib #:fib{:length 10}})))))\n```\n\nWe need to tell Kaocha when and how to run the system test. Kaocha configurations are provided in a `tests.edn` file:\n\n```clojure\n#kaocha/v1\n {:tests [{:id :system :focus-meta [:system]} ;; only system tests\n {:id :unit}]} ;; all tests\n```\n\nThen in the terminal:\n\n```bash\nclj -M:dev:test --focus :system\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)]\n1 tests, 1 assertions, 0 failures.\n```\n\nWe can add a bunch of metrics on top of the test results. 
These metrics can be added via the `:plugins` key:\n\n```clojure\n#kaocha/v1\n {:tests [{:id :system :focus-meta [:system]}\n {:id :unit}]\n :plugins [:kaocha.plugin/profiling\n :kaocha.plugin/cloverage]}\n```\n\nIf I run the tests again:\n\n```clojure\nclj -M:dev:test --focus :system\nLoading namespaces: (my-app.core.cfg my-app.core.env my-app.core.fib my-app.core)\nTest namespaces: (:system :unit)\nInstrumented my-app.core.cfg\nInstrumented my-app.core.env\nInstrumented my-app.core.fib\nInstrumented my-app.core\nInstrumented 4 namespaces in 0.4 seconds.\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)]\n1 tests, 1 assertions, 0 failures.\n\nTop 1 slowest kaocha.type/clojure.test (0.02208 seconds, 97.0% of total time)\n system\n 0.02208 seconds average (0.02208 seconds / 1 tests)\n\nTop 1 slowest kaocha.type/ns (0.01914 seconds, 84.1% of total time)\n my-app.core-test\n 0.01914 seconds average (0.01914 seconds / 1 tests)\n\nTop 1 slowest kaocha.type/var (0.01619 seconds, 71.1% of total time)\n my-app.core-test/system-test\n 0.01619 seconds my_app/core_test.clj:9\nRan tests.\nWriting HTML report to: /Users/loicblanchard/workspaces/clojure-proj-template/target/coverage/index.html\n\n|-----------------+---------+---------|\n| Namespace | % Forms | % Lines |\n|-----------------+---------+---------|\n| my-app.core | 44.44 | 62.50 |\n| my-app.core.cfg | 69.57 | 74.07 |\n| my-app.core.env | 11.11 | 44.44 |\n| my-app.core.fib | 100.00 | 100.00 |\n|-----------------+---------+---------|\n| ALL FILES | 55.26 | 70.59 |\n|-----------------+---------+---------|\n```\n\n### Kaocha in terminal with options\n\nThere are a bunch of options to enhance the development experience, such as:\n\n```bash\nclj -M:dev:test --watch --fail-fast\n```\n\n- `watch` mode makes Kaocha rerun the tests on file save\n- `fail-fast` makes Kaocha stop running the tests when it encounters a failing test\n\nThese 2 options are very convenient for unit testing.\n\nHowever, when a code base contains slower tests, and the slower tests are run first, the watch mode is not so convenient because it won’t provide instant feedback.\n\nWe saw that we can `focus` on tests with a specific metadata tag; we can also `skip` tests. Let’s pretend our `system` test is slow and we want to skip it to only run unit tests:\n\n```bash\n clj -M:dev:test --watch --fail-fast --skip-meta :system\n```\n\nFinally, I don’t want to use the `plugins` (profiling and code coverage) in watch mode, as they clutter the terminal output, so I want to exclude them from the report.\n\nWe can actually create another kaocha config file for our watch mode.\n\n`tests_watch.edn`:\n\n```clojure\n#kaocha/v1\n {:tests [{:id :unit-watch :skip-meta [:system]}] ;; ignore system tests\n :watch? true ;; watch mode on\n :fail-fast? true} ;; stop running on first failure\n```\n\nNotice that there are no plugins anymore, and the watch mode and fail-fast options are enabled. Also, the `system` tests are skipped.\n\n```clojure\nclj -M:dev:test --config-file tests_watch.edn\nSLF4J(I): Connected with provider of type [org.slf4j.nop.NOPServiceProvider]\nmalli: instrumented 1 function vars\nmalli: dev-mode started\n[(.)(()(..)(..)(..))]\n2 tests, 7 assertions, 0 failures.\n```\n\nWe can now leave the terminal open; on every file save, the tests are rerun using all the options mentioned above.\n\n## Documentation as unit tests: Rich Comment Tests\n\nAnother approach to unit testing is to enhance the `comment` blocks to contain tests. 
This means that we don’t need a test file; we can just write our tests right below our functions, and they serve as both documentation and unit tests.\n\nGoing back to our first example:\n\n```clojure\n(ns my-app.core.fib)\n\n(defn fib\n "Return the Fibonacci sequence with a length of `n`."\n [n]\n (-\x3e\x3e (iterate (fn [[a b]] [b (+\' a b)])\n [0 1])\n (map first)\n (take n)))\n\n^:rct/test\n(comment\n (fib 10) ;\x3d\x3e [0 1 1 2 3 5 8 13 21 34]\n (fib 0) ;\x3d\x3e []\n )\n```\n\nThe `comment` block showcases examples of what `fib` returns given some inputs, and the values after `;\x3d\x3e` are actually verified when the tests are run.\n\n### RC Tests in the REPL\n\nWe just need to evaluate `(com.mjdowney.rich-comment-tests/run-ns-tests! *ns*)` in the namespace we want to test:\n\n```clojure\nclj꞉my-app.core-test꞉\x3e \n; Evaluating file: fib.clj\nnil\nclj꞉my-app.core.fib꞉\x3e \n(com.mjdowney.rich-comment-tests/run-ns-tests! *ns*)\n; \n; Testing my-app.core.fib\n; \n; Ran 1 tests containing 2 assertions.\n; 0 failures, 0 errors.\n{:test 1, :pass 2, :fail 0, :error 0}\n```\n\n### RC Tests in the terminal\n\nYou might wonder how to run all the RC Tests of the project. Actually, we already did that when we ran the Kaocha unit tests in the terminal.\n\nThis is possible by wrapping the RC Tests in a deftest like so:\n\n```clojure\n(ns my-app.rc-test\n "Rich Comment tests"\n (:require [clojure.test :refer [deftest testing]]\n [com.mjdowney.rich-comment-tests.test-runner :as rctr]))\n\n(deftest ^:rct rich-comment-tests\n (testing "all white box small tests"\n (rctr/run-tests-in-file-tree! :dirs #{"src"})))\n```\n\nAnd if we want to run just the `rct` tests, we can focus on the metadata (see the metadata in the deftest above):\n\n```clojure\nclj -M:dev:test --focus-meta :rct\n```\n\nIt is of course possible to run the RC Tests without using Kaocha; refer to their doc for that.\n\n## clojure.test vs RCT?\n\nI personally use a mix of both. When the function is not too complex and internal (not supposed to be called by the client), I would use RCT.\n\nFor system tests, which often involve side effects, I have a dedicated test namespace. Using `fixtures` is often handy, and these tests are much more verbose, so they would have polluted the src namespaces if kept in `comment` blocks.\n\nIn the short example I used in this article, the project tree is as follows:\n\n```bash\n├── README.md\n├── config\n│ └── env.edn\n├── deps.edn\n├── dev\n│ └── user.clj\n├── jib.edn\n├── project.edn\n├── src\n│ └── my_app\n│ ├── core\n│ │ ├── cfg.clj\n│ │ ├── env.clj\n│ │ └── fib.clj\n│ └── core.clj\n├── test\n│ └── my_app\n│ ├── core_test.clj\n│ └── rc_test.clj\n├── tests.edn\n└── tests_watch.edn\n```\n\n`cfg.clj`, `env.clj` and `fib.clj` have RCT, and `core_test.clj` has regular deftests.\n\nA rule of thumb could be: use regular deftest if the tests require at least one of the following:\n\n- fixtures: start and tear down resources (db, kafka, entire system etc)\n- verbose setup (configs, logging etc)\n- side-effects (testing the entire system, load tests, stress tests etc)\n\nWhen the implementation is easy to test, using RCT is good for a combo doc+test.\n\n## Data Validation and Generative testing\n\nThere are 2 main libraries I have personally used for data validation and generative testing: [clojure/spec.alpha](https://github.com/clojure/spec.alpha) and [malli](https://github.com/metosin/malli). I will not explain in detail how both work because that could be a whole article on its own. 
However, you can guess which one I used in my example project, as you might have noticed the `instrumentation` logs when I ran the Kaocha tests: Malli.\n\n### Malli: Data validation\n\nHere is the config namespace that is responsible for validating the env variables passed to our hypothetical app:\n\n```clojure\n(ns my-app.core.cfg\n (:require [malli.core :as m]\n [malli.registry :as mr]\n [malli.util :as mu]))\n\n;; ---------- Schema Registry ----------\n\n(def domain-registry\n "Registry for malli schemas."\n {::app\n [:map {:closed true}\n [:app/name :string]\n [:app/version :string]]\n ::fib\n [:map {:closed true}\n [:fib/length :int]]})\n\n;; ---------- Validation ----------\n\n(mr/set-default-registry!\n (mr/composite-registry\n (m/default-schemas)\n (mu/schemas)\n domain-registry))\n\n(def cfg-sch\n [:map {:closed true}\n [:cfg/app ::app]\n [:cfg/fib ::fib]])\n\n(defn validate\n "Validates the given `data` against the given `schema`.\n If the validation passes, returns the data.\n Else, throws an exception with the error data."\n [data schema]\n (let [validator (m/validator schema)]\n (if (validator data)\n data\n (throw\n (ex-info "Invalid Configs Provided"\n (m/explain schema data))))))\n\n(defn validate-cfg\n [cfg]\n (validate cfg cfg-sch))\n\n^:rct/test\n(comment\n (def cfg #:cfg{:app #:app{:name "my-app"\n :version "1.0.0-RC1"}\n :fib #:fib{:length 10}})\n\n (validate-cfg cfg) ;\x3d\x3e\x3e cfg\n (validate-cfg (assoc cfg :cfg/wrong 2)) ;throws\x3d\x3e\x3e some?\n )\n```\n\nI am not going into too much detail here, but you can see that we define a `schema` that follows our data structure. In this case, the data structure I want to spec is my config map.\n\n### Malli: Data Generation\n\nLet’s have a look at a simple example of a test of our system which randomly generates a length and verifies that the result is indeed a sequence of numbers with `length` elements:\n\n```clojure\n(ns my-app.core-test\n (:require [clojure.test :refer [deftest is testing]]\n [malli.dev :as dev]\n [malli.dev.pretty :as pretty]\n [malli.generator :as mg]\n [my-app.core :as sut]\n [my-app.core.cfg :as cfg]))\n\n(dev/start! {:report (pretty/reporter)})\n\n(deftest ^:system system-test\n (testing "The Fib sequence is returned."\n (is (\x3d [0 1 1 2 3 5 8 13 21 34]\n (sut/system #:cfg{:app #:app{:name "app" :version "1.0.0"}\n :fib #:fib{:length 10}}))))\n (testing "No matter the length of the sequence provided, the system returns the Fib sequence."\n (let [length (mg/generate pos-int? {:size 10})\n cfg #:cfg{:app #:app{:name "app" :version "1.0.0"}\n :fib #:fib{:length length}}\n rslt (sut/system cfg)]\n (is (cfg/validate\n rslt\n [:sequential {:min length :max length} :int])))))\n```\n\nThe second `testing` highlights both data generation (the `length`) and data validation (the result must be a sequence of `int` with `length` elements).\n
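\nTo see generation in isolation, we can also generate sample data directly from a schema in the REPL. This is a small sketch; the outputs are random, so the values below are just an illustration:\n\n```clojure\n(require \'[malli.generator :as mg])\n\n(mg/generate :int) ;; e.g. -3\n(mg/generate [:sequential {:min 3 :max 3} :int]) ;; e.g. (7 0 -52)\n\n;; We can even generate a whole valid config from our registry schema:\n(mg/generate cfg/cfg-sch)\n;; e.g. #:cfg{:app #:app{:name "W3ab" :version "0"}\n;; :fib #:fib{:length 42}}\n```\n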
\nThe `dev/start!` call starts the malli instrumentation. It automatically detects functions which have malli specs and validates them. Let’s see what it does exactly in the next section.\n\n### Malli: Instrumentation\n\nEarlier, we saw tests for the `core/system` function. Here is the core namespace:\n\n```clojure\n(ns my-app.core\n (:require [my-app.core.cfg :as cfg]\n [my-app.core.env :as env]\n [my-app.core.fib :as fib]))\n\n(defn system\n {:malli/schema\n [:\x3d\x3e [:cat cfg/cfg-sch] [:sequential :int]]}\n [cfg]\n (let [length (-\x3e cfg :cfg/fib :fib/length)]\n (fib/fib length)))\n\n(defn -main [\x26 _]\n (let [cfg (cfg/validate-cfg #:cfg{:app (env/config\x3c-env :APP)\n :fib (env/config\x3c-env :FIB)})]\n (system cfg)))\n```\n\nThe `system` function is straightforward. It takes a config map and returns the fib sequence.\n\nNote the metadata of that function:\n\n```clojure\n{:malli/schema\n [:\x3d\x3e [:cat cfg/cfg-sch] [:sequential :int]]}\n```\n\nThe arrow `:\x3d\x3e` means it is a function schema. So in this case, we expect a config as the unique argument and a sequence of ints as the return value.\n\nWhen we `instrument` our namespace, we tell malli to check the given arguments and return value, and to throw an error if they do not respect the schema in the metadata. It is very convenient.\n\nTo enable the instrumentation, we call `malli.dev/start!`, as you can see in the `core-test` namespace code snippet.\n\n### When to use data validation/generation/instrumentation\n\nClojure is a dynamically typed language, allowing us to write functions without being constrained by rigid type definitions. This flexibility encourages rapid development, experimentation, and iteration. It also makes testing a breeze, because we can easily mock function inputs or provide partial inputs.\n\nHowever, if we start adding type checks to all functions in all namespaces (in our case with malli metadata, for instance), we introduce strict typing to our entire code base, and therefore all the constraints that come with it.\n\nPersonally, I recommend adding validation for the entry point of the app only. For instance, if we develop a library, we will most likely have a top-level namespace called `my-app.core` or `my-app.main` with the different functions our client can call. These functions are the ones we want to validate. All the internal logic, which is not supposed to be called by the clients (even though it can be), does not need to be spec’ed, as we want to maintain the flexibility I mentioned earlier.\n\nA second example could be that we develop an app that has a `-main` function that will be called to start our system. A system can be whatever our app needs to perform: it can start servers, connect to databases, perform batch jobs etc. Note that in that case the entry point of our program is the `-main` function. What we want to validate is that the proper params are passed to the system that our `-main` function will start. Going back to our Fib app example, our system is very simple: it just returns the Fib sequence given the length. The length is what needs to be validated in our case, as it is provided externally via an env variable. That is why the system function has malli metadata. However, our internal functions have tests but no specs, to keep the dynamic-language flexibility that Clojure offers.\n\nFinally, note the distinction between `instrumentation`, which is used for development (the metadata with the function schemas), and data validation, which is used in production (the call to `cfg/validate-cfg`). For overhead reasons, we don\'t want to instrument our functions in production; it is a development tool. However, we do want our system to throw an error when wrong params are provided, hence the call to `cfg/validate-cfg`.\n
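\nTo make the distinction concrete, here is a small REPL sketch of both failure modes, under the assumption that the namespaces above are loaded (the exact error data is malli\'s, so treat the outputs as indicative only):\n\n```clojure\n(comment\n ;; Development: with instrumentation on (malli.dev/start!), calling\n ;; `system` with an invalid config fails at the function boundary.\n (sut/system #:cfg{:app "not-a-map" :fib #:fib{:length 10}})\n ;; throws a schema error (:malli.core/invalid-input)\n\n ;; Production: instrumentation is off; bad params are caught by the\n ;; explicit `cfg/validate-cfg` call in `-main` instead.\n (cfg/validate-cfg #:cfg{:app nil :fib nil})\n ;; throws ExceptionInfo "Invalid Configs Provided"\n )\n```\n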
### When to use data validation/generation/instrumentation

Clojure is a dynamically typed language, allowing us to write functions without being constrained by rigid type definitions. This flexibility encourages rapid development, experimentation, and iteration. It also makes testing a breeze, because we can easily mock function inputs or provide partial inputs.

However, if we start adding type checks to all functions in all namespaces (in our case with malli metadata, for instance), we introduce strict typing to our entire code base and therefore all the constraints that come with it.

Personally, I recommend adding validation for the entry points of the app only. For instance, if we develop a library, we will most likely have a top-level namespace called `my-app.core` or `my-app.main` with the different functions our clients can call. These functions are the ones we want to validate. All the internal logic, not supposed to be called by the clients (even though they can), does not need to be spec'ed, as we want to maintain the flexibility I mentioned earlier.

A second example could be an app that has a `-main` function that will be called to start our system. A system can be whatever our app needs to perform: it can start servers, connect to databases, perform batch jobs etc. Note that in that case the entry point of our program is the `-main` function. What we want to validate is that the proper params are passed to the system that our `-main` function will start. Going back to our Fib app example, our system is very simple: it just returns the Fib sequence given the length. The length is what needs to be validated in our case, as it is provided externally via an env variable. That is why we saw that the `system` function had malli metadata. However, our internal functions have tests but no specs, to keep the dynamic-language flexibility that Clojure offers.

Finally, note the distinction between `instrumentation`, which is used for development (the metadata with the function schemas), and data validation for production (the call to `cfg/validate-cfg`). For overhead reasons, we don't want to instrument our functions in production; it is a development tool. However, we do want our system to throw an error when wrong params are provided to it, hence the call to `cfg/validate-cfg`.

## Load/stress/integration tests

In functional programming, and especially in Clojure, it is important to avoid side effects (mutations, external factors, etc.) as much as we can. Of course, we cannot avoid them entirely as some are inevitable: starting a server, connecting to a database, IOs, updating frontend web state and much more. What we can do is isolate these side effects so the rest of the code base remains pure and thus enjoys flexibility and predictable behavior.

### Mocking data

Some might argue that we should never mock data. From my humble personal experience, this is impossible for complex apps. An app I worked on consumes messages from different Kafka topics, writes to and reads from a Datomic database, makes HTTP calls to multiple remote servers and produces messages to several Kafka topics. So if I don't mock anything, I need several remote HTTP servers in a test cluster just for testing. I need a real Datomic database with production-like data. I need all the other apps that produce the Kafka messages my consumers will process. In other words, it is not possible.

We can mock functions using [with-redefs](https://clojuredocs.org/clojure.core/with-redefs), which is very convenient for testing. Using the clojure.test [use-fixtures](https://clojuredocs.org/clojure.test/use-fixtures) is also great to start and tear down services around the tests. The sketch below illustrates both.
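Here is a minimal sketch, with hypothetical `fetch-rate`, `start-db!` and `stop-db!` functions standing in for a real HTTP call and a real service:

```clojure
(ns my-app.mock-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

;; Hypothetical side-effecting functions, stand-ins for the real ones
(defn fetch-rate [currency] (throw (ex-info "real HTTP call" {:currency currency})))
(defn start-db! [] (println "db started"))
(defn stop-db!  [] (println "db stopped"))

;; Fixture: start the service before the tests in this ns, tear it down after
(use-fixtures :once
  (fn [f]
    (start-db!)
    (try (f)
         (finally (stop-db!)))))

(deftest conversion-test
  ;; Temporarily replace the HTTP call with a pure, predictable stub
  (with-redefs [fetch-rate (constantly 1.25)]
    (is (= 125.0 (* 100 (fetch-rate :USD))))))
```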
### Integration tests

I mentioned above an app using Datomic and Kafka, for instance. In my integration tests, I want to be able to produce Kafka messages, and I want to interact with an actual Datomic DB to ensure the proper behavior of my app. The common approach for this is to use `embedded` versions of these services. Our test fixtures can start/delete an embedded Datomic database and start/stop Kafka consumers/producers as well.

What about the HTTP calls? We can `with-redefs` those to return some valid but randomly generated values. Integration tests aim at ensuring that all components of our app work together as expected, and embedded versions of external services plus redefinitions of vars make the tests predictable and suitable for CI.

I have not touched on running tests in the CI, but integration tests should run in the CI, and if all services are embedded, there should be no difficulty in setting up a pipeline.

### Load/stress tests

To be sure an app performs well under heavy load, embedded services won't work, as they are limited in terms of performance, parallel processing etc. In our example above, if I want to start lots of Kafka consumers and use a big Datomic transactor to cater for lots of transactions, embedded Datomic and embedded Kafka won't suffice. So I have to run a Datomic transactor on my machine (maybe I want the DB to be pre-populated with millions of entities as well), and I will need to run Kafka on my machine as well (maybe using the Confluent [cp-all-in-one](https://github.com/confluentinc/cp-all-in-one) container setup). Let's get fancy and also run Prometheus/Grafana to monitor the performance of the stress tests.

Your intuition is correct: it would be a nightmare for each developer on the project to set up all these services. One solution is to containerize them all. A Datomic transactor can run in Docker; Confluent provides a docker-compose to run the Kafka ZooKeeper, broker, control center etc.; a Prometheus scraper can run in a container, as well as Grafana. So providing docker-compose files in our repo, so that each developer can just run `docker-compose up -d` to start all the necessary services, is the solution I recommend.

Note that I do not containerize my Clojure app, so I do not have to change anything in my workflow. I deal with load/stress tests the same way I deal with my unit tests: I just start the services in the containers and my Clojure REPL as per usual.

This setup is not the only solution for load/stress tests, but it is the one I successfully implemented in my project, and it really helps us be efficient.

## Conclusion

I highlighted some common testing tools and methods that the Clojure community uses, and I explained how I personally incorporated these tools and methods into my projects. Tools are common to everybody, but how we use them is opinionated and will differ depending on the project and team decisions.

If you are starting your journey as a Clojure developer, I hope you can appreciate the quality of the open-source testing libraries we have access to. Also, please remember that keeping things pure is the key to easy testing and debugging; a luxury not so common in the programming world. Inevitably, you will need to deal with side effects, but isolate them as much as you can to make your code robust and your tests straightforward.

Finally, there are some tools I didn't mention to keep things short, so feel free to explore what the Clojure community has to offer. The last advice I would give is to not use too many tools, or only the shiny new ones you might find. Keep things simple and evaluate whether a library is worth being added to your deps.
"\nIntroducing some popular testing tools to developers new to Clojure. Highlight solutions for how to do unit testing with Rich Comment Tests, data validation and generative testing with Malli, running test suites and metrics with Kaocha and how to do integration testing using external containerized services.\n",su,"Testing in Clojure","testing-in-clojure",new n(null,3,[dp,"/assets/loic-blog-logo.png",ap,"/assets/loic-blog-logo.png",Xl,"Logo referencing Aperture Science"],null),"testing-in-clojure"]),
# What Git workflow is suitable for your project

Tags: Git, Workflows, Branching, CI/CD. Date: 2024-05-12.

## Introduction

Depending on the size of a project and its CI/CD requirements, one might choose one of the popular [Git Workflows](https://www.atlassian.com/git/tutorials/comparing-workflows). Some are good for certain scenarios, some are never good, and some are questionable to say the least.

In this article, I will explain how the main workflows work and, in my opinion, which one to use and when.

## Trunk-Based Development

### Timeline Example

![Trunk-Based Dev](/assets/git-workflows/trunk-based-dev.png)

### No Branching

That's it. You have your `main` branch and everybody pushes to it. Some might call it madness; others would say that an excellent CI/CD setup does not require branching.

If you are the only one working on your project, you *could* push to `main` directly. If you are an excellent developer with the confidence to push to `main`, and you have very good CI/CD in place or none at all (so merging to `main` is not critical), you could use this strategy. I see it quite often in small open-source projects maintained by a single developer with manual releases (so no CD, just CI for testing).

### Should you use it?

You might have realized already that this strategy applies to very few teams, and I don't think you will encounter this one-branch strategy a lot in your daily job. I don't recommend it because, in my humble opinion, PRs are essential to a good development process. Some people tend to view a PR as someone having authority over their code, but that's the wrong way of seeing it. A PR offers a second opinion on the code, and **everybody** can suggest good changes. I make junior developers review my code from the moment they join the company, and they regularly have good suggestions in the PR comments.

Back to `TBD`: you need good trust in your colleagues, as there is no code review. That is why I mentioned that it might be suitable for experienced developers.

Anyway, don't use trunk-based development unless you know exactly what you are doing and have lots of experience already, or you have a pretty non-critical project and want very **fast** code base updates.

## Feature Branches

### Timeline Example

![Feature Branching](/assets/git-workflows/feature-branching.png)

### Pull Requests

Everybody should be familiar with this one. Bob pulls a branch from `main`, implements the feature and pushes that feature branch to remote. Bob then opens a PR/MR (GitHub calls it a Pull Request, GitLab a Merge Request) and Alice reviews Bob's code before merging to `main`.

If Alice suggests some changes, Bob pushes new commits to his `feature` branch. Once Alice approves the changes, Bob can merge to `main`.

### Solo Dev

I think that even for personal projects, you should create PRs to merge into `main`. This allows you to properly define the `scope` of the changes you are working on. Furthermore, you might have CI that checks formatting, runs tests etc., with different jobs depending on whether you push to a `feature` branch or merge to `main`.

For example, I have a portfolio website (Single Page Application) that is hosted on Netlify. When I open a PR, Netlify builds a js bundle and shows me a preview of what the new version of the website will look like on web and mobile. This is very convenient. Once I merge to `main`, Netlify deploys the new js bundle to my domain.
So my PR triggers the test checks and UI preview (CI), and merging to `main` deploys the js bundle to my domain (CD).

### Working with others

Having `feature` branches merged to `main` is, in my opinion, the bare minimum when working with other developers.

Therefore, I suggest that for the feature you want to implement, you create a branch from `main`, solve the issue and raise a PR to get your colleagues' feedback. In your CI, describe the jobs you want to run on commits to a feature branch and the jobs you want to run when the code is merged to `main`.

Your `main` branch should be protected, meaning only reviewed code can be merged to it and nobody can push directly to it (thus the CI jobs cannot be bypassed).

This workflow is suitable for simple projects with one or a few contributors and with a simple CI/CD.

Finally, the feature branches should be **short-lived**. Some people refer to CI (Continuous Integration) strictly as a way of saying that we merge quickly to `main`, even if the feature is only partially implemented, as long as it works in production (or is hidden behind a flag, for instance).

### GitHub Flow

Feature branching is what they use at GitHub; they call it `GitHub Flow`, but it is the same as `feature branching`. See for yourself from their doc:

> So, what is GitHub Flow?
>
> - Anything in the `main` branch is deployable
> - To work on something new, create a descriptively named branch off of `main` (ie: `new-oauth2-scopes`)
> - Commit to that branch locally and regularly push your work to the same named branch on the server
> - When you need feedback or help, or you think the branch is ready for merging, open a [pull request](http://help.github.com/send-pull-requests/)
> - After someone else has reviewed and signed off on the feature, you can merge it into main
> - Once it is merged and pushed to 'main', you can and *should* deploy immediately

### Should you use it?

Yes. Actually, pretty much everybody uses feature branches.

## Forking

### Timeline Example

![Forking](/assets/git-workflows/forking.png)

### Open Source Contributions

Forking is the method used for open-source contributions. In short, you could **clone** the repo locally, but you won't be able to push any branches because the author won't allow it. Just imagine if anybody could freely push branches to your repo! So the trick is to **fork** the repo (make a personal copy on the version control platform) to your own GitHub account. Then you clone that repository instead and work from there. The original GitHub repo is called the `upstream` and your own copy of it is called the `origin`.

Then, once your feature is implemented, you push the code to `origin` (your fork) and raise a PR to merge the feature `origin/my-feature` into the `upstream/main` branch. When the authors/maintainers of the upstream repo approve your PR and merge it to `upstream/main`, you can then "sync" (merge `upstream/main` into `origin/main`) and start working on another feature.

To link forking to our previous strategies, you can see that you are basically doing **feature branching** again.

Some open-source authors might push directly to their `main` branch while accepting PRs from forks. In that specific scenario, the authors are doing **Trunk-Based Development** while requiring external contributors to follow **feature branching**. Interesting, isn't it?
## Release Branches

### Timeline Example

![Release Branching](/assets/git-workflows/release-branching.png)

### It's getting ugly

Indeed, some projects might have multiple versions deployed and accessible by clients at the same time. The common example is the need to still support old products or old API versions.

In the timeline chart above, you can see that it is getting a bit more verbose, but not so difficult to grasp. We branch `release-1.0` from `main`. Bob starts working on features and merges them to `release-1.0`. At some point, the code is deemed ready to be deployed and is therefore merged to `main`. Bob quickly moves on to building features for the next release: `release-1.1`.

Unfortunately, a bug is discovered in production and needs urgent fixing. Alice merges a hotfix into `main` to patch the issue. Production is now stable and a new version arises from the patch: `v1.0.1`. We then sync `release-1.0` with `main` so the version on `release-1.0` is also `v1.0.1`.

While Alice was patching production, Bob already pushed some features to the new release branch. So we need to merge the patches made by Alice into Bob's new code, and that is why we also need to sync `release-1.1` with `main`. After syncing, Bob can merge his new release as `1.1.1` to `main`.

If you got confused by the version numbers, I redirect you to [SemVer](https://semver.org/), but in short, a version is of the format *Major.Minor.Patch*. **Major** is used for incompatible changes (like 2 independent API versions), **Minor** is in our example the `release`, and **Patch** is the `hotfix` from Alice. This way, when Bob merged his branch `release-1.1`, he did include Alice's hotfix, making the new version in `main` not `1.1.0` but indeed `1.1.1`.

### Should you use it?

If you don't need to support multiple releases at once, no, don't use it. Ideally, you merge your features quite frequently and one release does not break the other ones. It is actually very often the case that we do not need to support old versions. So if you can, don't use it.

## GitFlow

### Timeline Example

![GitFlow](/assets/git-workflows/gitflow.png)

### Fatality

To quote [Atlassian](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow):

> Gitflow is a legacy Git workflow that was originally a disruptive and novel strategy for managing Git branches. Gitflow has fallen in popularity in favor of [trunk-based workflows](https://www.atlassian.com/continuous-delivery/continuous-integration/trunk-based-development), which are now considered best practices for modern continuous software development and [DevOps](https://www.atlassian.com/devops/what-is-devops) practices. Gitflow also can be challenging to use with [CI/CD](https://www.atlassian.com/continuous-delivery).

So GitFlow is obsolete, and you will soon understand why.

It is similar to what we just saw with **release branching**, but now we have another branch called `develop`. Every feature is merged to `develop`. Once a version is ready, we merge it to the corresponding `release` branch. On that release branch, some additional commits might be pushed before merging to `main`. On every new version merged to `main`, we need to sync A LOT. You can see on the chart above all the potential merge conflicts, represented by swords. I hope this visual representation highlights the problem: too many potential merge conflicts.

### But why?

It is a good question; I am not sure.
The idea of having a `develop` branch is very common in a lot of projects, but why combine it with `release` branches like that, I am honestly not sure. I don't recommend using GitFlow, and it is considered obsolete for a reason. In general, we want the following:

- as few branches as possible
- short-lived branches with small or partial (but workable) features to be deployed

I see `GitFlow` as the opposite of `Continuous Integration` (in the sense of frequently merging new features and regularly having new deployable code ready). For fun, let's have a look at what happens after a hotfix in prod:

- hotfix-1.0.1 ⚔️ main
- main ⚔️ release-1.0
- main ⚔️ release-1.1
- main ⚔️ develop
- develop ⚔️ feature

I have the feeling that implementing it would mean having a dedicated engineer to take care of the branching; a sort of *Git gardener*.

For big legacy projects, it might still be in use or even necessary, but I personally think it should be avoided.

## Feature branching on develop

### Timeline Example

![Feature Branching on Develop](/assets/git-workflows/feature-branching-on-develop.png)

### Develop branch

The GitFlow aspect that most people still use is the `develop` branch. All the feature branches are merged to `develop` instead of `main`. Once `develop` is deemed ready for release, it is merged to `main`.

This is useful for a few reasons:

- at any time, we know the commit of the stable release (the code in prod) via the `main` branch
- at any time, we know the latest commit of the ongoing new version via the `develop` branch

This seems like the sweet spot for most cases, and that is why it is popular.

Merging a `feature` to `develop` triggers a bunch of CI jobs (the usual: format checks, test checks etc.).

Merging `develop` to `main` triggers another bunch of CI jobs (building a docker image and pushing it to a container registry, for instance).

### Should you use it?

Yes. It is simple yet efficient.

## Release Candidate workflow

### Timeline Example

![Release Candidate Workflow](/assets/git-workflows/RC-workflow.png)

It is very similar to **Feature branching on develop**. The only difference is that when `develop` is merged to `main`, it creates a **Release Candidate** (RC) to be tested in a test/staging environment. If an issue arises in the test environment, a hotfix is made and we have a new RC (RC2 in this case). Once everything is OK in the test env, we have a stable release (we basically just tag a branch).

The advantage of this strategy is that `main` is the line of truth for both the test and prod envs. `main` contains the RC and stable versions, which is great for reporting what went wrong in the test cluster and what is stable in prod.

This strategy works if `main` does not automatically deploy to production. It could deploy something non-critical, such as a docker image of the app to a container registry, for instance.

### Tagging Example

- Bob has merged a few features to `develop` and deems `develop` ready to be merged to `main`. It is a release candidate with version `v1.0.0-RC1`.
- Alice approves Bob's changes and merges `develop` to `main`.
- Alice deploys the app to **staging** and realizes one feature needs correction.
- Alice branches out of `main`, implements the RC fix, and the code is merged to `main`. The new version is `v1.0.0-RC2`.
- Alice redeploys to **staging** and everything works as expected. Thus, Alice bumps the version to stable, `v1.0.0`,
  and then deploys to **prod**.
- Unfortunately, in a very edge case, a feature fails in production and needs urgent fixing.
- Alice branches out of `main`, implements the *hotfix* and merges back to `main`. The version is now `v1.0.1`.
- All is well now, and it's time to *sync* `develop` with `main`.

### Recap

- `feature` branches are merged to `develop`
- the `develop` branch is merged to `main` as version *x.y.z-RCp*
- `RC-fix` branches are merged to `main` as new RCs until the tests pass in the test env; the version becomes *x.y.z-RC(p+1)*
- `hotfix` branches are merged to `main` if there is an urgent bug in the prod env; the version is incremented like so: *x.y.(z+1)*
- the `main` branch is merged back to `develop` (sync) and eventual conflicts with new features are resolved
- new `features` are implemented for the version *x.(y+1).z*

### Should you use it?

If you need a test/staging environment to test changes, the RC strategy is good for you. However, if you have only one env and your CD is not critical, prefer **Feature branching on develop**.

## Conclusion

Use **trunk-based** development if you are working alone on a project or with experienced developers you can trust.

Prefer **feature branching** for the PR-dedicated CI and the feedback from colleagues or yourself.

Having a **develop** branch between the `feature` and `main` branches helps you follow the "Continuous Integration" philosophy, in the sense of frequently merging short-lived feature branches to a development line of truth (even if it is a bit ahead of, or diverging from, `main`, the production line of truth).

Only use **release branching** if it is absolutely required because of older-release maintenance constraints.

If you have a test/staging env that needs to go through integration testing before going to prod, the **Release Candidate workflow** is advisable.

I think people tend to refer to CI as the test jobs done on PRs from `feature` to `develop`, and to CD as the build jobs happening on merges to `main`. Others refer to CI as the philosophy of merging short/partial (but working) features as quickly as possible. This can be applied in **Feature branching on develop**, in my opinion.

Taking the time to pick the simplest branching strategy possible for your project can really make the development experience a bliss for the whole team. People should focus on implementing quality features and not on doing botany (lots of branches… anybody?).
"\nWhat Git workflow is suitable for your project: trunk-based, feature branching, forking, release branching, release candidate workflow, Feature branching to develop, GitFlow\n",su,"What Git workflow is suitable for your project","git-workflows",new n(null,3,[dp,"/assets/loic-blog-logo.png",ap,"/assets/loic-blog-logo.png",Xl,"Logo referencing Aperture Science"],null),"git-workflows"]),jj([Vl,Bm,Zn,Qp,Rp,vq,Lr,gt,Bt],[new S(null,2,5,T,["Clojure","MCTS"],null),new S(null,1,5,T,["2021-08-13"],null),
From 7535e6b3cca685885f598848cdb64baa53079e72 Mon Sep 17 00:00:00 2001
From: skydread1
Date: Tue, 3 Sep 2024 20:03:19 +0800
Subject: [PATCH 4/5] Set posts pubDate to 18:00 in rss feed items
---
src/loicb/server/md.clj | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/loicb/server/md.clj b/src/loicb/server/md.clj
index b4a6201..ae24947 100644
--- a/src/loicb/server/md.clj
+++ b/src/loicb/server/md.clj
@@ -88,7 +88,7 @@
{:title title
:link (str blog-url "/" id)
:guid (str blog-url "/" id)
- :pubDate (-> (t/time) (t/on (first date)) (t/in "Asia/Singapore") t/instant)
+ :pubDate (-> (t/time "18:00") (t/on (first date)) (t/in "Asia/Singapore") t/instant)
:description md-content-short
"content:encoded" (str "
Date: Tue, 3 Sep 2024 12:05:30 +0000
Subject: [PATCH 5/5] Compile the cljs to the js bundle and update RSS feed
---
resources/public/blog/rss/clojure-feed.xml | 26 +++++++++++-----------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/resources/public/blog/rss/clojure-feed.xml b/resources/public/blog/rss/clojure-feed.xml
index c6b28b2..cddfc51 100644
--- a/resources/public/blog/rss/clojure-feed.xml
+++ b/resources/public/blog/rss/clojure-feed.xml
@@ -1,4 +1,4 @@
-Loic Blanchard - Clojure Blog Feed https://www.loicblanchard.meArticles related to Clojure en-us Tue, 03 Sep 2024 11:51:16 +0000 clj-rss Testing in Clojure https://www.loicblanchard.me/blog/testing-in-clojurehttps://www.loicblanchard.me/blog/testing-in-clojure Sat, 10 Aug 2024 03:51:16 +0000
+Loic Blanchard - Clojure Blog Feed https://www.loicblanchard.meArticles related to Clojure en-us Tue, 03 Sep 2024 12:05:27 +0000 clj-rss Testing in Clojure https://www.loicblanchard.me/blog/testing-in-clojurehttps://www.loicblanchard.me/blog/testing-in-clojure Sat, 10 Aug 2024 10:00:00 +0000
# Time as a value with Tick

*Summary: Illustrate date and time concepts in programming using the Clojure Tick library.*
]]>It is always very confusing to deal with time in programming. In fact there are so many time representations, for legacy reasons, that sticking to one is not possible as our dependencies, databases or even programming languages might use different ways of representing time!
You might have asked yourself the following questions:
timestamp
, date-time
, offset-date-time
, zoned-date-time
, instant
, inst
?UTC
, DST
?Instant
instead of Java Date
?timestamp
?duration
and a period
?This article will answer these questions and will illustrate the answers with Clojure code snippets using the juxt/tick
library.
## What is Tick?

juxt/tick is an excellent open-source Clojure library to deal with `date` and `time` as values. Its documentation is of very good quality as well.

The `time since epoch`, or `timestamp`, is a way of measuring time by counting the number of time units that have elapsed since a specific point in time, called the epoch. It is often represented in either milliseconds or seconds, depending on the level of precision required for a particular application.

So basically, it is just an `int` such as `1705752000000`.

The obvious advantage is the universal simplicity of representing time. The disadvantage is the lack of human readability. So we need to find a more human-friendly representation of time.
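As a quick sketch, using the java.time interop that tick builds on, here is the readable counterpart of that integer:

```clojure
(require '[tick.core :as t]) ;; loading tick also installs readable printing for java.time types

;; An epoch-millis timestamp is just a number...
(def ts 1705752000000)

;; ...which maps to a human-readable point in time
(java.time.Instant/ofEpochMilli ts)
;; => #time/instant "2024-01-20T12:00:00Z"
```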
Alice is having some fish and chips for her lunch in the UK. She checks the clock on the wall and it shows 12pm. She checks her calendar and it shows that the day is January the 20th.

The local time is the time in a specific time zone, usually represented using a date and a time-of-day without any time zone information. In Java, it is called `java.time.LocalDateTime`. However, `tick` points out that when you ask someone the time, it is always going to be "local", so they prefer to call it `date-time`, as the local part is implicit.

So if we ask Alice for the time and date, she will reply:

```clojure
(-> (t/time "12:00")
    (t/on "2024-01-20"))
;; => #time/date-time "2024-01-20T12:00"
```
Later in the article, around the DST switch, the following snippet (its beginning is cut off by the diff) shifts a zoned date-time by one day:

```clojure
;; ... (start of the snippet cut off by the diff; see the reconstruction below)
    (t/in "Europe/London")
    (t/>> (t/new-duration 1 :days)))
;; => #time/zoned-date-time "2024-03-31T09:00+01:00[Europe/London]"
```
We can see that in this specific DST switch to summer time, the local clocks jump forward an hour, so adding a `duration` of exactly 24 hours lands on a clock reading of `09:00`. However, the `period`, which takes the date in a calendar system into consideration, does not see a day as 24 hours (time-based) but as a calendar unit (date-based), and therefore the new time is still `08:00`.
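Since the diff cuts off the beginning of the snippet, here is a self-contained reconstruction, assuming a start of 08:00 on 2024-03-30, which matches the shown output:

```clojure
(require '[tick.core :as t])

(def start (-> (t/time "08:00")
               (t/on "2024-03-30")
               (t/in "Europe/London")))

;; duration: time-based, shifts by exactly 24 hours of elapsed time
(t/>> start (t/new-duration 1 :days))
;; => #time/zoned-date-time "2024-03-31T09:00+01:00[Europe/London]"

;; period: date-based, shifts by one calendar day, keeping the local clock time
(t/>> start (t/new-period 1 :days))
;; => #time/zoned-date-time "2024-03-31T08:00+01:00[Europe/London]"
```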
A `Zone` encapsulates the notions of UTC and DST.

The time since epoch is the universal computer-friendly way of representing time, whereas the `Instant` is the universal human-friendly way of representing time.

A `duration` measures an amount of time using time-based values, whereas a `period` uses date-based (calendar) values.

Finally, for Clojure developers, I highly recommend using `juxt/tick`, as it allows us to handle time efficiently (conversions, operations) and elegantly (readable, as values); I use it in several of my projects. It is of course also possible to interop with the `java.time.Instant` class directly if you prefer.
If you are not familiar with fun-map, please refer to the doc "Fun-Map: a solution to deps injection in Clojure".

In this document, I will show you how we leverage `fun-map` to create different systems in the website flybot.sg: `prod-system`, `dev-system`, `test-system` and `figwheel-system`.

In our backend, we use `life-cycle-map` to manage the life cycle of all our stateful components.

Here is the beginning of the system we currently have for production:

```clojure
(defn system
  [{:keys [http-port db-uri google-creds oauth2-callback client-root-path]
;; ... (rest of the system truncated by the diff)
```
Further down, the figwheel system exposes its ring handler like so:

```clojure
;; (enclosing defn truncated by the diff)
  (-> figwheel-system
      touch
      :reitit-router)))
```

The `figwheel-handler` is the value of the key `:reitit-router` of our running system. So the system is started first via `touch`, and its handler is provided to the servers figwheel starts, which will be running while we work on our frontend.
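Here is a minimal sketch of the `life-cycle-map` idea with hypothetical components (not flybot's actual system):

```clojure
(require '[robertluo.fun-map :refer [fnk life-cycle-map closeable touch halt!]])

(def system
  (life-cycle-map
   {:db-conn (fnk []
               ;; closeable wraps a value together with its shutdown logic
               (closeable {:conn "fake-conn"}
                          #(println "db closed")))
    :server  (fnk [db-conn] ;; depends on :db-conn, so it starts after it
               (closeable {:server "fake-server" :db db-conn}
                          #(println "server stopped")))}))

(touch system)  ;; realizes every entry, effectively starting the system
(halt! system)  ;; invokes the close functions in reverse dependency order
```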
If you are not familiar with lasagna-pull, please refer to the doc "Lasagna Pull: Precisely select from deep nested data".

In this document, I will show you how we leverage `lasagna-pull` in the flybot app to define a pure data API.

A good use case of the pattern is as a parameter in a post request. In our backend, we have a structure representing all our endpoints:

```clojure
;; BACKEND data structure
(defn pullable-data
;; ... (truncated by the diff)
```

The end of the saturn-handler (also truncated by the diff) returns:

```clojure
  {:response     ('&? resp)
   :effects-desc effects
   :session      (merge session sessions)}))
```
You can also notice that the data is validated via `pull/with-data-schema`. In case of a validation error, since no side effects are performed during the pulling, an error is thrown and no mutations are done. Having no side effects at all makes it way easier to test and debug, and it is more predictable.

Finally, the `ring-handler` is the component responsible for executing all the side effects at once. So the `saturn-handler`'s purpose is to make sure the data is pulled properly, validated using malli, and that the side-effect descriptions are gathered in one place to be executed later on.
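The idea of the pull pattern, shown with plain data (this is the concept, not the library's exact API):

```clojure
;; deeply nested data, e.g. what an endpoint could expose
(def data {:posts {:all [{:id 1} {:id 2}]
                   :count 2}
           :users {:all [{:id "alice"}]}})

;; a pattern mirrors the data shape and marks the wanted values with '?'
(def pattern '{:posts {:count ?}
               :users {:all ?}})

;; pulling with such a pattern yields only the requested slice:
;; {:posts {:count 2} :users {:all [{:id "alice"}]}}
```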
Our app skydread1/flybot.sg is a full-stack Clojure web and mobile app. We opted for a mono-repo to host:

- the `server`: a Clojure app
- the `web` client: a Reagent (React) app using Re-Frame
- the `mobile` client: a Reagent Native (React Native) app using Re-Frame

Note that the web app does not use NPM at all. However, the React Native mobile app does use NPM, and its `node_modules` need to be generated.

By using only one `deps.edn`, we can easily start the different parts of the app. The goal of this document is to highlight the mono-repo structure and how to run the different parts (dev, test, build etc.).

```
├── client
│   ├── common
... (tree truncated by the diff)
│   │   └── flybot.server
│   └── test
│       └── flybot.server
```
- the `server` dir contains the `.clj` files
- the `common` dir contains the `.cljc` files
- the `client` dir contains the `.cljs` files

You can have a look at the deps.edn.

We can use namespaced aliases in `deps.edn` to make the process clearer. I will go through the different aliases, explain their purposes and how I use them to develop the app.

First, the root deps of the `deps.edn`, inherited by all aliases:
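The original snippet is not visible in this extract, so here is a minimal sketch of what such root deps look like (paths and versions are illustrative, not the repo's actual values):

```clojure
;; deps.edn (illustrative sketch only)
{:paths ["server/src" "common/src" "resources"]
 :deps  {org.clojure/clojure {:mvn/version "1.11.1"}
         ;; ...libraries shared by the server and the common cljc code
         }}
```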
The deps above are used in both `server/src` and `common/src` (clj and cljc files). So every time you start a `deps` REPL or a `deps+figwheel` REPL, these deps will be loaded.

In the `common/test/flybot/common/test_sample_data.cljc` namespace, we have sample data that can be loaded in both the backend and frontend dev systems. This is made possible by clj/cljs reader conditionals.
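As a tiny illustration of reader conditionals (hypothetical data, not the repo's actual sample):

```clojure
;; sample_data.cljc (illustrative)
(def sample-post
  {:post/id         1
   :post/created-at #?(:clj  (java.util.Date.) ;; JVM branch
                       :cljs (js/Date.))})     ;; browser/React Native branch
```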
I use the `calva` extension in VSCode to jack in deps and figwheel REPLs, but you can use Emacs if you prefer, for instance.

What is important to remember is that when you work on the backend only, you just need a `deps` REPL. There is no need for figwheel since we do not modify any cljs content. So in this scenario, the frontend is fixed (the main.js is generated and not being reloaded) but the backend changes (the `clj` and `cljc` files).

However, when you work on the frontend, you need to load the backend deps to have your server running, but you also need to recompile the js when a cljs file is saved. Therefore you need a `deps+figwheel` REPL. So in this scenario, the backend is fixed and running, but the frontend changes (the `cljs` and `cljc` files).

You can see that the common `cljc` files are watched in both scenarios, which makes sense since they "become" clj or cljs code depending on which REPL type you are currently working in.
Following are the aliases used for the server:

- `:jvm-base`: JVM options to make datalevin work with Java versions > Java 8
- `:server/dev`: clj paths for the backend systems and tests
- `:server/test`: runs the clj tests

Following is the alias used for both the web and mobile clients:

- `:client`: deps for the frontend libraries common to web and react native

The extra-paths contain the `cljs` files. We can note the `client/common/src` path that contains most of the `re-frame` logic, because most subscriptions and events work on both web and react native right away! The main differences between the re-frame logic for Reagent and Reagent Native have to do with how to deal with navigation and oauth2 redirection. That is the reason we have most of the logic in a common dir in `client`.

Following are the aliases used for the mobile client:

- `:mobile/rn`: contains the cljs deps only used for react native; they are added on top of the client deps
- `:mobile/ios`: starts the figwheel REPL to work on iOS

Following are the aliases used for the web client:

- `:web/dev`: starts the dev REPL
- `:web/prod`: generates the optimized js bundle main.js
- `:web/test`: runs the cljs tests
- `:web/test-headless`: runs the headless cljs tests (for GitHub CI)

Following is the alias used to build the js bundle or an uberjar:

- `:build`: clojure/tools.build is used to build the main.js and also an uberjar for local testing

The build.clj contains the different build functions:

```
clj -T:build js-bundle
clj -T:build uber
clj -T:build uber+js
```

Following is the alias used to build an image and push it to local docker or AWS ECR:

- `:jib`: builds the image and pushes it to the image repo

Following is the alias used to point out outdated dependencies:

- `:outdated`: prints the outdated deps and their latest available versions

We have not released the mobile app yet; that is why there are no aliases related to CD for react native yet.

This is one solution to handle the server and clients in the same repo. Feel free to consult the complete deps.edn content.
It is important to have a clear directory structure to only load the required namespaces and avoid errors. Using `:extra-paths` and `:extra-deps` in deps.edn is important because it prevents deploying unnecessary namespaces and libraries on the server and client. Namespacing the aliases makes the distinction between backend, common and client (web and mobile) clearer. Using a `deps` jack-in for server-only work and `deps+figwheel` for frontend work is made easy using `calva` in VSCode (it works in other editors as well).
server
dir contains then .clj
filescommon
dir the .cljc
filesclients
dir the .cljs
files.You can have a look at the deps.edn.
We can use namespaced aliases in deps.edn
to make the process clearer.
I will go through the different aliases and explain their purposes and how to I used them to develop the app.
First, the root deps of the deps.edn, inherited by all aliases:
The deps above are used in both server/src
and common/src
(clj and cljc files).
So every time you start a deps
REPL or a deps+figwheel
REPL, these deps will be loaded.
In the common/test/flybot/common/testsampledata.cljc namespace, we have sample data that can be loaded in both backend dev system of frontend dev systems.
This is made possible by reader conditionals clj/cljs.
I use the calva
extension in VSCode to jack-in deps and figwheel REPLs but you can use Emacs if you prefer for instance.
What is important to remember is that, when you work on the backend only, you just need a deps
REPL. There is no need for figwheel since we do not modify the cljs content. So in this scenario, the frontend is fixed (the main.js is generated and not being reloaded) but the backend changes (the clj
files and cljc
files).
However, when you work on the frontend, you need to load the backend deps to have your server running but you also need to recompile the js when a cljs file is saved. Therefore your need both deps+figwheel
REPL. So in this scenario, the backend is fixed and running but the frontend changes (the cljs
files and cljc
files)
You can see that the common cljc
files are being watched in both scenarios which makes sense since they "become" clj or cljs code depending on what REPL type you are currently working in.
Following are the aliases used for the server:
:jvm-base
: JVM options to make datalevin work with java version > java8:server/dev
: clj paths for the backend systems and tests:server/test
: Run clj testsFollowing is the alias used for both web and mobile clients:
:client
: deps for frontend libraries common to web and react native.The extra-paths contains the cljs
files.
We can note the client/common/src
path that contains most of the re-frame
logic because most subscriptions and events work on both web and react native right away!
The main differences between the re-frame logic for Reagent and Reagent Native have to do with how to deal with Navigation and oauth2 redirection. That is the reason we have most of the logic in a common dir in client
.
Following are the aliases used for the mobile client:
:mobile/rn
: contains the cljs deps only used for react native. They are added on top of the client deps.:mobile/ios
: starts the figwheel REPL to work on iOS.Following are the aliases used for the web client:
:web/dev
: starts the dev REPL:web/prod
: generates the optimized js bundle main.js:web/test
: runs the cljs tests:web/test-headless
: runs the headless cljs tests (fot GitHub CI)Following is the alias used to build the js bundle or a uberjar:
:build
: clojure/tools.build is used to build the main.js and also an uber jar for local testing, we use .The build.clj contains the different build functions:
clj -T:build js-bundle
clj -T:build uber
clj -T:build uber+js
Following is the alias used to build an image and push it to local docker or AWS ECR:
:jib
: build image and push to image repoFollowing is the alias used to points out outdated dependencies
:outdated
: prints the outdated deps and their last available versionWe have not released the mobile app yet, that is why there is no aliases related to CD for react native yet.
This is one solution to handle server and clients in the same repo.
Feel free to consult the complete deps.edn content.
It is important to have a clear directory structure to only load required namespaces and avoid errors.
Using :extra-paths and :extra-deps in deps.edn is important because it prevents deploying unnecessary namespaces and libraries on the server and client.
Namespacing the aliases makes the distinction between backend, common and client (web and mobile) clearer.
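To make this concrete, here is a hedged sketch of what such namespaced aliases can look like in deps.edn (paths and versions are illustrative, not the actual flybot.sg config):
{:aliases
 {:server/dev  {:extra-paths ["server/src" "server/dev"]}
  :server/test {:extra-paths ["server/test"]
                :extra-deps  {lambdaisland/kaocha {:mvn/version "1.87.1366"}}}
  :client      {:extra-paths ["client/common/src"]
                :extra-deps  {reagent/reagent {:mvn/version "1.2.0"}}}}}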
Using a deps jack-in for server-only work and deps+figwheel for frontend work is made easy by calva in VSCode (it works in other editors as well).
This project is stored alongside the backend and the web frontend in the mono-repo: skydread1/flybot.sg
The codebase is a full-stack Clojure(Script) app. The backend is written in Clojure and the web and mobile clients are written in ClojureScript.
For the web app, we use reagent, a ClojureScript interface for React.
For the mobile app, we use reagent-react-native, a ClojureScript interface for React Native.
The mono-repo structure is as follows:
├── client
│ ├── common
(rf/reg-event-fx
 :user/set-cookie ;; hypothetical event id: the original snippet is truncated
 (fn [{:keys [db]} [_ cookie-value]]
   {:db (assoc db :user/cookie cookie-value)
    :fx [[:dispatch [:evt.app/initialize]]]}))
As for now, the styling is done directly in the :style keys of the RN components’ hiccups. Some more complex components have styling that takes functions, inside or outside the :style keyword.
I hope that this unusual mobile app stack made you want to consider ClojureScript
as a good alternative to build mobile apps.
It is important to note that the state management logic (re-frame) is about 90% the same for both the web app and the mobile app, which is very convenient.
Finally, the web app is deployed but not the mobile app. All the codebase is open-source so feel free to take inspiration.
I will use the flybot.sg website as an example of an app to deploy.
datalevin: an embedded database which resides alongside the Clojure code inside a container. Instead of using datomic pro, and having the burden of separate containers for the app and the transactor, we decided to use juji-io/datalevin and its embedded storage on disk. Thus, we only need to deploy one container with the app.
To do so, we can use the library atomisthq/jibbit based on GoogleContainerTools/jib (Build container images for Java applications).
It does not use docker to generate the image, so there is no need to have docker installed to generate images.
jibbit can be added as an alias in deps.edn:
:jib
{:deps {io.github.atomisthq/jibbit {:git/tag "v0.1.14" :git/sha "ca4f7d3"}}
 ;; assumed entry namespace so the alias can be invoked as a tool (see jibbit's README)
 :ns-default jibbit.core}
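With such an alias in place, and a jib.edn config describing the target image, building and pushing the image should then be a single command (assuming jibbit's tool entry point, see its README):
clj -T:jib build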
docker run \
-e ADMIN_USER="secret" \
-e SYSTEM="{:http-port 8123, :db-uri \"/datalevin/prod/flybotdb\", :oauth2-callback \"https://www.flybot.sg/oauth/google/callback\"}" \
acc.dkr.ecr.region.amazonaws.com/flybot-website:test
Even if we have one single EC2 instance running, there are several benefits we can get from AWS load balancers.
In our case, we have an Application Load Balancer (ALB) as target of a Network Load Balancer (NLB). Easily adding an ALB as target of NLB is a recent feature in AWS that allows us to combine the strength of both LBs.
The internal ALB serves several purposes, one of which is TLS termination using certificates from AWS Certificate Manager (ACM). ACM allows us to request certificates for www.flybot.sg and flybot.sg and attach them to the ALB rules to perform path redirection in our case. This is convenient as we do not need to install any ssl certificates or handle any redirects in the instance directly, nor change the code base.
Since the ALB has dynamic IPs, we cannot use it in our GoDaddy A record for flybot.sg. One solution is to use AWS route53, because AWS added the possibility to register the ALB DNS name in an A record (which is not possible with external DNS managers). However, we already use GoDaddy as DNS host and we don’t want to depend on route53 for that.
Another solution is to place an internet-facing NLB in front of the ALB, because an NLB provides static IPs. The ALB works at layer 7 while the NLB works at layer 4. Thus, we have the NLB forwarding the internet traffic to the ALB.
The target group is where the traffic from the load balancers is sent. We have 3 target groups.
Since the NLB is the internet-facing entry point, we use a CNAME record for www resolving to the NLB DNS name. For the root domain flybot.sg, we use an A record for @ resolving to the static IP of the NLB (for the AZ where the EC2 resides).
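Summarized, the GoDaddy records look like this (values are placeholders):
www  CNAME  <nlb-dns-name>
@    A      <nlb-static-ip>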
You can have a look at the open-source repo: skydread1/flybot.sg
While working on flybot.sg, I experimented with datomic-free, datomic starter-pro with Cassandra, and datomic starter-pro with embedded storage.
You can read the rationale of Datomic in their on-prem documentation.
Stuart Sierra explained very well how datomic works in the video Intro to Datomic.
Basically, Datomic works as a layer on top of your underlying storage (in this case, we will use Cassandra db).
Your application uses the Datomic peer library and communicates with a Datomic transactor.
The transactor is the process that controls inbound transactions and coordinates persistence to the storage services. It acts as a single authority for inbound transactions, which allows the system to be ACID compliant and fully consistent.
The peer is the process that will query the persisted data.
Since Datomic leverages existing storage services, you can change persistent storage fairly easily.
Datomic is closed-source and commercial.
You can see the different pricing models in the page Get Datomic On-Prem.
There are a few ways to get started for free. The first is to use the datomic-free version, which comes with in-mem database storage and a local-storage transactor. You don’t need any license to use it, so it is a good choice to get familiar with the datomic Clojure API.
Then, there is datomic pro starter, renamed datomic starter, which is free and maintained for 1 year. After that one-year threshold, you won’t benefit from support and you won’t get new versions of Datomic. You need to register with Datomic to get the license key.
Note that Datomic only supports Cassandra up to version 3.x.x:
Cassandra version in datomic starter-pro at the time of writing: 3.7.1
Closest stable version of Cassandra: 3.11.10
Problem 1: Datomic does not support java 11, so we have to have a java 8 version on the machine.
Solution: use jenv to manage multiple java versions.
# jenv to manage java version
brew install jenv
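A possible setup then looks like this (the JDK path is machine-specific and only illustrative):
# register an existing JDK 8 install with jenv
jenv add /Library/Java/JavaVirtualMachines/temurin-8.jdk/Contents/Home
# pin java 8 for the current project directory
jenv local 1.8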
host=0.0.0.0
port=4334
alt-host=datomicdb
storage-access=remote
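The transactor is then started by pointing it at this properties file (the path is illustrative):
bin/transactor config/transactor.properties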
After updating the transactor properties, you should be able to see the app running on port 8123 and be able to perform transactions as expected.
Your Clojure library is assumed to be already compiled to dotnet.
To know how to do this, refer to the article: Port your Clojure lib to the CLR with MAGIC
In this article, I will show you:
Just use the command nos dotnet/build
at the root of the Clojure project.
The dlls are by default generated in a /build
folder.
A .csproj
file (XML) must be added at the root of the Clojure project.
You can find an example here: clr.test.check.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
dotnet nuget push "bin/Release/clr.test.check.1.1.1.nupkg" --source "…"
Once you have the required config files ready, you can use Nostrand to:
Build your dlls:
nos dotnet/build
Pack your dlls in a nuget package and push it to a remote host:
nos dotnet/nuget-push
Import your packages in Unity:
nuget restore
Magic.Unity is the Magic runtime for Unity and is already packaged as a nuget package on its public repo.
The Lasagna stack library fun-map by @robertluo blurs the line between identity, state and function. As a result, it is a very convenient tool to define systems in your applications, providing an elegant way to perform associative dependency injection.
In this document, I will show you the benefits of fun-map, and especially the life-cycle-map, as a dependency injection system.
In any kind of program, we need to manage the state. In Clojure, we want to keep the mutating parts of our code as isolated and minimal as possible. The different components of our application, such as the db connections, queues or servers, mutate the world and sometimes need each other to do so. The talk Components Just Enough Structure by Stuart Sierra explains this dependency injection problem very well and provides a Clojure solution with the library component.
fun-map is another way of dealing with inter-dependent components. In order to understand why fun-map is so convenient, it is interesting to look at other existing solutions first.
Let’s first have a look at existing solutions for the life cycle management of components in Clojure, especially the Component library, which is a very good library providing a way to define systems.
In the Clojure world, we have stateful components (atom, channel, etc.) and we don’t want them scattered in our code without a clear way to link them, nor without knowing the order in which to start these external resources.
The component of the component library is just a record that implements a Lifecycle protocol to properly start and stop the component. As a developer, you just implement the start and stop methods of the protocol for each of your components (DB, server or even domain model).
A DB component could look like this for instance:
(defrecord Database [host port connection]
  component/Lifecycle
  (start [this] (assoc this :connection (connect-db host port))) ;; connect-db: hypothetical helper
  (stop [this] (when connection (disconnect connection)) (assoc this :connection nil))) ;; disconnect: hypothetical helper
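Such a component is then wired into a system with the library’s system-map, for instance (a minimal sketch):
(def system
  (component/system-map
   :db (map->Database {:host "localhost" :port 5432})))

(component/start system) ;; starts each component in dependency order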
As for fun-map, halting a life-cycle-map system closes its components in reverse order of their realization, as in this truncated example output:
;=> b closed
; a closed v2
; nil
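A life-cycle-map system producing this kind of output can be sketched like this (keys, values and print messages are illustrative):
(require '[robertluo.fun-map :refer [life-cycle-map fnk closeable touch halt!]])

(def m
  (life-cycle-map
   {:a (fnk [] (closeable 1 #(println "a closed v2")))
    :b (fnk [a] (closeable (inc a) #(println "b closed")))}))

(touch m)  ;; realizes the components in dependency order
(halt! m)  ;; closes them in reverse order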
-
fun-map also supports other features such as function call tracing, value caching, or lookup. More info in the readme.
To see Fun Map in action, refer to the doc Fun-Map applied to flybot.sg.
flybot-sg/lasagna-pull by @robertluo aims at precisely selecting from deep data structures in Clojure.
In this document, I will show you the benefit of pull-pattern
in pulling nested data.
In Clojure, it is very common to have to precisely select data in nested maps. The Clojure core select-keys and get-in functions do not make it easy to select at deeper levels of the maps with custom filters or parameters.
One of the libraries of the lasagna-stack
is flybot-sg/lasagna-pull. It takes inspiration from the datomic pull API and the library redplanetlabs/specter.
lasagna-pull aims at providing a clearer pattern than the datomic pull API. It also allows the user to add options on the selected keys (filtering, providing params to values which are functions, etc.). It supports fewer features than the specter library, but the syntax is more intuitive and covers all the major use cases you might need to select the data you want.
Finally, a metosin/malli schema can be provided to perform data validation directly using the provided pattern. This allows the client to prevent unnecessary pulling if the pattern does not match the expected shape (such as not providing the right params to a function, querying the wrong type etc).
Selecting data in a nested structure is made intuitive via a pattern that describes the data to be pulled, following the shape of the data.
Here are some simple cases to showcase the syntax:
(require '[sg.flybot.pullable :as pull])
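A basic selection looks like this (a minimal sketch based on the library’s readme; the &? key holds the matched result):
((pull/query '{:a ?}) {:a 1 :b 2})
;=> {&? {:a 1}}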
Options can also apply to the sequence value of a query, which is useful for pagination:
((pull/query '[{:a ? :b ?} ? :seq [2 3]]) [{:a 0} {:a 1} {:a 2} {:a 3} {:a 4}])
;=> {&? ({:a 2} {:a 3} {:a 4})}
-
As you can see with the different options above, the transformations are specified within the selected keys. Unlike specter however, we do not have a way to apply a transformation to all the keys at once.
We can optionally provide a metosin/malli schema to specify the shape of the data to be pulled.
The client malli schema provided is actually internally "merged" into an internal schema that checks the pattern shape, so both the pattern syntax and the pattern shape are validated.
You can also provide a context to the query: a modifier and a finalizer. This context can help you gather information from the query and apply a function to the results.
To see Lasagna Pull in action, refer to the doc Lasagna Pull applied to flybot.sg.
Note: the steps for packing the code into a nuget package, pushing it to remote github and fetching it in Unity are highlighted in another article.
Magic is a bootstrapped compiler written in Clojure that takes Clojure code as input and produces dotnet assemblies (.dll) as output.
Compiler Bootstrapping is the technique for producing a self-compiling compiler that is written in the same language it intends to compile. In our case, MAGIC is a Clojure compiler that compiles Clojure code to .NET assemblies (.dll and .exe files).
It means we need the old dlls of MAGIC to generate the new dlls of the MAGIC compiler. We repeat this process until the compiler is good enough.
The very first magic dlls were generated with the clojure/clojure-clr project, which is also a Clojure compiler targeting the CLR, but written in C# and with limitations on the generated dlls (the problem MAGIC is intended to solve). clojure-clr uses a technology called the DLR (dynamic language runtime) to optimize dynamic call sites, but it emits self-modifying code which makes the assemblies unusable on mobile devices (IL2CPP in Unity). So we needed a compiler that emits assemblies able to target both Desktop and mobile (IL2CPP), hence the Magic compiler.
We don’t want separate branches for JVM and CLR so we use reader conditionals.
You can find how to use the reader conditionals in this guide.
You will mainly need them for the require
and import
as well as the function parameters.
Don’t forget to change the extension of your file from .clj
to .cljc
.
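For instance, a namespace targeting both the JVM and the CLR could look like this (the names and imports are illustrative):
(ns my.lib.time
  #?(:clj  (:import [java.util Date])
     :cljr (:import [System DateTime])))

(defn now []
  #?(:clj  (Date.)
     :cljr DateTime/Now))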
In Emacs (with the spacemacs distribution), you might encounter some lint issues when using reader conditionals, and some configuration might be needed.
The Clojure linter library clj-kondo/clj-kondo supports the reader conditionals.
All the instructions on how to integrate it with your preferred editor are here.
To use clj-kondo with syl20bnr/spacemacs, you need the layer borkdude/flycheck-clj-kondo.
However, there is no way to add configuration in the .spacemacs
config file.
The problem is that we need to set :clj
as the default language to be checked.
In VSCode, I did not need any config to make it work.
It has nothing to do with the :default
reader conditional key such as:
#?(:clj  (Clojure expression)
   :cljs (ClojureScript expression)
   :default (fallback expression))
;; sketch of a test-runner fn invoked by nostrand; names are assumed, the original snippet is truncated
(defn run-tests [namespaces]
  (doseq [ns namespaces]
    (require ns))
  (run-all-tests))
To run the tests, just run the nos
command at the root of your project:
nos dotnet/run-tests
An example of a Clojure library that has been ported to Magic is skydread1/clr.test.check, a fork of clojure/clr.test.check. My fork uses reader conditionals so it can be run and tested in both JVM and CLR.
Now that your library is compiled to dotnet, you can learn how to package it to nuget, push it to your host repo, and import it in Unity in this article:
At Flybot Pte Ltd, we wanted to have a robot-player that can play several rounds of some of our card games (such as big-two) at a decent level.
The main goal of this robot-player was to take over an AFK player for instance.
We are considering using it for an offline mode with different levels of difficulty.
Vocabulary:
big-two: popular Chinese card game (锄大地)
AI or robot: refers to a robot-player in the card game
2 approaches were used:
The repositories are closed-source because they are private to Flybot Pte. Ltd. The approaches used are generic enough to be applied to any kind of game.
In this article, I will explain the general principle of MCTS applied to our specific case of big-two
.
Monte Carlo Tree Search (MCTS) is an important algorithm behind many major successes of recent AI applications such as AlphaGo’s striking showdown in 2016.
Essentially, MCTS uses Monte Carlo simulation to accumulate value estimates to guide towards highly rewarding trajectories in the search tree. In other words, MCTS pays more attention to nodes that are more promising, so it avoids having to brute force all possibilities which is impractical to do.
At its core, MCTS consists of repeated iterations (ideally infinite, in practice constrained by computing time and resources) of 4 steps: selection, expansion, simulation and update.
For more information, this MCTS article explains the concept very well.
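To make the loop concrete, one iteration can be sketched like this (all names are hypothetical placeholders for the four steps, not our actual implementation):
(defn mcts-iteration [tree]
  (let [leaf         (select-node tree)      ;; 1. selection: walk down promising nodes (e.g. via UCB)
        [tree' node] (expand-node tree leaf) ;; 2. expansion: add an unexplored child state
        reward       (simulate node)]        ;; 3. simulation: random playout until the game ends
    (backpropagate tree' node reward)))      ;; 4. update: visits and scores along the path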
The MCTS algorithm works very well for deterministic games with perfect information: games in which each player perfectly knows the current state of the game and there are no chance events (e.g. drawing a card from a deck, dice rolling) during the game.
However, many games lack one or both of these properties: they are called stochastic games (chance events) and games with imperfect information (partial observability of states).
big-two falls in the latter category: we don’t know the cards of the other players, so it is a game with imperfect information (more info in this paper).
So we can apply MCTS to big-two, but we will need to do at least 1 of the following 2:
Our tree representation looks like this:
{:S0 {::sut/visits 11 ::sut/score [7 3] ::sut/chldn [:S1 :S2]}
:S1 {::sut/visits 5 ::sut/score [7 3] ::sut/chldn [:S3 :S4]}