update(falco): bump chart and Falco version
Signed-off-by: Andrea Terzolo <andreaterzolo3@gmail.com>
Andreagit97 authored and poiana committed Oct 16, 2023
1 parent 6c02081 commit 4062048
Showing 4 changed files with 16 additions and 18 deletions.
5 changes: 5 additions & 0 deletions falco/CHANGELOG.md
@@ -3,6 +3,11 @@
This file documents all notable changes to Falco Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).

+## v3.8.0
+
+* Upgrade Falco to 0.36.1: https://github.com/falcosecurity/falco/releases/tag/0.36.1
+* Sync values.yaml with 0.36.1 falco.yaml config file.
+
## v3.7.1

* Update readme
4 changes: 2 additions & 2 deletions falco/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v2
name: falco
-version: 3.7.1
-appVersion: "0.36.0"
+version: 3.8.0
+appVersion: "0.36.1"
description: Falco
keywords:
- monitoring
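
For chart consumers that pin the dependency by version, picking up Falco 0.36.1 means bumping the pinned chart version to 3.8.0. A minimal sketch of an umbrella chart's dependency entry, assuming the public falcosecurity Helm repository (the umbrella chart name and its version are hypothetical):

```yaml
# Sketch: pinning the bumped falco chart from a parent (umbrella) chart.
# `my-security-stack` and its version are hypothetical; the repository URL
# is the public falcosecurity charts repo.
apiVersion: v2
name: my-security-stack
version: 0.1.0
dependencies:
  - name: falco
    version: 3.8.0
    repository: https://falcosecurity.github.io/charts
```
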
4 changes: 2 additions & 2 deletions falco/generated/helm-values.md
@@ -1,5 +1,5 @@
# Configuration values for falco chart
-`Chart version: v3.7.1`
+`Chart version: v3.8.0`
## Values

| Key | Type | Default | Description |
@@ -75,7 +75,7 @@
| falco.modern_bpf | object | `{"cpus_for_each_syscall_buffer":2}` | - [Suggestions] The default choice of index 2 (one syscall buffer for each CPU pair) was made because the modern bpf probe utilizes a different memory allocation strategy compared to the other two drivers (bpf and kernel module). However, you have the flexibility to experiment and find the optimal configuration for your system. When considering a fixed syscall_buf_size_preset and a fixed buffer dimension: - Increasing this configs value results in lower number of buffers and you can speed up your system and reduce memory usage - However, using too few buffers may increase contention in the kernel, leading to a slowdown. If you have low event throughputs and minimal drops, reducing the number of buffers (higher `cpus_for_each_syscall_buffer`) can lower the memory footprint. |
| falco.output_timeout | int | `2000` | The `output_timeout` parameter specifies the duration, in milliseconds, to wait before considering the deadline exceeded. By default, the timeout is set to 2000ms (2 seconds), meaning that the consumer of Falco outputs can block the Falco output channel for up to 2 seconds without triggering a timeout error. Falco actively monitors the performance of output channels. With this setting the timeout error can be logged, but please note that this requires setting Falco's operational logs `log_level` to a minimum of `notice`. It's important to note that Falco outputs will not be discarded from the output queue. This means that if an output channel becomes blocked indefinitely, it indicates a potential issue that needs to be addressed by the user. |
| falco.outputs | object | `{"max_burst":1000,"rate":0}` | A throttling mechanism, implemented as a token bucket, can be used to control the rate of Falco outputs. Each event source has its own rate limiter, ensuring that alerts from one source do not affect the throttling of others. The following options control the mechanism: - rate: the number of tokens (i.e. right to send a notification) gained per second. When 0, the throttling mechanism is disabled. Defaults to 0. - max_burst: the maximum number of tokens outstanding. Defaults to 1000. For example, setting the rate to 1 allows Falco to send up to 1000 notifications initially, followed by 1 notification per second. The burst capacity is fully restored after 1000 seconds of no activity. Throttling can be useful in various scenarios, such as preventing notification floods, managing system load, controlling event processing, or complying with rate limits imposed by external systems or APIs. It allows for better resource utilization, avoids overwhelming downstream systems, and helps maintain a balanced and controlled flow of notifications. With the default settings, the throttling mechanism is disabled. |
-| falco.outputs_queue | object | `{"capacity":0,"recovery":"exit"}` | Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter allows you to customize the queue capacity. Please refer to the official documentation: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. On a healthy system with optimized Falco rules, the queue should not fill up. If it does, it is most likely happening due to the entire event flow being too slow, indicating that the server is under heavy load. Lowering the number of items can prevent memory from steadily increasing until the OOM killer stops the Falco process. We provide recovery actions to self-limit or self-kill in order to handle this situation earlier, similar to how we expose the kernel buffer size as a parameter. However, it will not address the root cause of the event pipe not keeping up. `capacity`: the maximum number of items allowed in the queue is determined by this value. Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. In other words, when this configuration is set to 0, the number of allowed items is effectively set to the largest possible long value, disabling this setting. `recovery`: strategy to follow when the queue becomes filled up. It applies only when the queue is bounded and there is still available system memory. In the case of an unbounded queue, if the available memory on the system is consumed, the Falco process would be OOM killed. The value `exit` is the default, `continue` does nothing special and `empty` empties the queue and then continues. |
+| falco.outputs_queue | object | `{"capacity":0}` | Falco utilizes tbb::concurrent_bounded_queue for handling outputs, and this parameter allows you to customize the queue capacity. Please refer to the official documentation: https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Concurrent_Queue_Classes.html. On a healthy system with optimized Falco rules, the queue should not fill up. If it does, it is most likely happening due to the entire event flow being too slow, indicating that the server is under heavy load. `capacity`: the maximum number of items allowed in the queue is determined by this value. Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded. In other words, when this configuration is set to 0, the number of allowed items is effectively set to the largest possible long value, disabling this setting. In the case of an unbounded queue, if the available memory on the system is consumed, the Falco process would be OOM killed. When using this option and setting the capacity, the current event would be dropped, and the event loop would continue. This behavior mirrors kernel-side event drops when the buffer between kernel space and user space is full. |
| falco.plugins | list | `[{"init_config":null,"library_path":"libk8saudit.so","name":"k8saudit","open_params":"http://:9765/k8s-audit"},{"library_path":"libcloudtrail.so","name":"cloudtrail"},{"init_config":"","library_path":"libjson.so","name":"json"}]` | Customize subsettings for each enabled plugin. These settings will only be applied when the corresponding plugin is enabled using the `load_plugins` option. |
| falco.priority | string | `"debug"` | Any rule with a priority level more severe than or equal to the specified minimum level will be loaded and run by Falco. This allows you to filter and control the rules based on their severity, ensuring that only rules of a certain priority or higher are active and evaluated by Falco. Supported levels: "emergency", "alert", "critical", "error", "warning", "notice", "info", "debug" |
| falco.program_output | object | `{"enabled":false,"keep_alive":false,"program":"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX"}` | Redirect the output to another program or command. Possible additional things you might want to do with program output: - send to a slack webhook: program: "jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX" - logging (alternate method than syslog): program: logger -t falco-test - send over a network connection: program: nc host.example.com 80 If `keep_alive` is set to `true`, the program will be started once and continuously written to, with each output message on its own line. If `keep_alive` is set to `false`, the program will be re-spawned for each output message. Furthermore, the program will be re-spawned if Falco receives the SIGUSR1 signal. |
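
The rows above correspond one-to-one with keys under `falco` in the chart's values. A minimal sketch of an override file exercising a few of the documented settings (the specific numbers are illustrative, not recommendations):

```yaml
# Sketch: my-values.yaml passed with `helm upgrade ... -f my-values.yaml`.
# All values below are illustrative.
falco:
  # Load only rules with priority "notice" or more severe.
  priority: notice
  # Token-bucket throttling: allow an initial burst of up to 1000
  # notifications, then refill at one notification per second.
  outputs:
    rate: 1
    max_burst: 1000
  # Allow output consumers to block the output channel for up to 5 seconds
  # (value is in milliseconds) before a timeout error is logged.
  output_timeout: 5000
```
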
21 changes: 7 additions & 14 deletions falco/values.yaml
@@ -491,24 +491,17 @@ falco:
# If it does, it is most likely happening due to the entire event flow being too slow,
# indicating that the server is under heavy load.
#
-# Lowering the number of items can prevent memory from steadily increasing until the OOM
-# killer stops the Falco process. We provide recovery actions to self-limit or self-kill
-# in order to handle this situation earlier, similar to how we expose the kernel buffer size
-# as a parameter. However, it will not address the root cause of the event pipe not keeping up.
-#
# `capacity`: the maximum number of items allowed in the queue is determined by this value.
# Setting the value to 0 (which is the default) is equivalent to keeping the queue unbounded.
-# In other words, when this configuration is set to 0, the number of allowed items is effectively
-# set to the largest possible long value, disabling this setting.
-#
-# `recovery`: strategy to follow when the queue becomes filled up. It applies only when the
-# queue is bounded and there is still available system memory. In the case of an unbounded
-# queue, if the available memory on the system is consumed, the Falco process would be
-# OOM killed. The value `exit` is the default, `continue` does nothing special and `empty`
-# empties the queue and then continues.
+# In other words, when this configuration is set to 0, the number of allowed items is
+# effectively set to the largest possible long value, disabling this setting.
+#
+# In the case of an unbounded queue, if the available memory on the system is consumed,
+# the Falco process would be OOM killed. When using this option and setting the capacity,
+# the current event would be dropped, and the event loop would continue. This behavior mirrors
+# kernel-side event drops when the buffer between kernel space and user space is full.
outputs_queue:
  capacity: 0
-  recovery: exit


#################
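
With the `recovery` strategy removed, bounding the output queue is now controlled by `capacity` alone: when a bounded queue fills up, the current event is dropped and the event loop continues, mirroring kernel-side drops. A minimal sketch of bounding the queue through the chart's values (the capacity figure is illustrative):

```yaml
# Sketch: bounding Falco's output queue via the chart values.
# 100000 is an arbitrary illustrative ceiling, not a recommendation;
# the default of 0 keeps the queue unbounded.
falco:
  outputs_queue:
    capacity: 100000
```
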
