Splunk Connect for Syslog is an open source packaged solution for getting data into Splunk. It is based on syslog-ng Open Source Edition (syslog-ng OSE) and transports data to Splunk via the Splunk HTTP Event Collector (HEC) rather than writing events to disk for collection by a Universal Forwarder.

Splunk Support: If you are an existing Splunk customer with access to the Support Portal, create a support ticket for the quickest resolution to any issues you experience. Here are some examples of when it may be appropriate to create a support ticket: - You experience an issue with the current version of SC4S, such as a feature gap or a documented feature that is not working as expected. - You have difficulty with the configuration of SC4S, either at the back end or with the out-of-box parsers or index configurations. - You experience performance issues and need help understanding the bottlenecks. - You have questions or issues with the SC4S documentation.

GitHub Issues: For all enhancement requests, please feel free to create GitHub issues. We prioritize and work on issues based on their priority and resource availability. You can help us by tagging the requests with the appropriate labels. Splunk developers are active in the external user group on a best-effort basis; please use a support case or GitHub issues to resolve your issues quickly. We welcome feedback and contributions from the community! Please see our contribution guidelines for more information on how to get involved.

Configuration and documentation are licensed subject to CC0. Code and scripts are licensed subject to BSD-2-Clause. Third Party: Axoflow image of syslog-ng License. Third Party: Syslog-NG (OSE) License.

Splunk welcomes contributions from the SC4S community, and your feedback and enhancements are appreciated. There's always code that can be clarified, functionality that can be extended, new data filters to develop, and documentation to refine. If you see something you think should be fixed or added, go for it!

Splunk Connect for Syslog is a community built and maintained product. Anyone with internet access can get a Splunk GitHub account and participate. As with any publicly available repository, care must be taken to never share private data via Issues, Pull Requests, or any other mechanism. Any data that is shared in the Splunk Connect for Syslog GitHub repository is made available to the entire community without limits. Members of the community and/or their employers (including Splunk) assume no responsibility or liability for any damages resulting from the sharing of private data via the Splunk GitHub. Any data samples shared in the Splunk GitHub repository must be free of private data. * Working locally, identify potentially sensitive field values in data samples (public IP address, URL, hostname, etc.) * Replace all potentially sensitive field values with synthetic values * Manually review data samples to re-confirm they are free of private data before sharing in the Splunk GitHub

When contributing to this repository, please first discuss the change you wish to make via a GitHub issue or Slack message with the owners of this repository. For a basic development environment, Docker and a bash shell are all that is required. For a more complete IDE experience see our wiki: [Setup PyCharm](https://github.com/splunk/splunk-connect-for-syslog/wiki/SC4S-Development-Setup-Using-PyCharm). Have ideas on improvements or found a problem?
While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue. Please report any issues or bugs you find through GitHub's issue tracker. If you are reporting a bug, please include the following details: We want to hear about your enhancements as well. Feel free to submit them as issues: Look through our issue tracker to find problems to fix! Feel free to comment and tag community members of this project with any questions or concerns. What is a "pull request"? It informs the project's core developers about the changes you want to review and merge. Once you submit a pull request, it enters a stage of code review where you and others can discuss its potential modifications and even add more commits to it later on. If you want to learn more, please consult this tutorial on how pull requests work in the GitHub Help Center. Here's an overview of how you can make a pull request against this project: There are two aspects of code review: giving and receiving. To make it easier for your PR to receive reviews, consider that the reviewers will need you to: Testing is the responsibility of all contributors. In general, we try to adhere to TDD, writing the test first. There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test. We could always use improvements to our documentation! Anyone can contribute to these docs, whether you're new to the project or you've been around a long time, and whether you self-identify as a developer, an end user, or someone who just can't stand seeing typos. What exactly is needed? To add commit messages to release notes, tag the message in the following format:
"},{"location":"#support","title":"Support","text":"
"},{"location":"#references","title":"References","text":"
"},{"location":"CONTRIBUTING/","title":"CONTRIBUTING","text":"
"},{"location":"CONTRIBUTING/#fixing-issues","title":"Fixing Issues","text":"
"},{"location":"CONTRIBUTING/#code-review","title":"Code Review","text":"git clone git@github.com:YOUR_GITHUB_USERNAME/splunk-connect-for-syslog.git\ncd splunk-connect-for-syslog\n
git checkout -b your-bugfix-branch-name develop
cd splunk-connect-for-syslog
./test-with-compose.sh
git commit -m "<your commit message>"
git push
"},{"location":"CONTRIBUTING/#testing","title":"Testing","text":"
"},{"location":"CONTRIBUTING/#release-notes","title":"Release Notes","text":"
[TYPE] <commit message>
[TYPE] can be among the following: FEATURE, FIX, DOC, TEST, CI, REVERT, FILTERADD, FILTERMOD
Sample commit:
git commit -m "[TEST] test-message"
"},{"location":"architecture/","title":"SC4S Architectural Considerations","text":"SC4S provides performant and reliable syslog data collection. When you are planning your configuration, review the following architectural considerations. These recommendations pertain to the Syslog protocol and age, and are not specific to Splunk Connect for Syslog.
"},{"location":"architecture/#the-syslog-protocol","title":"The syslog Protocol","text":"The syslog protocol design prioritizes speed and efficiency, which can occur at the expense of resiliency and reliability. User Data Protocol (UDP) provides the ability to \u201csend and forget\u201d events over the network without regard to or acknowledgment of receipt. Transport Layer Secuirty (TLS) and Secure Sockets Layer (SSL) protocols are also supported, though UDP prevails as the preferred syslog transport for most data centers.
Because of these tradeoffs, traditional methods to provide scale and resiliency do not necessarily transfer to syslog.
"},{"location":"architecture/#ip-protocol","title":"IP protocol","text":"By default, SC4S listens on ports using IPv4. IPv6 is also supported, see SC4S_IPV6_ENABLE
in source configuration options.
Since syslog is a "send and forget" protocol, it does not perform well when routed through substantial network infrastructure. This includes front-side load balancers and WAN. The most reliable way to collect syslog traffic is to provide for edge collection rather than centralized collection. If you centrally locate your syslog server, the UDP and (stateless) TCP traffic cannot adjust and data loss will occur.
"},{"location":"architecture/#syslog-data-collection-at-scale","title":"syslog Data Collection at Scale","text":"As a best practice, do not co-locate syslog-ng servers for horizontal scale and load balance to them with a front-side load balancer:
Attempting to load balance for scale can cause more data loss due to normal device operations and attendant buffer loss. A simple, robust single server or shared-IP cluster provides the best performance.
Front-side load balancing causes inadequate data distribution on the upstream side, leading to uneven data load on the indexers.
Load balancing for high availability does not work well for stateless, unacknowledged syslog traffic. More data is preserved when you use a simpler design such as vMotioned VMs. With syslog, the protocol itself is prone to loss, and syslog data collection can be made "mostly available" at best.
"},{"location":"architecture/#udp-vs-tcp","title":"UDP vs. TCP","text":"Run your syslog configuration on UDP rather than TCP.
The syslogd daemon optimally uses UDP for log forwarding to reduce overhead. This is because UDP's streaming method does not require the overhead of establishing a network session. UDP reduces network load on the network stream with no required receipt verification or window adjustment.
TCP uses acknowledgement signals (ACKs) to avoid data loss; however, loss can still occur when:
Use TCP if the syslog event is larger than the maximum size of the UDP packet on your network, which is typically the case for Web Proxy, DLP, and IDS sources. To mitigate the drawbacks of TCP you can use TLS over TCP:
SC4S is primarily controlled by environment variables. This topic describes the categories and variables you need to properly configure SC4S for your environment.
"},{"location":"configuration/#global-configuration-variables","title":"Global configuration variables","text":"Variable Values Description SC4S_USE_REVERSE_DNS yes or no (default) Use reverse DNS to identify hosts when HOST is not valid in the syslog header. SC4S_REVERSE_DNS_KEEP_FQDN yes or no (default) When enabled, SC4S will not extract the hostname from FQDN, and instead will pass the full domain name to the host. SC4S_CONTAINER_HOST string Variable that is passed to the container to identify the actual log host for container implementations.If the host value is not present in an event, and you require that a true hostname be attached to each event, SC4S provides an optional ability to perform a reverse IP to name lookup. If the variable SC4S_USE_REVERSE_DNS
is set to "yes", then SC4S first checks host.csv
and replaces the value of host
with the specified value that matches the incoming IP address. If no value is found in host.csv
, SC4S attempts a reverse DNS lookup against the configured nameserver. In this case, SC4S by default extracts only the hostname from FQDN (example.domain.com
-> example
). If SC4S_REVERSE_DNS_KEEP_FQDN
variable is set to "yes", the full domain name is assigned to the host field.
Note: Using the SC4S_USE_REVERSE_DNS
variable can have a significant impact on performance if the reverse DNS facility is not performant. Check this variable if you notice that events are indexed later than the actual timestamp in the event, for example, if you notice a latency between _indextime
and _time
.
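For example, a minimal env_file sketch that turns on the reverse lookup and keeps the full domain name (the values shown are illustrative):
#env_file (illustrative)
SC4S_USE_REVERSE_DNS=yes
SC4S_REVERSE_DNS_KEEP_FQDN=yes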
Many HTTP proxies are not provisioned with application traffic in mind. Ensure adequate capacity is available to avoid data loss and proxy outages. The following variables must be entered in lower case:
Variable Values Description http_proxy undefined Use libcurl format proxy string "http://username:password@proxy.server:port" https_proxy undefined Use libcurl format proxy string "http://username:password@proxy.server:port"
Configure your Splunk HEC destination
Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_CIPHER_SUITE comma separated list OpenSSL cipher suite list. SC4S_DEST_SPLUNK_HEC_<ID>_SSL_VERSION comma separated list OpenSSL version list. SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS numeric The number of destination workers (threads); the default value is 10 threads. You do not need to change this variable from the default unless your environment has a very high or low volume. Consult with the SC4S community for advice about configuring your settings for environments with very high or low volumes. SC4S_DEST_SPLUNK_INDEXED_FIELDS r_unixtime,facility,severity,container,loghost,destport,fromhostip,proto,none This is the list of SC4S indexed fields that will be included with each event in Splunk. The default is the entire list except "none". Two other indexed fields, sc4s_vendor_product and sc4s_syslog_format, also appear along with the fields selected and cannot be turned on or off individually. If you do not want any indexed fields, set the value to the single value of "none". When you set this variable, you must separate multiple entries with commas; do not include extra spaces.
This list maps to the following indexed fields that will appear in all Splunk events: facility: sc4s_syslog_facility, severity: sc4s_syslog_severity, container: sc4s_container, loghost: sc4s_loghost, dport: sc4s_destport, fromhostip: sc4s_fromhostip, proto: sc4s_proto.
The destination operating parameters outlined above should be individually controlled using the destination ID. For example, to set the number of workers for the default destination, use SC4S_DEST_SPLUNK_HEC_DEFAULT_WORKERS
. To configure workers for the alternate HEC destination d_hec_FOO
, use SC4S_DEST_SPLUNK_HEC_FOO_WORKERS
.
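As a sketch, the same pattern in env_file might look like the following; the worker counts and field list are illustrative values only, and FOO is the example alternate destination name used above:
#env_file (illustrative)
SC4S_DEST_SPLUNK_HEC_DEFAULT_WORKERS=10
SC4S_DEST_SPLUNK_HEC_FOO_WORKERS=10
SC4S_DEST_SPLUNK_INDEXED_FIELDS=r_unixtime,facility,severity,loghost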
Set the SC4S_DEFAULT_TIMEZONE
variable to a recognized "zone info" (Region/City) time zone format such as America/New_York
. Setting this value forces SC4S to use the specified timezone and honor its associated Daylight Savings rules for all events without a timezone offset in the header or message payload.
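For example, in env_file (the zone shown is only an illustration):
#env_file (illustrative)
SC4S_DEFAULT_TIMEZONE=America/New_York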
SC4S provides the ability to minimize the number of lost events if the connection to all the Splunk indexers is lost. This capability utilizes the disk buffering feature of Syslog-ng.
SC4S receives a response from the Splunk HTTP Event Collector (HEC) when a message is received successfully. If a confirmation message from the HEC endpoint is not received (or a "server busy" reply, such as a "503", is sent), the load balancer will try the next HEC endpoint in the pool. If all pool members are exhausted, for example, if there were a full network outage to the HEC endpoints, events will queue to the local disk buffer on the SC4S Linux host.
SC4S will continue attempting to send the failed events while it buffers all new incoming events to disk. If the disk space allocated to disk buffering fills up then SC4S will stop accepting new events and subsequent events will be lost.
Once SC4S gets confirmation that events are again being received by one or more indexers, events will then stream from the buffer using FIFO queueing.
The number of events in the disk buffer will decrease as long as the incoming event volume is less than the maximum that SC4S can handle with the disk buffer in the path. When all events have been emptied from the disk buffer, SC4S will resume streaming events directly to Splunk.
Disk buffers in SC4S are allocated per destination. Keep this in mind when using additional destinations that have disk buffering configured. By default, when you configure alternate HEC destinations, disk buffering is configured identically to that of the main HEC destination, unless overridden individually.
"},{"location":"configuration/#estimate-your-storage-allocation","title":"Estimate your storage allocation","text":"As an example, to protect against a full day of lost connectivity from SC4S to all your indexers at maximum throughput, the calculation would look like the following:
60,000 EPS * 86400 seconds * 800 bytes * 1.7 = 6.4 TB of storage
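The same arithmetic can be scripted when sizing for a different event rate or outage window. This sketch reproduces the calculation above; the 1.7 multiplier is the same overhead factor, and the result is expressed in binary terabytes, which is how the 6.4 figure above is reached:
# Disk buffer sizing sketch: events/sec x seconds of outage x avg bytes/event x overhead factor
awk 'BEGIN { printf "%.1f TB\n", 60000 * 86400 * 800 * 1.7 / (1024^4) }'   # prints ~6.4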
"},{"location":"configuration/#about-disk-buffering","title":"About disk buffering","text":"Note the following about disk buffering:
"Reliable" disk buffering offers little advantage over "normal" disk buffering, but has a significant performance penalty. For this reason, normal disk buffering is recommended.
Pay attention to the cumulative buffer requirements when allocating local disk space.
Disk buffer storage is configured using container volumes and is persistent between container restarts. Be sure to account for disk space requirements on the local SC4S host when you create the container volumes in your respective runtime environment. These volumes can grow significantly during an extended outage to the SC4S destination HEC endpoints. See the \u201cEstimate your storage allocation\u201d section.
When you change the disk buffering directory, the new directory must exist. Otherwise, syslog-ng will fail to start.
When you change the disk buffering directory, if buffering has previously occurred on that instance, a persist file may exist which will prevent syslog-ng from changing the directory.
Note: The buffer options apply to each worker rather than the entire destination.
"},{"location":"configuration/#archive-file-configuration","title":"Archive File Configuration","text":"This feature is designed to support compliance or diode mode archival of all messages. The files are stored in a folder structure at the mount point using the pattern shown in the table below, depending on the value of the SC4S_GLOBAL_ARCHIVE_MODE
variable. Events for both modes are formatted using syslog-ng\u2019s EWMM template.
<archive mount>/${.splunk.sourcetype}/${HOST}/$YEAR-$MONTH-$DAY-archive.log
SC4S_GLOBAL_ARCHIVE_MODE diode <archive mount>/${YEAR}/${MONTH}/${DAY}/${fields.sc4s_vendor_product}_${YEAR}${MONTH}${DAY}${HOUR}${MIN}.log\"
Use the following variables to select global archiving or per-source archiving. SC4S does not prune the files that are created, therefore an administrator must provide a means of log rotation to prune files and move them to an archival system to avoid exhausting disk space.
Variable Values Description SC4S_ARCHIVE_GLOBAL yes or undefined Enable archiving of all vendor_products. SC4S_DEST_<VENDOR_PRODUCT>_ARCHIVE yes(default) or undefined Enables selective archiving by vendor product."},{"location":"configuration/#syslog-source-configuration","title":"Syslog Source Configuration","text":"Variable Values/Default Description SC4S_SOURCE_TLS_ENABLE yes or no(default) Enable TLS globally. Be sure to configure the certificate as shown below. SC4S_LISTEN_DEFAULT_TLS_PORT undefined or 6514 Enable a TLS listener on port 6514. SC4S_LISTEN_DEFAULT_RFC6425_PORT undefined or 5425 Enable a TLS listener on port 5425. SC4S_SOURCE_TLS_OPTIONSno-sslv2
Comma-separated list of the following options: no-sslv2, no-sslv3, no-tlsv1, no-tlsv11, no-tlsv12, none
. See syslog-ng docs for the latest list and default values. SC4S_SOURCE_TLS_CIPHER_SUITE See openssl Colon-delimited list of ciphers to support, for example, ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384
. See openssl for the latest list and defaults. SC4S_SOURCE_TCP_MAX_CONNECTIONS 2000 Maximum number of TCP connections. SC4S_SOURCE_UDP_IW_USE yes or no(default) Determine whether to change the initial Window size for UDP. SC4S_SOURCE_UDP_FETCH_LIMIT 1000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_IW_SIZE 250000 Initial Window size. SC4S_SOURCE_TCP_IW_SIZE 20000000 Initial Window size. SC4S_SOURCE_TCP_FETCH_LIMIT 2000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TCP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TLS_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5426_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC6587_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5425_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_LISTEN_UDP_SOCKETS 4 Number of kernel sockets per active UDP port, which configures multi-threading of the UDP input buffer in the kernel to prevent packet loss. Total UDP input buffer is the multiple of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5426_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC6587_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5425_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_STORE_RAWMSG undefined or \u201cno\u201d Store unprocessed \u201con the wire\u201d raw message in the RAWMSG macro for use with the \u201cfallback\u201d sourcetype. Do not set this in production, substantial memory and disk overhead will result. Use this only for log path and filter development. SC4S_IPV6_ENABLE yes or no(default) Enable dual-stack IPv6 listeners and health checks."},{"location":"configuration/#configure-your-syslog-source-tls-certificate","title":"Configure your syslog source TLS certificate","text":"/opt/sc4s/tls
, then place the server private key at /opt/sc4s/tls/server.key and the server certificate at /opt/sc4s/tls/server.pem, and confirm that SC4S_SOURCE_TLS_ENABLE=yes exists in /opt/sc4s/env_file. Additional certificate authorities may be trusted by appending each PEM formatted certificate to /opt/sc4s/tls/trusted.pem
.
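Once the key and certificate are in place, a minimal env_file sketch to enable the TLS listener looks like this; 6514 is the port named in the SC4S_LISTEN_DEFAULT_TLS_PORT row above:
#env_file (illustrative)
SC4S_SOURCE_TLS_ENABLE=yes
SC4S_LISTEN_DEFAULT_TLS_PORT=6514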
Set Splunk metadata before the data arrives in Splunk and before any add-on processing occurs. The filters apply the index, source, sourcetype, host, and timestamp metadata automatically by individual data source. Values for this metadata, including a recommended index and output format, are included with all "out-of-the-box" log paths included with SC4S and are chosen to properly interface with the corresponding add-on in Splunk. You must ensure all recommended indexes accept this data if the defaults are not changed.
To accommodate the override of default values, each log path consults an internal lookup file that maps Splunk metadata to the specific data source being processed. This file contains the defaults that are used by SC4S to set the appropriate Splunk metadata, index
, host
, source
, and sourcetype
, for each data source. This file is not directly available to the administrator, but a copy of the file is deposited in the local mounted directory for reference, /opt/sc4s/local/context/splunk_metadata.csv.example
by default. This copy is provided solely for reference. To add to the list or to override default entries, create an override file without the example
extension (for example /opt/sc4s/local/context/splunk_metadata.csv
) and modify it according to the instructions below.
splunk_metadata.csv
is a CSV file containing a "key" that is referenced in the log path for each data source. These keys are documented in the individual source files in this section, and let you override Splunk metadata.
The following is example line from a typical splunk_metadata.csv
override file:
juniper_netscreen,index,ns_index\n
The columns in this file are key
, metadata
, and value
. To make a change using the override file, consult the example
file (or the source documentation) for the proper key and modify and add rows in the table, specifying one or more of the following metadata/value
pairs for a given key
:
key
which refers to the vendor and product name of the data source, using the vendor_product
convention. For overrides, these keys are listed in the example
file. For new custom sources, be sure to choose a key that accurately reflects the vendor and product being configured and that matches the log path.index
to specify an alternate value
for index.source
to specify an alternate value
for source.host
to specify an alternate value
for host.sourcetype
to specify an alternate value
for sourcetype. Only change this if no upstream TA used, or a custom TA is being used.sc4s_template
to specify an alternate value
for the syslog-ng template that will be used to format the event that is indexed by Splunk. Changing this will affect the upstream TA. The template choices are documented here.In our example above, the juniper_netscreen
key references a new index used for that data source called ns_index
.
For most deployments the index should be the only change needed; other default metadata should almost never be overridden.
The splunk_metadata.csv
file is a true override file and the entire example
file should not be copied over to the override. The override file is usually just one or two lines, unless an entire index category (for example netfw
) needs to be overridden.
When building a custom SC4S log path, append the splunk_metadata.csv
file with an appropriate new key and default for the index. The new key will not exist in the internal lookup or in the example
file. Care should be taken during log path design to choose appropriate index, sourcetype and template defaults so that admins are not compelled to override them. If the custom log path is later added to the list of SC4S-supported sources, this addendum can be removed.
The splunk_metadata.csv.example
file is provided for reference only and is not used directly by SC4S. It is an exact copy of the internal file, and can therefore change from release to release. Be sure to check the example file to make sure the keys for any overrides map correctly to the ones in the example file.
In some cases you can provide the same overrides based on PCI scope, geography, or other criteria. Use a file that uniquely identifies these source exceptions via syslog-ng filters, which map to an associated lookup of alternate indexes, sources, or other metadata. Indexed fields can also be added to further classify the data.
The conf
and csv
files referenced below are populated into the /opt/sc4s/local/context
directory when SC4S is run for the first time, in a similar fashion to splunk_metadata.csv
. After this first-time population of the files takes place, you can edit them and restart SC4S for the changes to take effect. To get started:
Edit the file compliance_meta_by_source.conf
to supply uniquely named filters to identify events subject to override.
compliance_meta_by_source.csv
to supply appropriate fields and values.The csv
file provides three columns: filter name
, field name
, and value
. Filter names in the conf
file must match one or more corresponding filter name
rows in the csv
file. The field name
column obeys the following convention:
.splunk.index
to specify an alternate value
for index..splunk.source
to specify an alternate value
for source..splunk.sourcetype
to specify an alternate value
for sourcetype (only changing this if a downstream TA is present, or if a custom TA is present.)fields.fieldname
where fieldname
will become the name of an indexed field sent to Splunk with the supplied value
. This file construct is best shown by an example. Here is an example of a compliance_meta_by_source.conf
file and its corresponding compliance_meta_by_source.csv
file:
filter f_test_test {\n host(\"something-*\" type(glob)) or\n netmask(192.168.100.1/24)\n};\n
f_test_test,.splunk.index,\"pciindex\"\nf_test_test,fields.compliance,\"pci\"\n
Ensure that the filter names in the conf
file match one or more rows in the csv
file. Any incoming message with a hostname starting with something-
or arriving from a netmask of 192.168.100.1/24
will match the f_test_test
filter, and the corresponding entries in the csv
file will be checked for overrides. The new index is pciindex
, and an indexed field named compliance
will be sent to Splunk with its value set to pci
. To add additional overrides, add another filter foo_bar {};
stanza to the conf
file, then add appropriate entries to the csv
file that match the filter names to the overrides.
Take care that your syntax is correct; for more information on proper syslog-ng syntax, see the syslog-ng documentation. A syntax error will cause the runtime process to abort in the "preflight" phase at startup.
To update your changes, restart SC4S.
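With the systemd-based runtimes described elsewhere in this documentation, that restart is typically:
sudo systemctl restart sc4s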
"},{"location":"configuration/#drop-all-data-by-ip-or-subnet-deprecated","title":"Drop all data by IP or subnet (deprecated)","text":"Using vendor_product_by_source
to null queue is now a deprecated task. See the supported method for dropping data in Filtering events from output.
Splunk Connect for Syslog uses the syslog-ng template mechanism to format the output event that will be sent to Splunk. These templates can format the messages in a number of ways, including straight text and JSON, and can utilize the many syslog-ng "macros" fields to specify what gets placed in the event delivered to the destination. The following table is a list of the templates used in SC4S, which can be used for metadata override. New templates can also be added by the administrator in the "local" section for local destinations; pay careful attention to the syntax as the templates are "live" syslog-ng config code.
Template name Template contents Notes t_standard ${DATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template for most RFC3164 (standard syslog) traffic. t_msg_only ${MSGONLY} syslog-ng $MSG is sent, no headers (host, timestamp, etc.). t_msg_trim $(strip $MSGONLY) Similar to syslog-ng $MSG with whitespace stripped. t_everything ${ISODATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template with ISO date format. t_hdr_msg ${MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_legacy_hdr_msg ${LEGACY_MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_hdr_sdata_msg ${MSGHDR}${MSGID} ${SDATA} ${MESSAGE} Useful for non-compliant syslog messages. t_program_msg ${PROGRAM}[${PID}]: ${MESSAGE} Useful for non-compliant syslog messages. t_program_nopid_msg ${PROGRAM}: ${MESSAGE} Useful for non-compliant syslog messages. t_JSON_3164 $(format-json --scope rfc3164 --pair PRI="<$PRI>" --key LEGACY_MSGHDR --exclude FACILITY --exclude PRIORITY) JSON output of all RFC3164-based syslog-ng macros. Useful with the "fallback" sourcetype to aid in new filter development. t_JSON_5424 $(format-json --scope rfc5424 --pair PRI="<$PRI>" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY) JSON output of all RFC5424-based syslog-ng macros; for use with RFC5424-compliant traffic. t_JSON_5424_SDATA $(format-json --scope rfc5424 --pair PRI="<$PRI>" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY --exclude MESSAGE) JSON output of all RFC5424-based syslog-ng macros except for MESSAGE; for use with RFC5424-compliant traffic.
About eBPF
eBPF helps mitigate congestion of a single heavy data stream by utilizing multithreading and is used with SC4S_SOURCE_LISTEN_UDP_SOCKETS
. To leverage this feature you need your host OS to be able to use eBPF and must run Docker or Podman in privileged mode.
SC4S_SOURCE_LISTEN_UDP_SOCKETS
. To run Docker or Podman in privileged mode, edit the service file /lib/systemd/system/sc4s.service
to add the --privileged
flag to the Docker or Podman run command:
ExecStart=/usr/bin/podman run \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --privileged \\\n --env-file=/opt/sc4s/env_file \\\n --health-cmd=\"/healthcheck.sh\" \\\n --health-interval=10s --health-retries=6 --health-timeout=6s \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\n
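After editing the unit file, reload systemd and restart the service so the new flag takes effect; this assumes the sc4s service name used throughout these docs:
sudo systemctl daemon-reload
sudo systemctl restart sc4s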
"},{"location":"configuration/#change-your-status-port","title":"Change your status port","text":"Use SC4S_LISTEN_STATUS_PORT
to change the \u201cstatus\u201d port used by the internal health check process. The default value is 8080
.
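For example, to move the health check port off the default (the 8090 value here is only an illustration):
#env_file (illustrative)
SC4S_LISTEN_STATUS_PORT=8090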
SC4S parsers perform operations that would normally be performed during index time, including linebreaking, source and sourcetype setting, and timestamping. You can write your own parser if the parsers available in the SC4S package do not meet your needs.
"},{"location":"create-parser/#before-you-start","title":"Before you start","text":"Prepare your testing environment. With Python>=3.9:
pip3 install poetry\npoetry install\n
Prepare your testing command:
poetry run pytest -v --tb=long \\\n--splunk_type=external \\\n--splunk_hec_token=<HEC_TOKEN> \\\n--splunk_host=<HEC_ENDPOINT> \\\n--sc4s_host=<SC4S_IP> \\\n--junitxml=test-results/test.xml \\\n-n <NUMBER_OF_JOBS> \\\n<TEST>\n
Create a new branch in the repository where you will apply your changes.
If you already have a raw log message, you can skip this step. Otherwise, you need to extract one to have something to work with. You can do this in multiple ways; this section describes three methods.
"},{"location":"create-parser/#procure-a-raw-log-message-using-tcpdump","title":"Procure a raw log message usingtcpdump
","text":"You can use the tcpdump
command to get incoming raw messages on a given port of your server:
tcpdump -n -s 0 -S -i any -v port 8088

tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
09:54:26.051644 IP (tos 0x0, ttl 64, id 29465, offset 0, flags [DF], proto UDP (17), length 466)
10.202.22.239.41151 > 10.202.33.242.syslog: SYSLOG, length: 438
Facility local0 (16), Severity info (6)
Msg: 2022-04-28T16:16:15.466731-04:00 NTNX-21SM6M510425-B-CVM audispd[32075]: node=ntnx-21sm6m510425-b-cvm type=SYSCALL msg=audit(1651176975.464:2828209): arch=c000003e syscall=2 success=yes exit=6 a0=7f2955ac932e a1=2 a2=3e8 a3=3 items=1 ppid=29680 pid=4684 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=964698 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"\0x0a
"},{"location":"create-parser/#procure-a-raw-log-message-using-wireshark","title":"Procure a raw log message using Wireshark","text":"Once you get your stream of messages, copy one of them. Note that in UDP there are not usually any message separators. You can also read the logs using Wireshark from the .pcap file. From Wireshark go to Statistics > Conversations, then click on Follow Stream
:
See Obtaining "On-the-wire" Raw Events.
"},{"location":"create-parser/#create-a-unit-test","title":"Create a unit test","text":"To create a unit test, use the existing test case that is most similar to your use case. The naming convention is test_vendor_product.py
.
<14>1 2022-03-30T11:17:11.900862-04:00 host - - - - Carbon Black App Control event: text=\"File 'c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll' [c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363] would have blocked if the rule was not in Report Only mode.\" type=\"Policy Enforcement\" subtype=\"Execution block (unapproved file)\" hostname=\"CORP\\USER\" username=\"NT AUTHORITY\\SYSTEM\" date=\"3/30/2022 3:16:40 PM\" ip_address=\"10.0.0.3\" process=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\microsoft.tri.sensor.updater.exe\" file_path=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll\" file_name=\"packet.dll\" file_hash=\"c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363\" policy=\"High Enforcement - Domain Controllers\" rule_name=\"Report read-only memory map operations on unapproved executables by .NET applications\" process_key=\"00000433-0000-23d8-01d8-44491b26f203\" server_version=\"8.5.4.3\" file_trust=\"-2\" file_threat=\"-2\" process_trust=\"-2\" process_threat=\"-2\" prevalence=\"50\"
Now run the test, for example:
poetry run pytest -v --tb=long \\\n--splunk_type=external \\\n--splunk_hec_token=<HEC_TOKEN> \\\n--splunk_host=<HEC_ENDPOINT> \\\n--sc4s_host=<SC4S_IP> \\\n--junitxml=test-results/test.xml \\\n-n <NUMBER_OF_JOBS> \\\ntest/test_vendor_product.py\n
The parsed log should appear in Splunk:
In this example the message is being parsed as a generic nix:syslog
sourcetype. This means that the message format complied with RFC standards, and SC4S could correctly identify the format fields in the message.
To assign your messages to the proper index and sourcetype you will need to create a parser. Your parser must be declared in package/etc/conf.d/conflib
. The naming convention is app-type-vendor_product.conf
.
The most basic configuration will forward raw log data with correct metadata, for example:
block parser app-syslog-vmware_cb-protect() {\n channel {\n rewrite {\n r_set_splunk_dest_default(\n index(\"epintel\")\n sourcetype('vmware:cb:protect')\n vendor(\"vmware\")\n product(\"cb-protect\")\n template(\"t_msg_only\")\n );\n };\n };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n filter {\n message('Carbon Black App Control event: ' type(string) flags(prefix));\n }; \n parser { app-syslog-vmware_cb-protect(); };\n};\n
All messages that start with the string Carbon Black App Control event:
will now be routed to the proper index and assigned the given sourcetype: For more info about using message filtering go to sources documentation. To apply more transformations, add the parser:
block parser app-syslog-vmware_cb-protect() {\n channel {\n rewrite {\n r_set_splunk_dest_default(\n index(\"epintel\")\n sourcetype('vmware:cb:protect')\n vendor(\"vmware\")\n product(\"cb-protect\")\n template(\"t_kv_values\")\n );\n };\n\n parser {\n csv-parser(delimiters(chars('') strings(': '))\n columns('header', 'message')\n prefix('.tmp.')\n flags(greedy, drop-invalid));\n kv-parser(\n prefix(\".values.\")\n pair-separator(\" \")\n template('${.tmp.message}')\n );\n };\n };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n filter {\n message('Carbon Black App Control event: ' type(string) flags(prefix));\n }; \n parser { app-syslog-vmware_cb-protect(); };\n};\n
This example extracts all fields that are nested in the raw log message first by using csv-parser
to split Carbon Black App Control event
and the rest of message as a two separate fields named header
and message
. kv-parser
will extract all key-value pairs in the message
field. To test your parser, run a previously created test case. If you need more debugging, use docker ps
to see your running containers and docker logs
to see what\u2019s happening to the parsed message.
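For example, assuming the container runs under the name SC4S as in the service file shown earlier (adjust the name, or substitute podman for docker, to match your runtime):
docker ps
docker logs SC4S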
Commit your changes and open a pull request.
The SC4S Metrics and Events dashboard lets you monitor metrics and event flows for all SC4S instances sending data to a chosen Splunk platform.
"},{"location":"dashboard/#functionalities","title":"Functionalities","text":""},{"location":"dashboard/#overview-metrics","title":"Overview metrics","text":"The SC4S and Metrics Overview dashboard displays the cumulative sum of received and dropped messages for all SC4S instances in a chosen interval for the specified time range. By default the interval is set to 30 seconds and the time range is set to 15 minutes.
The Received Messages panel can be used as a heartbeat metric. A healthy SC4S instance should send at least one message per 30 seconds. This metrics message is included in the count.
Keep the Dropped Messages panel at a constant level of 0. If SC4S drops messages due to filters, slow performance, or for any other reason, the number of dropped messages will persist until the instance restarts. The Dropped Messages panel does not include potential UDP messages dropped from the port buffer, which SC4S is not able to track.
"},{"location":"dashboard/#single-instance-metrics","title":"Single instance metrics","text":"You can display the instance name and SC4S version for a specific SC4S instance (available in versions 3.16.0 and later).
This dashboard also displays a timechart of deltas for received, queued, and dropped messages for a specific SC4S instance.
"},{"location":"dashboard/#single-instance-events","title":"Single instance events","text":"You can analyze traffic processed by an SC4S instance by visualizing the following events data:
You can configure Splunk Connect for Syslog to use any destination available in syslog-ng OSE. Helpers manage configuration for the three most common destination needs:
HTTP traffic compression helps to reduce network bandwidth usage when sending to a HEC destination. SC4S currently supports gzip for compressing transmitted traffic. Using the gzip compression algorithm can result in lower CPU load and increased utilization of RAM. The algorithm may also cause a decrease in performance by 6% to 7%. Compression affects the content but does not affect the HTTP headers. Enable batch packet processing to make the solution efficient, as this allows compression of a large number of logs at once.
Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_HTTP_COMPRESSION yes or no(default) Compress outgoing HTTP traffic using the gzip method."},{"location":"destinations/#syslog-standard-destination","title":"Syslog standard destination","text":"The use of \u201csyslog\u201d as a network protocol has been defined in Internet Engineering Task Force standards RFC5424, RFC5425, and RFC6587.
Note: SC4S sending messages to a syslog destination behaves like a relay. This means overwriting some original information, for example the original source IP.
"},{"location":"destinations/#configuration-options_1","title":"Configuration options","text":"Variable Values Description SC4S_DEST_SYSLOG_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_SYSLOG_<ID>_PORT number 601 is the default when framed, 514 is the default when not framed. SC4S_DEST_SYSLOG_<ID>_IETF yes/no, the default value is yes. Use IETF Standard frames. SC4S_DEST_SYSLOG_<ID>_TRANSPORT tcp,udp,tls. The default value is tcp. SC4S_DEST_SYSLOG_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-rfc5424-with-frames","title":"Send RFC5424 with frames","text":"In this example, SC4S will send Cisco ASA events as RFC5424 syslog to a third party system.
The message format will be similar to: 123 <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID
.
The destination name is taken from the environment variable, each destination must have a unique name. This value should be short and meaningful.
#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n }; \n};\n
"},{"location":"destinations/#send-rfc5424-without-frames","title":"Send RFC5424 without frames","text":"In this example SC4S will send Cisco ASA events to a third party system without frames.
The message format will be similar to: <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID
.
#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n# set to #yes for ietf frames\nSC4S_DEST_SYSLOG_MYSYS_IETF=no \n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n }; \n};\n
"},{"location":"destinations/#legacy-bsd","title":"Legacy BSD","text":"In many cases, the actual configuration required is Legacy BSD syslog which is not a standard and was documented in RFC3164.
Variable Values Description SC4S_DEST_BSD_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_BSD_<ID>_PORT number, the default is 514. SC4S_DEST_BSD_<ID>_TRANSPORT tcp,udp,tls, the default is tcp. SC4S_DEST_BSD_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-legacy-bsd","title":"Send legacy BSD","text":"The message format will be similar to: <134>Feb 2 13:43:05.000 horse-ammonia CheckPoint[26203]
.
#env_file\nSC4S_DEST_BSD_MYSYS_HOST=172.17.0.1\nSC4S_DEST_BSD_MYSYS_PORT=514\nSC4S_DEST_BSD_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_bsd_mysys.conf\napplication sc4s-lp-cisco_asa_d_bsd_mysys[sc4s-lp-dest-select-d_bsd_mysys] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n }; \n};\n
"},{"location":"destinations/#multiple-destinations","title":"Multiple destinations","text":"SC4S can send data to multiple destinations. In the original setup the default destination accepts all events. This ensures that at least one destination receives the event, helping to avoid data loss due to misconfiguration. The provided examples demonstrate possible options for configuring additional HEC destinations.
"},{"location":"destinations/#send-all-events-to-the-additional-destination","title":"Send all events to the additional destination","text":"After adding this example to your basic configuration SC4S will send all events both to SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_OTHER_URL
.
#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=GLOBAL\n
"},{"location":"destinations/#send-only-selected-events-to-the-additional-destination","title":"Send only selected events to the additional destination","text":"After adding this example to your basic configuration SC4S will send Cisco IOS events to SC4S_DEST_SPLUNK_HEC_OTHER_URL
.
#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=SELECT\n
application sc4s-lp-cisco_ios_dest_fmt_other[sc4s-lp-dest-select-d_hec_fmt_other] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n };\n};\n
"},{"location":"destinations/#advanced-topic-configure-filtered-alternate-destinations","title":"Advanced topic: Configure filtered alternate destinations","text":"You may require more granularity for a specific data source. For example, you may want to send all Cisco ASA debug traffic to Cisco Prime for analysis. To accommodate this, filtered alternate destinations let you supply a filter to redirect a portion of a source\u2019s traffic to a list of alternate destinations and, optionally, prevent matching events from being sent to Splunk. You configure this using environment variables:
Variable Values Description SC4S_DEST_<VENDOR_PRODUCT>_ALT_FILTER syslog-ng filter Filter to determine which events are sent to alternate destinations. SC4S_DEST_<VENDOR_PRODUCT>_FILTERED_ALTERNATES Comma or space-separated list of syslog-ng destinations. Send filtered events to alternate syslog-ng destinations using the VENDOR_PRODUCT syntax, for example,SC4S_DEST_CISCO_ASA_FILTERED_ALTERNATES
. This is an advanced capability, and filters and destinations using proper syslog-ng syntax must be constructed before using this functionality.
The regular destinations, including the primary HEC destination or configured archive destination, for example d_hec
or d_archive
, are not included for events matching the configured alternate destination filter. If an event matches the filter, the list of filtered alternate destinations completely replaces any mainline destinations, including defaults and global or source-based standard alternate destinations. Include them in the filtered destination list if desired.
Since the filtered alternate destinations completely replace the mainline destinations, including HEC to Splunk, a filter that matches all traffic can be used with a destination list that does not include the standard HEC destination to effectively turn off HEC for a given data source.
"},{"location":"edge_processor/","title":"Edge Processor integration guide (Experimental)","text":""},{"location":"edge_processor/#intro","title":"Intro","text":"You can use the Edge Processor
to:
SPL2
.SPL2
.AWS S3
or Apache Kafka
.stateDiagram\n direction LR\n\n SC4S: SC4S\n EP: Edge Processor\n Dest: Another destination\n Device: Your device\n S3: AWS S3\n Instance: Instance\n Pipeline: Pipeline with SPL2\n\n Device --> SC4S: Syslog protocol\n SC4S --> EP: HEC\n state EP {\n direction LR\n Instance --> Pipeline\n }\n EP --> Splunk\n EP --> S3\n EP --> Dest
"},{"location":"edge_processor/#set-up-the-edge-processor-for-sc4s","title":"Set up the Edge Processor for SC4S","text":"SC4S using same protocol for communication with Splunk and Edge Processor. For that reason setup process will be very similar, but it have some differences.
Set up on Docker / PodmanSet up on Kubernetesenv_file
, configure the HEC URL as IP of managed instance, that you registered on Edge Processor.SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
values.yaml
HEC URL using the IP of managed instance, that you registered on Edge Processor.splunk:\n hec_url: \"http://x.x.x.x:8088\"\n hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n hec_verify_tls: \"no\"\n
"},{"location":"edge_processor/#mtls-encryption","title":"mTLS encryption","text":"Before setup, generate mTLS certificates. Server mTLS certificates should be uploaded to Edge Processor
and client certifcates should be used with SC4S
.
Rename the certificate files. SC4S requires the following names:
key.pem
- client certificate keycert.pem
- client certificateca_cert.pem
- certificate authoritySC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://x.x.x.x:8088
.key.pem
, cert.pem
, ca_cert.pem
) to /opt/sc4s/tls/hec
./opt/sc4s/tls/hec
to /etc/syslog-ng/tls/hec
using docker/podman volumes.SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_MOUNT=/etc/syslog-ng/tls/hec
.values.yaml
file:splunk:\n hec_url: \"https://x.x.x.x:8088\"\n hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n hec_tls: \"hec-tls-secret\"\n
charts/splunk-connect-for-syslog/secrets.yaml
file:hec_tls:\n secret: \"hec-tls-secret\"\n value:\n key: |\n -----BEGIN PRIVATE KEY-----\n Exmaple key\n -----END PRIVATE KEY-----\n cert: |\n -----BEGIN CERTIFICATE-----\n Exmaple cert\n -----END CERTIFICATE-----\n ca: |\n -----BEGIN CERTIFICATE-----\n Example ca\n -----END CERTIFICATE-----\n
secrets.yaml
:ansible-vault encrypt charts/splunk-connect-for-syslog/secrets.yaml\n
Add the IP address for your cluster nodes to the inventory file ansible/inventory/inventory_microk8s_ha.yaml
.
Deploy the Ansible playbook:
ansible-playbook -i ansible/inventory/inventory_microk8s_ha.yaml ansible/playbooks/microk8s_ha.yml --ask-vault-pass\n
"},{"location":"edge_processor/#scaling-edge-processor","title":"Scaling Edge Processor","text":"To scale you can distribute traffic between Edge Processor managed instances. To set this up, update the HEC URL with a comma-separated list of URLs for your managed instances.
Set up on Docker/PodmanSet up on KubernetesUpdate HEC URL in env_file
:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\n
Update HEC URL in values.yaml
:
splunk:\n hec_url: \"http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\"\n
"},{"location":"experiments/","title":"Current experimental features","text":""},{"location":"experiments/#3120","title":"> 3.12.0","text":"SC4S_USE_NAME_CACHE=yes
supports IPv6.
eBPF is a feature that leverages Linux kernel infrastructure to evenly distribute the load, especially in cases when there is a huge stream of messages incoming from a single appliance. To use the eBPF feature, you must have a host machine with and OS that supports eBPF. eBPF should be used only in cases when other ways of SC4S tuning fail. See the instruction for configuration details. To learn more visit this blog post.
"},{"location":"experiments/#sc4s-lite","title":"SC4S Lite","text":"In the new 3.0.0 update, we\u2019ve introduced SC4S Lite. SC4S Lite is designed for those who prefer speed and custom filters over the pre-set ones that come with the standard SC4S. It\u2019s similar to our default version, without the pre-defined filters and complex app_parser topics. More information can be found at dedicated page.
"},{"location":"experiments/#2130","title":"> 2.13.0","text":"env_file
, SC4S sets SC4S_USE_NAME_CACHE=yes
to enable caching of the last valid host string, replaces nill, null, or IPv4 with the last good value, and stores this information in the hostip.sqlite
file. hostip.sqlite
file, set SC4S_CLEAR_NAME_CACHE=yes
flag in env_file
. This action will automatically delete the hostip.sqlite file
when SC4S restarts.env_file
set SC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG=yes
to enable additional post processing and merge multiline vmware events. You should also enable SC4S_USE_NAME_CACHE=yes
, to accomodate event that have malformed or missing host names.env_file
set SC4S_USE_VPS_CACHE=yes
to enable automatic configuration of vendor_product
by source where possible. This feature caches vendor
and product
fields to determine of the best values for generic Linux events. For example, without this feature the \u201cvendor product by host\u201d app parser must be configured to identify ESX hosts so that ESX SSHD events can be routed using the meta key vmware_vsphere_nix_syslog
. With this feature enabled a common event such as an event containing \u201cprogram=vpxa\u201d will cache this value. SC4S_SOURCE_PROXYCONNECT=yes
for TCP and TLS connection expect \u201cPROXY CONNECT\u201d to provide the original client IP in SNAT load balancing.Q: The universal forwarder with file-based architecture has been the documented Splunk best practice for a long time. Why should I switch to an HTTP Event Collector (HEC) based architecture?
A:
Using HEC to stream events directly to the indexers provides superior load balancing, and has shown to produce more even data distribution across the indexers. This even distribution results in significantly enhanced search performance. This benefit is especially valuable in large Splunk deployments.
The HEC architecture designed in SC4S is also easier to administer with newer versions of syslog-ng. There are fewer opportunities for configuration errors, resulting in higher overall performance.
HEC, and in particular the \u201c/event\u201d endpoint, offers the opportunity for a far richer data stream to Splunk, with lower resource utilization at ingest time. This rich data stream can be taken advantage of in next-generation add-ons.
Q: Is the Splunk HTTP Event Collector (HEC) as reliable as the Splunk universal forwarder?
A: HEC utilizes standard HTTP mechanisms to confirm that the endpoint is responsive before sending data. The HEC architecture allows you to use an industry standard load balancer between SC4S and the indexer or the included load balancing capability built into SC4S itself.
Q: What if my team doesn\u2019t know how to manage containers?
A: Using a runtime like Podman to deploy and manage SC4S containers is exceptionally easy even for those with no prior \u201ccontainer experience\u201d. Our application of container technology behaves much like a packaging system. The interaction uses \u201csystemctl\u201d commands a Linux admin would use for other common administration activities. The best approach is to try it out in a lab to see what the experience is like for yourself!
Q: Can my team use SC4S with Windows?
A: You can now run Docker on Windows! Microsoft has introduced public preview technology for Linux containers on Windows. Alternatively, a minimal Centos/Ubuntu Linux VM running on Windows hyper-v is a reliable production-grade choice.
Q: My company has the traditional universal forwarder and files-based syslog architecture deployed and running, should I rip and replace a working installation with SC4S?
A: Generally speaking, if a deployment is working and you are happy with it, it\u2019s best to leave it as is until there is need for major deployment changes, such as scaling your configuration. The search performance improvements from better data distribution is one benefit, so if Splunk users have complained about search performance or you are curious about the possible performance gains, we recommend doing an analysis of the data distribution across the indexers.
Q: What is the best way to migrate to SC4S from an existing syslog architecture?
A: When exploring migration to SC4S we strongly recommend that you experiment in a lab prior to deployment to production. There are a couple of approaches to consider:
Q: How can SC4S be deployed to provide high availability?
A: The syslog protocol was not designed with HA as a goal, so configuration can be challenging. See "Performant AND Reliable Syslog UDP is best" for an excellent overview of this topic.
The syslog protocol limits the extent to which you can make any syslog collection architecture HA; at best it can be made "mostly available". To do this, keep it simple and use OS clustering (shared IP) or even just VMs with vMotion. This simple architecture will encounter far less data loss over time than more complicated schemes. Another possible option is a containerization HA scheme for SC4S (centered around MicroK8s), which takes some of the administrative burden of clustering away but still functions as OS clustering under the hood.
Q: I\u2019m worried about data loss if SC4S goes down. Could I feed syslog to redundant SC4S servers to provide HA, without creating duplicate events in Splunk?
A: In many system design decisions there is some level of compromise. Any network protocol that doesn't have an application-level ACK will lose data, because speed is selected over reliability in the design. This is the case with syslog. Use a clustered IP with an active/passive node for a level of resilience while keeping complexity to a minimum. It would be possible to implement a far more complex solution using an additional intermediary technology such as Kafka; however, the costs may outweigh the real-world benefits.
Q: If the XL reference HW can handle just under 1 terabyte per day, how can SC4S be scaled to handle large deployments of many terabytes per day?
A: SC4S is a distributed architecture. SC4S instances should be deployed in the same VLAN as the source devices. This means that each SC4S instance will only see a subset of the total syslog traffic in a large deployment. Even in a deployment of 100 terabytes or greater, the individual SC4S instances will see loads in gigabytes per day rather than terabytes per day.
Q: SC4S is being blocked by fapolicyd, how do I fix that?
A: Create a rule that allows running SC4S in the fapolicyd configuration:
1. Create the rule file /etc/fapolicyd/rules.d/15-sc4s.rules with the following content: allow perm=open exe=/ : dir=/usr/lib64/ all trust=1
2. Run fagenrules --load to load the new rule.
3. Run systemctl restart fapolicyd to restart the process.
4. Start SC4S with systemctl start sc4s and verify that there are no errors with systemctl status sc4s.
Q: I am facing a unique issue where my postfilter configuration is not working, although I don't have any postfilter for the mentioned source?
A: There may be an out-of-the-box (OOB) postfilter for the source which will be applied. Validate this by checking the value of sc4s_tags in Splunk. To resolve this, see [sc4s-finalfilter]. Do not use this resolution in any other situation, as it can add to the cost of data processing.
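One quick way to check this (a sketch; adjust the index, sourcetype, and time range to match the source in question) is to count events by the tags SC4S applied:
index=<your_index> sourcetype=<your_sourcetype> | stats count by sc4s_tags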
Q: Where should the configuration for the vendors be placed? There are several app-parser folders and directories. Which one should be used? Does this also mean that CSV files for metadata are no longer required?
A: The vendor configuration should be placed under /opt/sc4s/local/config/ with a .conf extension. Most of the folders there are placeholders; the configuration will work in any of these folders as long as the file has a .conf extension. CSV files should be placed in local/context/*.csv. Using splunk_metadata.csv is appropriate for metadata overrides, but use a .conf file for everything else in place of other CSV files.
Q: Can we have a file in which we can create all default indexes in one effort?
A: Refer to indexes.conf, which creates all indexes in one effort. This file also has lastChanceIndex configured; use it if it fits your requirements. For more information on this file, please refer to the Splunk docs.
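For illustration only, a minimal indexes.conf fragment of this kind, covering a subset of the default SC4S indexes with lastChanceIndex set, might look like the following (paths and the choice of lastChanceIndex are assumptions to adapt to your environment):
[default]
lastChanceIndex = main

[netfw]
homePath   = $SPLUNK_DB/netfw/db
coldPath   = $SPLUNK_DB/netfw/colddb
thawedPath = $SPLUNK_DB/netfw/thaweddb

[osnix]
homePath   = $SPLUNK_DB/osnix/db
coldPath   = $SPLUNK_DB/osnix/colddb
thawedPath = $SPLUNK_DB/osnix/thaweddb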
Load balancers are not a best practice for SC4S. The exception to this is a narrow use case where the syslog server is exposed to untrusted clients on the internet, for example, with Palo Alto Cortex.
"},{"location":"lb/#considerations","title":"Considerations","text":"SC4S_SOURCE_PROXYCONNECT=yes
. When a load balancer is placed in front of SC4S, set this variable so that the original client IP is preserved. The best deployment model for high availability is a MicroK8s-based deployment with MetalLB in BGP mode. This model uses a special class of load balancer that is implemented as destination network translation.
"},{"location":"lite/","title":"SC4S Lite","text":""},{"location":"lite/#about-sc4s-lite","title":"About SC4S Lite","text":"SC4S Lite provides a scalable, performance-oriented solution for ingesting syslog data into Splunk. Pluggable modular parsers offer you the flexibility to incorporate custom data processing logic to suit specific use cases.
"},{"location":"lite/#architecture","title":"Architecture","text":""},{"location":"lite/#sc4s-lite_1","title":"SC4S Lite","text":"SC4S Lite provides a lightweight, high-performance SC4S solution.
"},{"location":"lite/#pluggable-modules","title":"Pluggable Modules","text":"Pluggable modules are predefined modules that you can enable and disable through configuration files. Each pluggable module represents a set of parsers for each vendor that supports SC4S. You can only enable or disable modules, you cannot create new modules or update existing ones. For more information see the pluggable modules documentation .
"},{"location":"lite/#splunk-enterprise-or-splunk-cloud","title":"Splunk Enterprise or Splunk Cloud","text":"You configure SC4S Lite to send syslog data to Splunk Enterprise or Splunk Cloud. The Splunk Platform provides comprehensive analysis, searching, and visualization of your processed data.
"},{"location":"lite/#how-sc4s-lite-processes-your-data","title":"How SC4S Lite processes your data","text":"SC4S Lite is built on an Alpine lightweight container which has very little vulnerability. SC4S Lite supports secure syslog data transmission protocols such as RELP and TLS over TCP to protect your data in transit. Additionally, the environment in which SC4S Lite is deployed enhances data security.
"},{"location":"lite/#scalability-and-performance","title":"Scalability and performance","text":"SC4S Lite provides superior performance and scalability thanks to the lightweight architecture and pluggable parsers, which distribute the processing load. It is also packaged with eBPF functionality to further enhance performance. Note that actual performance may depend on factors such as your server capacity and network bandwidth.
"},{"location":"lite/#implement-sc4s-lite","title":"Implement SC4S Lite","text":"To implementat of SC4S Lite:
1. In your deployment, replace the image reference (container2 or container3) with container3lite.
2. Configure your add-ons in the values.yaml file.
Performance testing against our lab configuration produces the following results and limitations.
"},{"location":"performance/#tested-configurations","title":"Tested Configurations","text":""},{"location":"performance/#splunk-cloud-noah","title":"Splunk Cloud Noah","text":""},{"location":"performance/#environment","title":"Environment","text":"/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=1800 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostmane> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21109.66 msg/sec, count=38023708, time=1801.25, (average) msg size=800, bandwidth=16491.92 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 34820.94 msg/sec, count=62687563, time=1800.28, (average) msg size=800, bandwidth=27203.86 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 71929.91 msg/sec, count=129492418, time=1800.26, (average) msg size=800, bandwidth=56195.24 kB/sec average rate = 70894.84 msg/sec, count=127630166, time=1800.27, (average) msg size=800, bandwidth=55386.60 kB/sec m5zn.2xlarge average rate = 85419.09 msg/sec, count=153778825, time=1800.29, (average) msg size=800, bandwidth=66733.66 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#splunk-enterprise","title":"Splunk Enterprise","text":""},{"location":"performance/#environment_1","title":"Environment","text":"/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=600 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostmane> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result_1","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21511.69 msg/sec, count=12930565, time=601.095, (average) msg size=800, bandwidth=16806.01 kB/sec average rate = 21583.13 msg/sec, count=12973491, time=601.094, (average) msg size=800, bandwidth=16861.82 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 37514.29 msg/sec, count=22530855, time=600.594, (average) msg size=800, bandwidth=29308.04 kB/sec average rate = 37549.86 msg/sec, count=22552210, time=600.594, (average) msg size=800, bandwidth=29335.83 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 98580.10 msg/sec, count=59157495, time=600.096, (average) msg size=800, bandwidth=77015.70 kB/sec average rate = 99463.10 msg/sec, count=59687310, time=600.095, (average) msg size=800, bandwidth=77705.55 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#guidance-on-sizing-hardware","title":"Guidance on sizing hardware","text":"SC4S Lite pluggable modules are predefined modules that you can enable or disable by modifying your config.yaml
file. This file contains a list of add-ons. See the example and list of available pluggable modules in (config.yaml reference file) for more information. Once you update config.yaml
, you mount it to the Docker container and override /etc/syslog-ng/config.yaml
.
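In essence, config.yaml is a list of add-ons to enable. A minimal sketch (the module names shown follow the Kubernetes example later in this document) could be:
---
addons:
  - cisco
  - paloalto
  - dell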
The installation process is identical to the installation process for Docker Compose for SC4S with the following modifications.
Use the SC4S Lite image instead of the SC4S image:
image: ghcr.io/splunk/splunk-connect-for-syslog/container3lite\n
Mount your config.yaml
file with your add-ons to /etc/syslog-ng/config.yaml
:
volumes:\n - /path/to/your/config.yaml:/etc/syslog-ng/config.yaml\n
"},{"location":"pluggable_modules/#kubernetes","title":"Kubernetes:","text":"The installation process is identical to the installation process for Kubernetes for SC4S with the following modifications:
Use the SC4S Lite image instead of SC4S in values.yaml
:
image:\n repository: ghcr.io/splunk/splunk-connect-for-syslog/container3lite\n
Mount config.yaml
. Add an addons
section inside sc4s
in values.yaml
:
sc4s:\n addons:\n config.yaml: |-\n ---\n addons:\n - cisco\n - paloalto\n - dell\n
"},{"location":"upgrade/","title":"Upgrading SC4S","text":""},{"location":"upgrade/#upgrade-sc4s","title":"Upgrade SC4S","text":"latest
tag for the SC4S image in the sc4s.service unit file. You can also set a specific version in the unit file if desired.[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n
sudo systemctl restart sc4s
See the release notes for more information.
"},{"location":"upgrade/#upgrade-notes","title":"Upgrade Notes","text":"Version 3 does not introduce any breaking changes. To upgrade to version 3, review the service file and change the container reference from container2
to container3
. For a step by step guide see here.
You may need to migrate legacy log paths or version 1 app-parsers for version 2. To do this, open an issue and attach the original configuration and a compressed pcap of sample data for testing. We will evaluate whether to include the source in an upcoming release.
"},{"location":"upgrade/#upgrade-from-2230","title":"Upgrade from <2.23.0","text":"sc4s.service
and manually update the differences in accordance with the current version of the documentation.env_file
for \u201cMICROFOCUS_ARCSIGHT\u201d variables and replace with CEF variables. env_file
and replace in accordance with the current version of the documentation. sc4s.service
file accordingly._metrics
index by default. Update vendor_product
key \u2018sc4s_metrics\u2019 to change the index.vendor_product_by_source
is deprecated for null queue or dropping events. This use will be removed in version 3. See Filtering events from output.SPLUNK_HEC_ALT_DESTS
is deprecated and will be ignored.SC4S_DEST_GLOBAL_ALTERNATES
is deprecated and will be removed in future major versions. .dest_key
field is no longer used.sc4s_vendor_product
is read only and will be removed.sc4s_vendor
now contains vendor portion of vendor_product
.sc4s_vendor_product
now contains product portion of \u2018vendor_product\u2019.sc4s_class
now contains additional data previously concatenated to vendor_product
meta_key
.#Current app parsers contain one or more lines\nvendor_product('value_here')\n#This must change to failure to make this change will prevent sc4s from starting\nvendor('value')\nproduct('here')\n
"},{"location":"v3_upgrade/","title":"Upgrading Splunk Connect for Syslog v2 -> v3","text":""},{"location":"v3_upgrade/#upgrade-process-for-version-newer-than-230","title":"Upgrade process (for version newer than 2.3.0)","text":"In general the upgrade process consists of three steps: - change of container version - restart of service - validation NOTE: Version 3 of SC4S is using alpine linux distribution as base image in opposition to previous versions which used UBI (Red Hat) image.
"},{"location":"v3_upgrade/#dockerpodman","title":"Docker/Podman","text":""},{"location":"v3_upgrade/#update-container-image-version","title":"Update container image version","text":"In the service file: /lib/systemd/system/sc4s.service
container image reference should be updated to version 3 with latest
tag:
[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n
"},{"location":"v3_upgrade/#restart-sc4s-service","title":"Restart sc4s service","text":"Restart the service: sudo systemctl restart sc4s
After the above command is executed successfully, the following information with the version becomes visible in the container logs: sudo podman logs SC4S
for podman or sudo docker logs SC4S
for docker. Expected output:
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=3.0.0\nstarting goss\nstarting syslog-ng \n
If you are upgrading from a version lower than 2.3.0, please refer to this guide.
"},{"location":"gettingstarted/","title":"Before you start","text":""},{"location":"gettingstarted/#getting-started","title":"Getting Started","text":"Splunk Connect for Syslog (SC4S) is a distribution of syslog-ng that simplifies getting your syslog data into Splunk Enterprise and Splunk Cloud. SC4S provides a runtime-agnostic solution that lets you deploy using the container runtime environment of choice and a configuration framework. This lets you process logs out-of-the-box from many popular devices and systems.
"},{"location":"gettingstarted/#planning-deployment","title":"Planning Deployment","text":"Syslog can refer to multiple message formats as well as, optionally, a wire protocol for event transmission between computer systems over UDP, TCP, or TLS. This protocol minimizes overhead on the sender, favoring performance over reliability. This means any instability or resource constraint can cause data to be lost in transmission.
SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration.
"},{"location":"gettingstarted/ansible-docker-podman/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"env_file
with your Splunk endpoint and HEC token:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your cluster and the host application in the inventory file: all:\n hosts:\n children:\n node:\n hosts:\n node_1:\n ansible_host:\n
"},{"location":"gettingstarted/ansible-docker-podman/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"# From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
ansible-playbook -i path/to/inventory.yaml -u <username> --ask-pass path/to/playbooks/docker.yml\nor\nansible-playbook -i path/to/inventory.yaml -u <username> --ask-pass path/to/playbooks/podman.yml\n
ansible-playbook -i path/to/inventory.yaml -u <username> --key-file <key_file> path/to/playbooks/docker.yml\nor\nansible-playbook -i path/to/inventory.yaml -u <username> --key-file <key_file> path/to/playbooks/podman.yml\n
SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
You can verify if all SC4S instances work by checking the sc4s_container
in Splunk. Each instance should have a different container ID. All other fields should be the same. The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
sudo docker ps\n
docker logs <ID | image name> \n
or: sudo systemctl status sc4s\n
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see this output, see "Troubleshoot sc4s server" and "Troubleshoot resources" for more information.
"},{"location":"gettingstarted/ansible-docker-swarm/","title":"Docker Swarm","text":"SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and the basic configuration, such as Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have existing understanding of Docker Swarm and be able to set up your Swarm architecture and configuration.
"},{"location":"gettingstarted/ansible-docker-swarm/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"env_file
with your Splunk endpoint and HEC token:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your Docker Swarm cluster and the host application in the inventory file: all:\n hosts:\n children:\n manager:\n hosts:\n manager_node_1:\n ansible_host:\n\n worker:\n hosts:\n worker_node_1:\n ansible_host:\n worker_node_2:\n ansible_host:\n
3. You can run your cluster with one or more manager nodes. One advantage of hosting SC4S with Docker Swarm is that you can leverage the Swarm internal load balancer. See your Swarm Mode documentation at Docker. Set the desired number of SC4S replicas in the /ansible/app/docker-compose.yml
file: version: \"3.7\"\nservices:\n sc4s:\n deploy:\n replicas: 2\n ...\n
# From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
ansible-playbook -i path/to/inventory_swarm.yaml -u <username> --ask-pass path/to/playbooks/docker_swarm.yml\n
ansible-playbook -i path/to/inventory_swarm.yaml -u <username> --key-file <key_file> path/to/playbooks/docker_swarm.yml\n
To list your deployed stacks: sudo docker stack ls
To scale your number of services: sudo docker service update --replicas 2 sc4s_sc4s
To see services running in a given stack: sudo docker stack services sc4s
SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
You should see an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
You can verify if all services in the Swarm cluster work by checking the sc4s_container
in Splunk. Each service should have a different container ID. All other fields should be the same. The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
sudo docker|podman ps\n
docker|podman logs <ID | image name> \n
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
To automate SC4S installation with Ansible, you provide a list of hosts on which you want to run SC4S as well as basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have existing understanding of MicroK8s and be able to set up your Kubernetes cluster architecture and configuration.
"},{"location":"gettingstarted/ansible-mk8s/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"Before you run SC4S with Ansible, update values.yaml
with your Splunk endpoint and HEC token. You can find the example file here.
In the inventory file, provide a list of hosts on which you want to run your cluster and the host application:
all:\n hosts:\n children:\n node:\n hosts:\n node_1:\n ansible_host:\n
all:\n hosts:\n children:\n manager:\n hosts:\n manager:\n ansible_host:\n\n workers:\n hosts:\n worker1:\n ansible_host:\n worker2:\n ansible_host:\n
# From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
To authenticate with username and password:
ansible-playbook -i path/to/inventory_mk8s.yaml -u <username> --ask-pass path/to/playbooks/microk8s.yml\n
To authenticate if you are running a high-availability cluster:
ansible-playbook -i path/to/inventory_mk8s_ha.yaml -u <username> --ask-pass path/to/playbooks/microk8s_ha.yml\n
To authenticate using a key pair:
ansible-playbook -i path/to/inventory_mk8s.yaml -u <username> --key-file <key_file> path/to/playbooks/microk8s.yml\n
SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
You can verify whether all services in the cluster work by checking the sc4s_container
in Splunk. Each service should have a different container ID. All other fields should be the same.
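One way to check this (a sketch; widen or narrow the search to fit your environment) is to count startup events per container:
index=* sourcetype=sc4s:events "starting up" | stats count latest(_time) as last_started by sc4s_container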
The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
sudo microk8s kubectl get pods\nsudo microk8s kubectl logs <podname>\n
You should see events similar to those below in the output:
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
"},{"location":"gettingstarted/byoe-rhel8/","title":"Configure SC4S in a non-containerized SC4S deployment","text":"Configuring SC4S in a non-containerized SC4S deployment requires a custom configuration. Note that since Splunk does not control your unique environment, we cannot help with setting up environments, debugging networking, etc. Consider this configuration only if:
This topic provides guidance for using the SC4S syslog-ng configuration files directly on the host OS running on a hardware server or virtual machine. You must provide:
You must modify the base configuration for most environments to accommodate enterprise infrastructure variations. When you upgrade, evaluate the current environment compared to this reference, then develop and test an installation-specific upgrade plan. Do not depend on the distribution-supplied version of syslog-ng, as it may not be recent enough to support your needs. See this blog post to learn more.
"},{"location":"gettingstarted/byoe-rhel8/#install-sc4s-in-a-custom-environment","title":"Install SC4S in a custom environment","text":"These installation instructions assume a recent RHEL or CentOS-based release. You may have to make minor adjustments for Debian and Ubuntu. The examples provided here use pre-compiled binaries for the syslog-ng installation in /etc/syslog-ng
. Your configuration may vary.
The following installation instructions are summarized from a blog maintained by the One Identity team.
Install CentOS or RHEL 8.0. See your OS documentation for instructions.
Enable EPEL (Centos 8).
dnf install 'dnf-command(copr)' -y\ndnf install epel-release -y\ndnf copr enable czanik/syslog-ng336 -y\ndnf install syslog-ng syslog-ng-python syslog-ng-http python3-pip gcc python3-devel -y\n
sudo systemctl stop syslog-ng\nsudo systemctl disable syslog-ng\n
bare_metal.tar
from releases on github and untar the package in /etc/syslog-ng
. This step unpacks a tarball with the SC4S version of the syslog-ng config files in the standard /etc/syslog-ng
location, and will overwrite existing content. Make sure that any previous configurations of syslog-ng are saved prior to executing the download step.For production use, select the latest version of SC4S that does not have an -rc
, -alpha
, or -beta
suffix.
sudo wget -c https://github.com/splunk/splunk-connect-for-syslog/releases/download/<latest release>/baremetal.tar -O - | sudo tar -x -C /etc/syslog-ng\n
sudo pip3 install -r /etc/syslog-ng/requirements.txt\n
goss
and confirm that the version is v0.3.16 or later. goss
installs in /usr/local/bin
by default, so do one of the following:entrypoint.sh
is modified to include /usr/local/bin
in the full path.goss
binary to /bin
or /usr/bin
.curl -L https://github.com/aelsabbahy/goss/releases/latest/download/goss-linux-amd64 -o /usr/local/bin/goss\nchmod +rx /usr/local/bin/goss\ncurl -L https://github.com/aelsabbahy/goss/releases/latest/download/dgoss -o /usr/local/bin/dgoss\n# Alternatively, using the latest\n# curl -L https://raw.githubusercontent.com/aelsabbahy/goss/latest/extras/dgoss/dgoss -o /usr/local/bin/dgoss\nchmod +rx /usr/local/bin/dgoss\n
entrypoint.sh
script (identical to that used in the container) directly using systemd.entrypoint.sh
script directly in systemd, create the SC4S unit file /lib/systemd/system/sc4s.service
and add the following:[Unit]\nDescription=SC4S Syslog Daemon\nDocumentation=https://splunk-connect-for-syslog.readthedocs.io/en/latest/\nWants=network.target network-online.target\nAfter=network.target network-online.target\n\n[Service]\nType=simple\nExecStart=/etc/syslog-ng/entrypoint.sh\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=/etc/syslog-ng/env_file\nStandardOutput=journal\nStandardError=journal\nRestart=on-abnormal\n\n[Install]\nWantedBy=multi-user.target\n
entrypoint.sh
as a preconfigured script, modify the script by commenting out or removing the stanzas following the OPTIONAL for BYOE
comments in the script. This prevents syslog-ng from being launched by the script. Then create the SC4S unit file /lib/systemd/system/syslog-ng.service
and add the following content:[Unit]\nDescription=System Logger Daemon\nDocumentation=man:syslog-ng(8)\nAfter=network.target\n\n[Service]\nType=notify\nExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS -p /var/run/syslogd.pid\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=-/etc/default/syslog-ng\nEnvironmentFile=-/etc/sysconfig/syslog-ng\nStandardOutput=journal\nStandardError=journal\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n
/etc/syslog-ng/env_file
and add the following environment variables. Adjust the URL/TOKEN as needed.# The following \"path\" variables can differ from the container defaults specified in the entrypoint.sh script. \n# These are *optional* for most BYOE installations, which do not differ from the install location used.\n# in the container version of SC4S. Failure to properly set these will cause startup failure.\n#SC4S_ETC=/etc/syslog-ng\n#SC4S_VAR=/etc/syslog-ng/var\n#SC4S_BIN=/bin\n#SC4S_SBIN=/usr/sbin\n#SC4S_TLS=/etc/syslog-ng/tls\n\n# General Options\nSC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk.smg.aws:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=a778f63a-5dff-4e3c-a72c-a03183659e94\n\n# Uncomment the following line if using untrusted (self-signed) SSL certificates\n# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/byoe-rhel8/#configure-sc4s-listening-ports","title":"Configure SC4S listening ports","text":"The standard SC4S configuration uses UDP/TCP port 514 as the default for the listening port for syslog traffic, and TCP port 6514 for TLS. You can change these defaults by adding the following additional environment variables to the env_file
:
SC4S_LISTEN_DEFAULT_TCP_PORT=514\nSC4S_LISTEN_DEFAULT_UDP_PORT=514\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5425_PORT=5425\nSC4S_LISTEN_DEFAULT_TLS_PORT=6514\n
"},{"location":"gettingstarted/byoe-rhel8/#create-unique-dedicated-listening-ports","title":"Create unique dedicated listening ports","text":"For some source technologies, categorization by message content is not possible. To collect these sources, dedicate a unique listening port to a specific source. See Sources for more information.
"},{"location":"gettingstarted/docker-compose-MacOS/","title":"Install Docker Desktop for MacOS","text":"Refer to the \u201cMacOS\u201d section in your Docker documentation to set up your Docker Desktop for MacOS.
"},{"location":"gettingstarted/docker-compose-MacOS/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"You can run SC4S using either docker-compose
or the docker run
command in the command line. This topic focuses solely on using docker-compose
.
Create a directory on the server for local configurations and disk buffering. Make it available to all administrators, for example: /opt/sc4s/
.
Create a docker-compose.yml
file in your new directory, based on the provided template. By default, the latest container is automatically downloaded at each restart. As a best practice, consult this topic at the time of any new upgrade to check for any changes in the latest template.
version: \"3.7\"\nservices:\n sc4s:\n deploy:\n replicas: 2\n restart_policy:\n condition: on-failure\n image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n ports:\n - target: 514\n published: 514\n protocol: tcp\n - target: 514\n published: 514\n protocol: udp\n - target: 601\n published: 601\n protocol: tcp\n - target: 6514\n published: 6514\n protocol: tcp\n env_file:\n - /opt/sc4s/env_file\n volumes:\n - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n# - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n# - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n splunk-sc4s-var:\n
/opt/sc4s
folder as shared.Create a local volume that will contain the disk buffer files in the event of a communication failure to the upstream destinations. This volume also keeps track of the state of syslog-ng between restarts, and in particular the state of the disk buffer. Be sure to account for disk space requirements for the Docker volume. This volume is located in /var/lib/docker/volumes/
and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S disk buffer configuration for more information.
sudo docker volume create splunk-sc4s-var\n
Create the subdirectories: /opt/sc4s/local
, /opt/sc4s/archive
, and /opt/sc4s/tls
. Make sure these directories match the volume mounts specified indocker-compose.yml
.
Create a file named /opt/sc4s/env_file
.
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
6. Update the following environment variables and values in /opt/sc4s/env_file
: Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the values for your environment. Do not configure HEC Acknowledgement when you deploy the HEC token on the Splunk side; syslog-ng http destination does not support this feature.
The default number of SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers (threads) should deviate from this.
Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line.
Each listening port on the container must be mapped to a listening port on the host. Make sure to update the docker-compose.yml
file when adding listening ports for new data sources.
To configure unique ports:
/opt/sc4s/env_file
file to include the port-specific environment variables. See the Sources documentation to identify the specific environment variables that are mapped to each data source vendor and technology.target
stanzas in the ports
section of the file (after the default ports). For example, the following additional target
and published
lines provide for 21 additional technology-specific UDP and TCP ports: - target: 5000-5020\n published: 5000-5020\n protocol: tcp\n - target: 5000-5020\n published: 5000-5020\n protocol: udp\n
For more information about configuration refer to Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/docker-compose-MacOS/#startrestart-sc4s","title":"Start/Restart SC4S","text":"From the catalog where you created compose file, execute:
docker-compose up\n
Otherwise use docker-compose
with -f
flag pointing to the compose file docker-compose up -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose-MacOS/#stop-sc4s","title":"Stop SC4S","text":"Execute:
docker-compose down \n
or docker-compose down -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose-MacOS/#verify-proper-operation","title":"Verify Proper Operation","text":"SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
When the startup process proceeds normally, you should see an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
If you do not see this, try the following steps to troubleshoot:
docker logs <container_name>\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see the output above, proceed to the \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.
"},{"location":"gettingstarted/docker-compose/","title":"Install Docker Desktop","text":"Refer to your Docker documentation to set up your Docker Desktop.
"},{"location":"gettingstarted/docker-compose/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"You can run SC4S with docker-compose
, or in the command line using the command docker run
. Both options are described in this topic.
/opt/sc4s/
. If you are using docker-compose
, create a docker-compose.yml
file in this directory using the template provided here. By default, the latest SC4S image is automatically downloaded at each restart. As a best practice, check back here regularly for any changes made to the latest template is incorporated into production before you relaunch with Docker Compose.version: \"3.7\"\nservices:\n sc4s:\n deploy:\n replicas: 2\n restart_policy:\n condition: on-failure\n image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n ports:\n - target: 514\n published: 514\n protocol: tcp\n - target: 514\n published: 514\n protocol: udp\n - target: 601\n published: 601\n protocol: tcp\n - target: 6514\n published: 6514\n protocol: tcp\n env_file:\n - /opt/sc4s/env_file\n volumes:\n - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n# - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n# - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n splunk-sc4s-var:\n
/opt/sc4s
folder as shared./var/lib/docker/volumes/
and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S Disk Buffer Configuration in the Configuration topic for more information.sudo docker volume create splunk-sc4s-var\n
Create the subdirectories: /opt/sc4s/local
, /opt/sc4s/archive
, and /opt/sc4s/tls
. If you are using the docker-compose.yml
file, make sure these directories match the volume mounts specified indocker-compose.yml
.
Create a file named /opt/sc4s/env_file
.
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
6. Update the following environment variables and values to /opt/sc4s/env_file
: SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the values for your environment. Do not configure HEC Acknowledgement when you deploy the HEC token on the Splunk side; syslog-ng http destination does not support this feature. SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers (threads) should deviate from this.NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.
For more information about configuration, see Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/docker-compose/#start-or-restart-sc4s","title":"Start or restart SC4S","text":"docker-compose
. Be sure to map the listening ports (-p
arguments) according to your needs:docker run -p 514:514 -p 514:514/udp -p 6514:6514 -p 5000-5020:5000-5020 -p 5000-5020:5000-5020/udp \\\n --env-file=/opt/sc4s/env_file \\\n --name SC4S \\\n --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n
docker compose
, from the catalog where you created compose file execute: docker compose up\n
Otherwise use docker compose
with -f
flag pointing to the compose file:
docker compose up -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose/#stop-sc4s","title":"Stop SC4S","text":"If the container is run directly from the CLI, stop the container using the docker stop <containerID>
command.
If using docker compose
, execute:
docker compose down \n
or docker compose down -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose/#validate-your-configuration","title":"Validate your configuration","text":"SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following when the startup process proceeds normally:
syslog-ng starting up; version='3.28.1'\n
If you do not see this, try the following steps to troubleshoot:
docker logs SC4S\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see the output above, see "Troubleshoot SC4S server" and "Troubleshoot resources" sections for more detailed information.
"},{"location":"gettingstarted/docker-podman-offline/","title":"Install a container while offline","text":"You can stage SC4S by downloading the image so that it can be loaded on a host machine, for example on an airgapped system, without internet connectivity.
oci_container.tgz
from our Github Page. The following example downloads v3.23.1, replace the URL with the latest release or pre-release version as desired:sudo wget https://github.com/splunk/splunk-connect-for-syslog/releases/download/v3.23.1/oci_container.tar.gz\n
<podman or docker> load < oci_container.tar.gz\n
Loaded image: ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1\n
Use the container ID to create a local label:
<podman or docker> tag ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1 sc4slocal:latest\n
Use the local label sc4slocal:latest
in the relevant unit or YAML file to launch SC4S by setting the SC4S_IMAGE
environment variable in the unit file, or the relevant image:
tag if you are using Docker Compose/Swarm. This label will cause the runtime to select the locally loaded image, and will not attempt to obtain the container image from the internet.
Environment=\"SC4S_IMAGE=sc4slocal:latest\"\n
7. Remove the entry from the relevant unit file when your configuration uses systemd. This is because an external connection to pull the container is no longer needed or available: ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n
"},{"location":"gettingstarted/docker-systemd-general/","title":"Install Docker CE","text":""},{"location":"gettingstarted/docker-systemd-general/#before-you-begin","title":"Before you begin","text":"Before you start:
This topic provides the most recent unit file. By default, the latest SC4S image is automatically downloaded at each restart. Consult this topic when you upgrade your SC4S installation and check for changes to the provided template unit file. Make sure these changes are incorporated into your configuration before you relaunch with systemd.
/lib/systemd/system/sc4s.service
based on the provided template:[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target docker.service\nAfter=NetworkManager.service network-online.target docker.service\nRequires=docker.service\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/docker run \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --env-file=/opt/sc4s/env_file \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
sudo docker volume create splunk-sc4s-var\n
Account for disk space requirements for the new Docker volume. The Docker volume can grow significantly if there is an extended outage to the SC4S destinations. This volume can be found at /var/lib/docker/volumes/
. See SC4S Disk Buffer Configuration.
Create the following subdirectories:
/opt/sc4s/local
/opt/sc4s/archive
/opt/sc4s/tls
/opt/sc4s/env_file
and add the following environment variables and values:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side, the underlying syslog-ng HTTP destination does not support this feature.
The default number of SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers should deviate from this.
Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example in step 5.
For more information see Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/docker-systemd-general/#configure-sc4s-for-systemd","title":"Configure SC4S for systemd","text":"To configure SC4S for systemd run the following commands:
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"To restart SC4S run the following command:
sudo systemctl restart sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#implement-unit-file-changes","title":"Implement unit file changes","text":"If you made changes to the configuration unit file, for example to configure with dedicated ports, you must stop SC4S and re-run the systemd configuration commands to implement your changes.
sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#validate-your-configuration","title":"Validate your configuration","text":"SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicate with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
You should see an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
docker logs SC4S\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
You must tune the host Linux OS receive buffer size to match the SC4S default. This helps to avoid event dropping at the network level. The default receive buffer for SC4S is 16 MB for UDP traffic, which should be acceptable for most environments. To set the host OS kernel to match your buffer:
Edit /etc/sysctl.conf
using the following whole-byte values corresponding to 16 MB:
net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n
Apply to the kernel:
sysctl -p\n
To verify that the kernel does not drop packets, periodically monitor the buffer using the command netstat -su | grep \"receive errors\"
. Failure to tune the kernel for high-volume traffic results in message loss, which can be unpredictable and difficult to detect. The default values for receive kernel buffers in most distributions is 2 MB, which may not be adequate for your configuration.
In many distributions, for example CentOS provisioned in AWS, IPv4 forwarding is not enabled by default. IPv4 forwarding must be enabled for container networking.
sudo sysctl net.ipv4.ip_forward
sudo sysctl net.ipv4.ip_forward=1
/usr/lib/sysctl.d/
, /run/sysctl.d/
, and /etc/sysctl.d/
. /etc/sysctl.d/
and put following setting there or find this specific setting in one of the existing configuration files and set the value to 1
.net.ipv4.ip_forward=1\n
"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-2-create-your-local-directory-structure","title":"Step 2: Create your local directory structure","text":"Create the following three directories:
/opt/sc4s/local
: This directory is used as a mount point for local overrides and configurations. This empty local
directory populates with defaults and examples at the first invocation of SC4S for local configurations and context overrides. Do not change the directory structure of these files, as SC4S depends on the directory layout to read the local configurations properly. If necessary, you can change or add individual files.local/config/
directory four subdirectories let you provide support for device types that are not provided out of the box in SC4S. To get started, see the example log path template lp-example.conf.tmpl
and a filter example.conf
in the log_paths
and filters
subdirectories. Copy these as templates for your own log path development.local/context
directory, change the \u201cnon-example\u201d version of a file (e.g. splunk_metadata.csv
) to preserve the changes upon restart./opt/sc4s/archive
is a mount point for local storage of syslog events if the optional mount is uncommented. The events are written in the syslog-ng EWMM format. See the Configuration topic for information about the directory structure that the archive uses./opt/sc4s/tls
is a mount point for custom TLS certificates if the optional mount is uncommented.When you create these directories, make sure that they match the volume mounts specified in the sc4s.service unit file. Failure to do this will cause SC4S to abort at startup.
"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-3-select-a-container-runtime-and-sc4s-configuration","title":"Step 3: Select a Container Runtime and SC4S Configuration","text":"The table below shows possible ways to run SC4S using Docker or Podman with various management and orchestration systems.
Check your Podman or Docker documentation to see which operating systems are supported by your chosen container management tool. If the SC4S deployment model involves additional limitations or requirements regarding operating systems, you will find them in the column labeled \u2018Additional Operating Systems Requirements\u2019.
Container Runtime and Orchestration Additional Operating Systems Requirements MicroK8s Ubuntu with Microk8s Podman + systemd Docker CE + systemd Docker Desktop + Compose MacOS Docker Compose Bring your own Environment RHEL or CentOS 8.1 & 8.2 (best option) Offline Container Installation Ansible+Docker Swarm Ansible+Podman Ansible+Docker"},{"location":"gettingstarted/getting-started-splunk-setup/","title":"Splunk setup","text":"To ensure proper integration for SC4S and Splunk, perform the following tasks in your Splunk instance:
SC4S maps each sourcetype to the following indexes by default. You will also need to create these indexes in Splunk:
email
epav
epintel
fireeye
gitops
infraops
netauth
netdlp
netdns
netfw
netids
netlb
netops
netwaf
netproxy
netipam
oswin
oswinsec
osnix
print
_metrics
(Optional opt-in for SC4S operational metrics; ensure this is created as a metrics index)If you use custom indexes in SC4S you must also create them in Splunk. See Create custom indexes for more information.
"},{"location":"gettingstarted/getting-started-splunk-setup/#step-2-configure-your-http-event-collector","title":"Step 2: Configure your HTTP event collector","text":"See Use the HTTP event collector for HEC configuration instructions based on your Splunk type.
Keep in mind the following best practices specific to HEC for SC4S:
_metrics
and all event destination indexes.lastChanceIndex
. If you do populate this field, take extreme care to keep it up to date; an attempt to send data to an index that is not in this list results in a 400
error from the HEC endpoint. The lastChanceIndex
will not be consulted if the index specified in the event is not configured on Splunk and the entire batch is then not sent to Splunk.In some configurations, you should ensure output balancing from SC4S to Splunk indexers. To do this, you create a load balancing mechanism between SC4S and Splunk indexers. Note that this should not be confused with load balancing between sources and SC4S.
When configuring your load balancing mechanism, keep in mind the following:
Splunk provides an implementation for SC4S deployment with MicroK8s using a single-server MicroK8s as the deployment model. Clustering has some tradeoffs and should be only considered on a deployment-specific basis.
You can independently replicate the model deployment on different distributions of Kubernetes. Do not attempt this unless you have advanced understanding of Kubernetes and are willing and able to maintain this configuration regularly.
SC4S with MicroK8s leverages features of MicroK8s:
Splunk maintains container images, but it doesn\u2019t directly support or otherwise provide resolutions for issues within the runtime environment.
"},{"location":"gettingstarted/k8s-microk8s/#step-1-allocate-ip-addresses","title":"Step 1: Allocate IP addresses","text":"This configuration requires as least two IP addresses: one for the host and one for the internal load balancer. We suggest allocating three IP addresses for the host and 5-10 IP addresses for later use.
"},{"location":"gettingstarted/k8s-microk8s/#step-2-install-microk8s","title":"Step 2: Install MicroK8s","text":"To install MicroK8s:
sudo snap install microk8s --classic --channel=1.24\nsudo usermod -a -G microk8s $USER\nsudo chown -f -R $USER ~/.kube\nsu - $USER\nmicrok8s status --wait-ready\n
"},{"location":"gettingstarted/k8s-microk8s/#step-3-set-up-your-add-ons","title":"Step 3: Set up your add-ons","text":"When you install metallb
you will be prompted for one or more IPs to use as entry points. If you do not plan to enable clustering, then this IP may be the same IP as the host. If you do plan to enable clustering this IP should not be assigned to the host.
A single IP in CIDR format is x.x.x.x/32. Use CIDR or range syntax.
microk8s enable dns \nmicrok8s enable community\nmicrok8s enable metallb \nmicrok8s enable rbac \nmicrok8s enable storage \nmicrok8s enable openebs \nmicrok8s enable helm3\nmicrok8s status --wait-ready\n
"},{"location":"gettingstarted/k8s-microk8s/#step-4-add-an-sc4s-helm-repository","title":"Step 4: Add an SC4S Helm repository","text":"To add an SC4S Helm repository:
microk8s helm3 repo add splunk-connect-for-syslog https://splunk.github.io/splunk-connect-for-syslog\nmicrok8s helm3 repo update\n
"},{"location":"gettingstarted/k8s-microk8s/#step-5-create-a-valuesyaml-file","title":"Step 5: Create a values.yaml
file","text":"Create the configuration file values.yaml
. You can provide HEC token as a Kubernetes secret or in plain text.
values.yaml
file:#values.yaml\nsplunk:\n hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n hec_token: \"00000000-0000-0000-0000-000000000000\"\n hec_verify_tls: \"yes\"\n
microk8s helm3 install sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
To avoid storing the token in the values.yaml
file:#values.yaml\nsplunk:\n hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n hec_verify_tls: \"yes\"\n
export HEC_TOKEN=\"00000000-0000-0000-0000-000000000000\"\nmicrok8s helm3 install sc4s --set splunk.hec_token=$HEC_TOKEN splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
Whenever the image is upgraded or when changes are made to the values.yaml
file, run the following command to apply them:
microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#install-and-configure-sc4s-for-high-availability-ha","title":"Install and configure SC4S for High Availability (HA)","text":"Three identically-sized nodes are required for HA. See your Microk8s documentation for more information.
Update the configuration file:
#values.yaml\nreplicaCount: 6 #2x node count\nsplunk:\n hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n hec_token: \"00000000-0000-0000-0000-000000000000\"\n hec_verify_tls: \"yes\"\n
Upgrade SC4S to apply the new configuration:
microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
values.yaml
","text":"With helm-based deployment you cannot configure environment variables and context files directly. Instead, use the values.yaml
file to update your configuration, for example:
sc4s:\n # Certificate as a k8s Secret with tls.key and tls.crt fields\n # Ideally produced and managed by cert-manager.io\n existingCert: example-com-tls\n #\n vendor_product:\n - name: checkpoint\n ports:\n tcp: [9000] #Same as SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000\n udp: [9000]\n options:\n listen:\n old_host_rules: \"yes\" #Same as SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes\n\n - name: infoblox\n ports:\n tcp: [9001, 9002]\n tls: [9003]\n - name: fortinet\n ports:\n ietf_udp:\n - 9100\n - 9101\n context_files:\n splunk_metadata.csv: |-\n cisco_meraki,index,foo\n host.csv: |-\n 192.168.1.1,foo\n 192.168.1.2,moon\n
Use the config_files
and context_files
variables to specify configuration and context files that are passed to SC4S.
config_files
: This variable contains a dictionary that maps the name of each configuration file to its content in the form of a YAML block scalar.context_files
: This variable contains a dictionary that maps the name of each context file to its content in the form of a YAML block scalar. The context files splunk_metadata.csv
and host.csv
are passed with values.yaml
: sc4s:\n # Certificate as a k8s Secret with tls.key and tls.crt fields\n # Ideally produced and managed by cert-manager.io\n #\n vendor_product:\n - name: checkpoint\n ports:\n tcp: [9000] #Same as SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000\n udp: [9000]\n options:\n listen:\n old_host_rules: \"yes\" #Same as SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes\n\n - name: fortinet\n ports:\n ietf_udp:\n - 9100\n - 9101\n context_files:\n splunk_metadata.csv: |+\n cisco_meraki,index,foo\n cisco_asa,index,bar\n config_files:\n app-workaround-cisco_asa.conf: |+\n block parser app-postfilter-cisco_asa_metadata() {\n channel {\n rewrite {\n unset(value('fields.sc4s_recv_time'));\n };\n };\n };\n application app-postfilter-cisco_asa_metadata[sc4s-postfilter] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n };\n parser { app-postfilter-cisco_asa_metadata(); };\n };\n
You should expect your system to require two instances per node by default. Adjust requests and limits to allow each instance to use about 40% of each node, presuming no other workload is present.
resources:\n limits:\n cpu: 100m\n memory: 128Mi\n requests:\n cpu: 100m\n memory: 128Mi\n
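As a purely illustrative sizing example, on a hypothetical node with 4 vCPU and 8 GiB of memory, giving each of the two instances roughly 40% of the node could look like this (adjust the numbers to your own node size and workload):
resources:
  limits:
    cpu: 1600m      # roughly 40% of 4 vCPU
    memory: 3200Mi  # roughly 40% of 8 GiB
  requests:
    cpu: 1600m
    memory: 3200Mi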
"},{"location":"gettingstarted/podman-systemd-general/","title":"Install podman","text":"See Podman product installation docs for information about working with your Podman installation.
Before performing the tasks described in this topic, make sure you are familiar with using IPv4 forwarding with SC4S. See IPv4 forwarding .
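As a quick reference only (the linked IPv4 forwarding topic remains the authoritative guidance), forwarding can be checked and, if necessary, enabled with standard sysctl commands; the drop-in file name below is just an illustration:
# Check the current setting (1 = enabled)
sysctl net.ipv4.ip_forward

# Enable it persistently if required
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/90-ipv4-forward.conf
sudo sysctl -p /etc/sysctl.d/90-ipv4-forward.conf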
"},{"location":"gettingstarted/podman-systemd-general/#initial-setup","title":"Initial Setup","text":"NOTE: Make sure to use the latest unit file, which is provided here, with the current release. By default, the latest container is automatically downloaded at each restart. As a best practice, check back here regularly for any changes made to the latest template unit file is incorporated into production before you relaunch with systemd.
Create the systemd unit file /lib/systemd/system/sc4s.service
based on the following template:[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/podman run \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --env-file=/opt/sc4s/env_file \\\n --health-cmd=\"/healthcheck.sh\" \\\n --health-interval=10s --health-retries=6 --health-timeout=6s \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
sudo podman volume create splunk-sc4s-var\n
NOTE: Be sure to account for disk space requirements for the podman volume you create. This volume will be located in /var/lib/containers/storage/volumes/
and could grow significantly if there is an extended outage to the SC4S destinations (typically HEC endpoints). See the \u201cSC4S Disk Buffer Configuration\u201d section on the Configuration page for more info.
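To keep an eye on growth during an outage, the volume location and current size can be checked with standard tooling; a minimal sketch:
# Show where the volume lives on disk
sudo podman volume inspect splunk-sc4s-var --format '{{ .Mountpoint }}'

# Check how much space the disk buffer is currently consuming
sudo du -sh /var/lib/containers/storage/volumes/splunk-sc4s-var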
Create the following subdirectories (an example command follows this list):
* /opt/sc4s/local
* /opt/sc4s/archive
* /opt/sc4s/tls
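For example:
sudo mkdir -p /opt/sc4s/local /opt/sc4s/archive /opt/sc4s/tls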
Create the environment file /opt/sc4s/env_file
and add the following environment variables and values:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side; the underlying syslog-ng http destination does not support this feature. The default value for SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers (threads) should deviate from this.
NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.
For more information about configuration refer to Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/podman-systemd-general/#configure-sc4s-for-systemd-and-start-sc4s","title":"Configure SC4S for systemd and start SC4S","text":"sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"sudo systemctl restart sc4s\n
If you have made changes to the configuration unit file, for example, in order to configure dedicated ports, you must first stop SC4S and re-run the systemd configuration commands:
sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#stop-sc4s","title":"Stop SC4S","text":"sudo systemctl stop sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#verify-proper-operation","title":"Verify Proper Operation","text":"SC4S has a number of \u201cpreflight\u201d checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. After this step is complete, verify SC4S is properly communicating with Splunk by executing the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following when the startup process proceeds normally (without syntax errors).
syslog-ng starting up; version='3.28.1'\n
If you do not see this, try the following before proceeding to deeper-level troubleshooting:
podman logs SC4S\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If the output does not display, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
"},{"location":"gettingstarted/podman-systemd-general/#sc4s-non-root-operation","title":"SC4S non-root operation","text":""},{"location":"gettingstarted/podman-systemd-general/#note","title":"NOTE:","text":"Operating as a non-root user makes it impossible to use standard ports 514 and 601. Many devices cannot alter their destination port, so this operation may only be appropriate for cases where accepting syslog data from the public internet cannot be avoided.
"},{"location":"gettingstarted/podman-systemd-general/#prequisites","title":"Prequisites","text":"Podman
and slirp4netns
must be installed.
Increase the number of user namespaces. Execute the following with sudo privileges:
$ echo \"user.max_user_namespaces=28633\" > /etc/sysctl.d/userns.conf \n$ sysctl -p /etc/sysctl.d/userns.conf\n
Create a non-root user from which to run SC4S and to prepare Podman for non-root operations:
sudo useradd -m -d /home/sc4s -s /bin/bash sc4s\nsudo passwd sc4s # type password here\nsudo su - sc4s\nmkdir -p /home/sc4s/local\nmkdir -p /home/sc4s/archive\nmkdir -p /home/sc4s/tls\npodman system migrate\n
Load the new environment variables. To do this, temporarily switch to any other user, and then log back in as the SC4S user. When logging in as the SC4S user, don\u2019t use the \u2018su\u2019 command, as it won\u2019t load the new variables. Instead, you can use, for example, the command \u2018ssh sc4s@localhost\u2019.
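For example, from the sc4s shell opened above:
# Leave the current session, then log back in so the new settings are picked up
exit
ssh sc4s@localhost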
Create a unit file at ~/.config/systemd/user/sc4s.service
with the following content:
[Unit]\nUser=sc4s\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n[Install]\nWantedBy=multi-user.target\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n# Optional mount point for local overrides and configuration\nEnvironment=\"SC4S_LOCAL_MOUNT=/home/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/home/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/home/sc4s/tls:/etc/syslog-ng/tls:z\"\nTimeoutStartSec=0\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n# Note: The path /usr/bin/bash may vary based on your operating system.\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl --user set-environment SC4SHOST=$(hostname -s)\"\nExecStart=/usr/bin/podman run -p 2514:514 -p 2514:514/udp -p 6514:6514 \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --env-file=/home/sc4s/env_file \\\n --health-cmd=\"/healthcheck.sh\" \\\n --health-interval=10s --health-retries=6 --health-timeout=6s \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\nRestart=on-abnormal\n
Create your env_file
file at /home/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\nSC4S_LISTEN_DEFAULT_TCP_PORT=8514\nSC4S_LISTEN_DEFAULT_UDP_PORT=8514\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=8601\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=8601\n
To run the service as a non-root user, run the systemctl
command with the --user
flag:
systemctl --user daemon-reload\nsystemctl --user enable sc4s\nsystemctl --user start sc4s\n
The remainder of the setup can be found in the main setup instructions.
"},{"location":"gettingstarted/quickstart_guide/","title":"Quickstart Guide","text":"This guide will enable you to quickly implement basic changes to your Splunk instance and set up a simple SC4S installation. It\u2019s a great starting point for working with SC4S and establishing a minimal operational solution. The same steps are thoroughly described in the Splunk Setup and Runtime configuration sections.
"},{"location":"gettingstarted/quickstart_guide/#splunk-setup","title":"Splunk setup","text":"Create the following default indexes that are used by SC4S:
email
epav
fireeye
gitops
infraops
netauth
netdlp
netdns
netfw
netids
netops
netwaf
netproxy
netipam
oswinsec
osnix
_metrics
(Optional opt-in for SC4S operational metrics; ensure this is created as a metrics index)
Create a HEC token for SC4S. When filling out the form for the token, leave the \u201cSelected Indexes\u201d pane blank and specify that a lastChanceIndex
be created so that all data received by SC4S will have a target destination in Splunk.
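If you manage indexes with configuration files rather than Splunk Web, a minimal indexes.conf sketch for two of the indexes listed above might look like the following (paths are illustrative; repeat the pattern for the remaining event indexes, and remember that the metrics index must be created with datatype = metric):
# indexes.conf -- illustrative only
[netfw]
homePath   = $SPLUNK_DB/netfw/db
coldPath   = $SPLUNK_DB/netfw/colddb
thawedPath = $SPLUNK_DB/netfw/thaweddb

[osnix]
homePath   = $SPLUNK_DB/osnix/db
coldPath   = $SPLUNK_DB/osnix/colddb
thawedPath = $SPLUNK_DB/osnix/thaweddb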
a. Add the following to /etc/sysctl.conf
:
```\nnet.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n```\n
b. Apply to the kernel:
```\nsysctl -p\n```\n
Ensure the kernel is not dropping packets:
netstat -su | grep \"receive errors\"\n
Create the systemd unit file /lib/systemd/system/sc4s.service
.
Copy and paste from the SC4S sample unit file (Docker) or SC4S sample unit file (Podman).
Install Podman or Docker:
sudo yum -y install podman\n
or sudo yum install docker-engine -y\n
Create a Podman/Docker local volume that will contain the disk buffer files and other SC4S state files (choose one in the command below):
sudo podman|docker volume create splunk-sc4s-var\n
Create directories to be used as a mount point for local overrides and configurations:
mkdir /opt/sc4s/local
mkdir /opt/sc4s/archive
mkdir /opt/sc4s/tls
Create the environment file /opt/sc4s/env_file
and replace the HEC_URL and HEC_TOKEN as necessary:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\n SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n #Uncomment the following line if using untrusted SSL certificates\n #SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
Configure SC4S for systemd and start SC4S:
sudo systemctl daemon-reload
sudo systemctl enable sc4s
sudo systemctl start sc4s
Check podman/docker logs for errors:
sudo podman|docker logs SC4S\n
Search on Splunk for successful installation of SC4S:
index=* sourcetype=sc4s:events \"starting up\"\n
Send sample data to default udp port 514 of SC4S host:
echo "Hello SC4S" > /dev/udp/<SC4S_ip>/514\n
When using Splunk Connect for Syslog to onboard a data source, the syslog-ng \u201capp-parser\u201d performs the operations that are traditionally performed at index-time by the corresponding Technical Add-on installed there. These index-time operations include linebreaking, source/sourcetype setting and timestamping. For this reason, if a data source is exclusively onboarded using SC4S then you will not need to install its corresponding Add-On on the indexers. You must, however, install the Add-on on the search head(s) for the user communities interested in this data source.
SC4S is designed to process \u201csyslog\u201d data, meaning IETF RFC 5424, legacy BSD syslog as described in RFC 3164 (an informational document, not a standard), and many \u201calmost\u201d syslog formats.
Where possible, data sources are identified and processed based on characteristics of the event that make them unique compared to other events. For example, Cisco IOS devices include \u201d : %\u201d followed by a string, while Arista EOS devices use a valid RFC3164 header with a value in the \u201cPROGRAM\u201d position and \u201c%\u201d as the first character of the \u201cMESSAGE\u201d portion. This allows two similar event structures to be processed correctly.
When identification by message content alone is not possible (for example, the \u201csshd\u201d program field is commonly used across vendors), additional \u201chint\u201d or guidance configuration allows SC4S to better classify events. Hints can be applied by defining a specific port, which is then used as a property of the event, or by configuring a host name/IP pattern. For example, \u201cVMWARE VSPHERE\u201d products emit a number of \u201cPROGRAM\u201d values which can be used to identify vmware-specific events in the syslog stream, and these can be properly sourcetyped automatically; however, because \u201csshd\u201d is not unique, those events will be treated as generic \u201cos:nix\u201d events until further configuration is applied. The administrator can take one of two actions to refine the processing for vmware: define a dedicated listening port for the vmware hosts, or configure a host name/IP pattern, as sketched below.
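A minimal sketch of the host-pattern approach, modeled on the vps examples elsewhere in this documentation (the host glob and file name are illustrative; confirm the exact vendor/product key against the VMware vSphere source page):
#/opt/sc4s/local/config/app_parsers/app-vps-vmware_vsphere.conf
#File name provided is a suggestion; it must be globally unique

application app-vps-vmware_vsphere[sc4s-vps] {
    filter {
        host("esx*" type(glob))
    };
    parser {
        p_set_netsource_fields(
            vendor('vmware')
            product('vsphere')
        );
    };
};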
Many log sources can be supported, without writing new code, using one of the flexible configuration options known as app-parsers.
New supported sources are added regularly. To request support for a new source, please submit an issue with a description of the vendor/product, configuration information, and a compressed pcap (.zip) from a non-production environment.
Many sources can be self-supported. While we encourage sharing new sources via the GitHub project to promote consistency and develop best practices, there is no requirement to engage with the community.
Sources sending legacy, non-conformant, RFC3164-like streams can be assisted by the creation of an \u201cAlmost Syslog\u201d parser. In such a parser the goal is to process the syslog header, allowing other parsers to correctly parse and handle the event. The following example is taken from a currently supported format where the source product used an epoch value in the timestamp field.
#Example event\n #<134>1 1563249630.774247467 devicename security_event ids_alerted signature=1:28423:1 \n # In the example note the vendor incorrectly included \"1\" following PRI defined in RFC5424 as indicating a compliant message\n # The parser must remove the 1 before properly parsing\n # The epoch time is captured by regex\n # The epoch time is converted back into an RFC3306 date and provided to the parser\n block parser syslog_epoch-parser() { \n channel {\n filter { \n message('^(\\<\\d+\\>)(?:1(?= ))? ?(\\d{10,13}(?:\\.\\d+)?) (.*)', flags(store-matches));\n }; \n parser { \n date-parser(\n format('%s.%f', '%s')\n template(\"$2\")\n );\n };\n parser {\n syslog-parser(\n\n flags(assume-utf8, expect-hostname, guess-timezone)\n template(\"$1 $S_ISODATE $3\")\n );\n };\n rewrite(set_rfc3164_epoch); \n\n };\n };\n application syslog_epoch[sc4s-almost-syslog] {\n parser { syslog_epoch-parser(); }; \n };\n
"},{"location":"sources/#standard-syslog-using-message-parsing","title":"Standard Syslog using message parsing","text":"Syslog data conforming to RFC3164 or complying with RFC standards mentioned above can be processed with an app-parser allowing the use of the default port rather than requiring custom ports the following example take from a currently supported source uses the value of \u201cprogram\u201d to identify the source as this program value is unique. Care must be taken to write filter conditions strictly enough to not conflict with similar sources
block parser alcatel_switch-parser() { \n channel {\n rewrite {\n r_set_splunk_dest_default(\n index('netops')\n sourcetype('alcatel:switch')\n vendor('alcatel')\n product('switch')\n template('t_hdr_msg')\n ); \n }; \n\n\n };\n};\napplication alcatel_switch[sc4s-syslog] {\n filter { \n program('swlogd' type(string) flags(prefix));\n }; \n parser { alcatel_switch-parser(); }; \n};\n
"},{"location":"sources/#standard-syslog-vendor-product-by-source","title":"Standard Syslog vendor product by source","text":"In some cases standard syslog is also generic and can not be disambiguated from other sources by message content alone. When this happens and only a single source type is desired the \u201csimple\u201d option above is valid but requires managing a port. The following example allows use of a named port OR the vendor product by source configuration.
block parser dell_poweredge_cmc-parser() { \n channel {\n\n rewrite {\n r_set_splunk_dest_default(\n index('infraops')\n sourcetype('dell:poweredge:cmc:syslog')\n vendor('dell')\n product('poweredge')\n class('cmc')\n ); \n }; \n };\n};\napplication dell_poweredge_cmc[sc4s-network-source] {\n filter { \n (\"${.netsource.sc4s_vendor_product}\" eq \"dell_poweredge_cmc\"\n or \"${SOURCE}\" eq \"s_DELL_POWEREDGE_CMC\")\n and \"${fields.sc4s_vendor_product}\" eq \"\"\n }; \n\n parser { dell_poweredge_cmc-parser(); }; \n};\n
"},{"location":"sources/#filtering-events-from-output","title":"Filtering events from output","text":"In some cases specific events may be considered \u201cnoise\u201d and functionality must be implemented to prevent forwarding of these events to Splunk In version 2.0.0 of SC4S a new feature was implemented to improve the ease of use and efficiency of this progress.
The following example will \u201cnull_queue\u201d (drop) Cisco IOS device events at the debug level. Note that Cisco does not use the PRI to indicate DEBUG, so a message filter is required.
block parser cisco_ios_debug-postfilter() {\n channel {\n #In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible\n rewrite(r_set_dest_splunk_null_queue);\n };\n};\napplication cisco_ios_debug-postfilter[sc4s-postfilter] {\n filter {\n \"${fields.sc4s_vendor}\" eq \"cisco\" and\n \"${fields.sc4s_product}\" eq \"ios\"\n #Note regex reads as\n # start from first position\n # Any atleast 1 char that is not a `-`\n # constant '-7-'\n and message('^%[^\\-]+-7-');\n };\n parser { cisco_ios_debug-postfilter(); };\n};\n
"},{"location":"sources/#another-example-to-drop-events-based-on-src-and-action-values-in-message","title":"Another example to drop events based on \u201csrc\u201d and \u201caction\u201d values in message","text":"#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-checkpoint_drop\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() { \n channel {\n rewrite(r_set_dest_splunk_null_queue);\n };\n};\n\napplication app-dest-rewrite-checkpoint_drop-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n filter {\n match('checkpoint' value('fields.sc4s_vendor') type(string))\n and match('syslog' value('fields.sc4s_product') type(string))\n\n and match('Drop' value('.SDATA.sc4s@2620.action') type(string))\n and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) );\n\n }; \n parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); }; \n};\n
"},{"location":"sources/#the-sc4s-fallback-sourcetype","title":"The SC4S \u201cfallback\u201d sourcetype","text":"If SC4S receives an event on port 514 which has no soup filter, that event will be given a \u201cfallback\u201d sourcetype. If you see events in Splunk with the fallback sourcetype, then you should figure out what source the events are from and determine why these events are not being sourcetyped correctly. The most common reason for events categorized as \u201cfallback\u201d is the lack of a SC4S filter for that source, and in some cases a misconfigured relay which alters the integrity of the message format. In most cases this means a new SC4S filter must be developed. In this situation you can either build a filter or file an issue with the community to request help.
The \u201cfallback\u201d sourcetype is formatted in JSON to allow the administrator to see the constituent syslog-ng \u201cmacros\u201d (fields) that have been automatically parsed by the syslog-ng server. An RFC3164 (legacy BSD syslog) \u201con the wire\u201d raw message is usually (but unfortunately not always) comprised of the following syslog-ng macros, in this order and spacing:
<$PRI> $HOST $LEGACY_MSGHDR$MESSAGE\n
These fields can be very useful in building a new filter for that sourcetype. In addition, the indexed field sc4s_syslog_format
is helpful in determining if the incoming message is standard RFC3164. A value of anything other than rfc3164
or rfc5424_strict
indicates a vendor perturbation of standard syslog, which will warrant more careful examination when building a filter.
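As a hypothetical starting point for that investigation (the sourcetype below assumes the default fallback sourcetype name; adjust it if your deployment overrides the default), a search such as the following surfaces unclassified events together with the syslog format SC4S detected:
index=* sourcetype="sc4s:fallback"
| stats count by host, sc4s_syslog_format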
A key aspect of SC4S is to properly set Splunk metadata prior to the data arriving in Splunk (and before any TA processing takes place). The filters will apply the proper index, source, sourcetype, host, and timestamp metadata automatically by individual data source. Proper values for this metadata (including a recommended index) are included with all \u201cout-of-the-box\u201d log paths included with SC4S and are chosen to properly interface with the corresponding TA in Splunk. The administrator will need to ensure all recommended indexes are created to accept this data if the defaults are not changed.
It is understood that default values will need to be changed in many installations. Each source documented in this section has a table entitled \u201cSourcetype and Index Configuration\u201d, which highlights the default index and sourcetype for each source. See the section \u201cSC4S metadata configuration\u201d in the \u201cConfiguration\u201d page for more information on how to override the default values in this table.
"},{"location":"sources/#unique-listening-ports","title":"Unique listening ports","text":"SC4S supports unique listening ports for each source technology/log path (e.g. Cisco ASA), which is useful when the device is sending data on a port different from the typical default syslog port (UDP port 514). In some cases, when the source device emits data that is not able to be distinguished from other device types, a unique port is sometimes required. The specific environment variables used for setting \u201cunique ports\u201d are outlined in each source document in this section.
Using the default ports as unique listening ports is discouraged since it can lead to unintended consequences. There were cases of customers using port 514 as the unique listening port dedicated for a particular vendor and then sending other events to the same port, which caused some of those events to be misclassified.
In most cases only one \u201cunique port\u201d is needed for each source. However, SC4S also supports multiple network listening ports per source, which can be useful for a narrow set of compliance use cases. When configuring a source port variable to enable multiple ports, use a comma-separated list with no spaces (e.g. SC4S_LISTEN_CISCO_ASA_UDP_PORT=5005,6005
).
Because the unique listening port feature differentiates vendor and product based on the first two underscore-delimited tokens (\u2018_\u2019), it is possible to filter events by an extra string added to the product. For example, if several devices of the same type send logs over different ports, it is possible to route them to different indexes based only on the port value while retaining the proper vendor and product fields. In general, the convention is:
SC4S_LISTEN_{VENDOR}_{PRODUCT}_{PROTOCOL}_PORT={PORT VALUE 1},{PORT VALUE 2}...\n
But for special use cases it can be extended to: SC4S_LISTEN_{VENDOR}_{PRODUCT}_{ADDITIONAL_STRING}_{PROTOCOL}_PORT={PORT VALUE},{PORT VALUE 2}...\n
This feature removes the need for complex pre/post filters. Example:
SC4S_LISTEN_EAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001_UDP_PORT=18514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001\n
SC4S_LISTEN_EAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002_UDP_PORT=28514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002\n
"},{"location":"sources/base/cef/","title":"Common Event Format (CEF)","text":""},{"location":"sources/base/cef/#product-various-products-that-send-cef-format-messages-via-syslog","title":"Product - Various products that send CEF-format messages via syslog","text":"Each CEF product should have their own source entry in this documentation set. In a departure from normal configuration, all CEF products should use the \u201cCEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the CEF log path handles all products sending events to SC4S in the CEF format. Examples of this include Arcsight, Imperva, and Cyberark. Therefore, the CEF environment variables for unique port, archive, etc. should be set only once.
If your deployment has multiple CEF devices that send to more than one port, set the CEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.
The source documentation included below is a reference baseline for any product that sends data using the CEF log path.
Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/base/cef/#splunk-metadata-with-cef-events","title":"Splunk Metadata with CEF events","text":"The keys (first column) in splunk_metadata.csv
for CEF data sources have a slightly different meaning than those for non-CEF ones. The typical vendor_product
syntax is instead replaced by checks against specific columns of the CEF event \u2013 namely the first, second, and fourth columns following the leading CEF:0
(\u201ccolumn 0\u201d). These specific columns refer to the CEF device_vendor
, device_product
, and device_event_class
, respectively. The third column, device_version
, is not used for metadata assignment.
SC4S sets metadata based on the first two columns, and (optionally) the fourth. While the key (first column) in the splunk_metadata
file for non-CEF sources uses a \u201cvendor_product\u201d syntax that is arbitrary, the syntax for this key for CEF events is based on the actual contents of columns 1,2 and 4 from the CEF event, namely:
device_vendor
_device_product
_device_class
The final device_class
portion is optional. Therefore, CEF entries in splunk_metadata
can have a key representing the vendor and product, and others representing a vendor and product coupled with one or more additional classes. This allows for more granular metadata assignment (or overrides).
Here is a snippet of a sample Imperva CEF event that includes a CEF device class entry (which is \u201cFirewall\u201d):
Apr 19 10:29:53 3.3.3.3 CEF:0|Imperva Inc.|SecureSphere|12.0.0|Firewall|SSL Untraceable Connection|Medium|\n
and the corresponding match in splunk_metadata.csv
:
Imperva Inc._SecureSphere_Firewall,sourcetype,imperva:waf:firewall:cef\n
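A hypothetical companion entry showing an index override for the same key (the index name here is purely illustrative) would follow the same pattern:
Imperva Inc._SecureSphere_Firewall,index,netwaf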
"},{"location":"sources/base/cef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/base/cef/#default-source","title":"Default Source","text":"source notes Varies Varies"},{"location":"sources/base/cef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/cef/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/base/cef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_CEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/leef/","title":"Log Extended Event Format (LEEF)","text":""},{"location":"sources/base/leef/#product-various-products-that-send-leef-v1-and-v2-format-messages-via-syslog","title":"Product - Various products that send LEEF V1 and V2 format messages via syslog","text":"Each LEEF product should have their own source entry in this documentation set by vendor. In a departure from normal configuration, all LEEF products should use the \u201cLEEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the LEEF log path handles all products sending events to SC4S in the LEEF format. Examples of this include QRadar itself as well as other legacy systems. Therefore, the LEEF environment variables for unique port, archive, etc. should be set only once.
If your deployment has multiple LEEF devices that send to more than one port, set the LEEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.
The source documentation included below is a reference baseline for any product that sends data using the LEEF log path.
Some vendors implement LEEF v2.0 format events incorrectly, omitting the required \u201ckey=value\u201d separator field from the LEEF header, thus forcing the consumer to assume the default tab \\t
character. SC4S will correctly process this omission, but will not correctly process other non-compliant formats.
The LEEF format allows for the inclusion of a field devTime
containing the device timestamp and allows the sender to also specify the format of this timestamp in another field called devTimeFormat
, which uses the Java Time format. SC4S uses syslog-ng strptime format which is not directly translatable to the Java Time format. Therefore, SC4S has provided support for the following common formats. If needed, additional time formats can be requested via an issue on github.
'%s.%f',\n '%s',\n '%b %d %H:%M:%S.%f',\n '%b %d %H:%M:%S',\n '%b %d %Y %H:%M:%S.%f',\n '%b %e %Y %H:%M:%S',\n '%b %e %H:%M:%S.%f',\n '%b %e %H:%M:%S',\n '%b %e %Y %H:%M:%S.%f',\n '%b %e %Y %H:%M:%S' \n
Ref Link Splunk Add-on LEEF None Product Manual https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_LEEF_Format_Guide_intro.html"},{"location":"sources/base/leef/#splunk-metadata-with-leef-events","title":"Splunk Metadata with LEEF events","text":"The keys (first column) in splunk_metadata.csv
for LEEF data sources have a slightly different meaning than those for non-LEEF ones. The typical vendor_product
syntax is instead replaced by checks against specific columns of the LEEF event \u2013 namely the first and second, columns following the leading LEEF:VERSION
(\u201ccolumn 0\u201d). These specific columns refer to the LEEF device_vendor
, and device_product
, respectively.
device_vendor
_device_product
Here is a snippet of a sample LANCOPE event in LEEF 2.0 format:
<111>Apr 19 10:29:53 3.3.3.3 LEEF:2.0|Lancope|StealthWatch|1.0|41|^|src=192.0.2.0^dst=172.50.123.1^sev=5^cat=anomaly^srcPort=81^dstPort=21^usrName=joe.black\n
and the corresponding match in splunk_metadata.csv
:
Lancope_StealthWatch,source,lancope:stealthwatch\n
"},{"location":"sources/base/leef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes LEEF:1 Common sourcetype for all LEEF v1 events LEEF:2:<separator>
Common sourcetype for all LEEF v2 events separator
is the printable literal or hex value of the separator used in the event"},{"location":"sources/base/leef/#default-source","title":"Default Source","text":"source notes vendor
:product
Varies"},{"location":"sources/base/leef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/leef/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/base/leef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_LEEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_LEEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/nix/","title":"Generic *NIX","text":"Many appliance vendor utilize Linux and BSD distributions as the foundation of the solution. When configured to log via syslog, these devices\u2019 OS logs (from a security perspective) can be monitored using the common Splunk Nix TA.
Note: This is NOT a replacement for or alternative to the Splunk Universal forwarder on Linux and Unix. For general-purpose server applications, the Universal Forwarder offers more comprehensive collection of events and metrics appropriate for both security and operations use cases.
Ref Link Splunk Add-on https://splunkbase.splunk.com/app/833/"},{"location":"sources/base/nix/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes nix:syslog None"},{"location":"sources/base/nix/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes nix_syslog nix:syslog osnix none"},{"location":"sources/base/nix/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/base/nix/#setup-and-configuration","title":"Setup and Configuration","text":"The SIMPLE source configuration allows configuration of a log path for SC4S using a single port to a single index/sourcetype combination to quickly onboard new sources that have not been formally supported in the product. Source data must use RFC5424 or a common variant of RFC3164 formatting.
The keys (first column) in splunk_metadata.csv
for SIMPLE data sources is a user-created key using the vendor_product
convention. For example, to on-board a new product first firewall
using a source type of first:firewall
and index netfw
, add the following two lines to the configuration file as shown:
first_firewall,index,netfw\nfirst_firewall,sourcetype,first:firewall\n
"},{"location":"sources/base/simple/#options","title":"Options","text":"For the variables below, replace VENDOR_PRODUCT
with the key (converted to upper case) used in the splunk_metadata.csv
. Based on the example above, to establish a tcp listener for first firewall
we would use SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT
.
SIMPLE
data sources must use RFC5424 or a common variant of RFC3164 formatting.SIMPLE
data source must listen on its own unique port list. Port overlap with other sources, either SIMPLE
ones or those served by regular log paths, are not allowed and will cause an error at startup.splunk_metadata.csv
must be in the form vendor_product
(lower case).SIMPLE
environment variables must have a core of VENDOR_PRODUCT
(upper case).SIMPLE
form of these LISTEN
variables after a regular SC4S log path is developed for a given source. You can, of course, continue to listen for this source on the same unique ports after having developed the new log path, but use the SC4S_LISTEN_<VENDOR_PRODUCT>_<protocol>_PORT
form of the variable to ensure the newly developed log path will listen on the specified unique ports.The product has been purchased and republished under a new product name by Tenable this configuration is obsolete.
"},{"location":"sources/vendor/Alsid/Alsid/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-aruba_ap.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-aruba_ap[sc4s-vps] {\n filter { \n host(\"aruba-ap-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('aruba')\n product('ap')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Aruba/clearpass/","title":"Clearpass","text":""},{"location":"sources/vendor/Aruba/clearpass/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-aruba_clearpass.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-aruba_clearpass[sc4s-vps] {\n filter { \n host(\"aruba-cp-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('aruba')\n product('clearpass')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Avaya/","title":"SIP Manager","text":""},{"location":"sources/vendor/Avaya/#key-facts","title":"Key facts","text":"\\n
Use of TCP will cause dataloss#/opt/sc4s/local/config/app-parsers/app-vps-barracuda_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-barracuda_syslog[sc4s-vps] {\n filter { \n netmask(169.254.100.1/24)\n or host(\"barracuda\" type(string) flags(ignore-case))\n }; \n parser { \n p_set_netsource_fields(\n vendor('barracuda')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Barracuda/waf_on_prem/","title":"Barracuda WAF (On Premises)","text":""},{"location":"sources/vendor/Barracuda/waf_on_prem/#key-facts","title":"Key facts","text":"%Y-%m-%d %H:%M:%S.%f %z
Login to Symantec DLP and edit the Syslog Response rule. The default configuration will appear as follows
$POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n
DO NOT replace the text prepend the following literal
SymantecDLPAlert: \n
Result note the space between \u2018:\u2019 and \u2018$\u2019
SymantecDLPAlert: $POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n
"},{"location":"sources/vendor/Broadcom/dlp/#syslog-system-events","title":"Syslog System events","text":"<drive>:\\SymantecDLP\\Protect\\config
directory on Windows or the /opt/SymantecDLP/Protect/config
directory on Linux.Manager.properties
file.systemevent.syslog.format
systemevent.syslog.format= {0.EN_US} SymantecDLP: {1.EN_US} - {2.EN_US}
#/opt/sc4s/local/config/app-parsers/app-vps-symantec_dlp.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-symantec_dlp[sc4s-vps] {\n filter { \n #netmask(169.254.100.1/24)\n #host(\"-esx-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('symantec')\n product('dlp')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Broadcom/ep/","title":"Symantec Endpoint Protection (SEPM)","text":""},{"location":"sources/vendor/Broadcom/ep/#key-facts","title":"Key facts","text":"Symantec now Broadcom ProxySG/ASG is formerly known as the \u201cBluecoat\u201d proxy
Broadcom products are inclusive of products formerly marketed under Symantec and Bluecoat brands.
"},{"location":"sources/vendor/Broadcom/proxy/#key-facts","title":"Key facts","text":"<111>1 $(date)T$(x-bluecoat-hour-utc):$(x-bluecoat-minute-utc):$(x-bluecoat-second-utc)Z $(s-computername) ProxySG - splunk_format - c-ip=$(c-ip) rs-Content-Type=$(quot)$(rs(Content-Type))$(quot) cs-auth-groups=$(cs-auth-groups) cs-bytes=$(cs-bytes) cs-categories=$(cs-categories) cs-host=$(cs-host) cs-ip=$(cs-ip) cs-method=$(cs-method) cs-uri-port=$(cs-uri-port) cs-uri-scheme=$(cs-uri-scheme) cs-User-Agent=$(quot)$(cs(User-Agent))$(quot) cs-username=$(cs-username) dnslookup-time=$(dnslookup-time) duration=$(duration) rs-status=$(rs-status) rs-version=$(rs-version) s-action=$(s-action) s-ip=$(s-ip) service.name=$(service.name) service.group=$(service.group) s-supplier-ip=$(s-supplier-ip) s-supplier-name=$(s-supplier-name) sc-bytes=$(sc-bytes) sc-filter-result=$(sc-filter-result) sc-status=$(sc-status) time-taken=$(time-taken) x-exception-id=$(x-exception-id) x-virus-id=$(x-virus-id) c-url=$(quot)$(url)$(quot) cs-Referer=$(quot)$(cs(Referer))$(quot) c-cpu=$(c-cpu) connect-time=$(connect-time) cs-auth-groups=$(cs-auth-groups) cs-headerlength=$(cs-headerlength) cs-threat-risk=$(cs-threat-risk) r-ip=$(r-ip) r-supplier-ip=$(r-supplier-ip) rs-time-taken=$(rs-time-taken) rs-server=$(rs(server)) s-connect-type=$(s-connect-type) s-icap-status=$(s-icap-status) s-sitename=$(s-sitename) s-source-port=$(s-source-port) s-supplier-country=$(s-supplier-country) sc-Content-Encoding=$(sc(Content-Encoding)) sr-Accept-Encoding=$(sr(Accept-Encoding)) x-auth-credential-type=$(x-auth-credential-type) x-cookie-date=$(x-cookie-date) x-cs-certificate-subject=$(x-cs-certificate-subject) x-cs-connection-negotiated-cipher=$(x-cs-connection-negotiated-cipher) x-cs-connection-negotiated-cipher-size=$(x-cs-connection-negotiated-cipher-size) x-cs-connection-negotiated-ssl-version=$(x-cs-connection-negotiated-ssl-version) x-cs-ocsp-error=$(x-cs-ocsp-error) x-cs-Referer-uri=$(x-cs(Referer)-uri) x-cs-Referer-uri-address=$(x-cs(Referer)-uri-address) x-cs-Referer-uri-extension=$(x-cs(Referer)-uri-extension) x-cs-Referer-uri-host=$(x-cs(Referer)-uri-host) x-cs-Referer-uri-hostname=$(x-cs(Referer)-uri-hostname) x-cs-Referer-uri-path=$(x-cs(Referer)-uri-path) x-cs-Referer-uri-pathquery=$(x-cs(Referer)-uri-pathquery) x-cs-Referer-uri-port=$(x-cs(Referer)-uri-port) x-cs-Referer-uri-query=$(x-cs(Referer)-uri-query) x-cs-Referer-uri-scheme=$(x-cs(Referer)-uri-scheme) x-cs-Referer-uri-stem=$(x-cs(Referer)-uri-stem) x-exception-category=$(x-exception-category) x-exception-category-review-message=$(x-exception-category-review-message) x-exception-company-name=$(x-exception-company-name) x-exception-contact=$(x-exception-contact) x-exception-details=$(x-exception-details) x-exception-header=$(x-exception-header) x-exception-help=$(x-exception-help) x-exception-last-error=$(x-exception-last-error) x-exception-reason=$(x-exception-reason) x-exception-sourcefile=$(x-exception-sourcefile) x-exception-sourceline=$(x-exception-sourceline) x-exception-summary=$(x-exception-summary) x-icap-error-code=$(x-icap-error-code) x-rs-certificate-hostname=$(x-rs-certificate-hostname) x-rs-certificate-hostname-category=$(x-rs-certificate-hostname-category) x-rs-certificate-observed-errors=$(x-rs-certificate-observed-errors) x-rs-certificate-subject=$(x-rs-certificate-subject) x-rs-certificate-validate-status=$(x-rs-certificate-validate-status) x-rs-connection-negotiated-cipher=$(x-rs-connection-negotiated-cipher) 
x-rs-connection-negotiated-cipher-size=$(x-rs-connection-negotiated-cipher-size) x-rs-connection-negotiated-ssl-version=$(x-rs-connection-negotiated-ssl-version) x-rs-ocsp-error=$(x-rs-ocsp-error) cs-uri-extension=$(cs-uri-extension) cs-uri-path=$(cs-uri-path) cs-uri-query=$(quot)$(cs-uri-query)$(quot) c-uri-pathquery=$(c-uri-pathquery)\n
"},{"location":"sources/vendor/Broadcom/sslva/","title":"SSL Visibility Appliance","text":""},{"location":"sources/vendor/Broadcom/sslva/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app_parsers/app-vps-brocade_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-brocade_syslog[sc4s-vps] {\n filter { \n host(\"^test_brocade-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('brocade')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Buffalo/","title":"Terastation","text":""},{"location":"sources/vendor/Buffalo/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-buffalo_terastation.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-buffalo_terastation[sc4s-vps] {\n filter { \n host(\"^test_buffalo_terastation-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('buffalo')\n product('terastation')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Checkpoint/firewallos/","title":"Firewall OS","text":"Firewall OS format is by devices supporting a direct Syslog output
"},{"location":"sources/vendor/Checkpoint/firewallos/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual unknown"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log:fw:syslog None"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_fw cp_log:fw:syslog netops none"},{"location":"sources/vendor/Checkpoint/firewallos/#parser-configuration","title":"Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-checkpoint_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-checkpoint_fw[sc4s-vps] {\n filter { \n host(\"^checkpoint_fw-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('checkpoint')\n product('fw')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Checkpoint/logexporter_5424/","title":"Log Exporter (Syslog)","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#key-facts","title":"Key Facts","text":"514/TCP
.Checkpoint Software blades with a CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source metadata is left as their defaults.
key source index notes checkpoint_syslog_dlp dlp netdlp none checkpoint_syslog_email email email none checkpoint_syslog_firewall firewall netfw none checkpoint_syslog_sessions sessions netops none checkpoint_syslog_web web netproxy none checkpoint_syslog_audit audit netops none checkpoint_syslog_endpoint endpoint netops none checkpoint_syslog_network network netops checkpoint_syslog_ids ids netids checkpoint_syslog_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#source-configuration","title":"Source Configuration","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#splunk-side","title":"Splunk Side","text":"splunk_metadata.csv
file and set the index
and sourcetype
as required for the data source.cp
terminal and use the expert
command to log-in in expert mode.$EXPORTERDIR
shell variable is defined with:echo \"$EXPORTERDIR\"\n
$EXPORTERDIR/targets
with:LOG_EXPORTER_NAME='SyslogToSplunk' # Name this something unique but meaningful\nTARGET_SERVER='example.internal' # The indexer or heavy forwarder to send logs to. Can be an FQDN or an IP address.\nTARGET_PORT='514' # Syslog defaults to 514\nTARGET_PROTOCOL='tcp' # IETF Syslog is specifically TCP\n\ncp_log_export add name \"$LOG_EXPORTER_NAME\" target-server \"$TARGET_SERVER\" target-port \"$TARGET_PORT\" protocol \"$TARGET_PROTOCOL\" format 'syslog'\n
cp \"$EXPORTERDIR/conf/SyslogFormatDefinition.xml\" \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\"\n
$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml
by modifying the start_message_body
, fields_separatator
, and field_value_separatator
keys as shown below. a. Note: The misspelling of \u201cseparator\u201d as \u201cseparatator\u201d is intentional, and is to line up with both Checkpoint\u2019s documentation and parser implementation.<start_message_body>[sc4s@2620 </start_message_body>\n<!-- ... -->\n<fields_separatator> </fields_separatator>\n<!-- ... -->\n<field_value_separatator>=</field_value_separatator>\n
conf
directory with:cp \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\" \"$EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf\"\n
$EXPORTERDIR/targets/$LOG_EXPORTER_NAME/targetConfiguration.xml
by adding the reference to the $EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf/SplunkRecommendedFormatDefinition.xml
under the key <formatHeaderFile>
. a. For example, if $EXPORTERDIR
is /opt/CPrt-R81/log_exporter
and $LOG_EXPORTER_NAME
is SyslogToSplunk
, the absolute path will become:<formatHeaderFile>/opt/CPrt-R81/log_exporter/targets/SyslogToSplunk/conf/SplunkRecommendedFormatDefinition.xml</formatHeaderFile>\n
cp_log_export restart name \"$LOG_EXPORTER_NAME\"\n
The \u201cSplunk Format\u201d is legacy and should not be used for new deployments see Log Exporter (Syslog)
"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#key-facts","title":"Key Facts","text":"The Splunk host
field will be derived as follows using the first match
hostname
fieldIf the host is in the format <host>-v_<bladename>
use bladename
for host
Checkpoint Software blades with CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source meta data is left at default
key source index notes checkpoint_splunk_dlp dlp netdlp none checkpoint_splunk_email email email none checkpoint_splunk_firewall firewall netfw none checkpoint_splunk_os program:${program} netops none checkpoint_splunk_sessions sessions netops none checkpoint_splunk_web web netproxy none checkpoint_splunk_audit audit netops none checkpoint_splunk_endpoint endpoint netops none checkpoint_splunk_network network netops checkpoint_splunk_ids ids netids checkpoint_splunk_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL no Suppress any duplicate product+loguid pairs processed within 2 seconds of the last matching event SC4S_LISTEN_CHECKPOINT_SPLUNK_OLD_HOST_RULES empty string when set toyes
reverts host name selection order to originsicname\u2013>origin_sic_name\u2013>hostname"},{"location":"sources/vendor/Cisco/cisco_ace/","title":"Application Control Engine (ACE)","text":""},{"location":"sources/vendor/Cisco/cisco_ace/#key-facts","title":"Key facts","text":"EXTRACT-AA-signature = CSCOacs_(?<signature>\\S+):?\n# Note the value of this config is empty to disable\nEXTRACT-AA-syslog_message = \nEXTRACT-acs_message_header2 = ^CSCOacs_\\S+\\s+(?<log_session_id>\\S+)\\s+(?<total_segments>\\d+)\\s+(?<segment_number>\\d+)\\s+(?<acs_message>.*)\n
"},{"location":"sources/vendor/Cisco/cisco_asa/","title":"ASA/FTD (Firepower)","text":""},{"location":"sources/vendor/Cisco/cisco_asa/#key-facts","title":"Key facts","text":"If feasible for you, you can use following log configuration on the ESA. The log name configured on the ESA can then be parsed easily by sc4s.
ESA Log Name ESA Log Type sc4s_gui_logs HTTP Logs sc4s_mail_logs IronPort Text Mail Logs sc4s_amp AMP Engine Logs sc4s_audit_logs Audit Logs sc4s_antispam Anti-Spam Logs sc4s_content_scanner Content Scanner Logs sc4s_error_logs IronPort Text Mail Logs (Loglevel: Critical) sc4s_system_logs System Logs"},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:esa:http The HTTP logs of Cisco IronPort ESA record information about the secure HTTP services enabled on the interface. cisco:esa:textmail Text mail logs of Cisco IronPort ESA record email information and status. cisco:esa:amp Advanced Malware Protection (AMP) of Cisco IronPort ESA records malware detection and blocking, continuous analysis, and retrospective alerting details. cisco:esa:authentication These logs record successful user logins and unsuccessful login attempts. cisco:esa:cef The Consolidated Event Logs summarizes each message event in a single log line. cisco:esa:error_logs Error logs of Cisco IronPort ESA records error that occurred for ESA configurations or internal issues. cisco:esa:content_scanner Content scanner logs of Cisco IronPort ESA scans messages that contain password-protected attachments for malicious activity and data privacy. cisco:esa:antispam Anti-spam logs record the status of the anti-spam scanning feature of your system, including the status on receiving updates of the latest anti-spam rules. Also, any logs related to the Context Adaptive Scanning Engine are logged here. cisco:esa:system_logs System logs record the boot information, virtual appliance license expiration alerts, DNS status information, and comments users typed using commit command."},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_esa cisco:esa:http email None cisco_esa cisco:esa:textmail email None cisco_esa cisco:esa:amp email None cisco_esa cisco:esa:authentication email None cisco_esa cisco:esa:cef email None cisco_esa cisco:esa:error_logs email None cisco_esa cisco:esa:content_scanner email None cisco_esa cisco:esa:antispam email None cisco_esa cisco:esa:system_logs email None"},{"location":"sources/vendor/Cisco/cisco_esa/#parser-configuration","title":"Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-cisco_esa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_esa[sc4s-vps] {\n filter { \n host(\"^esa-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('esa')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Cisco/cisco_imc/","title":"Cisco Integrated Management Controller (IMC)","text":""},{"location":"sources/vendor/Cisco/cisco_imc/#key-facts","title":"Key facts","text":"Cisco Network Products of multiple types share common logging characteristics the following types are known to be compatible:
f_cisco_ios
as required. Use this feature only if you want to send raw logs to Splunk (without any events being dropped). Set the following property in the env_file:
SC4S_ENABLE_CISCO_IOS_RAW_MSG=yes\n
Restart SC4S; it will then send the entire message without dropping anything. TA-meraki 1.1.5
requires sourcetype meraki
. Configure this either by defining Cisco Meraki hosts:
#/opt/sc4s/local/config/app_parsers/app-vps-cisco_meraki.conf\n#File name provided is a suggestion it must be globally unique\n\nblock parser app-vps-test-cisco_meraki() {\n channel {\n if {\n filter { host(\"^test-mx-\") };\n parser { \n p_set_netsource_fields(\n vendor('meraki')\n product('securityappliances')\n ); \n };\n } elif {\n filter { host(\"^test-mr-\") };\n parser { \n p_set_netsource_fields(\n vendor('meraki')\n product('accesspoints')\n ); \n };\n } elif {\n filter { host(\"^test-ms-\") };\n parser { \n p_set_netsource_fields(\n vendor('meraki')\n product('switches')\n ); \n };\n } else {\n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('meraki')\n ); \n };\n };\n }; \n};\n\n\napplication app-vps-test-cisco_meraki[sc4s-vps] {\n filter {\n host(\"^test-meraki-\")\n or host(\"^test-mx-\")\n or host(\"^test-mr-\")\n or host(\"^test-ms-\")\n };\n parser { app-vps-test-cisco_meraki(); };\n};\n
Or by a unique port:
# /opt/sc4s/env_file\nSC4S_LISTEN_CISCO_MERAKI_UDP_PORT=5004\nSC4S_LISTEN_MERAKI_SECURITYAPPLIANCES_UDP_PORT=5005\nSC4S_LISTEN_MERAKI_ACCESSPOINTS_UDP_PORT=5006\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT=5007\n
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_mm.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_mm[sc4s-vps] {\n filter { \n host('^test-cmm-')\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('mm')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Cisco/cisco_ms/","title":"Meeting Server","text":""},{"location":"sources/vendor/Cisco/cisco_ms/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-cisco_ms.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_ms[sc4s-vps] {\n filter { \n host('^test-cms-')\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('ms')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Cisco/cisco_tvcs/","title":"TelePresence Video Communication Server (TVCS)","text":""},{"location":"sources/vendor/Cisco/cisco_tvcs/#links","title":"Links","text":"Ref Link Product Manual https://www.cisco.com/c/en/us/products/unified-communications/telepresence-video-communication-server-vcs/index.html"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:vcs none"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_tvcs cisco:tvcs main none"},{"location":"sources/vendor/Cisco/cisco_ucm/","title":"Unified Communications Manager (UCM)","text":""},{"location":"sources/vendor/Cisco/cisco_ucm/#key-facts","title":"Key facts","text":"| cisco:wsa:l4tm | The L4TM logs of Cisco IronPort WSA record sites added to the L4TM block and allow lists. | | cisco:wsa:squid | The access logs of Cisco IronPort WSA version prior to 11.7 record Web Proxy client history in squid. | | cisco:wsa:squid:new | The access logs of Cisco IronPort WSA version since 11.7 record Web Proxy client history in squid. | | cisco:wsa:w3c:recommended | The access logs of Cisco IronPort WSA version since 12.5 record Web Proxy client history in W3C. |
"},{"location":"sources/vendor/Cisco/cisco_wsa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_wsa cisco:wsa:l4tm netproxy None cisco_wsa cisco:wsa:squid netproxy None cisco_wsa cisco:wsa:squid:new netproxy None cisco_wsa cisco:wsa:w3c:recommended netproxy None"},{"location":"sources/vendor/Cisco/cisco_wsa/#filter-type","title":"Filter type","text":"IP, Netmask or Host
"},{"location":"sources/vendor/Cisco/cisco_wsa/#source-setup-and-configuration","title":"Source Setup and Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-cisco_wsa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_wsa[sc4s-vps] {\n filter { \n host(\"^wsa-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('wsa')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Citrix/netscaler/","title":"Netscaler ADC/SDX","text":""},{"location":"sources/vendor/Citrix/netscaler/#key-facts","title":"Key facts","text":"clearswift:${PROGRAM}
none"},{"location":"sources/vendor/Clearswift/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes clearswift clearswift:${PROGRAM}
email None"},{"location":"sources/vendor/Clearswift/#parser-configuration","title":"Parser Configuration","text":"```c
"},{"location":"sources/vendor/Clearswift/#optsc4slocalconfigapp-parsersapp-vps-clearswiftconf","title":"/opt/sc4s/local/config/app-parsers/app-vps-clearswift.conf","text":""},{"location":"sources/vendor/Clearswift/#file-name-provided-is-a-suggestion-it-must-be-globally-unique","title":"File name provided is a suggestion it must be globally unique","text":"application app-vps-clearswift[sc4s-vps] { filter { host(\u201ctest-clearswift-\u201d type(string) flags(prefix)) }; parser { p_set_netsource_fields( vendor(\u2018clearswift\u2019) product(\u2018clearswift\u2019) ); }; };
"},{"location":"sources/vendor/Cohesity/cluster/","title":"Cluster","text":""},{"location":"sources/vendor/Cohesity/cluster/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-dell_cmc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_cmc[sc4s-vps] {\n filter { \n host(\"test-dell-cmc-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dell')\n product('poweredge_cmc')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Dell/emc_powerswitchn/","title":"EMC Powerswitch N Series","text":""},{"location":"sources/vendor/Dell/emc_powerswitchn/#key-facts","title":"Key facts","text":"Through sc4s-vps
#/opt/sc4s/local/config/app-parsers/app-vps-dell_switch_n.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_switch_n[sc4s-vps] {\n filter { \n host(\"test-dell-switch-n-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dellemc')\n product('powerswitch_n')\n ); \n }; \n};\n
or through unique port
# /opt/sc4s/env_file \nSC4S_LISTEN_DELLEMC_POWERSWITCH_N_UDP_PORT=5005\n
#/opt/sc4s/local/config/app_parsers/app-vps-dell_rsa_secureid.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_rsa_secureid[sc4s-vps] {\n filter { \n host(\"test_rsasecureid*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dell')\n product('rsa_secureid')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Dell/sonic/","title":"Dell Networking SONiC","text":""},{"location":"sources/vendor/Dell/sonic/#key-facts","title":"Key facts","text":"Through sc4s-vps
#/opt/sc4s/local/config/app-parsers/app-vps-dell_sonic.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_sonic[sc4s-vps] {\n filter { \n host(\"sonic\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dell')\n product('sonic')\n ); \n }; \n};\n
or through unique port
# /opt/sc4s/env_file \nSC4S_LISTEN_DELL_SONIC_UDP_PORT=5005\n
The sourcetype was changed in version 2.35.0 to make it compliant with the corresponding TA.
"},{"location":"sources/vendor/F5/bigip/","title":"BigIP","text":""},{"location":"sources/vendor/F5/bigip/#key-facts","title":"Key facts","text":"<111>1 2020-05-28T22:48:15Z foo.example.com F5 - access_json - {\"event_type\":\"HTTP_REQUEST\", \"src_ip\":\"10.66.98.41\"}
This source type requires a customer-specific Splunk Add-on to provide utility value"},{"location":"sources/vendor/F5/bigip/#index-configuration","title":"Index Configuration","text":"key index notes f5_bigip netops none f5_bigip_irule netops none f5_bigip_asm netwaf none f5_bigip_apm netops none f5_bigip_nix netops if f_f5_bigip
is not set the index osnix will be used f5_bigip_access_json netops none"},{"location":"sources/vendor/F5/bigip/#parser-configuration","title":"Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-f5_bigip.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-f5_bigip[sc4s-vps] {\n filter { \n \"${HOST}\" eq \"f5_bigip\"\n }; \n parser { \n p_set_netsource_fields(\n vendor('f5')\n product('bigip')\n ); \n }; \n};\n
"},{"location":"sources/vendor/FireEye/cms/","title":"CMS","text":""},{"location":"sources/vendor/FireEye/cms/#key-facts","title":"Key facts","text":"config log memory filter\n\nset forward-traffic enable\n\nset local-traffic enable\n\nset sniffer-traffic disable\n\nset anomaly enable\n\nset voip disable\n\nset multicast-traffic enable\n\nset dns enable\n\nend\n\nconfig system global\n\nset cli-audit-log enable\n\nend\n\nconfig log setting\n\nset neighbor-event enable\n\nend\n
"},{"location":"sources/vendor/Fortinet/fortios/#options","title":"Options","text":"Variable default description SC4S_OPTION_FORTINET_SOURCETYPE_PREFIX fgt Notice starting with version 1.6 of the fortinet add-on and app the sourcetype required changes from fgt_*
to fortinet_*
; this is a breaking change. To use the new sourcetype, set this variable to fortigate
in the env_file"},{"location":"sources/vendor/Fortinet/fortiweb/","title":"FortiWeb","text":""},{"location":"sources/vendor/Fortinet/fortiweb/#key-facts","title":"Key facts","text":"config log syslog-policy\n\nedit splunk \n\nconfig syslog-server-list \n\nedit 1\n\nset server x.x.x.x\n\nset port 514 (Example. Should be the same as default or dedicated port selected for sc4s) \n\nend\n\nend\n\nconfig log syslogd\n\nset policy splunk\n\nset status enable\n\nend\n
"},{"location":"sources/vendor/GitHub/","title":"Enterprise Server","text":""},{"location":"sources/vendor/GitHub/#key-facts","title":"Key facts","text":"client_ip
prefix in message"},{"location":"sources/vendor/HAProxy/syslog/#index-configuration","title":"Index Configuration","text":"key index notes haproxy_syslog netlb none"},{"location":"sources/vendor/HPe/ilo/","title":"ILO (4+)","text":""},{"location":"sources/vendor/HPe/ilo/#key-facts","title":"Key facts","text":"HP ProCurve switches use multiple log formats.
"},{"location":"sources/vendor/HPe/procurve/#key-facts","title":"Key facts","text":"Parser configuration is conditional only required if additional events are produced by the device that do not match the default configuration.
#/opt/sc4s/local/config/app-parsers/app-vps-ibm_datapower.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ibm_datapower[sc4s-vps] {\n filter { \n host(\"^test-ibmdp-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('ibm')\n product('datapower')\n ); \n }; \n};\n
"},{"location":"sources/vendor/ISC/bind/","title":"bind","text":"This source type is often re-implemented by specific add-ons such as infoblox or bluecat if a more specific source type is desired see that source documentation for instructions
"},{"location":"sources/vendor/ISC/bind/#key-facts","title":"Key facts","text":"This source type is often re-implemented by specific add-ons such as infoblox or bluecat if a more specific source type is desired see that source documentation for instructions
"},{"location":"sources/vendor/ISC/dhcpd/#key-facts","title":"Key facts","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/ISC/dhcpd/#options","title":"Options","text":"None
"},{"location":"sources/vendor/ISC/dhcpd/#verification","title":"Verification","text":"An active site will generate frequent events use the following search to check for new events
Verify that the timestamp and host values match as expected.
index=<asconfigured> (sourcetype=\"isc:dhcp\")\n
"},{"location":"sources/vendor/Imperva/incapusla/","title":"Incapsula","text":""},{"location":"sources/vendor/Imperva/incapusla/#key-facts","title":"Key facts","text":"Warning: Despite the TA indication this data source is CIM compliant all versions of NIOS including the most recent available as of 2019-12-17 do not support the DNS data model correctly. For DNS security use cases use Splunk Stream instead.
"},{"location":"sources/vendor/InfoBlox/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-infoblox_nios.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-infoblox_nios[sc4s-vps] {\n filter { \n host(\"infoblox-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('infoblox')\n product('nios')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Juniper/junos/","title":"JunOS","text":""},{"location":"sources/vendor/Juniper/junos/#key-facts","title":"Key facts","text":"The TA link provided has commented out the CEF support as of 2022-03-18 manual edits are required
"},{"location":"sources/vendor/Kaspersky/es_cef/#key-facts","title":"Key facts","text":"Leef format has not been tested samples needed
"},{"location":"sources/vendor/Kaspersky/es_leef/#key-facts","title":"Key facts","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/McAfee/epo/#options","title":"Options","text":"Variable default description SC4S_LISTEN_MCAFEE_EPO_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_MCAFEE_EPO_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_MCAFEE_EPO_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source SC4S_SOURCE_TLS_ENABLE no This must be set to yes so that SC4S listens for encrypted syslog from ePO"},{"location":"sources/vendor/McAfee/epo/#additional-setup","title":"Additional setup","text":"You must create a certificate for the SC4S server to receive encrypted syslog from ePO. A self-signed certificate is fine. Generate a self-signed certificate on the SC4S host:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout /opt/sc4s/tls/server.key -out /opt/sc4s/tls/server.pem
Uncomment the following line in /lib/systemd/system/sc4s.service
to allow the docker container to use the certificate:
Environment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"
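After editing the unit file, reload systemd and restart SC4S so the change takes effect (a generic systemd sketch, not SC4S-specific instructions):
sudo systemctl daemon-reload\nsudo systemctl restart sc4s\n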
From the command line of the SC4S host, run: openssl s_client -connect localhost:6514
The message:
socket: Bad file descriptor\nconnect:errno=9\n
indicates that SC4S is not listening for encrypted syslog. Note that a netstat
may show the port open, but it is not accepting encrypted traffic as configured.
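As an additional sanity check (a generic sketch, not from the SC4S documentation), you can confirm that a listener is actually bound to the TLS port on the SC4S host:
sudo ss -lntp | grep 6514\n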
It may take several minutes for the syslog option to be available in the registered servers
dropdown.
#/opt/sc4s/local/config/app-parsers/app-vps-mikrotik_routeros.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-mikrotik_routeros[sc4s-vps] {\n filter { \n host(\"test-mrtros-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('mikrotik')\n product('routeros')\n ); \n }; \n};\n
"},{"location":"sources/vendor/NetApp/ontap/","title":"OnTap","text":""},{"location":"sources/vendor/NetApp/ontap/#key-facts","title":"Key facts","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/PaloaltoNetworks/panos/#setup-and-configuration","title":"Setup and Configuration","text":"An active firewall will generate frequent events. Use the following search to validate events are present per source device
index=<asconfigured> sourcetype=pan:*| stats count by host\n
"},{"location":"sources/vendor/PaloaltoNetworks/prisma/","title":"Prisma SD-WAN ION","text":""},{"location":"sources/vendor/PaloaltoNetworks/prisma/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-pfsense_firewall.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-pfsense_firewall[sc4s-vps] {\n filter { \n \"${HOST}\" eq \"pfsense_firewall\"\n }; \n parser { \n p_set_netsource_fields(\n vendor('pfsense')\n product('firewall')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Polycom/rprm/","title":"RPRM","text":""},{"location":"sources/vendor/Polycom/rprm/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-proofpoint_pps.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-proofpoint_pps[sc4s-vps] {\n filter { \n host(\"pps-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('proofpoint')\n product('pps')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Pulse/connectsecure/","title":"Pulse","text":""},{"location":"sources/vendor/Pulse/connectsecure/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-raritan_dsx.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-raritan_dsx[sc4s-vps] {\n filter { \n host(\"raritan_dsx*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('raritan')\n product('dsx')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Ricoh/mfp/","title":"MFP","text":""},{"location":"sources/vendor/Ricoh/mfp/#key-facts","title":"Key facts","text":"Used when more specific steelhead or steelconnect can not be identified
"},{"location":"sources/vendor/Riverbed/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter { \n host(....)\n }; \n parser { \n p_set_netsource_fields(\n vendor('riverbed')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Riverbed/steelconnect/","title":"Steelconnect","text":""},{"location":"sources/vendor/Riverbed/steelconnect/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter { \n host(....)\n }; \n parser { \n p_set_netsource_fields(\n vendor('riverbed')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Ruckus/SmartZone/","title":"Smart Zone","text":"Some events may not match the source format please report issues if found
"},{"location":"sources/vendor/Ruckus/SmartZone/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-schneider_apc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-schneider_apc[sc4s-vps] {\n filter { \n host(\"test_apc-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('schneider')\n product('apc')\n ); \n }; \n};\n
"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/","title":"SecureAuth IdP","text":""},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-sophos_webappliance.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-sophos_webappliance[sc4s-vps] {\n filter { \n host(\"test-sophos-webapp-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('sophos')\n product('webappliance')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Spectracom/","title":"NTP Appliance","text":""},{"location":"sources/vendor/Spectracom/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-spectracom_ntp.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-spectracom_ntp[sc4s-vps] {\n filter { \n netmask(169.254.100.1/24)\n }; \n parser { \n p_set_netsource_fields(\n vendor('spectracom')\n product('ntp')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/","title":"Splunk Heavy Forwarder","text":"In certain network architectures such as those using data diodes or those networks requiring \u201cin the clear\u201d inspection at network egress SC4S can be used to accept specially formatted output from Splunk as RFC5424 syslog.
"},{"location":"sources/vendor/Splunk/heavyforwarder/#key-facts","title":"Key facts","text":"Index Source and Sourcetype will be used as determined by the Source/HWF
"},{"location":"sources/vendor/Splunk/heavyforwarder/#splunk-configuration","title":"Splunk Configuration","text":"#Because audit trail is protected and we can't transform it we can not use default we must use tcp_routing\n[tcpout]\ndefaultGroup = NoForwarding\n\n[tcpout:nexthop]\nserver = localhost:9000\nsendCookedData = false\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#propsconf","title":"props.conf","text":"[default]\nADD_EXTRA_TIME_FIELDS = none\nANNOTATE_PUNCT = false\nSHOULD_LINEMERGE = false\nTRANSFORMS-zza-syslog = syslog_canforward, metadata_meta, metadata_source, metadata_sourcetype, metadata_index, metadata_host, metadata_subsecond, metadata_time, syslog_prefix, syslog_drop_zero\n# The following applies for TCP destinations where the IETF frame is required\nTRANSFORMS-zzz-syslog = syslog_octal, syslog_octal_append\n# Comment out the above and uncomment the following for udp\n#TRANSFORMS-zzz-syslog-udp = syslog_octal, syslog_octal_append, syslog_drop_zero\n\n[audittrail]\n# We can't transform this source type its protected\nTRANSFORMS-zza-syslog =\nTRANSFORMS-zzz-syslog =\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#transformsconf","title":"transforms.conf","text":"syslog_canforward]\nREGEX = ^.(?!audit)\nDEST_KEY = _TCP_ROUTING\nFORMAT = nexthop\n\n[metadata_meta]\nSOURCE_KEY = _meta\nREGEX = (?ims)(.*)\nFORMAT = ~~~SM~~~$1~~~EM~~~$0 \nDEST_KEY = _raw\n\n[metadata_source]\nSOURCE_KEY = MetaData:Source\nREGEX = ^source::(.*)$\nFORMAT = s=\"$1\"] $0\nDEST_KEY = _raw\n\n[metadata_sourcetype]\nSOURCE_KEY = MetaData:Sourcetype\nREGEX = ^sourcetype::(.*)$\nFORMAT = st=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_index]\nSOURCE_KEY = _MetaData:Index\nREGEX = (.*)\nFORMAT = i=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_host]\nSOURCE_KEY = MetaData:Host\nREGEX = ^host::(.*)$\nFORMAT = \" h=\"$1\" $0\nDEST_KEY = _raw\n\n[syslog_prefix]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT = <1>1 - - SPLUNK - COOKED [fields@274489 $0\nDEST_KEY = _raw\n\n[metadata_time]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT = t=\"$1$0\nDEST_KEY = _raw\n\n[metadata_subsecond]\nSOURCE_KEY = _meta\nREGEX = \\_subsecond\\:\\:(\\.\\d+)\nFORMAT = $1 $0\nDEST_KEY = _raw\n\n[syslog_octal]\nINGEST_EVAL= mlen=length(_raw)+1\n\n[syslog_octal_append]\nINGEST_EVAL = _raw=mlen + \" \" + _raw\n\n[syslog_drop_zero]\nINGEST_EVAL = queue=if(mlen<10,\"nullQueue\",queue)\n
"},{"location":"sources/vendor/Splunk/sc4s/","title":"Splunk Connect for Syslog (SC4S)","text":""},{"location":"sources/vendor/Splunk/sc4s/#key-facts","title":"Key facts","text":"SC4S events and metrics are generated automatically and no specific ports or filters need to be configured for the collection of this data.
"},{"location":"sources/vendor/Splunk/sc4s/#setup-and-configuration","title":"Setup and Configuration","text":"SC4S_DEST_SPLUNK_SC4S_METRICS_HEC
. See the \u201cOptions\u201d section below for details. event: produce metrics as plain text events; single: produce metrics using Splunk Enterprise 7.3 single metrics format; multi: produce metrics using Splunk Enterprise >8.1 multi metric format; multi2: produces an improved (reduced resource consumption) multi metric format. SC4S_SOURCE_MARK_MESSAGE_NULLQUEUE yes (yes"},{"location":"sources/vendor/Splunk/sc4s/#verification","title":"Verification","text":"SC4S will generate versioning events at startup. These startup events can be used to validate HEC is set up properly on the Splunk side.
index=<asconfigured> sourcetype=sc4s:events | stats count by host\n
Metrics can be observed via the \u201cAnalytics\u2013>Metrics\u201d navigation in the Search and Reporting app in Splunk.
t_msg_hdr
for original raw"},{"location":"sources/vendor/StealthWatch/StealthIntercept/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes stealthbits_stealthintercept StealthINTERCEPT netids none stealthbits_stealthintercept_alerts StealthINTERCEPT:alerts netids Note TA does not support this source type"},{"location":"sources/vendor/Tanium/platform/","title":"Platform","text":"This source requires a TLS connection; in most cases enabling TLS and using the default port 6514 is adequate. The source is understood to require a valid certificate.
"},{"location":"sources/vendor/Tanium/platform/#key-facts","title":"Key facts","text":"All Ubiquity Unfi firewalls, switches, and access points share a common syslog configuration via the NMS.
#/opt/sc4s/local/config/app-parsers/app-vps-ubiquiti_unifi_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ubiquiti_unifi_fw[sc4s-vps] {\n filter { \n host(\"usg-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('ubiquiti')\n product('unifi')\n ); \n }; \n};\n
"},{"location":"sources/vendor/VMWare/airwatch/","title":"Airwatch","text":"AirWatch is a product used for enterprise mobility management (EMM) software and standalone management systems for content, applications and email.
"},{"location":"sources/vendor/VMWare/airwatch/#key-facts","title":"Key facts","text":"Vmware vsphere product line has multiple old and known issues in syslog output.
WARNING: Use of a load balancer with UDP will cause \u201ccorrupt\u201d event behavior due to out-of-order message processing by the load balancer.
Ref Link Splunk Add-on ESX https://splunkbase.splunk.com/app/5603/ Splunk Add-on Vcenter https://splunkbase.splunk.com/app/5601/ Splunk Add-on nxs none Splunk Add-on vsan none"},{"location":"sources/vendor/VMWare/vsphere/#sourcetypes","title":"Sourcetypes","text":"sourcetype notesvmware:esxlog:${PROGRAM}
None vmware:nsxlog:${PROGRAM}
None vmware:vclog:${PROGRAM}
None nix:syslog When used with a default port, this will follow the generic NIX configuration. When using a dedicated port, IP or host rules events will follow the index configuration for vmware nsx"},{"location":"sources/vendor/VMWare/vsphere/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes vmware_vsphere_esx vmware:esxlog:${PROGRAM}
infraops none vmware_vsphere_nsx vmware:nsxlog:${PROGRAM}
infraops none vmware_vsphere_nsxfw vmware:nsxlog:dfwpktlogs
netfw none vmware_vsphere_vc vmware:vclog:${PROGRAM}
infraops none"},{"location":"sources/vendor/VMWare/vsphere/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content when using the default configuration. SC4S will normalize the structure of VMware events from multiple incorrectly formed variants to RFC 5424 format to improve parsing.
"},{"location":"sources/vendor/VMWare/vsphere/#setup-and-configuration","title":"Setup and Configuration","text":"An active proxy will generate frequent events. Use the following search to validate events are present per source device
index=<asconfigured> sourcetype=\"vmware:vsphere:*\" | stats count by host\n
"},{"location":"sources/vendor/VMWare/vsphere/#automatic-parser-configuration","title":"Automatic Parser Configuration","text":"Enable the following options in the env_file
#Do not enable with a SNAT load balancer\nSC4S_USE_NAME_CACHE=yes\n#Combine known split events into a single event for Splunk\nSC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG=yes\n#Learn vendor product from recognized events and apply to generic events\n#for example after the first vpxd event sshd will utilize vps \"vmware_vsphere_nix_syslog\" rather than \"nix_syslog\"\nSC4S_USE_VPS_CACHE=yes\n
"},{"location":"sources/vendor/VMWare/vsphere/#manual-parser-configuration","title":"Manual Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-vmware_vsphere.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-vmware_vsphere[sc4s-vps] {\n filter { \n #netmask(169.254.100.1/24)\n #host(\"-esx-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('vmware')\n product('vsphere')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Varonis/datadvantage/","title":"DatAdvantage","text":""},{"location":"sources/vendor/Varonis/datadvantage/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-wallix_bastion.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-wallix_bastion[sc4s-vps] {\n filter { \n host('^wasb')\n }; \n parser { \n p_set_netsource_fields(\n vendor('wallix')\n product('bastion')\n ); \n }; \n};\n
"},{"location":"sources/vendor/XYPro/mergedaudit/","title":"Merged Audit","text":"XY Pro merged audit also called XYGate or XMA is the defacto solution for syslog from HP Nonstop Server (Tandem)
"},{"location":"sources/vendor/XYPro/mergedaudit/#key-facts","title":"Key facts","text":"The ZScaler product manual includes and extensive section of configuration for multiple Splunk TCP input ports around page 26. When using SC4S these ports are not required and should not be used. Simply configure all outputs from the LSS to utilize the IP or host name of the SC4S instance and port 514
"},{"location":"sources/vendor/Zscaler/lss/#key-facts","title":"Key facts","text":"The ZScaler product manual includes and extensive section of configuration for multiple Splunk TCP input ports around page 26. When using SC4S these ports are not required and should not be used. Simply configure all outputs from the NSS to utilize the IP or host name of the SC4S instance and port 514
"},{"location":"sources/vendor/Zscaler/nss/#key-facts","title":"Key facts","text":"\\tvendor=Zscaler\\tproduct=alerts
immediately prior to the \\n
in the NSS Alert Web format. See Zscaler manual for more info. zscaler_nss_dns Requires format customization add \\tvendor=Zscaler\\tproduct=dns
immediately prior to the \\n
in the NSS DNS format. See Zscaler manual for more info. zscaler_nss_web None zscaler_nss_fw Requires format customization add \\tvendor=Zscaler\\tproduct=fw
immediately prior to the \\n
in the Firewall format. See Zscaler manual for more info."},{"location":"sources/vendor/Zscaler/nss/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes zscaler_nss_alerts zscalernss-alerts main none zscaler_nss_dns zscalernss-dns netdns none zscaler_nss_fw zscalernss-fw netfw none zscaler_nss_web zscalernss-web netproxy none zscaler_nss_tunnel zscalernss-tunnel netops none zscaler_zia_audit zscalernss-zia-audit netops none zscaler_zia_sandbox zscalernss-zia-sandbox main none"},{"location":"sources/vendor/Zscaler/nss/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/Zscaler/nss/#setup-and-configuration","title":"Setup and Configuration","text":"Loggen is a tool used to load test syslog implementations.
"},{"location":"sources/vendor/syslog-ng/loggen/#key-facts","title":"Key facts","text":"loggen --inet --dgram --number 1 <ip> <port>
RFC5424 example:loggen --inet --dgram -PF --number 1 <ip> <port>
Refer to above manual link for more examples."},{"location":"sources/vendor/syslog-ng/loggen/#index-configuration","title":"Index Configuration","text":"key index notes syslogng_loggen main none"},{"location":"troubleshooting/troubleshoot_SC4S_server/","title":"Validate server startup and operations","text":"This topic helps you find the most common solutions to startup and operational issues with SC4S.
If you plan to run SC4S with standard configuration, we recommend that you perform startup out of systemd.
If you are using a custom configuration of SC4S with significant modifications, for example, multiple unique ports for sources, hostname/CIDR block configuration for sources, or new log paths, start SC4S with the container runtime command podman
or docker
directly from the command line as described in this topic. When you are satisfied with the operation, you can then transition to systemd.
If you are running out of systemd, you may see this at startup:
[root@sc4s syslog-ng]# systemctl start sc4s\nJob for sc4s.service failed because the control process exited with error code. See \"systemctl status sc4s.service\" and \"journalctl -xe\" for details.\n
Most issues that occur with startup and operation of SC4S involve syntax errors or duplicate listening ports. Try the following to resolve the issue:
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-your-sc4s-container-is-running","title":"Check that your SC4S container is running","text":"If you start with systemd and the container is not running, check with the following:
journalctl -b -u sc4s | tail -100\n
This will print the last 100 lines of the system journal in detail, which should be sufficient to see the specific syntax or runtime failure and guide you in troubleshooting the unexpected container exit."},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-sc4s-container-starts-and-runs-properly-outside-of-the-systemd-service-environment","title":"Check that the SC4S container starts and runs properly outside of the systemd service environment","text":"As an alternative to launching with systemd during the initial installation phase, you can test the container startup outside of the systemd startup environment. This is especially important for troubleshooting or log path development, for example, when SC4S_DEBUG_CONTAINER
is set to \u201cyes\u201d.
The following command launches the container directly from the command line. This command assumes the local mounted directories are set up as shown in the \u201cgetting started\u201d examples. Adjust for your local requirements; if you are using Docker, substitute \u201cdocker\u201d for \u201cpodman\u201d as the container runtime command.
/usr/bin/podman run \\\n -v splunk-sc4s-var:/var/lib/syslog-ng \\\n -v /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z \\\n -v /opt/sc4s/archive:/var/lib/syslog-ng/archive:z \\\n -v /opt/sc4s/tls:/etc/syslog-ng/tls:z \\\n --env-file=/opt/sc4s/env_file \\\n --network host \\\n --name SC4S \\\n --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-container-is-still-running-when-systemd-indicates-that-its-not-running","title":"Check that the container is still running when systemd indicates that it\u2019s not running","text":"In some instances, particularly when SC4S_DEBUG_CONTAINER=yes
, an SC4S container might not shut down completely when starting/stopping out of systemd, and systemd will attempt to start a new container when one is already running with the SC4S
name. You will see this type of output when viewing the journal after a failed start caused by this condition, or a similar message when the container is run directly from the CLI:
Jul 15 18:45:20 sra-sc4s-alln01-02 podman[11187]: Error: error creating container storage: the container name \"SC4S\" is already in use by \"894357502b2a7142d097ea3ca1468d1cb4fbc69959a9817a1bbe145a09d37fb9\". You have to remove that container...\nJul 15 18:45:20 sra-sc4s-alln01-02 systemd[1]: sc4s.service: Main process exited, code=exited, status=125/n/a\n
To rectify this, execute:
podman rm -f SC4S\n
SC4S should then start normally.
Do not use systemd when SC4S_DEBUG_CONTAINER
is set to \u201cyes\u201d, instead use the CLI podman
or docker
commands directly to start/stop SC4S.
SC4S performs basic HEC connectivity and index checks at startup and creates logs that indicate general connection issues and indexes that may not be accessible or configured on Splunk. To check the container logs that contain the results of these tests, run:
/usr/bin/<podman|docker> logs SC4S\n
You will see entries similar to the following:
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful; checking indexes...\n\nSC4S_ENV_CHECK_INDEX: Checking email {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking epav {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking main {\"text\":\"Success\",\"code\":0}\n
Note the specifics of the indexes that are not configured correctly, and rectify this in your Splunk configuration. If this is not addressed properly, you may see output similar to the below when data flows into SC4S:
Mar 16 19:00:06 b817af4e89da syslog-ng[1]: Server returned with a 4XX (client errors) status code, which means we are not authorized or the URL is not found.; url='https://splunk-instance.com:8088/services/collector/event', status_code='400', driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5'\nMar 16 19:00:06 b817af4e89da syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5', worker_index='4', time_reopen='10', batch_size='1000'\n
This is an indication that the standard d_hec
destination in syslog-ng, which is the route to Splunk, is rejected by the HEC endpoint. A 400
error is commonly caused by an index that has not been created in Splunk. One bad index can damage the batch, in this case, 1000 events, and prevent any of the data from being sent to Splunk. Make sure that the container logs are free of these kinds of errors in production. You can use the alternate HEC debug destination to help debug this condition by sending direct \u201ccurl\u201d commands to the HEC endpoint outside of the SC4S setting."},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-listening-ports","title":"Issue: Invalid SC4S listening ports","text":"SC4S exclusively grants a port to a device when SC4S_LISTEN_{vendor}_{product}_{TCP/UDP/TLS}_PORT={port}
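For reference, a direct curl test of the HEC endpoint (as suggested above) might look like the following sketch, which uses the standard Splunk HEC event API; the URL, token, and index shown are placeholders you must replace:
curl -k \"https://splunk-instance.com:8088/services/collector/event\" -H \"Authorization: Splunk <hec_token>\" -d '{\"event\": \"sc4s hec connectivity test\", \"sourcetype\": \"sc4s:events\", \"index\": \"main\"}'\n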
.
During startup, SC4S validates that listening ports are configured correctly, and shows any issues in container logs.
You will receive an error message similar to the following if listening ports for MERAKI SWITCHES
are configured incorrectly:
SC4S_LISTEN_MERAKI_SWITCHES_TCP_PORT: Wrong port number, don't use default port like (514,614,6514)\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT: 7000 is not unique and has already been used for another source\nSC4S_LISTEN_MERAKI_SWITCHES_TLS_PORT: 999999999999 must be integer within the range (0, 10000)\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-sc4s-local-disk-resource-issues","title":"Issue: SC4S local disk resource issues","text":"Check the HEC connection to Splunk. If the connection is down for a long period of time, the local disk buffer used for backup will exhaust local disk resources. The size of the local disk buffer is configured in the env_file
: Disk buffer configuration
Check the env_file
to see whether SC4S_DEST_GLOBAL_ALTERNATES
is set to d_hec_debug
,d_archive
, or another file-based destination. Any of these settings will consume significant local disk space.
d_hec_debug
and d_archive
are organized by sourcetype; the du -sh *
command can be used in each subdirectory to find the culprit.
podman volume rm splunk-sc4s-var\npodman volume create splunk-sc4s-var\n
podman system prune [--all]\n
UDP Input Buffer Settings let you request a certain buffer size when configuring the UDP sockets. The kernel must have its parameters set to the same size or greater than what the syslog-ng configuration is requesting, or the following will occur in the SC4S logs:
/usr/bin/<podman|docker> logs SC4S\n
The following warning message is not a failure condition unless you are reaching the upper limit of your hardware performance. The kernel refused to set the receive buffer (SO_RCVBUF) to the requested size, you probably need to adjust buffer related kernel parameters; so_rcvbuf='1703936', so_rcvbuf_set='425984'\n
Make changes to /etc/sysctl.conf
, changing receive buffer values to 16 MB: net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360 \n
Run the following commands to implement your changes: sysctl -p restart SC4S \n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-tls-listener","title":"Issue: Invalid SC4S TLS listener","text":"To verify the correct configuration of the TLS server use the following command. Replace the IP, FQDN, and port as appropriate:
<podman|docker> run -ti drwetter/testssl.sh --severity MEDIUM --ip 127.0.0.1 selfsigned.example.com:6510\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-unable-to-retrieve-logs-from-non-rfc-5424-compliant-sources","title":"Issue: Unable to retrieve logs from non RFC-5424 compliant sources","text":"If a data source you are trying to ingest claims it is RFC-5424 compliant but you get an \u201cError processing log message:\u201d from SC4S, this message indicates that the data source still violates the RFC-5424 standard in some way. In this case, the underlying syslog-ng process will send an error event, with the location of the error in the original event highlighted with >@<
to indicate where the error occurred. Here is an example error message:
{ [-]\n ISODATE: 2020-05-04T21:21:59.001+00:00\n MESSAGE: Error processing log message: <14>1 2020-05-04T21:21:58.117351+00:00 arcata-pks-cluster-1 pod.log/cf-workloads/logspinner-testing-6446b8ef - - [kubernetes@47450 cloudfoundry.org/process_type=\"web\" cloudfoundry.org/rootfs-version=\"v75.0.0\" cloudfoundry.org/version=\"eae53cc3-148d-4395-985c-8fef0606b9e3\" controller-revision-hash=\"logspinner-testing-6446b8ef05-7db777754c\" cloudfoundry.org/app_guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" cloudfoundry.org/source_type=\"APP\" security.istio.io/tlsMode=\"istio\" statefulset.kubernetes.io/pod-n>@<ame=\"logspinner-testing-6446b8ef05-0\" cloudfoundry.org/guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" namespace_name=\"cf-workloads\" object_name=\"logspinner-testing-6446b8ef05-0\" container_name=\"opi\" vm_id=\"vm-e34452a3-771e-4994-666e-bfbc7eb77489\"] Duration 10.00299412s TotalSent 10 Rate 0.999701 \n PID: 33\n PRI: <43>\n PROGRAM: syslog-ng\n}\n
In this example the error can be seen in the snippet statefulset.kubernetes.io/pod-n>@<ame
. The error states that the \u201cSD-NAME\u201d (the left-hand side of the name=value pairs) cannot be longer than 32 printable ASCII characters, and the indicated name exceeds that. Ideally you should address this issue with the vendor, however, you can add an exception to the SC4S filter log path or an alternative workaround log path created for the data source.
In this example, the reason RAWMSG
is not shown in the fields above is because this error message is coming from syslog-ng itself. In messages of the type Error processing log message:
where the PROGRAM is shown as syslog-ng
, your incoming message is not RFC-5424 compliant.
In non-containerized SC4S deployments, if you try to start the SC4S service, the terminal may be overwhelmed by the internal and metrics logs. Example of the issue can be found here: Github Terminal abuse issue
To resolve this, set following property in env_file
:
SC4S_SEND_METRICS_TERMINAL=no\n
Restart SC4S.
SC4S_DEBUG_CONTAINER
is set to \u201cyes\u201d. Use the CLI podman
or docker
commands directly to start/stop SC4S.To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_CEF=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_VMWARE_CB_PROTECT=yes\n
Restart SC4S.
env_file
: SC4S_DISABLE_DROP_INVALID_CISCO=yes\n
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_VMWARE_VSPHERE=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_RAW_BSD=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_XML=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_HPE=yes\n
Restart SC4S and it will not drop any invalid HPE JETDIRECT format.
NOTE: Please use only in this case of exception and this is splunk-unsupported feature. Also this setting might impact SC4S performance.
"},{"location":"troubleshooting/troubleshoot_resources/","title":"SC4S Logging and Troubleshooting Resources","text":""},{"location":"troubleshooting/troubleshoot_resources/#helpful-linux-and-container-commands","title":"Helpful Linux and container commands","text":""},{"location":"troubleshooting/troubleshoot_resources/#linux-service-systemd-commands","title":"Linux service (systemd) commands","text":"systemctl status sc4s
systemctl start service
systemctl stop service
systemctl restart service
systemctl enable sc4s
journalctl -b -u sc4s
All of the following container commands can be run with the podman
or docker
runtime.
sudo podman logs SC4S
podman exec -it SC4S bash
podman volume rm splunk-sc4s-var\npodman volume create splunk-sc4s-var\n
podman pull ghcr.io/splunk/splunk-connect-for-syslog/container3
podman system prune
podman load <tar>
Check your SC4S port using the nc
command. Run this command where SC4S is hosted and check data in Splunk for success and failure:
echo '<raw_sample>' |nc <host> <port>\n
"},{"location":"troubleshooting/troubleshoot_resources/#obtain-raw-message-events","title":"Obtain raw message events","text":"During development or troubleshooting, you may need to obtain samples of the messages exactly as they are received by SC4S. These events contain the full syslog message, including the <PRI>
preamble, and are different from messages that have been processed by SC4S and Splunk.
These raw messages help to determine that SC4S parsers and filters are operating correctly, and are needed for playback when testing. The community supporting SC4S will always first ask for raw samples before any development or troubleshooting exercise.
Here are some options for obtaining raw logs for one or more sourcetypes:
tcpdump
on the collection interface and display the results in ASCII. You will see events similar to the following buried in the packet contents: <165>1 2007-02-15T09:17:15.719Z router1 mgd 3046 UI_DBASE_LOGOUT_EVENT [junos@2636.1.1.1.2.18 username=\"user\"] User 'user' exiting configuration mode\n
env_file
to set the variable SC4S_SOURCE_STORE_RAWMSG=yes
and restart SC4S. This stores the raw message in a syslog-ng macro called RAWMSG
and is displayed in Splunk for all fallback
messages.RAWMSG
is not displayed, but can be viewed by changing the output template to one of the JSON variants, including t_JSON_3164 or t_JSON_5424, depending on RFC message type. See SC4S metadata configuration for more details.RAWMSG
to Splunk regardless the sourcetype you can also temporarily place the following final filter in the local parser directory: block parser app-finalfilter-fetch-rawmsg() {\n channel {\n rewrite {\n r_set_splunk_dest_default(\n template('t_fallback_kv')\n );\n };\n };\n};\n\napplication app-finalfilter-fetch-rawmsg[sc4s-finalfilter] {\n parser { app-finalfilter-fetch-rawmsg(); };\n};\n
Once you have edited SC4S_SOURCE_STORE_RAWMSG=yes
in /opt/sc4s/env_file
and the finalfilter
placed in /opt/sc4s/local/config/app_parsers
, restart the SC4S instance to add raw messages to all the messages sent to Splunk.NOTE: Be sure to turn off the RAWMSG
variable when you are finished, because it doubles the memory and disk requirements of SC4S. Do not use RAWMSG
in production.
d_rawmsg
for one or more sourcetypes. This destination will write the raw messages to the container directory /var/syslog-ng/archive/rawmsg/<sourcetype>
, which is typically mapped locally to /opt/sc4s/archive
. Within this directory, the logs are organized by host and time.exec
into the container (advanced task)","text":"You can confirm how the templating process created the actual syslog-ng configuration files by calling exec
into the container and navigating the syslog-ng config filesystem directly. To do this, run
/usr/bin/podman exec -it SC4S /bin/bash\n
and navigate to /opt/syslog-ng/etc/
to see the actual configuration files in use. If you are familiar with container operations and syslog-ng, you can modify files directly and reload syslog-ng with the command kill -1 1
in the container. You can also run the /entrypoint.sh
script, or a subset of it, such as everything but syslog-ng, and have complete control over the templating and underlying syslog-ng process. This is an advanced topic and further help can be obtained through the github issue tracker and Slack channels."},{"location":"troubleshooting/troubleshoot_resources/#keeping-a-failed-container-running-advanced-topic","title":"Keeping a failed container running (advanced topic)","text":"To debug a configuration syntax issue at startup, keep the container running after a syslog-ng startup failure. In order to facilitate troubleshooting and make syslog-ng configuration changes from within a running container, the container can be forced to remain running when syslog-ng fails to start (which normally terminates the container). To enable this, add SC4S_DEBUG_CONTAINER=yes
to the env_file
. Use this capability in conjunction with exec calls into the container.
NOTE: Do not enable the debug container mode while running out of systemd. Instead, run the container manually from the CLI, so that you can use the podman
or docker
commands needed to start, stop, and clean up cruft left behind by the debug process. Only when SC4S_DEBUG_CONTAINER
is set to \u201cno\u201d (or completely unset) should systemd startup processing resume.
Time zone mismatches can occur if SC4S and logHost are not in same time zones. To resolve this, create a filter using sc4s-lp-dest-format-d_hec_fmt
, for example:
#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() { \n channel {\n rewrite { fix-time-zone(\"EST5EDT\"); };\n };\n};\napplication app-dest-rewrite-fix_tz_something-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n filter {\n match('checkpoint' value('fields.sc4s_vendor') type(string)) <- this must be customized\n and match('syslog' value('fields.sc4s_product') type(string)) <- this must be customized\n and match('Drop' value('.SDATA.sc4s@2620.action') type(string)) <- this must be customized\n and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) ); <- this must be customized\n\n }; \n parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); }; \n};\n
If destport, container, and proto are not available in indexed fields, you can create a post-filter:
#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-fortinet_fortios-d_fmt_hec_default() {\n channel {\n rewrite {\n fix-time-zone(\"EST5EDT\");\n };\n };\n};\n\napplication app-dest-rewrite-device-d_fmt_hec_default[sc4s-postfilter] {\n filter {\n match(\"xxxx\", value(\"fields.sc4s_destport\") type(glob)); <- this must be customized\n };\n parser { app-dest-rewrite-fortinet_fortios-d_fmt_hec_default(); };\n};\n
Note that filter match statement should be aligned to your data The parser accepts time zone in formats: \u201cAmerica/New York\u201d or \u201cEST5EDT\u201d, but not short in form such as \u201cEST\u201d.
"},{"location":"troubleshooting/troubleshoot_resources/#issue-cyberark-log-problems","title":"Issue: CyberArk log problems","text":"When data is received on the indexers, all events are merged together into one event. Check the following link for CyberArk configuration information: https://cyberark-customers.force.com/s/article/00004289.
"},{"location":"troubleshooting/troubleshoot_resources/#issue-sc4s-events-drop-when-another-interface-is-used-to-receive-logs","title":"Issue: SC4S events drop when another interface is used to receive logs","text":"When a second or alternate interface is used to receive syslog traffic, RPF (Reverse Path Forwarding) filtering in RHEL, which is configured as default configuration, may drop events. To resolve this, add a static route for the source device to point back to the dedicated syslog interface. See https://access.redhat.com/solutions/53031.
"},{"location":"troubleshooting/troubleshoot_resources/#issue-splunk-does-not-ingest-sc4s-events-from-other-virtual-machines","title":"Issue: Splunk does not ingest SC4S events from other virtual machines","text":"When data is transmitted through an echo message from the same instance, data is sent successfully to Splunk. However, when the echo is sent from a different instance, the data may not appear in Splunk and the errors are not reported in the logs. To resolve this issue, check whether an internal firewall is enabled. If an internal firewall is active, verify whether the default port 514 or the port which you have used is blocked. Here are some commands to check and enable your firewall:
#To list all the firewall ports\nsudo firewall-cmd --list-all\n#to enable 514 if its not enabled\nsudo firewall-cmd --zone=public --permanent --add-port=514/udp\nsudo firewall-cmd --reload\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Splunk Connect for Syslog!","text":"Splunk Connect for Syslog is an open source packaged solution for getting data into Splunk. It is based on the syslog-ng Open Source Edition (Syslog-NG OSE) and transports data to Splunk via the Splunk HTTP event Collector (HEC) rather than writing events to disk for collection by a Universal Forwarder.
"},{"location":"#product-goals","title":"Product Goals","text":"Splunk Support: If you are an existing Splunk customer with access to the Support Portal, create a support ticket for the quickest resolution to any issues you experience. Here are some examples of when it may be appropriate to create a support ticket: - If you experience an issue with the current version of SC4S, such as a feature gap or a documented feature that is not working as expected. - If you have difficulty with the configuration of SC4S, either at the back end or with the out-of-box parsers or index configurations. - If you experience performance issues and need help understanding the bottlenecks. - If you have any questions or issues with the SC4S documentation.
GitHub Issues: For all enhancement requests, please feel free to create GitHub issues. We prioritize and work on issues based on their priority and resource availability. You can help us by tagging the requests with the appropriate labels.
Splunk Developers are active in the external usergroup on best effort basis, please use support case/github issues to resolve your issues quickly
"},{"location":"#contributing","title":"Contributing","text":"We welcome feedback and contributions from the community! Please see our contribution guidelines for more information on how to get involved.
"},{"location":"#license","title":"License","text":"Configuration and documentation licensed subject to CC0
Code and scripts licensed subject to BSD-2-Clause
Third Party Axoflow image of syslog-ng License
Third Party Syslog-NG (OSE) License
Splunk welcomes contributions from the SC4S community, and your feedback and enhancements are appreciated. There\u2019s always code that can be clarified, functionality that can be extended, and new data filters to develop, and documentation to refine. If you see something you think should be fixed or added, go for it!
"},{"location":"CONTRIBUTING/#data-safety","title":"Data Safety","text":"Splunk Connect for Syslog is a community built and maintained product. Anyone with internet access can get a Splunk GitHub account and participate. As with any publicly available repository, care must be taken to never share private data via Issues, Pull Requests or any other mechanisms. Any data that is shared in the Splunk Connect for Syslog GitHub repository is made available to the entire Community without limits. Members of the Community and/or their employers (including Splunk) assume no responsibility or liability for any damages resulting from the sharing of private data via the Splunk GitHub.
Any data samples shared in the Splunk GitHub repository must be free of private data. * Working locally, identify potentially sensitive field values in data samples (Public IP address, URL, Hostname, Etc.) * Replace all potentially sensitive field values with synthetic values * Manually review data samples to re-confirm they are free of private data before sharing in the Splunk GitHub
"},{"location":"CONTRIBUTING/#prerequisites","title":"Prerequisites","text":"When contributing to this repository, please first discuss the change you wish to make via a GitHub issue or Slack message with the owners of this repository.
"},{"location":"CONTRIBUTING/#setup-development-environment","title":"Setup Development Environment","text":"For a basic development environment docker and a bash shell is all that is required. For a more complete IDE experience see our wiki (Setup PyCharm)[https://github.com/splunk/splunk-connect-for-syslog/wiki/SC4S-Development-Setup-Using-PyCharm]
"},{"location":"CONTRIBUTING/#feature-requests-and-bug-reports","title":"Feature Requests and Bug Reports","text":"Have ideas on improvements or found a problem? While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue. Please report any issues or bugs you find through GitHub\u2019s issue tracker.
If you are reporting a bug, please include the following details:
We want to hear about your enhancements as well. Feel free to submit them as issues:
Look through our issue tracker to find problems to fix! Feel free to comment and tag community members of this project with any questions or concerns.
"},{"location":"CONTRIBUTING/#pull-requests","title":"Pull Requests","text":"What is a \u201cpull request\u201d? It informs the project\u2019s core developers about the changes you want to review and merge. Once you submit a pull request, it enters a stage of code review where you and others can discuss its potential modifications and even add more commits to it later on.
If you want to learn more, please consult this tutorial on how pull requests work in the GitHub Help Center.
Here\u2019s an overview of how you can make a pull request against this project:
git clone git@github.com:YOUR_GITHUB_USERNAME/splunk-connect-for-syslog.git\ncd splunk-connect-for-syslog\n
git checkout -b your-bugfix-branch-name develop\n
cd splunk-connect-for-syslog\n./test-with-compose.sh\n
git commit -m \"\"\ngit push\n
There are two aspects of code review: giving and receiving. To make it easier for your PR to receive reviews, consider the reviewers will need you to:
Testing is the responsibility of all contributors. In general, we try to adhere to TDD, writing the test first. There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test.
We could always use improvements to our documentation! Anyone can contribute to these docs - whether you\u2019re new to the project, you\u2019ve been around a long time, and whether you self-identify as a developer, an end user, or someone who just can\u2019t stand seeing typos. What exactly is needed?
To add commit messages to release notes, tag the message in the following format:
[TYPE] <commit message>\n
[TYPE] can be among the following * FEATURE * FIX * DOC * TEST * CI * REVERT * FILTERADD * FILTERMOD Sample commit:\ngit commit -m \"[TEST] test-message\"\n
"},{"location":"architecture/","title":"SC4S Architectural Considerations","text":"SC4S provides performant and reliable syslog data collection. When you are planning your configuration, review the following architectural considerations. These recommendations pertain to the Syslog protocol and age, and are not specific to Splunk Connect for Syslog.
"},{"location":"architecture/#the-syslog-protocol","title":"The syslog Protocol","text":"The syslog protocol design prioritizes speed and efficiency, which can occur at the expense of resiliency and reliability. User Data Protocol (UDP) provides the ability to \u201csend and forget\u201d events over the network without regard to or acknowledgment of receipt. Transport Layer Secuirty (TLS) and Secure Sockets Layer (SSL) protocols are also supported, though UDP prevails as the preferred syslog transport for most data centers.
Because of these tradeoffs, traditional methods to provide scale and resiliency do not necessarily transfer to syslog.
"},{"location":"architecture/#ip-protocol","title":"IP protocol","text":"By default, SC4S listens on ports using IPv4. IPv6 is also supported, see SC4S_IPV6_ENABLE
in source configuration options.
Since syslog is a \u201csend and forget\u201d protocol, it does not perform well when routed through substantial network infrastructure. This includes front-side load balancers and WAN. The most reliable way to collect syslog traffic is to provide for edge collection rather than centralized collection. If you centrally locate your syslog server, the UDP and (stateless) TCP traffic cannot adjust and data loss will occur.
"},{"location":"architecture/#syslog-data-collection-at-scale","title":"syslog Data Collection at Scale","text":"As a best practice, do not co-locate syslog-ng servers for horizontal scale and load balance to them with a front-side load balancer:
Attempting to load balance for scale can cause more data loss due to normal device operations and attendant buffer loss. A simple, robust single server or shared-IP cluster provides the best performance.
Front-side load balancing causes inadequate data distribution on the upstream side, leading to uneven data load on the indexers.
Load balancing for high availability does not work well for stateless, unacknowledged syslog traffic. More data is preserved when you use a simpler design such as vMotioned VMs. With syslog, the protocol itself is prone to loss, and syslog data collection can be made \u201cmostly available\u201d at best.
"},{"location":"architecture/#udp-vs-tcp","title":"UDP vs. TCP","text":"Run your syslog configuration on UDP rather than TCP.
The syslogd daemon optimally uses UDP for log forwarding to reduce overhead, because UDP\u2019s streaming method does not require the overhead of establishing a network session. UDP also reduces load on the network stream, because no receipt verification or window adjustment is required.
TCP uses acknowledgement signals (ACKs) to avoid data loss; however, loss can still occur when:
Use TCP if the syslog event is larger than the maximum size of a UDP packet on your network; this typically applies to Web Proxy, DLP, and IDS type sources. To mitigate the drawbacks of TCP you can use TLS over TCP:
SC4S is primarily controlled by environment variables. This topic describes the categories and variables you need to properly configure SC4S for your environment.
"},{"location":"configuration/#global-configuration-variables","title":"Global configuration variables","text":"Variable Values Description SC4S_USE_REVERSE_DNS yes or no (default) Use reverse DNS to identify hosts when HOST is not valid in the syslog header. SC4S_REVERSE_DNS_KEEP_FQDN yes or no (default) When enabled, SC4S will not extract the hostname from FQDN, and instead will pass the full domain name to the host. SC4S_CONTAINER_HOST string Variable that is passed to the container to identify the actual log host for container implementations.If the host value is not present in an event, and you require that a true hostname be attached to each event, SC4S provides an optional ability to perform a reverse IP to name lookup. If the variable SC4S_USE_REVERSE_DNS
is set to \u201cyes\u201d, then SC4S first checks host.csv
and replaces the value of host
with the specified value that matches the incoming IP address. If no value is found in host.csv
, SC4S attempts a reverse DNS lookup against the configured nameserver. In this case, SC4S by default extracts only the hostname from FQDN (example.domain.com
-> example
). If the SC4S_REVERSE_DNS_KEEP_FQDN
variable is set to \u201cyes\u201d, the full domain name is assigned to the host field.
Note: Using the SC4S_USE_REVERSE_DNS
variable can have a significant impact on performance if the reverse DNS facility is not performant. Check this variable if you notice that events are indexed later than the actual timestamp in the event, for example, if you notice a latency between _indextime
and _time
.
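As an illustration, here is a minimal sketch of enabling the lookup. The env_file path is the standard one used elsewhere in this documentation; the host.csv location, column order, and sample values are assumptions shown for illustration only:
#/opt/sc4s/env_file\nSC4S_USE_REVERSE_DNS=yes\n#SC4S_REVERSE_DNS_KEEP_FQDN=yes\n
#assumed location and format: /opt/sc4s/local/context/host.csv (ip,hostname)\n10.0.1.50,firewall-dmz-01\n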
Many HTTP proxies are not provisioned with application traffic in mind. Ensure adequate capacity is available to avoid data loss and proxy outages. The following variables must be entered in lower case:
Variable Values Description http_proxy undefined Use libcurl format proxy string \u201chttp://username:password@proxy.server:port\u201d https_proxy undefined Use libcurl format proxy string \u201chttp://username:password@proxy.server:port\u201d"},{"location":"configuration/#configure-your-splunk-hec-destination","title":"Configure your Splunk HEC destination","text":"Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_CIPHER_SUITE comma separated list Opens the SSL cipher suite list. SC4S_DEST_SPLUNK_HEC_<ID>_SSL_VERSION comma separated list Opens the SSL version list. SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS numeric The number of destination workers (threads), the default value is 10 threads. You do not need to change this variable from the default unless your environment has a very high or low volume. Consult with the SC4S community for advice about configuring your settings for environments with very high or low volumes. SC4S_DEST_SPLUNK_INDEXED_FIELDS r_unixtime,facility,severity,container,loghost,destport,fromhostip,protonone This is the list of SC4S indexed fields that will be included with each event in Splunk. The default is the entire list except \u201cnone\u201d. Two other indexed fields,sc4s_vendor_product
and sc4s_syslog_format
, also appear along with the fields selected and cannot be turned on or off individually. If you do not want any indexed fields, set the value to the single value of \u201cnone\u201d. When you set this variable, you must separate multiple entries with commas, do not include extra spaces.This list maps to the following indexed fields that will appear in all Splunk events:facility: sc4s_syslog_facilityseverity: sc4s_syslog_severitycontainer: sc4s_containerloghost: sc4s_loghostdport: sc4s_destportfromhostip: sc4s_fromhostipproto: sc4s_proto The destination operating parameters outlined above should be individually controlled using the destination ID. For example, to set the number of workers for the default destination, use SC4S_DEST_SPLUNK_HEC_DEFAULT_WORKERS
. To configure workers for the alternate HEC destination d_hec_FOO
, use SC4S_DEST_SPLUNK_HEC_FOO_WORKERS
.
Set the SC4S_DEFAULT_TIMEZONE
variable to a recognized \u201czone info\u201d (Region/City) time zone format such as America/New_York
. Setting this value forces SC4S to use the specified timezone and honor its associated Daylight Savings rules for all events without a timezone offset in the header or message payload.
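For example, a minimal env_file entry (the zone shown is the one referenced above; substitute your own Region/City value):
#/opt/sc4s/env_file\nSC4S_DEFAULT_TIMEZONE=America/New_York\n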
SC4S provides the ability to minimize the number of lost events if the connection to all the Splunk indexers is lost. This capability utilizes the disk buffering feature of Syslog-ng.
SC4S receives a response from the Splunk HTTP Event Collector (HEC) when a message is received successfully. If a confirmation message from the HEC endpoint is not received (or a \u201cserver busy\u201d reply, such as a \u201c503\u201d is sent), the load balancer will try the next HEC endpoint in the pool. If all pool members are exhausted, for example, if there were a full network outage to the HEC endpoints, events will queue to the local disk buffer on the SC4S Linux host.
SC4S will continue attempting to send the failed events while it buffers all new incoming events to disk. If the disk space allocated to disk buffering fills up then SC4S will stop accepting new events and subsequent events will be lost.
Once SC4S gets confirmation that events are again being received by one or more indexers, events will then stream from the buffer using FIFO queueing.
The number of events in the disk buffer will reduce as long as the incoming event volume is less than the maximum SC4S, with the disk buffer in the path, can handle. When all events have been emptied from the disk buffer, SC4S will resume streaming events directly to Splunk.
Disk buffers in SC4S are allocated per destination. Keep this in mind when using additional destinations that have disk buffering configured. By default, when you configure alternate HEC destinations, disk buffering is configured identically to that of the main HEC destination, unless overridden individually.
"},{"location":"configuration/#estimate-your-storage-allocation","title":"Estimate your storage allocation","text":"As an example, to protect against a full day of lost connectivity from SC4S to all your indexers at maximum throughput, the calculation would look like the following:
60,000 EPS * 86400 seconds * 800 bytes * 1.7 = 6.4 TB of storage
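The same arithmetic can be checked with a quick shell sketch; the EPS rate, average event size, and 1.7 overhead factor are simply the assumed values from the example above:
# 60,000 EPS x 86,400 s/day x 800 bytes/event x 1.7 overhead factor, converted to TiB\nawk 'BEGIN { print 60000 * 86400 * 800 * 1.7 / (1024 ^ 4) }'\n# prints ~6.41, i.e. roughly 6.4 TB of local disk buffer\n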
"},{"location":"configuration/#about-disk-buffering","title":"About disk buffering","text":"Note the following about disk buffering:
\u201cReliable\u201d disk buffering offers little advantage over \u201cnormal\u201d disk buffering, but has a significant performance penalty. For this reason, normal disk buffering is recommended.
Pay attention to the cumulative buffer requirements when allocating local disk space.
Disk buffer storage is configured using container volumes and is persistent between container restarts. Be sure to account for disk space requirements on the local SC4S host when you create the container volumes in your respective runtime environment. These volumes can grow significantly during an extended outage to the SC4S destination HEC endpoints. See the \u201cEstimate your storage allocation\u201d section.
When you change the disk buffering directory, the new directory must exist. Otherwise, syslog-ng will fail to start.
When you change the disk buffering directory, if buffering has previously occurred on that instance, a persist file may exist which will prevent syslog-ng from changing the directory.
Note: The buffer options apply to each worker rather than the entire destination.
"},{"location":"configuration/#archive-file-configuration","title":"Archive File Configuration","text":"This feature is designed to support compliance or diode mode archival of all messages. The files are stored in a folder structure at the mount point using the pattern shown in the table below, depending on the value of the SC4S_GLOBAL_ARCHIVE_MODE
variable. Events for both modes are formatted using syslog-ng\u2019s EWMM template.
<archive mount>/${.splunk.sourcetype}/${HOST}/$YEAR-$MONTH-$DAY-archive.log
SC4S_GLOBAL_ARCHIVE_MODE diode <archive mount>/${YEAR}/${MONTH}/${DAY}/${fields.sc4s_vendor_product}_${YEAR}${MONTH}${DAY}${HOUR}${MIN}.log\"
Use the following variables to select global archiving or per-source archiving. SC4S does not prune the files that are created, therefore an administrator must provide a means of log rotation to prune files and move them to an archival system to avoid exhausting disk space.
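Because SC4S does not prune these files, one simple approach is a scheduled cleanup job. This is only a sketch; the archive mount point and the seven-day retention shown here are assumptions, not shipped defaults:
# example root crontab entry: delete archived logs older than 7 days\n0 2 * * * find /opt/sc4s/archive -type f -name '*.log' -mtime +7 -delete\n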
Variable Values Description SC4S_ARCHIVE_GLOBAL yes or undefined Enable archiving of all vendor_products. SC4S_DEST_<VENDOR_PRODUCT>_ARCHIVE yes(default) or undefined Enables selective archiving by vendor product."},{"location":"configuration/#syslog-source-configuration","title":"Syslog Source Configuration","text":"Variable Values/Default Description SC4S_SOURCE_TLS_ENABLE yes or no(default) Enable TLS globally. Be sure to configure the certificate as shown below. SC4S_LISTEN_DEFAULT_TLS_PORT undefined or 6514 Enable a TLS listener on port 6514. SC4S_LISTEN_DEFAULT_RFC6425_PORT undefined or 5425 Enable a TLS listener on port 5425. SC4S_SOURCE_TLS_OPTIONSno-sslv2
Comma-separated list of the following options: no-sslv2, no-sslv3, no-tlsv1, no-tlsv11, no-tlsv12, none
. See syslog-ng docs for the latest list and default values. SC4S_SOURCE_TLS_CIPHER_SUITE See openssl Colon-delimited list of ciphers to support, for example, ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384
. See openssl for the latest list and defaults. SC4S_SOURCE_TCP_MAX_CONNECTIONS 2000 Maximum number of TCP connections. SC4S_SOURCE_UDP_IW_USE yes or no(default) Determine whether to change the initial Window size for UDP. SC4S_SOURCE_UDP_FETCH_LIMIT 1000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_IW_SIZE 250000 Initial Window size. SC4S_SOURCE_TCP_IW_SIZE 20000000 Initial Window size. SC4S_SOURCE_TCP_FETCH_LIMIT 2000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TCP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TLS_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5426_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC6587_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5425_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_LISTEN_UDP_SOCKETS 4 Number of kernel sockets per active UDP port, which configures multi-threading of the UDP input buffer in the kernel to prevent packet loss. Total UDP input buffer is the multiple of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5426_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC6587_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5425_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_STORE_RAWMSG undefined or \u201cno\u201d Store unprocessed \u201con the wire\u201d raw message in the RAWMSG macro for use with the \u201cfallback\u201d sourcetype. Do not set this in production, substantial memory and disk overhead will result. Use this only for log path and filter development. SC4S_IPV6_ENABLE yes or no(default) Enable dual-stack IPv6 listeners and health checks."},{"location":"configuration/#configure-your-syslog-source-tls-certificate","title":"Configure your syslog source TLS certificate","text":"/opt/sc4s/tls
./opt/sc4s/tls/server.key
./opt/sc4s/tls/server.pem
.SC4S_SOURCE_TLS_ENABLE=yes
exists in /opt/sc4s/env_file
.Additional certificate authorities may be trusted by appending each PEM formatted certificate to /opt/sc4s/tls/trusted.pem
.
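A minimal sketch of placing the certificate material, assuming you already have a key and certificate pair; my-server.key and my-server.pem are hypothetical file names:
mkdir -p /opt/sc4s/tls\ncp my-server.key /opt/sc4s/tls/server.key\ncp my-server.pem /opt/sc4s/tls/server.pem\n# enable the TLS listener globally\necho \"SC4S_SOURCE_TLS_ENABLE=yes\" >> /opt/sc4s/env_file\n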
Set Splunk metadata before the data arrives in Splunk and before any add-on processing occurs. The filters apply the index, source, sourcetype, host, and timestamp metadata automatically by individual data source. Values for this metadata, including a recommended index and output format, are included with all \u201cout-of-the-box\u201d log paths included with SC4S and are chosen to properly interface with the corresponding add-on in Splunk. You must ensure all recommended indexes accept this data if the defaults are not changed.
To accommodate the override of default values, each log path consults an internal lookup file that maps Splunk metadata to the specific data source being processed. This file contains the defaults that are used by SC4S to set the appropriate Splunk metadata, index
, host
, source
, and sourcetype
, for each data source. This file is not directly available to the administrator, but a copy of the file is deposited in the local mounted directory for reference, /opt/sc4s/local/context/splunk_metadata.csv.example
by default. This copy is provided solely for reference. To add to the list or to override default entries, create an override file without the example
extension (for example /opt/sc4s/local/context/splunk_metadata.csv
) and modify it according to the instructions below.
splunk_metadata.csv
is a CSV file containing a \u201ckey\u201d that is referenced in the log path for each data source. These keys are documented in the individual source files in this section, and let you override Splunk metadata.
The following is example line from a typical splunk_metadata.csv
override file:
juniper_netscreen,index,ns_index\n
The columns in this file are key
, metadata
, and value
. To make a change using the override file, consult the example
file (or the source documentation) for the proper key and modify and add rows in the table, specifying one or more of the following metadata/value
pairs for a given key
:
key
which refers to the vendor and product name of the data source, using the vendor_product
convention. For overrides, these keys are listed in the example
file. For new custom sources, be sure to choose a key that accurately reflects the vendor and product being configured and that matches the log path.index
to specify an alternate value
for index.source
to specify an alternate value
for source.host
to specify an alternate value
for host.sourcetype
to specify an alternate value
for sourcetype. Only change this if no upstream TA is used, or a custom TA is being used.sc4s_template
to specify an alternate value
for the syslog-ng template that will be used to format the event that is indexed by Splunk. Changing this will affect the upstream TA. The template choices are documented here.In our example above, the juniper_netscreen
key references a new index used for that data source called ns_index
.
For most deployments the index should be the only change needed; other default metadata should almost never be overridden.
The splunk_metadata.csv
file is a true override file and the entire example
file should not be copied over to the override. The override file is usually just one or two lines, unless an entire index category (for example netfw
) needs to be overridden.
When building a custom SC4S log path, append the splunk_metadata.csv
file with an appropriate new key and default for the index. The new key will not exist in the internal lookup or in the example
file. Care should be taken during log path design to choose appropriate index, sourcetype and template defaults so that admins are not compelled to override them. If the custom log path is later added to the list of SC4S-supported sources, this addendum can be removed.
The splunk_metadata.csv.example
file is provided for reference only and is not used directly by SC4S. It is an exact copy of the internal file, and can therefore change from release to release. Be sure to check the example file to make sure the keys for any overrides map correctly to the ones in the example file.
In some cases you can provide the same overrides based on PCI scope, geography, or other criteria. Use a file that uniquely identifies these source exceptions via syslog-ng filters, which map to an associated lookup of alternate indexes, sources, or other metadata. Indexed fields can also be added to further classify the data.
The conf
and csv
files referenced below are populated into the /opt/sc4s/local/context
directory when SC4S is run for the first time, in a similar fashion to splunk_metadata.csv
. After this first-time population of the files takes place, you can edit them and restart SC4S for the changes to take effect. To get started:
Edit the file compliance_meta_by_source.conf
to supply uniquely named filters to identify events subject to override.
compliance_meta_by_source.csv
to supply appropriate fields and values.The csv
file provides three columns: filter name
, field name
, and value
. Filter names in the conf
file must match one or more corresponding filter name
rows in the csv
file. The field name
column obeys the following convention:
.splunk.index
to specify an alternate value
for index..splunk.source
to specify an alternate value
for source..splunk.sourcetype
to specify an alternate value
for sourcetype (only change this if a downstream TA is present, or if a custom TA is present).fields.fieldname
where fieldname
will become the name of an indexed field sent to Splunk with the supplied value
. This file construct is best shown by an example. Here is an example of a compliance_meta_by_source.conf
file and its corresponding compliance_meta_by_source.csv
file:
filter f_test_test {\n host(\"something-*\" type(glob)) or\n netmask(192.168.100.1/24)\n};\n
f_test_test,.splunk.index,\"pciindex\"\nf_test_test,fields.compliance,\"pci\"\n
Ensure that the filter names in the conf
file match one or more rows in the csv
file. Any incoming message with a hostname starting with something-
or arriving from a netmask of 192.168.100.1/24
will match the f_test_test
filter, and the corresponding entries in the csv
file will be checked for overrides. The new index is pciindex
, and an indexed field named compliance
will be sent to Splunk with its value set to pci
. To add additional overrides, add another filter foo_bar {};
stanza to the conf
file, then add appropriate entries to the csv
file that match the filter names to the overrides.
Take care that your syntax is correct; for more information on proper syslog-ng syntax, see the syslog-ng documentation. A syntax error will cause the runtime process to abort in the \u201cpreflight\u201d phase at startup.
To update your changes, restart SC4S.
"},{"location":"configuration/#drop-all-data-by-ip-or-subnet-deprecated","title":"Drop all data by IP or subnet (deprecated)","text":"Using vendor_product_by_source
to null queue is now a deprecated task. See the supported method for dropping data in Filtering events from output.
Splunk Connect for Syslog uses the syslog-ng template mechanism to format the output event that will be sent to Splunk. These templates can format the messages in a number of ways, including straight text and JSON, and can utilize the many syslog-ng \u201cmacros\u201d fields to specify what gets placed in the event delivered to the destination. The following table is a list of the templates used in SC4S, which can be used for metadata override. New templates can also be added by the administrator in the \u201clocal\u201d section for local destinations; pay careful attention to the syntax as the templates are \u201clive\u201d syslog-ng config code.
Template name Template contents Notes t_standard ${DATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template for most RFC3164 (standard syslog) traffic. t_msg_only ${MSGONLY} syslog-ng $MSG is sent, no headers (host, timestamp, etc.). t_msg_trim $(strip $MSGONLY) Similar to syslog-ng $MSG with whitespace stripped. t_everything ${ISODATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template with ISO date format. t_hdr_msg ${MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_legacy_hdr_msg ${LEGACY_MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_hdr_sdata_msg ${MSGHDR}${MSGID} ${SDATA} ${MESSAGE} Useful for non-compliant syslog messages. t_program_msg ${PROGRAM}[${PID}]: ${MESSAGE} Useful for non-compliant syslog messages. t_program_nopid_msg ${PROGRAM}: ${MESSAGE} Useful for non-compliant syslog messages. t_JSON_3164 $(format-json --scope rfc3164 --pair PRI=\"<$PRI>\" --key LEGACY_MSGHDR --exclude FACILITY --exclude PRIORITY) JSON output of all RFC3164-based syslog-ng macros. Useful with the \u201cfallback\u201d sourcetype to aid in new filter development. t_JSON_5424 $(format-json --scope rfc5424 --pair PRI=\"<$PRI>\" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY) JSON output of all RFC5424-based syslog-ng macros; for use with RFC5424-compliant traffic. t_JSON_5424_SDATA $(format-json --scope rfc5424 --pair PRI=\"<$PRI>\" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY --exclude MESSAGE) JSON output of all RFC5424-based syslog-ng macros except for MESSAGE; for use with RFC5424-compliant traffic."},{"location":"configuration/#about-ebpf","title":"About eBPF","text":"eBPF helps mitigate congestion of a single heavy data stream by utilizing multithreading and is used with SC4S_SOURCE_LISTEN_UDP_SOCKETS
. To leverage this feature you need your host OS to be able to use eBPF and must run Docker or Podman in privileged mode.
SC4S_SOURCE_LISTEN_UDP_SOCKETS
. To run Docker or Podman in privileged mode, edit the service file /lib/systemd/system/sc4s.service
to add the --privileged
flag to the Docker or Podman run command:
ExecStart=/usr/bin/podman run \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --privileged \\\n --env-file=/opt/sc4s/env_file \\\n --health-cmd=\"/healthcheck.sh\" \\\n --health-interval=10s --health-retries=6 --health-timeout=6s \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\n
"},{"location":"configuration/#change-your-status-port","title":"Change your status port","text":"Use SC4S_LISTEN_STATUS_PORT
to change the \u201cstatus\u201d port used by the internal health check process. The default value is 8080
.
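For example, a minimal env_file entry that moves the status port (8090 is an arbitrary illustrative value):
#/opt/sc4s/env_file\nSC4S_LISTEN_STATUS_PORT=8090\n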
SC4S parsers perform operations that would normally be performed during index time, including linebreaking, source and sourcetype setting, and timestamping. You can write your own parser if the parsers available in the SC4S package do not meet your needs.
"},{"location":"create-parser/#before-you-start","title":"Before you start","text":"Prepare your testing environment. With Python>=3.9:
pip3 install poetry\npoetry install\n
Prepare your testing command:
poetry run pytest -v --tb=long \\\n--splunk_type=external \\\n--splunk_hec_token=<HEC_TOKEN> \\\n--splunk_host=<HEC_ENDPOINT> \\\n--sc4s_host=<SC4S_IP> \\\n--junitxml=test-results/test.xml \\\n-n <NUMBER_OF_JOBS> \\\n<TEST>\n
Create a new branch in the repository where you will apply your changes.
If you already have a raw log message, you can skip this step. Otherwise, you need to extract one to have something to work with. You can do this in multiple ways; this section describes three methods.
"},{"location":"create-parser/#procure-a-raw-log-message-using-tcpdump","title":"Procure a raw log message usingtcpdump
","text":"You can use the tcpdump
command to get incoming raw messages on a given port of your server:
tcpdump -n -s 0 -S -i any -v port 8088\n\ntcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes\n09:54:26.051644 IP (tos 0x0, ttl 64, id 29465, offset 0, flags [DF], proto UDP (17), length 466)\n10.202.22.239.41151 > 10.202.33.242.syslog: SYSLOG, length: 438\nFacility local0 (16), Severity info (6)\nMsg: 2022-04-28T16:16:15.466731-04:00 NTNX-21SM6M510425-B-CVM audispd[32075]: node=ntnx-21sm6m510425-b-cvm type=SYSCALL msg=audit(1651176975.464:2828209): arch=c000003e syscall=2 success=yes exit=6 a0=7f2955ac932e a1=2 a2=3e8 a3=3 items=1 ppid=29680 pid=4684 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=964698 comm=\u201csshd\u201d exe=\u201c/usr/sbin/sshd\u201d subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key=\u201clogins\u201d\\0x0a\n
"},{"location":"create-parser/#procure-a-raw-log-message-using-wireshark","title":"Procure a raw log message using Wireshark","text":"Once you get your stream of messages, copy one of them. Note that in UDP there are not usually any message separators. You can also read the logs using Wireshark from the .pcap file. From Wireshark go to Statistics > Conversations, then click on Follow Stream
:
See Obtaining \u201cOn-the-wire\u201d Raw Events.
"},{"location":"create-parser/#create-a-unit-test","title":"Create a unit test","text":"To create a unit test, use the existing test case that is most similar to your use case. The naming convention is test_vendor_product.py
.
<14>1 2022-03-30T11:17:11.900862-04:00 host - - - - Carbon Black App Control event: text=\"File 'c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll' [c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363] would have blocked if the rule was not in Report Only mode.\" type=\"Policy Enforcement\" subtype=\"Execution block (unapproved file)\" hostname=\"CORP\\USER\" username=\"NT AUTHORITY\\SYSTEM\" date=\"3/30/2022 3:16:40 PM\" ip_address=\"10.0.0.3\" process=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\microsoft.tri.sensor.updater.exe\" file_path=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll\" file_name=\"packet.dll\" file_hash=\"c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363\" policy=\"High Enforcement - Domain Controllers\" rule_name=\"Report read-only memory map operations on unapproved executables by .NET applications\" process_key=\"00000433-0000-23d8-01d8-44491b26f203\" server_version=\"8.5.4.3\" file_trust=\"-2\" file_threat=\"-2\" process_trust=\"-2\" process_threat=\"-2\" prevalence=\"50\"
Now run the test, for example:
poetry run pytest -v --tb=long \\\n--splunk_type=external \\\n--splunk_hec_token=<HEC_TOKEN> \\\n--splunk_host=<HEC_ENDPOINT> \\\n--sc4s_host=<SC4S_IP> \\\n--junitxml=test-results/test.xml \\\n-n <NUMBER_OF_JOBS> \\\ntest/test_vendor_product.py\n
The parsed log should appear in Splunk:
In this example the message is being parsed as a generic nix:syslog
sourcetype. This means that the message format complied with RFC standards, and SC4S could correctly identify the format fields in the message.
To assign your messages to the proper index and sourcetype you will need to create a parser. Your parser must be declared in package/etc/conf.d/conflib
. The naming convention is app-type-vendor_product.conf
.
The most basic configuration will forward raw log data with correct metadata, for example:
block parser app-syslog-vmware_cb-protect() {\n channel {\n rewrite {\n r_set_splunk_dest_default(\n index(\"epintel\")\n sourcetype('vmware:cb:protect')\n vendor(\"vmware\")\n product(\"cb-protect\")\n template(\"t_msg_only\")\n );\n };\n };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n filter {\n message('Carbon Black App Control event: ' type(string) flags(prefix));\n }; \n parser { app-syslog-vmware_cb-protect(); };\n};\n
All messages that start with the string Carbon Black App Control event:
will now be routed to the proper index and assigned the given sourcetype: For more info about using message filtering go to sources documentation. To apply more transformations, add the parser:
block parser app-syslog-vmware_cb-protect() {\n channel {\n rewrite {\n r_set_splunk_dest_default(\n index(\"epintel\")\n sourcetype('vmware:cb:protect')\n vendor(\"vmware\")\n product(\"cb-protect\")\n template(\"t_kv_values\")\n );\n };\n\n parser {\n csv-parser(delimiters(chars('') strings(': '))\n columns('header', 'message')\n prefix('.tmp.')\n flags(greedy, drop-invalid));\n kv-parser(\n prefix(\".values.\")\n pair-separator(\" \")\n template('${.tmp.message}')\n );\n };\n };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n filter {\n message('Carbon Black App Control event: ' type(string) flags(prefix));\n }; \n parser { app-syslog-vmware_cb-protect(); };\n};\n
This example extracts all fields that are nested in the raw log message first by using csv-parser
to split Carbon Black App Control event
and the rest of message as a two separate fields named header
and message
. kv-parser
will extract all key-value pairs in the message
field. To test your parser, run a previously created test case. If you need more debugging, use docker ps
to see your running containers and docker logs
to see what\u2019s happening to the parsed message.
Commit your changes and open a pull request.
The SC4S Metrics and Events dashboard lets you monitor metrics and event flows for all SC4S instances sending data to a chosen Splunk platform.
"},{"location":"dashboard/#functionalities","title":"Functionalities","text":""},{"location":"dashboard/#overview-metrics","title":"Overview metrics","text":"The SC4S and Metrics Overview dashboard displays the cumulative sum of received and dropped messages for all SC4S instances in a chosen interval for the specified time range. By default the interval is set to 30 seconds and the time range is set to 15 minutes.
The Received Messages panel can be used as a heartbeat metric. A healthy SC4S instance should send at least one message per 30 seconds. This metrics message is included in the count.
Keep the Dropped Messages panel at a constant level of 0. If SC4S drops messages due to filters, slow performance, or for any other reason, the number of dropped messages will persist until the instance restarts. The Dropped Messages panel does not include potential UDP messages dropped from the port buffer, which SC4S is not able to track.
"},{"location":"dashboard/#single-instance-metrics","title":"Single instance metrics","text":"You can display the instance name and SC4S version for a specific SC4S instance (available in versions 3.16.0 and later).
This dashboard also displays a timechart of deltas for received, queued, and dropped messages for a specific SC4S instance.
"},{"location":"dashboard/#single-instance-events","title":"Single instance events","text":"You can analyze traffic processed by an SC4S instance by visualizing the following events data:
You can configure Splunk Connect for Syslog to use any destination available in syslog-ng OSE. Helpers manage configuration for the three most common destination needs:
HTTP traffic compression helps to reduce network bandwidth usage when sending to a HEC destination. SC4S currently supports gzip for compressing transmitted traffic. Using the gzip compression algorithm can result in lower CPU load and increased utilization of RAM. The algorithm may also cause a decrease in performance by 6% to 7%. Compression affects the content but does not affect the HTTP headers. Enable batch packet processing to make the solution efficient, as this allows compression of a large number of logs at once.
Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_HTTP_COMPRESSION yes or no(default) Compress outgoing HTTP traffic using the gzip method."},{"location":"destinations/#syslog-standard-destination","title":"Syslog standard destination","text":"The use of \u201csyslog\u201d as a network protocol has been defined in Internet Engineering Task Force standards RFC5424, RFC5425, and RFC6587.
Note: SC4S sending messages to a syslog destination behaves like a relay. This means overwriting some original information, for example the original source IP.
"},{"location":"destinations/#configuration-options_1","title":"Configuration options","text":"Variable Values Description SC4S_DEST_SYSLOG_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_SYSLOG_<ID>_PORT number 601 is the default when framed, 514 is the default when not framed. SC4S_DEST_SYSLOG_<ID>_IETF yes/no, the default value is yes. Use IETF Standard frames. SC4S_DEST_SYSLOG_<ID>_TRANSPORT tcp,udp,tls. The default value is tcp. SC4S_DEST_SYSLOG_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-rfc5424-with-frames","title":"Send RFC5424 with frames","text":"In this example, SC4S will send Cisco ASA events as RFC5424 syslog to a third party system.
The message format will be similar to: 123 <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID
.
The destination name is taken from the environment variable, each destination must have a unique name. This value should be short and meaningful.
#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n }; \n};\n
"},{"location":"destinations/#send-rfc5424-without-frames","title":"Send RFC5424 without frames","text":"In this example SC4S will send Cisco ASA events to a third party system without frames.
The message format will be similar to: <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID
.
#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n# set to #yes for ietf frames\nSC4S_DEST_SYSLOG_MYSYS_IETF=no \n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n }; \n};\n
"},{"location":"destinations/#legacy-bsd","title":"Legacy BSD","text":"In many cases, the actual configuration required is Legacy BSD syslog which is not a standard and was documented in RFC3164.
Variable Values Description SC4S_DEST_BSD_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_BSD_<ID>_PORT number, the default is 514. SC4S_DEST_BSD_<ID>_TRANSPORT tcp,udp,tls, the default is tcp. SC4S_DEST_BSD_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-legacy-bsd","title":"Send legacy BSD","text":"The message format will be similar to: <134>Feb 2 13:43:05.000 horse-ammonia CheckPoint[26203]
.
#env_file\nSC4S_DEST_BSD_MYSYS_HOST=172.17.0.1\nSC4S_DEST_BSD_MYSYS_PORT=514\nSC4S_DEST_BSD_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_bsd_mysys.conf\napplication sc4s-lp-cisco_asa_d_bsd_mysys[sc4s-lp-dest-select-d_bsd_mysys] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n }; \n};\n
"},{"location":"destinations/#multiple-destinations","title":"Multiple destinations","text":"SC4S can send data to multiple destinations. In the original setup the default destination accepts all events. This ensures that at least one destination receives the event, helping to avoid data loss due to misconfiguration. The provided examples demonstrate possible options for configuring additional HEC destinations.
"},{"location":"destinations/#send-all-events-to-the-additional-destination","title":"Send all events to the additional destination","text":"After adding this example to your basic configuration SC4S will send all events both to SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_OTHER_URL
.
#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=GLOBAL\n
"},{"location":"destinations/#send-only-selected-events-to-the-additional-destination","title":"Send only selected events to the additional destination","text":"After adding this example to your basic configuration SC4S will send Cisco IOS events to SC4S_DEST_SPLUNK_HEC_OTHER_URL
.
#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=SELECT\n
application sc4s-lp-cisco_ios_dest_fmt_other[sc4s-lp-dest-select-d_hec_fmt_other] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n };\n};\n
"},{"location":"destinations/#advanced-topic-configure-filtered-alternate-destinations","title":"Advanced topic: Configure filtered alternate destinations","text":"You may require more granularity for a specific data source. For example, you may want to send all Cisco ASA debug traffic to Cisco Prime for analysis. To accommodate this, filtered alternate destinations let you supply a filter to redirect a portion of a source\u2019s traffic to a list of alternate destinations and, optionally, prevent matching events from being sent to Splunk. You configure this using environment variables:
Variable Values Description SC4S_DEST_<VENDOR_PRODUCT>_ALT_FILTER syslog-ng filter Filter to determine which events are sent to alternate destinations. SC4S_DEST_<VENDOR_PRODUCT>_FILTERED_ALTERNATES Comma or space-separated list of syslog-ng destinations. Send filtered events to alternate syslog-ng destinations using the VENDOR_PRODUCT syntax, for example,SC4S_DEST_CISCO_ASA_FILTERED_ALTERNATES
. This is an advanced capability, and filters and destinations using proper syslog-ng syntax must be constructed before using this functionality.
The regular destinations, including the primary HEC destination or configured archive destination, for example d_hec
or d_archive
, are not included for events matching the configured alternate destination filter. If an event matches the filter, the list of filtered alternate destinations completely replaces any mainline destinations, including defaults and global or source-based standard alternate destinations. Include them in the filtered destination list if desired.
Since the filtered alternate destinations completely replace the mainline destinations, including HEC to Splunk, a filter that matches all traffic can be used with a destination list that does not include the standard HEC destination to effectively turn off HEC for a given data source.
"},{"location":"edge_processor/","title":"Edge Processor integration guide (Experimental)","text":""},{"location":"edge_processor/#intro","title":"Intro","text":"You can use the Edge Processor
to:
SPL2
.SPL2
.AWS S3
or Apache Kafka
.stateDiagram\n direction LR\n\n SC4S: SC4S\n EP: Edge Processor\n Dest: Another destination\n Device: Your device\n S3: AWS S3\n Instance: Instance\n Pipeline: Pipeline with SPL2\n\n Device --> SC4S: Syslog protocol\n SC4S --> EP: HEC\n state EP {\n direction LR\n Instance --> Pipeline\n }\n EP --> Splunk\n EP --> S3\n EP --> Dest
"},{"location":"edge_processor/#set-up-the-edge-processor-for-sc4s","title":"Set up the Edge Processor for SC4S","text":"SC4S using same protocol for communication with Splunk and Edge Processor. For that reason setup process will be very similar, but it have some differences.
Set up on Docker / PodmanSet up on Kubernetesenv_file
, configure the HEC URL as IP of managed instance, that you registered on Edge Processor.SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
values.yaml
HEC URL using the IP of managed instance, that you registered on Edge Processor.splunk:\n hec_url: \"http://x.x.x.x:8088\"\n hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n hec_verify_tls: \"no\"\n
"},{"location":"edge_processor/#mtls-encryption","title":"mTLS encryption","text":"Before setup, generate mTLS certificates. Server mTLS certificates should be uploaded to Edge Processor
and client certifcates should be used with SC4S
.
Rename the certificate files. SC4S requires the following names:
key.pem
- client certificate keycert.pem
- client certificateca_cert.pem
- certificate authoritySC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://x.x.x.x:8088
.key.pem
, cert.pem
, ca_cert.pem
) to /opt/sc4s/tls/hec
./opt/sc4s/tls/hec
to /etc/syslog-ng/tls/hec
using docker/podman volumes.SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_MOUNT=/etc/syslog-ng/tls/hec
.values.yaml
file:splunk:\n hec_url: \"https://x.x.x.x:8088\"\n hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n hec_tls: \"hec-tls-secret\"\n
charts/splunk-connect-for-syslog/secrets.yaml
file:hec_tls:\n secret: \"hec-tls-secret\"\n value:\n key: |\n -----BEGIN PRIVATE KEY-----\n Exmaple key\n -----END PRIVATE KEY-----\n cert: |\n -----BEGIN CERTIFICATE-----\n Exmaple cert\n -----END CERTIFICATE-----\n ca: |\n -----BEGIN CERTIFICATE-----\n Example ca\n -----END CERTIFICATE-----\n
secrets.yaml
:ansible-vault encrypt charts/splunk-connect-for-syslog/secrets.yaml\n
Add the IP address for your cluster nodes to the inventory file ansible/inventory/inventory_microk8s_ha.yaml
.
Deploy the Ansible playbook:
ansible-playbook -i ansible/inventory/inventory_microk8s_ha.yaml ansible/playbooks/microk8s_ha.yml --ask-vault-pass\n
"},{"location":"edge_processor/#scaling-edge-processor","title":"Scaling Edge Processor","text":"To scale you can distribute traffic between Edge Processor managed instances. To set this up, update the HEC URL with a comma-separated list of URLs for your managed instances.
Set up on Docker/PodmanSet up on KubernetesUpdate HEC URL in env_file
:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\n
Update HEC URL in values.yaml
:
splunk:\n hec_url: \"http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\"\n
"},{"location":"experiments/","title":"Current experimental features","text":""},{"location":"experiments/#3120","title":"> 3.12.0","text":"SC4S_USE_NAME_CACHE=yes
supports IPv6.
eBPF is a feature that leverages Linux kernel infrastructure to evenly distribute the load, especially in cases when there is a huge stream of messages incoming from a single appliance. To use the eBPF feature, you must have a host machine with and OS that supports eBPF. eBPF should be used only in cases when other ways of SC4S tuning fail. See the instruction for configuration details. To learn more visit this blog post.
"},{"location":"experiments/#sc4s-lite","title":"SC4S Lite","text":"In the new 3.0.0 update, we\u2019ve introduced SC4S Lite. SC4S Lite is designed for those who prefer speed and custom filters over the pre-set ones that come with the standard SC4S. It\u2019s similar to our default version, without the pre-defined filters and complex app_parser topics. More information can be found at dedicated page.
"},{"location":"experiments/#2130","title":"> 2.13.0","text":"env_file
, SC4S sets SC4S_USE_NAME_CACHE=yes
to enable caching of the last valid host string, replaces nill, null, or IPv4 with the last good value, and stores this information in the hostip.sqlite
file. hostip.sqlite
file, set SC4S_CLEAR_NAME_CACHE=yes
flag in env_file
. This action will automatically delete the hostip.sqlite file
when SC4S restarts.env_file
set SC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG=yes
to enable additional post processing and merge multiline vmware events. You should also enable SC4S_USE_NAME_CACHE=yes
, to accomodate event that have malformed or missing host names.env_file
set SC4S_USE_VPS_CACHE=yes
to enable automatic configuration of vendor_product
by source where possible. This feature caches vendor
and product
fields to determine of the best values for generic Linux events. For example, without this feature the \u201cvendor product by host\u201d app parser must be configured to identify ESX hosts so that ESX SSHD events can be routed using the meta key vmware_vsphere_nix_syslog
. With this feature enabled a common event such as an event containing \u201cprogram=vpxa\u201d will cache this value. SC4S_SOURCE_PROXYCONNECT=yes
for TCP and TLS connection expect \u201cPROXY CONNECT\u201d to provide the original client IP in SNAT load balancing.Q: The universal forwarder with file-based architecture has been the documented Splunk best practice for a long time. Why should I switch to an HTTP Event Collector (HEC) based architecture?
A:
Using HEC to stream events directly to the indexers provides superior load balancing, and has shown to produce more even data distribution across the indexers. This even distribution results in significantly enhanced search performance. This benefit is especially valuable in large Splunk deployments.
The HEC architecture designed in SC4S is also easier to administer with newer versions of syslog-ng. There are fewer opportunities for configuration errors, resulting in higher overall performance.
HEC, and in particular the \u201c/event\u201d endpoint, offers the opportunity for a far richer data stream to Splunk, with lower resource utilization at ingest time. This rich data stream can be taken advantage of in next-generation add-ons.
Q: Is the Splunk HTTP Event Collector (HEC) as reliable as the Splunk universal forwarder?
A: HEC utilizes standard HTTP mechanisms to confirm that the endpoint is responsive before sending data. The HEC architecture allows you to use an industry standard load balancer between SC4S and the indexer or the included load balancing capability built into SC4S itself.
Q: What if my team doesn\u2019t know how to manage containers?
A: Using a runtime like Podman to deploy and manage SC4S containers is exceptionally easy even for those with no prior \u201ccontainer experience\u201d. Our application of container technology behaves much like a packaging system. The interaction uses \u201csystemctl\u201d commands a Linux admin would use for other common administration activities. The best approach is to try it out in a lab to see what the experience is like for yourself!
Q: Can my team use SC4S with Windows?
A: You can now run Docker on Windows! Microsoft has introduced public preview technology for Linux containers on Windows. Alternatively, a minimal Centos/Ubuntu Linux VM running on Windows hyper-v is a reliable production-grade choice.
Q: My company has the traditional universal forwarder and files-based syslog architecture deployed and running, should I rip and replace a working installation with SC4S?
A: Generally speaking, if a deployment is working and you are happy with it, it\u2019s best to leave it as is until there is need for major deployment changes, such as scaling your configuration. The search performance improvements from better data distribution is one benefit, so if Splunk users have complained about search performance or you are curious about the possible performance gains, we recommend doing an analysis of the data distribution across the indexers.
Q: What is the best way to migrate to SC4S from an existing syslog architecture?
A: When exploring migration to SC4S we strongly recommend that you experiment in a lab prior to deployment to production. There are a couple of approaches to consider:
Q: How can SC4S be deployed to provide high availability?
A: The syslog protocol was not designed with HA as a goal, so configuration can be challenging. See Performant AND Reliable Syslog UDP is best for an excellent overview of this topic.
The syslog protocol limits the extent to which you can make any syslog collection architecture HA; at best it can be made \u201cmostly available\u201d. To do this, keep it simple and use OS clustering (shared IP) or even just VMs with vMotion. This simple architecture will encounter far less data loss over time than more complicated schemes. Another possible option is containerization HA schemes for SC4S (centered around MicroK8s) that will take some of the administrative burden of clustering away, but still functions as OS clustering under the hood.
Q: I\u2019m worried about data loss if SC4S goes down. Could I feed syslog to redundant SC4S servers to provide HA, without creating duplicate events in Splunk?
A: In many system design decisions there is some level of compromise. Any network protocol that doesn\u2019t have an application level ACK will lose data because speed is selected over reliability in the design. This is the case with syslog. Use a clustered IP with an active/passive node for a level of resilience while keeping complexity to a minimum. It could be possible to implement a far more complex solution utilizing an additional intermediary technology like Kafka, however the costs may outweigh the real world benefits.
Q: If the XL reference HW can handle just under 1 terabyte per day, how can SC4S be scaled to handle large deployments of many terabytes per day?
A: SC4S is a distributed architecture. SC4S instances should be deployed in the same VLAN as the source devices. This means that each SC4S instance will only see a subset of the total syslog traffic in a large deployment. Even in a deployment of 100 terabytes or greater, the individual SC4S instances will see loads in gigabytes per day rather than terabytes per day.
Q: SC4S is being blocked by fapolicyd. How do I fix that?
A: Create a rule that allows SC4S to run in the fapolicyd configuration. Add the rule allow perm=open exe=/ : dir=/usr/lib64/ all trust=1 to /etc/fapolicyd/rules.d/15-sc4s.rules, run fagenrules --load to load the new rule, and run systemctl restart fapolicyd to restart the process. Then start SC4S with systemctl start sc4s and verify that there are no errors with systemctl status sc4s.
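A minimal sketch of that sequence on a RHEL-family host follows; it assumes the stock fapolicyd rules.d layout, so adjust the paths if your installation differs.
sudo tee /etc/fapolicyd/rules.d/15-sc4s.rules <<'EOF'
allow perm=open exe=/ : dir=/usr/lib64/ all trust=1
EOF
sudo fagenrules --load            # compile the rule set with the new rule included
sudo systemctl restart fapolicyd  # restart fapolicyd so the rule takes effect
sudo systemctl start sc4s
sudo systemctl status sc4s        # confirm SC4S started without errors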
Q: My postfilter configuration is not working, although I don't have any postfilter defined for the source in question. Why?
A: There may be an out-of-the-box postfilter for the source that is being applied; validate this by checking the value of sc4s_tags in Splunk. To resolve this, see [sc4s-finalfilter]. Do not use this resolution in any other situation, as it can add to the cost of data processing.
Q: Where should the configuration for vendors be placed? There are several app-parsers folders and directories; which one should be used? Does this also mean that CSV files for metadata are no longer required?
A: The configuration for vendors should be placed in /opt/sc4s/local/config/*/.conf. Most of the folders are placeholders; the configuration will work in any of these folders as long as the file has a .conf extension. CSV files should be placed in local/context/*.csv. Using splunk_metadata.csv is appropriate for metadata overrides, but use a .conf file for everything else in place of the other CSV files.
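As an illustration only (the parser file name, target folder, and vendor_product value below are placeholders), placing a custom parser and adding a metadata override might look like this:
# Copy a custom parser into one of the local config folders (any of them works,
# as long as the file keeps a .conf extension).
sudo cp my-vendor-parser.conf /opt/sc4s/local/config/app_parsers/
# Append a metadata override for the vendor_product in question.
echo 'my_vendor_product,index,netops' | sudo tee -a /opt/sc4s/local/context/splunk_metadata.csv
sudo systemctl restart sc4s   # restart SC4S so the new parser and override are picked up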
Q: Can we have a file in which all default indexes are created in one effort?
A: Refer to indexes.conf, which creates all of the indexes in one effort. This file also has lastChanceIndex configured, which you can use if it fits your requirements. For more information on this file, refer to the Splunk docs.
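For orientation, a fragment of such an indexes.conf might look like the sketch below; this is a hypothetical example (the app path and index stanza are placeholders), not the file shipped with SC4S.
# Append a hypothetical indexes.conf fragment to a deployment app.
cat <<'EOF' >> $SPLUNK_HOME/etc/apps/my_sc4s_indexes/local/indexes.conf
[default]
# Route events whose index does not exist to a catch-all index.
lastChanceIndex = main

[netfw]
homePath   = $SPLUNK_DB/netfw/db
coldPath   = $SPLUNK_DB/netfw/colddb
thawedPath = $SPLUNK_DB/netfw/thaweddb
EOF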
Load balancers are not a best practice for SC4S. The exception to this is a narrow use case where the syslog server is exposed to untrusted clients on the internet, for example, with Palo Alto Cortex.
"},{"location":"lb/#considerations","title":"Considerations","text":"SC4S_SOURCE_PROXYCONNECT=yes
. The best deployment model for high availability is a MicroK8s-based deployment with MetalLB in BGP mode. This model uses a special class of load balancer that is implemented as destination network translation.
"},{"location":"lite/","title":"SC4S Lite","text":""},{"location":"lite/#about-sc4s-lite","title":"About SC4S Lite","text":"SC4S Lite provides a scalable, performance-oriented solution for ingesting syslog data into Splunk. Pluggable modular parsers offer you the flexibility to incorporate custom data processing logic to suit specific use cases.
"},{"location":"lite/#architecture","title":"Architecture","text":""},{"location":"lite/#sc4s-lite_1","title":"SC4S Lite","text":"SC4S Lite provides a lightweight, high-performance SC4S solution.
"},{"location":"lite/#pluggable-modules","title":"Pluggable Modules","text":"Pluggable modules are predefined modules that you can enable and disable through configuration files. Each pluggable module represents a set of parsers for each vendor that supports SC4S. You can only enable or disable modules, you cannot create new modules or update existing ones. For more information see the pluggable modules documentation .
"},{"location":"lite/#splunk-enterprise-or-splunk-cloud","title":"Splunk Enterprise or Splunk Cloud","text":"You configure SC4S Lite to send syslog data to Splunk Enterprise or Splunk Cloud. The Splunk Platform provides comprehensive analysis, searching, and visualization of your processed data.
"},{"location":"lite/#how-sc4s-lite-processes-your-data","title":"How SC4S Lite processes your data","text":"SC4S Lite is built on an Alpine lightweight container which has very little vulnerability. SC4S Lite supports secure syslog data transmission protocols such as RELP and TLS over TCP to protect your data in transit. Additionally, the environment in which SC4S Lite is deployed enhances data security.
"},{"location":"lite/#scalability-and-performance","title":"Scalability and performance","text":"SC4S Lite provides superior performance and scalability thanks to the lightweight architecture and pluggable parsers, which distribute the processing load. It is also packaged with eBPF functionality to further enhance performance. Note that actual performance may depend on factors such as your server capacity and network bandwidth.
"},{"location":"lite/#implement-sc4s-lite","title":"Implement SC4S Lite","text":"To implementat of SC4S Lite:
container2
or container3
) with container3lite
.values.yaml
file.Performance testing against our lab configuration produces the following results and limitations.
"},{"location":"performance/#tested-configurations","title":"Tested Configurations","text":""},{"location":"performance/#splunk-cloud-noah","title":"Splunk Cloud Noah","text":""},{"location":"performance/#environment","title":"Environment","text":"/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=1800 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostmane> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21109.66 msg/sec, count=38023708, time=1801.25, (average) msg size=800, bandwidth=16491.92 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 34820.94 msg/sec, count=62687563, time=1800.28, (average) msg size=800, bandwidth=27203.86 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 71929.91 msg/sec, count=129492418, time=1800.26, (average) msg size=800, bandwidth=56195.24 kB/sec average rate = 70894.84 msg/sec, count=127630166, time=1800.27, (average) msg size=800, bandwidth=55386.60 kB/sec m5zn.2xlarge average rate = 85419.09 msg/sec, count=153778825, time=1800.29, (average) msg size=800, bandwidth=66733.66 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#splunk-enterprise","title":"Splunk Enterprise","text":""},{"location":"performance/#environment_1","title":"Environment","text":"/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=600 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostmane> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result_1","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21511.69 msg/sec, count=12930565, time=601.095, (average) msg size=800, bandwidth=16806.01 kB/sec average rate = 21583.13 msg/sec, count=12973491, time=601.094, (average) msg size=800, bandwidth=16861.82 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 37514.29 msg/sec, count=22530855, time=600.594, (average) msg size=800, bandwidth=29308.04 kB/sec average rate = 37549.86 msg/sec, count=22552210, time=600.594, (average) msg size=800, bandwidth=29335.83 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 98580.10 msg/sec, count=59157495, time=600.096, (average) msg size=800, bandwidth=77015.70 kB/sec average rate = 99463.10 msg/sec, count=59687310, time=600.095, (average) msg size=800, bandwidth=77705.55 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#guidance-on-sizing-hardware","title":"Guidance on sizing hardware","text":"SC4S Lite pluggable modules are predefined modules that you can enable or disable by modifying your config.yaml
file. This file contains a list of add-ons. See the example and list of available pluggable modules in (config.yaml reference file) for more information. Once you update config.yaml
, you mount it to the Docker container and override /etc/syslog-ng/config.yaml
.
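A sketch of a minimal config.yaml follows; the add-on names are examples taken from the Kubernetes snippet later on this page, and the path is a placeholder.
cat <<'EOF' > /path/to/your/config.yaml
---
addons:
  - cisco
  - paloalto
  - dell
EOF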
The installation process is identical to the installation process for Docker Compose for SC4S with the following modifications.
Use the SC4S Lite image instead of the SC4S image:
image: ghcr.io/splunk/splunk-connect-for-syslog/container3lite\n
Mount your config.yaml
file with your add-ons to /etc/syslog-ng/config.yaml
:
volumes:\n - /path/to/your/config.yaml:/etc/syslog-ng/config.yaml\n
"},{"location":"pluggable_modules/#kubernetes","title":"Kubernetes:","text":"The installation process is identical to the installation process for Kubernetes for SC4S with the following modifications:
Use the SC4S Lite image instead of SC4S in values.yaml
:
image:\n repository: ghcr.io/splunk/splunk-connect-for-syslog/container3lite\n
Mount config.yaml
. Add an addons
section inside sc4s
in values.yaml
:
sc4s:\n addons:\n config.yaml: |-\n ---\n addons:\n - cisco\n - paloalto\n - dell\n
"},{"location":"upgrade/","title":"Upgrading SC4S","text":""},{"location":"upgrade/#upgrade-sc4s","title":"Upgrade SC4S","text":"latest
tag for the SC4S image in the sc4s.service unit file. You can also set a specific version in the unit file if desired.[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n
sudo systemctl restart sc4s
See the release notes for more information.
"},{"location":"upgrade/#upgrade-notes","title":"Upgrade Notes","text":"Version 3 does not introduce any breaking changes. To upgrade to version 3, review the service file and change the container reference from container2
to container3
. For a step-by-step guide see here.
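Assuming a systemd-managed deployment with the unit file at /lib/systemd/system/sc4s.service and the latest tag in use, the change amounts to the following sketch:
sudo sed -i 's|container2:latest|container3:latest|' /lib/systemd/system/sc4s.service
sudo systemctl daemon-reload
sudo systemctl restart sc4s
sudo podman logs SC4S | grep 'sc4s version'   # or: sudo docker logs SC4S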
You may need to migrate legacy log paths or version 1 app-parsers for version 2. To do this, open an issue and attach the original configuration and a compressed pcap of sample data for testing. We will evaluate whether to include the source in an upcoming release.
"},{"location":"upgrade/#upgrade-from-2230","title":"Upgrade from <2.23.0","text":"sc4s.service
and manually update the differences in accordance with the current version of the documentation.env_file
for \u201cMICROFOCUS_ARCSIGHT\u201d variables and replace with CEF variables. env_file
and replace in accordance with the current version of the documentation. sc4s.service
file accordingly._metrics
index by default. Update vendor_product
key \u2018sc4s_metrics\u2019 to change the index.vendor_product_by_source
is deprecated for null queue or dropping events. This use will be removed in version 3. See Filtering events from output.SPLUNK_HEC_ALT_DESTS
is deprecated and will be ignored.SC4S_DEST_GLOBAL_ALTERNATES
is deprecated and will be removed in future major versions. .dest_key
field is no longer used.sc4s_vendor_product
is read only and will be removed.sc4s_vendor
now contains vendor portion of vendor_product
.sc4s_vendor_product
now contains product portion of \u2018vendor_product\u2019.sc4s_class
now contains additional data previously concatenated to vendor_product
meta_key
.#Current app parsers contain one or more lines\nvendor_product('value_here')\n#This must change as follows; failure to make this change will prevent sc4s from starting\nvendor('value')\nproduct('here')\n
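A quick way to locate local app parsers that still use the removed single-argument form before upgrading (assuming the default local configuration path) is:
grep -rn "vendor_product(" /opt/sc4s/local/config/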
"},{"location":"v3_upgrade/","title":"Upgrading Splunk Connect for Syslog v2 -> v3","text":""},{"location":"v3_upgrade/#upgrade-process-for-version-newer-than-230","title":"Upgrade process (for version newer than 2.3.0)","text":"In general the upgrade process consists of three steps: - change of container version - restart of service - validation NOTE: Version 3 of SC4S is using alpine linux distribution as base image in opposition to previous versions which used UBI (Red Hat) image.
"},{"location":"v3_upgrade/#dockerpodman","title":"Docker/Podman","text":""},{"location":"v3_upgrade/#update-container-image-version","title":"Update container image version","text":"In the service file: /lib/systemd/system/sc4s.service
container image reference should be updated to version 3 with latest
tag:
[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n
"},{"location":"v3_upgrade/#restart-sc4s-service","title":"Restart sc4s service","text":"Restart the service: sudo systemctl restart sc4s
After the above command is executed successfully, the following information with the version becomes visible in the container logs: sudo podman logs SC4S
for podman or sudo docker logs SC4S
for docker. Expected output:
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=3.0.0\nstarting goss\nstarting syslog-ng \n
If you are upgrading from version lower than 2.3.0 please refer to this guide.
"},{"location":"gettingstarted/","title":"Before you start","text":""},{"location":"gettingstarted/#getting-started","title":"Getting Started","text":"Splunk Connect for Syslog (SC4S) is a distribution of syslog-ng that simplifies getting your syslog data into Splunk Enterprise and Splunk Cloud. SC4S provides a runtime-agnostic solution that lets you deploy using the container runtime environment of choice and a configuration framework. This lets you process logs out-of-the-box from many popular devices and systems.
"},{"location":"gettingstarted/#planning-deployment","title":"Planning Deployment","text":"Syslog can refer to multiple message formats as well as, optionally, a wire protocol for event transmission between computer systems over UDP, TCP, or TLS. This protocol minimizes overhead on the sender, favoring performance over reliability. This means any instability or resource constraint can cause data to be lost in transmission.
SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration.
"},{"location":"gettingstarted/ansible-docker-podman/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"env_file
with your Splunk endpoint and HEC token:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your cluster and the host application in the inventory file: all:\n hosts:\n children:\n node:\n hosts:\n node_1:\n ansible_host:\n
"},{"location":"gettingstarted/ansible-docker-podman/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"# From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
ansible-playbook -i path/to/inventory.yaml -u <username> --ask-pass path/to/playbooks/docker.yml\nor\nansible-playbook -i path/to/inventory.yaml -u <username> --ask-pass path/to/playbooks/podman.yml\n
ansible-playbook -i path/to/inventory.yaml -u <username> --key-file <key_file> path/to/playbooks/docker.yml\nor\nansible-playbook -i path/to/inventory.yaml -u <username> --key-file <key_file> path/to/playbooks/podman.yml\n
SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
You can verify if all SC4S instances work by checking the sc4s_container
in Splunk. Each instance should have a different container ID. All other fields should be the same. The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
sudo docker ps\n
docker logs <ID | image name> \n
or: sudo systemctl status sc4s\n
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
"},{"location":"gettingstarted/ansible-docker-swarm/","title":"Docker Swarm","text":"SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and the basic configuration, such as Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have existing understanding of Docker Swarm and be able to set up your Swarm architecture and configuration.
"},{"location":"gettingstarted/ansible-docker-swarm/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"env_file
with your Splunk endpoint and HEC token:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your Docker Swarm cluster and the host application in the inventory file: all:\n hosts:\n children:\n manager:\n hosts:\n manager_node_1:\n ansible_host:\n\n worker:\n hosts:\n worker_node_1:\n ansible_host:\n worker_node_2:\n ansible_host:\n
3. You can run your cluster with one or more manager nodes. One advantage of hosting SC4S with Docker Swarm is that you can leverage the Swarm internal load balancer. See your Swarm Mode documentation at Docker. /ansible/app/docker-compose.yml
file: version: \"3.7\"\nservices:\n sc4s:\n deploy:\n replicas: 2\n ...\n
# From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
ansible-playbook -i path/to/inventory_swarm.yaml -u <username> --ask-pass path/to/playbooks/docker_swarm.yml\n
ansible-playbook -i path/to/inventory_swarm.yaml -u <username> --key-file <key_file> path/to/playbooks/docker_swarm.yml\n
sudo docker stack ls
To scale your number of services: sudo docker service update --replicas 2 sc4s_sc4s
To see services running in a given stack: sudo docker stack services sc4s
SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
You should see an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
You can verify if all services in the Swarm cluster work by checking the sc4s_container
in Splunk. Each service should have a different container ID. All other fields should be the same. The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
sudo docker|podman ps\n
docker|podman logs <ID | image name> \n
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
To automate SC4S installation with Ansible, you provide a list of hosts on which you want to run SC4S as well as basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have existing understanding of MicroK8s and be able to set up your Kubernetes cluster architecture and configuration.
"},{"location":"gettingstarted/ansible-mk8s/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"Before you run SC4S with Ansible, update values.yaml
with your Splunk endpoint and HEC token. You can find the example file here.
In the inventory file, provide a list of hosts on which you want to run your cluster and the host application:
all:\n hosts:\n children:\n node:\n hosts:\n node_1:\n ansible_host:\n
all:\n hosts:\n children:\n manager:\n hosts:\n manager:\n ansible_host:\n\n workers:\n hosts:\n worker1:\n ansible_host:\n worker2:\n ansible_host:\n
# From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
To authenticate with username and password:
ansible-playbook -i path/to/inventory_mk8s.yaml -u <username> --ask-pass path/to/playbooks/microk8s.yml\n
To authenticate if you are running a high-availability cluster:
ansible-playbook -i path/to/inventory_mk8s_ha.yaml -u <username> --ask-pass path/to/playbooks/microk8s_ha.yml\n
To authenticate using a key pair:
ansible-playbook -i path/to/inventory_mk8s.yaml -u <username> --key-file <key_file> path/to/playbooks/microk8s.yml\n
SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
You can verify whether all services in the cluster work by checking the sc4s_container
in Splunk. Each service should have a different container ID. All other fields should be the same.
The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
sudo microk8s kubectl get pods\nsudo microk8s kubectl logs <podname>\n
You should see events similar to those below in the output:
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
"},{"location":"gettingstarted/byoe-rhel8/","title":"Configure SC4S in a non-containerized SC4S deployment","text":"Configuring SC4S in a non-containerized SC4S deployment requires a custom configuration. Note that since Splunk does not control your unique environment, we cannot help with setting up environments, debugging networking, etc. Consider this configuration only if:
This topic provides guidance for using the SC4S syslog-ng configuration files directly on the host OS running on a hardware server or virtual machine. You must provide:
You must modify the base configuration for most environments to accommodate enterprise infrastructure variations. When you upgrade, evaluate the current environment against this reference, then develop and test an installation-specific upgrade plan. Do not depend on the distribution-supplied version of syslog-ng, as it may not be recent enough to support your needs. See this blog post to learn more.
"},{"location":"gettingstarted/byoe-rhel8/#install-sc4s-in-a-custom-environment","title":"Install SC4S in a custom environment","text":"These installation instructions assume a recent RHEL or CentOS-based release. You may have to make minor adjustments for Debian and Ubuntu. The examples provided here use pre-compiled binaries for the syslog-ng installation in /etc/syslog-ng
. Your configuration may vary.
The following installation instructions are summarized from a blog maintained by the One Identity team.
Install CentOS or RHEL 8.0. See your OS documentation for instructions.
Enable EPEL (Centos 8).
dnf install 'dnf-command(copr)' -y\ndnf install epel-release -y\ndnf copr enable czanik/syslog-ng336 -y\ndnf install syslog-ng syslog-ng-python syslog-ng-http python3-pip gcc python3-devel -y\n
sudo systemctl stop syslog-ng\nsudo systemctl disable syslog-ng\n
bare_metal.tar
from releases on github and untar the package in /etc/syslog-ng
. This step unpacks a tarball with the SC4S version of the syslog-ng config files in the standard /etc/syslog-ng
location, and will overwrite existing content. Make sure that any previous configurations of syslog-ng are saved prior to executing the download step.For production use, select the latest version of SC4S that does not have an -rc
, -alpha
, or -beta
suffix.
sudo wget -c https://github.com/splunk/splunk-connect-for-syslog/releases/download/<latest release>/baremetal.tar -O - | sudo tar -x -C /etc/syslog-ng\n
sudo pip3 install -r /etc/syslog-ng/requirements.txt\n
goss
and confirm that the version is v0.3.16 or later. goss
installs in /usr/local/bin
by default, so do one of the following:entrypoint.sh
is modified to include /usr/local/bin
in the full path.goss
binary to /bin
or /usr/bin
.curl -L https://github.com/aelsabbahy/goss/releases/latest/download/goss-linux-amd64 -o /usr/local/bin/goss\nchmod +rx /usr/local/bin/goss\ncurl -L https://github.com/aelsabbahy/goss/releases/latest/download/dgoss -o /usr/local/bin/dgoss\n# Alternatively, using the latest\n# curl -L https://raw.githubusercontent.com/aelsabbahy/goss/latest/extras/dgoss/dgoss -o /usr/local/bin/dgoss\nchmod +rx /usr/local/bin/dgoss\n
entrypoint.sh
script (identical to that used in the container) directly using systemd.entrypoint.sh
script directly in systemd, create the SC4S unit file /lib/systemd/system/sc4s.service
and add the following:[Unit]\nDescription=SC4S Syslog Daemon\nDocumentation=https://splunk-connect-for-syslog.readthedocs.io/en/latest/\nWants=network.target network-online.target\nAfter=network.target network-online.target\n\n[Service]\nType=simple\nExecStart=/etc/syslog-ng/entrypoint.sh\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=/etc/syslog-ng/env_file\nStandardOutput=journal\nStandardError=journal\nRestart=on-abnormal\n\n[Install]\nWantedBy=multi-user.target\n
entrypoint.sh
as a preconfigured script, modify the script by commenting out or removing the stanzas following the OPTIONAL for BYOE
comments in the script. This prevents syslog-ng from being launched by the script. Then create the SC4S unit file /lib/systemd/system/syslog-ng.service
and add the following content:[Unit]\nDescription=System Logger Daemon\nDocumentation=man:syslog-ng(8)\nAfter=network.target\n\n[Service]\nType=notify\nExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS -p /var/run/syslogd.pid\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=-/etc/default/syslog-ng\nEnvironmentFile=-/etc/sysconfig/syslog-ng\nStandardOutput=journal\nStandardError=journal\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n
/etc/syslog-ng/env_file
and add the following environment variables. Adjust the URL/TOKEN as needed.# The following \"path\" variables can differ from the container defaults specified in the entrypoint.sh script. \n# These are *optional* for most BYOE installations, which do not differ from the install location used.\n# in the container version of SC4S. Failure to properly set these will cause startup failure.\n#SC4S_ETC=/etc/syslog-ng\n#SC4S_VAR=/etc/syslog-ng/var\n#SC4S_BIN=/bin\n#SC4S_SBIN=/usr/sbin\n#SC4S_TLS=/etc/syslog-ng/tls\n\n# General Options\nSC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk.smg.aws:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=a778f63a-5dff-4e3c-a72c-a03183659e94\n\n# Uncomment the following line if using untrusted (self-signed) SSL certificates\n# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/byoe-rhel8/#configure-sc4s-listening-ports","title":"Configure SC4S listening ports","text":"The standard SC4S configuration uses UDP/TCP port 514 as the default for the listening port for syslog traffic, and TCP port 6514 for TLS. You can change these defaults by adding the following additional environment variables to the env_file
:
SC4S_LISTEN_DEFAULT_TCP_PORT=514\nSC4S_LISTEN_DEFAULT_UDP_PORT=514\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5425_PORT=5425\nSC4S_LISTEN_DEFAULT_TLS_PORT=6514\n
"},{"location":"gettingstarted/byoe-rhel8/#create-unique-dedicated-listening-ports","title":"Create unique dedicated listening ports","text":"For some source technologies, categorization by message content is not possible. To collect these sources, dedicate a unique listening port to a specific source. See Sources for more information.
"},{"location":"gettingstarted/docker-compose-MacOS/","title":"Install Docker Desktop for MacOS","text":"Refer to the \u201cMacOS\u201d section in your Docker documentation to set up your Docker Desktop for MacOS.
"},{"location":"gettingstarted/docker-compose-MacOS/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"You can run SC4S using either docker-compose
or the docker run
command in the command line. This topic focuses solely on using docker-compose
.
Create a directory on the server for local configurations and disk buffering. Make it available to all administrators, for example: /opt/sc4s/
.
Create a docker-compose.yml
file in your new directory, based on the provided template. By default, the latest container is automatically downloaded at each restart. As a best practice, consult this topic at the time of any new upgrade to check for any changes in the latest template.
version: \"3.7\"\nservices:\n sc4s:\n deploy:\n replicas: 2\n restart_policy:\n condition: on-failure\n image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n ports:\n - target: 514\n published: 514\n protocol: tcp\n - target: 514\n published: 514\n protocol: udp\n - target: 601\n published: 601\n protocol: tcp\n - target: 6514\n published: 6514\n protocol: tcp\n env_file:\n - /opt/sc4s/env_file\n volumes:\n - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n# - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n# - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n splunk-sc4s-var:\n
/opt/sc4s
folder as shared.Create a local volume that will contain the disk buffer files in the event of a communication failure to the upstream destinations. This volume also keeps track of the state of syslog-ng between restarts, and in particular the state of the disk buffer. Be sure to account for disk space requirements for the Docker volume. This volume is located in /var/lib/docker/volumes/
and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S disk buffer configuration for more information.
sudo docker volume create splunk-sc4s-var\n
Create the subdirectories: /opt/sc4s/local
, /opt/sc4s/archive
, and /opt/sc4s/tls
. Make sure these directories match the volume mounts specified indocker-compose.yml
.
Create a file named /opt/sc4s/env_file
.
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
6. Update the following environment variables and values in /opt/sc4s/env_file
: Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the values for your environment. Do not configure HEC Acknowledgement when you deploy the HEC token on the Splunk side; syslog-ng http destination does not support this feature.
The default number of SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers (threads) should deviate from this.
Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line.
Each listening port on the container must be mapped to a listening port on the host. Make sure to update the docker-compose.yml
file when adding listening ports for new data sources.
To configure unique ports:
/opt/sc4s/env_file
file to include the port-specific environment variables. See the Sources documentation to identify the specific environment variables that are mapped to each data source vendor and technology.target
stanzas in the ports
section of the file (after the default ports). For example, the following additional target
and published
lines provide for 21 additional technology-specific UDP and TCP ports: - target: 5000-5020\n published: 5000-5020\n protocol: tcp\n - target: 5000-5020\n published: 5000-5020\n protocol: udp\n
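The matching env_file entries referenced in the first step might look like the following sketch; checkpoint is used purely as an example here, and the exact variable names come from the Sources documentation.
# /opt/sc4s/env_file additions for a dedicated source port
SC4S_LISTEN_CHECKPOINT_TCP_PORT=5000
SC4S_LISTEN_CHECKPOINT_UDP_PORT=5000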
For more information about configuration refer to Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/docker-compose-MacOS/#startrestart-sc4s","title":"Start/Restart SC4S","text":"From the catalog where you created compose file, execute:
docker-compose up\n
Otherwise use docker-compose
with -f
flag pointing to the compose file docker-compose up -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose-MacOS/#stop-sc4s","title":"Stop SC4S","text":"Execute:
docker-compose down \n
or docker-compose down -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose-MacOS/#verify-proper-operation","title":"Verify Proper Operation","text":"SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
When the startup process proceeds normally, you should see an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
If you do not see this, try the following steps to troubleshoot:
docker logs <container_name>\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see the output above, proceed to the \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.
"},{"location":"gettingstarted/docker-compose/","title":"Install Docker Desktop","text":"Refer to your Docker documentation to set up your Docker Desktop.
"},{"location":"gettingstarted/docker-compose/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"You can run SC4S with docker-compose
, or in the command line using the command docker run
. Both options are described in this topic.
/opt/sc4s/
. If you are using docker-compose
, create a docker-compose.yml
file in this directory using the template provided here. By default, the latest SC4S image is automatically downloaded at each restart. As a best practice, check back here regularly for any changes made to the latest template is incorporated into production before you relaunch with Docker Compose.version: \"3.7\"\nservices:\n sc4s:\n deploy:\n replicas: 2\n restart_policy:\n condition: on-failure\n image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n ports:\n - target: 514\n published: 514\n protocol: tcp\n - target: 514\n published: 514\n protocol: udp\n - target: 601\n published: 601\n protocol: tcp\n - target: 6514\n published: 6514\n protocol: tcp\n env_file:\n - /opt/sc4s/env_file\n volumes:\n - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n# - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n# - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n splunk-sc4s-var:\n
/opt/sc4s
folder as shared./var/lib/docker/volumes/
and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S Disk Buffer Configuration in the Configuration topic for more information.sudo docker volume create splunk-sc4s-var\n
Create the subdirectories: /opt/sc4s/local
, /opt/sc4s/archive
, and /opt/sc4s/tls
. If you are using the docker-compose.yml
file, make sure these directories match the volume mounts specified indocker-compose.yml
.
Create a file named /opt/sc4s/env_file
.
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
6. Update the following environment variables and values to /opt/sc4s/env_file
: SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the values for your environment. Do not configure HEC Acknowledgement when you deploy the HEC token on the Splunk side; syslog-ng http destination does not support this feature. SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers (threads) should deviate from this.NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.
For more information about configuration, see Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/docker-compose/#start-or-restart-sc4s","title":"Start or restart SC4S","text":"docker-compose
. Be sure to map the listening ports (-p
arguments) according to your needs:docker run -p 514:514 -p 514:514/udp -p 6514:6514 -p 5000-5020:5000-5020 -p 5000-5020:5000-5020/udp \\\n --env-file=/opt/sc4s/env_file \\\n --name SC4S \\\n --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n
docker compose
, from the catalog where you created compose file execute: docker compose up\n
Otherwise use docker compose
with -f
flag pointing to the compose file:
docker compose up -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose/#stop-sc4s","title":"Stop SC4S","text":"If the container is run directly from the CLI, stop the container using the docker stop <containerID>
command.
If using docker compose
, execute:
docker compose down \n
or docker compose down -f /path/to/compose/file/docker-compose.yml\n
"},{"location":"gettingstarted/docker-compose/#validate-your-configuration","title":"Validate your configuration","text":"SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following when the startup process proceeds normally:
syslog-ng starting up; version='3.28.1'\n
If you do not see this, try the following steps to troubleshoot:
docker logs SC4S\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see the output above, see \u201cTroubleshoot SC4S server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.
"},{"location":"gettingstarted/docker-podman-offline/","title":"Install a container while offline","text":"You can stage SC4S by downloading the image so that it can be loaded on a host machine, for example on an airgapped system, without internet connectivity.
oci_container.tgz
from our Github Page. The following example downloads v3.23.1, replace the URL with the latest release or pre-release version as desired:sudo wget https://github.com/splunk/splunk-connect-for-syslog/releases/download/v3.23.1/oci_container.tar.gz\n
<podman or docker> load < oci_container.tar.gz\n
Loaded image: ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1\n
Use the container ID to create a local label:
<podman or docker> tag ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1 sc4slocal:latest\n
Use the local label sc4slocal:latest
in the relevant unit or YAML file to launch SC4S by setting the SC4S_IMAGE
environment variable in the unit file, or the relevant image:
tag if you are using Docker Compose/Swarm. This label will cause the runtime to select the locally loaded image, and will not attempt to obtain the container image from the internet.
Environment=\"SC4S_IMAGE=sc4slocal:latest\"\n
7. Remove the entry from the relevant unit file when your configuration uses systemd. This is because an external connection to pull the container is no longer needed or available: ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n
"},{"location":"gettingstarted/docker-systemd-general/","title":"Install Docker CE","text":""},{"location":"gettingstarted/docker-systemd-general/#before-you-begin","title":"Before you begin","text":"Before you start:
This topic provides the most recent unit file. By default, the latest SC4S image is automatically downloaded at each restart. Consult this topic when you upgrade your SC4S installation and check for changes to the provided template unit file. Make sure these changes are incorporated into your configuration before you relaunch with systemd.
/lib/systemd/system/sc4s.service
based on the provided template:[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target docker.service\nAfter=NetworkManager.service network-online.target docker.service\nRequires=docker.service\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/docker run \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --env-file=/opt/sc4s/env_file \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
sudo docker volume create splunk-sc4s-var\n
Account for disk space requirements for the new Docker volume. The Docker volume can grow significantly if there is an extended outage to the SC4S destinations. This volume can be found at /var/lib/docker/volumes/
. See SC4S Disk Buffer Configuration.
Create the following subdirectories:
/opt/sc4s/local
/opt/sc4s/archive
/opt/sc4s/tls
/opt/sc4s/env_file
and add the following environment variables and values:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side, the underlying syslog-ng HTTP destination does not support this feature.
The default number of SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers should deviate from this.
Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example in step 5.
For more information see Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/docker-systemd-general/#configure-sc4s-for-systemd","title":"Configure SC4S for systemd","text":"To configure SC4S for systemd run the following commands:
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"To restart SC4S run the following command:
sudo systemctl restart sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#implement-unit-file-changes","title":"Implement unit file changes","text":"If you made changes to the configuration unit file, for example to configure with dedicated ports, you must stop SC4S and re-run the systemd configuration commands to implement your changes.
sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#validate-your-configuration","title":"Validate your configuration","text":"SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicate with Splunk. To do this, execute the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
You should see an event similar to the following:
syslog-ng starting up; version='3.28.1'\n
The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:
docker logs SC4S\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
You must tune the host Linux OS receive buffer size to match the SC4S default. This helps to avoid event dropping at the network level. The default receive buffer for SC4S is 16 MB for UDP traffic, which should be acceptable for most environments. To set the host OS kernel to match your buffer:
Edit /etc/sysctl.conf
using the following whole-byte values corresponding to 16 MB:
net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n
Apply to the kernel:
sysctl -p\n
To verify that the kernel does not drop packets, periodically monitor the buffer using the command netstat -su | grep \"receive errors\"
. Failure to tune the kernel for high-volume traffic results in message loss, which can be unpredictable and difficult to detect. The default values for receive kernel buffers in most distributions is 2 MB, which may not be adequate for your configuration.
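A short sketch of that verification on the host:
sysctl net.core.rmem_default net.core.rmem_max   # confirm the 16 MB values are active
netstat -su | grep "receive errors"              # a steadily increasing count indicates drops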
In many distributions, for example CentOS provisioned in AWS, IPv4 forwarding is not enabled by default. IPv4 forwarding must be enabled for container networking.
sudo sysctl net.ipv4.ip_forward
sudo sysctl net.ipv4.ip_forward=1
/usr/lib/sysctl.d/
, /run/sysctl.d/
, and /etc/sysctl.d/
. /etc/sysctl.d/
and put following setting there or find this specific setting in one of the existing configuration files and set the value to 1
.net.ipv4.ip_forward=1\n
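For example, a sketch of persisting the setting in a drop-in file (the file name is a placeholder) and reloading it:
echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-ipv4-forward.conf
sudo sysctl --system   # reload settings from /etc/sysctl.d/ and the other sysctl directories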
"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-2-create-your-local-directory-structure","title":"Step 2: Create your local directory structure","text":"Create the following three directories:
/opt/sc4s/local
: This directory is used as a mount point for local overrides and configurations. This empty local
directory populates with defaults and examples at the first invocation of SC4S for local configurations and context overrides. Do not change the directory structure of these files, as SC4S depends on the directory layout to read the local configurations properly. If necessary, you can change or add individual files.local/config/
directory four subdirectories let you provide support for device types that are not provided out of the box in SC4S. To get started, see the example log path template lp-example.conf.tmpl
and a filter example.conf
in the log_paths
and filters
subdirectories. Copy these as templates for your own log path development.local/context
directory, change the \u201cnon-example\u201d version of a file (e.g. splunk_metadata.csv
) to preserve the changes upon restart./opt/sc4s/archive
is a mount point for local storage of syslog events if the optional mount is uncommented. The events are written in the syslog-ng EWMM format. See the Configuration topic for information about the directory structure that the archive uses./opt/sc4s/tls
is a mount point for custom TLS certificates if the optional mount is uncommented.When you create these directories, make sure that they match the volume mounts specified in the sc4s.service unit file. Failure to do this will cause SC4S to abort at startup.
"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-3-select-a-container-runtime-and-sc4s-configuration","title":"Step 3: Select a Container Runtime and SC4S Configuration","text":"The table below shows possible ways to run SC4S using Docker or Podman with various management and orchestration systems.
Check your Podman or Docker documentation to see which operating systems are supported by your chosen container management tool. If the SC4S deployment model involves additional limitations or requirements regarding operating systems, you will find them in the column labeled \u2018Additional Operating Systems Requirements\u2019.
Container Runtime and Orchestration Additional Operating Systems Requirements MicroK8s Ubuntu with Microk8s Podman + systemd Docker CE + systemd Docker Desktop + Compose MacOS Docker Compose Bring your own Environment RHEL or CentOS 8.1 & 8.2 (best option) Offline Container Installation Ansible+Docker Swarm Ansible+Podman Ansible+Docker"},{"location":"gettingstarted/getting-started-splunk-setup/","title":"Splunk setup","text":"To ensure proper integration for SC4S and Splunk, perform the following tasks in your Splunk instance:
SC4S maps each sourcetype to the following indexes by default. You will also need to create these indexes in Splunk:
email
epav
epintel
fireeye
gitops
infraops
netauth
netdlp
netdns
netfw
netids
netlb
netops
netwaf
netproxy
netipam
oswin
oswinsec
osnix
print
_metrics
(Optional opt-in for SC4S operational metrics; ensure this is created as a metrics index)If you use custom indexes in SC4S you must also create them in Splunk. See Create custom indexes for more information.
"},{"location":"gettingstarted/getting-started-splunk-setup/#step-2-configure-your-http-event-collector","title":"Step 2: Configure your HTTP event collector","text":"See Use the HTTP event collector for HEC configuration instructions based on your Splunk type.
Keep in mind the following best practices specific to HEC for SC4S:
_metrics
and all event destination indexes.lastChanceIndex
. If you do populate this field, take extreme care to keep it up to date; an attempt to send data to an index that is not in this list results in a 400
error from the HEC endpoint. The lastChanceIndex
will not be consulted if the index specified in the event is not configured on Splunk and the entire batch is then not sent to Splunk.In some configurations, you should ensure output balancing from SC4S to Splunk indexers. To do this, you create a load balancing mechanism between SC4S and Splunk indexers. Note that this should not be confused with load balancing between sources and SC4S.
When configuring your load balancing mechanism, keep in mind the following:
Splunk provides an implementation for SC4S deployment with MicroK8s using a single-server MicroK8s as the deployment model. Clustering has some tradeoffs and should be only considered on a deployment-specific basis.
You can independently replicate the model deployment on different distributions of Kubernetes. Do not attempt this unless you have advanced understanding of Kubernetes and are willing and able to maintain this configuration regularly.
SC4S with MicroK8s leverages features of MicroK8s:
Splunk maintains container images, but it doesn\u2019t directly support or otherwise provide resolutions for issues within the runtime environment.
"},{"location":"gettingstarted/k8s-microk8s/#step-1-allocate-ip-addresses","title":"Step 1: Allocate IP addresses","text":"This configuration requires as least two IP addresses: one for the host and one for the internal load balancer. We suggest allocating three IP addresses for the host and 5-10 IP addresses for later use.
"},{"location":"gettingstarted/k8s-microk8s/#step-2-install-microk8s","title":"Step 2: Install MicroK8s","text":"To install MicroK8s:
sudo snap install microk8s --classic --channel=1.24\nsudo usermod -a -G microk8s $USER\nsudo chown -f -R $USER ~/.kube\nsu - $USER\nmicrok8s status --wait-ready\n
"},{"location":"gettingstarted/k8s-microk8s/#step-3-set-up-your-add-ons","title":"Step 3: Set up your add-ons","text":"When you install metallb
you will be prompted for one or more IPs to use as entry points. If you do not plan to enable clustering, then this IP may be the same IP as the host. If you do plan to enable clustering this IP should not be assigned to the host.
A single IP in CIDR format is x.x.x.x/32. Use CIDR or range syntax.
microk8s enable dns \nmicrok8s enable community\nmicrok8s enable metallb \nmicrok8s enable rbac \nmicrok8s enable storage \nmicrok8s enable openebs \nmicrok8s enable helm3\nmicrok8s status --wait-ready\n
"},{"location":"gettingstarted/k8s-microk8s/#step-4-add-an-sc4s-helm-repository","title":"Step 4: Add an SC4S Helm repository","text":"To add an SC4S Helm repository:
microk8s helm3 repo add splunk-connect-for-syslog https://splunk.github.io/splunk-connect-for-syslog\nmicrok8s helm3 repo update\n
"},{"location":"gettingstarted/k8s-microk8s/#step-5-create-a-valuesyaml-file","title":"Step 5: Create a values.yaml
file","text":"Create the configuration file values.yaml
. You can provide HEC token as a Kubernetes secret or in plain text.
values.yaml
file:#values.yaml\nsplunk:\n hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n hec_token: \"00000000-0000-0000-0000-000000000000\"\n hec_verify_tls: \"yes\"\n
microk8s helm3 install sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
values.yaml
file:#values.yaml\nsplunk:\n hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n hec_verify_tls: \"yes\"\n
export HEC_TOKEN=\"00000000-0000-0000-0000-000000000000\"\nmicrok8s helm3 install sc4s --set splunk.hec_token=$HEC_TOKEN splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
Whenever the image is upgraded or when changes are made to the values.yaml
file and should be applied, run the command:
microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#install-and-configure-sc4s-for-high-availability-ha","title":"Install and configure SC4S for High Availability (HA)","text":"Three identically-sized nodes are required for HA. See your Microk8s documentation for more information.
Update the configuration file:
#values.yaml\nreplicaCount: 6 #2x node count\nsplunk:\n hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n hec_token: \"00000000-0000-0000-0000-000000000000\"\n hec_verify_tls: \"yes\"\n
Upgrade SC4S to apply the new configuration:
microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
values.yaml
","text":"With helm-based deployment you cannot configure environment variables and context files directly. Instead, use the values.yaml
file to update your configuration, for example:
sc4s:\n # Certificate as a k8s Secret with tls.key and tls.crt fields\n # Ideally produced and managed by cert-manager.io\n existingCert: example-com-tls\n #\n vendor_product:\n - name: checkpoint\n ports:\n tcp: [9000] #Same as SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000\n udp: [9000]\n options:\n listen:\n old_host_rules: \"yes\" #Same as SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes\n\n - name: infoblox\n ports:\n tcp: [9001, 9002]\n tls: [9003]\n - name: fortinet\n ports:\n ietf_udp:\n - 9100\n - 9101\n context_files:\n splunk_metadata.csv: |-\n cisco_meraki,index,foo\n host.csv: |-\n 192.168.1.1,foo\n 192.168.1.2,moon\n
Use the config_files
and context_files
variables to specify configuration and context files that are passed to SC4S.
config_files
: This variable contains a dictionary that maps the name of the configuration file to its content in the form of a YAML block scalar.context_file
: This variable contains a dictionary that maps the name of the context files to its content in the form of a YAML block scalar. The context files splunk_metadata.csv
and host.csv
are passed with values.yaml
: sc4s:\n # Certificate as a k8s Secret with tls.key and tls.crt fields\n # Ideally produced and managed by cert-manager.io\n #\n vendor_product:\n - name: checkpoint\n ports:\n tcp: [9000] #Same as SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000\n udp: [9000]\n options:\n listen:\n old_host_rules: \"yes\" #Same as SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes\n\n - name: fortinet\n ports:\n ietf_udp:\n - 9100\n - 9101\n context_files:\n splunk_metadata.csv: |+\n cisco_meraki,index,foo\n cisco_asa,index,bar\n config_files:\n app-workaround-cisco_asa.conf: |+\n block parser app-postfilter-cisco_asa_metadata() {\n channel {\n rewrite {\n unset(value('fields.sc4s_recv_time'));\n };\n };\n };\n application app-postfilter-cisco_asa_metadata[sc4s-postfilter] {\n filter {\n 'cisco' eq \"${fields.sc4s_vendor}\"\n and 'asa' eq \"${fields.sc4s_product}\"\n };\n parser { app-postfilter-cisco_asa_metadata(); };\n };\n
You should expect your system to require two instances per node by default. Adjust requests and limits to allow each instance to use about 40% of each node, presuming no other workload is present.
resources:\n limits:\n cpu: 100m\n memory: 128Mi\n requests:\n cpu: 100m\n memory: 128Mi\n
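For illustration only: on a node with 8 vCPUs and 16 GiB of memory, a roughly 40% allocation per instance might look like the sketch below; the node size and values are assumptions, so size them for your own hardware and event volume:
resources:\n limits:\n cpu: 3200m\n memory: 6Gi\n requests:\n cpu: 3200m\n memory: 6Gi\n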
"},{"location":"gettingstarted/podman-systemd-general/","title":"Install podman","text":"See Podman product installation docs for information about working with your Podman installation.
Before performing the tasks described in this topic, make sure you are familiar with using IPv4 forwarding with SC4S. See IPv4 forwarding .
"},{"location":"gettingstarted/podman-systemd-general/#initial-setup","title":"Initial Setup","text":"NOTE: Make sure to use the latest unit file, which is provided here, with the current release. By default, the latest container is automatically downloaded at each restart. As a best practice, check back here regularly for any changes made to the latest template unit file is incorporated into production before you relaunch with systemd.
/lib/systemd/system/sc4s.service
based on the following template:[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/podman run \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --env-file=/opt/sc4s/env_file \\\n --health-cmd=\"/healthcheck.sh\" \\\n --health-interval=10s --health-retries=6 --health-timeout=6s \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
sudo podman volume create splunk-sc4s-var\n
NOTE: Be sure to account for disk space requirements for the podman volume you create. This volume will be located in /var/lib/containers/storage/volumes/
and could grow significantly if there is an extended outage to the SC4S destinations (typically HEC endpoints). See the \u201cSC4S Disk Buffer Configuration\u201d section on the Configuration page for more info.
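To see where the volume lives on disk and how much space it currently consumes, standard Podman and coreutils commands are sufficient; the name below assumes the splunk-sc4s-var volume created above:
sudo podman volume inspect splunk-sc4s-var --format '{{ .Mountpoint }}'\nsudo du -sh /var/lib/containers/storage/volumes/splunk-sc4s-var\n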
/opt/sc4s/local
* /opt/sc4s/archive
* /opt/sc4s/tls
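A minimal way to create these mount points, matching the paths referenced in the unit file above:
sudo mkdir -p /opt/sc4s/local /opt/sc4s/archive /opt/sc4s/tls\n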
/opt/sc4s/env_file
and add the following environment variables and values:SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL
and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN
to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side; the underlying syslog-ng http destination does not support this feature. The default value for SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS
is 10. Consult the community if you feel the number of workers (threads) should deviate from this.NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.
For more information about configuration refer to Docker and Podman basic configurations and detailed configuration.
"},{"location":"gettingstarted/podman-systemd-general/#configure-sc4s-for-systemd-and-start-sc4s","title":"Configure SC4S for systemd and start SC4S","text":"sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"sudo systemctl restart sc4s\n
If you have made changes to the configuration unit file, for example, in order to configure dedicated ports, you must first stop SC4S and re-run the systemd configuration commands:
sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#stop-sc4s","title":"Stop SC4S","text":"sudo systemctl stop sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#verify-proper-operation","title":"Verify Proper Operation","text":"SC4S has a number of \u201cpreflight\u201d checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. After this step is complete, verify SC4S is properly communicating with Splunk by executing the following search in Splunk:
index=* sourcetype=sc4s:events \"starting up\"\n
This should yield an event similar to the following when the startup process proceeds normally (without syntax errors).
syslog-ng starting up; version='3.28.1'\n
If you do not see this, try the following before proceeding to deeper-level troubleshooting:
podman logs SC4S\n
You should see events similar to those below in the output:
syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If the output does not display, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
"},{"location":"gettingstarted/podman-systemd-general/#sc4s-non-root-operation","title":"SC4S non-root operation","text":""},{"location":"gettingstarted/podman-systemd-general/#note","title":"NOTE:","text":"Operating as a non-root user makes it impossible to use standard ports 514 and 601. Many devices cannot alter their destination port, so this operation may only be appropriate for cases where accepting syslog data from the public internet cannot be avoided.
"},{"location":"gettingstarted/podman-systemd-general/#prequisites","title":"Prequisites","text":"Podman
and slirp4netns
must be installed.
Increase the number of user namespaces. Execute the following with sudo privileges:
$ echo \"user.max_user_namespaces=28633\" > /etc/sysctl.d/userns.conf \n$ sysctl -p /etc/sysctl.d/userns.conf\n
Create a non-root user from which to run SC4S and to prepare Podman for non-root operations:
sudo useradd -m -d /home/sc4s -s /bin/bash sc4s\nsudo passwd sc4s # type password here\nsudo su - sc4s\nmkdir -p /home/sc4s/local\nmkdir -p /home/sc4s/archive\nmkdir -p /home/sc4s/tls\npodman system migrate\n
Load the new environment variables. To do this, temporarily switch to any other user, and then log back in as the SC4S user. When logging in as the SC4S user, don\u2019t use the \u2018su\u2019 command, as it won\u2019t load the new variables. Instead, you can use, for example, the command \u2018ssh sc4s@localhost\u2019.
Create unit file in ~/.config/systemd/user/sc4s.service
with the following content:
[Unit]\nUser=sc4s\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n[Install]\nWantedBy=multi-user.target\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n# Optional mount point for local overrides and configuration\nEnvironment=\"SC4S_LOCAL_MOUNT=/home/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/home/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/home/sc4s/tls:/etc/syslog-ng/tls:z\"\nTimeoutStartSec=0\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n# Note: The path /usr/bin/bash may vary based on your operating system.\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl --user set-environment SC4SHOST=$(hostname -s)\"\nExecStart=/usr/bin/podman run -p 2514:514 -p 2514:514/udp -p 6514:6514 \\\n -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n -v \"$SC4S_PERSIST_MOUNT\" \\\n -v \"$SC4S_LOCAL_MOUNT\" \\\n -v \"$SC4S_ARCHIVE_MOUNT\" \\\n -v \"$SC4S_TLS_MOUNT\" \\\n --env-file=/home/sc4s/env_file \\\n --health-cmd=\"/healthcheck.sh\" \\\n --health-interval=10s --health-retries=6 --health-timeout=6s \\\n --network host \\\n --name SC4S \\\n --rm $SC4S_IMAGE\nRestart=on-abnormal\n
Create your env_file
file at /home/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\nSC4S_LISTEN_DEFAULT_TCP_PORT=8514\nSC4S_LISTEN_DEFAULT_UDP_PORT=8514\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=8601\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=8601\n
To run the service as a non-root user, run the systemctl
command with --user
flag:
systemctl --user daemon-reload\nsystemctl --user enable sc4s\nsystemctl --user start sc4s\n
The remainder of the setup can be found in the main setup instructions.
"},{"location":"gettingstarted/quickstart_guide/","title":"Quickstart Guide","text":"This guide will enable you to quickly implement basic changes to your Splunk instance and set up a simple SC4S installation. It\u2019s a great starting point for working with SC4S and establishing a minimal operational solution. The same steps are thoroughly described in the Splunk Setup and Runtime configuration sections.
"},{"location":"gettingstarted/quickstart_guide/#splunk-setup","title":"Splunk setup","text":"Create the following default indexes that are used by SC4S:
email
epav
fireeye
gitops
infraops
netauth
netdlp
netdns
netfw
netids
netops
netwaf
netproxy
netipam
oswinsec
osnix
_metrics
(Optional opt-in for SC4S operational metrics; ensure this is created as a metrics index). Create a HEC token for SC4S. When filling out the form for the token, leave the \u201cSelected Indexes\u201d pane blank and specify that a lastChanceIndex
be created so that all data received by SC4S will have a target destination in Splunk.
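If you prefer to script index creation, a sketch using the Splunk CLI is shown below; it assumes a default /opt/splunk installation path and appropriate admin credentials:
for idx in email epav fireeye gitops infraops netauth netdlp netdns netfw netids netops netwaf netproxy netipam oswinsec osnix; do\n /opt/splunk/bin/splunk add index \"$idx\"\ndone\n# optional, opt-in metrics index\n/opt/splunk/bin/splunk add index _metrics -datatype metric\n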
a. Add the following to /etc/sysctl.conf
:
```\nnet.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n```\n
b. Apply to the kernel:
```\nsysctl -p\n```\n
Ensure the kernel is not dropping packets:
netstat -su | grep \"receive errors\"\n
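The counter is cumulative, so watch it over time while SC4S is under load rather than relying on a single reading; for example (the interval is arbitrary):
watch -n 30 'netstat -su | grep \"receive errors\"'\n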
Create the systemd unit file /lib/systemd/system/sc4s.service
.
Copy and paste from the SC4S sample unit file (Docker) or SC4S sample unit file (Podman).
Install Podman or Docker:
sudo yum -y install podman\n
or sudo yum install docker-engine -y\n
Create a Podman/Docker local volume that will contain the disk buffer files and other SC4S state files (choose one in the command below):
sudo podman|docker volume create splunk-sc4s-var\n
Create directories to be used as a mount point for local overrides and configurations:
mkdir /opt/sc4s/local
mkdir /opt/sc4s/archive
mkdir /opt/sc4s/tls
Create the environment file /opt/sc4s/env_file
and replace the HEC_URL and HEC_TOKEN as necessary:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\n SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n #Uncomment the following line if using untrusted SSL certificates\n #SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
Configure SC4S for systemd and start SC4S:
sudo systemctl daemon-reload
sudo systemctl enable sc4s
sudo systemctl start sc4s
Check podman/docker logs for errors:
sudo podman|docker logs SC4S\n
Search on Splunk for successful installation of SC4S:
index=* sourcetype=sc4s:events \"starting up\"\n
Send sample data to the default UDP port 514 of the SC4S host:
echo "Hello SC4S" > /dev/udp/<SC4S_ip>/514\n
When using Splunk Connect for Syslog to onboard a data source, the syslog-ng \u201capp-parser\u201d performs the operations that are traditionally performed at index-time by the corresponding Technical Add-on installed there. These index-time operations include linebreaking, source/sourcetype setting and timestamping. For this reason, if a data source is exclusively onboarded using SC4S then you will not need to install its corresponding Add-On on the indexers. You must, however, install the Add-on on the search head(s) for the user communities interested in this data source.
SC4S is designed to process \u201csyslog\u201d traffic, meaning IETF RFC 5424, legacy BSD syslog, RFC 3164 (an informational rather than standards-track document), and many \u201calmost\u201d syslog formats.
When possible, data sources are identified and processed based on characteristics of the event that make them unique compared to other events. For example, Cisco devices running IOS include \u201d : %\u201d followed by a string, while Arista EOS devices use a valid RFC3164 header with a value in the \u201cPROGRAM\u201d position and \u201c%\u201d as the first character of the \u201cMESSAGE\u201d portion. This allows two similar event structures to be processed correctly.
When identification by message content alone is not possible (for example, the \u201csshd\u201d program field is used by many vendors), additional \u201chint\u201d or guidance configuration allows SC4S to better classify events. Hints can be applied by defining a specific port, which is then used as a property of the event, or by configuring a host name/IP pattern. For example, \u201cVMWARE VSPHERE\u201d products emit a number of \u201cPROGRAM\u201d values that can be used to identify VMware-specific events in the syslog stream, and these can be sourcetyped automatically; however, because \u201csshd\u201d is not unique, those events will be treated as generic \u201cos:nix\u201d events until further configuration is applied. The administrator can take one of two actions to refine the processing for VMware: define a unique port, or configure a host pattern such as the one sketched below.
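For illustration, a host-pattern (sc4s-vps) parser in the same style as the other examples in this documentation could assign the VMware source fields; the host glob below is an assumption and must be adjusted to match your ESX host naming:
#/opt/sc4s/local/config/app-parsers/app-vps-example-vmware_vsphere.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-example-vmware_vsphere[sc4s-vps] {\n filter { \n host(\"esx-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('vmware')\n product('vsphere')\n ); \n }; \n};\n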
Many log sources can be supported without writing new code by using one of the flexible options known as app-parsers.
New supported sources are added regularly. To request support for a new source, please submit an issue with a description of the vendor/product, configuration information, and a compressed pcap (.zip) captured in a non-production environment.
Many sources can be self-supported. While we encourage sharing new sources via the GitHub project to promote consistency and develop best practices, there is no requirement to engage with the community.
Sources sending legacy, non-conformant RFC3164-like streams can be supported by creating an \u201cAlmost Syslog\u201d parser. In such a parser the goal is to process the syslog header, allowing other parsers to correctly parse and handle the event. The following example is taken from a currently supported format in which the source product uses an epoch value in the timestamp field.
#Example event\n #<134>1 1563249630.774247467 devicename security_event ids_alerted signature=1:28423:1 \n # In the example note the vendor incorrectly included \"1\" following PRI defined in RFC5424 as indicating a compliant message\n # The parser must remove the 1 before properly parsing\n # The epoch time is captured by regex\n # The epoch time is converted back into an RFC3306 date and provided to the parser\n block parser syslog_epoch-parser() { \n channel {\n filter { \n message('^(\\<\\d+\\>)(?:1(?= ))? ?(\\d{10,13}(?:\\.\\d+)?) (.*)', flags(store-matches));\n }; \n parser { \n date-parser(\n format('%s.%f', '%s')\n template(\"$2\")\n );\n };\n parser {\n syslog-parser(\n\n flags(assume-utf8, expect-hostname, guess-timezone)\n template(\"$1 $S_ISODATE $3\")\n );\n };\n rewrite(set_rfc3164_epoch); \n\n };\n };\n application syslog_epoch[sc4s-almost-syslog] {\n parser { syslog_epoch-parser(); }; \n };\n
"},{"location":"sources/#standard-syslog-using-message-parsing","title":"Standard Syslog using message parsing","text":"Syslog data conforming to RFC3164 or complying with RFC standards mentioned above can be processed with an app-parser allowing the use of the default port rather than requiring custom ports the following example take from a currently supported source uses the value of \u201cprogram\u201d to identify the source as this program value is unique. Care must be taken to write filter conditions strictly enough to not conflict with similar sources
block parser alcatel_switch-parser() { \n channel {\n rewrite {\n r_set_splunk_dest_default(\n index('netops')\n sourcetype('alcatel:switch')\n vendor('alcatel')\n product('switch')\n template('t_hdr_msg')\n ); \n }; \n\n\n };\n};\napplication alcatel_switch[sc4s-syslog] {\n filter { \n program('swlogd' type(string) flags(prefix));\n }; \n parser { alcatel_switch-parser(); }; \n};\n
"},{"location":"sources/#standard-syslog-vendor-product-by-source","title":"Standard Syslog vendor product by source","text":"In some cases standard syslog is also generic and can not be disambiguated from other sources by message content alone. When this happens and only a single source type is desired the \u201csimple\u201d option above is valid but requires managing a port. The following example allows use of a named port OR the vendor product by source configuration.
block parser dell_poweredge_cmc-parser() { \n channel {\n\n rewrite {\n r_set_splunk_dest_default(\n index('infraops')\n sourcetype('dell:poweredge:cmc:syslog')\n vendor('dell')\n product('poweredge')\n class('cmc')\n ); \n }; \n };\n};\napplication dell_poweredge_cmc[sc4s-network-source] {\n filter { \n (\"${.netsource.sc4s_vendor_product}\" eq \"dell_poweredge_cmc\"\n or \"${SOURCE}\" eq \"s_DELL_POWEREDGE_CMC\")\n and \"${fields.sc4s_vendor_product}\" eq \"\"\n }; \n\n parser { dell_poweredge_cmc-parser(); }; \n};\n
"},{"location":"sources/#filtering-events-from-output","title":"Filtering events from output","text":"In some cases specific events may be considered \u201cnoise\u201d and functionality must be implemented to prevent forwarding of these events to Splunk In version 2.0.0 of SC4S a new feature was implemented to improve the ease of use and efficiency of this progress.
The following example will \u201cnull_queue\u201d (drop) Cisco IOS device events at the debug level. Note that Cisco does not use the PRI to indicate DEBUG, so a message filter is required.
block parser cisco_ios_debug-postfilter() {\n channel {\n #In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible\n rewrite(r_set_dest_splunk_null_queue);\n };\n};\napplication cisco_ios_debug-postfilter[sc4s-postfilter] {\n filter {\n \"${fields.sc4s_vendor}\" eq \"cisco\" and\n \"${fields.sc4s_product}\" eq \"ios\"\n #Note regex reads as\n # start from first position\n # Any atleast 1 char that is not a `-`\n # constant '-7-'\n and message('^%[^\\-]+-7-');\n };\n parser { cisco_ios_debug-postfilter(); };\n};\n
"},{"location":"sources/#another-example-to-drop-events-based-on-src-and-action-values-in-message","title":"Another example to drop events based on \u201csrc\u201d and \u201caction\u201d values in message","text":"#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-checkpoint_drop\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() { \n channel {\n rewrite(r_set_dest_splunk_null_queue);\n };\n};\n\napplication app-dest-rewrite-checkpoint_drop-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n filter {\n match('checkpoint' value('fields.sc4s_vendor') type(string))\n and match('syslog' value('fields.sc4s_product') type(string))\n\n and match('Drop' value('.SDATA.sc4s@2620.action') type(string))\n and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) );\n\n }; \n parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); }; \n};\n
"},{"location":"sources/#the-sc4s-fallback-sourcetype","title":"The SC4S \u201cfallback\u201d sourcetype","text":"If SC4S receives an event on port 514 which has no soup filter, that event will be given a \u201cfallback\u201d sourcetype. If you see events in Splunk with the fallback sourcetype, then you should figure out what source the events are from and determine why these events are not being sourcetyped correctly. The most common reason for events categorized as \u201cfallback\u201d is the lack of a SC4S filter for that source, and in some cases a misconfigured relay which alters the integrity of the message format. In most cases this means a new SC4S filter must be developed. In this situation you can either build a filter or file an issue with the community to request help.
The \u201cfallback\u201d sourcetype is formatted in JSON to allow the administrator to see the constituent syslog-ng \u201cmacros\u201d (fields) that have been automatically parsed by the syslog-ng server. An RFC3164 (legacy BSD syslog) \u201con the wire\u201d raw message is usually (but unfortunately not always) comprised of the following syslog-ng macros, in this order and spacing:
<$PRI> $HOST $LEGACY_MSGHDR$MESSAGE\n
These fields can be very useful in building a new filter for that sourcetype. In addition, the indexed field sc4s_syslog_format
is helpful in determining if the incoming message is standard RFC3164. A value of anything other than rfc3164
or rfc5424_strict
indicates a vendor perturbation of standard syslog, which will warrant more careful examination when building a filter.
A key aspect of SC4S is to properly set Splunk metadata prior to the data arriving in Splunk (and before any TA processing takes place). The filters apply the proper index, source, sourcetype, host, and timestamp metadata automatically for each individual data source. Proper values for this metadata (including a recommended index) are included with all \u201cout-of-the-box\u201d log paths included with SC4S and are chosen to properly interface with the corresponding TA in Splunk. The administrator will need to ensure that all recommended indexes are created to accept this data if the defaults are not changed.
It is understood that default values will need to be changed in many installations. Each source documented in this section has a table entitled \u201cSourcetype and Index Configuration\u201d, which highlights the default index and sourcetype for each source. See the section \u201cSC4S metadata configuration\u201d in the \u201cConfiguration\u201d page for more information on how to override the default values in this table.
"},{"location":"sources/#unique-listening-ports","title":"Unique listening ports","text":"SC4S supports unique listening ports for each source technology/log path (e.g. Cisco ASA), which is useful when the device is sending data on a port different from the typical default syslog port (UDP port 514). In some cases, when the source device emits data that is not able to be distinguished from other device types, a unique port is sometimes required. The specific environment variables used for setting \u201cunique ports\u201d are outlined in each source document in this section.
Using the default ports as unique listening ports is discouraged since it can lead to unintended consequences. There were cases of customers using port 514 as the unique listening port dedicated for a particular vendor and then sending other events to the same port, which caused some of those events to be misclassified.
In most cases only one \u201cunique port\u201d is needed for each source. However, SC4S also supports multiple network listening ports per source, which can be useful for a narrow set of compliance use cases. When configuring a source port variable to enable multiple ports, use a comma-separated list with no spaces (e.g. SC4S_LISTEN_CISCO_ASA_UDP_PORT=5005,6005
).
Because the unique listening port feature differentiates vendor and product based on the first two underscore (\u2018_\u2019) delimited segments of the variable name, it is possible to filter events by an extra string added to the product. For example, if several devices of the same type send logs over different ports, it is possible to route them to different indexes based only on the port value while retaining the proper vendor and product fields. In general, it follows the convention:
SC4S_LISTEN_{VENDOR}_{PRODUCT}_{PROTOCOL}_PORT={PORT VALUE 1},{PORT VALUE 2}...\n
But for special use cases it can be extended to: SC4S_LISTEN_{VENDOR}_{PRODUCT}_{ADDITIONAL_STRING}_{PROTOCOL}_PORT={PORT VALUE},{PORT VALUE 2}...\n
This feature removes the need for complex pre/post filters. Example:
SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001_UDP_PORT=18514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001\n
SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002_UDP_PORT=28514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002\n
"},{"location":"sources/base/cef/","title":"Common Event Format (CEF)","text":""},{"location":"sources/base/cef/#product-various-products-that-send-cef-format-messages-via-syslog","title":"Product - Various products that send CEF-format messages via syslog","text":"Each CEF product should have their own source entry in this documentation set. In a departure from normal configuration, all CEF products should use the \u201cCEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the CEF log path handles all products sending events to SC4S in the CEF format. Examples of this include Arcsight, Imperva, and Cyberark. Therefore, the CEF environment variables for unique port, archive, etc. should be set only once.
If your deployment has multiple CEF devices that send to more than one port, set the CEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.
The source documentation included below is a reference baseline for any product that sends data using the CEF log path.
Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/base/cef/#splunk-metadata-with-cef-events","title":"Splunk Metadata with CEF events","text":"The keys (first column) in splunk_metadata.csv
for CEF data sources have a slightly different meaning than those for non-CEF ones. The typical vendor_product
syntax is instead replaced by checks against specific columns of the CEF event \u2013 namely the first, second, and fourth columns following the leading CEF:0
(\u201ccolumn 0\u201d). These specific columns refer to the CEF device_vendor
, device_product
, and device_event_class
, respectively. The third column, device_version
, is not used for metadata assignment.
SC4S sets metadata based on the first two columns, and (optionally) the fourth. While the key (first column) in the splunk_metadata
file for non-CEF sources uses a \u201cvendor_product\u201d syntax that is arbitrary, the syntax for this key for CEF events is based on the actual contents of columns 1,2 and 4 from the CEF event, namely:
device_vendor
_device_product
_device_class
The final device_class
portion is optional. Therefore, CEF entries in splunk_metadata
can have a key representing the vendor and product, and others representing a vendor and product coupled with one or more additional classes. This allows for more granular metadata assignment (or overrides).
Here is a snippet of a sample Imperva CEF event that includes a CEF device class entry (which is \u201cFirewall\u201d):
Apr 19 10:29:53 3.3.3.3 CEF:0|Imperva Inc.|SecureSphere|12.0.0|Firewall|SSL Untraceable Connection|Medium|\n
and the corresponding match in splunk_metadata.csv
:
Imperva Inc._SecureSphere_Firewall,sourcetype,imperva:waf:firewall:cef\n
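An index override for the same key follows the same pattern; for example (netwaf is only an illustration, use whatever index fits your environment):
Imperva Inc._SecureSphere_Firewall,index,netwaf\n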
"},{"location":"sources/base/cef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/base/cef/#default-source","title":"Default Source","text":"source notes Varies Varies"},{"location":"sources/base/cef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/cef/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/base/cef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_CEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/leef/","title":"Log Extended Event Format (LEEF)","text":""},{"location":"sources/base/leef/#product-various-products-that-send-leef-v1-and-v2-format-messages-via-syslog","title":"Product - Various products that send LEEF V1 and V2 format messages via syslog","text":"Each LEEF product should have their own source entry in this documentation set by vendor. In a departure from normal configuration, all LEEF products should use the \u201cLEEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the LEEF log path handles all products sending events to SC4S in the LEEF format. Examples of this include QRadar itself as well as other legacy systems. Therefore, the LEEF environment variables for unique port, archive, etc. should be set only once.
If your deployment has multiple LEEF devices that send to more than one port, set the LEEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.
The source documentation included below is a reference baseline for any product that sends data using the LEEF log path.
Some vendors implement LEEF v2.0 format events incorrectly, omitting the required \u201ckey=value\u201d separator field from the LEEF header, thus forcing the consumer to assume the default tab \\t
character. SC4S will correctly process this omission, but will not correctly process other non-compliant formats.
The LEEF format allows for the inclusion of a field devTime
containing the device timestamp and allows the sender to also specify the format of this timestamp in another field called devTimeFormat
, which uses the Java Time format. SC4S uses syslog-ng strptime format which is not directly translatable to the Java Time format. Therefore, SC4S has provided support for the following common formats. If needed, additional time formats can be requested via an issue on github.
'%s.%f',\n '%s',\n '%b %d %H:%M:%S.%f',\n '%b %d %H:%M:%S',\n '%b %d %Y %H:%M:%S.%f',\n '%b %e %Y %H:%M:%S',\n '%b %e %H:%M:%S.%f',\n '%b %e %H:%M:%S',\n '%b %e %Y %H:%M:%S.%f',\n '%b %e %Y %H:%M:%S' \n
Ref Link Splunk Add-on LEEF None Product Manual https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_LEEF_Format_Guide_intro.html"},{"location":"sources/base/leef/#splunk-metadata-with-leef-events","title":"Splunk Metadata with LEEF events","text":"The keys (first column) in splunk_metadata.csv
for LEEF data sources have a slightly different meaning than those for non-LEEF ones. The typical vendor_product
syntax is instead replaced by checks against specific columns of the LEEF event \u2013 namely the first and second, columns following the leading LEEF:VERSION
(\u201ccolumn 0\u201d). These specific columns refer to the LEEF device_vendor
, and device_product
, respectively.
device_vendor
_device_product
Here is a snippet of a sample LANCOPE event in LEEF 2.0 format:
<111>Apr 19 10:29:53 3.3.3.3 LEEF:2.0|Lancope|StealthWatch|1.0|41|^|src=192.0.2.0^dst=172.50.123.1^sev=5^cat=anomaly^srcPort=81^dstPort=21^usrName=joe.black\n
and the corresponding match in splunk_metadata.csv
:
Lancope_StealthWatch,source,lancope:stealthwatch\n
"},{"location":"sources/base/leef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes LEEF:1 Common sourcetype for all LEEF v1 events LEEF:2:<separator>
Common sourcetype for all LEEF v2 events separator
is the printable literal or hex value of the separator used in the event"},{"location":"sources/base/leef/#default-source","title":"Default Source","text":"source notes vendor
:product
Varies"},{"location":"sources/base/leef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/leef/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/base/leef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_LEEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_LEEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/nix/","title":"Generic *NIX","text":"Many appliance vendor utilize Linux and BSD distributions as the foundation of the solution. When configured to log via syslog, these devices\u2019 OS logs (from a security perspective) can be monitored using the common Splunk Nix TA.
Note: This is NOT a replacement for or alternative to the Splunk Universal forwarder on Linux and Unix. For general-purpose server applications, the Universal Forwarder offers more comprehensive collection of events and metrics appropriate for both security and operations use cases.
Ref Link Splunk Add-on https://splunkbase.splunk.com/app/833/"},{"location":"sources/base/nix/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes nix:syslog None"},{"location":"sources/base/nix/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes nix_syslog nix:syslog osnix none"},{"location":"sources/base/nix/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/base/nix/#setup-and-configuration","title":"Setup and Configuration","text":"The SIMPLE source configuration allows configuration of a log path for SC4S using a single port to a single index/sourcetype combination to quickly onboard new sources that have not been formally supported in the product. Source data must use RFC5424 or a common variant of RFC3164 formatting.
The keys (first column) in splunk_metadata.csv
for SIMPLE data sources is a user-created key using the vendor_product
convention. For example, to on-board a new product first firewall
using a source type of first:firewall
and index netfw
, add the following two lines to the configuration file as shown:
first_firewall,index,netfw\nfirst_firewall,sourcetype,first:firewall\n
"},{"location":"sources/base/simple/#options","title":"Options","text":"For the variables below, replace VENDOR_PRODUCT
with the key (converted to upper case) used in the splunk_metadata.csv
. Based on the example above, to establish a tcp listener for first firewall
we would use SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT
.
SIMPLE
data sources must use RFC5424 or a common variant of RFC3164 formatting.SIMPLE
data source must listen on its own unique port list. Port overlap with other sources, either SIMPLE
ones or those served by regular log paths, are not allowed and will cause an error at startup.splunk_metadata.csv
must be in the form vendor_product
(lower case).SIMPLE
environment variables must have a core of VENDOR_PRODUCT
(upper case).SIMPLE
form of these LISTEN
variables after a regular SC4S log path is developed for a given source. You can, of course, continue to listen for this source on the same unique ports after having developed the new log path, but use the SC4S_LISTEN_<VENDOR_PRODUCT>_<protocol>_PORT
form of the variable to ensure the newly developed log path will listen on the specified unique ports.The product has been purchased and republished under a new product name by Tenable this configuration is obsolete.
"},{"location":"sources/vendor/Alsid/Alsid/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-aruba_ap.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-aruba_ap[sc4s-vps] {\n filter { \n host(\"aruba-ap-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('aruba')\n product('ap')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Aruba/clearpass/","title":"Clearpass","text":""},{"location":"sources/vendor/Aruba/clearpass/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-aruba_clearpass.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-aruba_clearpass[sc4s-vps] {\n filter { \n host(\"aruba-cp-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('aruba')\n product('clearpass')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Avaya/","title":"SIP Manager","text":""},{"location":"sources/vendor/Avaya/#key-facts","title":"Key facts","text":"\\n
Use of TCP will cause dataloss#/opt/sc4s/local/config/app-parsers/app-vps-barracuda_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-barracuda_syslog[sc4s-vps] {\n filter { \n netmask(169.254.100.1/24)\n or host(\"barracuda\" type(string) flags(ignore-case))\n }; \n parser { \n p_set_netsource_fields(\n vendor('barracuda')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Barracuda/waf_on_prem/","title":"Barracuda WAF (On Premises)","text":""},{"location":"sources/vendor/Barracuda/waf_on_prem/#key-facts","title":"Key facts","text":"%Y-%m-%d %H:%M:%S.%f %z
Login to Symantec DLP and edit the Syslog Response rule. The default configuration will appear as follows
$POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n
DO NOT replace the text prepend the following literal
SymantecDLPAlert: \n
Result note the space between \u2018:\u2019 and \u2018$\u2019
SymantecDLPAlert: $POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n
"},{"location":"sources/vendor/Broadcom/dlp/#syslog-system-events","title":"Syslog System events","text":"<drive>:\\SymantecDLP\\Protect\\config
directory on Windows or the /opt/SymantecDLP/Protect/config
directory on Linux.Manager.properties
file.systemevent.syslog.format
systemevent.syslog.format= {0.EN_US} SymantecDLP: {1.EN_US} - {2.EN_US}
#/opt/sc4s/local/config/app-parsers/app-vps-symantec_dlp.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-symantec_dlp[sc4s-vps] {\n filter { \n #netmask(169.254.100.1/24)\n #host(\"-esx-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('symantec')\n product('dlp')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Broadcom/ep/","title":"Symantec Endpoint Protection (SEPM)","text":""},{"location":"sources/vendor/Broadcom/ep/#key-facts","title":"Key facts","text":"Symantec now Broadcom ProxySG/ASG is formerly known as the \u201cBluecoat\u201d proxy
Broadcom products are inclusive of products formerly marketed under Symantec and Bluecoat brands.
"},{"location":"sources/vendor/Broadcom/proxy/#key-facts","title":"Key facts","text":"<111>1 $(date)T$(x-bluecoat-hour-utc):$(x-bluecoat-minute-utc):$(x-bluecoat-second-utc)Z $(s-computername) ProxySG - splunk_format - c-ip=$(c-ip) rs-Content-Type=$(quot)$(rs(Content-Type))$(quot) cs-auth-groups=$(cs-auth-groups) cs-bytes=$(cs-bytes) cs-categories=$(cs-categories) cs-host=$(cs-host) cs-ip=$(cs-ip) cs-method=$(cs-method) cs-uri-port=$(cs-uri-port) cs-uri-scheme=$(cs-uri-scheme) cs-User-Agent=$(quot)$(cs(User-Agent))$(quot) cs-username=$(cs-username) dnslookup-time=$(dnslookup-time) duration=$(duration) rs-status=$(rs-status) rs-version=$(rs-version) s-action=$(s-action) s-ip=$(s-ip) service.name=$(service.name) service.group=$(service.group) s-supplier-ip=$(s-supplier-ip) s-supplier-name=$(s-supplier-name) sc-bytes=$(sc-bytes) sc-filter-result=$(sc-filter-result) sc-status=$(sc-status) time-taken=$(time-taken) x-exception-id=$(x-exception-id) x-virus-id=$(x-virus-id) c-url=$(quot)$(url)$(quot) cs-Referer=$(quot)$(cs(Referer))$(quot) c-cpu=$(c-cpu) connect-time=$(connect-time) cs-auth-groups=$(cs-auth-groups) cs-headerlength=$(cs-headerlength) cs-threat-risk=$(cs-threat-risk) r-ip=$(r-ip) r-supplier-ip=$(r-supplier-ip) rs-time-taken=$(rs-time-taken) rs-server=$(rs(server)) s-connect-type=$(s-connect-type) s-icap-status=$(s-icap-status) s-sitename=$(s-sitename) s-source-port=$(s-source-port) s-supplier-country=$(s-supplier-country) sc-Content-Encoding=$(sc(Content-Encoding)) sr-Accept-Encoding=$(sr(Accept-Encoding)) x-auth-credential-type=$(x-auth-credential-type) x-cookie-date=$(x-cookie-date) x-cs-certificate-subject=$(x-cs-certificate-subject) x-cs-connection-negotiated-cipher=$(x-cs-connection-negotiated-cipher) x-cs-connection-negotiated-cipher-size=$(x-cs-connection-negotiated-cipher-size) x-cs-connection-negotiated-ssl-version=$(x-cs-connection-negotiated-ssl-version) x-cs-ocsp-error=$(x-cs-ocsp-error) x-cs-Referer-uri=$(x-cs(Referer)-uri) x-cs-Referer-uri-address=$(x-cs(Referer)-uri-address) x-cs-Referer-uri-extension=$(x-cs(Referer)-uri-extension) x-cs-Referer-uri-host=$(x-cs(Referer)-uri-host) x-cs-Referer-uri-hostname=$(x-cs(Referer)-uri-hostname) x-cs-Referer-uri-path=$(x-cs(Referer)-uri-path) x-cs-Referer-uri-pathquery=$(x-cs(Referer)-uri-pathquery) x-cs-Referer-uri-port=$(x-cs(Referer)-uri-port) x-cs-Referer-uri-query=$(x-cs(Referer)-uri-query) x-cs-Referer-uri-scheme=$(x-cs(Referer)-uri-scheme) x-cs-Referer-uri-stem=$(x-cs(Referer)-uri-stem) x-exception-category=$(x-exception-category) x-exception-category-review-message=$(x-exception-category-review-message) x-exception-company-name=$(x-exception-company-name) x-exception-contact=$(x-exception-contact) x-exception-details=$(x-exception-details) x-exception-header=$(x-exception-header) x-exception-help=$(x-exception-help) x-exception-last-error=$(x-exception-last-error) x-exception-reason=$(x-exception-reason) x-exception-sourcefile=$(x-exception-sourcefile) x-exception-sourceline=$(x-exception-sourceline) x-exception-summary=$(x-exception-summary) x-icap-error-code=$(x-icap-error-code) x-rs-certificate-hostname=$(x-rs-certificate-hostname) x-rs-certificate-hostname-category=$(x-rs-certificate-hostname-category) x-rs-certificate-observed-errors=$(x-rs-certificate-observed-errors) x-rs-certificate-subject=$(x-rs-certificate-subject) x-rs-certificate-validate-status=$(x-rs-certificate-validate-status) x-rs-connection-negotiated-cipher=$(x-rs-connection-negotiated-cipher) 
x-rs-connection-negotiated-cipher-size=$(x-rs-connection-negotiated-cipher-size) x-rs-connection-negotiated-ssl-version=$(x-rs-connection-negotiated-ssl-version) x-rs-ocsp-error=$(x-rs-ocsp-error) cs-uri-extension=$(cs-uri-extension) cs-uri-path=$(cs-uri-path) cs-uri-query=$(quot)$(cs-uri-query)$(quot) c-uri-pathquery=$(c-uri-pathquery)\n
"},{"location":"sources/vendor/Broadcom/sslva/","title":"SSL Visibility Appliance","text":""},{"location":"sources/vendor/Broadcom/sslva/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app_parsers/app-vps-brocade_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-brocade_syslog[sc4s-vps] {\n filter { \n host(\"^test_brocade-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('brocade')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Buffalo/","title":"Terastation","text":""},{"location":"sources/vendor/Buffalo/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-buffalo_terastation.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-buffalo_terastation[sc4s-vps] {\n filter { \n host(\"^test_buffalo_terastation-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('buffalo')\n product('terastation')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Checkpoint/firewallos/","title":"Firewall OS","text":"Firewall OS format is by devices supporting a direct Syslog output
"},{"location":"sources/vendor/Checkpoint/firewallos/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual unknown"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log:fw:syslog None"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_fw cp_log:fw:syslog netops none"},{"location":"sources/vendor/Checkpoint/firewallos/#parser-configuration","title":"Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-checkpoint_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-checkpoint_fw[sc4s-vps] {\n filter { \n host(\"^checkpoint_fw-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('checkpoint')\n product('fw')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Checkpoint/logexporter_5424/","title":"Log Exporter (Syslog)","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#key-facts","title":"Key Facts","text":"514/TCP
.Checkpoint Software blades with a CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source metadata is left as their defaults.
key source index notes checkpoint_syslog_dlp dlp netdlp none checkpoint_syslog_email email email none checkpoint_syslog_firewall firewall netfw none checkpoint_syslog_sessions sessions netops none checkpoint_syslog_web web netproxy none checkpoint_syslog_audit audit netops none checkpoint_syslog_endpoint endpoint netops none checkpoint_syslog_network network netops checkpoint_syslog_ids ids netids checkpoint_syslog_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#source-configuration","title":"Source Configuration","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#splunk-side","title":"Splunk Side","text":"splunk_metadata.csv
file and set the index
and sourcetype
as required for the data source.cp
terminal and use the expert
command to log-in in expert mode.$EXPORTERDIR
shell variable is defined with:echo \"$EXPORTERDIR\"\n
$EXPORTERDIR/targets
with:LOG_EXPORTER_NAME='SyslogToSplunk' # Name this something unique but meaningful\nTARGET_SERVER='example.internal' # The indexer or heavy forwarder to send logs to. Can be an FQDN or an IP address.\nTARGET_PORT='514' # Syslog defaults to 514\nTARGET_PROTOCOL='tcp' # IETF Syslog is specifically TCP\n\ncp_log_export add name \"$LOG_EXPORTER_NAME\" target-server \"$TARGET_SERVER\" target-port \"$TARGET_PORT\" protocol \"$TARGET_PROTOCOL\" format 'syslog'\n
cp \"$EXPORTERDIR/conf/SyslogFormatDefinition.xml\" \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\"\n
$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml
by modifying the start_message_body
, fields_separatator
, and field_value_separatator
keys as shown below. a. Note: The misspelling of \u201cseparator\u201d as \u201cseparatator\u201d is intentional, and is to line up with both Checkpoint\u2019s documentation and parser implementation.<start_message_body>[sc4s@2620 </start_message_body>\n<!-- ... -->\n<fields_separatator> </fields_separatator>\n<!-- ... -->\n<field_value_separatator>=</field_value_separatator>\n
conf
directory with:cp \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\" \"$EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf\"\n
$EXPORTERDIR/targets/$LOG_EXPORTER_NAME/targetConfiguration.xml
by adding the reference to the $EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf/SplunkRecommendedFormatDefinition.xml
under the key <formatHeaderFile>
. a. For example, if $EXPORTERDIR
is /opt/CPrt-R81/log_exporter
and $LOG_EXPORTER_NAME
is SyslogToSplunk
, the absolute path will become:<formatHeaderFile>/opt/CPrt-R81/log_exporter/targets/SyslogToSplunk/conf/SplunkRecommendedFormatDefinition.xml</formatHeaderFile>\n
cp_log_export restart name \"$LOG_EXPORTER_NAME\"\n
The \u201cSplunk Format\u201d is legacy and should not be used for new deployments see Log Exporter (Syslog)
"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#key-facts","title":"Key Facts","text":"The Splunk host
field will be derived as follows using the first match
hostname
fieldIf the host is in the format <host>-v_<bladename>
use bladename
for host
Checkpoint Software blades with CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source meta data is left at default
key source index notes checkpoint_splunk_dlp dlp netdlp none checkpoint_splunk_email email email none checkpoint_splunk_firewall firewall netfw none checkpoint_splunk_os program:${program} netops none checkpoint_splunk_sessions sessions netops none checkpoint_splunk_web web netproxy none checkpoint_splunk_audit audit netops none checkpoint_splunk_endpoint endpoint netops none checkpoint_splunk_network network netops checkpoint_splunk_ids ids netids checkpoint_splunk_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL no Suppress any duplicate product+loguid pairs processed within 2 seconds of the last matching event SC4S_LISTEN_CHECKPOINT_SPLUNK_OLD_HOST_RULES empty string when set toyes
reverts host name selection order to originsicname\u2013>origin_sic_name\u2013>hostname"},{"location":"sources/vendor/Cisco/cisco_ace/","title":"Application Control Engine (ACE)","text":""},{"location":"sources/vendor/Cisco/cisco_ace/#key-facts","title":"Key facts","text":"EXTRACT-AA-signature = CSCOacs_(?<signature>\\S+):?\n# Note the value of this config is empty to disable\nEXTRACT-AA-syslog_message = \nEXTRACT-acs_message_header2 = ^CSCOacs_\\S+\\s+(?<log_session_id>\\S+)\\s+(?<total_segments>\\d+)\\s+(?<segment_number>\\d+)\\s+(?<acs_message>.*)\n
"},{"location":"sources/vendor/Cisco/cisco_asa/","title":"ASA/FTD (Firepower)","text":""},{"location":"sources/vendor/Cisco/cisco_asa/#key-facts","title":"Key facts","text":"If feasible for you, you can use following log configuration on the ESA. The log name configured on the ESA can then be parsed easily by sc4s.
ESA Log Name ESA Log Type sc4s_gui_logs HTTP Logs sc4s_mail_logs IronPort Text Mail Logs sc4s_amp AMP Engine Logs sc4s_audit_logs Audit Logs sc4s_antispam Anti-Spam Logs sc4s_content_scanner Content Scanner Logs sc4s_error_logs IronPort Text Mail Logs (Loglevel: Critical) sc4s_system_logs System Logs"},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:esa:http The HTTP logs of Cisco IronPort ESA record information about the secure HTTP services enabled on the interface. cisco:esa:textmail Text mail logs of Cisco IronPort ESA record email information and status. cisco:esa:amp Advanced Malware Protection (AMP) of Cisco IronPort ESA records malware detection and blocking, continuous analysis, and retrospective alerting details. cisco:esa:authentication These logs record successful user logins and unsuccessful login attempts. cisco:esa:cef The Consolidated Event Logs summarizes each message event in a single log line. cisco:esa:error_logs Error logs of Cisco IronPort ESA records error that occurred for ESA configurations or internal issues. cisco:esa:content_scanner Content scanner logs of Cisco IronPort ESA scans messages that contain password-protected attachments for malicious activity and data privacy. cisco:esa:antispam Anti-spam logs record the status of the anti-spam scanning feature of your system, including the status on receiving updates of the latest anti-spam rules. Also, any logs related to the Context Adaptive Scanning Engine are logged here. cisco:esa:system_logs System logs record the boot information, virtual appliance license expiration alerts, DNS status information, and comments users typed using commit command."},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_esa cisco:esa:http email None cisco_esa cisco:esa:textmail email None cisco_esa cisco:esa:amp email None cisco_esa cisco:esa:authentication email None cisco_esa cisco:esa:cef email None cisco_esa cisco:esa:error_logs email None cisco_esa cisco:esa:content_scanner email None cisco_esa cisco:esa:antispam email None cisco_esa cisco:esa:system_logs email None"},{"location":"sources/vendor/Cisco/cisco_esa/#parser-configuration","title":"Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-cisco_esa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_esa[sc4s-vps] {\n filter { \n host(\"^esa-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('esa')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Cisco/cisco_imc/","title":"Cisco Integrated Management Controller (IMC)","text":""},{"location":"sources/vendor/Cisco/cisco_imc/#key-facts","title":"Key facts","text":"Cisco Network Products of multiple types share common logging characteristics the following types are known to be compatible:
f_cisco_ios
as requiredIf you want to send raw logs to splunk (without any drop) then only use this feature Please set following property in env_file:
SC4S_ENABLE_CISCO_IOS_RAW_MSG=yes\n
Restart SC4S and it will send entire message without any drop. TA-meraki 1.1.5
requires sourcetype meraki
.Either by defining Cisco Meraki hosts:
#/opt/sc4s/local/config/app_parsers/app-vps-cisco_meraki.conf\n#File name provided is a suggestion it must be globally unique\n\nblock parser app-vps-test-cisco_meraki() {\n channel {\n if {\n filter { host(\"^test-mx-\") };\n parser { \n p_set_netsource_fields(\n vendor('meraki')\n product('securityappliances')\n ); \n };\n } elif {\n filter { host(\"^test-mr-\") };\n parser { \n p_set_netsource_fields(\n vendor('meraki')\n product('accesspoints')\n ); \n };\n } elif {\n filter { host(\"^test-ms-\") };\n parser { \n p_set_netsource_fields(\n vendor('meraki')\n product('switches')\n ); \n };\n } else {\n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('meraki')\n ); \n };\n };\n }; \n};\n\n\napplication app-vps-test-cisco_meraki[sc4s-vps] {\n filter {\n host(\"^test-meraki-\")\n or host(\"^test-mx-\")\n or host(\"^test-mr-\")\n or host(\"^test-ms-\")\n };\n parser { app-vps-test-cisco_meraki(); };\n};\n
Or by a unique port:
# /opt/sc4s/env_file\nSC4S_LISTEN_CISCO_MERAKI_UDP_PORT=5004\nSC4S_LISTEN_MERAKI_SECURITYAPPLIANCES_UDP_PORT=5005\nSC4S_LISTEN_MERAKI_ACCESSPOINTS_UDP_PORT=5006\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT=5007\n
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_mm.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_mm[sc4s-vps] {\n filter { \n host('^test-cmm-')\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('mm')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Cisco/cisco_ms/","title":"Meeting Server","text":""},{"location":"sources/vendor/Cisco/cisco_ms/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-cisco_ms.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_ms[sc4s-vps] {\n filter { \n host('^test-cms-')\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('ms')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Cisco/cisco_tvcs/","title":"TelePresence Video Communication Server (TVCS)","text":""},{"location":"sources/vendor/Cisco/cisco_tvcs/#links","title":"Links","text":"Ref Link Product Manual https://www.cisco.com/c/en/us/products/unified-communications/telepresence-video-communication-server-vcs/index.html"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:vcs none"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_tvcs cisco:tvcs main none"},{"location":"sources/vendor/Cisco/cisco_ucm/","title":"Unified Communications Manager (UCM)","text":""},{"location":"sources/vendor/Cisco/cisco_ucm/#key-facts","title":"Key facts","text":"| cisco:wsa:l4tm | The L4TM logs of Cisco IronPort WSA record sites added to the L4TM block and allow lists. | | cisco:wsa:squid | The access logs of Cisco IronPort WSA version prior to 11.7 record Web Proxy client history in squid. | | cisco:wsa:squid:new | The access logs of Cisco IronPort WSA version since 11.7 record Web Proxy client history in squid. | | cisco:wsa:w3c:recommended | The access logs of Cisco IronPort WSA version since 12.5 record Web Proxy client history in W3C. |
"},{"location":"sources/vendor/Cisco/cisco_wsa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_wsa cisco:wsa:l4tm netproxy None cisco_wsa cisco:wsa:squid netproxy None cisco_wsa cisco:wsa:squid:new netproxy None cisco_wsa cisco:wsa:w3c:recommended netproxy None"},{"location":"sources/vendor/Cisco/cisco_wsa/#filter-type","title":"Filter type","text":"IP, Netmask or Host
"},{"location":"sources/vendor/Cisco/cisco_wsa/#source-setup-and-configuration","title":"Source Setup and Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-cisco_wsa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_wsa[sc4s-vps] {\n filter { \n host(\"^wsa-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('cisco')\n product('wsa')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Citrix/netscaler/","title":"Netscaler ADC/SDX","text":""},{"location":"sources/vendor/Citrix/netscaler/#key-facts","title":"Key facts","text":"clearswift:${PROGRAM}
none"},{"location":"sources/vendor/Clearswift/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes clearswift clearswift:${PROGRAM}
email None"},{"location":"sources/vendor/Clearswift/#parser-configuration","title":"Parser Configuration","text":"```c
"},{"location":"sources/vendor/Clearswift/#optsc4slocalconfigapp-parsersapp-vps-clearswiftconf","title":"/opt/sc4s/local/config/app-parsers/app-vps-clearswift.conf","text":""},{"location":"sources/vendor/Clearswift/#file-name-provided-is-a-suggestion-it-must-be-globally-unique","title":"File name provided is a suggestion it must be globally unique","text":"application app-vps-clearswift[sc4s-vps] { filter { host(\u201ctest-clearswift-\u201d type(string) flags(prefix)) }; parser { p_set_netsource_fields( vendor(\u2018clearswift\u2019) product(\u2018clearswift\u2019) ); }; };
"},{"location":"sources/vendor/Cohesity/cluster/","title":"Cluster","text":""},{"location":"sources/vendor/Cohesity/cluster/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-dell_cmc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_cmc[sc4s-vps] {\n filter { \n host(\"test-dell-cmc-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dell')\n product('poweredge_cmc')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Dell/emc_powerswitchn/","title":"EMC Powerswitch N Series","text":""},{"location":"sources/vendor/Dell/emc_powerswitchn/#key-facts","title":"Key facts","text":"Through sc4s-vps
#/opt/sc4s/local/config/app-parsers/app-vps-dell_switch_n.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_switch_n[sc4s-vps] {\n filter { \n host(\"test-dell-switch-n-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dellemc')\n product('powerswitch_n')\n ); \n }; \n};\n
or through unique port
# /opt/sc4s/env_file \nSC4S_LISTEN_DELLEMC_POWERSWITCH_N_UDP_PORT=5005\n
#/opt/sc4s/local/config/app_parsers/app-vps-dell_rsa_secureid.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_rsa_secureid[sc4s-vps] {\n filter { \n host(\"test_rsasecureid*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dell')\n product('rsa_secureid')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Dell/sonic/","title":"Dell Networking SONiC","text":""},{"location":"sources/vendor/Dell/sonic/#key-facts","title":"Key facts","text":"Through sc4s-vps
#/opt/sc4s/local/config/app-parsers/app-vps-dell_sonic.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_sonic[sc4s-vps] {\n filter { \n host(\"sonic\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('dell')\n product('sonic')\n ); \n }; \n};\n
or through unique port
# /opt/sc4s/env_file \nSC4S_LISTEN_DELL_SONIC_UDP_PORT=5005\n
The sourcetype was changed in version 2.35.0, making it compliant with the corresponding TA.
"},{"location":"sources/vendor/F5/bigip/","title":"BigIP","text":""},{"location":"sources/vendor/F5/bigip/#key-facts","title":"Key facts","text":"<111>1 2020-05-28T22:48:15Z foo.example.com F5 - access_json - {\"event_type\":\"HTTP_REQUEST\", \"src_ip\":\"10.66.98.41\"}
This source type requires a customer specific Splunk Add-on for utility value"},{"location":"sources/vendor/F5/bigip/#index-configuration","title":"Index Configuration","text":"key index notes f5_bigip netops none f5_bigip_irule netops none f5_bigip_asm netwaf none f5_bigip_apm netops none f5_bigip_nix netops if f_f5_bigip
is not set the index osnix will be used f5_bigip_access_json netops none"},{"location":"sources/vendor/F5/bigip/#parser-configuration","title":"Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-f5_bigip.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-f5_bigip[sc4s-vps] {\n filter { \n \"${HOST}\" eq \"f5_bigip\"\n }; \n parser { \n p_set_netsource_fields(\n vendor('f5')\n product('bigip')\n ); \n }; \n};\n
"},{"location":"sources/vendor/FireEye/cms/","title":"CMS","text":""},{"location":"sources/vendor/FireEye/cms/#key-facts","title":"Key facts","text":"config log memory filter\n\nset forward-traffic enable\n\nset local-traffic enable\n\nset sniffer-traffic disable\n\nset anomaly enable\n\nset voip disable\n\nset multicast-traffic enable\n\nset dns enable\n\nend\n\nconfig system global\n\nset cli-audit-log enable\n\nend\n\nconfig log setting\n\nset neighbor-event enable\n\nend\n
"},{"location":"sources/vendor/Fortinet/fortios/#options","title":"Options","text":"Variable default description SC4S_OPTION_FORTINET_SOURCETYPE_PREFIX fgt Notice starting with version 1.6 of the fortinet add-on and app the sourcetype required changes from fgt_*
to fortinet_*
. This is a breaking change; to use the new sourcetype, set this variable to fortigate
in the env_file"},{"location":"sources/vendor/Fortinet/fortiweb/","title":"FortiWeb","text":""},{"location":"sources/vendor/Fortinet/fortiweb/#key-facts","title":"Key facts","text":"config log syslog-policy\n\nedit splunk \n\nconfig syslog-server-list \n\nedit 1\n\nset server x.x.x.x\n\nset port 514 (Example. Should be the same as default or dedicated port selected for sc4s) \n\nend\n\nend\n\nconfig log syslogd\n\nset policy splunk\n\nset status enable\n\nend\n
"},{"location":"sources/vendor/GitHub/","title":"Enterprise Server","text":""},{"location":"sources/vendor/GitHub/#key-facts","title":"Key facts","text":"client_ip
prefix in message"},{"location":"sources/vendor/HAProxy/syslog/#index-configuration","title":"Index Configuration","text":"key index notes haproxy_syslog netlb none"},{"location":"sources/vendor/HPe/ilo/","title":"ILO (4+)","text":""},{"location":"sources/vendor/HPe/ilo/#key-facts","title":"Key facts","text":"HP Procurve switches have multiple log formats used.
"},{"location":"sources/vendor/HPe/procurve/#key-facts","title":"Key facts","text":"Parser configuration is conditional only required if additional events are produced by the device that do not match the default configuration.
#/opt/sc4s/local/config/app-parsers/app-vps-ibm_datapower.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ibm_datapower[sc4s-vps] {\n filter { \n host(\"^test-ibmdp-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('ibm')\n product('datapower')\n ); \n }; \n};\n
"},{"location":"sources/vendor/ISC/bind/","title":"bind","text":"This source type is often re-implemented by specific add-ons such as infoblox or bluecat if a more specific source type is desired see that source documentation for instructions
"},{"location":"sources/vendor/ISC/bind/#key-facts","title":"Key facts","text":"This source type is often re-implemented by specific add-ons such as infoblox or bluecat if a more specific source type is desired see that source documentation for instructions
"},{"location":"sources/vendor/ISC/dhcpd/#key-facts","title":"Key facts","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/ISC/dhcpd/#options","title":"Options","text":"None
"},{"location":"sources/vendor/ISC/dhcpd/#verification","title":"Verification","text":"An active site will generate frequent events use the following search to check for new events
Verify timestamp, and host values match as expected
index=<asconfigured> (sourcetype=\"isc:dhcp\")\n
"},{"location":"sources/vendor/Imperva/incapusla/","title":"Incapsula","text":""},{"location":"sources/vendor/Imperva/incapusla/#key-facts","title":"Key facts","text":"Warning: Despite the TA indication this data source is CIM compliant all versions of NIOS including the most recent available as of 2019-12-17 do not support the DNS data model correctly. For DNS security use cases use Splunk Stream instead.
"},{"location":"sources/vendor/InfoBlox/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-infoblox_nios.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-infoblox_nios[sc4s-vps] {\n filter { \n host(\"infoblox-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('infoblox')\n product('nios')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Juniper/junos/","title":"JunOS","text":""},{"location":"sources/vendor/Juniper/junos/#key-facts","title":"Key facts","text":"The TA link provided has commented out the CEF support as of 2022-03-18 manual edits are required
"},{"location":"sources/vendor/Kaspersky/es_cef/#key-facts","title":"Key facts","text":"Leef format has not been tested samples needed
"},{"location":"sources/vendor/Kaspersky/es_leef/#key-facts","title":"Key facts","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/McAfee/epo/#options","title":"Options","text":"Variable default description SC4S_LISTEN_MCAFEE_EPO_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_MCAFEE_EPO_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_MCAFEE_EPO_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source SC4S_SOURCE_TLS_ENABLE no This must be set to yes so that SC4S listens for encrypted syslog from ePO"},{"location":"sources/vendor/McAfee/epo/#additional-setup","title":"Additional setup","text":"You must create a certificate for the SC4S server to receive encrypted syslog from ePO. A self-signed certificate is fine. Generate a self-signed certificate on the SC4S host:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout /opt/sc4s/tls/server.key -out /opt/sc4s/tls/server.pem
Uncomment the following line in /lib/systemd/system/sc4s.service
to allow the docker container to use the certificate:
Environment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"
from the command line of the SC4S host, run this: openssl s_client -connect localhost:6514
The message:
socket: Bad file descriptor\nconnect:errno=9\n
indicates that SC4S is not listening for encrypted syslog. Note that a netstat
may show the port open, but it is not accepting encrypted traffic as configured.
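One way to tell the two situations apart from the SC4S host is to compare a plain socket check with a TLS handshake. This is a sketch that assumes the default TLS port 6514; adjust the port if you use a different one:
# shows the port is bound, but not whether TLS is accepted\nss -lnt | grep 6514\n# attempts a TLS handshake; a certificate chain in the output indicates the TLS listener is working\nopenssl s_client -connect localhost:6514 </dev/null\n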
It may take several minutes for the syslog option to be available in the registered servers
dropdown.
#/opt/sc4s/local/config/app-parsers/app-vps-mikrotik_routeros.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-mikrotik_routeros[sc4s-vps] {\n filter { \n host(\"test-mrtros-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('mikrotik')\n product('routeros')\n ); \n }; \n};\n
"},{"location":"sources/vendor/NetApp/ontap/","title":"OnTap","text":""},{"location":"sources/vendor/NetApp/ontap/#key-facts","title":"Key facts","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/PaloaltoNetworks/panos/#setup-and-configuration","title":"Setup and Configuration","text":"An active firewall will generate frequent events. Use the following search to validate events are present per source device
index=<asconfigured> sourcetype=pan:*| stats count by host\n
"},{"location":"sources/vendor/PaloaltoNetworks/prisma/","title":"Prisma SD-WAN ION","text":""},{"location":"sources/vendor/PaloaltoNetworks/prisma/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-pfsense_firewall.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-pfsense_firewall[sc4s-vps] {\n filter { \n \"${HOST}\" eq \"pfsense_firewall\"\n }; \n parser { \n p_set_netsource_fields(\n vendor('pfsense')\n product('firewall')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Polycom/rprm/","title":"RPRM","text":""},{"location":"sources/vendor/Polycom/rprm/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-proofpoint_pps.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-proofpoint_pps[sc4s-vps] {\n filter { \n host(\"pps-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('proofpoint')\n product('pps')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Pulse/connectsecure/","title":"Pulse","text":""},{"location":"sources/vendor/Pulse/connectsecure/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-raritan_dsx.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-raritan_dsx[sc4s-vps] {\n filter { \n host(\"raritan_dsx*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('raritan')\n product('dsx')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Ricoh/mfp/","title":"MFP","text":""},{"location":"sources/vendor/Ricoh/mfp/#key-facts","title":"Key facts","text":"Used when more specific steelhead or steelconnect can not be identified
"},{"location":"sources/vendor/Riverbed/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter { \n host(....)\n }; \n parser { \n p_set_netsource_fields(\n vendor('riverbed')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Riverbed/steelconnect/","title":"Steelconnect","text":""},{"location":"sources/vendor/Riverbed/steelconnect/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter { \n host(....)\n }; \n parser { \n p_set_netsource_fields(\n vendor('riverbed')\n product('syslog')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Ruckus/SmartZone/","title":"Smart Zone","text":"Some events may not match the source format please report issues if found
"},{"location":"sources/vendor/Ruckus/SmartZone/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-schneider_apc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-schneider_apc[sc4s-vps] {\n filter { \n host(\"test_apc-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('schneider')\n product('apc')\n ); \n }; \n};\n
"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/","title":"SecureAuth IdP","text":""},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-sophos_webappliance.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-sophos_webappliance[sc4s-vps] {\n filter { \n host(\"test-sophos-webapp-\" type(string) flags(prefix))\n }; \n parser { \n p_set_netsource_fields(\n vendor('sophos')\n product('webappliance')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Spectracom/","title":"NTP Appliance","text":""},{"location":"sources/vendor/Spectracom/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-spectracom_ntp.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-spectracom_ntp[sc4s-vps] {\n filter { \n netmask(169.254.100.1/24)\n }; \n parser { \n p_set_netsource_fields(\n vendor('spectracom')\n product('ntp')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/","title":"Splunk Heavy Forwarder","text":"In certain network architectures such as those using data diodes or those networks requiring \u201cin the clear\u201d inspection at network egress SC4S can be used to accept specially formatted output from Splunk as RFC5424 syslog.
"},{"location":"sources/vendor/Splunk/heavyforwarder/#key-facts","title":"Key facts","text":"Index Source and Sourcetype will be used as determined by the Source/HWF
"},{"location":"sources/vendor/Splunk/heavyforwarder/#splunk-configuration","title":"Splunk Configuration","text":"#Because audit trail is protected and we can't transform it we can not use default we must use tcp_routing\n[tcpout]\ndefaultGroup = NoForwarding\n\n[tcpout:nexthop]\nserver = localhost:9000\nsendCookedData = false\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#propsconf","title":"props.conf","text":"[default]\nADD_EXTRA_TIME_FIELDS = none\nANNOTATE_PUNCT = false\nSHOULD_LINEMERGE = false\nTRANSFORMS-zza-syslog = syslog_canforward, metadata_meta, metadata_source, metadata_sourcetype, metadata_index, metadata_host, metadata_subsecond, metadata_time, syslog_prefix, syslog_drop_zero\n# The following applies for TCP destinations where the IETF frame is required\nTRANSFORMS-zzz-syslog = syslog_octal, syslog_octal_append\n# Comment out the above and uncomment the following for udp\n#TRANSFORMS-zzz-syslog-udp = syslog_octal, syslog_octal_append, syslog_drop_zero\n\n[audittrail]\n# We can't transform this source type its protected\nTRANSFORMS-zza-syslog =\nTRANSFORMS-zzz-syslog =\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#transformsconf","title":"transforms.conf","text":"syslog_canforward]\nREGEX = ^.(?!audit)\nDEST_KEY = _TCP_ROUTING\nFORMAT = nexthop\n\n[metadata_meta]\nSOURCE_KEY = _meta\nREGEX = (?ims)(.*)\nFORMAT = ~~~SM~~~$1~~~EM~~~$0 \nDEST_KEY = _raw\n\n[metadata_source]\nSOURCE_KEY = MetaData:Source\nREGEX = ^source::(.*)$\nFORMAT = s=\"$1\"] $0\nDEST_KEY = _raw\n\n[metadata_sourcetype]\nSOURCE_KEY = MetaData:Sourcetype\nREGEX = ^sourcetype::(.*)$\nFORMAT = st=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_index]\nSOURCE_KEY = _MetaData:Index\nREGEX = (.*)\nFORMAT = i=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_host]\nSOURCE_KEY = MetaData:Host\nREGEX = ^host::(.*)$\nFORMAT = \" h=\"$1\" $0\nDEST_KEY = _raw\n\n[syslog_prefix]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT = <1>1 - - SPLUNK - COOKED [fields@274489 $0\nDEST_KEY = _raw\n\n[metadata_time]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT = t=\"$1$0\nDEST_KEY = _raw\n\n[metadata_subsecond]\nSOURCE_KEY = _meta\nREGEX = \\_subsecond\\:\\:(\\.\\d+)\nFORMAT = $1 $0\nDEST_KEY = _raw\n\n[syslog_octal]\nINGEST_EVAL= mlen=length(_raw)+1\n\n[syslog_octal_append]\nINGEST_EVAL = _raw=mlen + \" \" + _raw\n\n[syslog_drop_zero]\nINGEST_EVAL = queue=if(mlen<10,\"nullQueue\",queue)\n
"},{"location":"sources/vendor/Splunk/sc4s/","title":"Splunk Connect for Syslog (SC4S)","text":""},{"location":"sources/vendor/Splunk/sc4s/#key-facts","title":"Key facts","text":"SC4S events and metrics are generated automatically and no specific ports or filters need to be configured for the collection of this data.
"},{"location":"sources/vendor/Splunk/sc4s/#setup-and-configuration","title":"Setup and Configuration","text":"SC4S_DEST_SPLUNK_SC4S_METRICS_HEC
. See the \u201cOptions\u201d section below for details.event
produce metrics as plain text events; single
produce metrics using Splunk Enterprise 7.3 single metrics format; multi
produce metrics using Splunk Enterprise >8.1 multi metric format multi2
produces improved (reduced resource consumption) multi metric format SC4S_SOURCE_MARK_MESSAGE_NULLQUEUE yes (yes"},{"location":"sources/vendor/Splunk/sc4s/#verification","title":"Verification","text":"SC4S will generate versioning events at startup. These startup events can be used to validate HEC is set up properly on the Splunk side.
index=<asconfigured> sourcetype=sc4s:events | stats count by host\n
Metrics can be observed via the \u201cAnalytics\u2013>Metrics\u201d navigation in the Search and Reporting app in Splunk.
t_msg_hdr
for original raw"},{"location":"sources/vendor/StealthWatch/StealthIntercept/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes stealthbits_stealthintercept StealthINTERCEPT netids none stealthbits_stealthintercept_alerts StealthINTERCEPT:alerts netids Note TA does not support this source type"},{"location":"sources/vendor/Tanium/platform/","title":"Platform","text":"This source requires a TLS connection; in most cases enabling TLS and using the default port 6514 is adequate. The source is understood to require a valid certificate.
"},{"location":"sources/vendor/Tanium/platform/#key-facts","title":"Key facts","text":"All Ubiquity Unfi firewalls, switches, and access points share a common syslog configuration via the NMS.
#/opt/sc4s/local/config/app-parsers/app-vps-ubiquiti_unifi_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ubiquiti_unifi_fw[sc4s-vps] {\n filter { \n host(\"usg-*\" type(glob))\n }; \n parser { \n p_set_netsource_fields(\n vendor('ubiquiti')\n product('unifi')\n ); \n }; \n};\n
"},{"location":"sources/vendor/VMWare/airwatch/","title":"Airwatch","text":"AirWatch is a product used for enterprise mobility management (EMM) software and standalone management systems for content, applications and email.
"},{"location":"sources/vendor/VMWare/airwatch/#key-facts","title":"Key facts","text":"Vmware vsphere product line has multiple old and known issues in syslog output.
WARNING use of a load balancer with udp will cause \u201ccorrupt\u201d event behavior due to out of order message processing caused by the load balancer
Ref Link Splunk Add-on ESX https://splunkbase.splunk.com/app/5603/ Splunk Add-on Vcenter https://splunkbase.splunk.com/app/5601/ Splunk Add-on nxs none Splunk Add-on vsan none"},{"location":"sources/vendor/VMWare/vsphere/#sourcetypes","title":"Sourcetypes","text":"sourcetype notesvmware:esxlog:${PROGRAM}
None vmware:nsxlog:${PROGRAM}
None vmware:vclog:${PROGRAM}
None nix:syslog When used with a default port, this will follow the generic NIX configuration. When using a dedicated port, IP or host rules events will follow the index configuration for vmware nsx"},{"location":"sources/vendor/VMWare/vsphere/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes vmware_vsphere_esx vmware:esxlog:${PROGRAM}
infraops none vmware_vsphere_nsx vmware:nsxlog:${PROGRAM}
infraops none vmware_vsphere_nsxfw vmware:nsxlog:dfwpktlogs
netfw none vmware_vsphere_vc vmware:vclog:${PROGRAM}
infraops none"},{"location":"sources/vendor/VMWare/vsphere/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content when using the default configuration. SC4S will normalize the structure of vmware events from multiple incorrectly formed varients to rfc5424 format to improve parsing
"},{"location":"sources/vendor/VMWare/vsphere/#setup-and-configuration","title":"Setup and Configuration","text":"An active proxy will generate frequent events. Use the following search to validate events are present per source device
index=<asconfigured> sourcetype=\"vmware:vsphere:*\" | stats count by host\n
"},{"location":"sources/vendor/VMWare/vsphere/#automatic-parser-configuration","title":"Automatic Parser Configuration","text":"Enable the following options in the env_file
#Do not enable with a SNAT load balancer\nSC4S_USE_NAME_CACHE=yes\n#Combine known split events into a single event for Splunk\nSC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG=yes\n#Learn vendor product from recognized events and apply to generic events\n#for example after the first vpxd event sshd will utilize vps \"vmware_vsphere_nix_syslog\" rather than \"nix_syslog\"\nSC4S_USE_VPS_CACHE=yes\n
"},{"location":"sources/vendor/VMWare/vsphere/#manual-parser-configuration","title":"Manual Parser Configuration","text":"#/opt/sc4s/local/config/app-parsers/app-vps-vmware_vsphere.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-vmware_vsphere[sc4s-vps] {\n filter { \n #netmask(169.254.100.1/24)\n #host(\"-esx-\")\n }; \n parser { \n p_set_netsource_fields(\n vendor('vmware')\n product('vsphere')\n ); \n }; \n};\n
"},{"location":"sources/vendor/Varonis/datadvantage/","title":"DatAdvantage","text":""},{"location":"sources/vendor/Varonis/datadvantage/#key-facts","title":"Key facts","text":"#/opt/sc4s/local/config/app-parsers/app-vps-wallix_bastion.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-wallix_bastion[sc4s-vps] {\n filter { \n host('^wasb')\n }; \n parser { \n p_set_netsource_fields(\n vendor('wallix')\n product('bastion')\n ); \n }; \n};\n
"},{"location":"sources/vendor/XYPro/mergedaudit/","title":"Merged Audit","text":"XY Pro merged audit also called XYGate or XMA is the defacto solution for syslog from HP Nonstop Server (Tandem)
"},{"location":"sources/vendor/XYPro/mergedaudit/#key-facts","title":"Key facts","text":"The ZScaler product manual includes and extensive section of configuration for multiple Splunk TCP input ports around page 26. When using SC4S these ports are not required and should not be used. Simply configure all outputs from the LSS to utilize the IP or host name of the SC4S instance and port 514
"},{"location":"sources/vendor/Zscaler/lss/#key-facts","title":"Key facts","text":"The ZScaler product manual includes and extensive section of configuration for multiple Splunk TCP input ports around page 26. When using SC4S these ports are not required and should not be used. Simply configure all outputs from the NSS to utilize the IP or host name of the SC4S instance and port 514
"},{"location":"sources/vendor/Zscaler/nss/#key-facts","title":"Key facts","text":"\\tvendor=Zscaler\\tproduct=alerts
immediately prior to the \\n
in the NSS Alert Web format. See Zscaler manual for more info. zscaler_nss_dns Requires format customization add \\tvendor=Zscaler\\tproduct=dns
immediately prior to the \\n
in the NSS DNS format. See Zscaler manual for more info. zscaler_nss_web None zscaler_nss_fw Requires format customization add \\tvendor=Zscaler\\tproduct=fw
immediately prior to the \\n
in the Firewall format. See Zscaler manual for more info."},{"location":"sources/vendor/Zscaler/nss/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes zscaler_nss_alerts zscalernss-alerts main none zscaler_nss_dns zscalernss-dns netdns none zscaler_nss_fw zscalernss-fw netfw none zscaler_nss_web zscalernss-web netproxy none zscaler_nss_tunnel zscalernss-tunnel netops none zscaler_zia_audit zscalernss-zia-audit netops none zscaler_zia_sandbox zscalernss-zia-sandbox main none"},{"location":"sources/vendor/Zscaler/nss/#filter-type","title":"Filter type","text":"MSG Parse: This filter parses message content
"},{"location":"sources/vendor/Zscaler/nss/#setup-and-configuration","title":"Setup and Configuration","text":"Loggen is a tool used to load test syslog implementations.
"},{"location":"sources/vendor/syslog-ng/loggen/#key-facts","title":"Key facts","text":"loggen --inet --dgram --number 1 <ip> <port>
RFC5424 example:loggen --inet --dgram -PF --number 1 <ip> <port>
Refer to above manual link for more examples."},{"location":"sources/vendor/syslog-ng/loggen/#index-configuration","title":"Index Configuration","text":"key index notes syslogng_loggen main none"},{"location":"troubleshooting/troubleshoot_SC4S_server/","title":"Validate server startup and operations","text":"This topic helps you find the most common solutions to startup and operational issues with SC4S.
If you plan to run SC4S with standard configuration, we recommend that you perform startup out of systemd.
If you are using a custom configuration of SC4S with significant modifications, for example, multiple unique ports for sources, hostname/CIDR block configuration for sources, or new log paths, start SC4S with the container runtime command podman
or docker
directly from the command line as described in this topic. When you are satisfied with the operation, you can then transition to systemd.
If you are running out of systemd, you may see this at startup:
[root@sc4s syslog-ng]# systemctl start sc4s\nJob for sc4s.service failed because the control process exited with error code. See \"systemctl status sc4s.service\" and \"journalctl -xe\" for details.\n
Most issues that occur with startup and operation of SC4S involve syntax errors or duplicate listening ports. Try the following to resolve the issue:
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-your-sc4s-container-is-running","title":"Check that your SC4S container is running","text":"If you start with systemd and the container is not running, check with the following:
journalctl -b -u sc4s | tail -100\n
This will print the last 100 lines of the system journal in detail, which should be sufficient to see the specific syntax or runtime failure and guide you in troubleshooting the unexpected container exit."},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-sc4s-container-starts-and-runs-properly-outside-of-the-systemd-service-environment","title":"Check that the SC4S container starts and runs properly outside of the systemd service environment","text":"As an alternative to launching with systemd during the initial installation phase, you can test the container startup outside of the systemd startup environment. This is especially important for troubleshooting or log path development, for example, when SC4S_DEBUG_CONTAINER
is set to \u201cyes\u201d.
The following command launches the container directly from the command line. This command assumes the local mounted directories are set up as shown in the \u201cgetting started\u201d examples. Adjust for your local requirements, if you are using Docker, substitute \u201cdocker\u201d for \u201cpodman\u201d for the container runtime command.
/usr/bin/podman run \\\n -v splunk-sc4s-var:/var/lib/syslog-ng \\\n -v /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z \\\n -v /opt/sc4s/archive:/var/lib/syslog-ng/archive:z \\\n -v /opt/sc4s/tls:/etc/syslog-ng/tls:z \\\n --env-file=/opt/sc4s/env_file \\\n --network host \\\n --name SC4S \\\n --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-container-is-still-running-when-systemd-indicates-that-its-not-running","title":"Check that the container is still running when systemd indicates that it\u2019s not running","text":"In some instances, particularly when SC4S_DEBUG_CONTAINER=yes
, an SC4S container might not shut down completely when starting/stopping out of systemd, and systemd will attempt to start a new container when one is already running with the SC4S
name. You will see this type of output when viewing the journal after a failed start caused by this condition, or a similar message when the container is run directly from the CLI:
Jul 15 18:45:20 sra-sc4s-alln01-02 podman[11187]: Error: error creating container storage: the container name \"SC4S\" is already in use by \"894357502b2a7142d097ea3ca1468d1cb4fbc69959a9817a1bbe145a09d37fb9\". You have to remove that container...\nJul 15 18:45:20 sra-sc4s-alln01-02 systemd[1]: sc4s.service: Main process exited, code=exited, status=125/n/a\n
To rectify this, execute:
podman rm -f SC4S\n
SC4S should then start normally.
Do not use systemd when SC4S_DEBUG_CONTAINER
is set to \u201cyes\u201d, instead use the CLI podman
or docker
commands directly to start/stop SC4S.
SC4S performs basic HEC connectivity and index checks at startup and creates logs that indicate general connection issues and indexes that may not be accessible or configured on Splunk. To check the container logs that contain the results of these tests, run:
/usr/bin/<podman|docker> logs SC4S\n
You will see entries similar to the following:
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful; checking indexes...\n\nSC4S_ENV_CHECK_INDEX: Checking email {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking epav {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking main {\"text\":\"Success\",\"code\":0}\n
Note the specifics of the indexes that are not configured correctly, and rectify this in your Splunk configuration. If this is not addressed properly, you may see output similar to the below when data flows into SC4S:
Mar 16 19:00:06 b817af4e89da syslog-ng[1]: Server returned with a 4XX (client errors) status code, which means we are not authorized or the URL is not found.; url='https://splunk-instance.com:8088/services/collector/event', status_code='400', driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5'\nMar 16 19:00:06 b817af4e89da syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5', worker_index='4', time_reopen='10', batch_size='1000'\n
This is an indication that the standard d_hec
destination in syslog-ng, which is the route to Splunk, is rejected by the HEC endpoint. A 400
error is commonly caused by an index that has not been created in Splunk. One bad index can damage the batch, in this case, 1000 events, and prevent any of the data from being sent to Splunk. Make sure that the container logs are free of these kinds of errors in production. You can use the alternate HEC debug destination to help debug this condition by sending direct \u201ccurl\u201d commands to the HEC endpoint outside of the SC4S setting."},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-listening-ports","title":"Issue: Invalid SC4S listening ports","text":"SC4S exclusively grants a port to a device when SC4S_LISTEN_{vendor}_{product}_{TCP/UDP/TLS}_PORT={port}
.
During startup, SC4S validates that listening ports are configured correctly, and shows any issues in container logs.
You will receive an error message similar to the following if listening ports for MERAKI SWITCHES
are configured incorrectly:
SC4S_LISTEN_MERAKI_SWITCHES_TCP_PORT: Wrong port number, don't use default port like (514,614,6514)\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT: 7000 is not unique and has already been used for another source\nSC4S_LISTEN_MERAKI_SWITCHES_TLS_PORT: 999999999999 must be integer within the range (0, 10000)\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-sc4s-local-disk-resource-issues","title":"Issue: SC4S local disk resource issues","text":"Check the HEC connection to Splunk. If the connection is down for a long period of time, the local disk buffer used for backup will exhaust local disk resources. The size of the local disk buffer is configured in the env_file
: Disk buffer configuration
Check the env_file
to see whether SC4S_DEST_GLOBAL_ALTERNATES
is set to d_hec_debug
,d_archive
, or another file-based destination. Any of these settings will consume significant local disk space.
d_hec_debug
and d_archive
are organized by sourcetype; the du -sh *
command can be used in each subdirectory to find the culprit.
podman volume rm splunk-sc4s-var\npodman volume create splunk-sc4s-var\n
podman system prune [--all]\n
UDP Input Buffer Settings let you request a certain buffer size when configuring the UDP sockets. The kernel must have its parameters set to the same size or greater than what the syslog-ng configuration is requesting, or the following will occur in the SC4S logs:
/usr/bin/<podman|docker> logs SC4S\n
The following warning message is not a failure condition unless you are reaching the upper limit of your hardware performance. The kernel refused to set the receive buffer (SO_RCVBUF) to the requested size, you probably need to adjust buffer related kernel parameters; so_rcvbuf='1703936', so_rcvbuf_set='425984'\n
Make changes to /etc/sysctl.conf
, changing receive buffer values to 16 MB: net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360 \n
Run the following commands to implement your changes: sysctl -p restart SC4S \n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-tls-listener","title":"Issue: Invalid SC4S TLS listener","text":"To verify the correct configuration of the TLS server use the following command. Replace the IP, FQDN, and port as appropriate:
<podman|docker> run -ti drwetter/testssl.sh --severity MEDIUM --ip 127.0.0.1 selfsigned.example.com:6510\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-unable-to-retrieve-logs-from-non-rfc-5424-compliant-sources","title":"Issue: Unable to retrieve logs from non RFC-5424 compliant sources","text":"If a data source you are trying to ingest claims it is RFC-5424 compliant but you get an \u201cError processing log message:\u201d from SC4S, this message indicates that the data source still violates the RFC-5424 standard in some way. In this case, the underlying syslog-ng process will send an error event, with the location of the error in the original event highlighted with >@<
to indicate where the error occurred. Here is an example error message:
{ [-]\n ISODATE: 2020-05-04T21:21:59.001+00:00\n MESSAGE: Error processing log message: <14>1 2020-05-04T21:21:58.117351+00:00 arcata-pks-cluster-1 pod.log/cf-workloads/logspinner-testing-6446b8ef - - [kubernetes@47450 cloudfoundry.org/process_type=\"web\" cloudfoundry.org/rootfs-version=\"v75.0.0\" cloudfoundry.org/version=\"eae53cc3-148d-4395-985c-8fef0606b9e3\" controller-revision-hash=\"logspinner-testing-6446b8ef05-7db777754c\" cloudfoundry.org/app_guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" cloudfoundry.org/source_type=\"APP\" security.istio.io/tlsMode=\"istio\" statefulset.kubernetes.io/pod-n>@<ame=\"logspinner-testing-6446b8ef05-0\" cloudfoundry.org/guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" namespace_name=\"cf-workloads\" object_name=\"logspinner-testing-6446b8ef05-0\" container_name=\"opi\" vm_id=\"vm-e34452a3-771e-4994-666e-bfbc7eb77489\"] Duration 10.00299412s TotalSent 10 Rate 0.999701 \n PID: 33\n PRI: <43>\n PROGRAM: syslog-ng\n}\n
In this example the error can be seen in the snippet statefulset.kubernetes.io/pod-n>@<ame
. The error states that the \u201cSD-NAME\u201d (the left-hand side of the name=value pairs) cannot be longer than 32 printable ASCII characters, and the indicated name exceeds that. Ideally you should address this issue with the vendor, however, you can add an exception to the SC4S filter log path or an alternative workaround log path created for the data source.
In this example, the reason RAWMSG
is not shown in the fields above is because this error message is coming from syslog-ng itself. In messages of the type Error processing log message:
where the PROGRAM is shown as syslog-ng
, your incoming message is not RFC-5424 compliant.
In non-containerized SC4S deployments, if you try to start the SC4S service, the terminal may be overwhelmed by the internal and metrics logs. Example of the issue can be found here: Github Terminal abuse issue
To resolve this, set following property in env_file
:
SC4S_SEND_METRICS_TERMINAL=no\n
Restart SC4S.
SC4S_DEBUG_CONTAINER
is set to \u201cyes\u201d. Use the CLI podman
or docker
commands directly to start/stop SC4S.To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_CEF=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_VMWARE_CB_PROTECT=yes\n
Restart SC4S.
env_file
: SC4S_DISABLE_DROP_INVALID_CISCO=yes\n
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_VMWARE_VSPHERE=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_RAW_BSD=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_XML=yes\n
Restart SC4S.
To resolve this, set following property in env_file
:
SC4S_DISABLE_DROP_INVALID_HPE=yes\n
Restart SC4S and it will not drop any invalid HPE JETDIRECT format.
NOTE: Please use only in this case of exception and this is splunk-unsupported feature. Also this setting might impact SC4S performance.
"},{"location":"troubleshooting/troubleshoot_resources/","title":"SC4S Logging and Troubleshooting Resources","text":""},{"location":"troubleshooting/troubleshoot_resources/#helpful-linux-and-container-commands","title":"Helpful Linux and container commands","text":""},{"location":"troubleshooting/troubleshoot_resources/#linux-service-systemd-commands","title":"Linux service (systemd) commands","text":"systemctl status sc4s
systemctl start service
systemctl stop service
systemctl restart service
systemctl enable sc4s
journalctl -b -u sc4s
All of the following container commands can be run with the podman
or docker
runtime.
sudo podman logs SC4S
podman exec -it SC4S bash
podman volume rm splunk-sc4s-var\npodman volume create splunk-sc4s-var\n
podman pull ghcr.io/splunk/splunk-connect-for-syslog/container3
podman system prune
podman load <tar>
Check your SC4S port using the nc
command. Run this command where SC4S is hosted and check data in Splunk for success and failure:
echo '<raw_sample>' |nc <host> <port>\n
"},{"location":"troubleshooting/troubleshoot_resources/#obtain-raw-message-events","title":"Obtain raw message events","text":"During development or troubleshooting, you may need to obtain samples of the messages exactly as they are received by SC4S. These events contain the full syslog message, including the <PRI>
preamble, and are different from messages that have been processed by SC4S and Splunk.
These raw messages help to determine that SC4S parsers and filters are operating correctly, and are needed for playback when testing. The community supporting SC4S will always first ask for raw samples before any development or troubleshooting exercise.
Here are some options for obtaining raw logs for one or more sourcetypes:
tcpdump
on the collection interface and display the results in ASCII. You will see events similar to the following buried in the packet contents: <165>1 2007-02-15T09:17:15.719Z router1 mgd 3046 UI_DBASE_LOGOUT_EVENT [junos@2636.1.1.1.2.18 username=\"user\"] User 'user' exiting configuration mode\n
env_file
to set the variable SC4S_SOURCE_STORE_RAWMSG=yes
and restart SC4S. This stores the raw message in a syslog-ng macro called RAWMSG
and is displayed in Splunk for all fallback
messages.RAWMSG
is not displayed, but can be viewed by changing the output template to one of the JSON variants, including t_JSON_3164 or t_JSON_5424, depending on RFC message type. See SC4S metadata configuration for more details.RAWMSG
to Splunk regardless the sourcetype you can also temporarily place the following final filter in the local parser directory: block parser app-finalfilter-fetch-rawmsg() {\n channel {\n rewrite {\n r_set_splunk_dest_default(\n template('t_fallback_kv')\n );\n };\n };\n};\n\napplication app-finalfilter-fetch-rawmsg[sc4s-finalfilter] {\n parser { app-finalfilter-fetch-rawmsg(); };\n};\n
Once you have edited SC4S_SOURCE_STORE_RAWMSG=yes
in /opt/sc4s/env_file
and the finalfilter
placed in /opt/sc4s/local/config/app_parsers
, restart the SC4S instance to add raw messages to all the messages sent to Splunk.NOTE: Be sure to turn off the RAWMSG
variable when you are finished, because it doubles the memory and disk requirements of SC4S. Do not use RAWMSG
in production.
d_rawmsg
for one or more sourcetypes. This destination will write the raw messages to the container directory /var/syslog-ng/archive/rawmsg/<sourcetype>
, which is typically mapped locally to /opt/sc4s/archive
. Within this directory, the logs are organized by host and time.exec
into the container (advanced task)","text":"You can confirm how the templating process created the actual syslog-ng configuration files by calling exec
into the container and navigating the syslog-ng config filesystem directly. To do this, run
/usr/bin/podman exec -it SC4S /bin/bash\n
and navigate to /opt/syslog-ng/etc/
to see the actual configuration files in use. If you are familiar with container operations and syslog-ng, you can modify files directly and reload syslog-ng with the command kill -1 1
in the container. You can also run the /entrypoint.sh
script, or a subset of it, such as everything but syslog-ng, and have complete control over the templating and underlying syslog-ng process. This is an advanced topic and further help can be obtained through the github issue tracker and Slack channels."},{"location":"troubleshooting/troubleshoot_resources/#keeping-a-failed-container-running-advanced-topic","title":"Keeping a failed container running (advanced topic)","text":"To debug a configuration syntax issue at startup, keep the container running after a syslog-ng startup failure. In order to facilitate troubleshooting and make syslog-ng configuration changes from within a running container, the container can be forced to remain running when syslog-ng fails to start (which normally terminates the container). To enable this, add SC4S_DEBUG_CONTAINER=yes
to the env_file
. Use this capability in conjunction with exec calls into the container.
NOTE: Do not enable the debug container mode while running out of systemd. Instead, run the container manually from the CLI, so that you can use the podman
or docker
commands needed to start, stop, and clean up cruft left behind by the debug process. Only when SC4S_DEBUG_CONTAINER
is set to \u201cno\u201d (or completely unset) should systemd startup processing resume.
Time zone mismatches can occur if SC4S and logHost are not in same time zones. To resolve this, create a filter using sc4s-lp-dest-format-d_hec_fmt
, for example:
#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() { \n channel {\n rewrite { fix-time-zone(\"EST5EDT\"); };\n };\n};\napplication app-dest-rewrite-fix_tz_something-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n filter {\n match('checkpoint' value('fields.sc4s_vendor') type(string)) <- this must be customized\n and match('syslog' value('fields.sc4s_product') type(string)) <- this must be customized\n and match('Drop' value('.SDATA.sc4s@2620.action') type(string)) <- this must be customized\n and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) ); <- this must be customized\n\n }; \n parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); }; \n};\n
If destport, container, and proto are not available in indexed fields, you can create a post-filter:
#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-fortinet_fortios-d_fmt_hec_default() {\n channel {\n rewrite {\n fix-time-zone(\"EST5EDT\");\n };\n };\n};\n\napplication app-dest-rewrite-device-d_fmt_hec_default[sc4s-postfilter] {\n filter {\n match(\"xxxx\", value(\"fields.sc4s_destport\") type(glob)); <- this must be customized\n };\n parser { app-dest-rewrite-fortinet_fortios-d_fmt_hec_default(); };\n};\n
Note that filter match statement should be aligned to your data The parser accepts time zone in formats: \u201cAmerica/New York\u201d or \u201cEST5EDT\u201d, but not short in form such as \u201cEST\u201d.
"},{"location":"troubleshooting/troubleshoot_resources/#issue-cyberark-log-problems","title":"Issue: CyberArk log problems","text":"When data is received on the indexers, all events are merged together into one event. Check the following link for CyberArk configuration information: https://cyberark-customers.force.com/s/article/00004289.
"},{"location":"troubleshooting/troubleshoot_resources/#issue-sc4s-events-drop-when-another-interface-is-used-to-receive-logs","title":"Issue: SC4S events drop when another interface is used to receive logs","text":"When a second or alternate interface is used to receive syslog traffic, RPF (Reverse Path Forwarding) filtering in RHEL, which is configured as default configuration, may drop events. To resolve this, add a static route for the source device to point back to the dedicated syslog interface. See https://access.redhat.com/solutions/53031.
"},{"location":"troubleshooting/troubleshoot_resources/#issue-splunk-does-not-ingest-sc4s-events-from-other-virtual-machines","title":"Issue: Splunk does not ingest SC4S events from other virtual machines","text":"When data is transmitted through an echo message from the same instance, data is sent successfully to Splunk. However, when the echo is sent from a different instance, the data may not appear in Splunk and the errors are not reported in the logs. To resolve this issue, check whether an internal firewall is enabled. If an internal firewall is active, verify whether the default port 514 or the port which you have used is blocked. Here are some commands to check and enable your firewall:
#To list all the firewall ports\nsudo firewall-cmd --list-all\n#to enable 514 if its not enabled\nsudo firewall-cmd --zone=public --permanent --add-port=514/udp\nsudo firewall-cmd --reload\n
"}]}
\ No newline at end of file
diff --git a/main/sources/vendor/Cisco/cisco_asa/index.html b/main/sources/vendor/Cisco/cisco_asa/index.html
index e931545b69..3d2a7004d3 100644
--- a/main/sources/vendor/Cisco/cisco_asa/index.html
+++ b/main/sources/vendor/Cisco/cisco_asa/index.html
@@ -8272,7 +8272,6 @@