diff --git a/main/search/search_index.json b/main/search/search_index.json index d7827f8646..1a07d8f5e3 100644 --- a/main/search/search_index.json +++ b/main/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Splunk Connect for Syslog!","text":"

Splunk Connect for Syslog is an open source packaged solution for getting data into Splunk. It is based on the syslog-ng Open Source Edition (syslog-ng OSE) and transports data to Splunk via the Splunk HTTP Event Collector (HEC) rather than writing events to disk for collection by a Universal Forwarder.

"},{"location":"#product-goals","title":"Product Goals","text":""},{"location":"#support","title":"Support","text":"

Splunk Support: If you are an existing Splunk customer with access to the Support Portal, create a support ticket for the quickest resolution to any issues you experience. Here are some examples of when it may be appropriate to create a support ticket: - If you experience an issue with the current version of SC4S, such as a feature gap or a documented feature that is not working as expected. - If you have difficulty with the configuration of SC4S, either at the back end or with the out-of-box parsers or index configurations. - If you experience performance issues and need help understanding the bottlenecks. - If you have any questions or issues with the SC4S documentation.

GitHub Issues: For all enhancement requests, please feel free to create GitHub issues. We prioritize and work on issues based on severity and resource availability. You can help us by tagging your requests with the appropriate labels.

Splunk developers are active in the external user group on a best-effort basis. For the quickest resolution of your issues, use a support case or GitHub issue.

"},{"location":"#contributing","title":"Contributing","text":"

We welcome feedback and contributions from the community! Please see our contribution guidelines for more information on how to get involved.

"},{"location":"#license","title":"License","text":""},{"location":"#references","title":"References","text":""},{"location":"CONTRIBUTING/","title":"CONTRIBUTING","text":"

Splunk welcomes contributions from the SC4S community, and your feedback and enhancements are appreciated. There\u2019s always code that can be clarified, functionality that can be extended, new data filters to develop, and documentation to refine. If you see something you think should be fixed or added, go for it!

"},{"location":"CONTRIBUTING/#data-safety","title":"Data Safety","text":"

Splunk Connect for Syslog is a community built and maintained product. Anyone with internet access can get a Splunk GitHub account and participate. As with any publicly available repository, care must be taken to never share private data via Issues, Pull Requests or any other mechanisms. Any data that is shared in the Splunk Connect for Syslog GitHub repository is made available to the entire Community without limits. Members of the Community and/or their employers (including Splunk) assume no responsibility or liability for any damages resulting from the sharing of private data via the Splunk GitHub.

Any data samples shared in the Splunk GitHub repository must be free of private data. * Working locally, identify potentially sensitive field values in data samples (Public IP address, URL, Hostname, Etc.) * Replace all potentially sensitive field values with synthetic values * Manually review data samples to re-confirm they are free of private data before sharing in the Splunk GitHub

"},{"location":"CONTRIBUTING/#prerequisites","title":"Prerequisites","text":"

When contributing to this repository, please first discuss the change you wish to make via a GitHub issue or Slack message with the owners of this repository.

"},{"location":"CONTRIBUTING/#setup-development-environment","title":"Setup Development Environment","text":"

For a basic development environment, Docker and a bash shell are all that is required. For a more complete IDE experience, see our wiki: [Setup PyCharm](https://github.com/splunk/splunk-connect-for-syslog/wiki/SC4S-Development-Setup-Using-PyCharm)

"},{"location":"CONTRIBUTING/#feature-requests-and-bug-reports","title":"Feature Requests and Bug Reports","text":"

Have ideas on improvements or found a problem? While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue. Please report any issues or bugs you find through GitHub\u2019s issue tracker.

If you are reporting a bug, please include the following details:

We want to hear about your enhancements as well. Feel free to submit them as issues:

"},{"location":"CONTRIBUTING/#fixing-issues","title":"Fixing Issues","text":"

Look through our issue tracker to find problems to fix! Feel free to comment and tag community members of this project with any questions or concerns.

"},{"location":"CONTRIBUTING/#pull-requests","title":"Pull Requests","text":"

What is a \u201cpull request\u201d? It informs the project\u2019s core developers about the changes you want to review and merge. Once you submit a pull request, it enters a stage of code review where you and others can discuss its potential modifications and even add more commits to it later on.

If you want to learn more, please consult this tutorial on how pull requests work in the GitHub Help Center.

Here\u2019s an overview of how you can make a pull request against this project:

"},{"location":"CONTRIBUTING/#code-review","title":"Code Review","text":"

There are two aspects of code review: giving and receiving. To make it easier for your PR to receive reviews, consider that the reviewers will need you to:

"},{"location":"CONTRIBUTING/#testing","title":"Testing","text":"

Testing is the responsibility of all contributors. In general, we try to adhere to TDD, writing the test first. There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test.

We could always use improvements to our documentation! Anyone can contribute to these docs - whether you\u2019re new to the project or have been around a long time, and whether you self-identify as a developer, an end user, or someone who just can\u2019t stand seeing typos. What exactly is needed?

"},{"location":"CONTRIBUTING/#release-notes","title":"Release Notes","text":"

To add commit messages to release notes, tag the message in the following format:

[TYPE] <commit message>\n
[TYPE] can be among the following * FEATURE * FIX * DOC * TEST * CI * REVERT * FILTERADD * FILTERMOD

Sample commit:\ngit commit -m \"[TEST] test-message\"\n
"},{"location":"architecture/","title":"SC4S Architectural Considerations","text":"

SC4S provides performant and reliable syslog data collection. When you are planning your configuration, review the following architectural considerations. These recommendations pertain to the design and age of the syslog protocol itself, and are not specific to Splunk Connect for Syslog.

"},{"location":"architecture/#the-syslog-protocol","title":"The syslog Protocol","text":"

The syslog protocol design prioritizes speed and efficiency, which can come at the expense of resiliency and reliability. User Datagram Protocol (UDP) provides the ability to \u201csend and forget\u201d events over the network without regard to or acknowledgment of receipt. Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols are also supported, though UDP prevails as the preferred syslog transport for most data centers.

Because of these tradeoffs, traditional methods to provide scale and resiliency do not necessarily transfer to syslog.

"},{"location":"architecture/#ip-protocol","title":"IP protocol","text":"

By default, SC4S listens on ports using IPv4. IPv6 is also supported; see SC4S_IPV6_ENABLE in the source configuration options.

"},{"location":"architecture/#collector-location","title":"Collector Location","text":"

Since syslog is a \u201csend and forget\u201d protocol, it does not perform well when routed through substantial network infrastructure, including front-side load balancers and WANs. The most reliable way to collect syslog traffic is to provide for edge collection rather than centralized collection. If you centrally locate your syslog server, the UDP and (stateless) TCP traffic cannot adjust and data loss will occur.

"},{"location":"architecture/#syslog-data-collection-at-scale","title":"syslog Data Collection at Scale","text":"

As a best practice, do not co-locate syslog-ng servers for horizontal scale and load balance to them with a front-side load balancer:

"},{"location":"architecture/#high-availability-considerations-and-challenges","title":"High availability considerations and challenges","text":"

Load balancing for high availability does not work well for stateless, unacknowledged syslog traffic. More data is preserved when you use a simpler design, such as vMotioned VMs. With syslog, the protocol itself is prone to loss, and syslog data collection can be made \u201cmostly available\u201d at best.

"},{"location":"architecture/#udp-vs-tcp","title":"UDP vs. TCP","text":"

Run your syslog configuration on UDP rather than TCP.

The syslogd daemon optimally uses UDP for log forwarding to reduce overhead. UDP\u2019s connectionless datagram delivery does not require the overhead of establishing a network session, and it reduces load on the network because no receipt verification or window adjustment is required.

TCP uses acknowledgements (ACKs) to avoid data loss; however, loss can still occur when:

Use TCP if the syslog event is larger than the maximum size of the UDP packet on your network; this typically applies to web proxy, DLP, and IDS sources. To mitigate the drawbacks of TCP, you can use TLS over TCP:
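The \u201csend and forget\u201d behavior discussed above is easy to see with a plain datagram socket: the sender gets no session setup and no confirmation of receipt. This is a standard-library sketch, not SC4S code:

```python
import socket

# A local UDP "syslog" listener for the demo, on an ephemeral port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(5)
port = receiver.getsockname()[1]

# Sender: sendto() returns as soon as the datagram is handed to the
# network stack; there is no handshake and no acknowledgment of receipt.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
msg = b"<134>Apr 28 09:54:26 host app: hello"
sender.sendto(msg, ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
print(data.decode())
```

If the receiver were absent, the sender would behave identically, which is exactly why UDP syslog loses data silently under congestion.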

"},{"location":"configuration/","title":"SC4S configuration variables","text":"

SC4S is primarily controlled by environment variables. This topic describes the categories and variables you need to properly configure SC4S for your environment.

"},{"location":"configuration/#global-configuration-variables","title":"Global configuration variables","text":"Variable Values Description SC4S_USE_REVERSE_DNS yes or no (default) Use reverse DNS to identify hosts when HOST is not valid in the syslog header. SC4S_REVERSE_DNS_KEEP_FQDN yes or no (default) When enabled, SC4S will not extract the hostname from FQDN, and instead will pass the full domain name to the host. SC4S_CONTAINER_HOST string Variable that is passed to the container to identify the actual log host for container implementations.

If the host value is not present in an event, and you require that a true hostname be attached to each event, SC4S provides an optional ability to perform a reverse IP-to-name lookup. If the variable SC4S_USE_REVERSE_DNS is set to \u201cyes\u201d, then SC4S first checks host.csv and replaces the value of host with the specified value that matches the incoming IP address. If no value is found in host.csv, SC4S attempts a reverse DNS lookup against the configured nameserver. In this case, SC4S by default extracts only the hostname from the FQDN (example.domain.com -> example). If the SC4S_REVERSE_DNS_KEEP_FQDN variable is set to \u201cyes\u201d, the full domain name is assigned to the host field.
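The lookup order described above (host.csv first, then reverse DNS, with optional FQDN trimming) can be sketched as follows. The function names and the two-column host.csv layout are illustrative, not SC4S internals:

```python
import csv
import socket

def trim_fqdn(fqdn, keep_fqdn=False):
    """Mirror the documented default: keep only the short hostname."""
    return fqdn if keep_fqdn else fqdn.split(".")[0]

def resolve_host(ip, host_csv_path, keep_fqdn=False):
    """Lookup order from the docs: host.csv override first, then reverse DNS."""
    # 1. Check the host.csv file for a matching IP address.
    try:
        with open(host_csv_path, newline="") as f:
            for row in csv.reader(f):
                if row and row[0] == ip:
                    return row[1]  # host.csv override wins
    except FileNotFoundError:
        pass
    # 2. Fall back to a reverse DNS lookup.
    try:
        fqdn = socket.gethostbyaddr(ip)[0]
    except OSError:
        return ip  # no PTR record: leave the IP as the host value
    # 3. Trim to the short hostname unless KEEP_FQDN behavior is requested.
    return trim_fqdn(fqdn, keep_fqdn)

print(trim_fqdn("example.domain.com"))  # example
```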

Note: Using the SC4S_USE_REVERSE_DNS variable can have a significant impact on performance if the reverse DNS facility is not performant. Check this variable if you notice that events are indexed later than the actual timestamp in the event, for example, if you notice a latency between _indextime and _time.

"},{"location":"configuration/#configure-your-external-http-proxy","title":"Configure your external HTTP proxy","text":"

Many HTTP proxies are not provisioned with application traffic in mind. Ensure adequate capacity is available to avoid data loss and proxy outages. The following variables must be entered in lower case:

Variable Values Description http_proxy undefined Use libcurl format proxy string \u201chttp://username:password@proxy.server:port\u201d https_proxy undefined Use libcurl format proxy string \u201chttp://username:password@proxy.server:port\u201d"},{"location":"configuration/#configure-your-splunk-hec-destination","title":"Configure your Splunk HEC destination","text":"Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_CIPHER_SUITE comma separated list The OpenSSL cipher suite list. SC4S_DEST_SPLUNK_HEC_<ID>_SSL_VERSION comma separated list The OpenSSL version list. SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS numeric The number of destination workers (threads); the default value is 10 threads. You do not need to change this variable from the default unless your environment has a very high or low volume. Consult with the SC4S community for advice about configuring your settings for environments with very high or low volumes. SC4S_DEST_SPLUNK_INDEXED_FIELDS r_unixtime,facility,severity,container,loghost,destport,fromhostip,proto,none This is the list of SC4S indexed fields that will be included with each event in Splunk. The default is the entire list except \u201cnone\u201d. Two other indexed fields, sc4s_vendor_product and sc4s_syslog_format, also appear along with the fields selected and cannot be turned on or off individually. If you do not want any indexed fields, set the value to the single value of \u201cnone\u201d. When you set this variable, you must separate multiple entries with commas; do not include extra spaces. This list maps to the following indexed fields that will appear in all Splunk events: facility: sc4s_syslog_facility; severity: sc4s_syslog_severity; container: sc4s_container; loghost: sc4s_loghost; destport: sc4s_destport; fromhostip: sc4s_fromhostip; proto: sc4s_proto

The destination operating parameters outlined above should be individually controlled using the destination ID. For example, to set the number of workers for the default destination, use SC4S_DEST_SPLUNK_HEC_DEFAULT_WORKERS. To configure workers for the alternate HEC destination d_hec_FOO, use SC4S_DEST_SPLUNK_HEC_FOO_WORKERS.

"},{"location":"configuration/#configure-timezones-for-legacy-sources","title":"Configure timezones for legacy sources","text":"

Set the SC4S_DEFAULT_TIMEZONE variable to a recognized \u201czone info\u201d (Region/City) time zone format such as America/New_York. Setting this value forces SC4S to use the specified timezone and honor its associated Daylight Saving Time rules for all events without a timezone offset in the header or message payload.
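The effect of this setting can be illustrated with the standard library: a timestamp with no offset is interpreted in the configured zone, and DST rules apply automatically. This is a sketch of the concept, not SC4S code:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# An event timestamp with no timezone offset in the header.
naive = datetime(2023, 7, 1, 12, 0, 0)

# Interpret it as if SC4S_DEFAULT_TIMEZONE=America/New_York were set.
localized = naive.replace(tzinfo=ZoneInfo("America/New_York"))

# In July, DST is in effect, so the effective UTC offset is -4 hours;
# the same wall-clock time in January would resolve to -5 hours.
print(localized.utcoffset())
```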

"},{"location":"configuration/#configure-your-sc4s-disk-buffer","title":"Configure your SC4S disk buffer","text":"

SC4S provides the ability to minimize the number of lost events if the connection to all the Splunk indexers is lost. This capability utilizes the disk buffering feature of Syslog-ng.

SC4S receives a response from the Splunk HTTP Event Collector (HEC) when a message is received successfully. If a confirmation message from the HEC endpoint is not received (or a \u201cserver busy\u201d reply, such as a \u201c503\u201d is sent), the load balancer will try the next HEC endpoint in the pool. If all pool members are exhausted, for example, if there were a full network outage to the HEC endpoints, events will queue to the local disk buffer on the SC4S Linux host.

SC4S will continue attempting to send the failed events while it buffers all new incoming events to disk. If the disk space allocated to disk buffering fills up then SC4S will stop accepting new events and subsequent events will be lost.

Once SC4S gets confirmation that events are again being received by one or more indexers, events will then stream from the buffer using FIFO queueing.

The number of events in the disk buffer decreases as long as the incoming event volume is less than the maximum volume that SC4S, with the disk buffer in the path, can handle. When all events have been emptied from the disk buffer, SC4S will resume streaming events directly to Splunk.

Disk buffers in SC4S are allocated per destination. Keep this in mind when using additional destinations that have disk buffering configured. By default, when you configure alternate HEC destinations, disk buffering is configured identically to that of the main HEC destination, unless overridden individually.

"},{"location":"configuration/#estimate-your-storage-allocation","title":"Estimate your storage allocation","text":"

As an example, to protect against a full day of lost connectivity from SC4S to all your indexers at maximum throughput, the calculation would look like the following:

60,000 EPS * 86400 seconds * 800 bytes * 1.7 = 6.4 TB of storage
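The arithmetic behind that figure can be checked directly. Here the 1.7 multiplier is the headroom factor from the example, and the result is expressed in binary terabytes (TiB), which is how the 6.4 figure comes out:

```python
eps = 60_000           # events per second
seconds = 86_400       # one day of outage
avg_event_bytes = 800  # average event size in bytes
headroom = 1.7         # safety multiplier from the example

total_bytes = eps * seconds * avg_event_bytes * headroom
tib = total_bytes / 1024**4
print(f"{tib:.1f} TiB")  # 6.4 TiB
```

Substitute your own sustained EPS, average event size, and outage window to size the buffer for your environment.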

"},{"location":"configuration/#about-disk-buffering","title":"About disk buffering","text":"

Note the following about disk buffering:

"},{"location":"configuration/#disk-buffer-variables","title":"Disk Buffer Variables","text":"Variable Values/Default Description SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_ENABLE yes(default) or no Enable local disk buffering. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_RELIABLE yes or no(default) Enable reliable/normal disk buffering (normal is the recommended value). SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_MEMBUFSIZE bytes (10241024) The worker\u2019s memory buffer size in bytes, used with reliable disk buffering. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_MEMBUFLENGTH messages (15000) The worker\u2019s memory buffer size in message count, used with normal disk buffering. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DISKBUFSIZE bytes (53687091200) Size of local disk buffering bytes, the default is 50 GB per worker. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR path Location to store the disk buffer files. This location is fixed when using the container and should not be modified.

Note: The buffer options apply to each worker rather than the entire destination.

"},{"location":"configuration/#archive-file-configuration","title":"Archive File Configuration","text":"

This feature is designed to support compliance or diode mode archival of all messages. The files are stored in a folder structure at the mount point using the pattern shown in the table below, depending on the value of the SC4S_GLOBAL_ARCHIVE_MODE variable. Events for both modes are formatted using syslog-ng\u2019s EWMM template.

Variable Value/Default Location/Pattern SC4S_GLOBAL_ARCHIVE_MODE compliance(default) <archive mount>/${.splunk.sourcetype}/${HOST}/$YEAR-$MONTH-$DAY-archive.log SC4S_GLOBAL_ARCHIVE_MODE diode <archive mount>/${YEAR}/${MONTH}/${DAY}/${fields.sc4s_vendor_product}_${YEAR}${MONTH}${DAY}${HOUR}${MIN}.log

Use the following variables to select global archiving or per-source archiving. SC4S does not prune the files that are created, therefore an administrator must provide a means of log rotation to prune files and move them to an archival system to avoid exhausting disk space.

Variable Values Description SC4S_ARCHIVE_GLOBAL yes or undefined Enable archiving of all vendor_products. SC4S_DEST_<VENDOR_PRODUCT>_ARCHIVE yes(default) or undefined Enables selective archiving by vendor product."},{"location":"configuration/#syslog-source-configuration","title":"Syslog Source Configuration","text":"Variable Values/Default Description SC4S_SOURCE_TLS_ENABLE yes or no(default) Enable TLS globally. Be sure to configure the certificate as shown below. SC4S_LISTEN_DEFAULT_TLS_PORT undefined or 6514 Enable a TLS listener on port 6514. SC4S_LISTEN_DEFAULT_RFC5425_PORT undefined or 5425 Enable a TLS listener on port 5425. SC4S_SOURCE_TLS_OPTIONS no-sslv2 Comma-separated list of the following options: no-sslv2, no-sslv3, no-tlsv1, no-tlsv11, no-tlsv12, none. See syslog-ng docs for the latest list and default values. SC4S_SOURCE_TLS_CIPHER_SUITE See openssl Colon-delimited list of ciphers to support, for example, ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384. See openssl for the latest list and defaults. SC4S_SOURCE_TCP_MAX_CONNECTIONS 2000 Maximum number of TCP connections. SC4S_SOURCE_UDP_IW_USE yes or no(default) Determine whether to change the initial Window size for UDP. SC4S_SOURCE_UDP_FETCH_LIMIT 1000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_IW_SIZE 250000 Initial Window size. SC4S_SOURCE_TCP_IW_SIZE 20000000 Initial Window size. SC4S_SOURCE_TCP_FETCH_LIMIT 2000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TCP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TLS_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5426_SO_RCVBUFF 17039360 Server buffer size in bytes. 
Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC6587_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5425_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_LISTEN_UDP_SOCKETS 4 Number of kernel sockets per active UDP port, which configures multi-threading of the UDP input buffer in the kernel to prevent packet loss. Total UDP input buffer is the multiple of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5426_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC6587_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5425_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the sum of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_STORE_RAWMSG undefined or \u201cno\u201d Store unprocessed \u201con the wire\u201d raw message in the RAWMSG macro for use with the \u201cfallback\u201d sourcetype. Do not set this in production, substantial memory and disk overhead will result. Use this only for log path and filter development. SC4S_IPV6_ENABLE yes or no(default) Enable dual-stack IPv6 listeners and health checks."},{"location":"configuration/#configure-your-syslog-source-tls-certificate","title":"Configure your syslog source TLS certificate","text":"
  1. Create the folder /opt/sc4s/tls.
  2. Uncomment the appropriate mount line in the unit or yaml file.
  3. Save the server private key in PEM format with no password to /opt/sc4s/tls/server.key.
  4. Save the server certificate in PEM format to /opt/sc4s/tls/server.pem.
  5. Ensure the entry SC4S_SOURCE_TLS_ENABLE=yes exists in /opt/sc4s/env_file.
"},{"location":"configuration/#configure-additional-pki-trust-anchors","title":"Configure additional PKI trust anchors","text":"

Additional certificate authorities may be trusted by appending each PEM formatted certificate to /opt/sc4s/tls/trusted.pem.
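Appending a trust anchor is a plain file append; a minimal sketch follows, run in a scratch directory so it is safe to execute. The my-internal-ca.pem filename and the certificate body are placeholders; in a real deployment the trust store is /opt/sc4s/tls/trusted.pem:

```python
import pathlib
import tempfile

# Scratch stand-in for /opt/sc4s/tls.
tls_dir = pathlib.Path(tempfile.mkdtemp())

# A placeholder CA certificate in PEM format (body elided).
ca = tls_dir / "my-internal-ca.pem"
ca.write_text("-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n")

# Append the CA to the trust store; equivalent to the shell command
#   cat my-internal-ca.pem >> /opt/sc4s/tls/trusted.pem
trusted = tls_dir / "trusted.pem"
with trusted.open("a") as f:
    f.write(ca.read_text())

print("BEGIN CERTIFICATE" in trusted.read_text())  # True
```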

"},{"location":"configuration/#configure-sc4s-metadata","title":"Configure SC4S metadata","text":""},{"location":"configuration/#override-the-log-path-of-indexes-or-metadata","title":"Override the log path of indexes or metadata","text":"

Set Splunk metadata before the data arrives in Splunk and before any add-on processing occurs. The filters apply the index, source, sourcetype, host, and timestamp metadata automatically by individual data source. Values for this metadata, including a recommended index and output format, are included with all \u201cout-of-the-box\u201d log paths included with SC4S and are chosen to properly interface with the corresponding add-on in Splunk. You must ensure all recommended indexes accept this data if the defaults are not changed.

To accommodate the override of default values, each log path consults an internal lookup file that maps Splunk metadata to the specific data source being processed. This file contains the defaults that are used by SC4S to set the appropriate Splunk metadata, index, host, source, and sourcetype, for each data source. This file is not directly available to the administrator, but a copy of the file is deposited in the local mounted directory for reference, /opt/sc4s/local/context/splunk_metadata.csv.example by default. This copy is provided solely for reference. To add to the list or to override default entries, create an override file without the example extension (for example /opt/sc4s/local/context/splunk_metadata.csv) and modify it according to the instructions below.

splunk_metadata.csv is a CSV file containing a \u201ckey\u201d that is referenced in the log path for each data source. These keys are documented in the individual source files in this section, and let you override Splunk metadata.

The following is example line from a typical splunk_metadata.csv override file:

juniper_netscreen,index,ns_index\n

The columns in this file are key, metadata, and value. To make a change using the override file, consult the example file (or the source documentation) for the proper key and modify and add rows in the table, specifying one or more of the following metadata/value pairs for a given key:

In our example above, the juniper_netscreen key references a new index used for that data source called ns_index.

For most deployments the index should be the only change needed; other default metadata should almost never be overridden.

The splunk_metadata.csv file is a true override file and the entire example file should not be copied over to the override. The override file is usually just one or two lines, unless an entire index category (for example netfw) needs to be overridden.

When building a custom SC4S log path, add an appropriate new key and a default index to the splunk_metadata.csv file. The new key will not exist in the internal lookup or in the example file. Take care during log path design to choose appropriate index, sourcetype, and template defaults so that admins are not compelled to override them. If the custom log path is later added to the list of SC4S-supported sources, this addendum can be removed.

The splunk_metadata.csv.example file is provided for reference only and is not used directly by SC4S. It is an exact copy of the internal file, and can therefore change from release to release. Be sure to check the example file to make sure the keys for any overrides map correctly to the ones in the example file.
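The override semantics described above amount to a key/metadata merge where the override file wins. A small sketch, not SC4S's actual implementation; the "netfw" default shown for juniper_netscreen is illustrative:

```python
import csv
import io

# Internal defaults (normally SC4S's built-in lookup, shown here inline
# with an assumed default index).
defaults = {("juniper_netscreen", "index"): "netfw"}

# A typical one-line override file, as deployed to
# /opt/sc4s/local/context/splunk_metadata.csv.
override_csv = "juniper_netscreen,index,ns_index\n"

# Columns are key, metadata, value; an override replaces the default.
merged = dict(defaults)
for key, meta, value in csv.reader(io.StringIO(override_csv)):
    merged[(key, meta)] = value

print(merged[("juniper_netscreen", "index")])  # ns_index
```

This is why the override file stays short: only the keys you list are touched, and every other default passes through unchanged.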

"},{"location":"configuration/#override-index-or-metadata-based-on-host-ip-or-subnet-compliance-overrides","title":"Override index or metadata based on host, ip, or subnet (compliance overrides)","text":"

In some cases you can provide the same overrides based on PCI scope, geography, or other criteria. Use a file that uniquely identifies these source exceptions via syslog-ng filters, which map to an associated lookup of alternate indexes, sources, or other metadata. Indexed fields can also be added to further classify the data.

The csv file provides three columns: filter name, field name, and value. Filter names in the conf file must match one or more corresponding filter name rows in the csv file. The field name column obeys the following convention:

This file construct is best shown by an example. Here is an example of a compliance_meta_by_source.conf file and its corresponding compliance_meta_by_source.csv file:

filter f_test_test {\n   host(\"something-*\" type(glob)) or\n   netmask(192.168.100.1/24)\n};\n
f_test_test,.splunk.index,\"pciindex\"\nf_test_test,fields.compliance,\"pci\"\n

Ensure that the filter names in the conf file match one or more rows in the csv file. Any incoming message with a hostname starting with something- or arriving from a netmask of 192.168.100.1/24 will match the f_test_test filter, and the corresponding entries in the csv file will be checked for overrides. The new index is pciindex, and an indexed field named compliance will be sent to Splunk with its value set to pci. To add additional overrides, add another filter foo_bar {}; stanza to the conf file, then add appropriate entries to the csv file that match the filter names to the overrides.

Take care that your syntax is correct; for more information on proper syslog-ng syntax, see the syslog-ng documentation. A syntax error will cause the runtime process to abort in the \u201cpreflight\u201d phase at startup.

To update your changes, restart SC4S.

"},{"location":"configuration/#drop-all-data-by-ip-or-subnet-deprecated","title":"Drop all data by IP or subnet (deprecated)","text":"

Using vendor_product_by_source to null queue is now a deprecated task. See the supported method for dropping data in Filtering events from output.

"},{"location":"configuration/#splunk-connect-for-syslog-output-templates-syslog-ng-templates","title":"Splunk Connect for Syslog output templates (syslog-ng templates)","text":"

Splunk Connect for Syslog uses the syslog-ng template mechanism to format the output event that will be sent to Splunk. These templates can format the messages in a number of ways, including straight text and JSON, and can utilize the many syslog-ng \u201cmacros\u201d fields to specify what gets placed in the event delivered to the destination. The following table is a list of the templates used in SC4S, which can be used for metadata override. New templates can also be added by the administrator in the \u201clocal\u201d section for local destinations; pay careful attention to the syntax as the templates are \u201clive\u201d syslog-ng config code.

Template name Template contents Notes t_standard ${DATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template for most RFC3164 (standard syslog) traffic. t_msg_only ${MSGONLY} syslog-ng $MSG is sent, no headers (host, timestamp, etc.). t_msg_trim $(strip $MSGONLY) Similar to syslog-ng $MSG with whitespace stripped. t_everything ${ISODATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template with ISO date format. t_hdr_msg ${MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_legacy_hdr_msg ${LEGACY_MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_hdr_sdata_msg ${MSGHDR}${MSGID} ${SDATA} ${MESSAGE} Useful for non-compliant syslog messages. t_program_msg ${PROGRAM}[${PID}]: ${MESSAGE} Useful for non-compliant syslog messages. t_program_nopid_msg ${PROGRAM}: ${MESSAGE} Useful for non-compliant syslog messages. t_JSON_3164 $(format-json --scope rfc3164 --pair PRI=\"<$PRI>\" --key LEGACY_MSGHDR --exclude FACILITY --exclude PRIORITY) JSON output of all RFC3164-based syslog-ng macros. Useful with the \u201cfallback\u201d sourcetype to aid in new filter development. t_JSON_5424 $(format-json --scope rfc5424 --pair PRI=\"<$PRI>\" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY) JSON output of all RFC5424-based syslog-ng macros; for use with RFC5424-compliant traffic. t_JSON_5424_SDATA $(format-json --scope rfc5424 --pair PRI=\"<$PRI>\" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY --exclude MESSAGE) JSON output of all RFC5424-based syslog-ng macros except for MESSAGE; for use with RFC5424-compliant traffic."},{"location":"configuration/#about-ebpf","title":"About eBPF","text":"

eBPF helps mitigate congestion of a single heavy data stream by utilizing multithreading and is used with SC4S_SOURCE_LISTEN_UDP_SOCKETS. To leverage this feature, your host OS must support eBPF and you must run Docker or Podman in privileged mode.

Variable Values Description SC4S_ENABLE_EBPF=yes yes or no(default) Use eBPF to leverage multithreading when consuming from a single connection. SC4S_EBPF_NO_SOCKETS=4 integer Set number of threads to use. For optimal performance this should not be less than the value set for SC4S_SOURCE_LISTEN_UDP_SOCKETS.

To run Docker or Podman in privileged mode, edit the service file /lib/systemd/system/sc4s.service to add the --privileged flag to the Docker or Podman run command:

ExecStart=/usr/bin/podman run \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --privileged \\\n        --env-file=/opt/sc4s/env_file \\\n        --health-cmd=\"/healthcheck.sh\" \\\n        --health-interval=10s --health-retries=6 --health-timeout=6s \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\n

"},{"location":"configuration/#change-your-status-port","title":"Change your status port","text":"

Use SC4S_LISTEN_STATUS_PORT to change the \u201cstatus\u201d port used by the internal health check process. The default value is 8080.
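For example, a hypothetical env_file entry moving the status port off the default (8443 here is an arbitrary example value, and the exact response of the status listener varies by version):

```shell
# env_file: move the health-check listener off the default 8080
SC4S_LISTEN_STATUS_PORT=8443
# After restarting SC4S, a local check such as
#   curl -s http://localhost:8443
# should reach the status listener on the new port.
```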

"},{"location":"create-parser/","title":"Create a parser","text":"

SC4S parsers perform operations that would normally be performed during index time, including linebreaking, source and sourcetype setting, and timestamping. You can write your own parser if the parsers available in the SC4S package do not meet your needs.

"},{"location":"create-parser/#before-you-start","title":"Before you start","text":""},{"location":"create-parser/#procure-a-raw-log-message","title":"Procure a raw log message","text":"

If you already have a raw log message, you can skip this step. Otherwise, you need to extract one to have something to work with. You can do this in multiple ways; this section describes three methods.

"},{"location":"create-parser/#procure-a-raw-log-message-using-tcpdump","title":"Procure a raw log message using tcpdump","text":"

You can use the tcpdump command to get incoming raw messages on a given port of your server:

tcpdump -n -s 0 -S -i any -v port 8088\n\ntcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes\n09:54:26.051644 IP (tos 0x0, ttl 64, id 29465, offset 0, flags [DF], proto UDP (17), length 466)\n10.202.22.239.41151 > 10.202.33.242.syslog: SYSLOG, length: 438\nFacility local0 (16), Severity info (6)\nMsg: 2022-04-28T16:16:15.466731-04:00 NTNX-21SM6M510425-B-CVM audispd[32075]: node=ntnx-21sm6m510425-b-cvm type=SYSCALL msg=audit(1651176975.464:2828209): arch=c000003e syscall=2 success=yes exit=6 a0=7f2955ac932e a1=2 a2=3e8 a3=3 items=1 ppid=29680 pid=4684 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=964698 comm=\u201csshd\u201d exe=\u201c/usr/sbin/sshd\u201d subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key=\u201clogins\u201d\\0x0a\n
"},{"location":"create-parser/#procure-a-raw-log-message-using-wireshark","title":"Procure a raw log message using Wireshark","text":"

Once you have a stream of messages, copy one of them. Note that UDP traffic does not usually include message separators. You can also read the logs from a .pcap file using Wireshark: go to Statistics > Conversations, then click Follow Stream.

"},{"location":"create-parser/#procure-a-raw-log-message-by-saving-it-in-splunk","title":"Procure a raw log message by saving it in Splunk","text":"

See Obtaining \u201cOn-the-wire\u201d Raw Events.

"},{"location":"create-parser/#create-a-unit-test","title":"Create a unit test","text":"

To create a unit test, use the existing test case that is most similar to your use case. The naming convention is test_vendor_product.py.

  1. Make sure that your log is being parsed correctly by creating a test case. Assuming you have a raw message like this:

<14>1 2022-03-30T11:17:11.900862-04:00 host - - - - Carbon Black App Control event: text=\"File 'c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll' [c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363] would have blocked if the rule was not in Report Only mode.\" type=\"Policy Enforcement\" subtype=\"Execution block (unapproved file)\" hostname=\"CORP\\USER\" username=\"NT AUTHORITY\\SYSTEM\" date=\"3/30/2022 3:16:40 PM\" ip_address=\"10.0.0.3\" process=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\microsoft.tri.sensor.updater.exe\" file_path=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll\" file_name=\"packet.dll\" file_hash=\"c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363\" policy=\"High Enforcement - Domain Controllers\" rule_name=\"Report read-only memory map operations on unapproved executables by .NET applications\" process_key=\"00000433-0000-23d8-01d8-44491b26f203\" server_version=\"8.5.4.3\" file_trust=\"-2\" file_threat=\"-2\" process_trust=\"-2\" process_threat=\"-2\" prevalence=\"50\"

  2. Now run the test, for example:

    poetry run pytest -v --tb=long \\\n--splunk_type=external \\\n--splunk_hec_token=<HEC_TOKEN> \\\n--splunk_host=<HEC_ENDPOINT> \\\n--sc4s_host=<SC4S_IP> \\\n--junitxml=test-results/test.xml \\\n-n <NUMBER_OF_JOBS> \\\ntest/test_vendor_product.py\n

  3. The parsed log should appear in Splunk:

In this example the message is being parsed as a generic nix:syslog sourcetype. This means that the message format complied with RFC standards, and SC4S could correctly identify the format fields in the message.
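As a quick illustration of why this sample is recognized, the RFC 5424 header of the Carbon Black message can be checked with a short Python sketch (illustrative only, not SC4S code; the message body is trimmed here):

```python
import re

# Trimmed version of the sample event above. RFC 5424 layout:
# <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
raw = ('<14>1 2022-03-30T11:17:11.900862-04:00 host - - - - '
       'Carbon Black App Control event: text="..."')

header = re.match(
    r'<(?P<pri>\d{1,3})>(?P<version>\d) (?P<ts>\S+) (?P<host>\S+) '
    r'(?P<app>\S+) (?P<procid>\S+) (?P<msgid>\S+) (?P<sd>\S+) (?P<msg>.*)',
    raw,
)

# A successful match means the header is RFC 5424 compliant, which is why
# SC4S can fall back to the generic nix:syslog sourcetype.
assert header is not None
print(header.group('pri'), header.group('host'))
```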

"},{"location":"create-parser/#create-a-parser_1","title":"Create a parser","text":"

To assign your messages to the proper index and sourcetype you will need to create a parser. Your parser must be declared in package/etc/conf.d/conflib. The naming convention is app-type-vendor_product.conf.

  1. If you find a similar parser in SC4S, you can use it as a reference. In the parser, make sure you assign the proper sourcetype, index, vendor, product, and template. The template defines how your message is formatted before it is sent to Splunk.

The most basic configuration will forward raw log data with correct metadata, for example:

block parser app-syslog-vmware_cb-protect() {\n    channel {\n        rewrite {\n            r_set_splunk_dest_default(\n                index(\"epintel\")\n                sourcetype('vmware:cb:protect')\n                vendor(\"vmware\")\n                product(\"cb-protect\")\n                template(\"t_msg_only\")\n            );\n        };\n    };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n    filter {\n        message('Carbon Black App Control event:  '  type(string)  flags(prefix));\n    };  \n    parser { app-syslog-vmware_cb-protect(); };\n};\n
All messages that start with the string Carbon Black App Control event: will now be routed to the proper index and assigned the given sourcetype. For more information about message filtering, see the sources documentation.

  2. To apply more transformations, add the parser:

    block parser app-syslog-vmware_cb-protect() {\n    channel {\n        rewrite {\n            r_set_splunk_dest_default(\n                index(\"epintel\")\n                sourcetype('vmware:cb:protect')\n                vendor(\"vmware\")\n                product(\"cb-protect\")\n                template(\"t_kv_values\")\n            );\n        };\n\n        parser {\n            csv-parser(delimiters(chars('') strings(': '))\n                       columns('header', 'message')\n                       prefix('.tmp.')\n                       flags(greedy, drop-invalid));\n            kv-parser(\n                prefix(\".values.\")\n                pair-separator(\" \")\n                template('${.tmp.message}')\n            );\n        };\n    };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n    filter {\n        message('Carbon Black App Control event:  '  type(string)  flags(prefix));\n    };  \n    parser { app-syslog-vmware_cb-protect(); };\n};\n
    This example first uses csv-parser to split Carbon Black App Control event and the rest of the message into two separate fields named header and message; kv-parser then extracts all key-value pairs from the message field.

  3. To test your parser, run a previously created test case. If you need more debugging, use docker ps to see your running containers and docker logs to see what\u2019s happening to the parsed message.

  4. Commit your changes and open a pull request.
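The csv-parser/kv-parser combination shown above can be mimicked outside SC4S to sanity-check the expected field extraction. This Python sketch is illustrative only (it is not how syslog-ng implements these parsers) and uses a trimmed sample message:

```python
import re

raw = ('Carbon Black App Control event: type="Policy Enforcement" '
       'subtype="Execution block (unapproved file)" ip_address="10.0.0.3"')

# Step 1: split on the first ': ', as the csv-parser above does,
# yielding the 'header' and 'message' columns.
header, message = raw.split(': ', 1)

# Step 2: extract key="value" pairs, as the kv-parser does.
pairs = dict(re.findall(r'(\w+)="([^"]*)"', message))

print(header)
print(pairs['subtype'])
```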

"},{"location":"dashboard/","title":"SC4S Metrics and Events Dashboard","text":"

The SC4S Metrics and Events dashboard lets you monitor metrics and event flows for all SC4S instances sending data to a chosen Splunk platform.

"},{"location":"dashboard/#functionalities","title":"Functionalities","text":""},{"location":"dashboard/#overview-metrics","title":"Overview metrics","text":"

The SC4S Metrics and Events Overview dashboard displays the cumulative sum of received and dropped messages for all SC4S instances over a chosen interval for the specified time range. By default the interval is set to 30 seconds and the time range to 15 minutes.

The Received Messages panel can be used as a heartbeat metric: a healthy SC4S instance sends at least one internal metrics message every 30 seconds, and that message is included in the count.

Keep the Dropped Messages panel at a constant level of 0. If SC4S drops messages due to filters, slow performance, or for any other reason, the number of dropped messages will persist until the instance restarts. The Dropped Messages panel does not include potential UDP messages dropped from the port buffer, which SC4S is not able to track.

"},{"location":"dashboard/#single-instance-metrics","title":"Single instance metrics","text":"

You can display the instance name and SC4S version for a specific SC4S instance (available in versions 3.16.0 and later).

This dashboard also displays a timechart of deltas for received, queued, and dropped messages for a specific SC4S instance.

"},{"location":"dashboard/#single-instance-events","title":"Single instance events","text":"

You can analyze traffic processed by an SC4S instance by visualizing the following events data:

"},{"location":"dashboard/#install-the-dashboard","title":"Install the dashboard","text":"
  1. In the Splunk platform, open Search -> Dashboards.
  2. Click on Create New Dashboard and make an empty dashboard. Be sure to choose Classic Dashboards.
  3. In the \u201cEdit Dashboard\u201d view, go to Source and replace the initial xml with the contents of dashboard/dashboard.xml published in the SC4S repository.
  4. Save your changes. Your dashboard is ready to use.
"},{"location":"destinations/","title":"Supported SC4S destinations","text":"

You can configure Splunk Connect for Syslog to use any destination available in syslog-ng OSE. Helpers manage configuration for the three most common destination needs:

"},{"location":"destinations/#hec-destination","title":"HEC destination","text":""},{"location":"destinations/#configuration-options","title":"Configuration options","text":"Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_URL url URL of the Splunk endpoint, this can be a single URL or a space-separated list. SC4S_DEST_SPLUNK_HEC_<ID>_TOKEN string Splunk HTTP Event Collector token. SC4S_DEST_SPLUNK_HEC_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d. SC4S_DEST_SPLUNK_HEC_<ID>_TLS_VERIFY yes(default) or no Verify HTTP(s) certificates. SC4S_DEST_SPLUNK_HEC_<ID>_HTTP_COMPRESSION yes or no(default) Compress outgoing HTTP traffic using the gzip method."},{"location":"destinations/#http-compression","title":"HTTP Compression","text":"

HTTP traffic compression helps to reduce network bandwidth usage when sending to a HEC destination. SC4S currently supports gzip for compressing transmitted traffic. Using the gzip compression algorithm can result in lower CPU load and increased utilization of RAM. The algorithm may also cause a decrease in performance by 6% to 7%. Compression affects the content but does not affect the HTTP headers. Enable batch packet processing to make the solution efficient, as this allows compression of a large number of logs at once.
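The effect of batching on gzip efficiency can be demonstrated with a short Python sketch (illustrative only; the event shape below is a simplified assumption, not the exact HEC payload SC4S emits):

```python
import gzip
import json

# One synthetic HEC-style event (assumed shape, for illustration only):
event = json.dumps({"event": "sshd[4684]: Accepted publickey for user",
                    "sourcetype": "nix:syslog"}).encode()

single = len(gzip.compress(event))        # compressing one event per request
batch = len(gzip.compress(event * 500))   # compressing a 500-event batch

# Repetitive syslog batches compress far better than single events,
# which is why batch packet processing makes compression worthwhile.
print(single, "bytes for 1 event;", batch, "bytes for 500 events")
```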

Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_HTTP_COMPRESSION yes or no(default) Compress outgoing HTTP traffic using the gzip method."},{"location":"destinations/#syslog-standard-destination","title":"Syslog standard destination","text":"

The use of \u201csyslog\u201d as a network protocol has been defined in Internet Engineering Task Force standards RFC5424, RFC5425, and RFC6587.

Note: SC4S sending messages to a syslog destination behaves like a relay. This means overwriting some original information, for example the original source IP.

"},{"location":"destinations/#configuration-options_1","title":"Configuration options","text":"Variable Values Description SC4S_DEST_SYSLOG_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_SYSLOG_<ID>_PORT number 601 is the default when framed, 514 is the default when not framed. SC4S_DEST_SYSLOG_<ID>_IETF yes/no, the default value is yes. Use IETF Standard frames. SC4S_DEST_SYSLOG_<ID>_TRANSPORT tcp,udp,tls. The default value is tcp. SC4S_DEST_SYSLOG_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-rfc5424-with-frames","title":"Send RFC5424 with frames","text":"

In this example, SC4S will send Cisco ASA events as RFC5424 syslog to a third party system.

The message format will be similar to: 123 <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID.
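The leading 123 in this example is the octet count required by IETF framing (RFC 5425 / RFC 6587). A minimal Python sketch of the framing follows (illustrative only; the count differs here because the example message is truncated):

```python
def frame(msg: str) -> str:
    """Prefix a syslog message with its octet count (IETF octet-counting framing)."""
    body = msg.encode('utf-8')
    return f"{len(body)} " + msg

syslog_msg = ('<166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie '
              '- - - - %FTD-6-430003: DeviceUUID')
print(frame(syslog_msg))
```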

The destination name is taken from the environment variable; each destination must have a unique name. This value should be short and meaningful.

#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'asa' eq \"${fields.sc4s_product}\"\n    };    \n};\n
"},{"location":"destinations/#send-rfc5424-without-frames","title":"Send RFC5424 without frames","text":"

In this example SC4S will send Cisco ASA events to a third party system without frames.

The message format will be similar to: <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID.

#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n# set to yes for IETF frames\nSC4S_DEST_SYSLOG_MYSYS_IETF=no \n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'asa' eq \"${fields.sc4s_product}\"\n    };    \n};\n
"},{"location":"destinations/#legacy-bsd","title":"Legacy BSD","text":"

In many cases, the required configuration is legacy BSD syslog, which is not a standard but is documented in RFC3164.

Variable Values Description SC4S_DEST_BSD_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_BSD_<ID>_PORT number, the default is 514. SC4S_DEST_BSD_<ID>_TRANSPORT tcp,udp,tls, the default is tcp. SC4S_DEST_BSD_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-legacy-bsd","title":"Send legacy BSD","text":"

The message format will be similar to: <134>Feb 2 13:43:05.000 horse-ammonia CheckPoint[26203].
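The <134> priority value in this example decodes to facility local0 (16) and severity info (6). A small Python sketch of the RFC 3164 PRI arithmetic:

```python
def pri(facility: int, severity: int) -> int:
    # RFC 3164: PRI = facility * 8 + severity
    return facility * 8 + severity

def decode(pri_value: int) -> tuple:
    # Inverse: (facility, severity)
    return divmod(pri_value, 8)

print(pri(16, 6))    # local0 (16), informational (6) -> the <134> above
print(decode(134))
```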

#env_file\nSC4S_DEST_BSD_MYSYS_HOST=172.17.0.1\nSC4S_DEST_BSD_MYSYS_PORT=514\nSC4S_DEST_BSD_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_bsd_mysys.conf\napplication sc4s-lp-cisco_asa_d_bsd_mysys[sc4s-lp-dest-select-d_bsd_mysys] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'asa' eq \"${fields.sc4s_product}\"\n    };    \n};\n
"},{"location":"destinations/#multiple-destinations","title":"Multiple destinations","text":"

SC4S can send data to multiple destinations. In the original setup the default destination accepts all events. This ensures that at least one destination receives the event, helping to avoid data loss due to misconfiguration. The provided examples demonstrate possible options for configuring additional HEC destinations.

"},{"location":"destinations/#send-all-events-to-the-additional-destination","title":"Send all events to the additional destination","text":"

After adding this example to your basic configuration SC4S will send all events both to SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_OTHER_URL.

#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=GLOBAL\n

"},{"location":"destinations/#send-only-selected-events-to-the-additional-destination","title":"Send only selected events to the additional destination","text":"

After adding this example to your basic configuration SC4S will send Cisco IOS events to SC4S_DEST_SPLUNK_HEC_OTHER_URL.

#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=SELECT\n

application sc4s-lp-cisco_ios_dest_fmt_other[sc4s-lp-dest-select-d_hec_fmt_other] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'ios' eq \"${fields.sc4s_product}\"\n    };\n};\n
"},{"location":"destinations/#advanced-topic-configure-filtered-alternate-destinations","title":"Advanced topic: Configure filtered alternate destinations","text":"

You may require more granularity for a specific data source. For example, you may want to send all Cisco ASA debug traffic to Cisco Prime for analysis. To accommodate this, filtered alternate destinations let you supply a filter to redirect a portion of a source\u2019s traffic to a list of alternate destinations and, optionally, prevent matching events from being sent to Splunk. You configure this using environment variables:

Variable Values Description SC4S_DEST_<VENDOR_PRODUCT>_ALT_FILTER syslog-ng filter Filter to determine which events are sent to alternate destinations. SC4S_DEST_<VENDOR_PRODUCT>_FILTERED_ALTERNATES Comma or space-separated list of syslog-ng destinations. Send filtered events to alternate syslog-ng destinations using the VENDOR_PRODUCT syntax, for example, SC4S_DEST_CISCO_ASA_FILTERED_ALTERNATES.

This is an advanced capability, and filters and destinations using proper syslog-ng syntax must be constructed before using this functionality.

The regular destinations, including the primary HEC destination or configured archive destination, for example d_hec or d_archive, are not included for events matching the configured alternate destination filter. If an event matches the filter, the list of filtered alternate destinations completely replaces any mainline destinations, including defaults and global or source-based standard alternate destinations. Include them in the filtered destination list if desired.

Since the filtered alternate destinations completely replace the mainline destinations, including HEC to Splunk, a filter that matches all traffic can be used with a destination list that does not include the standard HEC destination to effectively turn off HEC for a given data source.

"},{"location":"edge_processor/","title":"Edge Processor integration guide (Experimental)","text":""},{"location":"edge_processor/#intro","title":"Intro","text":"

You can use the Edge Processor to:

"},{"location":"edge_processor/#how-it-works","title":"How it works","text":"
stateDiagram\n    direction LR\n\n    SC4S: SC4S\n    EP: Edge Processor\n    Dest: Another destination\n    Device: Your device\n    S3: AWS S3\n    Instance: Instance\n    Pipeline: Pipeline with SPL2\n\n    Device --> SC4S: Syslog protocol\n    SC4S --> EP: HEC\n    state EP {\n      direction LR\n      Instance --> Pipeline\n    }\n    EP --> Splunk\n    EP --> S3\n    EP --> Dest
"},{"location":"edge_processor/#set-up-the-edge-processor-for-sc4s","title":"Set up the Edge Processor for SC4S","text":"

SC4S uses the same protocol to communicate with Splunk and with Edge Processor. For this reason the setup process is very similar, with a few differences.

Set up on Docker / PodmanSet up on Kubernetes
  1. In the env_file, set the HEC URL to the IP of the managed instance that you registered with Edge Processor.
  2. Add your HEC token. You can find your token on the Edge Processor \u201cglobal settings\u201d page.
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  1. In your values.yaml, set the HEC URL to the IP of the managed instance that you registered with Edge Processor.
  2. Provide the hec_token. You can find this token on the Edge Processor\u2019s \u201cglobal settings\u201d page.
splunk:\n  hec_url: \"http://x.x.x.x:8088\"\n  hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n  hec_verify_tls: \"no\"\n
"},{"location":"edge_processor/#mtls-encryption","title":"mTLS encryption","text":"

Before setup, generate mTLS certificates. Upload the server mTLS certificates to Edge Processor, and use the client certificates with SC4S.

Rename the certificate files: SC4S requires the names key.pem, cert.pem, and ca_cert.pem.

Set up on Docker / PodmanSet up on Kubernetes
  1. Use HTTPS in HEC url: SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://x.x.x.x:8088.
  2. Move your client mTLS certificates (key.pem, cert.pem, ca_cert.pem) to /opt/sc4s/tls/hec.
  3. Mount /opt/sc4s/tls/hec to /etc/syslog-ng/tls/hec using docker/podman volumes.
  4. Define mounting mTLS point for HEC: SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_MOUNT=/etc/syslog-ng/tls/hec.
  5. Start or restart SC4S.
  1. Add the secret name of the mTLS certificates to the values.yaml file:
splunk:\n  hec_url: \"https://x.x.x.x:8088\"\n  hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n  hec_tls: \"hec-tls-secret\"\n
  2. Add your mTLS certificates to the charts/splunk-connect-for-syslog/secrets.yaml file:
hec_tls:\n  secret: \"hec-tls-secret\"\n  value:\n    key: |\n      -----BEGIN PRIVATE KEY-----\n      Example key\n      -----END PRIVATE KEY-----\n    cert: |\n      -----BEGIN CERTIFICATE-----\n      Example cert\n      -----END CERTIFICATE-----\n    ca: |\n      -----BEGIN CERTIFICATE-----\n      Example ca\n      -----END CERTIFICATE-----\n
  3. Encrypt your secrets.yaml:
ansible-vault encrypt charts/splunk-connect-for-syslog/secrets.yaml\n
  4. Add the IP address for your cluster nodes to the inventory file ansible/inventory/inventory_microk8s_ha.yaml.

  5. Deploy the Ansible playbook:

ansible-playbook -i ansible/inventory/inventory_microk8s_ha.yaml ansible/playbooks/microk8s_ha.yml --ask-vault-pass\n
"},{"location":"edge_processor/#scaling-edge-processor","title":"Scaling Edge Processor","text":"

To scale, you can distribute traffic between Edge Processor managed instances. To set this up, update the HEC URL with a comma-separated list of URLs for your managed instances.
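The comma-separated value is simply a list of individual targets. The following Python sketch illustrates the round-robin idea with placeholder addresses (it does not reproduce SC4S internals):

```python
from itertools import cycle

# Placeholder addresses; substitute your managed instance URLs.
hec_url = "http://10.0.0.1:8088,http://10.0.0.2:8088,http://10.0.0.3:8088"
urls = [u.strip() for u in hec_url.split(",")]

# Illustrative round-robin over the configured instances:
targets = cycle(urls)
first_four = [next(targets) for _ in range(4)]
print(first_four)
```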

Set up on Docker/PodmanSet up on Kubernetes

Update HEC URL in env_file:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\n

Update HEC URL in values.yaml:

splunk:\n  hec_url: \"http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\"\n
"},{"location":"experiments/","title":"Current experimental features","text":""},{"location":"experiments/#3120","title":"> 3.12.0","text":"

SC4S_USE_NAME_CACHE=yes supports IPv6.

"},{"location":"experiments/#300","title":"> 3.0.0","text":""},{"location":"experiments/#ebpf","title":"eBPF","text":"

eBPF is a feature that leverages Linux kernel infrastructure to evenly distribute load, especially when a single appliance sends a huge stream of messages. To use the eBPF feature, your host machine must run an OS that supports eBPF. Use eBPF only when other ways of tuning SC4S fail. See the instructions for configuration details. To learn more, visit this blog post.

"},{"location":"experiments/#sc4s-lite","title":"SC4S Lite","text":"

In the 3.0.0 update we introduced SC4S Lite. SC4S Lite is designed for those who prefer speed and custom filters over the pre-set ones that come with standard SC4S. It is similar to our default version, but without the pre-defined filters and complex app_parser topics. More information can be found on the dedicated page.

"},{"location":"experiments/#2130","title":"> 2.13.0","text":""},{"location":"faq/","title":"Splunk Connect for Syslog (SC4S) Frequently Asked Questions","text":"

Q: The universal forwarder with file-based architecture has been the documented Splunk best practice for a long time. Why should I switch to an HTTP Event Collector (HEC) based architecture?

A:

Q: Is the Splunk HTTP Event Collector (HEC) as reliable as the Splunk universal forwarder?

A: HEC utilizes standard HTTP mechanisms to confirm that the endpoint is responsive before sending data. The HEC architecture allows you to use an industry standard load balancer between SC4S and the indexer or the included load balancing capability built into SC4S itself.

Q: What if my team doesn\u2019t know how to manage containers?

A: Using a runtime like Podman to deploy and manage SC4S containers is exceptionally easy even for those with no prior \u201ccontainer experience\u201d. Our application of container technology behaves much like a packaging system. The interaction uses \u201csystemctl\u201d commands a Linux admin would use for other common administration activities. The best approach is to try it out in a lab to see what the experience is like for yourself!

Q: Can my team use SC4S with Windows?

A: You can now run Docker on Windows! Microsoft has introduced public preview technology for Linux containers on Windows. Alternatively, a minimal CentOS/Ubuntu Linux VM running on Windows Hyper-V is a reliable production-grade choice.

Q: My company has the traditional universal forwarder and files-based syslog architecture deployed and running, should I rip and replace a working installation with SC4S?

A: Generally speaking, if a deployment is working and you are happy with it, it\u2019s best to leave it as is until there is need for major deployment changes, such as scaling your configuration. The search performance improvements from better data distribution are one benefit, so if Splunk users have complained about search performance or you are curious about the possible performance gains, we recommend doing an analysis of the data distribution across the indexers.

Q: What is the best way to migrate to SC4S from an existing syslog architecture?

A: When exploring migration to SC4S we strongly recommend that you experiment in a lab prior to deployment to production. There are a couple of approaches to consider:

  1. Configure the new SC4S infrastructure for all your sources.
  2. Confirm all the sourcetypes are being indexed as expected.
  3. Stop the existing syslog servers.
  1. Stand up the new SC4S infrastructure in its default configuration.
  2. Confirm that all the sourcetypes are being indexed as expected.
  3. Retire the old syslog servers listening on port 514.
  4. Once the 514 sources are complete, migrate any other sources. To do this, configure SC4S filters to explicitly identify them either through a unique port, hostID, or CIDR block.
  5. Once you confirm that each sourcetype is successfully indexed, disable the old syslog configurations for that source.
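For example, identifying a source by a unique port during migration can be sketched with the SC4S_LISTEN_<VENDOR>_<PRODUCT>_<PROTO>_PORT convention in the env_file (the port number below is an arbitrary example):

```shell
# env_file: identify Cisco ASA traffic by a dedicated port instead of 514
SC4S_LISTEN_CISCO_ASA_TCP_PORT=5514
SC4S_LISTEN_CISCO_ASA_UDP_PORT=5514
```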

Q: How can SC4S be deployed to provide high availability?

A: The syslog protocol was not designed with HA as a goal, so configuration can be challenging. See Performant AND Reliable Syslog UDP is best for an excellent overview of this topic.

The syslog protocol limits the extent to which you can make any syslog collection architecture HA; at best it can be made \u201cmostly available\u201d. To do this, keep it simple and use OS clustering (shared IP) or even just VMs with vMotion. This simple architecture will encounter far less data loss over time than more complicated schemes. Another possible option is containerization HA schemes for SC4S (centered around MicroK8s) that will take some of the administrative burden of clustering away, but still functions as OS clustering under the hood.

Q: I\u2019m worried about data loss if SC4S goes down. Could I feed syslog to redundant SC4S servers to provide HA, without creating duplicate events in Splunk?

A: In many system design decisions there is some level of compromise. Any network protocol that doesn\u2019t have an application level ACK will lose data because speed is selected over reliability in the design. This is the case with syslog. Use a clustered IP with an active/passive node for a level of resilience while keeping complexity to a minimum. It could be possible to implement a far more complex solution utilizing an additional intermediary technology like Kafka, however the costs may outweigh the real world benefits.

Q: If the XL reference HW can handle just under 1 terabyte per day, how can SC4S be scaled to handle large deployments of many terabytes per day?

A: SC4S is a distributed architecture. SC4S instances should be deployed in the same VLAN as the source devices. This means that each SC4S instance will only see a subset of the total syslog traffic in a large deployment. Even in a deployment of 100 terabytes or greater, the individual SC4S instances will see loads in gigabytes per day rather than terabytes per day.

Q: SC4S is being blocked by fapolicyd, how do I fix that?

A: Create a rule that allows running SC4S in fapolicyd configuration:

Q: I am facing a unique issue that my postfilter configuration is not working although I don\u2019t have any postfilter for the mentioned source?

A: There may be an out-of-the-box postfilter for the source, which will be applied; validate this by checking the value of sc4s_tags in Splunk. To resolve this, see [sc4s-finalfilter]. Do not use this resolution in any other situation, as it can add to the cost of data processing.

Q: Where should vendor configurations be placed? There are several app-parsers folders and directories; which one should be used? Does this also mean that CSV files for metadata are no longer required?

A: Vendor configuration should be placed in /opt/sc4s/local/config/*/.conf. Most of these folders are placeholders; the configuration will work in any of them as long as the file has a .conf extension. CSV files should be placed in local/context/*.csv. splunk_metadata.csv is appropriate for metadata overrides, but use a .conf file for everything else in place of other CSV files.

Q: Can we have a file in which we can create all default indexes in one effort?

A: Refer to indexes.conf, which creates all indexes in one effort. This file also has lastChanceIndex configured; use it if it fits your requirements. For more information on this file, refer to the Splunk docs.

"},{"location":"lb/","title":"About using load balancers","text":"

Load balancers are not a best practice for SC4S. The exception to this is a narrow use case where the syslog server is exposed to untrusted clients on the internet, for example, with Palo Alto Cortex.

"},{"location":"lb/#considerations","title":"Considerations","text":""},{"location":"lb/#alternatives","title":"Alternatives","text":"

The best deployment model for high availability is a Microk8s based deployment with MetalLB in BGP mode. This model uses a special class of load balancer that is implemented as destination network translation.

"},{"location":"lite/","title":"SC4S Lite","text":""},{"location":"lite/#about-sc4s-lite","title":"About SC4S Lite","text":"

SC4S Lite provides a scalable, performance-oriented solution for ingesting syslog data into Splunk. Pluggable modular parsers offer you the flexibility to incorporate custom data processing logic to suit specific use cases.

"},{"location":"lite/#architecture","title":"Architecture","text":""},{"location":"lite/#sc4s-lite_1","title":"SC4S Lite","text":"

SC4S Lite provides a lightweight, high-performance SC4S solution.

"},{"location":"lite/#pluggable-modules","title":"Pluggable Modules","text":"

Pluggable modules are predefined modules that you can enable and disable through configuration files. Each pluggable module represents a set of parsers for a vendor that SC4S supports. You can only enable or disable modules; you cannot create new modules or update existing ones. For more information, see the pluggable modules documentation.

"},{"location":"lite/#splunk-enterprise-or-splunk-cloud","title":"Splunk Enterprise or Splunk Cloud","text":"

You configure SC4S Lite to send syslog data to Splunk Enterprise or Splunk Cloud. The Splunk Platform provides comprehensive analysis, searching, and visualization of your processed data.

"},{"location":"lite/#how-sc4s-lite-processes-your-data","title":"How SC4S Lite processes your data","text":"
  1. Source systems send syslog data to SC4S Lite. The data may be transmitted using UDP, TCP, or RELP, depending on your system\u2019s capabilities and configurations.
  2. SC4S Lite receives the syslog data and routes it through the appropriate parsers, as defined by you during configuration.
  3. The parsers in the pluggable module process the data, such as parsing, filtering, and enriching the data with metadata.
  4. SC4S Lite forwards the processed syslog data to the Splunk platform over the HTTP Event Collector (HEC).
"},{"location":"lite/#security-considerations","title":"Security considerations","text":"

SC4S Lite is built on a lightweight Alpine container, which has a minimal vulnerability footprint. SC4S Lite supports secure syslog data transmission protocols such as RELP and TLS over TCP to protect your data in transit. Additionally, the environment in which SC4S Lite is deployed can further enhance data security.

"},{"location":"lite/#scalability-and-performance","title":"Scalability and performance","text":"

SC4S Lite provides superior performance and scalability thanks to the lightweight architecture and pluggable parsers, which distribute the processing load. It is also packaged with eBPF functionality to further enhance performance. Note that actual performance may depend on factors such as your server capacity and network bandwidth.

"},{"location":"lite/#implement-sc4s-lite","title":"Implement SC4S Lite","text":"

To implement SC4S Lite:

  1. Set up the SC4S Lite environment.
  2. Install SC4S Lite following the instructions for your chosen environment with the following changes:
  1. Configure source systems to send syslog data to SC4S Lite.
  2. Enable or disable your pluggable modules. All pluggable modules are enabled by default.
  3. Test the setup to ensure that your syslog data is correctly received, processed, and forwarded to Splunk.
"},{"location":"performance/","title":"Performance and Sizing","text":"

Performance testing against our lab configuration produces the following results and limitations.

"},{"location":"performance/#tested-configurations","title":"Tested Configurations","text":""},{"location":"performance/#splunk-cloud-noah","title":"Splunk Cloud Noah","text":""},{"location":"performance/#environment","title":"Environment","text":"
/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=1800 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostname> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21109.66 msg/sec, count=38023708, time=1801.25, (average) msg size=800, bandwidth=16491.92 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 34820.94 msg/sec, count=62687563, time=1800.28, (average) msg size=800, bandwidth=27203.86 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 71929.91 msg/sec, count=129492418, time=1800.26, (average) msg size=800, bandwidth=56195.24 kB/sec average rate = 70894.84 msg/sec, count=127630166, time=1800.27, (average) msg size=800, bandwidth=55386.60 kB/sec m5zn.2xlarge average rate = 85419.09 msg/sec, count=153778825, time=1800.29, (average) msg size=800, bandwidth=66733.66 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#splunk-enterprise","title":"Splunk Enterprise","text":""},{"location":"performance/#environment_1","title":"Environment","text":"
/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=600 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostname> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result_1","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21511.69 msg/sec, count=12930565, time=601.095, (average) msg size=800, bandwidth=16806.01 kB/sec average rate = 21583.13 msg/sec, count=12973491, time=601.094, (average) msg size=800, bandwidth=16861.82 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 37514.29 msg/sec, count=22530855, time=600.594, (average) msg size=800, bandwidth=29308.04 kB/sec average rate = 37549.86 msg/sec, count=22552210, time=600.594, (average) msg size=800, bandwidth=29335.83 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 98580.10 msg/sec, count=59157495, time=600.096, (average) msg size=800, bandwidth=77015.70 kB/sec average rate = 99463.10 msg/sec, count=59687310, time=600.095, (average) msg size=800, bandwidth=77705.55 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#guidance-on-sizing-hardware","title":"Guidance on sizing hardware","text":""},{"location":"pluggable_modules/","title":"Working with pluggable modules","text":"

SC4S Lite pluggable modules are predefined modules that you can enable or disable by modifying your config.yaml file. This file contains a list of add-ons. See the example and the list of available pluggable modules in the config.yaml reference file for more information. Once you update config.yaml, mount it to the Docker container to override /etc/syslog-ng/config.yaml.

"},{"location":"pluggable_modules/#install-sc4s-lite-using-docker-compose","title":"Install SC4S Lite using Docker Compose","text":"

The installation process is identical to the installation process for Docker Compose for SC4S, with the following modifications:

volumes:\n    - /path/to/your/config.yaml:/etc/syslog-ng/config.yaml\n
"},{"location":"pluggable_modules/#kubernetes","title":"Kubernetes:","text":"

The installation process is identical to the installation process for Kubernetes for SC4S with the following modifications:

sc4s:\n    addons:\n        config.yaml: |-\n            ---\n            addons:\n                - cisco\n                - paloalto\n                - dell\n
"},{"location":"upgrade/","title":"Upgrading SC4S","text":""},{"location":"upgrade/#upgrade-sc4s","title":"Upgrade SC4S","text":"
  1. For the latest version, use the latest tag for the SC4S image in the sc4s.service unit file. You can also set a specific version in the unit file if desired.
[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n
  1. Restart the service. sudo systemctl restart sc4s

See the release notes for more information.

"},{"location":"upgrade/#upgrade-notes","title":"Upgrade Notes","text":"

Version 3 does not introduce any breaking changes. To upgrade to version 3, review the service file and change the container reference from container2 to container3. For a step-by-step guide, see here.
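The container reference swap is a one-line substitution. The snippet below demonstrates it on a sample unit-file line; in practice, apply the same sed to /lib/systemd/system/sc4s.service, then run sudo systemctl daemon-reload and restart the service:

```shell
# Demonstrate the container2 -> container3 image-reference change on a sample line.
line='Environment="SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container2:latest"'
printf '%s\n' "$line" | sed 's|container2:|container3:|'
# prints the same line with container3:latest
```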

You may need to migrate legacy log paths or version 1 app-parsers for version 2. To do this, open an issue and attach the original configuration and a compressed pcap of sample data for testing. We will evaluate whether to include the source in an upcoming release.

"},{"location":"upgrade/#upgrade-from-2230","title":"Upgrade from <2.23.0","text":""},{"location":"upgrade/#upgrade-from-2","title":"Upgrade from <2","text":"
#Current app parsers contain one or more lines of the form\nvendor_product('value_here')\n#These must change to the following form; failure to make this change will prevent sc4s from starting\nvendor('value')\nproduct('here')\n
"},{"location":"v3_upgrade/","title":"Upgrading Splunk Connect for Syslog v2 -> v3","text":""},{"location":"v3_upgrade/#upgrade-process-for-version-newer-than-230","title":"Upgrade process (for version newer than 2.3.0)","text":"

In general, the upgrade process consists of three steps: changing the container version, restarting the service, and validating the upgrade. NOTE: Version 3 of SC4S uses the Alpine Linux distribution as its base image, in contrast to previous versions, which used the UBI (Red Hat) image.

"},{"location":"v3_upgrade/#dockerpodman","title":"Docker/Podman","text":""},{"location":"v3_upgrade/#update-container-image-version","title":"Update container image version","text":"

In the service file /lib/systemd/system/sc4s.service, update the container image reference to version 3 with the latest tag:

[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n

"},{"location":"v3_upgrade/#restart-sc4s-service","title":"Restart sc4s service","text":"

Restart the service: sudo systemctl restart sc4s

"},{"location":"v3_upgrade/#validate","title":"Validate","text":"

After the above command executes successfully, the version information becomes visible in the container logs: run sudo podman logs SC4S for podman or sudo docker logs SC4S for docker. Expected output:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=3.0.0\nstarting goss\nstarting syslog-ng \n

If you are upgrading from a version lower than 2.3.0, refer to this guide.

"},{"location":"gettingstarted/","title":"Before you start","text":""},{"location":"gettingstarted/#getting-started","title":"Getting Started","text":"

Splunk Connect for Syslog (SC4S) is a distribution of syslog-ng that simplifies getting your syslog data into Splunk Enterprise and Splunk Cloud. SC4S provides a runtime-agnostic solution that lets you deploy using the container runtime environment of choice and a configuration framework. This lets you process logs out-of-the-box from many popular devices and systems.

"},{"location":"gettingstarted/#planning-deployment","title":"Planning Deployment","text":"

Syslog can refer to multiple message formats as well as, optionally, a wire protocol for event transmission between computer systems over UDP, TCP, or TLS. This protocol minimizes overhead on the sender, favoring performance over reliability. This means any instability or resource constraint can cause data to be lost in transmission.

"},{"location":"gettingstarted/#implementation","title":"Implementation","text":""},{"location":"gettingstarted/#quickstart-guide","title":"Quickstart Guide","text":""},{"location":"gettingstarted/#splunk-setup","title":"Splunk Setup","text":""},{"location":"gettingstarted/#runtime-configuration","title":"Runtime configuration","text":""},{"location":"gettingstarted/ansible-docker-podman/","title":"Podman/Docker","text":"

SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration.

"},{"location":"gettingstarted/ansible-docker-podman/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"
  1. Before running SC4S with Ansible, provide env_file with your Splunk endpoint and HEC token:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your cluster and the host application in the inventory file:
all:\n  hosts:\n  children:\n    node:\n      hosts:\n        node_1:\n          ansible_host:\n

"},{"location":"gettingstarted/ansible-docker-podman/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"
  1. If you have Ansible installed on your host, run the Ansible playbook to deploy SC4S. Otherwise, use the Docker Ansible image provided in the package:
    # From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
  2. If you used the Docker Ansible image in the previous step, then from your container remote shell, authenticate to and run the playbook.
"},{"location":"gettingstarted/ansible-docker-podman/#step-3-validate-your-configuration","title":"Step 3: Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following:

syslog-ng starting up; version='3.28.1'\n
You can verify that all SC4S instances are working by checking the sc4s_container field in Splunk. Each instance should have a different container ID. All other fields should be the same.
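One way to run this check from the Splunk search bar (using the sc4s:events sourcetype and sc4s_container field described above):

```
index=* sourcetype=sc4s:events "starting up"
| stats count by sc4s_container
```

Each running instance should appear as a distinct sc4s_container value.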

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
    sudo docker ps\n

docker logs <ID | image name> \n
or:
sudo systemctl status sc4s\n

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.

"},{"location":"gettingstarted/ansible-docker-swarm/","title":"Docker Swarm","text":"

SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and the basic configuration, such as the Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have an existing understanding of Docker Swarm and be able to set up your Swarm architecture and configuration.

"},{"location":"gettingstarted/ansible-docker-swarm/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"
  1. Before running SC4S with Ansible, provide env_file with your Splunk endpoint and HEC token:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your Docker Swarm cluster and the host application in the inventory file:
all:\n  hosts:\n  children:\n    manager:\n      hosts:\n        manager_node_1:\n          ansible_host:\n\n    worker:\n      hosts:\n        worker_node_1:\n          ansible_host:\n        worker_node_2:\n          ansible_host:\n
3. You can run your cluster with one or more manager nodes. One advantage of hosting SC4S with Docker Swarm is that you can leverage the Swarm internal load balancer. See your Swarm Mode documentation at Docker.

  1. You can also provide extra service configurations, for example, the number of replicas, in the /ansible/app/docker-compose.yml file:
    version: \"3.7\"\nservices:\n  sc4s:\n    deploy:\n      replicas: 2\n      ...\n
"},{"location":"gettingstarted/ansible-docker-swarm/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"
  1. If you have Ansible installed on your host, run the Ansible playbook to deploy SC4S. Otherwise, use the Docker Ansible image provided in the package:
    # From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
  2. If you used the Docker Ansible image in Step 1, then from your container remote shell, run the Docker Swarm Ansible playbook.
  1. If your deployment is successful, you can check the state of the Swarm cluster and the deployed stack from the manager node remote shell:
NAME SERVICES ORCHESTRATOR sc4s 1 Swarm ID NAME MODE REPLICAS IMAGE PORTS 1xv9vvbizf3m sc4s_sc4s replicated 2/2 ghcr.io/splunk/splunk-connect-for-syslog/container3:latest :514->514/tcp, :601->601/tcp, :6514->6514/tcp, :514->514/udp"},{"location":"gettingstarted/ansible-docker-swarm/#step-3-validate-your-configuration","title":"Step 3: validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

You should see an event similar to the following:

syslog-ng starting up; version='3.28.1'\n
You can verify that all services in the Swarm cluster are working by checking the sc4s_container field in Splunk. Each service should have a different container ID. All other fields should be the same.

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
sudo docker|podman ps\n
docker|podman logs <ID | image name> \n
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
  1. If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
"},{"location":"gettingstarted/ansible-mk8s/","title":"mk8s","text":"

To automate SC4S installation with Ansible, you provide a list of hosts on which you want to run SC4S, as well as basic configuration information such as the Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have an existing understanding of MicroK8s and be able to set up your Kubernetes cluster architecture and configuration.

"},{"location":"gettingstarted/ansible-mk8s/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"
  1. Before you run SC4S with Ansible, update values.yaml with your Splunk endpoint and HEC token. You can find the example file here.

  2. In the inventory file, provide a list of hosts on which you want to run your cluster and the host application:

    all:\n  hosts:\n  children:\n    node:\n      hosts:\n        node_1:\n          ansible_host:\n

  3. Alternatively, you can spin up a high-availability cluster:
    all:\n  hosts:\n  children:\n    manager:\n      hosts:\n        manager:\n          ansible_host:\n\n    workers:\n      hosts:\n        worker1:\n          ansible_host:\n        worker2:\n          ansible_host:\n
"},{"location":"gettingstarted/ansible-mk8s/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"
  1. If you have Ansible installed on your host, run the Ansible playbook to deploy SC4S. Otherwise, use the Docker Ansible image provided in the package:
    # From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
  2. If you used the Docker Ansible image, then from your container remote shell, authenticate to and run the MicroK8s playbook.
"},{"location":"gettingstarted/ansible-mk8s/#step-3-validate-your-configuration","title":"Step 3: Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following:

syslog-ng starting up; version='3.28.1'\n

You can verify whether all services in the cluster work by checking the sc4s_container in Splunk. Each service should have a different container ID. All other fields should be the same.

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
sudo microk8s kubectl get pods\nsudo microk8s kubectl logs <podname>\n

You should see events similar to those below in the output:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
"},{"location":"gettingstarted/byoe-rhel8/","title":"Configure SC4S in a non-containerized SC4S deployment","text":"

Configuring SC4S in a non-containerized SC4S deployment requires a custom configuration. Note that because Splunk does not control your unique environment, we cannot help with setting up environments, debugging networking, and so on. Consider this configuration only if:

This topic provides guidance for using the SC4S syslog-ng configuration files directly on the host OS running on a hardware server or virtual machine. You must provide:

You must modify the base configuration for most environments to accommodate enterprise infrastructure variations. When you upgrade, evaluate your current environment against this reference, then develop and test an installation-specific upgrade plan. Do not depend on the distribution-supplied version of syslog-ng, as it may not be recent enough to support your needs. See this blog post to learn more.

"},{"location":"gettingstarted/byoe-rhel8/#install-sc4s-in-a-custom-environment","title":"Install SC4S in a custom environment","text":"

These installation instructions assume a recent RHEL or CentOS-based release. You may have to make minor adjustments for Debian and Ubuntu. The examples provided here use pre-compiled binaries for the syslog-ng installation in /etc/syslog-ng. Your configuration may vary.

The following installation instructions are summarized from a blog maintained by the One Identity team.

  1. Install CentOS or RHEL 8.0. See your OS documentation for instructions.

  2. Enable EPEL (Centos 8).

dnf install 'dnf-command(copr)' -y\ndnf install epel-release -y\ndnf copr enable czanik/syslog-ng336  -y\ndnf install syslog-ng syslog-ng-python syslog-ng-http python3-pip gcc python3-devel -y\n
  1. Disable the distribution-supplied syslog-ng unit file. rsyslog will continue to be the system logger, but should be left enabled only if it is not configured to listen on the same ports as SC4S. You can also configure SC4S to provide local logging.
sudo systemctl stop syslog-ng\nsudo systemctl disable syslog-ng\n
  1. Download the latest bare_metal.tar from releases on github and untar the package in /etc/syslog-ng. This step unpacks a tarball with the SC4S version of the syslog-ng config files in the standard /etc/syslog-ng location, and will overwrite existing content. Make sure that any previous configurations of syslog-ng are saved prior to executing the download step.

For production use, select the latest version of SC4S that does not have an -rc, -alpha, or -beta suffix.

sudo wget -c https://github.com/splunk/splunk-connect-for-syslog/releases/download/<latest release>/baremetal.tar -O - | sudo tar -x -C /etc/syslog-ng\n
  1. Install python requirements:
sudo pip3 install -r /etc/syslog-ng/requirements.txt\n
  1. Optionally, to use monitoring, install goss and confirm that the version is v0.3.16 or later. goss installs in /usr/local/bin by default, so do one of the following:
curl -L https://github.com/aelsabbahy/goss/releases/latest/download/goss-linux-amd64 -o /usr/local/bin/goss\nchmod +rx /usr/local/bin/goss\ncurl -L https://github.com/aelsabbahy/goss/releases/latest/download/dgoss -o /usr/local/bin/dgoss\n# Alternatively, using the latest\n# curl -L https://raw.githubusercontent.com/aelsabbahy/goss/latest/extras/dgoss/dgoss -o /usr/local/bin/dgoss\nchmod +rx /usr/local/bin/dgoss\n
  1. You can run SC4S using systemd in one of two ways, depending on administrator preference and orchestration methodology. These are not the only ways to run in a custom environment:
  1. To run the entrypoint.sh script directly in systemd, create the SC4S unit file /lib/systemd/system/sc4s.service and add the following:
[Unit]\nDescription=SC4S Syslog Daemon\nDocumentation=https://splunk-connect-for-syslog.readthedocs.io/en/latest/\nWants=network.target network-online.target\nAfter=network.target network-online.target\n\n[Service]\nType=simple\nExecStart=/etc/syslog-ng/entrypoint.sh\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=/etc/syslog-ng/env_file\nStandardOutput=journal\nStandardError=journal\nRestart=on-abnormal\n\n[Install]\nWantedBy=multi-user.target\n
  1. To run entrypoint.sh as a preconfigured script, modify the script by commenting out or removing the stanzas following the OPTIONAL for BYOE comments in the script. This prevents syslog-ng from being launched by the script. Then create the SC4S unit file /lib/systemd/system/syslog-ng.service and add the following content:
[Unit]\nDescription=System Logger Daemon\nDocumentation=man:syslog-ng(8)\nAfter=network.target\n\n[Service]\nType=notify\nExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS -p /var/run/syslogd.pid\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=-/etc/default/syslog-ng\nEnvironmentFile=-/etc/sysconfig/syslog-ng\nStandardOutput=journal\nStandardError=journal\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n
  1. Create the file /etc/syslog-ng/env_file and add the following environment variables. Adjust the URL/TOKEN as needed.
# The following \"path\" variables can differ from the container defaults specified in the entrypoint.sh script. \n# These are *optional* for most BYOE installations, which do not differ from the install location used.\n# in the container version of SC4S.  Failure to properly set these will cause startup failure.\n#SC4S_ETC=/etc/syslog-ng\n#SC4S_VAR=/etc/syslog-ng/var\n#SC4S_BIN=/bin\n#SC4S_SBIN=/usr/sbin\n#SC4S_TLS=/etc/syslog-ng/tls\n\n# General Options\nSC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk.smg.aws:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=a778f63a-5dff-4e3c-a72c-a03183659e94\n\n# Uncomment the following line if using untrusted (self-signed) SSL certificates\n# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  1. Reload systemd and restart the service (the example here is shown for systemd option (1) above):
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/byoe-rhel8/#configure-sc4s-listening-ports","title":"Configure SC4S listening ports","text":"

The standard SC4S configuration uses UDP/TCP port 514 as the default for the listening port for syslog traffic, and TCP port 6514 for TLS. You can change these defaults by adding the following additional environment variables to the env_file:

SC4S_LISTEN_DEFAULT_TCP_PORT=514\nSC4S_LISTEN_DEFAULT_UDP_PORT=514\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5425_PORT=5425\nSC4S_LISTEN_DEFAULT_TLS_PORT=6514\n

"},{"location":"gettingstarted/byoe-rhel8/#create-unique-dedicated-listening-ports","title":"Create unique dedicated listening ports","text":"

For some source technologies, categorization by message content is not possible. To collect these sources, dedicate a unique listening port to a specific source. See Sources for more information.
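For example (the variable name below follows the SC4S SC4S_LISTEN_<VENDOR>_<PRODUCT>_<PROTOCOL>_PORT pattern; confirm the exact variable for your device in the Sources documentation), dedicating port 5014 to Cisco ASA might look like this in the env_file:

```
# Example only: traffic arriving on port 5014 is classified as Cisco ASA
SC4S_LISTEN_CISCO_ASA_TCP_PORT=5014
SC4S_LISTEN_CISCO_ASA_UDP_PORT=5014
```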

"},{"location":"gettingstarted/docker-compose-MacOS/","title":"Install Docker Desktop for MacOS","text":"

Refer to the \u201cMacOS\u201d section in your Docker documentation to set up your Docker Desktop for MacOS.

"},{"location":"gettingstarted/docker-compose-MacOS/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"

You can run SC4S using either docker-compose or the docker run command in the command line. This topic focuses solely on using docker-compose.

  1. Create a directory on the server for local configurations and disk buffering. Make it available to all administrators, for example: /opt/sc4s/.

  2. Create a docker-compose.yml file in your new directory, based on the provided template. By default, the latest container is automatically downloaded at each restart. As a best practice, consult this topic at the time of any new upgrade to check for any changes in the latest template.

    version: \"3.7\"\nservices:\n  sc4s:\n    deploy:\n      replicas: 2\n      restart_policy:\n        condition: on-failure\n    image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n    ports:\n       - target: 514\n         published: 514\n         protocol: tcp\n       - target: 514\n         published: 514\n         protocol: udp\n       - target: 601\n         published: 601\n         protocol: tcp\n       - target: 6514\n         published: 6514\n         protocol: tcp\n    env_file:\n      - /opt/sc4s/env_file\n    volumes:\n      - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n      - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n#     - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n#     - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n  splunk-sc4s-var:\n

  3. In Docker Desktop, set the /opt/sc4s folder as shared.
  4. Create a local volume that will contain the disk buffer files in the event of a communication failure to the upstream destinations. This volume also keeps track of the state of syslog-ng between restarts, and in particular the state of the disk buffer. Be sure to account for disk space requirements for the Docker volume. This volume is located in /var/lib/docker/volumes/ and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S disk buffer configuration for more information.

    sudo docker volume create splunk-sc4s-var\n

  5. Create the subdirectories: /opt/sc4s/local, /opt/sc4s/archive, and /opt/sc4s/tls. Make sure these directories match the volume mounts specified in docker-compose.yml.

  6. Create a file named /opt/sc4s/env_file.

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
7. Update the environment variables in /opt/sc4s/env_file to reflect the correct values for your environment.
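Before starting the container, you can smoke-test the HEC endpoint that env_file points at. The sketch below only prints the curl command (the URL and token are placeholders; substitute your own values), so remove the leading echo to actually send a test event. The /services/collector/event path is the standard Splunk HEC event endpoint.

```shell
# Placeholder values - substitute the URL and token from your env_file.
HEC_URL="https://your.splunk.instance:8088"
HEC_TOKEN="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Print the HEC smoke-test command; remove the leading 'echo' to run it.
# -k skips certificate verification, the curl equivalent of
# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no.
echo curl -k "${HEC_URL}/services/collector/event" \
    -H "Authorization: Splunk ${HEC_TOKEN}" \
    -d '{"event": "sc4s connectivity test"}'
```

A live run against a healthy HEC endpoint returns a JSON acknowledgement (for example {"text":"Success","code":0}).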

"},{"location":"gettingstarted/docker-compose-MacOS/#create-unique-dedicated-listening-ports","title":"Create unique dedicated listening ports","text":"

Each listening port on the container must be mapped to a listening port on the host. Make sure to update the docker-compose.yml file when adding listening ports for new data sources.

To configure unique ports:

  1. Modify the /opt/sc4s/env_file file to include the port-specific environment variables. See the Sources documentation to identify the specific environment variables that are mapped to each data source vendor and technology.
  2. Modify the Docker Compose file that starts the SC4S container so that it reflects the additional listening ports you have created. You can amend the Docker Compose file with additional target stanzas in the ports section of the file (after the default ports). For example, the following additional target and published lines provide for 21 additional technology-specific UDP and TCP ports:
       - target: 5000-5020\n         published: 5000-5020\n         protocol: tcp\n       - target: 5000-5020\n         published: 5000-5020\n         protocol: udp\n
  3. Restart SC4S using the command in the \u201cStart/Restart SC4S\u201d section in this topic.

For more information about configuration, refer to Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/docker-compose-MacOS/#startrestart-sc4s","title":"Start/Restart SC4S","text":"

From the directory where you created the docker-compose.yml file, execute:

docker-compose up\n
Otherwise, use docker-compose with the -f flag pointing to the compose file:
docker-compose -f /path/to/compose/file/docker-compose.yml up\n

"},{"location":"gettingstarted/docker-compose-MacOS/#stop-sc4s","title":"Stop SC4S","text":"

Execute:

docker-compose down \n
or

docker-compose -f /path/to/compose/file/docker-compose.yml down\n
"},{"location":"gettingstarted/docker-compose-MacOS/#verify-proper-operation","title":"Verify Proper Operation","text":"

SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

When the startup process proceeds normally, you should see an event similar to the following:

syslog-ng starting up; version='3.28.1'\n

If you do not see this, try the following steps to troubleshoot:

  1. Check to see that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Check to see that the proper indexes are created in Splunk, and that the token has access to them.
  3. Ensure the proper operation of the load balancer if used.
  4. Check the SC4S startup process running in the container:
docker logs <container_name>\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If you do not see the output above, proceed to the \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.

"},{"location":"gettingstarted/docker-compose/","title":"Install Docker Desktop","text":"

Refer to your Docker documentation to set up your Docker Desktop.

"},{"location":"gettingstarted/docker-compose/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"

You can run SC4S with docker-compose, or in the command line using the command docker run. Both options are described in this topic.

  1. Create a directory on the server for local configurations and disk buffering. Make it available to all administrators, for example: /opt/sc4s/. If you are using docker-compose, create a docker-compose.yml file in this directory using the template provided here. By default, the latest SC4S image is automatically downloaded at each restart. As a best practice, check back here regularly to ensure that any changes made to the latest template are incorporated into production before you relaunch with Docker Compose.
version: \"3.7\"\nservices:\n  sc4s:\n    deploy:\n      replicas: 2\n      restart_policy:\n        condition: on-failure\n    image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n    ports:\n       - target: 514\n         published: 514\n         protocol: tcp\n       - target: 514\n         published: 514\n         protocol: udp\n       - target: 601\n         published: 601\n         protocol: tcp\n       - target: 6514\n         published: 6514\n         protocol: tcp\n    env_file:\n      - /opt/sc4s/env_file\n    volumes:\n      - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n      - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n#     - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n#     - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n  splunk-sc4s-var:\n
  2. In Docker, set the /opt/sc4s folder as shared.
  3. Create a local volume that will contain the disk buffer files in the event of a communication failure to the upstream destinations. This volume also keeps track of the state of syslog-ng between restarts, and in particular the state of the disk buffer. Be sure to account for disk space requirements for the Docker volume. This volume is located in /var/lib/docker/volumes/ and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S Disk Buffer Configuration in the Configuration topic for more information.
sudo docker volume create splunk-sc4s-var\n
  4. Create the subdirectories: /opt/sc4s/local, /opt/sc4s/archive, and /opt/sc4s/tls. If you are using the docker-compose.yml file, make sure these directories match the volume mounts specified in docker-compose.yml.

  5. Create a file named /opt/sc4s/env_file.

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
6. Update the environment variables in /opt/sc4s/env_file to reflect the correct values for your environment.

NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.

For more information about configuration, see Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/docker-compose/#start-or-restart-sc4s","title":"Start or restart SC4S","text":"
If you are starting the container directly with docker run, execute:

docker run -p 514:514 -p 514:514/udp -p 6514:6514 -p 5000-5020:5000-5020 -p 5000-5020:5000-5020/udp \\\n    --env-file=/opt/sc4s/env_file \\\n    --name SC4S \\\n    --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n

If you are using docker compose, use the -f flag to point to the compose file:

docker compose -f /path/to/compose/file/docker-compose.yml up\n

"},{"location":"gettingstarted/docker-compose/#stop-sc4s","title":"Stop SC4S","text":"

If the container is run directly from the CLI, stop the container using the docker stop <containerID> command.

If using docker compose, execute:

docker compose down \n
or

docker compose -f /path/to/compose/file/docker-compose.yml down\n
"},{"location":"gettingstarted/docker-compose/#validate-your-configuration","title":"Validate your configuration","text":"

SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following when the startup process proceeds normally:

syslog-ng starting up; version='3.28.1'\n

If you do not see this, try the following steps to troubleshoot:

  1. Check to see that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Check to see that the proper indexes are created in Splunk, and that the token has access to them.
  3. Ensure the proper operation of the load balancer if used.
  4. Check the SC4S startup process running in the container.
docker logs SC4S\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If you do not see the output above, see the \u201cTroubleshoot SC4S server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.

"},{"location":"gettingstarted/docker-podman-offline/","title":"Install a container while offline","text":"

You can stage SC4S by downloading the image so that it can be loaded on a host machine without internet connectivity, for example on an air-gapped system.

  1. Download the container image oci_container.tar.gz from our GitHub page. The following example downloads v3.23.1; replace the URL with the latest release or pre-release version as desired:
sudo wget https://github.com/splunk/splunk-connect-for-syslog/releases/download/v3.23.1/oci_container.tar.gz\n
  2. Distribute the container to the air-gapped host machine using your preferred file transfer utility.
  3. Execute the following command, using Docker or Podman:
<podman or docker> load < oci_container.tar.gz\n
  4. Make a note of the image name from the resulting load:
Loaded image: ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1\n
  5. Tag the loaded image with a local label:

    <podman or docker> tag ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1 sc4slocal:latest\n

  6. Use the local label sc4slocal:latest in the relevant unit or YAML file to launch SC4S by setting the SC4S_IMAGE environment variable in the unit file, or the relevant image: tag if you are using Docker Compose/Swarm. This label causes the runtime to select the locally loaded image rather than attempting to obtain the container image from the internet.

Environment=\"SC4S_IMAGE=sc4slocal:latest\"\n
7. If your configuration uses systemd, remove the following entry from the relevant unit file, since an external connection to pull the container is no longer needed or available:

ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n
"},{"location":"gettingstarted/docker-systemd-general/","title":"Install Docker CE","text":""},{"location":"gettingstarted/docker-systemd-general/#before-you-begin","title":"Before you begin","text":"

Before you start:

"},{"location":"gettingstarted/docker-systemd-general/#initial-setup","title":"Initial Setup","text":"

This topic provides the most recent unit file. By default, the latest SC4S image is automatically downloaded at each restart. Consult this topic when you upgrade your SC4S installation and check for changes to the provided template unit file. Make sure these changes are incorporated into your configuration before you relaunch with systemd.

  1. Create the systemd unit file /lib/systemd/system/sc4s.service based on the provided template:
[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target docker.service\nAfter=NetworkManager.service network-online.target docker.service\nRequires=docker.service\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/docker run \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --env-file=/opt/sc4s/env_file \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
  2. Execute the following command to create a local volume. This volume contains the disk buffer files in case of a communication failure to the upstream destinations:
sudo docker volume create splunk-sc4s-var\n
  3. Account for disk space requirements for the new Docker volume. The Docker volume can grow significantly if there is an extended outage to the SC4S destinations. This volume can be found at /var/lib/docker/volumes/. See SC4S Disk Buffer Configuration.

  4. Create the following subdirectories:

  5. Create a file named /opt/sc4s/env_file and add the following environment variables and values:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  6. Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side; the underlying syslog-ng HTTP destination does not support this feature.

  7. The default number of SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS is 10. Consult the community if you feel the number of workers should deviate from this.

  8. Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example in step 5.

For more information see Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/docker-systemd-general/#configure-sc4s-for-systemd","title":"Configure SC4S for systemd","text":"

To configure SC4S for systemd run the following commands:

sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"

To restart SC4S run the following command:

sudo systemctl restart sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#implement-unit-file-changes","title":"Implement unit file changes","text":"

If you made changes to the configuration unit file, for example to configure with dedicated ports, you must stop SC4S and re-run the systemd configuration commands to implement your changes.

sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#validate-your-configuration","title":"Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

You should see an event similar to the following:

syslog-ng starting up; version='3.28.1'\n

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
docker logs SC4S\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
  5. If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
"},{"location":"gettingstarted/getting-started-runtime-configuration/","title":"Implement a Container Runtime and SC4S","text":""},{"location":"gettingstarted/getting-started-runtime-configuration/#step-1-configure-your-os-to-work-with-sc4s","title":"Step 1: Configure your OS to work with SC4S","text":""},{"location":"gettingstarted/getting-started-runtime-configuration/#tune-your-receive-buffer","title":"Tune your receive buffer","text":"

You must tune the host Linux OS receive buffer size to match the SC4S default. This helps to avoid event dropping at the network level. The default receive buffer for SC4S is 16 MB for UDP traffic, which should be acceptable for most environments. To set the host OS kernel to match your buffer:

  1. Edit /etc/sysctl.conf using the following whole-byte values corresponding to 16 MB:

    net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n

  2. Apply to the kernel:

    sysctl -p\n

  3. To verify that the kernel does not drop packets, periodically monitor the buffer using the command netstat -su | grep \"receive errors\". Failure to tune the kernel for high-volume traffic results in message loss, which can be unpredictable and difficult to detect. The default receive kernel buffer value in most distributions is 2 MB, which may not be adequate for your configuration.
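As a quick check on the arithmetic behind the values above, 17039360 bytes is exactly 16 MiB plus 256 KiB of headroom, which is where the 16 MB figure comes from:

```shell
# 17039360 bytes = 16 MiB (16 * 1024 * 1024) plus 256 KiB of headroom.
rmem=$(( (16 * 1024 + 256) * 1024 ))
echo "$rmem"   # prints 17039360

# Compare against the currently active kernel value:
#   sysctl -n net.core.rmem_max
```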

"},{"location":"gettingstarted/getting-started-runtime-configuration/#configure-ipv4-forwarding","title":"Configure IPv4 forwarding","text":"

In many distributions, for example CentOS provisioned in AWS, IPv4 forwarding is not enabled by default. IPv4 forwarding must be enabled for container networking.

net.ipv4.ip_forward=1\n
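A minimal sketch for making the setting persistent, assuming your distribution reads drop-in fragments from /etc/sysctl.d/ (the fragment is staged in a temporary file here, and the privileged install and reload steps are shown as comments):

```shell
# Stage the kernel setting in a fragment file (temp path for illustration).
conf="$(mktemp)"
echo "net.ipv4.ip_forward=1" > "$conf"
cat "$conf"   # prints net.ipv4.ip_forward=1

# Then, as root, install the fragment and reload all sysctl settings:
#   install -m 0644 "$conf" /etc/sysctl.d/99-ipv4-forward.conf
#   sysctl --system
# Verify with: sysctl -n net.ipv4.ip_forward   (should print 1)
```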
"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-2-create-your-local-directory-structure","title":"Step 2: Create your local directory structure","text":"

Create the following three directories:

When you create these directories, make sure that they match the volume mounts specified in the sc4s.service unit file. Failure to do this will cause SC4S to abort at startup.

"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-3-select-a-container-runtime-and-sc4s-configuration","title":"Step 3: Select a Container Runtime and SC4S Configuration","text":"

The table below shows possible ways to run SC4S using Docker or Podman with various management and orchestration systems.

Check your Podman or Docker documentation to see which operating systems are supported by your chosen container management tool. If the SC4S deployment model involves additional limitations or requirements regarding operating systems, you will find them in the column labeled \u2018Additional Operating Systems Requirements\u2019.

Container Runtime and Orchestration Additional Operating Systems Requirements MicroK8s Ubuntu with Microk8s Podman + systemd Docker CE + systemd Docker Desktop + Compose MacOS Docker Compose Bring your own Environment RHEL or CentOS 8.1 & 8.2 (best option) Offline Container Installation Ansible+Docker Swarm Ansible+Podman Ansible+Docker"},{"location":"gettingstarted/getting-started-splunk-setup/","title":"Splunk setup","text":"

To ensure proper integration for SC4S and Splunk, perform the following tasks in your Splunk instance:

  1. Create your SC4S indexes in Splunk.
  2. Configure your HTTP event collector.
"},{"location":"gettingstarted/getting-started-splunk-setup/#step-1-create-indexes-within-splunk","title":"Step 1: Create indexes within Splunk","text":"

SC4S maps each sourcetype to the following indexes by default. You will also need to create these indexes in Splunk:

If you use custom indexes in SC4S you must also create them in Splunk. See Create custom indexes for more information.

"},{"location":"gettingstarted/getting-started-splunk-setup/#step-2-configure-your-http-event-collector","title":"Step 2: Configure your HTTP event collector","text":"

See Use the HTTP event collector for HEC configuration instructions based on your Splunk type.

Keep in mind the following best practices specific to HEC for SC4S:

"},{"location":"gettingstarted/getting-started-splunk-setup/#create-a-load-balancing-mechanism","title":"Create a load balancing mechanism","text":"

In some configurations, you should ensure output balancing from SC4S to Splunk indexers. To do this, you create a load balancing mechanism between SC4S and Splunk indexers. Note that this should not be confused with load balancing between sources and SC4S.

When configuring your load balancing mechanism, keep in mind the following:

"},{"location":"gettingstarted/k8s-microk8s/","title":"Install and configure SC4S with Kubernetes","text":"

Splunk provides an implementation for SC4S deployment with MicroK8s using a single-server MicroK8s as the deployment model. Clustering has some tradeoffs and should only be considered on a deployment-specific basis.

You can independently replicate the model deployment on different distributions of Kubernetes. Do not attempt this unless you have advanced understanding of Kubernetes and are willing and able to maintain this configuration regularly.

SC4S with MicroK8s leverages features of MicroK8s:

Splunk maintains container images, but it doesn\u2019t directly support or otherwise provide resolutions for issues within the runtime environment.

"},{"location":"gettingstarted/k8s-microk8s/#step-1-allocate-ip-addresses","title":"Step 1: Allocate IP addresses","text":"

This configuration requires at least two IP addresses: one for the host and one for the internal load balancer. We suggest allocating three IP addresses for the host and 5-10 IP addresses for later use.

"},{"location":"gettingstarted/k8s-microk8s/#step-2-install-microk8s","title":"Step 2: Install MicroK8s","text":"

To install MicroK8s:

sudo snap install microk8s --classic --channel=1.24\nsudo usermod -a -G microk8s $USER\nsudo chown -f -R $USER ~/.kube\nsu - $USER\nmicrok8s status --wait-ready\n

"},{"location":"gettingstarted/k8s-microk8s/#step-3-set-up-your-add-ons","title":"Step 3: Set up your add-ons","text":"

When you install metallb you will be prompted for one or more IPs to use as entry points. If you do not plan to enable clustering, then this IP may be the same IP as the host. If you do plan to enable clustering this IP should not be assigned to the host.

A single IP in CIDR format is x.x.x.x/32. Use CIDR or range syntax.

microk8s enable dns \nmicrok8s enable community\nmicrok8s enable metallb \nmicrok8s enable rbac \nmicrok8s enable storage \nmicrok8s enable openebs \nmicrok8s enable helm3\nmicrok8s status --wait-ready\n
"},{"location":"gettingstarted/k8s-microk8s/#step-4-add-an-sc4s-helm-repository","title":"Step 4: Add an SC4S Helm repository","text":"

To add an SC4S Helm repository:

microk8s helm3 repo add splunk-connect-for-syslog https://splunk.github.io/splunk-connect-for-syslog\nmicrok8s helm3 repo update\n
"},{"location":"gettingstarted/k8s-microk8s/#step-5-create-a-valuesyaml-file","title":"Step 5: Create a values.yaml file","text":"

Create the configuration file values.yaml. You can provide the HEC token as a Kubernetes secret or in plain text.

"},{"location":"gettingstarted/k8s-microk8s/#provide-the-hec-token-as-plain-text","title":"Provide the HEC token as plain text","text":"
  1. Create values.yaml file:
#values.yaml\nsplunk:\n    hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n    hec_token: \"00000000-0000-0000-0000-000000000000\"\n    hec_verify_tls: \"yes\"\n
  1. Install SC4S:
    microk8s helm3 install sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#provide-the-hec-token-as-secret","title":"Provide the HEC token as secret","text":"
  1. Create values.yaml file:
#values.yaml\nsplunk:\n    hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n    hec_verify_tls: \"yes\"\n
  1. Install SC4S:
    export HEC_TOKEN=\"00000000-0000-0000-0000-000000000000\"\nmicrok8s helm3 install sc4s --set splunk.hec_token=$HEC_TOKEN splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#update-or-upgrade-sc4s","title":"Update or upgrade SC4S","text":"

Whenever the image is upgraded, or whenever changes to the values.yaml file need to be applied, run the following command:

microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#install-and-configure-sc4s-for-high-availability-ha","title":"Install and configure SC4S for High Availability (HA)","text":"

Three identically-sized nodes are required for HA. See your Microk8s documentation for more information.

  1. Update the configuration file:

    #values.yaml\nreplicaCount: 6 #2x node count\nsplunk:\n    hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n    hec_token: \"00000000-0000-0000-0000-000000000000\"\n    hec_verify_tls: \"yes\"\n

  2. Upgrade SC4S to apply the new configuration:

    microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n

"},{"location":"gettingstarted/k8s-microk8s/#configure-your-sc4s-instances-through-valuesyaml","title":"Configure your SC4S instances through values.yaml","text":"

With helm-based deployment you cannot configure environment variables and context files directly. Instead, use the values.yaml file to update your configuration, for example:

sc4s:\n  # Certificate as a k8s Secret with tls.key and tls.crt fields\n  # Ideally produced and managed by cert-manager.io\n  existingCert: example-com-tls\n  #\n  vendor_product:\n    - name: checkpoint\n      ports:\n        tcp: [9000] #Same as SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000\n        udp: [9000]\n      options:\n        listen:\n          old_host_rules: \"yes\" #Same as SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes\n\n    - name: infoblox\n      ports:\n        tcp: [9001, 9002]\n        tls: [9003]\n    - name: fortinet\n      ports:\n        ietf_udp:\n          - 9100\n          - 9101\n  context_files:\n    splunk_metadata.csv: |-\n      cisco_meraki,index,foo\n    host.csv: |-\n      192.168.1.1,foo\n      192.168.1.2,moon\n
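For comparison with non-Kubernetes deployments, the checkpoint stanza above maps to env_file variables like these. This is a sketch: the TCP port and old_host_rules lines come directly from the inline comments in the YAML, while the UDP line is an assumed parallel following the same naming pattern.

```shell
# env_file equivalents of the 'checkpoint' stanza above (sketch)
SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000
SC4S_LISTEN_CHECKPOINT_UDP_PORT=9000   # assumed parallel to the TCP variable
SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes
```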

Use the config_files and context_files variables to specify configuration and context files that are passed to SC4S.

"},{"location":"gettingstarted/k8s-microk8s/#manage-resources","title":"Manage resources","text":"

You should expect your system to require two instances per node by default. Adjust requests and limits to allow each instance to use about 40% of each node, presuming no other workload is present.

resources:\n  limits:\n    cpu: 100m\n    memory: 128Mi\n  requests:\n    cpu: 100m\n    memory: 128Mi\n
"},{"location":"gettingstarted/podman-systemd-general/","title":"Install podman","text":"

See Podman product installation docs for information about working with your Podman installation.

Before performing the tasks described in this topic, make sure you are familiar with using IPv4 forwarding with SC4S. See IPv4 forwarding .

"},{"location":"gettingstarted/podman-systemd-general/#initial-setup","title":"Initial Setup","text":"

NOTE: Make sure to use the latest unit file, which is provided here, with the current release. By default, the latest container is automatically downloaded at each restart. As a best practice, check back here regularly to ensure that any changes made to the latest template unit file are incorporated into production before you relaunch with systemd.

  1. Create the systemd unit file /lib/systemd/system/sc4s.service based on the following template:
[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/podman run \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --env-file=/opt/sc4s/env_file \\\n        --health-cmd=\"/healthcheck.sh\" \\\n        --health-interval=10s --health-retries=6 --health-timeout=6s \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
  2. Execute the following command to create a local volume, which contains the disk buffer files in the event of a communication failure to the upstream destinations. This volume will also be used to keep track of the state of syslog-ng between restarts, and in particular the state of the disk buffer.
sudo podman volume create splunk-sc4s-var\n

NOTE: Be sure to account for disk space requirements for the podman volume you create. This volume will be located in /var/lib/containers/storage/volumes/ and could grow significantly if there is an extended outage to the SC4S destinations (typically HEC endpoints). See the \u201cSC4S Disk Buffer Configuration\u201d section on the Configuration page for more info.

  3. Create the subdirectories: /opt/sc4s/local, /opt/sc4s/archive, and /opt/sc4s/tls.
  4. Create a file named /opt/sc4s/env_file and add the following environment variables and values:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  1. Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side; the underlying syslog-ng http destination does not support this feature. The default value for SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS is 10. Consult the community if you feel the number of workers (threads) should deviate from this.

NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.

For more information about configuration refer to Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/podman-systemd-general/#configure-sc4s-for-systemd-and-start-sc4s","title":"Configure SC4S for systemd and start SC4S","text":"
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"
sudo systemctl restart sc4s\n

If you have made changes to the configuration unit file, for example, in order to configure dedicated ports, you must first stop SC4S and re-run the systemd configuration commands:

sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#stop-sc4s","title":"Stop SC4S","text":"
sudo systemctl stop sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#verify-proper-operation","title":"Verify Proper Operation","text":"

SC4S has a number of \u201cpreflight\u201d checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. After this step is complete, verify SC4S is properly communicating with Splunk by executing the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following when the startup process proceeds normally (without syntax errors).

syslog-ng starting up; version='3.28.1'\n

If you do not see this, try the following before proceeding to deeper-level troubleshooting:

podman logs SC4S\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If the output does not display, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.

"},{"location":"gettingstarted/podman-systemd-general/#sc4s-non-root-operation","title":"SC4S non-root operation","text":""},{"location":"gettingstarted/podman-systemd-general/#note","title":"NOTE:","text":"

Operating as a non-root user makes it impossible to use the standard ports 514 and 601. Many devices cannot alter their destination port, so non-root operation may only be appropriate for environments where all senders can be configured to use alternate ports.

"},{"location":"gettingstarted/podman-systemd-general/#prequisites","title":"Prerequisites","text":"

Podman and slirp4netns must be installed.

"},{"location":"gettingstarted/podman-systemd-general/#setup","title":"Setup","text":"
  1. Increase the number of user namespaces. Execute the following with sudo privileges:

    $ echo \"user.max_user_namespaces=28633\" > /etc/sysctl.d/userns.conf      \n$ sysctl -p /etc/sysctl.d/userns.conf\n

  2. Create a non-root user from which to run SC4S and to prepare Podman for non-root operations:

    sudo useradd -m -d /home/sc4s -s /bin/bash sc4s\nsudo passwd sc4s  # type password here\nsudo su - sc4s\nmkdir -p /home/sc4s/local\nmkdir -p /home/sc4s/archive\nmkdir -p /home/sc4s/tls\npodman system migrate\n

  3. Load the new environment variables. To do this, temporarily switch to any other user, and then log back in as the SC4S user. When logging in as the SC4S user, don\u2019t use the \u2018su\u2019 command, as it won\u2019t load the new variables. Instead, you can use, for example, the command \u2018ssh sc4s@localhost\u2019.

  4. Create unit file in ~/.config/systemd/user/sc4s.service with the following content:

    [Unit]\nUser=sc4s\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n[Install]\nWantedBy=multi-user.target\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n# Optional mount point for local overrides and configuration\nEnvironment=\"SC4S_LOCAL_MOUNT=/home/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/home/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n# Mount point for custom TLS certificates and keys\nEnvironment=\"SC4S_TLS_MOUNT=/home/sc4s/tls:/etc/syslog-ng/tls:z\"\nTimeoutStartSec=0\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n# Note: The path /usr/bin/bash may vary based on your operating system.\n# If startup fails when running bash, check that the path is correct.\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl --user set-environment SC4SHOST=$(hostname -s)\"\nExecStart=/usr/bin/podman run -p 2514:514 -p 2514:514/udp -p 6514:6514  \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --env-file=/home/sc4s/env_file \\\n        --health-cmd=\"/healthcheck.sh\" \\\n        --health-interval=10s --health-retries=6 --health-timeout=6s \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\nRestart=on-abnormal\n

  5. Create your env_file file at /home/sc4s/env_file

    SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\nSC4S_LISTEN_DEFAULT_TCP_PORT=8514\nSC4S_LISTEN_DEFAULT_UDP_PORT=8514\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=8601\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=8601\n

"},{"location":"gettingstarted/podman-systemd-general/#run-service","title":"Run service","text":"

To run the service as a non-root user, run the systemctl command with --user flag:

systemctl --user daemon-reload\nsystemctl --user enable sc4s\nsystemctl --user start sc4s\n

The remainder of the setup can be found in the main setup instructions.

"},{"location":"gettingstarted/quickstart_guide/","title":"Quickstart Guide","text":"

This guide will enable you to quickly implement basic changes to your Splunk instance and set up a simple SC4S installation. It\u2019s a great starting point for working with SC4S and establishing a minimal operational solution. The same steps are thoroughly described in the Splunk Setup and Runtime configuration sections.

"},{"location":"gettingstarted/quickstart_guide/#splunk-setup","title":"Splunk setup","text":"
  1. Create the following default indexes that are used by SC4S:

  2. Create a HEC token for SC4S. When filling out the form for the token, leave the \u201cSelected Indexes\u201d pane blank and specify that a lastChanceIndex be created so that all data received by SC4S will have a target destination in Splunk.

"},{"location":"gettingstarted/quickstart_guide/#sc4s-setup-using-rhel","title":"SC4S setup (using RHEL)","text":"
  1. Set the host OS kernel to match the default receiver buffer of SC4S, which is set to 16MB.

a. Add the following to /etc/sysctl.conf:

```\nnet.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n```\n

b. Apply to the kernel:

```\nsysctl -p\n```\n
  1. Ensure the kernel is not dropping packets:

    netstat -su | grep \"receive errors\"\n
  2. Create the systemd unit file /lib/systemd/system/sc4s.service.

  3. Copy and paste from the SC4S sample unit file (Docker) or SC4S sample unit file (Podman).

  4. Install Podman or Docker:

    sudo yum -y install podman\n
    or
    sudo yum install docker-engine -y\n

  5. Create a Podman/Docker local volume that will contain the disk buffer files and other SC4S state files (choose one in the command below):

    sudo podman|docker volume create splunk-sc4s-var\n
  6. Create directories to be used as a mount point for local overrides and configurations:

    mkdir /opt/sc4s/local

    mkdir /opt/sc4s/archive

    mkdir /opt/sc4s/tls

  7. Create the environment file /opt/sc4s/env_file and replace the HEC_URL and HEC_TOKEN as necessary:

      SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\n  SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n  #Uncomment the following line if using untrusted SSL certificates\n  #SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  8. Configure SC4S for systemd and start SC4S:

    sudo systemctl daemon-reload

    sudo systemctl enable sc4s

    sudo systemctl start sc4s

  9. Check podman/docker logs for errors:

    sudo podman|docker logs SC4S\n
  10. Search on Splunk for successful installation of SC4S:

    index=* sourcetype=sc4s:events \"starting up\"\n
  11. Send sample data to default udp port 514 of SC4S host:

    echo \"Hello SC4S\" > /dev/udp/<SC4S_ip>/514\n
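The `/dev/udp` redirection above is a bash-specific feature and is not available in all shells. As an alternative, the following minimal Python sender (a hypothetical helper, not part of SC4S) emits a comparable UDP test datagram with a simple RFC3164-style priority prefix:

```python
import socket

def send_test_event(host: str, port: int = 514,
                    message: str = "Hello SC4S") -> bytes:
    """Send a minimal RFC3164-style test event over UDP and
    return the datagram that was sent."""
    # <13> = facility user (1) * 8 + severity notice (5)
    datagram = f"<13>{message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(datagram, (host, port))
    return datagram

# Example (replace with your SC4S host's IP):
# send_test_event("192.0.2.10")
```

Because UDP is connectionless, the send succeeds even if nothing is listening; confirm receipt with the Splunk search shown above.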
"},{"location":"sources/","title":"Introduction","text":"

When using Splunk Connect for Syslog to onboard a data source, the syslog-ng \u201capp-parser\u201d performs the operations that are traditionally performed at index-time by the corresponding Technical Add-on installed there. These index-time operations include linebreaking, source/sourcetype setting and timestamping. For this reason, if a data source is exclusively onboarded using SC4S then you will not need to install its corresponding Add-On on the indexers. You must, however, install the Add-on on the search head(s) for the user communities interested in this data source.

SC4S is designed to process \u201csyslog\u201d data, including IETF RFC 5424, legacy BSD syslog, RFC 3164 (an informational document, not a standard), and many \u201calmost\u201d syslog formats.

When possible, data sources are identified and processed based on characteristics of the event that make them unique compared to other events. For example, Cisco devices running IOS include \u201d : %\u201d followed by a string, while Arista EOS devices use a valid RFC3164 header with a value in the \u201cPROGRAM\u201d position and \u201c%\u201d as the first character of the \u201cMESSAGE\u201d portion. This allows two similar event structures to be processed correctly.

When identification by message content alone is not possible, for example when the \u201csshd\u201d program field is commonly used across vendors, additional \u201chint\u201d or guidance configuration allows SC4S to better classify events. Hints can be applied by defining a specific port, which is then used as a property of the event, or by configuring a host name/IP pattern. For example, \u201cVMWARE VSPHERE\u201d products have a number of \u201cPROGRAM\u201d fields that can be used to identify VMware-specific events in the syslog stream, and these can be properly sourcetyped automatically; however, because \u201csshd\u201d is not unique, those events will be treated as generic \u201cos:nix\u201d events until further configuration is applied. The administrator can take one of two actions to refine the processing for VMware.

"},{"location":"sources/#supporting-previously-unknown-sources","title":"Supporting previously unknown sources","text":"

Many log sources can be supported using one of the flexible options available without specific code known as app-parsers.

New supported sources are added regularly. To request support for a new source, please submit an issue with a description of the vendor/product, configuration information, and a compressed pcap (.zip) capture from a non-production environment.

Many sources can be self-supported. While we encourage sharing new sources via the GitHub project to promote consistency and develop best practices, there is no requirement to engage with the community.

"},{"location":"sources/#almost-syslog","title":"Almost Syslog","text":"

Sources sending legacy, non-conformant, RFC3164-like streams can be assisted by the creation of an \u201cAlmost Syslog\u201d parser. In such a parser, the goal is to process the syslog header, allowing other parsers to correctly parse and handle the event. The following example is taken from a currently supported format where the source product used an epoch value in the timestamp field.

    #Example event\n    #<134>1 1563249630.774247467 devicename security_event ids_alerted signature=1:28423:1 \n    # In the example note the vendor incorrectly included \"1\" following PRI defined in RFC5424 as indicating a compliant message\n    # The parser must remove the 1 before properly parsing\n    # The epoch time is captured by regex\n    # The epoch time is converted back into an ISO date and provided to the parser\n    block parser syslog_epoch-parser() {    \n    channel {\n            filter { \n                message('^(\\<\\d+\\>)(?:1(?= ))? ?(\\d{10,13}(?:\\.\\d+)?) (.*)', flags(store-matches));\n            };  \n            parser {             \n                date-parser(\n                    format('%s.%f', '%s')\n                    template(\"$2\")\n                );\n            };\n            parser {\n                syslog-parser(\n\n                    flags(assume-utf8, expect-hostname, guess-timezone)\n                    template(\"$1 $S_ISODATE $3\")\n                    );\n            };\n            rewrite(set_rfc3164_epoch);                       \n\n    };\n    };\n    application syslog_epoch[sc4s-almost-syslog] {\n        parser { syslog_epoch-parser(); };   \n    };\n
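To illustrate what this parser does outside of syslog-ng, here is a minimal Python sketch (hypothetical, not part of SC4S) of the same header rewrite: strip the stray "1", capture the epoch timestamp, and substitute an ISO-format date so a standard syslog parser can take over:

```python
import re
from datetime import datetime, timezone

# Mirrors the store-matches groups $1, $2, $3 used in the parser above:
# <PRI>, an optional stray "1", an epoch timestamp, then the remainder.
PATTERN = re.compile(r'^(<\d+>)(?:1(?= ))? ?(\d{10,13}(?:\.\d+)?) (.*)')

def rewrite_epoch_header(raw: str) -> str:
    """Replace the epoch timestamp with an ISO date so a standard
    RFC3164/RFC5424 parser can handle the event."""
    m = PATTERN.match(raw)
    if not m:
        return raw  # not an "almost syslog" event of this shape
    pri, epoch, rest = m.groups()
    ts = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return f"{pri} {ts.isoformat()} {rest}"

print(rewrite_epoch_header(
    "<134>1 1563249630.774247467 devicename security_event ids_alerted"))
```

The real log path performs this with `date-parser()` and a template, but the transformation is the same: normalize the header first, then hand off to the regular parsers.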
"},{"location":"sources/#standard-syslog-using-message-parsing","title":"Standard Syslog using message parsing","text":"

Syslog data conforming to RFC3164, or complying with the RFC standards mentioned above, can be processed with an app-parser, allowing the use of the default port rather than requiring custom ports. The following example, taken from a currently supported source, uses the value of \u201cprogram\u201d to identify the source, as this program value is unique. Care must be taken to write filter conditions strictly enough that they do not conflict with similar sources.

block parser alcatel_switch-parser() {    \n channel {\n        rewrite {\n            r_set_splunk_dest_default(\n                index('netops')\n                sourcetype('alcatel:switch')\n                vendor('alcatel')\n                product('switch')\n                template('t_hdr_msg')\n            );              \n        };       \n\n\n   };\n};\napplication alcatel_switch[sc4s-syslog] {\n filter { \n        program('swlogd' type(string) flags(prefix));\n    }; \n    parser { alcatel_switch-parser(); };   \n};\n
"},{"location":"sources/#standard-syslog-vendor-product-by-source","title":"Standard Syslog vendor product by source","text":"

In some cases, standard syslog is also generic and cannot be disambiguated from other sources by message content alone. When this happens, and only a single source type is desired, the \u201csimple\u201d option above is valid but requires managing a port. The following example allows the use of a named port OR the vendor product by source configuration.

block parser dell_poweredge_cmc-parser() {    \n channel {\n\n        rewrite {\n            r_set_splunk_dest_default(\n                index('infraops')\n                sourcetype('dell:poweredge:cmc:syslog')\n                vendor('dell')\n                product('poweredge')\n                class('cmc')\n            );              \n        };       \n   };\n};\napplication dell_poweredge_cmc[sc4s-network-source] {\n filter { \n        (\"${.netsource.sc4s_vendor_product}\" eq \"dell_poweredge_cmc\"\n        or \"${SOURCE}\" eq \"s_DELL_POWEREDGE_CMC\")\n         and \"${fields.sc4s_vendor_product}\" eq \"\"\n    };    \n\n    parser { dell_poweredge_cmc-parser(); };   \n};\n
"},{"location":"sources/#filtering-events-from-output","title":"Filtering events from output","text":"

In some cases, specific events may be considered \u201cnoise\u201d, and functionality must be implemented to prevent forwarding of these events to Splunk. In version 2.0.0 of SC4S, a new feature was implemented to improve the ease of use and efficiency of this process.

The following example will \u201cnull_queue\u201d (drop) Cisco IOS device events at the debug level. Note that because Cisco does not use the PRI to indicate DEBUG, a message filter is required.

block parser cisco_ios_debug-postfilter() {\n    channel {\n        #In this case the outcome is to drop the event; other logic, such as adding indexed fields or editing the message, is possible\n        rewrite(r_set_dest_splunk_null_queue);\n   };\n};\napplication cisco_ios_debug-postfilter[sc4s-postfilter] {\n filter {\n        \"${fields.sc4s_vendor}\" eq \"cisco\" and\n        \"${fields.sc4s_product}\" eq \"ios\"\n        #Note regex reads as\n        # start from first position\n        # at least one char that is not a `-`\n        # constant '-7-'\n        and message('^%[^\\-]+-7-');\n    };\n    parser { cisco_ios_debug-postfilter(); };\n};\n
"},{"location":"sources/#another-example-to-drop-events-based-on-src-and-action-values-in-message","title":"Another example to drop events based on \u201csrc\u201d and \u201caction\u201d values in message","text":"
#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-checkpoint_drop\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() {    \n    channel {\n        rewrite(r_set_dest_splunk_null_queue);\n    };\n};\n\napplication app-dest-rewrite-checkpoint_drop-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n    filter {\n        match('checkpoint' value('fields.sc4s_vendor') type(string))\n        and match('syslog' value('fields.sc4s_product') type(string))\n\n        and match('Drop' value('.SDATA.sc4s@2620.action') type(string))\n        and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) );\n\n    };    \n    parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); };   \n};\n
"},{"location":"sources/#the-sc4s-fallback-sourcetype","title":"The SC4S \u201cfallback\u201d sourcetype","text":"

If SC4S receives an event on port 514 that matches no source filter, that event will be given a \u201cfallback\u201d sourcetype. If you see events in Splunk with the fallback sourcetype, you should determine which source the events are from and why they are not being sourcetyped correctly. The most common cause of events categorized as \u201cfallback\u201d is the lack of an SC4S filter for that source, and in some cases a misconfigured relay that alters the integrity of the message format. In most cases this means a new SC4S filter must be developed. In this situation you can either build a filter yourself or file an issue with the community to request help.

The \u201cfallback\u201d sourcetype is formatted in JSON to allow the administrator to see the constituent syslog-ng \u201cmacros\u201d (fields) that have been automatically parsed by the syslog-ng server. An RFC3164 (legacy BSD syslog) \u201con the wire\u201d raw message is usually (but unfortunately not always) comprised of the following syslog-ng macros, in this order and spacing:

<$PRI> $HOST $LEGACY_MSGHDR$MESSAGE\n

These fields can be very useful in building a new filter for that sourcetype. In addition, the indexed field sc4s_syslog_format is helpful in determining if the incoming message is standard RFC3164. A value of anything other than rfc3164 or rfc5424_strict indicates a vendor perturbation of standard syslog, which will warrant more careful examination when building a filter.
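As a rough sketch of how these macros line up, the following hypothetical Python snippet (an approximation only; syslog-ng's real parsing is far more tolerant of vendor deviations) splits a well-formed legacy BSD event into the fields above:

```python
import re

# Approximate decomposition of a legacy BSD syslog line into the
# $PRI / $HOST / $LEGACY_MSGHDR / $MESSAGE macros described above.
RFC3164 = re.compile(
    r'^<(?P<pri>\d{1,3})>'                              # $PRI
    r'(?:(?P<ts>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) )?'    # optional timestamp
    r'(?P<host>\S+) '                                   # $HOST
    r'(?P<msghdr>[\w./-]+(?:\[\d+\])?: )?'              # $LEGACY_MSGHDR ("program[pid]: ")
    r'(?P<message>.*)$')                                # $MESSAGE

m = RFC3164.match("<34>Oct 11 22:14:15 myhost sshd[1234]: Failed password")
print(m.groupdict())
```

If a vendor's events do not decompose cleanly along these lines, that is exactly the situation the sc4s_syslog_format indexed field is designed to flag.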

"},{"location":"sources/#splunk-connect-for-syslog-and-splunk-metadata","title":"Splunk Connect for Syslog and Splunk metadata","text":"

A key aspect of SC4S is to properly set Splunk metadata prior to the data arriving in Splunk (and before any TA processing takes place). The filters will apply the proper index, source, sourcetype, host, and timestamp metadata automatically for each data source. Proper values for this metadata (including a recommended index) are included with all \u201cout-of-the-box\u201d log paths included with SC4S and are chosen to properly interface with the corresponding TA in Splunk. The administrator will need to ensure that all recommended indexes are created to accept this data if the defaults are not changed.

It is understood that default values will need to be changed in many installations. Each source documented in this section has a table entitled \u201cSourcetype and Index Configuration\u201d, which highlights the default index and sourcetype for each source. See the section \u201cSC4S metadata configuration\u201d in the \u201cConfiguration\u201d page for more information on how to override the default values in this table.

"},{"location":"sources/#unique-listening-ports","title":"Unique listening ports","text":"

SC4S supports unique listening ports for each source technology/log path (e.g. Cisco ASA), which is useful when the device is sending data on a port different from the typical default syslog port (UDP port 514). In some cases, when the source device emits data that is not able to be distinguished from other device types, a unique port is sometimes required. The specific environment variables used for setting \u201cunique ports\u201d are outlined in each source document in this section.

Using the default ports as unique listening ports is discouraged since it can lead to unintended consequences. There were cases of customers using port 514 as the unique listening port dedicated for a particular vendor and then sending other events to the same port, which caused some of those events to be misclassified.

In most cases only one \u201cunique port\u201d is needed for each source. However, SC4S also supports multiple network listening ports per source, which can be useful for a narrow set of compliance use cases. When configuring a source port variable to enable multiple ports, use a comma-separated list with no spaces (e.g. SC4S_LISTEN_CISCO_ASA_UDP_PORT=5005,6005).

"},{"location":"sources/#filtering-by-an-extra-product-description","title":"Filtering by an extra product description","text":"

Because the unique listening port feature identifies vendor and product from the first two underscore-delimited (\u2018_\u2019) tokens of the variable name, it is possible to filter events by an extra string appended to the product. For example, when several devices of the same type send logs over different ports, events can be routed to different indexes based only on the port value while retaining the proper vendor and product fields. In general, the variable follows this convention:

SC4S_LISTEN_{VENDOR}_{PRODUCT}_{PROTOCOL}_PORT={PORT VALUE 1},{PORT VALUE 2}...\n
But for special use cases it can be extended to:
SC4S_LISTEN_{VENDOR}_{PRODUCT}_{ADDITIONAL_STRING}_{PROTOCOL}_PORT={PORT VALUE},{PORT VALUE 2}...\n
This feature removes the need for complex pre/post filters.

Example:

SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001_UDP_PORT=18514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001\n
SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002_UDP_PORT=28514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002\n
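The naming convention above can be sketched as a small parser. This is a hypothetical helper for illustration only; the exact parsing inside SC4S may differ:

```python
import re

def parse_listen_var(name: str):
    """Extract vendor, product, optional extra string, and protocol from an
    SC4S_LISTEN_* variable name, mirroring the underscore convention above."""
    m = re.match(
        r'^SC4S_LISTEN_([^_]+)_([^_]+?)(?:_(.+))?_(UDP|TCP|TLS|RFC5426|RFC6587)_PORT$',
        name)
    if not m:
        return None
    vendor, product, extra, proto = m.groups()
    return {"vendor": vendor, "product": product,
            "extra": extra, "protocol": proto}

print(parse_listen_var("SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001_UDP_PORT"))
```

Note that the "extra" token may itself contain hyphens or digits (e.g. GROUP01-001); only the leading vendor and product tokens are constrained to be underscore-free.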

"},{"location":"sources/base/cef/","title":"Common Event Format (CEF)","text":""},{"location":"sources/base/cef/#product-various-products-that-send-cef-format-messages-via-syslog","title":"Product - Various products that send CEF-format messages via syslog","text":"

Each CEF product should have its own source entry in this documentation set. In a departure from normal configuration, all CEF products should use the \u201cCEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the CEF log path handles all products sending events to SC4S in the CEF format. Examples of this include Arcsight, Imperva, and Cyberark. Therefore, the CEF environment variables for unique port, archive, etc. should be set only once.

If your deployment has multiple CEF devices that send to more than one port, set the CEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.

The source documentation included below is a reference baseline for any product that sends data using the CEF log path.

Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/base/cef/#splunk-metadata-with-cef-events","title":"Splunk Metadata with CEF events","text":"

The keys (first column) in splunk_metadata.csv for CEF data sources have a slightly different meaning than those for non-CEF ones. The typical vendor_product syntax is instead replaced by checks against specific columns of the CEF event \u2013 namely the first, second, and fourth columns following the leading CEF:0 (\u201ccolumn 0\u201d). These specific columns refer to the CEF device_vendor, device_product, and device_event_class, respectively. The third column, device_version, is not used for metadata assignment.

SC4S sets metadata based on the first two columns, and (optionally) the fourth. While the key (first column) in the splunk_metadata file for non-CEF sources uses a \u201cvendor_product\u201d syntax that is arbitrary, the syntax for this key for CEF events is based on the actual contents of columns 1, 2, and 4 from the CEF event, namely:

device_vendor_device_product_device_class

The final device_class portion is optional. Therefore, CEF entries in splunk_metadata can have a key representing the vendor and product, and others representing a vendor and product coupled with one or more additional classes. This allows for more granular metadata assignment (or overrides).

Here is a snippet of a sample Imperva CEF event that includes a CEF device class entry (which is \u201cFirewall\u201d):

Apr 19 10:29:53 3.3.3.3 CEF:0|Imperva Inc.|SecureSphere|12.0.0|Firewall|SSL Untraceable Connection|Medium|\n

and the corresponding match in splunk_metadata.csv:

Imperva Inc._SecureSphere_Firewall,sourcetype,imperva:waf:firewall:cef\n
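The key construction above can be sketched in a few lines. This is a hypothetical helper for illustration only; it ignores CEF escaping rules (such as `\|` inside header values) for simplicity:

```python
def cef_metadata_keys(event: str):
    """Build candidate splunk_metadata.csv keys from a CEF event:
    vendor_product_class first (most specific), then vendor_product,
    using CEF header columns 1, 2, and 4 after the leading CEF:0."""
    header = event.split("CEF:0|", 1)[1]
    cols = header.split("|")
    vendor, product, _version, dev_class = cols[0], cols[1], cols[2], cols[3]
    return [f"{vendor}_{product}_{dev_class}", f"{vendor}_{product}"]

sample = ("Apr 19 10:29:53 3.3.3.3 CEF:0|Imperva Inc.|SecureSphere|12.0.0|"
          "Firewall|SSL Untraceable Connection|Medium|")
print(cef_metadata_keys(sample))
```

For the Imperva sample above, the most specific candidate matches the splunk_metadata.csv entry shown, which is why the device class allows more granular overrides.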
"},{"location":"sources/base/cef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/base/cef/#default-source","title":"Default Source","text":"source notes Varies Varies"},{"location":"sources/base/cef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/cef/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/base/cef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_CEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/leef/","title":"Log Extended Event Format (LEEF)","text":""},{"location":"sources/base/leef/#product-various-products-that-send-leef-v1-and-v2-format-messages-via-syslog","title":"Product - Various products that send LEEF V1 and V2 format messages via syslog","text":"

Each LEEF product should have its own source entry in this documentation set, organized by vendor. In a departure from normal configuration, all LEEF products should use the \u201cLEEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the LEEF log path handles all products sending events to SC4S in the LEEF format. Examples of this include QRadar itself as well as other legacy systems. Therefore, the LEEF environment variables for unique port, archive, etc. should be set only once.

If your deployment has multiple LEEF devices that send to more than one port, set the LEEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.

The source documentation included below is a reference baseline for any product that sends data using the LEEF log path.

Some vendors implement LEEF v2.0 format events incorrectly, omitting the required \u201ckey=value\u201d separator field from the LEEF header, thus forcing the consumer to assume the default tab \\t character. SC4S will correctly process this omission, but will not correctly process other non-compliant formats.

The LEEF format allows for the inclusion of a field devTime containing the device timestamp and allows the sender to also specify the format of this timestamp in another field called devTimeFormat, which uses the Java Time format. SC4S uses the syslog-ng strptime format, which is not directly translatable to the Java Time format. Therefore, SC4S has provided support for the following common formats. If needed, additional time formats can be requested via an issue on GitHub.

    '%s.%f',\n    '%s',\n    '%b %d %H:%M:%S.%f',\n    '%b %d %H:%M:%S',\n    '%b %d %Y %H:%M:%S.%f',\n    '%b %e %Y %H:%M:%S',\n    '%b %e %H:%M:%S.%f',\n    '%b %e %H:%M:%S',\n    '%b %e %Y %H:%M:%S.%f',\n    '%b %e %Y %H:%M:%S'  \n
Ref Link Splunk Add-on LEEF None Product Manual https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_LEEF_Format_Guide_intro.html"},{"location":"sources/base/leef/#splunk-metadata-with-leef-events","title":"Splunk Metadata with LEEF events","text":"

The keys (first column) in splunk_metadata.csv for LEEF data sources have a slightly different meaning than those for non-LEEF ones. The typical vendor_product syntax is instead replaced by checks against specific columns of the LEEF event \u2013 namely the first and second columns following the leading LEEF:VERSION (\u201ccolumn 0\u201d). These specific columns refer to the LEEF device_vendor and device_product, respectively.

device_vendor_device_product

Here is a snippet of a sample LANCOPE event in LEEF 2.0 format:

<111>Apr 19 10:29:53 3.3.3.3 LEEF:2.0|Lancope|StealthWatch|1.0|41|^|src=192.0.2.0^dst=172.50.123.1^sev=5^cat=anomaly^srcPort=81^dstPort=21^usrName=joe.black\n

and the corresponding match in splunk_metadata.csv:

Lancope_StealthWatch,source,lancope:stealthwatch\n
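The LEEF key derivation can be sketched in the same way as the CEF case. This is a hypothetical helper for illustration only; real LEEF parsing must also handle hex-encoded separator values and LEEF v1 events, which omit the delimiter field:

```python
def leef_metadata_key(event: str):
    """Build the vendor_product key from a LEEF header. For LEEF 2.0 the
    header carries an explicit delimiter used to split the attributes;
    LEEF 1.0 events fall back to a tab separator."""
    header = event.split("LEEF:", 1)[1]
    version, vendor, product, _dev_version, _event_id, rest = header.split("|", 5)
    key = f"{vendor}_{product}"
    if version.startswith("2"):
        sep, attrs = rest.split("|", 1)   # delimiter field, then attributes
    else:
        sep, attrs = "\t", rest           # LEEF 1.0: assume tab
    fields = dict(kv.split("=", 1) for kv in attrs.split(sep) if "=" in kv)
    return key, fields

sample = ("<111>Apr 19 10:29:53 3.3.3.3 LEEF:2.0|Lancope|StealthWatch|1.0|41|^|"
          "src=192.0.2.0^dst=172.50.123.1^sev=5")
key, fields = leef_metadata_key(sample)
print(key, fields["src"])
```

For the LANCOPE sample above, the derived key matches the splunk_metadata.csv entry shown.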
"},{"location":"sources/base/leef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes LEEF:1 Common sourcetype for all LEEF v1 events LEEF:2:<separator> Common sourcetype for all LEEF v2 events separator is the printable literal or hex value of the separator used in the event"},{"location":"sources/base/leef/#default-source","title":"Default Source","text":"source notes vendor:product Varies"},{"location":"sources/base/leef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/leef/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/base/leef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_LEEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_LEEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/nix/","title":"Generic *NIX","text":"

Many appliance vendors use Linux and BSD distributions as the foundation of their solutions. When configured to log via syslog, these devices\u2019 OS logs (from a security perspective) can be monitored using the common Splunk Nix TA.

Note: This is NOT a replacement for or alternative to the Splunk Universal Forwarder on Linux and Unix. For general-purpose server applications, the Universal Forwarder offers more comprehensive collection of events and metrics appropriate for both security and operations use cases.

Ref Link Splunk Add-on https://splunkbase.splunk.com/app/833/"},{"location":"sources/base/nix/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes nix:syslog None"},{"location":"sources/base/nix/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes nix_syslog nix:syslog osnix none"},{"location":"sources/base/nix/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/base/nix/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/base/nix/#options","title":"Options","text":"Variable default description SC4S_DEST_NIX_SYSLOG_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_NIX_SYSLOG_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/simple/","title":"Simple Log path by port","text":"

The SIMPLE source configuration maps a single port to a single index/sourcetype combination, allowing quick onboarding of new sources that have not been formally supported in the product. Source data must use RFC5424 or a common variant of RFC3164 formatting.

"},{"location":"sources/base/simple/#splunk-metadata-with-simple-events","title":"Splunk Metadata with SIMPLE events","text":"

The key (first column) in splunk_metadata.csv for SIMPLE data sources is user-created, following the vendor_product convention. For example, to onboard a new product first firewall using a source type of first:firewall and index netfw, add the following two lines to the configuration file as shown:

first_firewall,index,netfw\nfirst_firewall,sourcetype,first:firewall\n
"},{"location":"sources/base/simple/#options","title":"Options","text":"

For the variables below, replace VENDOR_PRODUCT with the key (converted to upper case) used in splunk_metadata.csv. Based on the example above, to establish a TCP listener for first firewall we would use SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT.
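As a sketch, the environment settings for the first firewall example would look like the following; the key first_firewall is upper-cased to FIRST_FIREWALL, and the port number 5514 is illustrative, not a product default:

```shell
# Enable SIMPLE listeners for the hypothetical "first firewall" source.
# Variable names follow SC4S_LISTEN_SIMPLE_<VENDOR_PRODUCT>_<PROTO>_PORT.
export SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT=5514
export SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_UDP_PORT=5514
echo "TCP listener port: $SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT"
```

These lines would typically go in the env_file consumed by the SC4S container runtime.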

Variable default description SC4S_LISTEN_SIMPLE_VENDOR_PRODUCT_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_SIMPLE_VENDOR_PRODUCT_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_SIMPLE_VENDOR_PRODUCT_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_ARCHIVE_SIMPLE_VENDOR_PRODUCT no Enable archive to disk for this specific source SC4S_DEST_SIMPLE_VENDOR_PRODUCT_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/simple/#important-notes","title":"Important Notes","text":""},{"location":"sources/vendor/AVI/","title":"Common","text":""},{"location":"sources/vendor/AVI/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/AVI/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://avinetworks.com/docs/latest/syslog-formats/"},{"location":"sources/vendor/AVI/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes avi:events None"},{"location":"sources/vendor/AVI/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes avi_vantage avi:events netops none"},{"location":"sources/vendor/Alcatel/Switch/","title":"Switch","text":""},{"location":"sources/vendor/Alcatel/Switch/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Alcatel/Switch/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Alcatel/Switch/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes alcatel:switch None"},{"location":"sources/vendor/Alcatel/Switch/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes alcatel_switch alcatel:switch netops 
none"},{"location":"sources/vendor/Alsid/Alsid/","title":"Alsid","text":"

The product has been purchased by Tenable and republished under a new product name; this configuration is obsolete.

"},{"location":"sources/vendor/Alsid/Alsid/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Alsid/Alsid/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/5173/ Product Manual unknown"},{"location":"sources/vendor/Alsid/Alsid/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes alsid:syslog None"},{"location":"sources/vendor/Alsid/Alsid/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes alsid_syslog alsid:syslog oswinsec none"},{"location":"sources/vendor/Arista/","title":"EOS","text":""},{"location":"sources/vendor/Arista/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Arista/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Arista/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes arista:eos:* None"},{"location":"sources/vendor/Arista/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes arista_eos arista:eos netops none arista_eos_$PROCESSNAME arista:eosq netops The \u201cprocess\u201d field is used from the event"},{"location":"sources/vendor/Aruba/ap/","title":"Access Points","text":""},{"location":"sources/vendor/Aruba/ap/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Aruba/ap/#links","title":"Links","text":"Ref Link"},{"location":"sources/vendor/Aruba/ap/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes aruba:syslog Dynamically Created"},{"location":"sources/vendor/Aruba/ap/#index-configuration","title":"Index Configuration","text":"key index notes aruba_ap netops none"},{"location":"sources/vendor/Aruba/ap/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-aruba_ap.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-aruba_ap[sc4s-vps] {\n filter { \n        host(\"aruba-ap-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('aruba')\n            product('ap')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Aruba/clearpass/","title":"Clearpass","text":""},{"location":"sources/vendor/Aruba/clearpass/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Aruba/clearpass/#links","title":"Links","text":"Ref Link"},{"location":"sources/vendor/Aruba/clearpass/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes aruba:clearpass Dynamically Created"},{"location":"sources/vendor/Aruba/clearpass/#index-configuration","title":"Index Configuration","text":"key index notes aruba_clearpass netops none aruba_clearpass_endpoint-profile netops none aruba_clearpass_alert netops none aruba_clearpass_endpoint-audit-record netops none aruba_clearpass_policy-server-session netops none aruba_clearpass_post-auth-monit-config netops none aruba_clearpass_snmp-session-log netops none aruba_clearpass_radius-session netops none aruba_clearpass_system-event netops none aruba_clearpass_tacacs-accounting-detail netops none aruba_clearpass_tacacs-accounting-record netops none"},{"location":"sources/vendor/Aruba/clearpass/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-aruba_clearpass.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-aruba_clearpass[sc4s-vps] {\n filter { \n        host(\"aruba-cp-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('aruba')\n            product('clearpass')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Avaya/","title":"SIP Manager","text":""},{"location":"sources/vendor/Avaya/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Avaya/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Avaya/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes avaya:avaya None"},{"location":"sources/vendor/Avaya/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes avaya_sipmgr avaya:avaya main none"},{"location":"sources/vendor/Aviatrix/aviatrix/","title":"Aviatrix","text":""},{"location":"sources/vendor/Aviatrix/aviatrix/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Aviatrix/aviatrix/#product-switches","title":"Product - Switches","text":"Ref Link Splunk Add-on \u2013 Product Manual Link"},{"location":"sources/vendor/Aviatrix/aviatrix/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes aviatrix:cloudx-cli None aviatrix:kernel None aviatrix:cloudxd None aviatrix:avx-nfq None aviatrix:avx-gw-state-sync None aviatrix:perfmon None"},{"location":"sources/vendor/Aviatrix/aviatrix/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes aviatrix_cloudx-cli aviatrix:cloudx-cli netops none aviatrix_kernel aviatrix:kernel netops none aviatrix_cloudxd aviatrix:cloudxd netops none aviatrix_avx-nfq aviatrix:avx-nfq netops none aviatrix_avx-gw-state-sync aviatrix:avx-gw-state-sync netops none aviatrix_perfmon aviatrix:perfmon netops none"},{"location":"sources/vendor/Barracuda/waf/","title":"WAF (Cloud)","text":""},{"location":"sources/vendor/Barracuda/waf/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Barracuda/waf/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual 
https://campus.barracuda.com/product/WAAS/doc/79462622/log-export"},{"location":"sources/vendor/Barracuda/waf/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes barracuda:tr none"},{"location":"sources/vendor/Barracuda/waf/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes barracuda_waf barracuda:web:firewall netwaf None"},{"location":"sources/vendor/Barracuda/waf/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-barracuda_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-barracuda_syslog[sc4s-vps] {\n filter {      \n        netmask(169.254.100.1/24)\n        or host(\"barracuda\" type(string) flags(ignore-case))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('barracuda')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Barracuda/waf_on_prem/","title":"Barracuda WAF (On Premises)","text":""},{"location":"sources/vendor/Barracuda/waf_on_prem/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Barracuda/waf_on_prem/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3776 Product Manual https://campus.barracuda.com/product/webapplicationfirewall/doc/92767349/exporting-log-formats/"},{"location":"sources/vendor/Barracuda/waf_on_prem/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes barracuda:system program(\u201cSYS\u201d) barracuda:waf program(\u201cWF\u201d) barracuda:web program(\u201cTR\u201d) barracuda:audit program(\u201cAUDIT\u201d) barracuda:firewall program(\u201cNF\u201d)"},{"location":"sources/vendor/Barracuda/waf_on_prem/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes barracuda_system barracuda:system netwaf None barracuda_waf barracuda:waf netwaf None barracuda_web barracuda:web netwaf None barracuda_audit barracuda:audit netwaf None barracuda_firewall barracuda:firewall netwaf None"},{"location":"sources/vendor/BeyondTrust/sra/","title":"Secure Remote Access (Bomgar)","text":""},{"location":"sources/vendor/BeyondTrust/sra/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/BeyondTrust/sra/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/BeyondTrust/sra/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes beyondtrust:sra None"},{"location":"sources/vendor/BeyondTrust/sra/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes beyondtrust_sra beyondtrust:sra infraops none"},{"location":"sources/vendor/BeyondTrust/sra/#options","title":"Options","text":"Variable default description SC4S_DEST_BEYONDTRUST_SRA_SPLUNK_HEC_FMT JSON Restructure data from vendor 
format to json for splunk destinations set to \u201cNONE\u201d for native format SC4S_DEST_BEYONDTRUST_SRA_SYSLOG_FMT SDATA Restructure data from vendor format to SDATA for SYSLOG destinations set to \u201cNONE\u201d for native format"},{"location":"sources/vendor/Broadcom/brightmail/","title":"Brightmail","text":""},{"location":"sources/vendor/Broadcom/brightmail/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/brightmail/#links","title":"Links","text":"Ref Link Splunk Add-on TBD Product Manual https://support.symantec.com/us/en/article.howto38250.html"},{"location":"sources/vendor/Broadcom/brightmail/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes symantec:smg Requires version TA 3.6"},{"location":"sources/vendor/Broadcom/brightmail/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes symantec_brightmail symantec:smg email none"},{"location":"sources/vendor/Broadcom/brightmail/#options","title":"Options","text":"Variable default description SC4S_SOURCE_FF_SYMANTEC_BRIGHTMAIL_GROUPMSG yes Email processing events generated by the bmserver process will be grouped by host+program+pid+msg ID into a single event SC4S_DEST_SYMANTEC_BRIGHTMAIL_SPLUNK_HEC_FMT empty if \u201cJSON\u201d and GROUPMSG is enabled format the event in json SC4S_DEST_SYMANTEC_BRIGHTMAIL_SYSLOG_FMT empty if \u201cSDATA\u201d and GROUPMSG is enabled format the event in rfc5424 sdata"},{"location":"sources/vendor/Broadcom/dlp/","title":"Symantec DLP","text":""},{"location":"sources/vendor/Broadcom/dlp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/dlp/#links","title":"Links","text":"Ref Link Splunk Add-on Symantec DLP https://splunkbase.splunk.com/app/3029/ Source doc https://knowledge.broadcom.com/external/article/159509/generating-syslog-messages-from-data-los.html"},{"location":"sources/vendor/Broadcom/dlp/#sourcetypes","title":"Sourcetypes","text":"sourcetype
notes symantec:dlp:syslog None"},{"location":"sources/vendor/Broadcom/dlp/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes symantec_dlp symantec:dlp:syslog netdlp none"},{"location":"sources/vendor/Broadcom/dlp/#option-1-correct-source-syslog-formats","title":"Option 1: Correct Source syslog formats","text":""},{"location":"sources/vendor/Broadcom/dlp/#syslog-alert-response","title":"Syslog Alert Response","text":"

Log in to Symantec DLP and edit the Syslog Response rule. The default configuration will appear as follows:

$POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n

DO NOT replace the text; prepend the following literal:

SymantecDLPAlert: \n

Result (note the space between \u2018:\u2019 and \u2018$\u2019):

SymantecDLPAlert: $POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n
"},{"location":"sources/vendor/Broadcom/dlp/#syslog-system-events","title":"Syslog System events","text":""},{"location":"sources/vendor/Broadcom/dlp/#option-2-manual-vendor-product-by-source-parser-configuration","title":"Option 2: Manual Vendor Product by source Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-symantec_dlp.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-symantec_dlp[sc4s-vps] {\n filter {      \n        #netmask(169.254.100.1/24)\n        #host(\"-esx-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('symantec')\n            product('dlp')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Broadcom/ep/","title":"Symantec Endpoint Protection (SEPM)","text":""},{"location":"sources/vendor/Broadcom/ep/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/ep/#product-symantec-endpoint-protection","title":"Product - Symantec Endpoint Protection","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2772/ Product Manual https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/symantec-security-software/endpoint-security-and-management/endpoint-protection/all/Monitoring-Reporting-and-Enforcing-Compliance/viewing-logs-v7522439-d37e464/exporting-data-to-a-syslog-server-v8442743-d15e1107.html"},{"location":"sources/vendor/Broadcom/ep/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes symantec:ep:syslog Warning the syslog method of accepting EP logs has been reported to show high data loss and is not Supported by Splunk symantec:ep:admin:syslog none symantec:ep:agent:syslog none symantec:ep:agt:system:syslog none symantec:ep:behavior:syslog none symantec:ep:packet:syslog none symantec:ep:policy:syslog none symantec:ep:proactive:syslog none symantec:ep:risk:syslog none symantec:ep:scan:syslog none symantec:ep:scm:system:syslog none symantec:ep:security:syslog none symantec:ep:traffic:syslog none"},{"location":"sources/vendor/Broadcom/ep/#index-configuration","title":"Index Configuration","text":"key index notes symantec_ep epav none"},{"location":"sources/vendor/Broadcom/proxy/","title":"ProxySG/ASG","text":"

Symantec (now Broadcom) ProxySG/ASG was formerly known as the \u201cBluecoat\u201d proxy.

Broadcom products are inclusive of products formerly marketed under Symantec and Bluecoat brands.

"},{"location":"sources/vendor/Broadcom/proxy/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/proxy/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2758/ Product Manual https://support.symantec.com/us/en/article.tech242216.html"},{"location":"sources/vendor/Broadcom/proxy/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes bluecoat:proxysg:access:kv Requires version TA 3.8.1 bluecoat:proxysg:access:syslog Requires version TA 3.8.1"},{"location":"sources/vendor/Broadcom/proxy/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes bluecoat_proxy bluecoat:proxysg:access:syslog netops none bluecoat_proxy_splunkkv bluecoat:proxysg:access:kv netproxy none"},{"location":"sources/vendor/Broadcom/proxy/#setup-and-configuration","title":"Setup and Configuration","text":"
<111>1 $(date)T$(x-bluecoat-hour-utc):$(x-bluecoat-minute-utc):$(x-bluecoat-second-utc)Z $(s-computername) ProxySG - splunk_format - c-ip=$(c-ip) rs-Content-Type=$(quot)$(rs(Content-Type))$(quot)  cs-auth-groups=$(cs-auth-groups) cs-bytes=$(cs-bytes) cs-categories=$(cs-categories) cs-host=$(cs-host) cs-ip=$(cs-ip) cs-method=$(cs-method) cs-uri-port=$(cs-uri-port) cs-uri-scheme=$(cs-uri-scheme) cs-User-Agent=$(quot)$(cs(User-Agent))$(quot) cs-username=$(cs-username) dnslookup-time=$(dnslookup-time) duration=$(duration) rs-status=$(rs-status) rs-version=$(rs-version) s-action=$(s-action) s-ip=$(s-ip) service.name=$(service.name) service.group=$(service.group) s-supplier-ip=$(s-supplier-ip) s-supplier-name=$(s-supplier-name) sc-bytes=$(sc-bytes) sc-filter-result=$(sc-filter-result) sc-status=$(sc-status) time-taken=$(time-taken) x-exception-id=$(x-exception-id) x-virus-id=$(x-virus-id) c-url=$(quot)$(url)$(quot) cs-Referer=$(quot)$(cs(Referer))$(quot) c-cpu=$(c-cpu) connect-time=$(connect-time) cs-auth-groups=$(cs-auth-groups) cs-headerlength=$(cs-headerlength) cs-threat-risk=$(cs-threat-risk) r-ip=$(r-ip) r-supplier-ip=$(r-supplier-ip) rs-time-taken=$(rs-time-taken) rs-server=$(rs(server)) s-connect-type=$(s-connect-type) s-icap-status=$(s-icap-status) s-sitename=$(s-sitename) s-source-port=$(s-source-port) s-supplier-country=$(s-supplier-country) sc-Content-Encoding=$(sc(Content-Encoding)) sr-Accept-Encoding=$(sr(Accept-Encoding)) x-auth-credential-type=$(x-auth-credential-type) x-cookie-date=$(x-cookie-date) x-cs-certificate-subject=$(x-cs-certificate-subject) x-cs-connection-negotiated-cipher=$(x-cs-connection-negotiated-cipher) x-cs-connection-negotiated-cipher-size=$(x-cs-connection-negotiated-cipher-size) x-cs-connection-negotiated-ssl-version=$(x-cs-connection-negotiated-ssl-version) x-cs-ocsp-error=$(x-cs-ocsp-error) x-cs-Referer-uri=$(x-cs(Referer)-uri) x-cs-Referer-uri-address=$(x-cs(Referer)-uri-address) 
x-cs-Referer-uri-extension=$(x-cs(Referer)-uri-extension) x-cs-Referer-uri-host=$(x-cs(Referer)-uri-host) x-cs-Referer-uri-hostname=$(x-cs(Referer)-uri-hostname) x-cs-Referer-uri-path=$(x-cs(Referer)-uri-path) x-cs-Referer-uri-pathquery=$(x-cs(Referer)-uri-pathquery) x-cs-Referer-uri-port=$(x-cs(Referer)-uri-port) x-cs-Referer-uri-query=$(x-cs(Referer)-uri-query) x-cs-Referer-uri-scheme=$(x-cs(Referer)-uri-scheme) x-cs-Referer-uri-stem=$(x-cs(Referer)-uri-stem) x-exception-category=$(x-exception-category) x-exception-category-review-message=$(x-exception-category-review-message) x-exception-company-name=$(x-exception-company-name) x-exception-contact=$(x-exception-contact) x-exception-details=$(x-exception-details) x-exception-header=$(x-exception-header) x-exception-help=$(x-exception-help) x-exception-last-error=$(x-exception-last-error) x-exception-reason=$(x-exception-reason) x-exception-sourcefile=$(x-exception-sourcefile) x-exception-sourceline=$(x-exception-sourceline) x-exception-summary=$(x-exception-summary) x-icap-error-code=$(x-icap-error-code) x-rs-certificate-hostname=$(x-rs-certificate-hostname) x-rs-certificate-hostname-category=$(x-rs-certificate-hostname-category) x-rs-certificate-observed-errors=$(x-rs-certificate-observed-errors) x-rs-certificate-subject=$(x-rs-certificate-subject) x-rs-certificate-validate-status=$(x-rs-certificate-validate-status) x-rs-connection-negotiated-cipher=$(x-rs-connection-negotiated-cipher) x-rs-connection-negotiated-cipher-size=$(x-rs-connection-negotiated-cipher-size) x-rs-connection-negotiated-ssl-version=$(x-rs-connection-negotiated-ssl-version) x-rs-ocsp-error=$(x-rs-ocsp-error) cs-uri-extension=$(cs-uri-extension) cs-uri-path=$(cs-uri-path) cs-uri-query=$(quot)$(cs-uri-query)$(quot) c-uri-pathquery=$(c-uri-pathquery)\n
"},{"location":"sources/vendor/Broadcom/sslva/","title":"SSL Visibility Appliance","text":""},{"location":"sources/vendor/Broadcom/sslva/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/sslva/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://knowledge.broadcom.com/external/article/168879/when-sending-session-logs-from-ssl-visib.html"},{"location":"sources/vendor/Broadcom/sslva/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes broadcom:sslva none"},{"location":"sources/vendor/Broadcom/sslva/#index-configuration","title":"Index Configuration","text":"key index notes broadcom_sslva netproxy none"},{"location":"sources/vendor/Brocade/switch/","title":"Switch","text":""},{"location":"sources/vendor/Brocade/switch/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Brocade/switch/#product-switches","title":"Product - Switches","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Brocade/switch/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes brocade:syslog None"},{"location":"sources/vendor/Brocade/switch/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes brocade_syslog brocade:syslog netops none"},{"location":"sources/vendor/Brocade/switch/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app_parsers/app-vps-brocade_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-brocade_syslog[sc4s-vps] {\n filter { \n        host(\"^test_brocade-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('brocade')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Buffalo/","title":"Terastation","text":""},{"location":"sources/vendor/Buffalo/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Buffalo/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Buffalo/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes buffalo:terastation None"},{"location":"sources/vendor/Buffalo/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes buffalo_terastation buffalo:terastation infraops none"},{"location":"sources/vendor/Buffalo/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-buffalo_terastation.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-buffalo_terastation[sc4s-vps] {\n filter { \n        host(\"^test_buffalo_terastation-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('buffalo')\n            product('terastation')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Checkpoint/firewallos/","title":"Firewall OS","text":"

The Firewall OS format is emitted by devices supporting direct syslog output.

"},{"location":"sources/vendor/Checkpoint/firewallos/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual unknown"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log:fw:syslog None"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_fw cp_log:fw:syslog netops none"},{"location":"sources/vendor/Checkpoint/firewallos/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-checkpoint_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-checkpoint_fw[sc4s-vps] {\n filter { \n        host(\"^checkpoint_fw-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('checkpoint')\n            product('fw')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Checkpoint/logexporter_5424/","title":"Log Exporter (Syslog)","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#key-facts","title":"Key Facts","text":" Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4293 Product Manual https://sc1.checkpoint.com/documents/App_for_Splunk/html_frameset.htm"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log:syslog None"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_syslog cp_log:syslog netops none"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#source-and-index-configuration","title":"Source and Index Configuration","text":"

Checkpoint Software blades with a CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source metadata is left at its default values.

key source index notes checkpoint_syslog_dlp dlp netdlp none checkpoint_syslog_email email email none checkpoint_syslog_firewall firewall netfw none checkpoint_syslog_sessions sessions netops none checkpoint_syslog_web web netproxy none checkpoint_syslog_audit audit netops none checkpoint_syslog_endpoint endpoint netops none checkpoint_syslog_network network netops checkpoint_syslog_ids ids netids checkpoint_syslog_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#source-configuration","title":"Source Configuration","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#splunk-side","title":"Splunk Side","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#checkpoint-side","title":"Checkpoint Side","text":"
  1. Go to the cp terminal and use the expert command to enter expert mode.
  2. Ensure the built-in $EXPORTERDIR shell variable is defined:
echo \"$EXPORTERDIR\"\n
  1. Create a new Log Exporter target in $EXPORTERDIR/targets with:
LOG_EXPORTER_NAME='SyslogToSplunk' # Name this something unique but meaningful\nTARGET_SERVER='example.internal' # The indexer or heavy forwarder to send logs to. Can be an FQDN or an IP address.\nTARGET_PORT='514' # Syslog defaults to 514\nTARGET_PROTOCOL='tcp' # IETF Syslog is specifically TCP\n\ncp_log_export add name \"$LOG_EXPORTER_NAME\" target-server \"$TARGET_SERVER\" target-port \"$TARGET_PORT\" protocol \"$TARGET_PROTOCOL\" format 'syslog'\n
  1. Make a global copy of the built-in Syslog format definition with:
cp \"$EXPORTERDIR/conf/SyslogFormatDefinition.xml\" \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\"\n
  1. Edit $EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml by modifying the start_message_body, fields_separatator, and field_value_separatator keys as shown below. a. Note: The misspelling of \u201cseparator\u201d as \u201cseparatator\u201d is intentional, and is to line up with both Checkpoint\u2019s documentation and parser implementation.
<start_message_body>[sc4s@2620 </start_message_body>\n<!-- ... -->\n<fields_separatator> </fields_separatator>\n<!-- ... -->\n<field_value_separatator>=</field_value_separatator>\n
  1. Copy the new format config to your new target\u2019s conf directory with:
cp \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\"  \"$EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf\"\n
  1. Edit $EXPORTERDIR/targets/$LOG_EXPORTER_NAME/targetConfiguration.xml by adding the reference to the $EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf/SplunkRecommendedFormatDefinition.xml under the key <formatHeaderFile>. a. For example, if $EXPORTERDIR is /opt/CPrt-R81/log_exporter and $LOG_EXPORTER_NAME is SyslogToSplunk, the absolute path will become:
<formatHeaderFile>/opt/CPrt-R81/log_exporter/targets/SyslogToSplunk/conf/SplunkRecommendedFormatDefinition.xml</formatHeaderFile>\n
  1. Restart the new log exporter with:
cp_log_export restart name \"$LOG_EXPORTER_NAME\"\n
  1. Warning: If you\u2019re migrating from the old Splunk Syslog format, make sure the older format\u2019s log exporter is disabled; running both would lead to data duplication.
"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/","title":"Log Exporter (Splunk)","text":"

The \u201cSplunk Format\u201d is legacy and should not be used for new deployments; see Log Exporter (Syslog).

"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#key-facts","title":"Key Facts","text":"

The Splunk host field is derived as follows, using the first rule that matches:

If the host is in the format <host>-v_<bladename>, the bladename is used as the host.
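As an illustration of that rule (not SC4S's actual implementation, which lives in its syslog-ng parsers), the derivation can be sketched in shell; the host name used here is a made-up example:

```shell
# Sketch: derive the Splunk host from a Checkpoint-reported host name.
# If the name matches <host>-v_<bladename>, keep only the blade name.
host='chkpmgmt01-v_fwblade2'   # illustrative input
case "$host" in
  *-v_*) splunk_host="${host##*-v_}" ;;  # strip everything through "-v_"
  *)     splunk_host="$host" ;;          # otherwise keep the name as-is
esac
echo "$splunk_host"   # -> fwblade2
```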

"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4293/ Product Manual https://sc1.checkpoint.com/documents/App_for_Splunk/html_frameset.htm"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log None"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_splunk cp_log netops none"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#source-and-index-configuration","title":"Source and Index Configuration","text":"

Checkpoint Software blades with CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source metadata is left at its default.

key source index notes checkpoint_splunk_dlp dlp netdlp none checkpoint_splunk_email email email none checkpoint_splunk_firewall firewall netfw none checkpoint_splunk_os program:${program} netops none checkpoint_splunk_sessions sessions netops none checkpoint_splunk_web web netproxy none checkpoint_splunk_audit audit netops none checkpoint_splunk_endpoint endpoint netops none checkpoint_splunk_network network netops checkpoint_splunk_ids ids netids checkpoint_splunk_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL no Suppress any duplicate product+loguid pairs processed within 2 seconds of the last matching event SC4S_LISTEN_CHECKPOINT_SPLUNK_OLD_HOST_RULES empty string when set to yes reverts host name selection order to originsicname\u2013>origin_sic_name\u2013>hostname"},{"location":"sources/vendor/Cisco/cisco_ace/","title":"Application Control Engine (ACE)","text":""},{"location":"sources/vendor/Cisco/cisco_ace/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ace/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Cisco/cisco_ace/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ace None"},{"location":"sources/vendor/Cisco/cisco_ace/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ace cisco:ace netops none"},{"location":"sources/vendor/Cisco/cisco_acs/","title":"Cisco Access Control System (ACS)","text":""},{"location":"sources/vendor/Cisco/cisco_acs/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1811/ Product Manual 
https://community.cisco.com/t5/security-documents/acs-5-x-configuring-the-external-syslog-server/ta-p/3143143"},{"location":"sources/vendor/Cisco/cisco_acs/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:acs Aggregation used"},{"location":"sources/vendor/Cisco/cisco_acs/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_acs cisco:acs netauth None"},{"location":"sources/vendor/Cisco/cisco_acs/#splunk-setup-and-configuration","title":"Splunk Setup and Configuration","text":"
EXTRACT-AA-signature = CSCOacs_(?<signature>\\S+):?\n# Note the value of this config is empty to disable\nEXTRACT-AA-syslog_message = \nEXTRACT-acs_message_header2 = ^CSCOacs_\\S+\\s+(?<log_session_id>\\S+)\\s+(?<total_segments>\\d+)\\s+(?<segment_number>\\d+)\\s+(?<acs_message>.*)\n
"},{"location":"sources/vendor/Cisco/cisco_asa/","title":"ASA/FTD (Firepower)","text":""},{"location":"sources/vendor/Cisco/cisco_asa/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_asa/#links","title":"Links","text":"Ref Link Splunk Add-on for ASA (no longer supports FWSM and PIX) https://splunkbase.splunk.com/app/1620/ Cisco eStreamer for Splunk https://splunkbase.splunk.com/app/1629/ Product Manual https://www.cisco.com/c/en/us/support/docs/security/pix-500-series-security-appliances/63884-config-asa-00.html"},{"location":"sources/vendor/Cisco/cisco_asa/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:asa cisco FTD Firepower will also use this source type except those noted below cisco:ftd cisco FTD Firepower will also use this source type except those noted below cisco:fwsm Splunk has cisco:pix cisco PIX will also use this source type except those noted below cisco:firepower:syslog FTD Unified events see https://www.cisco.com/c/en/us/td/docs/security/firepower/Syslogs/b_fptd_syslog_guide.pdf"},{"location":"sources/vendor/Cisco/cisco_asa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_asa cisco:asa netfw none cisco_fwsm cisco:fwsm netfw none cisco_pix cisco:pix netfw none cisco_firepower cisco:firepower:syslog netids none cisco_ftd cisco:ftd netfw none"},{"location":"sources/vendor/Cisco/cisco_asa/#source-setup-and-configuration","title":"Source Setup and Configuration","text":""},{"location":"sources/vendor/Cisco/cisco_dna/","title":"Digital Network Area(DNA)","text":""},{"location":"sources/vendor/Cisco/cisco_dna/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_dna/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_dna/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:dna 
None"},{"location":"sources/vendor/Cisco/cisco_dna/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_dna cisco:dna netops None"},{"location":"sources/vendor/Cisco/cisco_dna/#sc4s-options","title":"SC4S Options","text":"Variable default description SC4S_SOURCE_CISCO_DNA_FIXHOST yes Current firmware incorrectly sends the value of the syslog server host name (the destination) in the host field; if this is ever corrected, this value will need to be set back to no. We suggest using yes"},{"location":"sources/vendor/Cisco/cisco_esa/","title":"Email Security Appliance (ESA)","text":""},{"location":"sources/vendor/Cisco/cisco_esa/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_esa/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1761/ Product Manual https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0.pdf"},{"location":"sources/vendor/Cisco/cisco_esa/#esa-log-configuration","title":"ESA Log Configuration","text":"

If feasible, you can use the following log configuration on the ESA. SC4S can then easily parse the log names configured on the ESA.

ESA Log Name ESA Log Type sc4s_gui_logs HTTP Logs sc4s_mail_logs IronPort Text Mail Logs sc4s_amp AMP Engine Logs sc4s_audit_logs Audit Logs sc4s_antispam Anti-Spam Logs sc4s_content_scanner Content Scanner Logs sc4s_error_logs IronPort Text Mail Logs (Loglevel: Critical) sc4s_system_logs System Logs"},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:esa:http The HTTP logs of Cisco IronPort ESA record information about the secure HTTP services enabled on the interface. cisco:esa:textmail Text mail logs of Cisco IronPort ESA record email information and status. cisco:esa:amp Advanced Malware Protection (AMP) logs of Cisco IronPort ESA record malware detection and blocking, continuous analysis, and retrospective alerting details. cisco:esa:authentication These logs record successful user logins and unsuccessful login attempts. cisco:esa:cef The Consolidated Event Logs summarize each message event in a single log line. cisco:esa:error_logs Error logs of Cisco IronPort ESA record errors that occurred in ESA configurations or internal issues. cisco:esa:content_scanner Content scanner logs of Cisco IronPort ESA record scans of messages that contain password-protected attachments, for malicious activity and data privacy. cisco:esa:antispam Anti-spam logs record the status of the anti-spam scanning feature of your system, including the status of updates to the latest anti-spam rules. Also, any logs related to the Context Adaptive Scanning Engine are logged here. 
cisco:esa:system_logs System logs record the boot information, virtual appliance license expiration alerts, DNS status information, and comments users typed using the commit command."},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_esa cisco:esa:http email None cisco_esa cisco:esa:textmail email None cisco_esa cisco:esa:amp email None cisco_esa cisco:esa:authentication email None cisco_esa cisco:esa:cef email None cisco_esa cisco:esa:error_logs email None cisco_esa cisco:esa:content_scanner email None cisco_esa cisco:esa:antispam email None cisco_esa cisco:esa:system_logs email None"},{"location":"sources/vendor/Cisco/cisco_esa/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_esa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_esa[sc4s-vps] {\n filter { \n        host(\"^esa-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('esa')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Cisco/cisco_imc/","title":"Cisco Integrated Management Controller (IMC)","text":""},{"location":"sources/vendor/Cisco/cisco_imc/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_imc/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ucm None"},{"location":"sources/vendor/Cisco/cisco_imc/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_cimc cisco:infraops infraops None"},{"location":"sources/vendor/Cisco/cisco_ios/","title":"Cisco Networking (IOS and Compatible)","text":"

Cisco network products of multiple types share common logging characteristics; the following types are known to be compatible:

"},{"location":"sources/vendor/Cisco/cisco_ios/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ios/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1467/ IOS Manual https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960/software/release/12-2_55_se/configuration/guide/scg_2960/swlog.html NX-OS Manual https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/system_management/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_System_Management_Configuration_Guide/sm_5syslog.html Cisco ACI https://community.cisco.com/legacyfs/online/attachments/document/technote-aci-syslog_external-v1.pdf Cisco WLC & AP https://www.cisco.com/c/en/us/support/docs/wireless/4100-series-wireless-lan-controllers/107252-WLC-Syslog-Server.html#anc8 Cisco IOS-XR https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/system-monitoring/73x/b-system-monitoring-cg-cisco8k-73x/implementing_system_logging.html"},{"location":"sources/vendor/Cisco/cisco_ios/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ios This source type is also used for NX-OS, ACI and WLC product lines cisco:xr This source type is used for Cisco IOS XR"},{"location":"sources/vendor/Cisco/cisco_ios/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ios cisco:ios netops none cisco_xr cisco:xr netops none"},{"location":"sources/vendor/Cisco/cisco_ios/#filter-type","title":"Filter type","text":""},{"location":"sources/vendor/Cisco/cisco_ios/#setup-and-configuration","title":"Setup and Configuration","text":"

Use this feature only if you want to send raw logs to Splunk (without anything dropped). Set the following property in env_file:

SC4S_ENABLE_CISCO_IOS_RAW_MSG=yes\n
Restart SC4S; it will then send the entire message without dropping anything.

"},{"location":"sources/vendor/Cisco/cisco_ise/","title":"Cisco ise","text":""},{"location":"sources/vendor/Cisco/cisco_ise/#cisco-identity-services-engine-ise","title":"Cisco Identity Services Engine (ISE)","text":""},{"location":"sources/vendor/Cisco/cisco_ise/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ise/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1915/ Product Manual https://www.cisco.com/c/en/us/td/docs/security/ise/syslog/Cisco_ISE_Syslogs/m_IntrotoSyslogs.html"},{"location":"sources/vendor/Cisco/cisco_ise/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ise:syslog Aggregation used"},{"location":"sources/vendor/Cisco/cisco_ise/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ise cisco:ise:syslog netauth None"},{"location":"sources/vendor/Cisco/cisco_meraki/","title":"Cisco meraki","text":""},{"location":"sources/vendor/Cisco/cisco_meraki/#meraki-mr-ms-mx","title":"Meraki (MR, MS, MX)","text":""},{"location":"sources/vendor/Cisco/cisco_meraki/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_meraki/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3018 Product Manual https://documentation.meraki.com/zGeneral_Administration/Monitoring_and_Reporting/Syslog_Server_Overview_and_Configuration"},{"location":"sources/vendor/Cisco/cisco_meraki/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes meraki:accesspoints Not compliant with the Splunk Add-on meraki:securityappliances Not compliant with the Splunk Add-on meraki:switches Not compliant with the Splunk Add-on meraki For all Meraki devices. 
Compliant with the Splunk Add-on"},{"location":"sources/vendor/Cisco/cisco_meraki/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes meraki_accesspoints meraki:accesspoints netfw meraki_securityappliances meraki:securityappliances netfw meraki_switches meraki:switches netfw cisco_meraki meraki netfw"},{"location":"sources/vendor/Cisco/cisco_meraki/#parser-configuration","title":"Parser Configuration","text":"
  1. Either by defining Cisco Meraki hosts:

    #/opt/sc4s/local/config/app_parsers/app-vps-cisco_meraki.conf\n#File name provided is a suggestion it must be globally unique\n\nblock parser app-vps-test-cisco_meraki() {\n    channel {\n        if {\n            filter { host(\"^test-mx-\") };\n            parser { \n                p_set_netsource_fields(\n                    vendor('meraki')\n                    product('securityappliances')\n                ); \n            };\n        } elif {\n            filter { host(\"^test-mr-\") };\n            parser { \n                p_set_netsource_fields(\n                    vendor('meraki')\n                    product('accesspoints')\n                ); \n            };\n        } elif {\n            filter { host(\"^test-ms-\") };\n            parser { \n                p_set_netsource_fields(\n                    vendor('meraki')\n                    product('switches')\n                ); \n            };\n        } else {\n            parser { \n                p_set_netsource_fields(\n                    vendor('cisco')\n                    product('meraki')\n                ); \n            };\n        };\n    }; \n};\n\n\napplication app-vps-test-cisco_meraki[sc4s-vps] {\n    filter {\n        host(\"^test-meraki-\")\n        or host(\"^test-mx-\")\n        or host(\"^test-mr-\")\n        or host(\"^test-ms-\")\n    };\n    parser { app-vps-test-cisco_meraki(); };\n};\n

  2. Or by a unique port:

    # /opt/sc4s/env_file\nSC4S_LISTEN_CISCO_MERAKI_UDP_PORT=5004\nSC4S_LISTEN_MERAKI_SECURITYAPPLIANCES_UDP_PORT=5005\nSC4S_LISTEN_MERAKI_ACCESSPOINTS_UDP_PORT=5006\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT=5007\n

"},{"location":"sources/vendor/Cisco/cisco_mm/","title":"Meeting Management","text":""},{"location":"sources/vendor/Cisco/cisco_mm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_mm/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_mm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:mm:system:* final component take from the program field of the message header cisco:mm:audit Requires setup of vendor product by source see below"},{"location":"sources/vendor/Cisco/cisco_mm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_mm_system cisco:mm:system:* netops None cisco_mm_audit cisco:mm:audit netops None"},{"location":"sources/vendor/Cisco/cisco_mm/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_mm.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_mm[sc4s-vps] {\n filter { \n        host('^test-cmm-')\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('mm')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Cisco/cisco_ms/","title":"Meeting Server","text":""},{"location":"sources/vendor/Cisco/cisco_ms/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ms/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_ms/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ms None"},{"location":"sources/vendor/Cisco/cisco_ms/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ms cisco:ms netops None"},{"location":"sources/vendor/Cisco/cisco_ms/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_ms.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_ms[sc4s-vps] {\n filter { \n        host('^test-cms-')\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('ms')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Cisco/cisco_tvcs/","title":"TelePresence Video Communication Server (TVCS)","text":""},{"location":"sources/vendor/Cisco/cisco_tvcs/#links","title":"Links","text":"Ref Link Product Manual https://www.cisco.com/c/en/us/products/unified-communications/telepresence-video-communication-server-vcs/index.html"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:vcs none"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_tvcs cisco:tvcs main none"},{"location":"sources/vendor/Cisco/cisco_ucm/","title":"Unified Communications Manager (UCM)","text":""},{"location":"sources/vendor/Cisco/cisco_ucm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ucm/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_ucm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ucm None"},{"location":"sources/vendor/Cisco/cisco_ucm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ucm cisco:ucm ucm None"},{"location":"sources/vendor/Cisco/cisco_ucshx/","title":"Unified Computing System (UCS)","text":""},{"location":"sources/vendor/Cisco/cisco_ucshx/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ucshx/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_ucshx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ucs None"},{"location":"sources/vendor/Cisco/cisco_ucshx/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ucs cisco:ucs infraops 
None"},{"location":"sources/vendor/Cisco/cisco_viptela/","title":"Viptela","text":""},{"location":"sources/vendor/Cisco/cisco_viptela/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_viptela/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_viptela/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:viptela None"},{"location":"sources/vendor/Cisco/cisco_viptela/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_viptela cisco:viptela netops None"},{"location":"sources/vendor/Cisco/cisco_wsa/","title":"Web Security Appliance (WSA)","text":""},{"location":"sources/vendor/Cisco/cisco_wsa/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_wsa/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1747/ Product Manual https://www.cisco.com/c/en/us/td/docs/security/wsa/wsa11-7/user_guide/b_WSA_UserGuide_11_7.html"},{"location":"sources/vendor/Cisco/cisco_wsa/#sourcetypes","title":"Sourcetypes","text":"

sourcetype notes cisco:wsa:l4tm The L4TM logs of Cisco IronPort WSA record sites added to the L4TM block and allow lists. cisco:wsa:squid The access logs of Cisco IronPort WSA versions prior to 11.7 record Web Proxy client history in squid format. cisco:wsa:squid:new The access logs of Cisco IronPort WSA versions 11.7 and later record Web Proxy client history in squid format. cisco:wsa:w3c:recommended The access logs of Cisco IronPort WSA versions 12.5 and later record Web Proxy client history in W3C format.

"},{"location":"sources/vendor/Cisco/cisco_wsa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_wsa cisco:wsa:l4tm netproxy None cisco_wsa cisco:wsa:squid netproxy None cisco_wsa cisco:wsa:squid:new netproxy None cisco_wsa cisco:wsa:w3c:recommended netproxy None"},{"location":"sources/vendor/Cisco/cisco_wsa/#filter-type","title":"Filter type","text":"

IP, Netmask or Host

"},{"location":"sources/vendor/Cisco/cisco_wsa/#source-setup-and-configuration","title":"Source Setup and Configuration","text":""},{"location":"sources/vendor/Cisco/cisco_wsa/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_wsa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_wsa[sc4s-vps] {\n filter { \n        host(\"^wsa-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('wsa')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Citrix/netscaler/","title":"Netscaler ADC/SDX","text":""},{"location":"sources/vendor/Citrix/netscaler/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Citrix/netscaler/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2770/ Product Manual https://docs.citrix.com/en-us/citrix-adc/12-1/system/audit-logging/configuring-audit-logging.html"},{"location":"sources/vendor/Citrix/netscaler/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes citrix:netscaler:syslog None citrix:netscaler:appfw None citrix:netscaler:appfw:cef None"},{"location":"sources/vendor/Citrix/netscaler/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes citrix_netscaler citrix:netscaler:syslog netfw none citrix_netscaler citrix:netscaler:appfw netfw none citrix_netscaler citrix:netscaler:appfw:cef netfw none"},{"location":"sources/vendor/Citrix/netscaler/#source-setup-and-configuration","title":"Source Setup and Configuration","text":""},{"location":"sources/vendor/Clearswift/","title":"WAF (Cloud)","text":""},{"location":"sources/vendor/Clearswift/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Clearswift/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://clearswifthelp.clearswift.com/SEG/472/en/Content/Sections/SystemsCenter/SYCLogList.htm"},{"location":"sources/vendor/Clearswift/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes clearswift:${PROGRAM} none"},{"location":"sources/vendor/Clearswift/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes clearswift clearswift:${PROGRAM} email None"},{"location":"sources/vendor/Clearswift/#parser-configuration","title":"Parser Configuration","text":"


"},{"location":"sources/vendor/Clearswift/#optsc4slocalconfigapp-parsersapp-vps-clearswiftconf","title":"/opt/sc4s/local/config/app-parsers/app-vps-clearswift.conf","text":""},{"location":"sources/vendor/Clearswift/#file-name-provided-is-a-suggestion-it-must-be-globally-unique","title":"File name provided is a suggestion it must be globally unique","text":"

application app-vps-clearswift[sc4s-vps] { filter { host("test-clearswift-" type(string) flags(prefix)) }; parser { p_set_netsource_fields( vendor('clearswift') product('clearswift') ); }; };
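As with other sources in this guide, a dedicated listening port can be an alternative to the host filter above. Assuming SC4S's generic SC4S_LISTEN_<KEY>_UDP_PORT convention applies to the clearswift key (an assumption; verify the variable name against your SC4S version), the env_file entry would look like:

```shell
# /opt/sc4s/env_file
# Assumed variable name, following the generic SC4S port-override convention;
# the port number is an example -- pick any free port in your environment.
SC4S_LISTEN_CLEARSWIFT_UDP_PORT=5008
```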

"},{"location":"sources/vendor/Cohesity/cluster/","title":"Cluster","text":""},{"location":"sources/vendor/Cohesity/cluster/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cohesity/cluster/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Cohesity/cluster/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cohesity:cluster:audit None cohesity:cluster:dataprotection None cohesity:api:audit None cohesity:alerts None"},{"location":"sources/vendor/Cohesity/cluster/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cohesity_cluster_audit cohesity:cluster:audit infraops none cohesity_api_audit cohesity:api:audit infraops none cohesity_cluster_dataprotection cohesity:cluster:dataprotection infraops none cohesity_alerts cohesity:alerts infraops none"},{"location":"sources/vendor/CyberArk/epv/","title":"Vendor - CyberArk","text":""},{"location":"sources/vendor/CyberArk/epv/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/CyberArk/epv/#product-epv","title":"Product - EPV","text":"Ref Link Splunk Add-on CyberArk https://splunkbase.splunk.com/app/2891/ Add-on Manual https://docs.splunk.com/Documentation/AddOns/latest/CyberArk/About"},{"location":"sources/vendor/CyberArk/epv/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cyberark:epv:cef None"},{"location":"sources/vendor/CyberArk/epv/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Cyber-Ark_Vault cyberark:epv:cef netauth none"},{"location":"sources/vendor/CyberArk/pta/","title":"PTA","text":""},{"location":"sources/vendor/CyberArk/pta/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/CyberArk/pta/#links","title":"Links","text":"Ref Link Splunk Add-on CyberArk https://splunkbase.splunk.com/app/2891/ Add-on Manual 
https://docs.splunk.com/Documentation/AddOns/latest/CyberArk/About Product Manual https://docs.cyberark.com/PAS/Latest/en/Content/PTA/CEF-Based-Format-Definition.htm"},{"location":"sources/vendor/CyberArk/pta/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cyberark:pta:cef None"},{"location":"sources/vendor/CyberArk/pta/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes CyberArk_PTA cyberark:pta:cef main none"},{"location":"sources/vendor/Cylance/protect/","title":"Protect","text":""},{"location":"sources/vendor/Cylance/protect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cylance/protect/#links","title":"Links","text":"Ref Link Splunk Add-on Cylance https://splunkbase.splunk.com/app/3709/"},{"location":"sources/vendor/Cylance/protect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes syslog_protect Catchall syslog_threat_classification None syslog_audit_log None syslog_exploit None syslog_app_control None syslog_threat None syslog_device None syslog_device_control None syslog_script_control None syslog_optics None"},{"location":"sources/vendor/Cylance/protect/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes cylance_protect syslog_protect epintel none cylance_protect_auditlog syslog_audit_log epintel none cylance_protect_threatclassification syslog_threat_classification epintel none cylance_protect_exploitattempt syslog_exploit epintel none cylance_protect_appcontrol syslog_app_control epintel none cylance_protect_threat syslog_threat epintel none cylance_protect_device syslog_device epintel none cylance_protect_devicecontrol syslog_device_control epintel none cylance_protect_scriptcontrol syslog_protect epintel none cylance_protect_scriptcontrol syslog_script_control epintel none cylance_protect_optics syslog_optics epintel 
none"},{"location":"sources/vendor/DARKTRACE/darktrace/","title":"Darktrace","text":""},{"location":"sources/vendor/DARKTRACE/darktrace/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/DARKTRACE/darktrace/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/DARKTRACE/darktrace/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes darktrace none darktrace:audit none"},{"location":"sources/vendor/DARKTRACE/darktrace/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes darktrace_syslog darktrace netids None darktrace_audit darktrace_audit netids None"},{"location":"sources/vendor/Dell/avamar/","title":"Dell Avamar","text":""},{"location":"sources/vendor/Dell/avamar/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/avamar/#links","title":"Links","text":"Ref Link Splunk Add-on na Add-on Manual https://www.delltechnologies.com/asset/en-us/products/data-protection/technical-support/docu91832.pdf"},{"location":"sources/vendor/Dell/avamar/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:avamar:msc None"},{"location":"sources/vendor/Dell/avamar/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_avamar_cms dell:avamar:msc netops none"},{"location":"sources/vendor/Dell/cmc/","title":"CMC (VRTX)","text":""},{"location":"sources/vendor/Dell/cmc/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/cmc/#links","title":"Links","text":"Ref Link Splunk Add-on na Add-on Manual https://www.dell.com/support/manuals/en-us/dell-chassis-management-controller-v3.10-dell-poweredge-vrtx/cmcvrtx31ug/overview?guid=guid-84595265-d37c-4765-8890-90f629737b17"},{"location":"sources/vendor/Dell/cmc/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:poweredge:cmc:syslog 
None"},{"location":"sources/vendor/Dell/cmc/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_poweredge_cmc dell:poweredge:cmc:syslog infraops none"},{"location":"sources/vendor/Dell/cmc/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-dell_cmc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_cmc[sc4s-vps] {\n filter { \n        host(\"test-dell-cmc-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dell')\n            product('poweredge_cmc')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Dell/emc_powerswitchn/","title":"EMC Powerswitch N Series","text":""},{"location":"sources/vendor/Dell/emc_powerswitchn/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/emc_powerswitchn/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://dl.dell.com/manuals/common/networking_nxxug_en-us.pdf"},{"location":"sources/vendor/Dell/emc_powerswitchn/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:emc:powerswitch:n None"},{"location":"sources/vendor/Dell/emc_powerswitchn/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes dellemc_powerswitch_n all netops none"},{"location":"sources/vendor/Dell/emc_powerswitchn/#parser-configuration","title":"Parser Configuration","text":"
  1. Through sc4s-vps

    #/opt/sc4s/local/config/app-parsers/app-vps-dell_switch_n.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_switch_n[sc4s-vps] {\n filter { \n        host(\"test-dell-switch-n-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dellemc')\n            product('powerswitch_n')\n        ); \n    };   \n};\n

  2. Or through a unique port

    # /opt/sc4s/env_file \nSC4S_LISTEN_DELLEMC_POWERSWITCH_N_UDP_PORT=5005\n

"},{"location":"sources/vendor/Dell/idrac/","title":"iDrac","text":""},{"location":"sources/vendor/Dell/idrac/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/idrac/#links","title":"Links","text":"Ref Link Splunk Add-on na Add-on Manual https://www.dell.com/support/manuals/en-au/dell-opnmang-sw-v8.1/eemi_13g_v1.2-v1/introduction?guid=guid-8f22a1a9-ac01-43d1-a9d2-390ca6708d5e&lang=en-us"},{"location":"sources/vendor/Dell/idrac/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:poweredge:idrac:syslog None"},{"location":"sources/vendor/Dell/idrac/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_poweredge_idrac dell:poweredge:idrac:syslog infraops none"},{"location":"sources/vendor/Dell/rsa_secureid/","title":"RSA SecureID","text":""},{"location":"sources/vendor/Dell/rsa_secureid/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/rsa_secureid/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2958/ Product Manual https://docs.splunk.com/Documentation/AddOns/released/RSASecurID/Aboutthisaddon"},{"location":"sources/vendor/Dell/rsa_secureid/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes rsa:securid:syslog Catchall; used if a more specific source type can not be identified rsa:securid:admin:syslog None rsa:securid:runtime:syslog None nix:syslog None"},{"location":"sources/vendor/Dell/rsa_secureid/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes dell-rsa_secureid all netauth none dell-rsa_secureid_trace rsa:securid:trace netauth none dell-rsa_secureid nix:syslog osnix uses os_nix key of not configured bye host/ip/port"},{"location":"sources/vendor/Dell/rsa_secureid/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app_parsers/app-vps-dell_rsa_secureid.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_rsa_secureid[sc4s-vps] {\n filter { \n        host(\"test_rsasecureid*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dell')\n            product('rsa_secureid')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Dell/sonic/","title":"Dell Networking SONiC","text":""},{"location":"sources/vendor/Dell/sonic/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/sonic/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual link"},{"location":"sources/vendor/Dell/sonic/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:sonic None"},{"location":"sources/vendor/Dell/sonic/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes dell_sonic dell:sonic netops none"},{"location":"sources/vendor/Dell/sonic/#parser-configuration","title":"Parser Configuration","text":"
  1. Through sc4s-vps

    #/opt/sc4s/local/config/app-parsers/app-vps-dell_sonic.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_sonic[sc4s-vps] {\n filter { \n        host(\"sonic\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dell')\n            product('sonic')\n        ); \n    };   \n};\n

  2. Or through a unique port

    # /opt/sc4s/env_file \nSC4S_LISTEN_DELL_SONIC_UDP_PORT=5005\n
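The unique-port listener above can be smoke-tested with a one-line UDP send. This is a minimal sketch, assuming bash on a host that can reach SC4S, the example port 5005 from the env_file above, and placeholder hostname and message values:

```shell
# Minimal smoke test for a unique-port UDP listener; assumes bash,
# the example port 5005 from the env_file above, and a local SC4S.
# Hostname sonic-switch01 and the message text are placeholders.
MSG='<134>Oct 11 22:14:15 sonic-switch01 sonic: sc4s smoke test'
# /dev/udp is a bash redirection pseudo-device; the send is
# fire-and-forget, so success only proves the datagram was emitted.
echo $MSG > /dev/udp/127.0.0.1/5005 || true
echo sent: $MSG
```

Check Splunk for the event afterwards; if it does not arrive, verify the port value in /opt/sc4s/env_file and any firewall rules between the sender and SC4S.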

"},{"location":"sources/vendor/Dell/sonicwall/","title":"Sonicwall","text":""},{"location":"sources/vendor/Dell/sonicwall/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/sonicwall/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/6203/"},{"location":"sources/vendor/Dell/sonicwall/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:sonicwall None"},{"location":"sources/vendor/Dell/sonicwall/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_sonicwall-firewall dell:sonicwall netfw none"},{"location":"sources/vendor/Dell/sonicwall/#options","title":"Options","text":"Variable default description SC4S_DEST_DELL_SONICWALL-FIREWALL_SPLUNK_HEC_FMT JSON Restructure data from vendor format to json for splunk destinations set to \u201cNONE\u201d for native format SC4S_DEST_DELL_SONICWALL-FIREWALL_SYSLOG_FMT SDATA Restructure data from vendor format to SDATA for SYSLOG destinations set to \u201cNONE\u201d for native format"},{"location":"sources/vendor/Dell/sonicwall/#note","title":"Note:","text":"

The sourcetype was changed in version 2.35.0 to make it compliant with the corresponding TA.

"},{"location":"sources/vendor/F5/bigip/","title":"BigIP","text":""},{"location":"sources/vendor/F5/bigip/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/F5/bigip/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2680/ Product Manual unknown"},{"location":"sources/vendor/F5/bigip/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes f5:bigip:syslog None f5:bigip:irule None f5:bigip:ltm:http:irule None f5:bigip:gtm:dns:request:irule None f5:bigip:gtm:dns:response:irule None f5:bigip:ltm:failed:irule None f5:bigip:asm:syslog None f5:bigip:apm:syslog None nix:syslog None f5:bigip:ltm:access_json User defined configuration via irule producing a RFC5424 syslog event with json content within the message field <111>1 2020-05-28T22:48:15Z foo.example.com F5 - access_json - {\"event_type\":\"HTTP_REQUEST\", \"src_ip\":\"10.66.98.41\"} This source type requires a customer specific Splunk Add-on for utility value"},{"location":"sources/vendor/F5/bigip/#index-configuration","title":"Index Configuration","text":"key index notes f5_bigip netops none f5_bigip_irule netops none f5_bigip_asm netwaf none f5_bigip_apm netops none f5_bigip_nix netops if f_f5_bigip is not set the index osnix will be used f5_bigip_access_json netops none"},{"location":"sources/vendor/F5/bigip/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-f5_bigip.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-f5_bigip[sc4s-vps] {\n filter { \n        \"${HOST}\" eq \"f5_bigip\"\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('f5')\n            product('bigip')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/FireEye/cms/","title":"CMS","text":""},{"location":"sources/vendor/FireEye/cms/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/FireEye/cms/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/cms/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fe_cef_syslog"},{"location":"sources/vendor/FireEye/cms/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes FireEye_CMS fe_cef_syslog fireeye"},{"location":"sources/vendor/FireEye/emps/","title":"eMPS","text":""},{"location":"sources/vendor/FireEye/emps/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/FireEye/emps/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/emps/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fe_cef_syslog"},{"location":"sources/vendor/FireEye/emps/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes FireEye_eMPS fe_cef_syslog fireeye"},{"location":"sources/vendor/FireEye/etp/","title":"etp","text":""},{"location":"sources/vendor/FireEye/etp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/FireEye/etp/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/etp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fe_etp source does not provide host name constant \u201cetp.fireeye.com\u201d is use regardless of region"},{"location":"sources/vendor/FireEye/etp/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes FireEye_ETP fe_etp fireeye"},{"location":"sources/vendor/FireEye/hx/","title":"hx","text":""},{"location":"sources/vendor/FireEye/hx/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/FireEye/hx/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/hx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hx_cef_syslog"},{"location":"sources/vendor/FireEye/hx/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes fireeye_hx hx_cef_syslog fireeye"},{"location":"sources/vendor/Forcepoint/","title":"Email Security","text":""},{"location":"sources/vendor/Forcepoint/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Forcepoint/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual none"},{"location":"sources/vendor/Forcepoint/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes forcepoint:email:kv None"},{"location":"sources/vendor/Forcepoint/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes forcepoint_email forcepoint:email:kv email none"},{"location":"sources/vendor/Forcepoint/webprotect/","title":"Webprotect (Websense)","text":""},{"location":"sources/vendor/Forcepoint/webprotect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Forcepoint/webprotect/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2966/ Product Manual http://www.websense.com/content/support/library/web/v85/siem/siem.pdf"},{"location":"sources/vendor/Forcepoint/webprotect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes websense:cg:kv None"},{"location":"sources/vendor/Forcepoint/webprotect/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes forcepoint_webprotect websense:cg:kv netproxy none forcepoint_ websense:cg:kv netproxy if the log is in format of vendor=Forcepoint product= , the key will will be 
forcepoint_random"},{"location":"sources/vendor/Fortinet/fortimail/","title":"FortiWMail","text":""},{"location":"sources/vendor/Fortinet/fortimail/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Fortinet/fortimail/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3249"},{"location":"sources/vendor/Fortinet/fortimail/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fml:<type> type value is determined from the message"},{"location":"sources/vendor/Fortinet/fortimail/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes fortinet_fortimail_<type> fml:<type> email type value is determined from the message"},{"location":"sources/vendor/Fortinet/fortios/","title":"Fortios","text":""},{"location":"sources/vendor/Fortinet/fortios/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Fortinet/fortios/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2846/ Product Manual https://docs.fortinet.com/product/fortigate/6.2"},{"location":"sources/vendor/Fortinet/fortios/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fgt_log Catch-all sourcetype; not used by the TA fgt_traffic None fgt_utm None fgt_event None"},{"location":"sources/vendor/Fortinet/fortios/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes fortinet_fortios_traffic fgt_traffic netfw none fortinet_fortios_utm fgt_utm netfw none fortinet_fortios_event fgt_event netops none fortinet_fortios_log fgt_log netops none"},{"location":"sources/vendor/Fortinet/fortios/#source-setup-and-configuration","title":"Source Setup and Configuration","text":"
config log memory filter\n\nset forward-traffic enable\n\nset local-traffic enable\n\nset sniffer-traffic disable\n\nset anomaly enable\n\nset voip disable\n\nset multicast-traffic enable\n\nset dns enable\n\nend\n\nconfig system global\n\nset cli-audit-log enable\n\nend\n\nconfig log setting\n\nset neighbor-event enable\n\nend\n
"},{"location":"sources/vendor/Fortinet/fortios/#options","title":"Options","text":"Variable default description SC4S_OPTION_FORTINET_SOURCETYPE_PREFIX fgt Notice starting with version 1.6 of the fortinet add-on and app the sourcetype required changes from fgt_* to fortinet_* this is a breaking change to use the new sourcetype set this variable to fortigate in the env_file"},{"location":"sources/vendor/Fortinet/fortiweb/","title":"FortiWeb","text":""},{"location":"sources/vendor/Fortinet/fortiweb/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Fortinet/fortiweb/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4679/ Product Manual https://docs.fortinet.com/product/fortiweb/6.3"},{"location":"sources/vendor/Fortinet/fortiweb/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fgt_log Catch-all sourcetype; not used by the TA fwb_traffic None fwb_attack None fwb_event None"},{"location":"sources/vendor/Fortinet/fortiweb/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes fortinet_fortiweb_traffic fwb_traffic netfw none fortinet_fortiweb_attack fwb_attack netids none fortinet_fortiweb_event fwb_event netops none fortinet_fortiweb_log fwb_log netops none"},{"location":"sources/vendor/Fortinet/fortiweb/#source-setup-and-configuration","title":"Source Setup and Configuration","text":"
config log syslog-policy\n\nedit splunk  \n\nconfig syslog-server-list \n\nedit 1\n\nset server x.x.x.x\n\nset port 514 (Example. Should be the same as default or dedicated port selected for sc4s)   \n\nend\n\nend\n\nconfig log syslogd\n\nset policy splunk\n\nset status enable\n\nend\n
"},{"location":"sources/vendor/GitHub/","title":"Enterprise Server","text":""},{"location":"sources/vendor/GitHub/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/GitHub/#links","title":"Links","text":"Ref Link Splunk Add-on Product Manual"},{"location":"sources/vendor/GitHub/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes github:enterprise:audit The audit logs of GitHub Enterprise server have information about audites actions performed by github user."},{"location":"sources/vendor/GitHub/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes github_ent github:enterprise:audit gitops None"},{"location":"sources/vendor/HAProxy/syslog/","title":"HAProxy","text":""},{"location":"sources/vendor/HAProxy/syslog/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HAProxy/syslog/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3135/"},{"location":"sources/vendor/HAProxy/syslog/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes haproxy:tcp Default syslog format haproxy:splunk:http Splunk\u2019s documented custom format. 
Note: detection is based on client_ip prefix in message"},{"location":"sources/vendor/HAProxy/syslog/#index-configuration","title":"Index Configuration","text":"key index notes haproxy_syslog netlb none"},{"location":"sources/vendor/HPe/ilo/","title":"ILO (4+)","text":""},{"location":"sources/vendor/HPe/ilo/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HPe/ilo/#links","title":"Links","text":""},{"location":"sources/vendor/HPe/ilo/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hpe:ilo none"},{"location":"sources/vendor/HPe/ilo/#index-configuration","title":"Index Configuration","text":"key index notes hpe_ilo infraops none"},{"location":"sources/vendor/HPe/jedirect/","title":"Jedirect","text":""},{"location":"sources/vendor/HPe/jedirect/#jetdirect","title":"JetDirect","text":""},{"location":"sources/vendor/HPe/jedirect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HPe/jedirect/#links","title":"Links","text":"Ref Link"},{"location":"sources/vendor/HPe/jedirect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hpe:jetdirect none"},{"location":"sources/vendor/HPe/jedirect/#index-configuration","title":"Index Configuration","text":"key index notes hpe_jetdirect print none"},{"location":"sources/vendor/HPe/procurve/","title":"Procurve Switch","text":"

HP Procurve switches use multiple log formats.

"},{"location":"sources/vendor/HPe/procurve/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HPe/procurve/#links","title":"Links","text":"Ref Link Switch https://support.hpe.com/hpesc/public/docDisplay?docId=a00091844en_us Switch (A Series) (Flex) https://techhub.hpe.com/eginfolib/networking/docs/switches/12500/5998-4870_nmm_cg/content/378584395.htm"},{"location":"sources/vendor/HPe/procurve/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hpe:procurve none"},{"location":"sources/vendor/HPe/procurve/#index-configuration","title":"Index Configuration","text":"key index notes hpe_procurve netops none"},{"location":"sources/vendor/IBM/datapower/","title":"Data power","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4662/"},{"location":"sources/vendor/IBM/datapower/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ibm:datapower:syslog Common sourcetype ibm:datapower:* * is taken from the event sourcetype"},{"location":"sources/vendor/IBM/datapower/#index-configuration","title":"Index Configuration","text":"key source index notes ibm_datapower na inifraops none"},{"location":"sources/vendor/IBM/datapower/#parser-configuration","title":"Parser Configuration","text":"

Parser configuration is conditional; it is only required if the device produces additional events that do not match the default configuration.

#/opt/sc4s/local/config/app-parsers/app-vps-ibm_datapower.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ibm_datapower[sc4s-vps] {\n filter { \n        host(\"^test-ibmdp-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('ibm')\n            product('datapower')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/ISC/bind/","title":"bind","text":"

This source type is often re-implemented by specific add-ons such as Infoblox or BlueCat. If a more specific source type is desired, see that source's documentation for instructions.

"},{"location":"sources/vendor/ISC/bind/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/ISC/bind/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2876/"},{"location":"sources/vendor/ISC/bind/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes isc:bind none"},{"location":"sources/vendor/ISC/bind/#index-configuration","title":"Index Configuration","text":"key index notes isc_bind isc:bind none"},{"location":"sources/vendor/ISC/dhcpd/","title":"dhcpd","text":"

This source type is often re-implemented by specific add-ons such as Infoblox or BlueCat. If a more specific source type is desired, see that source's documentation for instructions.

"},{"location":"sources/vendor/ISC/dhcpd/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/ISC/dhcpd/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3010/"},{"location":"sources/vendor/ISC/dhcpd/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes isc:dhcp none"},{"location":"sources/vendor/ISC/dhcpd/#index-configuration","title":"Index Configuration","text":"key index notes isc_dhcp isc:dhcp none"},{"location":"sources/vendor/ISC/dhcpd/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/ISC/dhcpd/#options","title":"Options","text":"

None

"},{"location":"sources/vendor/ISC/dhcpd/#verification","title":"Verification","text":"

An active site will generate frequent events; use the following search to check for new events.

Verify that the timestamp and host values match as expected.

index=<asconfigured> (sourcetype=\"isc:dhcp\")\n
"},{"location":"sources/vendor/Imperva/incapusla/","title":"Incapsula","text":""},{"location":"sources/vendor/Imperva/incapusla/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Imperva/incapusla/#links","title":"Links","text":"Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Splunk Add-on Source Specific https://bitbucket.org/SPLServices/ta-cef-imperva-incapsula/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/vendor/Imperva/incapusla/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Imperva/incapusla/#source","title":"Source","text":"sourcetype notes Imperva:Incapsula Common sourcetype"},{"location":"sources/vendor/Imperva/incapusla/#index-configuration","title":"Index Configuration","text":"key source index notes Incapsula_SIEMintegration Imperva:Incapsula netwaf none"},{"location":"sources/vendor/Imperva/waf/","title":"On-Premises WAF (SecureSphere WAF)","text":""},{"location":"sources/vendor/Imperva/waf/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Imperva/waf/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2874/ Product Manual https://community.microfocus.com/dcvta86296/attachments/dcvta86296/partner-documentation-h-o/22/2/Imperva_SecureSphere_11_5_CEF_Config_Guide_2018.pdf"},{"location":"sources/vendor/Imperva/waf/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes imperva:waf none imperva:waf:firewall:cef none imperva:waf:security:cef none"},{"location":"sources/vendor/Imperva/waf/#index-configuration","title":"Index Configuration","text":"key index notes Imperva Inc._SecureSphere netwaf none"},{"location":"sources/vendor/InfoBlox/","title":"NIOS","text":"

Warning: Although the TA indicates that this data source is CIM compliant, all versions of NIOS, including the most recent available as of 2019-12-17, do not support the DNS data model correctly. For DNS security use cases, use Splunk Stream instead.

"},{"location":"sources/vendor/InfoBlox/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/InfoBlox/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2934/ Product Manual https://docs.infoblox.com/display/ILP/NIOS?preview=/8945695/43728387/NIOS_8.4_Admin_Guide.pdf"},{"location":"sources/vendor/InfoBlox/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes infoblox:dns None infoblox:dhcp None infoblox:threatprotect None nix:syslog None"},{"location":"sources/vendor/InfoBlox/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes infoblox_nios_dns infoblox:dns netdns none infoblox_nios_dhcp infoblox:dhcp netipam none infoblox_nios_threatprotect infoblox:threatprotect netids none infoblox_nios_audit infoblox:audit netops none infoblox_nios_fallback infoblox:port netops none"},{"location":"sources/vendor/InfoBlox/#options","title":"Options","text":"Variable default description SC4S_LISTEN_INFOBLOX_NIOS_UDP_PORT empty Vendor specific port SC4S_LISTEN_INFOBLOX_NIOS_TCP_PORT empty Vendor specific port"},{"location":"sources/vendor/InfoBlox/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-infoblox_nios.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-infoblox_nios[sc4s-vps] {\n filter { \n        host(\"infoblox-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('infoblox')\n            product('nios')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Juniper/junos/","title":"JunOS","text":""},{"location":"sources/vendor/Juniper/junos/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Juniper/junos/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2847/ JunOS TechLibrary https://www.juniper.net/documentation/en_US/junos/topics/example/syslog-messages-configuring-qfx-series.html"},{"location":"sources/vendor/Juniper/junos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes juniper:junos:firewall None juniper:junos:firewall:structured None juniper:junos:idp None juniper:junos:idp:structured None juniper:junos:aamw:structured None juniper:junos:secintel:structured None juniper:junos:snmp None"},{"location":"sources/vendor/Juniper/junos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes juniper_junos_legacy juniper:legacy netops none juniper_junos_flow juniper:junos:firewall netfw none juniper_junos_utm juniper:junos:firewall netfw none juniper_junos_firewall juniper:junos:firewall netfw none juniper_junos_ids juniper:junos:firewall netids none juniper_junos_idp juniper:junos:idp netids none juniper_junos_snmp juniper:junos:snmp netops none juniper_junos_structured_fw juniper:junos:firewall:structured netfw none juniper_junos_structured_ids juniper:junos:firewall:structured netids none juniper_junos_structured_utm juniper:junos:firewall:structured netfw none juniper_junos_structured_idp juniper:junos:idp:structured netids none juniper_junos_structured_aamw juniper:junos:aamw:structured netfw none juniper_junos_structured_secintel juniper:junos:secintel:structured netfw none"},{"location":"sources/vendor/Juniper/netscreen/","title":"Netscreen","text":""},{"location":"sources/vendor/Juniper/netscreen/#netscreen","title":"Netscreen","text":""},{"location":"sources/vendor/Juniper/netscreen/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Juniper/netscreen/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2847/ Netscreen Manual http://kb.juniper.net/InfoCenter/index?page=content&id=KB4759"},{"location":"sources/vendor/Juniper/netscreen/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netscreen:firewall None"},{"location":"sources/vendor/Juniper/netscreen/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes juniper_netscreen netscreen:firewall netfw none"},{"location":"sources/vendor/Kaspersky/es/","title":"Enterprise Security RFC5424","text":""},{"location":"sources/vendor/Kaspersky/es/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Kaspersky/es/#links","title":"Links","text":"Ref Link Splunk Add-on non"},{"location":"sources/vendor/Kaspersky/es/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes kaspersky:syslog:es Where PROGRAM starts with KES kaspersky:syslog None"},{"location":"sources/vendor/Kaspersky/es/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes kaspersky_syslog kaspersky:syslog epav none kaspersky_syslog_es kaspersky:syslog:es epav none"},{"location":"sources/vendor/Kaspersky/es_cef/","title":"Enterprise Security CEF","text":"

As of 2022-03-18, the linked TA has its CEF support commented out; manual edits are required.

"},{"location":"sources/vendor/Kaspersky/es_cef/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Kaspersky/es_cef/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4656/"},{"location":"sources/vendor/Kaspersky/es_cef/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes kaspersky:cef kaspersky:klaud kaspersky:klsrv kaspersky:gnrl kaspersky:klnag kaspersky:klprci kaspersky:klbl"},{"location":"sources/vendor/Kaspersky/es_cef/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes KasperskyLab_SecurityCenter all epav none"},{"location":"sources/vendor/Kaspersky/es_leef/","title":"Enterprise Security Leef","text":"

The LEEF format has not been tested; samples are needed.

"},{"location":"sources/vendor/Kaspersky/es_leef/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Kaspersky/es_leef/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4656/"},{"location":"sources/vendor/Kaspersky/es_leef/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes kaspersky:cef kaspersky:klaud kaspersky:klsrv kaspersky:gnrl kaspersky:klnag kaspersky:klprci kaspersky:klbl"},{"location":"sources/vendor/Kaspersky/es_leef/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes KasperskyLab_SecurityCenter all epav none"},{"location":"sources/vendor/Liveaction/liveaction_livenx/","title":"Liveaction - livenx","text":""},{"location":"sources/vendor/Liveaction/liveaction_livenx/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Liveaction/liveaction_livenx/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual None"},{"location":"sources/vendor/Liveaction/liveaction_livenx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes liveaction:livenx none"},{"location":"sources/vendor/Liveaction/liveaction_livenx/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes liveaction_livenx liveaction:livenx netops None"},{"location":"sources/vendor/McAfee/epo/","title":"EPO","text":""},{"location":"sources/vendor/McAfee/epo/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/McAfee/epo/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/5085/ Product Manual https://kc.mcafee.com/corporate/index?page=content&id=KB87927"},{"location":"sources/vendor/McAfee/epo/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes mcafee:epo:syslog none"},{"location":"sources/vendor/McAfee/epo/#source","title":"Source","text":"source notes policy_auditor_vulnerability_assessment Policy Auditor 
Vulnerability Assessment events mcafee_agent McAfee Agent events mcafee_endpoint_security McAfee Endpoint Security events"},{"location":"sources/vendor/McAfee/epo/#index-configuration","title":"Index Configuration","text":"key index notes mcafee_epo epav none"},{"location":"sources/vendor/McAfee/epo/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/McAfee/epo/#options","title":"Options","text":"Variable default description SC4S_LISTEN_MCAFEE_EPO_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_MCAFEE_EPO_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_MCAFEE_EPO_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source SC4S_SOURCE_TLS_ENABLE no This must be set to yes so that SC4S listens for encrypted syslog from ePO"},{"location":"sources/vendor/McAfee/epo/#additional-setup","title":"Additional setup","text":"

You must create a certificate for the SC4S server to receive encrypted syslog from ePO; a self-signed certificate is acceptable. To generate one on the SC4S host:

openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout /opt/sc4s/tls/server.key -out /opt/sc4s/tls/server.pem

Uncomment the following line in /lib/systemd/system/sc4s.service to allow the docker container to use the certificate:

Environment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"

"},{"location":"sources/vendor/McAfee/epo/#troubleshooting","title":"Troubleshooting","text":"

From the command line of the SC4S host, run: openssl s_client -connect localhost:6514

The message:

socket: Bad file descriptor\nconnect:errno=9\n

indicates that SC4S is not listening for encrypted syslog. Note that netstat may show the port as open even though it is not accepting encrypted traffic as configured.

It may take several minutes for the syslog option to be available in the registered servers dropdown.

"},{"location":"sources/vendor/McAfee/nsp/","title":"Network Security Platform","text":""},{"location":"sources/vendor/McAfee/nsp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/McAfee/nsp/#links","title":"Links","text":"Ref Link Product Manual https://docs.mcafee.com/bundle/network-security-platform-10.1.x-product-guide/page/GUID-373C1CA6-EC0E-49E1-8858-749D1AA2716A.html"},{"location":"sources/vendor/McAfee/nsp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes mcafee:nsp none"},{"location":"sources/vendor/McAfee/nsp/#source","title":"Source","text":"source notes mcafee:nsp:alert Alert/Attack Events mcafee:nsp:audit Audit Event or User Activity Events mcafee:nsp:fault Fault Events mcafee:nsp:firewall Firewall Events"},{"location":"sources/vendor/McAfee/nsp/#index-configuration","title":"Index Configuration","text":"key index notes mcafee_nsp netids none"},{"location":"sources/vendor/McAfee/wg/","title":"Wg","text":""},{"location":"sources/vendor/McAfee/wg/#web-gateway","title":"Web Gateway","text":""},{"location":"sources/vendor/McAfee/wg/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/McAfee/wg/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3009/ Product Manual https://kc.mcafee.com/corporate/index?page=content&id=KB77988&actp=RSS"},{"location":"sources/vendor/McAfee/wg/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes mcafee:wg:kv none"},{"location":"sources/vendor/McAfee/wg/#index-configuration","title":"Index Configuration","text":"key index notes mcafee_wg netproxy none"},{"location":"sources/vendor/Microfocus/arcsight/","title":"Arcsight Internal Agent","text":""},{"location":"sources/vendor/Microfocus/arcsight/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Microfocus/arcsight/#links","title":"Links","text":"Ref Link Splunk Add-on CEF 
https://github.com/splunk/splunk-add-on-for-cef/downloads/"},{"location":"sources/vendor/Microfocus/arcsight/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Microfocus/arcsight/#source","title":"Source","text":"source notes ArcSight:ArcSight Internal logs"},{"location":"sources/vendor/Microfocus/arcsight/#index-configuration","title":"Index Configuration","text":"key source index notes ArcSight_ArcSight ArcSight:ArcSight main none"},{"location":"sources/vendor/Microfocus/windows/","title":"Arcsight Microsoft Windows (CEF)","text":""},{"location":"sources/vendor/Microfocus/windows/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Microfocus/windows/#links","title":"Links","text":"Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-microsoft-windows-for-splunk/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/vendor/Microfocus/windows/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Microfocus/windows/#source","title":"Source","text":"source notes CEFEventLog:System or Application Event Windows Application and System Event Logs CEFEventLog:Microsoft Windows Windows Security Event Logs"},{"location":"sources/vendor/Microfocus/windows/#index-configuration","title":"Index Configuration","text":"key source index notes Microsoft_System or Application Event CEFEventLog:System or Application Event oswin none Microsoft_Microsoft Windows CEFEventLog:Microsoft Windows oswinsec none"},{"location":"sources/vendor/Microsoft/","title":"Cloud App Security (MCAS)","text":""},{"location":"sources/vendor/Microsoft/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Microsoft/#links","title":"Links","text":"Ref Link Splunk Add-on CEF 
https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Splunk Add-on Source Specific none Product Manual https://docs.microsoft.com/en-us/cloud-app-security/siem"},{"location":"sources/vendor/Microsoft/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Microsoft/#source","title":"Source","text":"source notes microsoft:cas Common sourcetype"},{"location":"sources/vendor/Microsoft/#index-configuration","title":"Index Configuration","text":"key source index notes MCAS_SIEM_Agent microsoft:cas main none"},{"location":"sources/vendor/Mikrotik/routeros/","title":"RouterOS","text":""},{"location":"sources/vendor/Mikrotik/routeros/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Mikrotik/routeros/#links","title":"Links","text":""},{"location":"sources/vendor/Mikrotik/routeros/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes routeros none"},{"location":"sources/vendor/Mikrotik/routeros/#index-configuration","title":"Index Configuration","text":"key index notes mikrotik_routeros netops none mikrotik_routeros_fw netfw Used for events with forward:"},{"location":"sources/vendor/Mikrotik/routeros/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-mikrotik_routeros.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-mikrotik_routeros[sc4s-vps] {\n filter { \n        host(\"test-mrtros-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('mikrotik')\n            product('routeros')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/NetApp/ontap/","title":"OnTap","text":""},{"location":"sources/vendor/NetApp/ontap/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/NetApp/ontap/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3418/ Product Manual unknown"},{"location":"sources/vendor/NetApp/ontap/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netapp:ems None"},{"location":"sources/vendor/NetApp/ontap/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netapp_ontap netapp:ems infraops none"},{"location":"sources/vendor/NetApp/storage-grid/","title":"StorageGRID","text":""},{"location":"sources/vendor/NetApp/storage-grid/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/NetApp/storage-grid/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3895/ Product Manual unknown"},{"location":"sources/vendor/NetApp/storage-grid/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes grid:auditlog None grid:rest:api None"},{"location":"sources/vendor/NetApp/storage-grid/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netapp_grid grid:auditlog infraops none netapp_grid grid:rest:api infraops none"},{"location":"sources/vendor/NetScout/arbor_edge/","title":"DatAdvantage","text":""},{"location":"sources/vendor/NetScout/arbor_edge/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/NetScout/arbor_edge/#links","title":"Links","text":"Ref Link TA https://github.com/arbor/TA_netscout_aed"},{"location":"sources/vendor/NetScout/arbor_edge/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netscout:aed"},{"location":"sources/vendor/NetScout/arbor_edge/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes NETSCOUT_Arbor Edge Defense netscout:aed netids 
NETSCOUT_Arbor Networks APS netscout:aed netids"},{"location":"sources/vendor/Netmotion/mobilityserver/","title":"Mobility Server","text":""},{"location":"sources/vendor/Netmotion/mobilityserver/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Netmotion/mobilityserver/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual unknown"},{"location":"sources/vendor/Netmotion/mobilityserver/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netmotion:mobilityserver:* The third segment of the source type is constructed from the sdid field of the syslog sdata"},{"location":"sources/vendor/Netmotion/mobilityserver/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netmotion_mobility-server_* netmotion:mobilityserver:* netops none"},{"location":"sources/vendor/Netmotion/reporting/","title":"Reporting","text":""},{"location":"sources/vendor/Netmotion/reporting/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Netmotion/reporting/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual unknown"},{"location":"sources/vendor/Netmotion/reporting/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netmotion:reporting None"},{"location":"sources/vendor/Netmotion/reporting/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netmotion_reporting netmotion:reporting netops none"},{"location":"sources/vendor/Netwrix/endpoint_protector/","title":"Endpoint Protector by CoSoSys","text":""},{"location":"sources/vendor/Netwrix/endpoint_protector/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Netwrix/endpoint_protector/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual na"},{"location":"sources/vendor/Netwrix/endpoint_protector/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netwrix:epp 
None"},{"location":"sources/vendor/Netwrix/endpoint_protector/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netwrix_epp netwrix:epp netops None"},{"location":"sources/vendor/Novell/netiq/","title":"NetIQ","text":""},{"location":"sources/vendor/Novell/netiq/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Novell/netiq/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Novell/netiq/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes novell:netiq none"},{"location":"sources/vendor/Novell/netiq/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes novell_netiq novell_netiq netauth None"},{"location":"sources/vendor/Nutanix/cvm/","title":"Nutanix_CVM_Audit","text":""},{"location":"sources/vendor/Nutanix/cvm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Nutanix/cvm/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Nutanix/cvm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes nutanix:syslog CVM logs nutanix:syslog:audit CVM system audit logs Considering the message host format is default ntnx-xxxx-cvm"},{"location":"sources/vendor/Nutanix/cvm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes nutanix_syslog nutanix:syslog infraops none nutanix_syslog_audit nutanix:syslog:audit infraops none"},{"location":"sources/vendor/Ossec/ossec/","title":"Ossec","text":""},{"location":"sources/vendor/Ossec/ossec/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ossec/ossec/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2808/ Product Manual 
https://www.ossec.net/docs/index.html"},{"location":"sources/vendor/Ossec/ossec/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ossec The add-on supports data from the following sources: File Integrity Management (FIM) data, FTP data, su data, ssh data, Windows data, including audit and logon information"},{"location":"sources/vendor/Ossec/ossec/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ossec_ossec ossec main None"},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/","title":"Cortex XDR","text":""},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2757/"},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pan:* pan:xsoar none"},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/#index-configuration","title":"Index Configuration","text":"key index notes Palo Alto Networks_Palo Alto Networks Cortex XSOAR epintel none"},{"location":"sources/vendor/PaloaltoNetworks/panos/","title":"panos","text":""},{"location":"sources/vendor/PaloaltoNetworks/panos/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/PaloaltoNetworks/panos/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2757/ Product Manual https://docs.paloaltonetworks.com/pan-os/9-0/pan-os-admin/monitoring/use-syslog-for-monitoring/configure-syslog-monitoring.html"},{"location":"sources/vendor/PaloaltoNetworks/panos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pan:log None pan:globalprotect none pan:traffic None pan:threat None pan:system None pan:config None pan:hipmatch None pan:correlation None pan:userid None"},{"location":"sources/vendor/PaloaltoNetworks/panos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes 
pan_panos_log pan:log netops none pan_panos_globalprotect pan:globalprotect netfw none pan_panos_traffic pan:traffic netfw none pan_panos_threat pan:threat netproxy none pan_panos_system pan:system netops none pan_panos_config pan:config netops none pan_panos_hipmatch pan:hipmatch netops none pan_panos_correlation pan:correlation netops none pan_panos_userid pan:userid netauth none"},{"location":"sources/vendor/PaloaltoNetworks/panos/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/PaloaltoNetworks/panos/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/PaloaltoNetworks/panos/#options","title":"Options","text":"Variable default description SC4S_LISTEN_PULSE_PAN_PANOS_RFC6587_PORT empty string Enable a TCP using IETF Framing (RFC6587) port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_PAN_PANOS_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_PAN_PANOS_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_PAN_PANOS_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/vendor/PaloaltoNetworks/panos/#verification","title":"Verification","text":"

An active firewall will generate frequent events. Use the following search to validate that events are present for each source device:

index=<asconfigured> sourcetype=pan:*| stats count by host\n
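The listener options described above are set in the SC4S env_file. A minimal sketch, assuming the default /opt/sc4s/env_file location and an arbitrary example port of 5514 (both are assumptions; pick values for your environment):

```shell
# /opt/sc4s/env_file (sketch; 5514 is an example port, not a default)
SC4S_LISTEN_PAN_PANOS_TCP_PORT=5514
SC4S_DEST_PAN_PANOS_ARCHIVE=no
```

Restart SC4S after changing the env_file so the dedicated listener port takes effect.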
"},{"location":"sources/vendor/PaloaltoNetworks/prisma/","title":"Prisma SD-WAN ION","text":""},{"location":"sources/vendor/PaloaltoNetworks/prisma/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on none Product Manual https://docs.paloaltonetworks.com/prisma/prisma-sd-wan/prisma-sd-wan-admin/prisma-sd-wan-sites-and-devices/use-external-services-for-monitoring/syslog-server-support-in-prisma-sd-wan Product Manual https://docs.paloaltonetworks.com/prisma/prisma-sd-wan/prisma-sd-wan-admin/prisma-sd-wan-sites-and-devices/use-external-services-for-monitoring/syslog-server-support-in-prisma-sd-wan/syslog-flow-export"},{"location":"sources/vendor/PaloaltoNetworks/prisma/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes prisma:sd-wan:flow prisma:sd-wan:authentication prisma:sd-wan:event"},{"location":"sources/vendor/PaloaltoNetworks/prisma/#index-configuration","title":"Index Configuration","text":"key index notes prisma_sd-wan_flow netwaf none prisma_sd-wan_authentication netwaf none prisma_sd-wan_event netwaf none"},{"location":"sources/vendor/PaloaltoNetworks/traps/","title":"Traps","text":""},{"location":"sources/vendor/PaloaltoNetworks/traps/#traps","title":"TRAPS","text":""},{"location":"sources/vendor/PaloaltoNetworks/traps/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/PaloaltoNetworks/traps/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2757/"},{"location":"sources/vendor/PaloaltoNetworks/traps/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pan:traps4 none"},{"location":"sources/vendor/PaloaltoNetworks/traps/#index-configuration","title":"Index Configuration","text":"key index notes Palo Alto Networks_Traps Agent epintel none"},{"location":"sources/vendor/Pfsense/firewall/","title":"Firewall","text":""},{"location":"sources/vendor/Pfsense/firewall/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Pfsense/firewall/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1527/ Product Manual https://docs.netgate.com/pfsense/en/latest/monitoring/copying-logs-to-a-remote-host-with-syslog.html?highlight=syslog"},{"location":"sources/vendor/Pfsense/firewall/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pfsense:filterlog None pfsense:* All programs other than filterlog"},{"location":"sources/vendor/Pfsense/firewall/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes pfsense pfsense netops none pfsense_filterlog pfsense:filterlog netfw none"},{"location":"sources/vendor/Pfsense/firewall/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-pfsense_firewall.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-pfsense_firewall[sc4s-vps] {\n filter { \n        \"${HOST}\" eq \"pfsense_firewall\"\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('pfsense')\n            product('firewall')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Polycom/rprm/","title":"RPRM","text":""},{"location":"sources/vendor/Polycom/rprm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Polycom/rprm/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual unknown"},{"location":"sources/vendor/Polycom/rprm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes polycom:rprm:syslog"},{"location":"sources/vendor/Polycom/rprm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes polycom_rprm polycom:rprm:syslog netops none"},{"location":"sources/vendor/Powertech/interact/","title":"PowerTech Interact","text":""},{"location":"sources/vendor/Powertech/interact/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Powertech/interact/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Powertech/interact/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes PowerTech:Interact:cef CEF"},{"location":"sources/vendor/Powertech/interact/#source","title":"Source","text":"source notes PowerTech:Interact:cef None"},{"location":"sources/vendor/Powertech/interact/#index-configuration","title":"Index Configuration","text":"key source index notes PowerTech_Interact PowerTech:Interact netops none"},{"location":"sources/vendor/Proofpoint/","title":"Proofpoint Protection Server","text":""},{"location":"sources/vendor/Proofpoint/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Proofpoint/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3080/ Product Manual https://proofpointcommunities.force.com/community/s/article/Remote-Syslog-Forwarding"},{"location":"sources/vendor/Proofpoint/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pps_filter_log pps_mail_log This sourcetype will conflict with sendmail itself, so will require that the PPS send syslog on a dedicated port or be 
uniquely identifiable with a hostname glob or CIDR block if this sourcetype is desired for PPS."},{"location":"sources/vendor/Proofpoint/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes proofpoint_pps_filter pps_filter_log email none proofpoint_pps_sendmail pps_mail_log email none"},{"location":"sources/vendor/Proofpoint/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-proofpoint_pps.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-proofpoint_pps[sc4s-vps] {\n filter { \n        host(\"pps-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('proofpoint')\n            product('pps')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Pulse/connectsecure/","title":"Pulse","text":""},{"location":"sources/vendor/Pulse/connectsecure/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Pulse/connectsecure/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3852/ JunOS TechLibrary https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring%20Syslog.htm"},{"location":"sources/vendor/Pulse/connectsecure/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pulse:connectsecure None pulse:connectsecure:web None"},{"location":"sources/vendor/Pulse/connectsecure/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes pulse_connect_secure pulse:connectsecure netfw none pulse_connect_secure_web pulse:connectsecure:web netproxy none"},{"location":"sources/vendor/PureStorage/array/","title":"Array","text":""},{"location":"sources/vendor/PureStorage/array/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/PureStorage/array/#links","title":"Links","text":"Ref Link Splunk Add-on None note TA published on Splunk base does not include syslog extractions Product Manual"},{"location":"sources/vendor/PureStorage/array/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes purestorage:array purestorage:array:${class} This type is generated from the message"},{"location":"sources/vendor/PureStorage/array/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes purestorage_array purestorage:array infraops None purestorage_array_${class} purestorage:array:class infraops class is extracted as the string following \u201cpurity.\u201d"},{"location":"sources/vendor/Qumulo/storage/","title":"Storage","text":""},{"location":"sources/vendor/Qumulo/storage/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Qumulo/storage/#links","title":"Links","text":"Ref Link Splunk Add-on none"},{"location":"sources/vendor/Qumulo/storage/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes qumulo:storage None"},{"location":"sources/vendor/Qumulo/storage/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes qumulo_storage qumulo:storage infraops none"},{"location":"sources/vendor/Radware/defensepro/","title":"DefensePro","text":""},{"location":"sources/vendor/Radware/defensepro/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Radware/defensepro/#links","title":"Links","text":"Ref Link Splunk Add-on Note this add-on does not provide functional extractions https://splunkbase.splunk.com/app/4480/ Product Manual https://www.radware.com/products/defensepro/"},{"location":"sources/vendor/Radware/defensepro/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes radware:defensepro Note some events do not contain host"},{"location":"sources/vendor/Radware/defensepro/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes radware_defensepro radware:defensepro netops none"},{"location":"sources/vendor/Raritan/dsx/","title":"DSX","text":""},{"location":"sources/vendor/Raritan/dsx/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Raritan/dsx/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual https://www.raritan.com/products/kvm-serial/serial-console-servers/serial-over-ip-console-server"},{"location":"sources/vendor/Raritan/dsx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes raritan:dsx Note events do not contain host"},{"location":"sources/vendor/Raritan/dsx/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes raritan_dsx raritan:dsx infraops 
none"},{"location":"sources/vendor/Raritan/dsx/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-raritan_dsx.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-raritan_dsx[sc4s-vps] {\n filter { \n        host(\"raritan_dsx*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('raritan')\n            product('dsx')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Ricoh/mfp/","title":"MFP","text":""},{"location":"sources/vendor/Ricoh/mfp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ricoh/mfp/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Ricoh/mfp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ricoh:mfp None"},{"location":"sources/vendor/Ricoh/mfp/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ricoh_syslog ricoh:mfp printer none"},{"location":"sources/vendor/Ricoh/mfp/#sc4s-options","title":"SC4S Options","text":"Variable default description SC4S_SOURCE_RICOH_SYSLOG_FIXHOST yes Current firmware incorrectly sends the value of HOST in the program field if this is ever corrected this value will need to be set back to no we suggest using yes"},{"location":"sources/vendor/Riverbed/","title":"Syslog","text":"

Used when a more specific SteelHead or SteelConnect source cannot be identified

"},{"location":"sources/vendor/Riverbed/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Riverbed/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Riverbed/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes riverbed:syslog None"},{"location":"sources/vendor/Riverbed/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes riverbed_syslog riverbed:syslog netops none riverbed_syslog_nix_syslog nix:syslog osnix none"},{"location":"sources/vendor/Riverbed/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter {      \n        host(....)\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('riverbed')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Riverbed/steelconnect/","title":"Steelconnect","text":""},{"location":"sources/vendor/Riverbed/steelconnect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Riverbed/steelconnect/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Riverbed/steelconnect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes riverbed:steelconnect None"},{"location":"sources/vendor/Riverbed/steelconnect/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes riverbed_syslog_steelconnect riverbed:steelconnect netops none"},{"location":"sources/vendor/Riverbed/steelhead/","title":"SteelHead","text":""},{"location":"sources/vendor/Riverbed/steelhead/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Riverbed/steelhead/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Riverbed/steelhead/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes riverbed:steelhead None"},{"location":"sources/vendor/Riverbed/steelhead/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes riverbed_syslog_steelhead riverbed:steelhead netops none"},{"location":"sources/vendor/Riverbed/steelhead/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter {      \n        host(....)\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('riverbed')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Ruckus/SmartZone/","title":"Smart Zone","text":"

Some events may not match the source format. Please report any issues found.

"},{"location":"sources/vendor/Ruckus/SmartZone/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ruckus/SmartZone/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Ruckus/SmartZone/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ruckus:smartzone None"},{"location":"sources/vendor/Ruckus/SmartZone/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ruckus_smartzone ruckus:smartzone netops none"},{"location":"sources/vendor/Schneider/apc/","title":"APC Power systems","text":""},{"location":"sources/vendor/Schneider/apc/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Schneider/apc/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual multiple"},{"location":"sources/vendor/Schneider/apc/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes apc:syslog None"},{"location":"sources/vendor/Schneider/apc/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes schneider_apc apc:syslog main none"},{"location":"sources/vendor/Schneider/apc/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-schneider_apc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-schneider_apc[sc4s-vps] {\n filter { \n        host(\"test_apc-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('schneider')\n            product('apc')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/","title":"SecureAuth IdP","text":""},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3008"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes secureauth:idp none"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes secureauth_idp secureauth:idp netops None"},{"location":"sources/vendor/Semperis/DSP/","title":"Semperis DSP","text":""},{"location":"sources/vendor/Semperis/DSP/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Semperis/DSP/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Semperis/DSP/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes semperis:dsp none"},{"location":"sources/vendor/Semperis/DSP/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes semperis_dsp semperis:dsp netops None"},{"location":"sources/vendor/Solace/evenbroker/","title":"EventBroker","text":""},{"location":"sources/vendor/Solace/evenbroker/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Solace/evenbroker/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Solace/evenbroker/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes solace:eventbroker None"},{"location":"sources/vendor/Solace/evenbroker/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes solace_eventbroker solace:eventbroker main none"},{"location":"sources/vendor/Sophos/Firewall/","title":"Web 
Appliance","text":""},{"location":"sources/vendor/Sophos/Firewall/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Sophos/Firewall/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/6187/ Product Manual unknown"},{"location":"sources/vendor/Sophos/Firewall/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes sophos:xg:atp None sophos:xg:anti_spam None sophos:xg:anti_virus None sophos:xg:content_filtering None sophos:xg:event None sophos:xg:firewall None sophos:xg:ssl None sophos:xg:sandbox None sophos:xg:system_health None sophos:xg:heartbeat None sophos:xg:waf None sophos:xg:wireless_protection None sophos:xg:idp None"},{"location":"sources/vendor/Sophos/Firewall/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes sophos_xg_atp sophos:xg:atp netdlp none sophos_xg_anti_spam sophos:xg:anti_spam netdlp none sophos_xg_anti_virus sophos:xg:anti_virus netdlp none sophos_xg_content_filtering sophos:xg:content_filtering netdlp none sophos_xg_event sophos:xg:event netdlp none sophos_xg_firewall sophos:xg:firewall netdlp none sophos_xg_ssl sophos:xg:ssl netdlp none sophos_xg_sandbox sophos:xg:sandbox netdlp none sophos_xg_system_health sophos:xg:system_health netdlp none sophos_xg_heartbeat sophos:xg:heartbeat netdlp none sophos_xg_waf sophos:xg:waf netdlp none sophos_xg_wireless_protection sophos:xg:wireless_protection netdlp none sophos_xg_idp sophos:xg:idp netdlp none"},{"location":"sources/vendor/Sophos/webappliance/","title":"Web Appliance","text":""},{"location":"sources/vendor/Sophos/webappliance/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Sophos/webappliance/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Sophos/webappliance/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes sophos:webappliance 
None"},{"location":"sources/vendor/Sophos/webappliance/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes sophos_webappliance sophos:webappliance netproxy none"},{"location":"sources/vendor/Sophos/webappliance/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-sophos_webappliance.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-sophos_webappliance[sc4s-vps] {\n filter { \n        host(\"test-sophos-webapp-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('sophos')\n            product('webappliance')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Spectracom/","title":"NTP Appliance","text":""},{"location":"sources/vendor/Spectracom/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Spectracom/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Spectracom/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes spectracom:ntp None nix:syslog None"},{"location":"sources/vendor/Spectracom/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes spectracom_ntp spectracom:ntp netops none"},{"location":"sources/vendor/Spectracom/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-spectracom_ntp.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-spectracom_ntp[sc4s-vps] {\n filter { \n        netmask(169.254.100.1/24)\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('spectracom')\n            product('ntp')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/","title":"Splunk Heavy Forwarder","text":"

In certain network architectures, such as those using data diodes or networks requiring \u201cin the clear\u201d inspection at network egress, SC4S can accept specially formatted output from Splunk as RFC 5424 syslog.

"},{"location":"sources/vendor/Splunk/heavyforwarder/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Splunk/heavyforwarder/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Splunk/heavyforwarder/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes spectracom:ntp None nix:syslog None"},{"location":"sources/vendor/Splunk/heavyforwarder/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"

Index, source, and sourcetype are used as determined by the sending source/heavy forwarder (HWF).

"},{"location":"sources/vendor/Splunk/heavyforwarder/#splunk-configuration","title":"Splunk Configuration","text":""},{"location":"sources/vendor/Splunk/heavyforwarder/#outputsconf","title":"outputs.conf","text":"
#Because the audit trail is protected and can't be transformed, we cannot use the default group; we must use TCP routing\n[tcpout]\ndefaultGroup = NoForwarding\n\n[tcpout:nexthop]\nserver = localhost:9000\nsendCookedData = false\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#propsconf","title":"props.conf","text":"
[default]\nADD_EXTRA_TIME_FIELDS = none\nANNOTATE_PUNCT = false\nSHOULD_LINEMERGE = false\nTRANSFORMS-zza-syslog = syslog_canforward, metadata_meta,  metadata_source, metadata_sourcetype, metadata_index, metadata_host, metadata_subsecond, metadata_time, syslog_prefix, syslog_drop_zero\n# The following applies for TCP destinations where the IETF frame is required\nTRANSFORMS-zzz-syslog = syslog_octal, syslog_octal_append\n# Comment out the above and uncomment the following for UDP\n#TRANSFORMS-zzz-syslog-udp = syslog_octal, syslog_octal_append, syslog_drop_zero\n\n[audittrail]\n# We can't transform this source type; it's protected\nTRANSFORMS-zza-syslog =\nTRANSFORMS-zzz-syslog =\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#transformsconf","title":"transforms.conf","text":"
[syslog_canforward]\nREGEX = ^.(?!audit)\nDEST_KEY = _TCP_ROUTING\nFORMAT = nexthop\n\n[metadata_meta]\nSOURCE_KEY = _meta\nREGEX = (?ims)(.*)\nFORMAT = ~~~SM~~~$1~~~EM~~~$0 \nDEST_KEY = _raw\n\n[metadata_source]\nSOURCE_KEY = MetaData:Source\nREGEX = ^source::(.*)$\nFORMAT = s=\"$1\"] $0\nDEST_KEY = _raw\n\n[metadata_sourcetype]\nSOURCE_KEY = MetaData:Sourcetype\nREGEX = ^sourcetype::(.*)$\nFORMAT = st=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_index]\nSOURCE_KEY = _MetaData:Index\nREGEX = (.*)\nFORMAT = i=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_host]\nSOURCE_KEY = MetaData:Host\nREGEX = ^host::(.*)$\nFORMAT = \" h=\"$1\" $0\nDEST_KEY = _raw\n\n[syslog_prefix]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT = <1>1 - - SPLUNK - COOKED [fields@274489 $0\nDEST_KEY = _raw\n\n[metadata_time]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT =  t=\"$1$0\nDEST_KEY = _raw\n\n[metadata_subsecond]\nSOURCE_KEY = _meta\nREGEX = \\_subsecond\\:\\:(\\.\\d+)\nFORMAT = $1 $0\nDEST_KEY = _raw\n\n[syslog_octal]\nINGEST_EVAL= mlen=length(_raw)+1\n\n[syslog_octal_append]\nINGEST_EVAL = _raw=mlen + \" \" + _raw\n\n[syslog_drop_zero]\nINGEST_EVAL = queue=if(mlen<10,\"nullQueue\",queue)\n
"},{"location":"sources/vendor/Splunk/sc4s/","title":"Splunk Connect for Syslog (SC4S)","text":""},{"location":"sources/vendor/Splunk/sc4s/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Splunk/sc4s/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4740/ Product Manual https://splunk-connect-for-syslog.readthedocs.io/en/latest/"},{"location":"sources/vendor/Splunk/sc4s/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes sc4s:events Internal events from the SC4S container and underlying syslog-ng process sc4s:metrics syslog-ng operational metrics that will be delivered directly to a metrics index in Splunk"},{"location":"sources/vendor/Splunk/sc4s/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes splunk_sc4s_events all main none splunk_sc4s_metrics all _metrics none splunk_sc4s_fallback all main none"},{"location":"sources/vendor/Splunk/sc4s/#filter-type","title":"Filter type","text":"

SC4S events and metrics are generated automatically and no specific ports or filters need to be configured for the collection of this data.

"},{"location":"sources/vendor/Splunk/sc4s/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/Splunk/sc4s/#options","title":"Options","text":"Variable default description SC4S_DEST_SPLUNK_SC4S_METRICS_HEC multi2 event produce metrics as plain text events; single produce metrics using Splunk Enterprise 7.3 single metrics format; multi produce metrics using Splunk Enterprise >8.1 multi metric format; multi2 produces improved (reduced resource consumption) multi metric format SC4S_SOURCE_MARK_MESSAGE_NULLQUEUE yes (yes"},{"location":"sources/vendor/Splunk/sc4s/#verification","title":"Verification","text":"

SC4S generates versioning events at startup. These startup events can be used to validate that HEC is set up properly on the Splunk side.

index=<asconfigured> sourcetype=sc4s:events | stats count by host\n

Metrics can be observed via the \u201cAnalytics\u2013>Metrics\u201d navigation in the Search and Reporting app in Splunk.

"},{"location":"sources/vendor/StealthWatch/StealthIntercept/","title":"Stealth Intercept","text":""},{"location":"sources/vendor/StealthWatch/StealthIntercept/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/StealthWatch/StealthIntercept/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4609/ Product Manual unknown"},{"location":"sources/vendor/StealthWatch/StealthIntercept/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes StealthINTERCEPT None StealthINTERCEPT:alerts SC4S Format Shifts to JSON override template to t_msg_hdr for original raw"},{"location":"sources/vendor/StealthWatch/StealthIntercept/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes stealthbits_stealthintercept StealthINTERCEPT netids none stealthbits_stealthintercept_alerts StealthINTERCEPT:alerts netids Note TA does not support this source type"},{"location":"sources/vendor/Tanium/platform/","title":"Platform","text":"

This source requires a TLS connection; in most cases enabling TLS and using the default port 6514 is adequate. The source is understood to require a valid certificate.
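
As a minimal sketch, assuming the standard SC4S env_file variable name (verify against your SC4S version and deployment), TLS on the default port can be enabled before pointing Tanium at SC4S:

#/opt/sc4s/env_file (sketch; the port value is an example)\nSC4S_LISTEN_DEFAULT_TLS_PORT=6514\n

Place the server certificate and key in the TLS mount directory configured for the SC4S container so that clients validating certificates can connect.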

"},{"location":"sources/vendor/Tanium/platform/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tanium/platform/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4439/"},{"location":"sources/vendor/Tanium/platform/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tanium none"},{"location":"sources/vendor/Tanium/platform/#index-configuration","title":"Index Configuration","text":"key index notes tanium_syslog epintel none"},{"location":"sources/vendor/Tenable/ad/","title":"ad","text":""},{"location":"sources/vendor/Tenable/ad/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tenable/ad/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4060/ Product Manual"},{"location":"sources/vendor/Tenable/ad/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tenable:ad:alerts None"},{"location":"sources/vendor/Tenable/ad/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes tenable_ad tenable:ad:alerts oswinsec none"},{"location":"sources/vendor/Tenable/nnm/","title":"nnm","text":""},{"location":"sources/vendor/Tenable/nnm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tenable/nnm/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4060/ Product Manual https://docs.tenable.com/integrations/Splunk/Content/Splunk2/ProcessWorkflow.htm"},{"location":"sources/vendor/Tenable/nnm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tenable:nnm:vuln None"},{"location":"sources/vendor/Tenable/nnm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes tenable_nnm tenable:nnm:vuln netfw none"},{"location":"sources/vendor/Thales/thales_vormetric/","title":"Thales Vormetric Data Security 
Platform","text":""},{"location":"sources/vendor/Thales/thales_vormetric/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Thales/thales_vormetric/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual link"},{"location":"sources/vendor/Thales/thales_vormetric/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes thales:vormetric None"},{"location":"sources/vendor/Thales/thales_vormetric/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes thales_vormetric thales:vormetric netauth None"},{"location":"sources/vendor/Thycotic/secretserver/","title":"Secret Server","text":""},{"location":"sources/vendor/Thycotic/secretserver/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Thycotic/secretserver/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4060/ Product Manual"},{"location":"sources/vendor/Thycotic/secretserver/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes thycotic:syslog None"},{"location":"sources/vendor/Thycotic/secretserver/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes Thycotic Software_Secret Server thycotic:syslog netauth none"},{"location":"sources/vendor/Tintri/syslog/","title":"Syslog","text":""},{"location":"sources/vendor/Tintri/syslog/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tintri/syslog/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Tintri/syslog/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tintri none"},{"location":"sources/vendor/Tintri/syslog/#index-configuration","title":"Index Configuration","text":"key index notes tintri_syslog infraops none"},{"location":"sources/vendor/Trellix/cms/","title":"Trellix CMS","text":""},{"location":"sources/vendor/Trellix/cms/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Trellix/cms/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Trellix/cms/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes trellix:cms CEF"},{"location":"sources/vendor/Trellix/cms/#source","title":"Source","text":"source notes trellix:cms None"},{"location":"sources/vendor/Trellix/cms/#index-configuration","title":"Index Configuration","text":"key source index notes trellix_cms trellix:cms netops none"},{"location":"sources/vendor/Trellix/mps/","title":"Trellix MPS","text":""},{"location":"sources/vendor/Trellix/mps/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Trellix/mps/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Trellix/mps/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes trellix:mps CEF"},{"location":"sources/vendor/Trellix/mps/#source","title":"Source","text":"source notes trellix:mps None"},{"location":"sources/vendor/Trellix/mps/#index-configuration","title":"Index Configuration","text":"key source index notes trellix_mps trellix:mps netops none"},{"location":"sources/vendor/Trend/deepsecurity/","title":"Deep Security","text":""},{"location":"sources/vendor/Trend/deepsecurity/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Trend/deepsecurity/#links","title":"Links","text":"Ref Link Splunk Add-on CEF https://splunkbase.splunk.com/app/1936/"},{"location":"sources/vendor/Trend/deepsecurity/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes deepsecurity-system_events deepsecurity-intrusion_prevention deepsecurity-integrity_monitoring deepsecurity-log_inspection deepsecurity-web_reputation deepsecurity-firewall deepsecurity-antimalware deepsecurity-app_control"},{"location":"sources/vendor/Trend/deepsecurity/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Trend Micro_Deep Security Agent deepsecurity epintel Used 
only if a correct source type is not matched Trend Micro_Deep Security Agent_intrusion prevention deepsecurity-intrusion_prevention epintel Trend Micro_Deep Security Agent_integrity monitoring deepsecurity-integrity_monitoring epintel Trend Micro_Deep Security Agent_log inspection deepsecurity-log_inspection epintel Trend Micro_Deep Security Agent_web reputation deepsecurity-web_reputation epintel Trend Micro_Deep Security Agent_firewall deepsecurity-firewall epintel Trend Micro_Deep Security Agent_antimalware deepsecurity-antimalware epintel Trend Micro_Deep Security Agent_app control deepsecurity-app_control epintel Trend Micro_Deep Security Manager deepsecurity-system_events epintel"},{"location":"sources/vendor/Ubiquiti/unifi/","title":"Unifi","text":"

All Ubiquiti UniFi firewalls, switches, and access points share a common syslog configuration via the NMS.

"},{"location":"sources/vendor/Ubiquiti/unifi/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ubiquiti/unifi/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4107/ Product Manual https://help.ubnt.com/"},{"location":"sources/vendor/Ubiquiti/unifi/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ubnt Used when no sub source type is required by the add-on ubnt:fw USG events ubnt:threat USG IDS events ubnt:switch UniFi switches ubnt:wireless Access Point logs"},{"location":"sources/vendor/Ubiquiti/unifi/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ubiquiti_unifi ubnt netops none ubiquiti_unifi_fw ubnt:fw netfw none"},{"location":"sources/vendor/Ubiquiti/unifi/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-ubiquiti_unifi_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ubiquiti_unifi_fw[sc4s-vps] {\n filter { \n        host(\"usg-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('ubiquiti')\n            product('unifi')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/VMWare/airwatch/","title":"Airwatch","text":"

AirWatch provides enterprise mobility management (EMM) software and standalone management systems for content, applications, and email.

"},{"location":"sources/vendor/VMWare/airwatch/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/airwatch/#links","title":"Links","text":"Ref Link Product Manual https://docs.vmware.com/en/VMware-Workspace-ONE/index.html"},{"location":"sources/vendor/VMWare/airwatch/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:airwatch None"},{"location":"sources/vendor/VMWare/airwatch/#index-configuration","title":"Index Configuration","text":"key index notes vmware_airwatch epintel none"},{"location":"sources/vendor/VMWare/carbonblack/","title":"Carbon Black Protection","text":""},{"location":"sources/vendor/VMWare/carbonblack/#rfc-5424-format","title":"RFC 5424 Format","text":""},{"location":"sources/vendor/VMWare/carbonblack/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/carbonblack/#links","title":"Links","text":"Ref Link Splunk Add-on none"},{"location":"sources/vendor/VMWare/carbonblack/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:cb:protect Common sourcetype"},{"location":"sources/vendor/VMWare/carbonblack/#source","title":"Source","text":"source notes carbonblack:protection:cef Note this method of onboarding is not recommended for a more complete experience utilize the json format supported by he product with hec or s3"},{"location":"sources/vendor/VMWare/carbonblack/#index-configuration","title":"Index Configuration","text":"key source index notes vmware_cb-protect carbonblack:protection:cef epintel none"},{"location":"sources/vendor/VMWare/carbonblack/#legacy-cef-format","title":"Legacy CEF Format","text":""},{"location":"sources/vendor/VMWare/carbonblack/#key-facts_1","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/carbonblack/#links_1","title":"Links","text":"Ref Link Splunk Add-on none"},{"location":"sources/vendor/VMWare/carbonblack/#sourcetypes_1","title":"Sourcetypes","text":"sourcetype notes cef Common 
sourcetype"},{"location":"sources/vendor/VMWare/carbonblack/#source_1","title":"Source","text":"source notes carbonblack:protection:cef Note this method of onboarding is not recommended for a more complete experience utilize the json format supported by he product with hec or s3"},{"location":"sources/vendor/VMWare/carbonblack/#index-configuration_1","title":"Index Configuration","text":"key source index notes Carbon Black_Protection carbonblack:protection:cef epintel none"},{"location":"sources/vendor/VMWare/horizonview/","title":"Horizon View","text":""},{"location":"sources/vendor/VMWare/horizonview/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/horizonview/#links","title":"Links","text":"Ref Link Splunk Add-on None Manual unknown"},{"location":"sources/vendor/VMWare/horizonview/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:horizon None nix:syslog When used with a default port this will follow the generic NIX configuration when using a dedicated port, IP or host rules events will follow the index configuration for vmware nsx"},{"location":"sources/vendor/VMWare/horizonview/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes vmware_horizon vmware:horizon main none"},{"location":"sources/vendor/VMWare/vsphere/","title":"Vsphere","text":""},{"location":"sources/vendor/VMWare/vsphere/#product-vsphere-esx-nsx-controller-manager-edge","title":"Product - vSphere - ESX NSX (Controller, Manager, Edge)","text":"

The VMware vSphere product line has multiple long-standing, known issues in its syslog output.

WARNING: use of a load balancer with UDP will cause \u201ccorrupt\u201d event behavior due to out-of-order message processing by the load balancer.

Ref Link Splunk Add-on ESX https://splunkbase.splunk.com/app/5603/ Splunk Add-on Vcenter https://splunkbase.splunk.com/app/5601/ Splunk Add-on nxs none Splunk Add-on vsan none"},{"location":"sources/vendor/VMWare/vsphere/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:esxlog:${PROGRAM} None vmware:nsxlog:${PROGRAM} None vmware:vclog:${PROGRAM} None nix:syslog When used with a default port, this will follow the generic NIX configuration. When using a dedicated port, IP or host rules events will follow the index configuration for vmware nsx"},{"location":"sources/vendor/VMWare/vsphere/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes vmware_vsphere_esx vmware:esxlog:${PROGRAM} infraops none vmware_vsphere_nsx vmware:nsxlog:${PROGRAM} infraops none vmware_vsphere_nsxfw vmware:nsxlog:dfwpktlogs netfw none vmware_vsphere_vc vmware:vclog:${PROGRAM} infraops none"},{"location":"sources/vendor/VMWare/vsphere/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content when using the default configuration. SC4S will normalize the structure of VMware events from multiple incorrectly formed variants to RFC 5424 format to improve parsing.

"},{"location":"sources/vendor/VMWare/vsphere/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/VMWare/vsphere/#options","title":"Options","text":"Variable default description SC4S_LISTEN_VMWARE_VSPHERE_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_VMWARE_VSPHERE_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_VMWARE_VSPHERE_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG empty string empty/yes groups known instances of improperly split events set \u201cyes\u201d to return to enable"},{"location":"sources/vendor/VMWare/vsphere/#verification","title":"Verification","text":"

An active vSphere environment will generate frequent events. Use the following search to validate that events are present for each source device:

index=<asconfigured> sourcetype=\"vmware:vsphere:*\" | stats count by host\n
"},{"location":"sources/vendor/VMWare/vsphere/#automatic-parser-configuration","title":"Automatic Parser Configuration","text":"

Enable the following options in the env_file

#Do not enable with a SNAT load balancer\nSC4S_USE_NAME_CACHE=yes\n#Combine known split events into a single event for Splunk\nSC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG=yes\n#Learn vendor product from recognized events and apply to generic events\n#for example after the first vpxd event sshd will utilize vps \"vmware_vsphere_nix_syslog\" rather than \"nix_syslog\"\nSC4S_USE_VPS_CACHE=yes\n
"},{"location":"sources/vendor/VMWare/vsphere/#manual-parser-configuration","title":"Manual Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-vmware_vsphere.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-vmware_vsphere[sc4s-vps] {\n filter {      \n        #netmask(169.254.100.1/24)\n        #host(\"-esx-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('vmware')\n            product('vsphere')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Varonis/datadvantage/","title":"DatAdvantage","text":""},{"location":"sources/vendor/Varonis/datadvantage/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Varonis/datadvantage/#links","title":"Links","text":"Ref Link Technology Add-On for Varonis https://splunkbase.splunk.com/app/4256/"},{"location":"sources/vendor/Varonis/datadvantage/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes varonis:ta"},{"location":"sources/vendor/Varonis/datadvantage/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Varonis Inc._DatAdvantage varonis:ta main"},{"location":"sources/vendor/Vectra/cognito/","title":"Cognito","text":""},{"location":"sources/vendor/Vectra/cognito/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Vectra/cognito/#links","title":"Links","text":"Ref Link Technology Add-On for Vectra Cognito https://splunkbase.splunk.com/app/4408/"},{"location":"sources/vendor/Vectra/cognito/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vectra:cognito:detect vectra:cognito:accountdetect vectra:cognito:accountscoring vectra:cognito:audit vectra:cognito:campaigns vectra:cognito:health vectra:cognito:hostscoring vectra:cognito:accountlockdown"},{"location":"sources/vendor/Vectra/cognito/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Vectra Networks_X Series vectra:cognito:detect main Vectra Networks_X Series_accountdetect vectra:cognito:accountdetect main Vectra Networks_X Series_asc vectra:cognito:accountscoring main Vectra Networks_X Series_audit vectra:cognito:audit main Vectra Networks_X Series_campaigns vectra:cognito:campaigns main Vectra Networks_X Series_health vectra:cognito:health main Vectra Networks_X Series_hsc vectra:cognito:hostscoring main Vectra Networks_X Series_lockdown vectra:cognito:accountlockdown 
main"},{"location":"sources/vendor/Veeam/veeam/","title":"Veeam","text":""},{"location":"sources/vendor/Veeam/veeam/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Veeam/veeam/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes veeam:vbr:syslog"},{"location":"sources/vendor/Veeam/veeam/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes veeam_vbr_syslog veeam:vbr:syslog infraops none"},{"location":"sources/vendor/Wallix/bastion/","title":"Bastion","text":""},{"location":"sources/vendor/Wallix/bastion/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Wallix/bastion/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3661/"},{"location":"sources/vendor/Wallix/bastion/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes WB:syslog note this sourcetype includes program:rdproxy all other data will be treated as nix"},{"location":"sources/vendor/Wallix/bastion/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes wallix_bastion infraops main none"},{"location":"sources/vendor/Wallix/bastion/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-wallix_bastion.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-wallix_bastion[sc4s-vps] {\n filter { \n        host('^wasb')\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('wallix')\n            product('bastion')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/XYPro/mergedaudit/","title":"Merged Audit","text":"

XYPro Merged Audit, also called XYGate or XMA, is the de facto solution for syslog from the HP NonStop Server (Tandem).

"},{"location":"sources/vendor/XYPro/mergedaudit/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/XYPro/mergedaudit/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://xypro.com/products/hpe-software-from-xypro/"},{"location":"sources/vendor/XYPro/mergedaudit/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef None"},{"location":"sources/vendor/XYPro/mergedaudit/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes XYPRO_NONSTOP cef infraops none"},{"location":"sources/vendor/Zscaler/lss/","title":"LSS","text":"

The Zscaler product manual includes an extensive section on configuring multiple Splunk TCP input ports (around page 26). When using SC4S these ports are not required and should not be used. Instead, configure all outputs from the LSS to use the IP address or host name of the SC4S instance and port 514.

"},{"location":"sources/vendor/Zscaler/lss/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Zscaler/lss/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3865/ Product Manual https://community.zscaler.com/t/zscaler-splunk-app-design-and-installation-documentation/4728"},{"location":"sources/vendor/Zscaler/lss/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes zscalerlss-zpa-app None zscalerlss-zpa-bba None zscalerlss-zpa-connector None zscalerlss-zpa-auth None zscalerlss-zpa-audit None"},{"location":"sources/vendor/Zscaler/lss/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes zscaler_lss zscalerlss-zpa-app, zscalerlss-zpa-bba, zscalerlss-zpa-connector, zscalerlss-zpa-auth, zscalerlss-zpa-audit netproxy none"},{"location":"sources/vendor/Zscaler/nss/","title":"NSS","text":"

The Zscaler product manual includes an extensive section on configuring multiple Splunk TCP input ports (around page 26). When using SC4S these ports are not required and should not be used. Instead, configure all outputs from the NSS to use the IP address or host name of the SC4S instance and port 514.

"},{"location":"sources/vendor/Zscaler/nss/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Zscaler/nss/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3865/ Product Manual https://community.zscaler.com/t/zscaler-splunk-app-design-and-installation-documentation/4728"},{"location":"sources/vendor/Zscaler/nss/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes zscaler_nss_alerts Requires format customization add \\tvendor=Zscaler\\tproduct=alerts immediately prior to the \\n in the NSS Alert Web format. See Zscaler manual for more info. zscaler_nss_dns Requires format customization add \\tvendor=Zscaler\\tproduct=dns immediately prior to the \\n in the NSS DNS format. See Zscaler manual for more info. zscaler_nss_web None zscaler_nss_fw Requires format customization add \\tvendor=Zscaler\\tproduct=fw immediately prior to the \\n in the Firewall format. See Zscaler manual for more info."},{"location":"sources/vendor/Zscaler/nss/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes zscaler_nss_alerts zscalernss-alerts main none zscaler_nss_dns zscalernss-dns netdns none zscaler_nss_fw zscalernss-fw netfw none zscaler_nss_web zscalernss-web netproxy none zscaler_nss_tunnel zscalernss-tunnel netops none zscaler_zia_audit zscalernss-zia-audit netops none zscaler_zia_sandbox zscalernss-zia-sandbox main none"},{"location":"sources/vendor/Zscaler/nss/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content.

"},{"location":"sources/vendor/Zscaler/nss/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/a10networks/vthunder/","title":"a10networks vthunder","text":""},{"location":"sources/vendor/a10networks/vthunder/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/a10networks/vthunder/#links","title":"Links","text":"Ref Link A10 Networks SSL Insight App https://splunkbase.splunk.com/app/3937 A10 Networks Application Firewall App https://splunkbase.splunk.com/app/3920 A10 Networks L4 Firewall App https://splunkbase.splunk.com/app/3910"},{"location":"sources/vendor/a10networks/vthunder/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes a10networks:vThunder:cef CEF a10networks:vThunder:syslog Syslog"},{"location":"sources/vendor/a10networks/vthunder/#source","title":"Source","text":"source notes a10networks:vThunder None"},{"location":"sources/vendor/a10networks/vthunder/#index-configuration","title":"Index Configuration","text":"key source index notes a10networks_vThunder a10networks:vThunder netwaf, netops none"},{"location":"sources/vendor/epic/epic_ehr/","title":"Epic EHR","text":""},{"location":"sources/vendor/epic/epic_ehr/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/epic/epic_ehr/#links","title":"Links","text":"Ref Link Splunk Add-on na"},{"location":"sources/vendor/epic/epic_ehr/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes epic:epic-ehr:syslog None"},{"location":"sources/vendor/epic/epic_ehr/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes epic_epic-ehr epic:epic-ehr:syslog main none"},{"location":"sources/vendor/syslog-ng/loggen/","title":"loggen","text":"

Loggen is a tool used to load test syslog implementations.

"},{"location":"sources/vendor/syslog-ng/loggen/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/syslog-ng/loggen/#links","title":"Links","text":"Ref Link Product Manual https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/96#loggen.1"},{"location":"sources/vendor/syslog-ng/loggen/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes syslogng:loggen By default, loggen uses the legacy BSD-syslog message format.BSD example:loggen --inet --dgram --number 1 <ip> <port>RFC5424 example:loggen --inet --dgram -PF --number 1 <ip> <port>Refer to above manual link for more examples."},{"location":"sources/vendor/syslog-ng/loggen/#index-configuration","title":"Index Configuration","text":"key index notes syslogng_loggen main none"},{"location":"troubleshooting/troubleshoot_SC4S_server/","title":"Validate server startup and operations","text":"

This topic helps you find the most common solutions to startup and operational issues with SC4S.

If you plan to run SC4S with standard configuration, we recommend that you perform startup out of systemd.

If you are using a custom configuration of SC4S with significant modifications, for example, multiple unique ports for sources, hostname/CIDR block configuration for sources, or new log paths, start SC4S with the container runtime command podman or docker directly from the command line as described in this topic. When you are satisfied with the operation, you can then transition to systemd.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-systemd-errors-occur-during-sc4s-startup","title":"Issue: systemd errors occur during SC4S startup","text":"

If you are running SC4S from systemd, you may see this at startup:

[root@sc4s syslog-ng]# systemctl start sc4s\nJob for sc4s.service failed because the control process exited with error code. See \"systemctl status sc4s.service\" and \"journalctl -xe\" for details.\n
Most issues that occur with startup and operation of SC4S involve syntax errors or duplicate listening ports.

Try the following to resolve the issue:

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-your-sc4s-container-is-running","title":"Check that your SC4S container is running","text":"

If you start with systemd and the container is not running, check with the following:

journalctl -b -u sc4s | tail -100\n
This will print the last 100 lines of the system journal in detail, which should be sufficient to see the specific syntax or runtime failure and guide you in troubleshooting the unexpected container exit.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-sc4s-container-starts-and-runs-properly-outside-of-the-systemd-service-environment","title":"Check that the SC4S container starts and runs properly outside of the systemd service environment","text":"

As an alternative to launching with systemd during the initial installation phase, you can test the container startup outside of the systemd startup environment. This is especially important for troubleshooting or log path development, for example, when SC4S_DEBUG_CONTAINER is set to \u201cyes\u201d.

The following command launches the container directly from the command line. This command assumes the local mounted directories are set up as shown in the \u201cgetting started\u201d examples. Adjust for your local requirements. If you are using Docker, substitute \u201cdocker\u201d for \u201cpodman\u201d as the container runtime command.

/usr/bin/podman run \\\n    -v splunk-sc4s-var:/var/lib/syslog-ng \\\n    -v /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z \\\n    -v /opt/sc4s/archive:/var/lib/syslog-ng/archive:z \\\n    -v /opt/sc4s/tls:/etc/syslog-ng/tls:z \\\n    --env-file=/opt/sc4s/env_file \\\n    --network host \\\n    --name SC4S \\\n    --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-container-is-still-running-when-systemd-indicates-that-its-not-running","title":"Check that the container is still running when systemd indicates that it\u2019s not running","text":"

In some instances, particularly when SC4S_DEBUG_CONTAINER=yes, an SC4S container might not shut down completely when starting/stopping out of systemd, and systemd will attempt to start a new container when one is already running with the SC4S name. You will see this type of output when viewing the journal after a failed start caused by this condition, or a similar message when the container is run directly from the CLI:

Jul 15 18:45:20 sra-sc4s-alln01-02 podman[11187]: Error: error creating container storage: the container name \"SC4S\" is already in use by \"894357502b2a7142d097ea3ca1468d1cb4fbc69959a9817a1bbe145a09d37fb9\". You have to remove that container...\nJul 15 18:45:20 sra-sc4s-alln01-02 systemd[1]: sc4s.service: Main process exited, code=exited, status=125/n/a\n

To rectify this, execute:

podman rm -f SC4S\n

SC4S should then start normally.

Do not use systemd when SC4S_DEBUG_CONTAINER is set to \u201cyes\u201d; instead, use the CLI podman or docker commands directly to start and stop SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-hectoken-connection-errors-for-example-no-data-in-splunk","title":"Issue: HEC/token connection errors, for example, \u201cNo data in Splunk\u201d","text":"

SC4S performs basic HEC connectivity and index checks at startup and creates logs that indicate general connection issues and indexes that may not be accessible or configured on Splunk. To check the container logs that contain the results of these tests, run:

/usr/bin/<podman|docker> logs SC4S\n

You will see entries similar to the following:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful; checking indexes...\n\nSC4S_ENV_CHECK_INDEX: Checking email {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking epav {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking main {\"text\":\"Success\",\"code\":0}\n

Note the specifics of the indexes that are not configured correctly, and rectify this in your Splunk configuration. If this is not addressed properly, you may see output similar to the following when data flows into SC4S:

Mar 16 19:00:06 b817af4e89da syslog-ng[1]: Server returned with a 4XX (client errors) status code, which means we are not authorized or the URL is not found.; url='https://splunk-instance.com:8088/services/collector/event', status_code='400', driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5'\nMar 16 19:00:06 b817af4e89da syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5', worker_index='4', time_reopen='10', batch_size='1000'\n
This is an indication that the standard d_hec destination in syslog-ng, which is the route to Splunk, is being rejected by the HEC endpoint. A 400 error is commonly caused by an index that has not been created in Splunk. One bad index can invalidate the entire batch (in this case, 1000 events) and prevent any of the data from being sent to Splunk. Make sure that the container logs are free of these kinds of errors in production. You can use the alternate HEC debug destination to help debug this condition by sending direct \u201ccurl\u201d commands to the HEC endpoint outside of SC4S.
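As an illustration of that approach, a direct curl probe of the HEC event endpoint might look like the following sketch. The URL, token, and index are placeholders; substitute your own values.

```shell
# Send one test event straight to the HEC endpoint, bypassing SC4S.
# The URL, token, and index below are hypothetical; substitute your own.
curl -k https://splunk.example.com:8088/services/collector/event \
  -H 'Authorization: Splunk 00000000-0000-0000-0000-000000000000' \
  -d '{"event": "sc4s index probe", "index": "netops", "sourcetype": "sc4s:probe"}'
```

A {"text":"Success","code":0} response confirms the token and index are valid; a {"text":"Incorrect index","code":7} response points at the index configuration on the Splunk side.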

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-listening-ports","title":"Issue: Invalid SC4S listening ports","text":"

SC4S grants a port exclusively to a device when SC4S_LISTEN_{vendor}_{product}_{TCP/UDP/TLS}_PORT={port} is set.

During startup, SC4S validates that listening ports are configured correctly, and shows any issues in container logs.

You will receive an error message similar to the following if listening ports for MERAKI SWITCHES are configured incorrectly:

SC4S_LISTEN_MERAKI_SWITCHES_TCP_PORT: Wrong port number, don't use default port like (514,614,6514)\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT: 7000 is not unique and has already been used for another source\nSC4S_LISTEN_MERAKI_SWITCHES_TLS_PORT: 999999999999 must be integer within the range (0, 10000)\n
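By contrast, a conforming set of env_file entries uses unique, non-default ports within the valid range. The port numbers below are illustrative only:

```shell
# Illustrative env_file entries: each port is unique and avoids the
# defaults called out above (514, 614, 6514).
SC4S_LISTEN_MERAKI_SWITCHES_TCP_PORT=5514
SC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT=5515
SC4S_LISTEN_MERAKI_SWITCHES_TLS_PORT=5516
```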

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-sc4s-local-disk-resource-issues","title":"Issue: SC4S local disk resource issues","text":"

The d_hec_debug and d_archive destinations are organized by sourcetype; run the du -sh * command in each subdirectory to find the culprit.
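For example, assuming the local archive mount from the \u201cgetting started\u201d examples (/opt/sc4s/archive; adjust to your local path), a quick sketch to surface the largest sourcetype directories:

```shell
# List sourcetype subdirectories by size, largest first.
# The path is an assumption; point it at your archive or HEC debug mount.
du -sh /opt/sc4s/archive/* 2>/dev/null | sort -rh | head -10
```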

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-incorrect-sc4skernel-udp-input-buffer-settings","title":"Issue: Incorrect SC4S/kernel UDP Input Buffer settings","text":"

UDP Input Buffer Settings let you request a certain buffer size when configuring the UDP sockets. The kernel must have its parameters set to the same size or greater than what the syslog-ng configuration is requesting, or the following will occur in the SC4S logs:

/usr/bin/<podman|docker> logs SC4S\n
The following warning message is not a failure condition unless you are reaching the upper limit of your hardware performance.
The kernel refused to set the receive buffer (SO_RCVBUF) to the requested size, you probably need to adjust buffer related kernel parameters; so_rcvbuf='1703936', so_rcvbuf_set='425984'\n
Make changes to /etc/sysctl.conf, changing receive buffer values to 16 MB:

net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360 \n
Run the following command to apply your changes, then restart SC4S:
sysctl -p\n

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-tls-listener","title":"Issue: Invalid SC4S TLS listener","text":"

To verify the correct configuration of the TLS server, use the following command. Replace the IP, FQDN, and port as appropriate:

<podman|docker> run -ti drwetter/testssl.sh --severity MEDIUM --ip 127.0.0.1 selfsigned.example.com:6510\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-unable-to-retrieve-logs-from-non-rfc-5424-compliant-sources","title":"Issue: Unable to retrieve logs from non RFC-5424 compliant sources","text":"

If a data source you are trying to ingest claims it is RFC-5424 compliant but you get an \u201cError processing log message:\u201d error from SC4S, the data source still violates the RFC-5424 standard in some way. In this case, the underlying syslog-ng process will send an error event, with the location of the error in the original event highlighted with >@< to indicate where the error occurred. Here is an example error message:

{ [-]\n   ISODATE: 2020-05-04T21:21:59.001+00:00\n   MESSAGE: Error processing log message: <14>1 2020-05-04T21:21:58.117351+00:00 arcata-pks-cluster-1 pod.log/cf-workloads/logspinner-testing-6446b8ef - - [kubernetes@47450 cloudfoundry.org/process_type=\"web\" cloudfoundry.org/rootfs-version=\"v75.0.0\" cloudfoundry.org/version=\"eae53cc3-148d-4395-985c-8fef0606b9e3\" controller-revision-hash=\"logspinner-testing-6446b8ef05-7db777754c\" cloudfoundry.org/app_guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" cloudfoundry.org/source_type=\"APP\" security.istio.io/tlsMode=\"istio\" statefulset.kubernetes.io/pod-n>@<ame=\"logspinner-testing-6446b8ef05-0\" cloudfoundry.org/guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" namespace_name=\"cf-workloads\" object_name=\"logspinner-testing-6446b8ef05-0\" container_name=\"opi\" vm_id=\"vm-e34452a3-771e-4994-666e-bfbc7eb77489\"] Duration 10.00299412s TotalSent 10 Rate 0.999701 \n   PID: 33\n   PRI: <43>\n   PROGRAM: syslog-ng\n}\n

In this example the error can be seen in the snippet statefulset.kubernetes.io/pod-n>@<ame. The error states that the \u201cSD-NAME\u201d (the left-hand side of the name=value pairs) cannot be longer than 32 printable ASCII characters, and the indicated name exceeds that. Ideally you should address this issue with the vendor; however, you can add an exception to the SC4S filter log path or create an alternative workaround log path for the data source.

In this example, RAWMSG is not shown in the fields above because the error message comes from syslog-ng itself. In messages of the type Error processing log message: where the PROGRAM is shown as syslog-ng, your incoming message is not RFC-5424 compliant.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-terminal-is-overwhelmed-by-metrics-and-internal-processing-messages-in-a-custom-environment-configuration","title":"Issue: Terminal is overwhelmed by metrics and internal processing messages in a custom environment configuration","text":"

In non-containerized SC4S deployments, if you try to start the SC4S service, the terminal may be overwhelmed by internal and metrics logs. An example of the issue can be found here: GitHub Terminal abuse issue

To resolve this, set the following property in the env_file:

SC4S_SEND_METRICS_TERMINAL=no\n

Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-cef-logs-that-are-not-rfc-compliant","title":"Issue: You are missing CEF logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:

    SC4S_DISABLE_DROP_INVALID_CEF=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-vmware-cb-protect-logs-that-are-not-rfc-compliant","title":"Issue: You are missing VMWARE CB-PROTECT logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:

    SC4S_DISABLE_DROP_INVALID_VMWARE_CB_PROTECT=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-cisco-ios-logs-that-are-not-rfc-compliant","title":"Issue: You are missing CISCO IOS logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:
    SC4S_DISABLE_DROP_INVALID_CISCO=yes\n
  2. Restart SC4S.
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-vmware-vsphere-logs-that-are-not-rfc-compliant","title":"Issue: You are missing VMWARE VSPHERE logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:

    SC4S_DISABLE_DROP_INVALID_VMWARE_VSPHERE=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-raw-bsd-logs-that-are-not-rfc-compliant","title":"Issue: You are missing RAW BSD logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:

    SC4S_DISABLE_DROP_INVALID_RAW_BSD=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-raw-xml-logs-that-are-not-rfc-compliant","title":"Issue: You are missing RAW XML logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:

    SC4S_DISABLE_DROP_INVALID_XML=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-hpe-jetdirect-logs-that-are-not-rfc-compliant","title":"Issue: You are missing HPE JETDIRECT logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in the env_file:

    SC4S_DISABLE_DROP_INVALID_HPE=yes\n

  2. Restart SC4S. Invalid HPE JETDIRECT events will no longer be dropped.

NOTE: Use these settings only as an exception; they are not supported by Splunk and might impact SC4S performance.

"},{"location":"troubleshooting/troubleshoot_resources/","title":"SC4S Logging and Troubleshooting Resources","text":""},{"location":"troubleshooting/troubleshoot_resources/#helpful-linux-and-container-commands","title":"Helpful Linux and container commands","text":""},{"location":"troubleshooting/troubleshoot_resources/#linux-service-systemd-commands","title":"Linux service (systemd) commands","text":""},{"location":"troubleshooting/troubleshoot_resources/#container-commands","title":"Container commands","text":"

All of the following container commands can be run with the podman or docker runtime.

"},{"location":"troubleshooting/troubleshoot_resources/#test-commands","title":"Test commands","text":"

Check your SC4S port using the nc command. Run this command where SC4S is hosted and check data in Splunk for success and failure:

echo '<raw_sample>' |nc <host> <port>\n

"},{"location":"troubleshooting/troubleshoot_resources/#obtain-raw-message-events","title":"Obtain raw message events","text":"

During development or troubleshooting, you may need to obtain samples of the messages exactly as they are received by SC4S. These events contain the full syslog message, including the <PRI> preamble, and are different from messages that have been processed by SC4S and Splunk.

These raw messages help to determine that SC4S parsers and filters are operating correctly, and are needed for playback when testing. The community supporting SC4S will always first ask for raw samples before any development or troubleshooting exercise.

Here are some options for obtaining raw logs for one or more sourcetypes:

NOTE: Be sure to turn off the RAWMSG variable when you are finished, because it doubles the memory and disk requirements of SC4S. Do not use RAWMSG in production.
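As a sketch, a temporary env_file entry for enabling raw message capture might look like the following. The variable name SC4S_SOURCE_STORE_RAWMSG is an assumption; confirm it against the configuration reference for your SC4S version.

```shell
# Temporary env_file entry (assumption: SC4S_SOURCE_STORE_RAWMSG is the
# variable your SC4S version uses to enable RAWMSG capture).
# Remove it and restart SC4S when finished; do not leave enabled in production.
SC4S_SOURCE_STORE_RAWMSG=yes
```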

"},{"location":"troubleshooting/troubleshoot_resources/#run-exec-into-the-container-advanced-task","title":"Run exec into the container (advanced task)","text":"

You can confirm how the templating process created the actual syslog-ng configuration files by calling exec into the container and navigating the syslog-ng config filesystem directly. To do this, run

/usr/bin/podman exec -it SC4S /bin/bash\n
and navigate to /opt/syslog-ng/etc/ to see the actual configuration files in use. If you are familiar with container operations and syslog-ng, you can modify files directly and reload syslog-ng with the command kill -1 1 in the container. You can also run the /entrypoint.sh script, or a subset of it, such as everything but syslog-ng, and have complete control over the templating and underlying syslog-ng process. This is an advanced topic and further help can be obtained through the GitHub issue tracker and Slack channels.

"},{"location":"troubleshooting/troubleshoot_resources/#keeping-a-failed-container-running-advanced-topic","title":"Keeping a failed container running (advanced topic)","text":"

To debug a configuration syntax issue at startup, keep the container running after a syslog-ng startup failure. In order to facilitate troubleshooting and make syslog-ng configuration changes from within a running container, the container can be forced to remain running when syslog-ng fails to start (which normally terminates the container). To enable this, add SC4S_DEBUG_CONTAINER=yes to the env_file. Use this capability in conjunction with exec calls into the container.

NOTE: Do not enable the debug container mode while running out of systemd. Instead, run the container manually from the CLI, so that you can use the podman or docker commands needed to start, stop, and clean up cruft left behind by the debug process. Only when SC4S_DEBUG_CONTAINER is set to \u201cno\u201d (or completely unset) should systemd startup processing resume.

"},{"location":"troubleshooting/troubleshoot_resources/#fix-time-zones","title":"Fix time zones","text":"

Time zone mismatches can occur if SC4S and the log host are not in the same time zone. To resolve this, create a filter using sc4s-lp-dest-format-d_hec_fmt, for example:

#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() {    \n    channel {\n            rewrite { fix-time-zone(\"EST5EDT\"); };\n    };\n};\napplication app-dest-rewrite-fix_tz_something-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n    filter {\n        match('checkpoint' value('fields.sc4s_vendor') type(string))                 <- this must be customized\n        and match('syslog' value('fields.sc4s_product') type(string))                <- this must be customized\n        and match('Drop' value('.SDATA.sc4s@2620.action') type(string))              <- this must be customized\n        and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) );  <- this must be customized\n\n    };    \n    parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); };   \n};\n

If destport, container, and proto are not available in indexed fields, you can create a post-filter:

#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-fortinet_fortios-d_fmt_hec_default() {\n    channel {\n            rewrite {\n                  fix-time-zone(\"EST5EDT\");\n            };\n    };\n};\n\napplication app-dest-rewrite-device-d_fmt_hec_default[sc4s-postfilter] {\n    filter {\n         match(\"xxxx\", value(\"fields.sc4s_destport\") type(glob));  <- this must be customized\n    };\n    parser { app-dest-rewrite-fortinet_fortios-d_fmt_hec_default(); };\n};\n
Note that the filter match statement should be aligned with your data

The parser accepts time zones in formats such as \u201cAmerica/New_York\u201d or \u201cEST5EDT\u201d, but not short forms such as \u201cEST\u201d.

"},{"location":"troubleshooting/troubleshoot_resources/#issue-cyberark-log-problems","title":"Issue: CyberArk log problems","text":"

When data is received on the indexers, all events are merged together into one event. Check the following link for CyberArk configuration information: https://cyberark-customers.force.com/s/article/00004289.

"},{"location":"troubleshooting/troubleshoot_resources/#issue-sc4s-events-drop-when-another-interface-is-used-to-receive-logs","title":"Issue: SC4S events drop when another interface is used to receive logs","text":"

When a second or alternate interface is used to receive syslog traffic, RPF (Reverse Path Forwarding) filtering in RHEL, which is enabled by default, may drop events. To resolve this, add a static route for the source device that points back to the dedicated syslog interface. See https://access.redhat.com/solutions/53031.

"},{"location":"troubleshooting/troubleshoot_resources/#issue-splunk-does-not-ingest-sc4s-events-from-other-virtual-machines","title":"Issue: Splunk does not ingest SC4S events from other virtual machines","text":"

When data is sent through an echo message from the same instance, it arrives in Splunk successfully. However, when the echo is sent from a different instance, the data may not appear in Splunk and no errors are reported in the logs. To resolve this issue, check whether an internal firewall is enabled. If an internal firewall is active, verify whether the default port 514, or the port you have configured, is blocked. Here are some commands to check and configure your firewall:

#To list all the firewall ports\nsudo firewall-cmd --list-all\n#To enable 514/udp if it is not enabled\nsudo firewall-cmd --zone=public --permanent --add-port=514/udp\nsudo firewall-cmd --reload\n

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Splunk Connect for Syslog!","text":"

Splunk Connect for Syslog is an open source packaged solution for getting data into Splunk. It is based on the syslog-ng Open Source Edition (Syslog-NG OSE) and transports data to Splunk via the Splunk HTTP event Collector (HEC) rather than writing events to disk for collection by a Universal Forwarder.

"},{"location":"#product-goals","title":"Product Goals","text":""},{"location":"#support","title":"Support","text":"

Splunk Support: If you are an existing Splunk customer with access to the Support Portal, create a support ticket for the quickest resolution to any issues you experience. Here are some examples of when it may be appropriate to create a support ticket: - If you experience an issue with the current version of SC4S, such as a feature gap or a documented feature that is not working as expected. - If you have difficulty with the configuration of SC4S, either at the back end or with the out-of-box parsers or index configurations. - If you experience performance issues and need help understanding the bottlenecks. - If you have any questions or issues with the SC4S documentation.

GitHub Issues: For all enhancement requests, please feel free to create GitHub issues. We prioritize and work on issues based on their priority and resource availability. You can help us by tagging the requests with the appropriate labels.

Splunk developers are active in the external user group on a best-effort basis; please use a support case or GitHub issues to resolve your issues quickly.

"},{"location":"#contributing","title":"Contributing","text":"

We welcome feedback and contributions from the community! Please see our contribution guidelines for more information on how to get involved.

"},{"location":"#license","title":"License","text":""},{"location":"#references","title":"References","text":""},{"location":"CONTRIBUTING/","title":"CONTRIBUTING","text":"

Splunk welcomes contributions from the SC4S community, and your feedback and enhancements are appreciated. There is always code to clarify, functionality to extend, new data filters to develop, and documentation to refine. If you see something you think should be fixed or added, go for it!

"},{"location":"CONTRIBUTING/#data-safety","title":"Data Safety","text":"

Splunk Connect for Syslog is a community-built and community-maintained product. Anyone with internet access can get a Splunk GitHub account and participate. As with any publicly available repository, care must be taken to never share private data via issues, pull requests, or any other mechanism. Any data that is shared in the Splunk Connect for Syslog GitHub repository is made available to the entire community without limits. Members of the community and/or their employers (including Splunk) assume no responsibility or liability for any damages resulting from the sharing of private data via the Splunk GitHub.

Any data samples shared in the Splunk GitHub repository must be free of private data. * Working locally, identify potentially sensitive field values in data samples (public IP addresses, URLs, hostnames, etc.) * Replace all potentially sensitive field values with synthetic values * Manually review data samples to re-confirm they are free of private data before sharing them in the Splunk GitHub

"},{"location":"CONTRIBUTING/#prerequisites","title":"Prerequisites","text":"

When contributing to this repository, please first discuss the change you wish to make via a GitHub issue or Slack message with the owners of this repository.

"},{"location":"CONTRIBUTING/#setup-development-environment","title":"Setup Development Environment","text":"

For a basic development environment, Docker and a bash shell are all that is required. For a more complete IDE experience, see our wiki: [Setup PyCharm](https://github.com/splunk/splunk-connect-for-syslog/wiki/SC4S-Development-Setup-Using-PyCharm)

"},{"location":"CONTRIBUTING/#feature-requests-and-bug-reports","title":"Feature Requests and Bug Reports","text":"

Have ideas on improvements or found a problem? While the community encourages everyone to contribute code, it is also appreciated when someone reports an issue. Please report any issues or bugs you find through GitHub\u2019s issue tracker.

If you are reporting a bug, please include the following details:

We want to hear about your enhancements as well. Feel free to submit them as issues:

"},{"location":"CONTRIBUTING/#fixing-issues","title":"Fixing Issues","text":"

Look through our issue tracker to find problems to fix! Feel free to comment and tag community members of this project with any questions or concerns.

"},{"location":"CONTRIBUTING/#pull-requests","title":"Pull Requests","text":"

A \u201cpull request\u201d informs the project\u2019s core developers about the changes you want reviewed and merged. Once you submit a pull request, it enters code review, where you and others can discuss potential modifications and even add more commits to it later on.

If you want to learn more, please consult this tutorial on how pull requests work in the GitHub Help Center.

Here\u2019s an overview of how you can make a pull request against this project:

"},{"location":"CONTRIBUTING/#code-review","title":"Code Review","text":"

There are two aspects of code review: giving and receiving. To make it easier for your PR to receive reviews, consider that reviewers will need you to:

"},{"location":"CONTRIBUTING/#testing","title":"Testing","text":"

Testing is the responsibility of all contributors. In general, we try to adhere to TDD, writing the test first. There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test.

We could always use improvements to our documentation! Anyone can contribute to these docs - whether you\u2019re new to the project or have been around a long time, and whether you self-identify as a developer, an end user, or someone who just can\u2019t stand seeing typos. What exactly is needed?

"},{"location":"CONTRIBUTING/#release-notes","title":"Release Notes","text":"

To add commit messages to release notes, tag the message in the following format:

[TYPE] <commit message>\n
[TYPE] can be one of the following: * FEATURE * FIX * DOC * TEST * CI * REVERT * FILTERADD * FILTERMOD

Sample commit:\ngit commit -m \"[TEST] test-message\"\n
"},{"location":"architecture/","title":"SC4S Architectural Considerations","text":"

SC4S provides performant and reliable syslog data collection. When you are planning your configuration, review the following architectural considerations. These recommendations stem from the design and age of the syslog protocol itself, and are not specific to Splunk Connect for Syslog.

"},{"location":"architecture/#the-syslog-protocol","title":"The syslog Protocol","text":"

The syslog protocol design prioritizes speed and efficiency, often at the expense of resiliency and reliability. User Datagram Protocol (UDP) provides the ability to \u201csend and forget\u201d events over the network without regard to or acknowledgment of receipt. Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols are also supported, though UDP prevails as the preferred syslog transport for most data centers.

Because of these tradeoffs, traditional methods to provide scale and resiliency do not necessarily transfer to syslog.

"},{"location":"architecture/#ip-protocol","title":"IP protocol","text":"

By default, SC4S listens on ports using IPv4. IPv6 is also supported, see SC4S_IPV6_ENABLE in source configuration options.

"},{"location":"architecture/#collector-location","title":"Collector Location","text":"

Since syslog is a \u201csend and forget\u201d protocol, it does not perform well when routed through substantial network infrastructure. This includes front-side load balancers and WAN. The most reliable way to collect syslog traffic is to provide for edge collection rather than centralized collection. If you centrally locate your syslog server, the UDP and (stateless) TCP traffic cannot adjust and data loss will occur.

"},{"location":"architecture/#syslog-data-collection-at-scale","title":"syslog Data Collection at Scale","text":"

As a best practice, do not co-locate syslog-ng servers for horizontal scale and load balance to them with a front-side load balancer:

"},{"location":"architecture/#high-availability-considerations-and-challenges","title":"High availability considerations and challenges","text":"

Load balancing for high availability does not work well for stateless, unacknowledged syslog traffic. More data is preserved when you use a simpler design, such as vMotioned VMs. With syslog, the protocol itself is prone to loss, and syslog data collection can be made \u201cmostly available\u201d at best.

"},{"location":"architecture/#udp-vs-tcp","title":"UDP vs. TCP","text":"

Run your syslog configuration on UDP rather than TCP.

The syslogd daemon uses UDP for log forwarding to reduce overhead, because UDP\u2019s datagram delivery does not require establishing a network session. UDP reduces load on the network because no receipt verification or window adjustment is required.

TCP uses acknowledgements (ACKs) to avoid data loss; however, loss can still occur when:

Use TCP if the syslog event is larger than the maximum size of a UDP packet on your network; this typically applies to web proxy, DLP, and IDS sources. To mitigate the drawbacks of TCP, you can use TLS over TCP:
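The \u201csend and forget\u201d behavior discussed above can be seen in a few lines of Python; the address, port, and message below are placeholders, not SC4S defaults:

```python
import socket

# A UDP syslog send establishes no session and waits for no acknowledgement.
# <134> encodes facility local0 (16) and severity info (6): 16 * 8 + 6.
message = b"<134>Oct 11 22:14:15 myhost myapp: hello via UDP"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# "Send and forget": this succeeds even if nothing is listening on the port.
sock.sendto(message, ("127.0.0.1", 514))
sock.close()
```

A TCP sender would first have to complete a connect() handshake and would block or fail if the receiver were down, which is exactly the overhead and failure mode the text describes.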

"},{"location":"configuration/","title":"SC4S configuration variables","text":"

SC4S is primarily controlled by environment variables. This topic describes the categories and variables you need to properly configure SC4S for your environment.

"},{"location":"configuration/#global-configuration-variables","title":"Global configuration variables","text":"Variable Values Description SC4S_USE_REVERSE_DNS yes or no (default) Use reverse DNS to identify hosts when HOST is not valid in the syslog header. SC4S_REVERSE_DNS_KEEP_FQDN yes or no (default) When enabled, SC4S will not extract the hostname from FQDN, and instead will pass the full domain name to the host. SC4S_CONTAINER_HOST string Variable that is passed to the container to identify the actual log host for container implementations.

If the host value is not present in an event, and you require that a true hostname be attached to each event, SC4S provides an optional ability to perform a reverse IP to name lookup. If the variable SC4S_USE_REVERSE_DNS is set to \u201cyes\u201d, then SC4S first checks host.csv and replaces the value of host with the specified value that matches the incoming IP address. If no value is found in host.csv, SC4S attempts a reverse DNS lookup against the configured nameserver. In this case, SC4S by default extracts only the hostname from the FQDN (example.domain.com -> example). If the SC4S_REVERSE_DNS_KEEP_FQDN variable is set to \u201cyes\u201d, the full domain name is assigned to the host field.

Note: Using the SC4S_USE_REVERSE_DNS variable can have a significant impact on performance if the reverse DNS facility is not performant. Check this variable if you notice that events are indexed later than the actual timestamp in the event, for example, if you notice a latency between _indextime and _time.
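The lookup precedence described above (host.csv first, then reverse DNS, with optional FQDN trimming) can be sketched as follows. This is an illustrative model only, not SC4S\u2019s internal implementation, and the host.csv contents are hypothetical:

```python
import socket

def shorten(fqdn):
    # example.domain.com -> example
    return fqdn.split(".")[0]

def resolve_host(ip, host_csv, keep_fqdn=False):
    """Model of the lookup order: host.csv override, then reverse DNS."""
    if ip in host_csv:
        return host_csv[ip]            # host.csv match wins
    try:
        fqdn = socket.gethostbyaddr(ip)[0]   # reverse DNS lookup
    except OSError:
        return ip                      # no PTR record: leave the IP as-is
    return fqdn if keep_fqdn else shorten(fqdn)

# A hypothetical host.csv entry takes precedence over DNS:
print(resolve_host("192.0.2.10", {"192.0.2.10": "fw01"}))  # fw01
```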

"},{"location":"configuration/#configure-your-external-http-proxy","title":"Configure your external HTTP proxy","text":"

Many HTTP proxies are not provisioned with application traffic in mind. Ensure adequate capacity is available to avoid data loss and proxy outages. The following variables must be entered in lower case:

Variable Values Description http_proxy undefined Use libcurl format proxy string \u201chttp://username:password@proxy.server:port\u201d https_proxy undefined Use libcurl format proxy string \u201chttp://username:password@proxy.server:port\u201d"},{"location":"configuration/#configure-your-splunk-hec-destination","title":"Configure your Splunk HEC destination","text":"Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_CIPHER_SUITE comma separated list OpenSSL cipher suite list. SC4S_DEST_SPLUNK_HEC_<ID>_SSL_VERSION comma separated list OpenSSL version list. SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS numeric The number of destination workers (threads); the default is 10 threads. You do not need to change this variable from the default unless your environment has a very high or low volume. Consult with the SC4S community for advice about configuring your settings for environments with very high or low volumes. SC4S_DEST_SPLUNK_INDEXED_FIELDS r_unixtime,facility,severity,container,loghost,destport,fromhostip,proto,none This is the list of SC4S indexed fields that will be included with each event in Splunk. The default is the entire list except \u201cnone\u201d. Two other indexed fields, sc4s_vendor_product and sc4s_syslog_format, also appear along with the fields selected and cannot be turned on or off individually. If you do not want any indexed fields, set the value to the single value of \u201cnone\u201d. When you set this variable, you must separate multiple entries with commas; do not include extra spaces. This list maps to the following indexed fields that will appear in all Splunk events: facility: sc4s_syslog_facility; severity: sc4s_syslog_severity; container: sc4s_container; loghost: sc4s_loghost; destport: sc4s_destport; fromhostip: sc4s_fromhostip; proto: sc4s_proto

The destination operating parameters outlined above should be individually controlled using the destination ID. For example, to set the number of workers for the default destination, use SC4S_DEST_SPLUNK_HEC_DEFAULT_WORKERS. To configure workers for the alternate HEC destination d_hec_FOO, use SC4S_DEST_SPLUNK_HEC_FOO_WORKERS.

"},{"location":"configuration/#configure-timezones-for-legacy-sources","title":"Configure timezones for legacy sources","text":"

Set the SC4S_DEFAULT_TIMEZONE variable to a recognized \u201czone info\u201d (Region/City) time zone format such as America/New_York. Setting this value forces SC4S to use the specified timezone and honor its associated Daylight Savings rules for all events without a timezone offset in the header or message payload.
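The reason a Region/City zone name is required, rather than a fixed offset, is that it carries the zone\u2019s DST rules. This can be checked with Python\u2019s zoneinfo module; the dates below are arbitrary examples using the America/New_York zone from the text:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A fixed offset like "-05:00" is wrong for half the year; the IANA zone
# name resolves to the correct offset for any given date.
tz = ZoneInfo("America/New_York")

winter = datetime(2023, 1, 15, 12, 0, tzinfo=tz)  # EST in effect
summer = datetime(2023, 7, 15, 12, 0, tzinfo=tz)  # EDT in effect

print(winter.utcoffset())  # UTC-05:00
print(summer.utcoffset())  # UTC-04:00
```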

"},{"location":"configuration/#configure-your-sc4s-disk-buffer","title":"Configure your SC4S disk buffer","text":"

SC4S provides the ability to minimize the number of lost events if the connection to all the Splunk indexers is lost. This capability utilizes the disk buffering feature of Syslog-ng.

SC4S receives a response from the Splunk HTTP Event Collector (HEC) when a message is received successfully. If a confirmation message from the HEC endpoint is not received (or a \u201cserver busy\u201d reply, such as a \u201c503\u201d, is received), the load balancer tries the next HEC endpoint in the pool. If all pool members are exhausted, for example during a full network outage to the HEC endpoints, events queue to the local disk buffer on the SC4S Linux host.

SC4S will continue attempting to send the failed events while it buffers all new incoming events to disk. If the disk space allocated to disk buffering fills up, SC4S stops accepting new events and subsequent events are lost.

Once SC4S gets confirmation that events are again being received by one or more indexers, events will then stream from the buffer using FIFO queueing.

The number of events in the disk buffer decreases as long as the incoming event volume is less than the maximum throughput SC4S can handle with the disk buffer in the path. When the disk buffer has been emptied, SC4S resumes streaming events directly to Splunk.

Disk buffers in SC4S are allocated per destination. Keep this in mind when using additional destinations that have disk buffering configured. By default, when you configure alternate HEC destinations, disk buffering is configured identically to that of the main HEC destination, unless overridden individually.
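The behavior described above (queue on failure, drop when full, drain FIFO on recovery) can be modeled roughly as follows. This is a toy sketch, not syslog-ng\u2019s disk-buffer implementation; the five-event capacity stands in for the configured DISKBUFSIZE:

```python
from collections import deque

delivered = []      # stands in for events accepted by Splunk HEC
buffer = deque()    # FIFO disk buffer
CAPACITY = 5        # stands in for the configured DISKBUFSIZE

def send(event, hec_up):
    """Deliver an event, draining any backlog first when HEC is reachable."""
    if hec_up:
        while buffer:                     # FIFO drain: oldest events first
            delivered.append(buffer.popleft())
        delivered.append(event)
        return True
    if len(buffer) >= CAPACITY:
        return False                      # buffer full: the event is lost
    buffer.append(event)
    return True

send("e1", hec_up=False)   # HEC unreachable: queued to disk
send("e2", hec_up=False)   # HEC unreachable: queued to disk
send("e3", hec_up=True)    # recovery: drains e1, e2, then delivers e3
print(delivered)           # ['e1', 'e2', 'e3']
```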

"},{"location":"configuration/#estimate-your-storage-allocation","title":"Estimate your storage allocation","text":"

As an example, to protect against a full day of lost connectivity from SC4S to all your indexers at maximum throughput, the calculation would look like the following:

60,000 EPS * 86400 seconds * 800 bytes * 1.7 = 6.4 TB of storage
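The arithmetic can be checked directly; the 1.7 multiplier and the 800-byte average event size are the assumptions stated above, and the result is expressed in binary terabytes:

```python
eps = 60_000            # events per second
seconds_per_day = 86_400
avg_event_bytes = 800
overhead_factor = 1.7   # headroom multiplier from the example above

total_bytes = eps * seconds_per_day * avg_event_bytes * overhead_factor
print(f"{total_bytes / 1024**4:.1f} TB")  # 6.4 TB
```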

"},{"location":"configuration/#about-disk-buffering","title":"About disk buffering","text":"

Note the following about disk buffering:

"},{"location":"configuration/#disk-buffer-variables","title":"Disk Buffer Variables","text":"Variable Values/Default Description SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_ENABLE yes(default) or no Enable local disk buffering. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_RELIABLE yes or no(default) Enable reliable/normal disk buffering (normal is the recommended value). SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_MEMBUFSIZE bytes (10241024) The worker\u2019s memory buffer size in bytes, used with reliable disk buffering. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_MEMBUFLENGTH messages (15000) The worker\u2019s memory buffer size in message count, used with normal disk buffering. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DISKBUFSIZE bytes (53687091200) Size of local disk buffering bytes, the default is 50 GB per worker. SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR path Location to store the disk buffer files. This location is fixed when using the container and should not be modified.

Note: The buffer options apply to each worker rather than the entire destination.

"},{"location":"configuration/#archive-file-configuration","title":"Archive File Configuration","text":"

This feature is designed to support compliance or diode mode archival of all messages. The files are stored in a folder structure at the mount point using the pattern shown in the table below, depending on the value of the SC4S_GLOBAL_ARCHIVE_MODE variable. Events for both modes are formatted using syslog-ng\u2019s EWMM template.

Variable Value/Default Location/Pattern SC4S_GLOBAL_ARCHIVE_MODE compliance(default) <archive mount>/${.splunk.sourcetype}/${HOST}/$YEAR-$MONTH-$DAY-archive.log SC4S_GLOBAL_ARCHIVE_MODE diode <archive mount>/${YEAR}/${MONTH}/${DAY}/${fields.sc4s_vendor_product}_${YEAR}${MONTH}${DAY}${HOUR}${MIN}.log

Use the following variables to select global archiving or per-source archiving. SC4S does not prune the files that are created, therefore an administrator must provide a means of log rotation to prune files and move them to an archival system to avoid exhausting disk space.

Variable Values Description SC4S_ARCHIVE_GLOBAL yes or undefined Enable archiving of all vendor_products. SC4S_DEST_<VENDOR_PRODUCT>_ARCHIVE yes(default) or undefined Enables selective archiving by vendor product."},{"location":"configuration/#syslog-source-configuration","title":"Syslog Source Configuration","text":"Variable Values/Default Description SC4S_SOURCE_TLS_ENABLE yes or no(default) Enable TLS globally. Be sure to configure the certificate as shown below. SC4S_LISTEN_DEFAULT_TLS_PORT undefined or 6514 Enable a TLS listener on port 6514. SC4S_LISTEN_DEFAULT_RFC5425_PORT undefined or 5425 Enable a TLS listener on port 5425. SC4S_SOURCE_TLS_OPTIONS no-sslv2 Comma-separated list of the following options: no-sslv2, no-sslv3, no-tlsv1, no-tlsv11, no-tlsv12, none. See syslog-ng docs for the latest list and default values. SC4S_SOURCE_TLS_CIPHER_SUITE See openssl Colon-delimited list of ciphers to support, for example, ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384. See openssl for the latest list and defaults. SC4S_SOURCE_TCP_MAX_CONNECTIONS 2000 Maximum number of TCP connections. SC4S_SOURCE_UDP_IW_USE yes or no(default) Determine whether to change the initial Window size for UDP. SC4S_SOURCE_UDP_FETCH_LIMIT 1000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_IW_SIZE 250000 Initial Window size. SC4S_SOURCE_TCP_IW_SIZE 20000000 Initial Window size. SC4S_SOURCE_TCP_FETCH_LIMIT 2000 Number of events to fetch from server buffer at one time. SC4S_SOURCE_UDP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TCP_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_TLS_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5426_SO_RCVBUFF 17039360 Server buffer size in bytes. 
Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC6587_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_RFC5425_SO_RCVBUFF 17039360 Server buffer size in bytes. Make sure that the host OS kernel is configured similarly. SC4S_SOURCE_LISTEN_UDP_SOCKETS 4 Number of kernel sockets per active UDP port, which configures multi-threading of the UDP input buffer in the kernel to prevent packet loss. Total UDP input buffer is the product of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5426_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the product of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC6587_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the product of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_LISTEN_RFC5425_SOCKETS 1 Number of kernel sockets per active UDP port, which configures multi-threading of the input buffer in the kernel to prevent packet loss. Total UDP input buffer is the product of SOCKETS x SO_RCVBUFF. SC4S_SOURCE_STORE_RAWMSG undefined or \u201cno\u201d Store unprocessed \u201con the wire\u201d raw message in the RAWMSG macro for use with the \u201cfallback\u201d sourcetype. Do not set this in production; substantial memory and disk overhead will result. Use this only for log path and filter development. SC4S_IPV6_ENABLE yes or no(default) Enable dual-stack IPv6 listeners and health checks."},{"location":"configuration/#configure-your-syslog-source-tls-certificate","title":"Configure your syslog source TLS certificate","text":"
  1. Create the folder /opt/sc4s/tls .
  2. Uncomment the appropriate mount line in the unit or yaml file.
  3. Save the server private key in PEM format with no password to /opt/sc4s/tls/server.key.
  4. Save the server certificate in PEM format to /opt/sc4s/tls/server.pem.
  5. Ensure the entry SC4S_SOURCE_TLS_ENABLE=yes exists in /opt/sc4s/env_file.
"},{"location":"configuration/#configure-additional-pki-trust-anchors","title":"Configure additional PKI trust anchors","text":"

Additional certificate authorities may be trusted by appending each PEM formatted certificate to /opt/sc4s/tls/trusted.pem.

"},{"location":"configuration/#configure-sc4s-metadata","title":"Configure SC4S metadata","text":""},{"location":"configuration/#override-the-log-path-of-indexes-or-metadata","title":"Override the log path of indexes or metadata","text":"

Set Splunk metadata before the data arrives in Splunk and before any add-on processing occurs. The filters apply the index, source, sourcetype, host, and timestamp metadata automatically by individual data source. Values for this metadata, including a recommended index and output format, are included with all \u201cout-of-the-box\u201d log paths included with SC4S and are chosen to properly interface with the corresponding add-on in Splunk. You must ensure all recommended indexes accept this data if the defaults are not changed.

To accommodate the override of default values, each log path consults an internal lookup file that maps Splunk metadata to the specific data source being processed. This file contains the defaults that are used by SC4S to set the appropriate Splunk metadata, index, host, source, and sourcetype, for each data source. This file is not directly available to the administrator, but a copy of the file is deposited in the local mounted directory for reference, /opt/sc4s/local/context/splunk_metadata.csv.example by default. This copy is provided solely for reference. To add to the list or to override default entries, create an override file without the example extension (for example /opt/sc4s/local/context/splunk_metadata.csv) and modify it according to the instructions below.

splunk_metadata.csv is a CSV file containing a \u201ckey\u201d that is referenced in the log path for each data source. These keys are documented in the individual source files in this section, and let you override Splunk metadata.

The following is example line from a typical splunk_metadata.csv override file:

juniper_netscreen,index,ns_index\n

The columns in this file are key, metadata, and value. To make a change using the override file, consult the example file (or the source documentation) for the proper key and modify and add rows in the table, specifying one or more of the following metadata/value pairs for a given key:

In our example above, the juniper_netscreen key references a new index used for that data source called ns_index.

For most deployments, the index should be the only change needed; other default metadata should almost never be overridden.

The splunk_metadata.csv file is a true override file and the entire example file should not be copied over to the override. The override file is usually just one or two lines, unless an entire index category (for example netfw) needs to be overridden.
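As an illustration of how a key,metadata,value row overrides a default, consider the following sketch. The default index and sourcetype shown are hypothetical, and SC4S performs this merge internally in syslog-ng rather than in Python:

```python
import csv, io

# Hypothetical defaults for one key (the real defaults live inside SC4S).
metadata = {"juniper_netscreen": {"index": "netfw",
                                  "sourcetype": "netscreen:firewall"}}

# A one-line override file, as in the juniper_netscreen example above.
override_file = "juniper_netscreen,index,ns_index\n"

# Each row is key, metadata field, value; the override wins over the default.
for key, field, value in csv.reader(io.StringIO(override_file)):
    metadata.setdefault(key, {})[field] = value

print(metadata["juniper_netscreen"]["index"])  # ns_index
```

Note that only the overridden field changes; metadata not mentioned in the override file keeps its default.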

When building a custom SC4S log path, append the splunk_metadata.csv file with an appropriate new key and default for the index. The new key will not exist in the internal lookup or in the example file. Care should be taken during log path design to choose appropriate index, sourcetype and template defaults so that admins are not compelled to override them. If the custom log path is later added to the list of SC4S-supported sources, this addendum can be removed.

The splunk_metadata.csv.example file is provided for reference only and is not used directly by SC4S. It is an exact copy of the internal file, and can therefore change from release to release. Be sure to check the example file to make sure the keys for any overrides map correctly to the ones in the example file.

"},{"location":"configuration/#override-index-or-metadata-based-on-host-ip-or-subnet-compliance-overrides","title":"Override index or metadata based on host, ip, or subnet (compliance overrides)","text":"

In some cases you can provide the same overrides based on PCI scope, geography, or other criteria. Use a file that uniquely identifies these source exceptions via syslog-ng filters, which map to an associated lookup of alternate indexes, sources, or other metadata. Indexed fields can also be added to further classify the data.

The csv file provides three columns: filter name, field name, and value. Filter names in the conf file must match one or more corresponding filter name rows in the csv file. The field name column obeys the following convention:

This file construct is best shown by an example. Here is an example of a compliance_meta_by_source.conf file and its corresponding compliance_meta_by_source.csv file:

filter f_test_test {\n   host(\"something-*\" type(glob)) or\n   netmask(192.168.100.1/24)\n};\n
f_test_test,.splunk.index,\"pciindex\"\nf_test_test,fields.compliance,\"pci\"\n

Ensure that the filter names in the conf file match one or more rows in the csv file. Any incoming message with a hostname starting with something- or arriving from a netmask of 192.168.100.1/24 will match the f_test_test filter, and the corresponding entries in the csv file will be checked for overrides. The new index is pciindex, and an indexed field named compliance will be sent to Splunk with its value set to pci. To add additional overrides, add another filter foo_bar {}; stanza to the conf file, then add appropriate entries to the csv file that match the filter names to the overrides.
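A rough model of how the filter and csv work together is sketched below. The glob and netmask mirror the f_test_test example, while the matching itself is done by syslog-ng inside SC4S, not Python. (The network is written as 192.168.100.0/24, the canonical form of the 192.168.100.1/24 netmask in the example.)

```python
import csv, fnmatch, io, ipaddress

# The two override rows from the example csv file above.
overrides_csv = ('f_test_test,.splunk.index,pciindex\n'
                 'f_test_test,fields.compliance,pci\n')

def f_test_test(host, ip):
    # mirrors: host("something-*" type(glob)) or netmask(192.168.100.1/24)
    return fnmatch.fnmatch(host, "something-*") or \
        ipaddress.ip_address(ip) in ipaddress.ip_network("192.168.100.0/24")

event = {"host": "something-edge01", "ip": "10.20.30.40"}
if f_test_test(event["host"], event["ip"]):
    # apply every csv row whose filter name matched
    for name, field, value in csv.reader(io.StringIO(overrides_csv)):
        if name == "f_test_test":
            event[field] = value

print(event[".splunk.index"], event["fields.compliance"])  # pciindex pci
```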

Take care that your syntax is correct; for more information on proper syslog-ng syntax, see the syslog-ng documentation. A syntax error will cause the runtime process to abort in the \u201cpreflight\u201d phase at startup.

To update your changes, restart SC4S.

"},{"location":"configuration/#drop-all-data-by-ip-or-subnet-deprecated","title":"Drop all data by IP or subnet (deprecated)","text":"

Using vendor_product_by_source to null queue is now a deprecated task. See the supported method for dropping data in Filtering events from output.

"},{"location":"configuration/#splunk-connect-for-syslog-output-templates-syslog-ng-templates","title":"Splunk Connect for Syslog output templates (syslog-ng templates)","text":"

Splunk Connect for Syslog uses the syslog-ng template mechanism to format the output event that will be sent to Splunk. These templates can format the messages in a number of ways, including straight text and JSON, and can utilize the many syslog-ng \u201cmacros\u201d fields to specify what gets placed in the event delivered to the destination. The following table is a list of the templates used in SC4S, which can be used for metadata override. New templates can also be added by the administrator in the \u201clocal\u201d section for local destinations; pay careful attention to the syntax as the templates are \u201clive\u201d syslog-ng config code.

Template name Template contents Notes t_standard ${DATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template for most RFC3164 (standard syslog) traffic. t_msg_only ${MSGONLY} syslog-ng $MSG is sent, no headers (host, timestamp, etc.). t_msg_trim $(strip $MSGONLY) Similar to syslog-ng $MSG with whitespace stripped. t_everything ${ISODATE} ${HOST} ${MSGHDR}${MESSAGE} Standard template with ISO date format. t_hdr_msg ${MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_legacy_hdr_msg ${LEGACY_MSGHDR}${MESSAGE} Useful for non-compliant syslog messages. t_hdr_sdata_msg ${MSGHDR}${MSGID} ${SDATA} ${MESSAGE} Useful for non-compliant syslog messages. t_program_msg ${PROGRAM}[${PID}]: ${MESSAGE} Useful for non-compliant syslog messages. t_program_nopid_msg ${PROGRAM}: ${MESSAGE} Useful for non-compliant syslog messages. t_JSON_3164 $(format-json --scope rfc3164 --pair PRI=\"<$PRI>\" --key LEGACY_MSGHDR --exclude FACILITY --exclude PRIORITY) JSON output of all RFC3164-based syslog-ng macros. Useful with the \u201cfallback\u201d sourcetype to aid in new filter development. t_JSON_5424 $(format-json --scope rfc5424 --pair PRI=\"<$PRI>\" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY) JSON output of all RFC5424-based syslog-ng macros; for use with RFC5424-compliant traffic. t_JSON_5424_SDATA $(format-json --scope rfc5424 --pair PRI=\"<$PRI>\" --key ISODATE --exclude DATE --exclude FACILITY --exclude PRIORITY --exclude MESSAGE) JSON output of all RFC5424-based syslog-ng macros except for MESSAGE; for use with RFC5424-compliant traffic."},{"location":"configuration/#about-ebpf","title":"About eBPF","text":"

eBPF helps mitigate congestion from a single heavy data stream by utilizing multithreading, and is used with SC4S_SOURCE_LISTEN_UDP_SOCKETS. To leverage this feature, your host OS must support eBPF and you must run Docker or Podman in privileged mode.

Variable Values Description SC4S_ENABLE_EBPF=yes yes or no(default) Use eBPF to leverage multithreading when consuming from a single connection. SC4S_EBPF_NO_SOCKETS=4 integer Set number of threads to use. For optimal performance this should not be less than the value set for SC4S_SOURCE_LISTEN_UDP_SOCKETS.

To run Docker or Podman in privileged mode, edit the service file /lib/systemd/system/sc4s.service to add the --privileged flag to the Docker or Podman run command:

ExecStart=/usr/bin/podman run \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --privileged \\\n        --env-file=/opt/sc4s/env_file \\\n        --health-cmd=\"/healthcheck.sh\" \\\n        --health-interval=10s --health-retries=6 --health-timeout=6s \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\n

"},{"location":"configuration/#change-your-status-port","title":"Change your status port","text":"

Use SC4S_LISTEN_STATUS_PORT to change the \u201cstatus\u201d port used by the internal health check process. The default value is 8080.

"},{"location":"create-parser/","title":"Create a parser","text":"

SC4S parsers perform operations that would normally be performed during index time, including linebreaking, source and sourcetype setting, and timestamping. You can write your own parser if the parsers available in the SC4S package do not meet your needs.

"},{"location":"create-parser/#before-you-start","title":"Before you start","text":""},{"location":"create-parser/#procure-a-raw-log-message","title":"Procure a raw log message","text":"

If you already have a raw log message, you can skip this step. Otherwise, you need to extract one to have something to work with. You can do this in multiple ways; this section describes three methods.

"},{"location":"create-parser/#procure-a-raw-log-message-using-tcpdump","title":"Procure a raw log message using tcpdump","text":"

You can use the tcpdump command to get incoming raw messages on a given port of your server:

tcpdump -n -s 0 -S -i any -v port 8088\n\ntcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes\n09:54:26.051644 IP (tos 0x0, ttl 64, id 29465, offset 0, flags [DF], proto UDP (17), length 466)\n10.202.22.239.41151 > 10.202.33.242.syslog: SYSLOG, length: 438\nFacility local0 (16), Severity info (6)\nMsg: 2022-04-28T16:16:15.466731-04:00 NTNX-21SM6M510425-B-CVM audispd[32075]: node=ntnx-21sm6m510425-b-cvm type=SYSCALL msg=audit(1651176975.464:2828209): arch=c000003e syscall=2 success=yes exit=6 a0=7f2955ac932e a1=2 a2=3e8 a3=3 items=1 ppid=29680 pid=4684 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=964698 comm=\"sshd\" exe=\"/usr/sbin/sshd\" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key=\"logins\"\\0x0a\n
"},{"location":"create-parser/#procure-a-raw-log-message-using-wireshark","title":"Procure a raw log message using Wireshark","text":"

Once you get your stream of messages, copy one of them. Note that in UDP there are not usually any message separators. You can also read the logs using Wireshark from the .pcap file. From Wireshark go to Statistics > Conversations, then click on Follow Stream:

"},{"location":"create-parser/#procure-a-raw-log-message-by-saving-it-in-splunk","title":"Procure a raw log message by saving it in Splunk","text":"

See Obtaining \u201cOn-the-wire\u201d Raw Events.

"},{"location":"create-parser/#create-a-unit-test","title":"Create a unit test","text":"

To create a unit test, use the existing test case that is most similar to your use case. The naming convention is test_vendor_product.py.

  1. Make sure that your log is being parsed correctly by creating a test case. Assuming you have a raw message like this:

<14>1 2022-03-30T11:17:11.900862-04:00 host - - - - Carbon Black App Control event: text=\"File 'c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll' [c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363] would have blocked if the rule was not in Report Only mode.\" type=\"Policy Enforcement\" subtype=\"Execution block (unapproved file)\" hostname=\"CORP\\USER\" username=\"NT AUTHORITY\\SYSTEM\" date=\"3/30/2022 3:16:40 PM\" ip_address=\"10.0.0.3\" process=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\microsoft.tri.sensor.updater.exe\" file_path=\"c:\\program files\\azure advanced threat protection sensor\\2.175.15073.51407\\winpcap\\x86\\packet.dll\" file_name=\"packet.dll\" file_hash=\"c4e671bf409076a6bf0897e8a11e6f1366d4b21bf742c5e5e116059c9b571363\" policy=\"High Enforcement - Domain Controllers\" rule_name=\"Report read-only memory map operations on unapproved executables by .NET applications\" process_key=\"00000433-0000-23d8-01d8-44491b26f203\" server_version=\"8.5.4.3\" file_trust=\"-2\" file_threat=\"-2\" process_trust=\"-2\" process_threat=\"-2\" prevalence=\"50\"

  2. Now run the test, for example:

    poetry run pytest -v --tb=long \\\n--splunk_type=external \\\n--splunk_hec_token=<HEC_TOKEN> \\\n--splunk_host=<HEC_ENDPOINT> \\\n--sc4s_host=<SC4S_IP> \\\n--junitxml=test-results/test.xml \\\n-n <NUMBER_OF_JOBS> \\\ntest/test_vendor_product.py\n

  3. The parsed log should appear in Splunk:

In this example the message is being parsed as a generic nix:syslog sourcetype. This means that the message format complied with RFC standards, and SC4S could correctly identify the format fields in the message.
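To confirm the sourcetype assignment, you can run a search in Splunk similar to the following (the wildcard index and search string are assumptions; narrow them to your environment):

```spl
index=* sourcetype="nix:syslog" "Carbon Black App Control event"
```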

"},{"location":"create-parser/#create-a-parser_1","title":"Create a parser","text":"

To assign your messages to the proper index and sourcetype you will need to create a parser. Your parser must be declared in package/etc/conf.d/conflib. The naming convention is app-type-vendor_product.conf.

  1. If you find a similar parser in SC4S, you can use it as a reference. In the parser, make sure you assign the proper sourcetype, index, vendor, product, and template. The template defines how your message is formatted before it is sent to Splunk.

The most basic configuration will forward raw log data with correct metadata, for example:

block parser app-syslog-vmware_cb-protect() {\n    channel {\n        rewrite {\n            r_set_splunk_dest_default(\n                index(\"epintel\")\n                sourcetype('vmware:cb:protect')\n                vendor(\"vmware\")\n                product(\"cb-protect\")\n                template(\"t_msg_only\")\n            );\n        };\n    };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n    filter {\n        message('Carbon Black App Control event:  '  type(string)  flags(prefix));\n    };  \n    parser { app-syslog-vmware_cb-protect(); };\n};\n
All messages that start with the string Carbon Black App Control event: will now be routed to the proper index and assigned the given sourcetype. For more information about message filtering, see the sources documentation.

  2. To apply more transformations, extend the parser:

    block parser app-syslog-vmware_cb-protect() {\n    channel {\n        rewrite {\n            r_set_splunk_dest_default(\n                index(\"epintel\")\n                sourcetype('vmware:cb:protect')\n                vendor(\"vmware\")\n                product(\"cb-protect\")\n                template(\"t_kv_values\")\n            );\n        };\n\n        parser {\n            csv-parser(delimiters(chars('') strings(': '))\n                       columns('header', 'message')\n                       prefix('.tmp.')\n                       flags(greedy, drop-invalid));\n            kv-parser(\n                prefix(\".values.\")\n                pair-separator(\" \")\n                template('${.tmp.message}')\n            );\n        };\n    };\n};\napplication app-syslog-vmware_cb-protect[sc4s-syslog] {\n    filter {\n        message('Carbon Black App Control event:  '  type(string)  flags(prefix));\n    };  \n    parser { app-syslog-vmware_cb-protect(); };\n};\n
    This example first uses csv-parser to split the raw log message into two separate fields: header (the Carbon Black App Control event prefix) and message (the rest of the raw log). kv-parser then extracts all key-value pairs from the message field.

  3. To test your parser, run the previously created test case. If you need more debugging, use docker ps to see your running containers and docker logs to see what\u2019s happening to the parsed message.

  4. Commit your changes and open a pull request.

"},{"location":"dashboard/","title":"SC4S Metrics and Events Dashboard","text":"

The SC4S Metrics and Events dashboard lets you monitor metrics and event flows for all SC4S instances sending data to a chosen Splunk platform.

"},{"location":"dashboard/#functionalities","title":"Functionalities","text":""},{"location":"dashboard/#overview-metrics","title":"Overview metrics","text":"

The SC4S Metrics and Events dashboard displays the cumulative sum of received and dropped messages for all SC4S instances over a chosen interval within the specified time range. By default the interval is set to 30 seconds and the time range to 15 minutes.

The Received Messages panel can be used as a heartbeat metric. A healthy SC4S instance should send at least one message every 30 seconds. This internal metrics message is included in the count.

Keep the Dropped Messages panel at a constant level of 0. If SC4S drops messages due to filters, slow performance, or for any other reason, the number of dropped messages will persist until the instance restarts. The Dropped Messages panel does not include potential UDP messages dropped from the port buffer, which SC4S is not able to track.

"},{"location":"dashboard/#single-instance-metrics","title":"Single instance metrics","text":"

You can display the instance name and SC4S version for a specific SC4S instance (available in versions 3.16.0 and later).

This dashboard also displays a timechart of deltas for received, queued, and dropped messages for a specific SC4S instance.

"},{"location":"dashboard/#single-instance-events","title":"Single instance events","text":"

You can analyze traffic processed by an SC4S instance by visualizing the following events data:

"},{"location":"dashboard/#install-the-dashboard","title":"Install the dashboard","text":"
  1. In the Splunk platform, open Search > Dashboards.
  2. Click Create New Dashboard and create an empty dashboard. Be sure to choose Classic Dashboards.
  3. In the \u201cEdit Dashboard\u201d view, go to Source and replace the initial XML with the contents of dashboard/dashboard.xml published in the SC4S repository.
  4. Save your changes. Your dashboard is ready to use.
"},{"location":"destinations/","title":"Supported SC4S destinations","text":"

You can configure Splunk Connect for Syslog to use any destination available in syslog-ng OSE. Helpers manage configuration for the three most common destination needs:

"},{"location":"destinations/#hec-destination","title":"HEC destination","text":""},{"location":"destinations/#configuration-options","title":"Configuration options","text":"Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_URL url URL of the Splunk endpoint, this can be a single URL or a space-separated list. SC4S_DEST_SPLUNK_HEC_<ID>_TOKEN string Splunk HTTP Event Collector token. SC4S_DEST_SPLUNK_HEC_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d. SC4S_DEST_SPLUNK_HEC_<ID>_TLS_VERIFY yes(default) or no Verify HTTP(s) certificates. SC4S_DEST_SPLUNK_HEC_<ID>_HTTP_COMPRESSION yes or no(default) Compress outgoing HTTP traffic using the gzip method."},{"location":"destinations/#http-compression","title":"HTTP Compression","text":"

HTTP traffic compression helps to reduce network bandwidth usage when sending to a HEC destination. SC4S currently supports gzip for compressing transmitted traffic. Using the gzip compression algorithm results in higher CPU load and increased utilization of RAM, and may decrease throughput by 6% to 7%. Compression affects the content but does not affect the HTTP headers. Enable batch packet processing to make compression efficient, as this allows a large number of logs to be compressed at once.
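For example, compression for the default HEC destination can be enabled in the env_file (shown here for the DEFAULT destination ID; substitute your own):

```conf
#env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_HTTP_COMPRESSION=yes
```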

Variable Values Description SC4S_DEST_SPLUNK_HEC_<ID>_HTTP_COMPRESSION yes or no(default) Compress outgoing HTTP traffic using the gzip method."},{"location":"destinations/#syslog-standard-destination","title":"Syslog standard destination","text":"

The use of \u201csyslog\u201d as a network protocol has been defined in Internet Engineering Task Force standards RFC5424, RFC5425, and RFC6587.

Note: SC4S sending messages to a syslog destination behaves like a relay. This means overwriting some original information, for example the original source IP.

"},{"location":"destinations/#configuration-options_1","title":"Configuration options","text":"Variable Values Description SC4S_DEST_SYSLOG_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_SYSLOG_<ID>_PORT number 601 is the default when framed, 514 is the default when not framed. SC4S_DEST_SYSLOG_<ID>_IETF yes/no, the default value is yes. Use IETF Standard frames. SC4S_DEST_SYSLOG_<ID>_TRANSPORT tcp,udp,tls. The default value is tcp. SC4S_DEST_SYSLOG_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-rfc5424-with-frames","title":"Send RFC5424 with frames","text":"

In this example, SC4S will send Cisco ASA events as RFC5424 syslog to a third party system.

The message format will be similar to: 123 <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID.

The destination name is taken from the environment variable; each destination must have a unique name. This value should be short and meaningful.

#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'asa' eq \"${fields.sc4s_product}\"\n    };    \n};\n
"},{"location":"destinations/#send-rfc5424-without-frames","title":"Send RFC5424 without frames","text":"

In this example SC4S will send Cisco ASA events to a third party system without frames.

The message format will be similar to: <166>1 2022-02-02T14:59:55.000+00:00 kinetic-charlie - - - - %FTD-6-430003: DeviceUUID.

#env_file\nSC4S_DEST_SYSLOG_MYSYS_HOST=172.17.0.1\nSC4S_DEST_SYSLOG_MYSYS_PORT=514\nSC4S_DEST_SYSLOG_MYSYS_MODE=SELECT\n# set to yes for IETF frames\nSC4S_DEST_SYSLOG_MYSYS_IETF=no\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_syslog_mysys.conf\napplication sc4s-lp-cisco_asa_d_syslog_mysys[sc4s-lp-dest-select-d_syslog_mysys] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'asa' eq \"${fields.sc4s_product}\"\n    };    \n};\n
"},{"location":"destinations/#legacy-bsd","title":"Legacy BSD","text":"

In many cases, the configuration actually required is legacy BSD syslog, which is not a standard but was documented in RFC3164.

Variable Values Description SC4S_DEST_BSD_<ID>_HOST fqdn or ip The FQDN or IP of the target. SC4S_DEST_BSD_<ID>_PORT number, the default is 514. SC4S_DEST_BSD_<ID>_TRANSPORT tcp,udp,tls, the default is tcp. SC4S_DEST_BSD_<ID>_MODE string \u201cGLOBAL\u201d or \u201cSELECT\u201d."},{"location":"destinations/#send-legacy-bsd","title":"Send legacy BSD","text":"

The message format will be similar to: <134>Feb 2 13:43:05.000 horse-ammonia CheckPoint[26203].

#env_file\nSC4S_DEST_BSD_MYSYS_HOST=172.17.0.1\nSC4S_DEST_BSD_MYSYS_PORT=514\nSC4S_DEST_BSD_MYSYS_MODE=SELECT\n
#filename: /opt/sc4s/local/config/app_parsers/selectors/sc4s-lp-cisco_asa_d_bsd_mysys.conf\napplication sc4s-lp-cisco_asa_d_bsd_mysys[sc4s-lp-dest-select-d_bsd_mysys] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'asa' eq \"${fields.sc4s_product}\"\n    };    \n};\n
"},{"location":"destinations/#multiple-destinations","title":"Multiple destinations","text":"

SC4S can send data to multiple destinations. In the default setup, the default destination accepts all events. This ensures that at least one destination receives the event, which helps avoid data loss due to misconfiguration. The following examples demonstrate options for configuring additional HEC destinations.

"},{"location":"destinations/#send-all-events-to-the-additional-destination","title":"Send all events to the additional destination","text":"

After adding this example to your basic configuration SC4S will send all events both to SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_OTHER_URL.

#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=GLOBAL\n

"},{"location":"destinations/#send-only-selected-events-to-the-additional-destination","title":"Send only selected events to the additional destination","text":"

After adding this example to your basic configuration SC4S will send Cisco IOS events to SC4S_DEST_SPLUNK_HEC_OTHER_URL.

#Note \"OTHER\" should be a meaningful name\nSC4S_DEST_SPLUNK_HEC_OTHER_URL=https://splunk:8088\nSC4S_DEST_SPLUNK_HEC_OTHER_TOKEN=${SPLUNK_HEC_TOKEN}\nSC4S_DEST_SPLUNK_HEC_OTHER_TLS_VERIFY=no\nSC4S_DEST_SPLUNK_HEC_OTHER_MODE=SELECT\n

application sc4s-lp-cisco_ios_dest_fmt_other[sc4s-lp-dest-select-d_hec_fmt_other] {\n    filter {\n        'cisco' eq \"${fields.sc4s_vendor}\"\n        and 'ios' eq \"${fields.sc4s_product}\"\n    };\n};\n
"},{"location":"destinations/#advanced-topic-configure-filtered-alternate-destinations","title":"Advanced topic: Configure filtered alternate destinations","text":"

You may require more granularity for a specific data source. For example, you may want to send all Cisco ASA debug traffic to Cisco Prime for analysis. To accommodate this, filtered alternate destinations let you supply a filter to redirect a portion of a source\u2019s traffic to a list of alternate destinations and, optionally, prevent matching events from being sent to Splunk. You configure this using environment variables:

Variable Values Description SC4S_DEST_<VENDOR_PRODUCT>_ALT_FILTER syslog-ng filter Filter to determine which events are sent to alternate destinations. SC4S_DEST_<VENDOR_PRODUCT>_FILTERED_ALTERNATES Comma or space-separated list of syslog-ng destinations. Send filtered events to alternate syslog-ng destinations using the VENDOR_PRODUCT syntax, for example, SC4S_DEST_CISCO_ASA_FILTERED_ALTERNATES.

This is an advanced capability, and filters and destinations using proper syslog-ng syntax must be constructed before using this functionality.

The regular destinations, including the primary HEC destination or configured archive destination, for example d_hec or d_archive, are not included for events matching the configured alternate destination filter. If an event matches the filter, the list of filtered alternate destinations completely replaces any mainline destinations, including defaults and global or source-based standard alternate destinations. Include them in the filtered destination list if desired.

Since the filtered alternate destinations completely replace the mainline destinations, including HEC to Splunk, a filter that matches all traffic can be used with a destination list that does not include the standard HEC destination to effectively turn off HEC for a given data source.
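As a sketch, such a configuration might look like the following in the env_file; the level(debug) filter and the d_syslog_cisco_prime destination name are hypothetical, and the destination must match one you have defined:

```conf
#env_file
# hypothetical: route only Cisco ASA debug-severity events away from Splunk
SC4S_DEST_CISCO_ASA_ALT_FILTER=level(debug)
SC4S_DEST_CISCO_ASA_FILTERED_ALTERNATES=d_syslog_cisco_prime
```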

"},{"location":"edge_processor/","title":"Edge Processor integration guide (Experimental)","text":""},{"location":"edge_processor/#intro","title":"Intro","text":"

You can use the Edge Processor to:

"},{"location":"edge_processor/#how-it-works","title":"How it works","text":"
stateDiagram\n    direction LR\n\n    SC4S: SC4S\n    EP: Edge Processor\n    Dest: Another destination\n    Device: Your device\n    S3: AWS S3\n    Instance: Instance\n    Pipeline: Pipeline with SPL2\n\n    Device --> SC4S: Syslog protocol\n    SC4S --> EP: HEC\n    state EP {\n      direction LR\n      Instance --> Pipeline\n    }\n    EP --> Splunk\n    EP --> S3\n    EP --> Dest
"},{"location":"edge_processor/#set-up-the-edge-processor-for-sc4s","title":"Set up the Edge Processor for SC4S","text":"

SC4S uses the same protocol to communicate with both Splunk and the Edge Processor. For that reason the setup process is very similar, with some differences.

Set up on Docker / PodmanSet up on Kubernetes
  1. In the env_file, configure the HEC URL with the IP of the managed instance that you registered on Edge Processor.
  2. Add your HEC token. You can find your token in the Edge Processor \u201cglobal settings\u201d page.
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  1. In your values.yaml, configure the HEC URL with the IP of the managed instance that you registered on Edge Processor.
  2. Provide the hec_token. You can find this token on the Edge Processor\u2019s \u201cglobal settings\u201d page.
splunk:\n  hec_url: \"http://x.x.x.x:8088\"\n  hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n  hec_verify_tls: \"no\"\n
"},{"location":"edge_processor/#mtls-encryption","title":"mTLS encryption","text":"

Before setup, generate mTLS certificates. Server mTLS certificates should be uploaded to Edge Processor, and client certificates should be used with SC4S.

Rename the certificate files. SC4S requires the following names:

Set up on Docker / PodmanSet up on Kubernetes
  1. Use HTTPS in HEC url: SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://x.x.x.x:8088.
  2. Move your client mTLS certificates (key.pem, cert.pem, ca_cert.pem) to /opt/sc4s/tls/hec.
  3. Mount /opt/sc4s/tls/hec to /etc/syslog-ng/tls/hec using docker/podman volumes.
  4. Define mounting mTLS point for HEC: SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_MOUNT=/etc/syslog-ng/tls/hec.
  5. Start or restart SC4S.
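The volume mount in step 3 might be expressed as a flag on the container run command; this is a sketch with the other flags elided, and the :z SELinux label is an assumption that depends on your host configuration:

```shell
# mount the host TLS directory into the container
podman run -v /opt/sc4s/tls/hec:/etc/syslog-ng/tls/hec:z ... <sc4s_image>
```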
  1. Add the secret name of the mTLS certificates to the values.yaml file:
splunk:\n  hec_url: \"https://x.x.x.x:8088\"\n  hec_token: \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n  hec_tls: \"hec-tls-secret\"\n
  1. Add your mTLS certificates to the charts/splunk-connect-for-syslog/secrets.yaml file:
hec_tls:\n  secret: \"hec-tls-secret\"\n  value:\n    key: |\n      -----BEGIN PRIVATE KEY-----\n      Example key\n      -----END PRIVATE KEY-----\n    cert: |\n      -----BEGIN CERTIFICATE-----\n      Example cert\n      -----END CERTIFICATE-----\n    ca: |\n      -----BEGIN CERTIFICATE-----\n      Example ca\n      -----END CERTIFICATE-----\n
  1. Encrypt your secrets.yaml:
ansible-vault encrypt charts/splunk-connect-for-syslog/secrets.yaml\n
  1. Add the IP address for your cluster nodes to the inventory file ansible/inventory/inventory_microk8s_ha.yaml.

  2. Deploy the Ansible playbook:

ansible-playbook -i ansible/inventory/inventory_microk8s_ha.yaml ansible/playbooks/microk8s_ha.yml --ask-vault-pass\n
"},{"location":"edge_processor/#scaling-edge-processor","title":"Scaling Edge Processor","text":"

To scale, you can distribute traffic among Edge Processor managed instances. To set this up, update the HEC URL with a comma-separated list of the URLs of your managed instances.

Set up on Docker/PodmanSet up on Kubernetes

Update HEC URL in env_file:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\n

Update HEC URL in values.yaml:

splunk:\n  hec_url: \"http://x.x.x.x:8088,http://x.x.x.x:8088,http://x.x.x.x:8088\"\n
"},{"location":"experiments/","title":"Current experimental features","text":""},{"location":"experiments/#3120","title":"> 3.12.0","text":"

SC4S_USE_NAME_CACHE=yes supports IPv6.

"},{"location":"experiments/#300","title":"> 3.0.0","text":""},{"location":"experiments/#ebpf","title":"eBPF","text":"

eBPF is a feature that leverages Linux kernel infrastructure to evenly distribute the load, especially in cases where there is a huge stream of messages incoming from a single appliance. To use the eBPF feature, you must have a host machine with an OS that supports eBPF. eBPF should be used only when other ways of tuning SC4S fail. See the instructions for configuration details. To learn more, visit this blog post.

"},{"location":"experiments/#sc4s-lite","title":"SC4S Lite","text":"

In the new 3.0.0 update, we\u2019ve introduced SC4S Lite. SC4S Lite is designed for those who prefer speed and custom filters over the pre-set ones that come with standard SC4S. It\u2019s similar to our default version, but without the pre-defined filters and complex app_parser topics. More information can be found on the dedicated page.

"},{"location":"experiments/#2130","title":"> 2.13.0","text":""},{"location":"faq/","title":"Splunk Connect for Syslog (SC4S) Frequently Asked Questions","text":"

Q: The universal forwarder with file-based architecture has been the documented Splunk best practice for a long time. Why should I switch to an HTTP Event Collector (HEC) based architecture?

A:

Q: Is the Splunk HTTP Event Collector (HEC) as reliable as the Splunk universal forwarder?

A: HEC utilizes standard HTTP mechanisms to confirm that the endpoint is responsive before sending data. The HEC architecture allows you to use an industry standard load balancer between SC4S and the indexer or the included load balancing capability built into SC4S itself.

Q: What if my team doesn\u2019t know how to manage containers?

A: Using a runtime like Podman to deploy and manage SC4S containers is exceptionally easy even for those with no prior \u201ccontainer experience\u201d. Our application of container technology behaves much like a packaging system. The interaction uses \u201csystemctl\u201d commands a Linux admin would use for other common administration activities. The best approach is to try it out in a lab to see what the experience is like for yourself!

Q: Can my team use SC4S with Windows?

A: You can now run Docker on Windows! Microsoft has introduced public preview technology for Linux containers on Windows. Alternatively, a minimal CentOS/Ubuntu Linux VM running on Windows Hyper-V is a reliable production-grade choice.

Q: My company has the traditional universal forwarder and files-based syslog architecture deployed and running, should I rip and replace a working installation with SC4S?

A: Generally speaking, if a deployment is working and you are happy with it, it\u2019s best to leave it as is until there is a need for major deployment changes, such as scaling your configuration. The search performance improvements from better data distribution are one benefit, so if Splunk users have complained about search performance or you are curious about possible performance gains, we recommend analyzing the data distribution across your indexers.

Q: What is the best way to migrate to SC4S from an existing syslog architecture?

A: When exploring migration to SC4S we strongly recommend that you experiment in a lab prior to deployment to production. There are a couple of approaches to consider:

  1. Configure the new SC4S infrastructure for all your sources.
  2. Confirm all the sourcetypes are being indexed as expected.
  3. Stop the existing syslog servers.
  1. Stand up the new SC4S infrastructure in its default configuration.
  2. Confirm that all the sourcetypes are being indexed as expected.
  3. Retire the old syslog servers listening on port 514.
  4. Once the 514 sources are complete, migrate any other sources. To do this, configure SC4S filters to explicitly identify them either through a unique port, hostID, or CIDR block.
  5. Once you confirm that each sourcetype is successfully indexed, disable the old syslog configurations for that source.

Q: How can SC4S be deployed to provide high availability?

A: The syslog protocol was not designed with HA as a goal, so configuration can be challenging. See Performant AND Reliable Syslog UDP is best for an excellent overview of this topic.

The syslog protocol limits the extent to which you can make any syslog collection architecture HA; at best it can be made \u201cmostly available\u201d. To do this, keep it simple and use OS clustering (shared IP) or even just VMs with vMotion. This simple architecture will encounter far less data loss over time than more complicated schemes. Another possible option is containerization HA schemes for SC4S (centered around MicroK8s) that will take some of the administrative burden of clustering away, but still functions as OS clustering under the hood.
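As one concrete form of the shared-IP approach, a keepalived VRRP configuration can float a virtual IP between two SC4S hosts; this is a sketch in which the interface, router ID, priority, and virtual IP are all assumptions to adapt to your network:

```conf
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance SC4S_VIP {
    state MASTER            # use BACKUP with a lower priority on the standby node
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.0.2.10          # syslog sources send to this address
    }
}
```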

Q: I\u2019m worried about data loss if SC4S goes down. Could I feed syslog to redundant SC4S servers to provide HA, without creating duplicate events in Splunk?

A: In many system design decisions there is some level of compromise. Any network protocol that doesn\u2019t have an application level ACK will lose data because speed is selected over reliability in the design. This is the case with syslog. Use a clustered IP with an active/passive node for a level of resilience while keeping complexity to a minimum. It could be possible to implement a far more complex solution utilizing an additional intermediary technology like Kafka, however the costs may outweigh the real world benefits.

Q: If the XL reference HW can handle just under 1 terabyte per day, how can SC4S be scaled to handle large deployments of many terabytes per day?

A: SC4S is a distributed architecture. SC4S instances should be deployed in the same VLAN as the source devices. This means that each SC4S instance will only see a subset of the total syslog traffic in a large deployment. Even in a deployment of 100 terabytes or greater, the individual SC4S instances will see loads in gigabytes per day rather than terabytes per day.

Q: SC4S is being blocked by fapolicyd, how do I fix that?

A: Create a rule that allows running SC4S in fapolicyd configuration:
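A sketch of such a rule follows; the file name, the container-storage path, and the reload commands are assumptions, so adjust them to your distribution and container runtime:

```conf
# /etc/fapolicyd/rules.d/80-sc4s.rules (hypothetical file name)
# allow execution of files under the container storage directory
allow perm=any all : dir=/var/lib/containers/

# then regenerate the compiled rules and restart the daemon:
#   fagenrules --load && systemctl restart fapolicyd
```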

Q: My postfilter configuration is not being applied, even though I have not defined any postfilter for the mentioned source. Why?

A: There may be an out-of-the-box postfilter for the source that is being applied. Validate this by checking the value of sc4s_tags in Splunk. To resolve this, see [sc4s-finalfilter]. Do not use this resolution in any other situation, as it can add to the cost of data processing.

Q: Where should the configuration for vendors be placed? There are several app-parsers folders and directories; which one should be used? Does this also mean that CSV files for metadata are no longer required?

A: The configuration for vendors should be placed in /opt/sc4s/local/config/*/.conf. Most of the folders are placeholders; the configuration will work in any of these folders as long as the file has a .conf extension. CSV files should be placed in local/context/*.csv. Using splunk_metadata.csv is appropriate for metadata overrides, but use a .conf file for everything else in place of other CSV files.

Q: Can we have a file in which we can create all default indexes in one effort?

A: Refer to indexes.conf, which creates all the indexes in one effort. This file also has lastChanceIndex configured, which you can use if it fits your requirements. For more information on this file, refer to the Splunk docs.
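A minimal sketch of what such a file contains; the epintel index name comes from the parser examples in this documentation, while the paths and the lastChanceIndex choice are assumptions:

```conf
# indexes.conf (sketch)
[default]
lastChanceIndex = main

[epintel]
homePath   = $SPLUNK_DB/epintel/db
coldPath   = $SPLUNK_DB/epintel/colddb
thawedPath = $SPLUNK_DB/epintel/thaweddb
```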

"},{"location":"lb/","title":"About using load balancers","text":"

Load balancers are not a best practice for SC4S. The exception to this is a narrow use case where the syslog server is exposed to untrusted clients on the internet, for example, with Palo Alto Cortex.

"},{"location":"lb/#considerations","title":"Considerations","text":""},{"location":"lb/#alternatives","title":"Alternatives","text":"

The best deployment model for high availability is a Microk8s based deployment with MetalLB in BGP mode. This model uses a special class of load balancer that is implemented as destination network translation.

"},{"location":"lite/","title":"SC4S Lite","text":""},{"location":"lite/#about-sc4s-lite","title":"About SC4S Lite","text":"

SC4S Lite provides a scalable, performance-oriented solution for ingesting syslog data into Splunk. Pluggable modular parsers offer you the flexibility to incorporate custom data processing logic to suit specific use cases.

"},{"location":"lite/#architecture","title":"Architecture","text":""},{"location":"lite/#sc4s-lite_1","title":"SC4S Lite","text":"

SC4S Lite provides a lightweight, high-performance SC4S solution.

"},{"location":"lite/#pluggable-modules","title":"Pluggable Modules","text":"

Pluggable modules are predefined modules that you can enable and disable through configuration files. Each pluggable module represents a set of parsers for a vendor supported by SC4S. You can only enable or disable modules; you cannot create new modules or update existing ones. For more information see the pluggable modules documentation.

"},{"location":"lite/#splunk-enterprise-or-splunk-cloud","title":"Splunk Enterprise or Splunk Cloud","text":"

You configure SC4S Lite to send syslog data to Splunk Enterprise or Splunk Cloud. The Splunk Platform provides comprehensive analysis, searching, and visualization of your processed data.

"},{"location":"lite/#how-sc4s-lite-processes-your-data","title":"How SC4S Lite processes your data","text":"
  1. Source systems send syslog data to SC4S Lite. The data may be transmitted using UDP, TCP, or RELP, depending on your system\u2019s capabilities and configurations.
  2. SC4S Lite receives the syslog data and routes it through the appropriate parsers, as defined by you during configuration.
  3. The parsers in the pluggable module process the data, such as parsing, filtering, and enriching the data with metadata.
  4. SC4S Lite forwards the processed syslog data to the Splunk platform over the HTTP Event Collector (HEC).
"},{"location":"lite/#security-considerations","title":"Security considerations","text":"

SC4S Lite is built on a lightweight Alpine container, which has very few known vulnerabilities. SC4S Lite supports secure syslog data transmission protocols such as RELP and TLS over TCP to protect your data in transit. Additionally, the environment in which SC4S Lite is deployed can further enhance data security.

"},{"location":"lite/#scalability-and-performance","title":"Scalability and performance","text":"

SC4S Lite provides superior performance and scalability thanks to the lightweight architecture and pluggable parsers, which distribute the processing load. It is also packaged with eBPF functionality to further enhance performance. Note that actual performance may depend on factors such as your server capacity and network bandwidth.

"},{"location":"lite/#implement-sc4s-lite","title":"Implement SC4S Lite","text":"

To implement SC4S Lite:

  1. Set up the SC4S Lite environment.
  2. Install SC4S Lite following the instructions for your chosen environment with the following changes:
  1. Configure source systems to send syslog data to SC4S Lite.
  2. Enable or disable your pluggable modules. All pluggable modules are enabled by default.
  3. Test the setup to ensure that your syslog data is correctly received, processed, and forwarded to Splunk.
"},{"location":"performance/","title":"Performance and Sizing","text":"

Performance testing against our lab configuration produces the following results and limitations.

"},{"location":"performance/#tested-configurations","title":"Tested Configurations","text":""},{"location":"performance/#splunk-cloud-noah","title":"Splunk Cloud Noah","text":""},{"location":"performance/#environment","title":"Environment","text":"
/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=1800 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostname> <sc4s_external_tcp514_port>\n
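loggen prints a summary line when a run completes. To compare runs mechanically, the average rate can be pulled out of that line with sed; a minimal sketch, assuming the summary format shown in the Result sections (the sample line below is copied from those results):

```shell
# Sketch: extract the average message rate (msg/sec) from a loggen summary line.
# The line format is taken from the Result sections of this page; it is not an official loggen API.
line='average rate = 21109.66 msg/sec, count=38023708, time=1801.25, (average) msg size=800, bandwidth=16491.92 kB/sec'
rate=$(printf '%s\n' "$line" | sed -n 's/^average rate = \([0-9.]*\) msg\/sec.*/\1/p')
echo "$rate"
```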
"},{"location":"performance/#result","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21109.66 msg/sec, count=38023708, time=1801.25, (average) msg size=800, bandwidth=16491.92 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 34820.94 msg/sec, count=62687563, time=1800.28, (average) msg size=800, bandwidth=27203.86 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 71929.91 msg/sec, count=129492418, time=1800.26, (average) msg size=800, bandwidth=56195.24 kB/sec average rate = 70894.84 msg/sec, count=127630166, time=1800.27, (average) msg size=800, bandwidth=55386.60 kB/sec m5zn.2xlarge average rate = 85419.09 msg/sec, count=153778825, time=1800.29, (average) msg size=800, bandwidth=66733.66 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#splunk-enterprise","title":"Splunk Enterprise","text":""},{"location":"performance/#environment_1","title":"Environment","text":"
/opt/syslog-ng/bin/loggen -i --rate=100000 --interval=600 -P -F --sdata=\"[test name=\\\"stress17\\\"]\" -s 800 --active-connections=10 <local_hostname> <sc4s_external_tcp514_port>\n
"},{"location":"performance/#result_1","title":"Result","text":"SC4S instance root networking slirp4netns networking m5zn.large average rate = 21511.69 msg/sec, count=12930565, time=601.095, (average) msg size=800, bandwidth=16806.01 kB/sec average rate = 21583.13 msg/sec, count=12973491, time=601.094, (average) msg size=800, bandwidth=16861.82 kB/sec average rate = 20738.39 msg/sec, count=37344765, time=1800.75, (average) msg size=800, bandwidth=16201.87 kB/sec m5zn.xlarge average rate = 37514.29 msg/sec, count=22530855, time=600.594, (average) msg size=800, bandwidth=29308.04 kB/sec average rate = 37549.86 msg/sec, count=22552210, time=600.594, (average) msg size=800, bandwidth=29335.83 kB/sec average rate = 35329.28 msg/sec, count=63619825, time=1800.77, (average) msg size=800, bandwidth=27601.00 kB/sec m5zn.2xlarge average rate = 98580.10 msg/sec, count=59157495, time=600.096, (average) msg size=800, bandwidth=77015.70 kB/sec average rate = 99463.10 msg/sec, count=59687310, time=600.095, (average) msg size=800, bandwidth=77705.55 kB/sec average rate = 84733.71 msg/sec, count=152542466, time=1800.26, (average) msg size=800, bandwidth=66198.21 kB/sec"},{"location":"performance/#guidance-on-sizing-hardware","title":"Guidance on sizing hardware","text":""},{"location":"pluggable_modules/","title":"Working with pluggable modules","text":"

SC4S Lite pluggable modules are predefined modules that you can enable or disable by modifying your config.yaml file. This file contains a list of add-ons. See the example and list of available pluggable modules in the config.yaml reference file for more information. After you update config.yaml, mount it to the Docker container to override /etc/syslog-ng/config.yaml.
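For example, a standalone config.yaml that enables only two modules might look like the following sketch (module names taken from the Kubernetes example later on this page; check the reference file for the full supported list):

```yaml
---
addons:
    - cisco
    - paloalto
```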

"},{"location":"pluggable_modules/#install-sc4s-lite-using-docker-compose","title":"Install SC4S Lite using Docker Compose","text":"

The installation process is identical to the installation process for Docker Compose for SC4S, with the following modifications:

volumes:\n    - /path/to/your/config.yaml:/etc/syslog-ng/config.yaml\n
"},{"location":"pluggable_modules/#kubernetes","title":"Kubernetes:","text":"

The installation process is identical to the installation process for Kubernetes for SC4S with the following modifications:

sc4s:\n    addons:\n        config.yaml: |-\n            ---\n            addons:\n                - cisco\n                - paloalto\n                - dell\n
"},{"location":"upgrade/","title":"Upgrading SC4S","text":""},{"location":"upgrade/#upgrade-sc4s","title":"Upgrade SC4S","text":"
  1. For the latest version, use the latest tag for the SC4S image in the sc4s.service unit file. You can also set a specific version in the unit file if desired.
[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n
  2. Restart the service: sudo systemctl restart sc4s

See the release notes for more information.

"},{"location":"upgrade/#upgrade-notes","title":"Upgrade Notes","text":"

Version 3 does not introduce any breaking changes. To upgrade to version 3, review the service file and change the container reference from container2 to container3. For a step-by-step guide, see here.
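The container reference change is a one-line edit; a sketch, shown here against a scratch copy of the unit file (on a real host the file is /lib/systemd/system/sc4s.service and must be edited with sudo):

```shell
# Sketch: rewrite the image reference from container2 to container3 in a copy of the unit file.
unit=/tmp/sc4s.service.copy
printf 'Environment="SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container2:latest"\n' > "$unit"
sed -i 's/container2/container3/' "$unit"
grep 'SC4S_IMAGE' "$unit"
```

After editing the real unit file, reload systemd and restart the service with sudo systemctl daemon-reload and sudo systemctl restart sc4s.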

You may need to migrate legacy log paths or version 1 app-parsers for version 2. To do this, open an issue and attach the original configuration and a compressed pcap of sample data for testing. We will evaluate whether to include the source in an upcoming release.

"},{"location":"upgrade/#upgrade-from-2230","title":"Upgrade from <2.23.0","text":""},{"location":"upgrade/#upgrade-from-2","title":"Upgrade from <2","text":"
#Current app parsers contain one or more lines like the following\nvendor_product('value_here')\n#This must change as shown below; failure to make this change will prevent SC4S from starting\nvendor('value')\nproduct('here')\n
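For app parsers with many such lines, the conversion can be scripted. A rough sketch using GNU sed, assuming the vendor and product are separated by the first underscore (that split point is an assumption; verify each converted parser by hand):

```shell
# Sketch: split a legacy vendor_product('x_y') option on the first underscore
# into separate vendor() and product() options. GNU sed assumed (\n in the replacement).
converted=$(printf "vendor_product('cisco_ios')\n" \
  | sed -E "s/vendor_product\('([^_]+)_(.+)'\)/vendor('\1')\nproduct('\2')/")
printf '%s\n' "$converted"
```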
"},{"location":"v3_upgrade/","title":"Upgrading Splunk Connect for Syslog v2 -> v3","text":""},{"location":"v3_upgrade/#upgrade-process-for-version-newer-than-230","title":"Upgrade process (for version newer than 2.3.0)","text":"

In general, the upgrade process consists of three steps: changing the container version, restarting the service, and validating the result. NOTE: Version 3 of SC4S uses the Alpine Linux distribution as its base image, in contrast to previous versions, which used a UBI (Red Hat) image.

"},{"location":"v3_upgrade/#dockerpodman","title":"Docker/Podman","text":""},{"location":"v3_upgrade/#update-container-image-version","title":"Update container image version","text":"

In the service file /lib/systemd/system/sc4s.service, update the container image reference to version 3 with the latest tag:

[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n

"},{"location":"v3_upgrade/#restart-sc4s-service","title":"Restart sc4s service","text":"

Restart the service: sudo systemctl restart sc4s

"},{"location":"v3_upgrade/#validate","title":"Validate","text":"

After the above command executes successfully, version information becomes visible in the container logs. View the logs with sudo podman logs SC4S for Podman or sudo docker logs SC4S for Docker. Expected output:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=3.0.0\nstarting goss\nstarting syslog-ng \n

If you are upgrading from a version lower than 2.3.0, refer to this guide.

"},{"location":"gettingstarted/","title":"Before you start","text":""},{"location":"gettingstarted/#getting-started","title":"Getting Started","text":"

Splunk Connect for Syslog (SC4S) is a distribution of syslog-ng that simplifies getting your syslog data into Splunk Enterprise and Splunk Cloud. SC4S provides a runtime-agnostic solution that lets you deploy using the container runtime environment of your choice, along with a configuration framework that processes logs out-of-the-box from many popular devices and systems.

"},{"location":"gettingstarted/#planning-deployment","title":"Planning Deployment","text":"

Syslog can refer to multiple message formats as well as, optionally, a wire protocol for event transmission between computer systems over UDP, TCP, or TLS. This protocol minimizes overhead on the sender, favoring performance over reliability. This means any instability or resource constraint can cause data to be lost in transmission.

"},{"location":"gettingstarted/#implementation","title":"Implementation","text":""},{"location":"gettingstarted/#quickstart-guide","title":"Quickstart Guide","text":""},{"location":"gettingstarted/#splunk-setup","title":"Splunk Setup","text":""},{"location":"gettingstarted/#runtime-configuration","title":"Runtime configuration","text":""},{"location":"gettingstarted/ansible-docker-podman/","title":"Podman/Docker","text":"

SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration.

"},{"location":"gettingstarted/ansible-docker-podman/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"
  1. Before running SC4S with Ansible, provide env_file with your Splunk endpoint and HEC token:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your cluster and the host application in the inventory file:
all:\n  hosts:\n  children:\n    node:\n      hosts:\n        node_1:\n          ansible_host:\n

"},{"location":"gettingstarted/ansible-docker-podman/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"
  1. If you have Ansible installed on your host, run the Ansible playbook to deploy SC4S. Otherwise, use the Docker Ansible image provided in the package:
    # From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
  2. If you used the Docker Ansible image in the previous step, then from your container remote shell, authenticate to and run the playbook.
"},{"location":"gettingstarted/ansible-docker-podman/#step-3-validate-your-configuration","title":"Step 3: Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following:

syslog-ng starting up; version='3.28.1'\n
You can verify that all SC4S instances work by checking the sc4s_container field in Splunk. Each instance should have a different container ID. All other fields should be the same.

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
    sudo docker ps\n

docker logs <ID | image name> \n
or:
sudo systemctl status sc4s\n

You should see events similar to the following in the output:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.

"},{"location":"gettingstarted/ansible-docker-swarm/","title":"Docker Swarm","text":"

SC4S installation can be automated with Ansible. To do this, you provide a list of hosts on which you want to run SC4S and the basic configuration, such as the Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have a working understanding of Docker Swarm and be able to set up your Swarm architecture and configuration.

"},{"location":"gettingstarted/ansible-docker-swarm/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"
  1. Before running SC4S with Ansible, provide env_file with your Splunk endpoint and HEC token:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
2. Provide a list of hosts on which you want to run your Docker Swarm cluster and the host application in the inventory file:
all:\n  hosts:\n  children:\n    manager:\n      hosts:\n        manager_node_1:\n          ansible_host:\n\n    worker:\n      hosts:\n        worker_node_1:\n          ansible_host:\n        worker_node_2:\n          ansible_host:\n
3. You can run your cluster with one or more manager nodes. One advantage of hosting SC4S with Docker Swarm is that you can leverage the Swarm internal load balancer. See the Swarm mode documentation from Docker.

  4. You can also provide extra service configurations, for example, the number of replicas, in the /ansible/app/docker-compose.yml file:
    version: \"3.7\"\nservices:\n  sc4s:\n    deploy:\n      replicas: 2\n      ...\n
"},{"location":"gettingstarted/ansible-docker-swarm/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"
  1. If you have Ansible installed on your host, run the Ansible playbook to deploy SC4S. Otherwise, use the Docker Ansible image provided in the package:
    # From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
  2. If you used the Docker Ansible image in Step 1, then from your container remote shell, run the Docker Swarm Ansible playbook.
  3. If your deployment is successful, you can check the state of the Swarm cluster and deployed stack from the manager node remote shell:
NAME   SERVICES   ORCHESTRATOR\nsc4s   1          Swarm\n\nID             NAME        MODE         REPLICAS   IMAGE                                                        PORTS\n1xv9vvbizf3m   sc4s_sc4s   replicated   2/2        ghcr.io/splunk/splunk-connect-for-syslog/container3:latest   :514->514/tcp, :601->601/tcp, :6514->6514/tcp, :514->514/udp\n"},{"location":"gettingstarted/ansible-docker-swarm/#step-3-validate-your-configuration","title":"Step 3: Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

You should see an event similar to the following:

syslog-ng starting up; version='3.28.1'\n
You can verify that all services in the Swarm cluster work by checking the sc4s_container field in Splunk. Each service should have a different container ID. All other fields should be the same.

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
sudo docker|podman ps\n
docker|podman logs <ID | image name> \n
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
"},{"location":"gettingstarted/ansible-mk8s/","title":"mk8s","text":"

To automate SC4S installation with Ansible, you provide a list of hosts on which you want to run SC4S as well as basic configuration information, such as the Splunk endpoint, HEC token, and TLS configuration. To perform this task, you must have a working understanding of MicroK8s and be able to set up your Kubernetes cluster architecture and configuration.

"},{"location":"gettingstarted/ansible-mk8s/#step-1-prepare-your-initial-configuration","title":"Step 1: Prepare your initial configuration","text":"
  1. Before you run SC4S with Ansible, update values.yaml with your Splunk endpoint and HEC token. You can find the example file here.

  2. In the inventory file, provide a list of hosts on which you want to run your cluster and the host application:

    all:\n  hosts:\n  children:\n    node:\n      hosts:\n        node_1:\n          ansible_host:\n

  3. Alternatively, you can spin up a high-availability cluster:
    all:\n  hosts:\n  children:\n    manager:\n      hosts:\n        manager:\n          ansible_host:\n\n    workers:\n      hosts:\n        worker1:\n          ansible_host:\n        worker2:\n          ansible_host:\n
"},{"location":"gettingstarted/ansible-mk8s/#step-2-deploy-sc4s-on-your-configuration","title":"Step 2: Deploy SC4S on your configuration","text":"
  1. If you have Ansible installed on your host, run the Ansible playbook to deploy SC4S. Otherwise, use the Docker Ansible image provided in the package:
    # From repository root\ndocker-compose -f ansible/docker-compose.yml build\ndocker-compose -f ansible/docker-compose.yml up -d\ndocker exec -it ansible_sc4s /bin/bash\n
  2. If you used the Docker Ansible image, then from your container remote shell, authenticate to and run the MicroK8s playbook.
"},{"location":"gettingstarted/ansible-mk8s/#step-3-validate-your-configuration","title":"Step 3: Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following:

syslog-ng starting up; version='3.28.1'\n

You can verify whether all services in the cluster work by checking the sc4s_container in Splunk. Each service should have a different container ID. All other fields should be the same.

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
sudo microk8s kubectl get pods\nsudo microk8s kubectl logs <podname>\n

You should see events similar to those below in the output:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...\nSC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...\nsyslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
"},{"location":"gettingstarted/byoe-rhel8/","title":"Configure SC4S in a non-containerized SC4S deployment","text":"

Configuring SC4S in a non-containerized SC4S deployment requires a custom configuration. Note that since Splunk does not control your unique environment, we cannot help with setting up environments, debugging networking, etc. Consider this configuration only if:

This topic provides guidance for using the SC4S syslog-ng configuration files directly on the host OS running on a hardware server or virtual machine. You must provide:

You must modify the base configuration for most environments to accommodate enterprise infrastructure variations. When you upgrade, evaluate your current environment against this reference, then develop and test an installation-specific upgrade plan. Do not depend on the distribution-supplied version of syslog-ng, as it may not be recent enough to support your needs. See this blog post to learn more.

"},{"location":"gettingstarted/byoe-rhel8/#install-sc4s-in-a-custom-environment","title":"Install SC4S in a custom environment","text":"

These installation instructions assume a recent RHEL or CentOS-based release. You may have to make minor adjustments for Debian and Ubuntu. The examples provided here use pre-compiled binaries for the syslog-ng installation in /etc/syslog-ng. Your configuration may vary.

The following installation instructions are summarized from a blog maintained by the One Identity team.

  1. Install CentOS or RHEL 8.0. See your OS documentation for instructions.

  2. Enable EPEL (CentOS 8).

dnf install 'dnf-command(copr)' -y\ndnf install epel-release -y\ndnf copr enable czanik/syslog-ng336  -y\ndnf install syslog-ng syslog-ng-python syslog-ng-http python3-pip gcc python3-devel -y\n
  3. Disable the distribution-supplied syslog-ng unit file. rsyslog will continue to be the system logger, but should be left enabled only if it is not configured to listen on the same ports as SC4S. You can also configure SC4S to provide local logging.
sudo systemctl stop syslog-ng\nsudo systemctl disable syslog-ng\n
  4. Download the latest baremetal.tar from the releases page on GitHub and untar the package in /etc/syslog-ng. This step unpacks a tarball with the SC4S version of the syslog-ng config files in the standard /etc/syslog-ng location, and will overwrite existing content. Make sure that any previous configurations of syslog-ng are saved prior to executing the download step.

For production use, select the latest version of SC4S that does not have an -rc, -alpha, or -beta suffix.

sudo wget -c https://github.com/splunk/splunk-connect-for-syslog/releases/download/<latest release>/baremetal.tar -O - | sudo tar -x -C /etc/syslog-ng\n
  5. Install Python requirements:
sudo pip3 install -r /etc/syslog-ng/requirements.txt\n
  6. Optionally, to use monitoring, install goss and confirm that the version is v0.3.16 or later. goss installs in /usr/local/bin by default, so do one of the following:
curl -L https://github.com/aelsabbahy/goss/releases/latest/download/goss-linux-amd64 -o /usr/local/bin/goss\nchmod +rx /usr/local/bin/goss\ncurl -L https://github.com/aelsabbahy/goss/releases/latest/download/dgoss -o /usr/local/bin/dgoss\n# Alternatively, using the latest\n# curl -L https://raw.githubusercontent.com/aelsabbahy/goss/latest/extras/dgoss/dgoss -o /usr/local/bin/dgoss\nchmod +rx /usr/local/bin/dgoss\n
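The minimum-version requirement above can be checked mechanically with sort -V; a sketch, with the installed version hard-coded for illustration (in practice obtain it from goss --version, whose output format may vary):

```shell
# Sketch: check that an installed goss version meets the v0.3.16 minimum.
ver=v0.3.21   # illustrative value; take this from `goss --version` on a real host
min=v0.3.16
# sort -V orders version strings numerically; if the minimum sorts first, the requirement is met
first=$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)
[ "$first" = "$min" ] && echo "goss version OK"
```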
  7. You can run SC4S using systemd in one of two ways, depending on administrator preference and orchestration methodology. These are not the only ways to run in a custom environment:
  1. To run the entrypoint.sh script directly in systemd, create the SC4S unit file /lib/systemd/system/sc4s.service and add the following:
[Unit]\nDescription=SC4S Syslog Daemon\nDocumentation=https://splunk-connect-for-syslog.readthedocs.io/en/latest/\nWants=network.target network-online.target\nAfter=network.target network-online.target\n\n[Service]\nType=simple\nExecStart=/etc/syslog-ng/entrypoint.sh\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=/etc/syslog-ng/env_file\nStandardOutput=journal\nStandardError=journal\nRestart=on-abnormal\n\n[Install]\nWantedBy=multi-user.target\n
  2. To run entrypoint.sh as a preconfigured script, modify the script by commenting out or removing the stanzas following the OPTIONAL for BYOE comments in the script. This prevents syslog-ng from being launched by the script. Then create the SC4S unit file /lib/systemd/system/syslog-ng.service and add the following content:
[Unit]\nDescription=System Logger Daemon\nDocumentation=man:syslog-ng(8)\nAfter=network.target\n\n[Service]\nType=notify\nExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS -p /var/run/syslogd.pid\nExecReload=/bin/kill -HUP $MAINPID\nEnvironmentFile=-/etc/default/syslog-ng\nEnvironmentFile=-/etc/sysconfig/syslog-ng\nStandardOutput=journal\nStandardError=journal\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target\n
  8. Create the file /etc/syslog-ng/env_file and add the following environment variables. Adjust the URL/TOKEN as needed.
# The following \"path\" variables can differ from the container defaults specified in the entrypoint.sh script. \n# These are *optional* for most BYOE installations, which do not differ from the install location used.\n# in the container version of SC4S.  Failure to properly set these will cause startup failure.\n#SC4S_ETC=/etc/syslog-ng\n#SC4S_VAR=/etc/syslog-ng/var\n#SC4S_BIN=/bin\n#SC4S_SBIN=/usr/sbin\n#SC4S_TLS=/etc/syslog-ng/tls\n\n# General Options\nSC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk.smg.aws:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=a778f63a-5dff-4e3c-a72c-a03183659e94\n\n# Uncomment the following line if using untrusted (self-signed) SSL certificates\n# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  9. Reload systemd and start the service (the example shown is for systemd option (1) above):
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/byoe-rhel8/#configure-sc4s-listening-ports","title":"Configure SC4S listening ports","text":"

The standard SC4S configuration uses UDP/TCP port 514 as the default for the listening port for syslog traffic, and TCP port 6514 for TLS. You can change these defaults by adding the following additional environment variables to the env_file:

SC4S_LISTEN_DEFAULT_TCP_PORT=514\nSC4S_LISTEN_DEFAULT_UDP_PORT=514\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=601\nSC4S_LISTEN_DEFAULT_RFC5425_PORT=5425\nSC4S_LISTEN_DEFAULT_TLS_PORT=6514\n
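Before restarting, these values can be sanity-checked; a sketch that assumes the env_file holds one KEY=value pair per line (the file path and contents below are illustrative, so point envf at your real env_file):

```shell
# Sketch: verify that every SC4S_LISTEN_* entry in an env_file carries a valid port number.
envf=$(mktemp)
printf 'SC4S_LISTEN_DEFAULT_TCP_PORT=514\nSC4S_LISTEN_DEFAULT_TLS_PORT=6514\n' > "$envf"
ok=yes
while IFS='=' read -r key val; do
  case "$key" in
    SC4S_LISTEN_*)
      # ports must be numeric and within 1-65535; non-numeric values fail the test expression
      { [ "$val" -ge 1 ] && [ "$val" -le 65535 ]; } 2>/dev/null || { echo "bad port: $key=$val"; ok=no; }
      ;;
  esac
done < "$envf"
[ "$ok" = yes ] && echo "all listen ports valid"
```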

"},{"location":"gettingstarted/byoe-rhel8/#create-unique-dedicated-listening-ports","title":"Create unique dedicated listening ports","text":"

For some source technologies, categorization by message content is not possible. To collect these sources, dedicate a unique listening port to a specific source. See Sources for more information.

"},{"location":"gettingstarted/docker-compose-MacOS/","title":"Install Docker Desktop for MacOS","text":"

Refer to the \u201cMacOS\u201d section in your Docker documentation to set up your Docker Desktop for MacOS.

"},{"location":"gettingstarted/docker-compose-MacOS/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"

You can run SC4S using either docker-compose or the docker run command in the command line. This topic focuses solely on using docker-compose.

  1. Create a directory on the server for local configurations and disk buffering. Make it available to all administrators, for example: /opt/sc4s/.

  2. Create a docker-compose.yml file in your new directory, based on the provided template. By default, the latest container is automatically downloaded at each restart. As a best practice, consult this topic at the time of any new upgrade to check for any changes in the latest template.

    version: \"3.7\"\nservices:\n  sc4s:\n    deploy:\n      replicas: 2\n      restart_policy:\n        condition: on-failure\n    image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n    ports:\n       - target: 514\n         published: 514\n         protocol: tcp\n       - target: 514\n         published: 514\n         protocol: udp\n       - target: 601\n         published: 601\n         protocol: tcp\n       - target: 6514\n         published: 6514\n         protocol: tcp\n    env_file:\n      - /opt/sc4s/env_file\n    volumes:\n      - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n      - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n#     - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n#     - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n  splunk-sc4s-var:\n

  3. In Docker Desktop, set the /opt/sc4s folder as shared.
  4. Create a local volume that will contain the disk buffer files in the event of a communication failure to the upstream destinations. This volume also keeps track of the state of syslog-ng between restarts, and in particular the state of the disk buffer. Be sure to account for disk space requirements for the Docker volume. This volume is located in /var/lib/docker/volumes/ and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S disk buffer configuration for more information.

    sudo docker volume create splunk-sc4s-var\n

  5. Create the subdirectories: /opt/sc4s/local, /opt/sc4s/archive, and /opt/sc4s/tls. Make sure these directories match the volume mounts specified in docker-compose.yml.

  6. Create a file named /opt/sc4s/env_file.

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
7. Update the following environment variables and values in /opt/sc4s/env_file:
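A missing URL or token is the most common startup failure, so it is worth confirming both required settings are present before starting the container. A sketch (the path and values here are illustrative; point envf at /opt/sc4s/env_file on a real host):

```shell
# Sketch: confirm the two required HEC settings exist in an env_file before starting SC4S.
envf=$(mktemp)
printf 'SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx\n' > "$envf"
for var in SC4S_DEST_SPLUNK_HEC_DEFAULT_URL SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN; do
  grep -q "^$var=" "$envf" || { echo "missing $var"; exit 1; }
done
echo "env_file OK"
```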

"},{"location":"gettingstarted/docker-compose-MacOS/#create-unique-dedicated-listening-ports","title":"Create unique dedicated listening ports","text":"

Each listening port on the container must be mapped to a listening port on the host. Make sure to update the docker-compose.yml file when adding listening ports for new data sources.

To configure unique ports:

  1. Modify the /opt/sc4s/env_file file to include the port-specific environment variables. See the Sources documentation to identify the specific environment variables that are mapped to each data source vendor and technology.
  2. Modify the Docker Compose file that starts the SC4S container so that it reflects the additional listening ports you have created. You can amend the Docker Compose file with additional target stanzas in the ports section of the file (after the default ports). For example, the following additional target and published lines provide for 21 additional technology-specific UDP and TCP ports:
       - target: 5000-5020\n         published: 5000-5020\n         protocol: tcp\n       - target: 5000-5020\n         published: 5000-5020\n         protocol: udp\n
  3. Restart SC4S using the command in the \u201cStart/Restart SC4S\u201d section in this topic.

For more information about configuration, refer to Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/docker-compose-MacOS/#startrestart-sc4s","title":"Start/Restart SC4S","text":"

From the directory where you created the compose file, execute:

docker-compose up\n
Otherwise, use docker-compose with the -f flag pointing to the compose file:
docker-compose -f /path/to/compose/file/docker-compose.yml up\n

"},{"location":"gettingstarted/docker-compose-MacOS/#stop-sc4s","title":"Stop SC4S","text":"

Execute:

docker-compose down \n
or

docker-compose -f /path/to/compose/file/docker-compose.yml down\n
"},{"location":"gettingstarted/docker-compose-MacOS/#verify-proper-operation","title":"Verify Proper Operation","text":"

SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

When the startup process proceeds normally, you should see an event similar to the following:

syslog-ng starting up; version='3.28.1'\n

If you do not see this, try the following steps to troubleshoot:

  1. Check to see that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Check to see that the proper indexes are created in Splunk, and that the token has access to them.
  3. Ensure the proper operation of the load balancer if used.
  4. Check the SC4S startup process running:
docker logs <container_name>\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If you do not see the output above, proceed to the \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.

"},{"location":"gettingstarted/docker-compose/","title":"Install Docker Desktop","text":"

Refer to your Docker documentation to set up your Docker Desktop.

"},{"location":"gettingstarted/docker-compose/#perform-your-initial-sc4s-configuration","title":"Perform your initial SC4S configuration","text":"

You can run SC4S with docker-compose or from the command line using docker run. Both options are described in this topic.

  1. Create a directory on the server for local configurations and disk buffering. Make it available to all administrators, for example: /opt/sc4s/. If you are using docker-compose, create a docker-compose.yml file in this directory using the template provided here. By default, the latest SC4S image is automatically downloaded at each restart. As a best practice, check back here regularly and make sure any changes to the latest template are incorporated into production before you relaunch with Docker Compose.
version: \"3.7\"\nservices:\n  sc4s:\n    deploy:\n      replicas: 2\n      restart_policy:\n        condition: on-failure\n    image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n    ports:\n       - target: 514\n         published: 514\n         protocol: tcp\n       - target: 514\n         published: 514\n         protocol: udp\n       - target: 601\n         published: 601\n         protocol: tcp\n       - target: 6514\n         published: 6514\n         protocol: tcp\n    env_file:\n      - /opt/sc4s/env_file\n    volumes:\n      - /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\n      - splunk-sc4s-var:/var/lib/syslog-ng\n# Uncomment the following line if local disk archiving is desired\n#     - /opt/sc4s/archive:/var/lib/syslog-ng/archive:z\n# Map location of TLS custom TLS\n#     - /opt/sc4s/tls:/etc/syslog-ng/tls:z\n\nvolumes:\n  splunk-sc4s-var:\n
  1. In Docker, set the /opt/sc4s folder as shared.
  2. Create a local volume that will contain the disk buffer files in the event of a communication failure to the upstream destinations. This volume also keeps track of the state of syslog-ng between restarts, and in particular the state of the disk buffer. Be sure to account for disk space requirements for the Docker volume. This volume is located in /var/lib/docker/volumes/ and could grow significantly if there is an extended outage to the SC4S destinations. See SC4S Disk Buffer Configuration in the Configuration topic for more information.
sudo docker volume create splunk-sc4s-var\n
  1. Create the subdirectories: /opt/sc4s/local, /opt/sc4s/archive, and /opt/sc4s/tls. If you are using the docker-compose.yml file, make sure these directories match the volume mounts specified in docker-compose.yml.

  2. Create a file named /opt/sc4s/env_file.

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
6. Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN in /opt/sc4s/env_file to reflect the correct values for your environment.

NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.

For more information about configuration, see Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/docker-compose/#start-or-restart-sc4s","title":"Start or restart SC4S","text":"
If you are not using Docker Compose, start SC4S directly with docker run:

docker run -p 514:514 -p 514:514/udp -p 6514:6514 -p 5000-5020:5000-5020 -p 5000-5020:5000-5020/udp \\\n    --env-file=/opt/sc4s/env_file \\\n    --name SC4S \\\n    --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n

Otherwise, use docker compose with the -f flag pointing to the compose file:

docker compose -f /path/to/compose/file/docker-compose.yml up\n

"},{"location":"gettingstarted/docker-compose/#stop-sc4s","title":"Stop SC4S","text":"

If the container is run directly from the CLI, stop the container using the docker stop <containerID> command.

If using docker compose, execute:

docker compose down \n
or

docker compose -f /path/to/compose/file/docker-compose.yml down\n
"},{"location":"gettingstarted/docker-compose/#validate-your-configuration","title":"Validate your configuration","text":"

SC4S performs automatic checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once these checks are complete, verify that SC4S is properly communicating with Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following when the startup process proceeds normally:

syslog-ng starting up; version='3.28.1'\n

If you do not see this, try the following steps to troubleshoot:

  1. Check to see that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Check to see that the proper indexes are created in Splunk, and that the token has access to them.
  3. Ensure the proper operation of the load balancer if used.
  4. Check the SC4S startup process running in the container.
docker logs SC4S\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If you do not see the output above, see \u201cTroubleshoot SC4S server\u201d and \u201cTroubleshoot resources\u201d sections for more detailed information.

"},{"location":"gettingstarted/docker-podman-offline/","title":"Install a container while offline","text":"

You can stage SC4S on a host machine without internet connectivity, for example an airgapped system, by downloading the container image and loading it manually.

  1. Download the container image oci_container.tar.gz from our Github Page. The following example downloads v3.23.1; replace the URL with the latest release or pre-release version as desired:
sudo wget https://github.com/splunk/splunk-connect-for-syslog/releases/download/v3.23.1/oci_container.tar.gz\n
  1. Distribute the container to the airgapped host machine using your preferred file transfer utility.
  2. Execute the following command, using Docker or Podman:
<podman or docker> load < oci_container.tar.gz\n
  1. Make a note of the loaded image name and tag:
Loaded image: ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1\n
  1. Use the image name to create a local tag:

    <podman or docker> tag ghcr.io/splunk/splunk-connect-for-syslog/container3:3.23.1 sc4slocal:latest\n

  2. Use the local tag sc4slocal:latest in the relevant unit or YAML file to launch SC4S by setting the SC4S_IMAGE environment variable in the unit file, or the relevant image: tag if you are using Docker Compose/Swarm. This tag causes the runtime to select the locally loaded image rather than attempt to obtain the container image from the internet.

Environment=\"SC4S_IMAGE=sc4slocal:latest\"\n
7. When your configuration uses systemd, remove the following entry from the relevant unit file, because an external connection to pull the container is no longer needed or available:

ExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n
"},{"location":"gettingstarted/docker-systemd-general/","title":"Install Docker CE","text":""},{"location":"gettingstarted/docker-systemd-general/#before-you-begin","title":"Before you begin","text":"

Before you start:

"},{"location":"gettingstarted/docker-systemd-general/#initial-setup","title":"Initial Setup","text":"

This topic provides the most recent unit file. By default, the latest SC4S image is automatically downloaded at each restart. Consult this topic when you upgrade your SC4S installation and check for changes to the provided template unit file. Make sure these changes are incorporated into your configuration before you relaunch with systemd.

  1. Create the systemd unit file /lib/systemd/system/sc4s.service based on the provided template:
[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target docker.service\nAfter=NetworkManager.service network-online.target docker.service\nRequires=docker.service\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/docker pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/docker run \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --env-file=/opt/sc4s/env_file \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
  1. Execute the following command to create a local volume. This volume contains the disk buffer files in case of a communication failure to the upstream destinations:
sudo docker volume create splunk-sc4s-var\n
  1. Account for disk space requirements for the new Docker volume. The Docker volume can grow significantly if there is an extended outage to the SC4S destinations. This volume can be found at /var/lib/docker/volumes/. See SC4S Disk Buffer Configuration.

  2. Create the following subdirectories:

  1. Create a file named /opt/sc4s/env_file and add the following environment variables and values:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  1. Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side; the underlying syslog-ng HTTP destination does not support this feature.

  2. The default number of SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS is 10. Consult the community if you feel the number of workers should deviate from this.

  3. Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example in step 5.
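The HEC worker count mentioned above can be set explicitly in /opt/sc4s/env_file. A hypothetical entry, with DEFAULT substituted for &lt;ID&gt; (the value shown simply restates the default of 10, which rarely needs changing):

```
# Hypothetical override; 10 is already the default number of HEC workers.
SC4S_DEST_SPLUNK_HEC_DEFAULT_WORKERS=10
```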

For more information see Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/docker-systemd-general/#configure-sc4s-for-systemd","title":"Configure SC4S for systemd","text":"

To configure SC4S for systemd run the following commands:

sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"

To restart SC4S run the following command:

sudo systemctl restart sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#implement-unit-file-changes","title":"Implement unit file changes","text":"

If you made changes to the configuration unit file, for example to configure with dedicated ports, you must stop SC4S and re-run the systemd configuration commands to implement your changes.

sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/docker-systemd-general/#validate-your-configuration","title":"Validate your configuration","text":"

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk. To do this, execute the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

You should see an event similar to the following:

syslog-ng starting up; version='3.28.1'\n

The startup process should proceed normally without syntax errors. If it does not, follow the steps below before proceeding to deeper-level troubleshooting:

  1. Verify that the URL, token, and TLS/SSL settings are correct, and that the appropriate firewall ports are open (8088 or 443).
  2. Verify that your indexes are created in Splunk, and that your token has access to them.
  3. If you are using a load balancer, verify that it is operating properly.
  4. Execute the following command to check the SC4S startup process running in the container.
docker logs SC4S\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n
  1. If you do not see this output, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.
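As a quick scripted check of the startup lines above, something like the following can be used. This is a sketch, not part of the official tooling: the sample variable stands in for live output, which on a real host would come from docker logs SC4S:

```shell
# Sample startup output; on a live host use: logs=$(docker logs SC4S 2>&1)
logs='syslog-ng checking config
sc4s version=v1.36.0
starting goss
starting syslog-ng'

# The final "starting syslog-ng" line indicates the preflight checks passed.
if printf '%s\n' "$logs" | grep -q 'starting syslog-ng'; then
    echo "SC4S startup detected"
else
    echo "SC4S startup NOT detected" >&2
fi
```

The same grep can be wired into external monitoring to alert when the container restarts without reaching the syslog-ng startup stage.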
"},{"location":"gettingstarted/getting-started-runtime-configuration/","title":"Implement a Container Runtime and SC4S","text":""},{"location":"gettingstarted/getting-started-runtime-configuration/#step-1-configure-your-os-to-work-with-sc4s","title":"Step 1: Configure your OS to work with SC4S","text":""},{"location":"gettingstarted/getting-started-runtime-configuration/#tune-your-receive-buffer","title":"Tune your receive buffer","text":"

You must tune the host Linux OS receive buffer size to match the SC4S default. This helps to avoid event dropping at the network level. The default receive buffer for SC4S is 16 MB for UDP traffic, which should be acceptable for most environments. To set the host OS kernel to match your buffer:

  1. Edit /etc/sysctl.conf using the following whole-byte values corresponding to 16 MB:

    net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n

  2. Apply to the kernel:

    sysctl -p\n

  3. To verify that the kernel does not drop packets, periodically monitor the buffer using the command netstat -su | grep \"receive errors\". Failure to tune the kernel for high-volume traffic results in message loss, which can be unpredictable and difficult to detect. The default value for receive kernel buffers in most distributions is 2 MB, which may not be adequate for your configuration.
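The receive-error counter can be pulled out of the netstat -su output with a small filter. The sketch below inlines representative sample output so the pipeline is visible end to end; the counter values are illustrative, and on a live host the sample would be replaced with the real command:

```shell
# On a live host, replace the sample with live output:  netstat -su
sample='Udp:
    52383 packets received
    127 packet receive errors
    49662 packets sent'

# Extract the first field of the "packet receive errors" line.
errors=$(printf '%s\n' "$sample" | awk '/packet receive errors/ {print $1}')
echo "packet receive errors: $errors"
```

A rising counter between two samples indicates the kernel is dropping UDP datagrams and the receive buffer tuning above should be revisited.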

"},{"location":"gettingstarted/getting-started-runtime-configuration/#configure-ipv4-forwarding","title":"Configure IPv4 forwarding","text":"

In many distributions, for example CentOS provisioned in AWS, IPv4 forwarding is not enabled by default. IPv4 forwarding must be enabled for container networking.

net.ipv4.ip_forward=1\n
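To make the setting persistent, the line above goes into /etc/sysctl.conf (or a drop-in file), following the same pattern as the receive buffer tuning earlier in this topic:

```
# Append to /etc/sysctl.conf (or a file such as /etc/sysctl.d/99-sc4s.conf):
net.ipv4.ip_forward=1

# Then apply without a reboot:
#   sudo sysctl -p
```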
"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-2-create-your-local-directory-structure","title":"Step 2: Create your local directory structure","text":"

Create the following three directories:

When you create these directories, make sure that they match the volume mounts specified in the sc4s.service unit file. Failure to do this will cause SC4S to abort at startup.

"},{"location":"gettingstarted/getting-started-runtime-configuration/#step-3-select-a-container-runtime-and-sc4s-configuration","title":"Step 3: Select a Container Runtime and SC4S Configuration","text":"

The table below shows possible ways to run SC4S using Docker or Podman with various management and orchestration systems.

Check your Podman or Docker documentation to see which operating systems are supported by your chosen container management tool. If the SC4S deployment model involves additional limitations or requirements regarding operating systems, you will find them in the column labeled \u2018Additional Operating Systems Requirements\u2019.

Container Runtime and Orchestration Additional Operating Systems Requirements MicroK8s Ubuntu with Microk8s Podman + systemd Docker CE + systemd Docker Desktop + Compose MacOS Docker Compose Bring your own Environment RHEL or CentOS 8.1 & 8.2 (best option) Offline Container Installation Ansible+Docker Swarm Ansible+Podman Ansible+Docker"},{"location":"gettingstarted/getting-started-splunk-setup/","title":"Splunk setup","text":"

To ensure proper integration for SC4S and Splunk, perform the following tasks in your Splunk instance:

  1. Create your SC4S indexes in Splunk.
  2. Configure your HTTP event collector.
"},{"location":"gettingstarted/getting-started-splunk-setup/#step-1-create-indexes-within-splunk","title":"Step 1: Create indexes within Splunk","text":"

SC4S maps each sourcetype to the following indexes by default. You will also need to create these indexes in Splunk:

If you use custom indexes in SC4S you must also create them in Splunk. See Create custom indexes for more information.

"},{"location":"gettingstarted/getting-started-splunk-setup/#step-2-configure-your-http-event-collector","title":"Step 2: Configure your HTTP event collector","text":"

See Use the HTTP event collector for HEC configuration instructions based on your Splunk type.

Keep in mind the following best practices specific to HEC for SC4S:
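One way to pre-validate HEC before starting SC4S is to send a manual test event with curl. The host and token below are placeholders that must be replaced with your own values; the curl call itself is shown as a comment since it requires a live endpoint:

```shell
# Placeholder values; substitute your Splunk host and HEC token.
HEC_URL="https://your.splunk.instance:8088"
HEC_TOKEN="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# On a live host, send a test event (add -k only for untrusted certificates):
#   curl -k "$HEC_URL/services/collector/event" \
#        -H "Authorization: Splunk $HEC_TOKEN" \
#        -d '{"event":"HEC connectivity test"}'
echo "$HEC_URL/services/collector/event"
```

A `{"text":"Success","code":0}` response from Splunk confirms the URL, token, and firewall path are working before any SC4S configuration is involved.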

"},{"location":"gettingstarted/getting-started-splunk-setup/#create-a-load-balancing-mechanism","title":"Create a load balancing mechanism","text":"

In some configurations, you should ensure output balancing from SC4S to Splunk indexers. To do this, you create a load balancing mechanism between SC4S and Splunk indexers. Note that this should not be confused with load balancing between sources and SC4S.

When configuring your load balancing mechanism, keep in mind the following:

"},{"location":"gettingstarted/k8s-microk8s/","title":"Install and configure SC4S with Kubernetes","text":"

Splunk provides an implementation for SC4S deployment with MicroK8s using a single-server MicroK8s as the deployment model. Clustering has some tradeoffs and should only be considered on a deployment-specific basis.

You can independently replicate the model deployment on different distributions of Kubernetes. Do not attempt this unless you have advanced understanding of Kubernetes and are willing and able to maintain this configuration regularly.

SC4S with MicroK8s leverages features of MicroK8s:

Splunk maintains container images, but it doesn\u2019t directly support or otherwise provide resolutions for issues within the runtime environment.

"},{"location":"gettingstarted/k8s-microk8s/#step-1-allocate-ip-addresses","title":"Step 1: Allocate IP addresses","text":"

This configuration requires at least two IP addresses: one for the host and one for the internal load balancer. We suggest allocating three IP addresses for the host and 5-10 IP addresses for later use.

"},{"location":"gettingstarted/k8s-microk8s/#step-2-install-microk8s","title":"Step 2: Install MicroK8s","text":"

To install MicroK8s:

sudo snap install microk8s --classic --channel=1.24\nsudo usermod -a -G microk8s $USER\nsudo chown -f -R $USER ~/.kube\nsu - $USER\nmicrok8s status --wait-ready\n

"},{"location":"gettingstarted/k8s-microk8s/#step-3-set-up-your-add-ons","title":"Step 3: Set up your add-ons","text":"

When you install metallb you will be prompted for one or more IPs to use as entry points. If you do not plan to enable clustering, this IP may be the same IP as the host. If you do plan to enable clustering, this IP should not be assigned to the host.

A single IP in CIDR format is x.x.x.x/32. Use CIDR or range syntax.
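For example, either of the following forms would be accepted at the metallb prompt; the addresses are illustrative placeholders, not values from your network:

```
# Single IP in CIDR form (may be the host IP if clustering is not enabled):
10.0.0.50/32
# Or a range, leaving room for later use:
10.0.0.50-10.0.0.59
```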

microk8s enable dns \nmicrok8s enable community\nmicrok8s enable metallb \nmicrok8s enable rbac \nmicrok8s enable storage \nmicrok8s enable openebs \nmicrok8s enable helm3\nmicrok8s status --wait-ready\n
"},{"location":"gettingstarted/k8s-microk8s/#step-4-add-an-sc4s-helm-repository","title":"Step 4: Add an SC4S Helm repository","text":"

To add an SC4S Helm repository:

microk8s helm3 repo add splunk-connect-for-syslog https://splunk.github.io/splunk-connect-for-syslog\nmicrok8s helm3 repo update\n
"},{"location":"gettingstarted/k8s-microk8s/#step-5-create-a-valuesyaml-file","title":"Step 5: Create a values.yaml file","text":"

Create the configuration file values.yaml. You can provide HEC token as a Kubernetes secret or in plain text.

"},{"location":"gettingstarted/k8s-microk8s/#provide-the-hec-token-as-plain-text","title":"Provide the HEC token as plain text","text":"
  1. Create values.yaml file:
#values.yaml\nsplunk:\n    hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n    hec_token: \"00000000-0000-0000-0000-000000000000\"\n    hec_verify_tls: \"yes\"\n
  1. Install SC4S:
    microk8s helm3 install sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#provide-the-hec-token-as-secret","title":"Provide the HEC token as secret","text":"
  1. Create values.yaml file:
#values.yaml\nsplunk:\n    hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n    hec_verify_tls: \"yes\"\n
  1. Install SC4S:
    export HEC_TOKEN=\"00000000-0000-0000-0000-000000000000\"\nmicrok8s helm3 install sc4s --set splunk.hec_token=$HEC_TOKEN splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#update-or-upgrade-sc4s","title":"Update or upgrade SC4S","text":"

Whenever the image is upgraded, or when changes made to the values.yaml file should be applied, run the command:

microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n
"},{"location":"gettingstarted/k8s-microk8s/#install-and-configure-sc4s-for-high-availability-ha","title":"Install and configure SC4S for High Availability (HA)","text":"

Three identically-sized nodes are required for HA. See your Microk8s documentation for more information.

  1. Update the configuration file:

    #values.yaml\nreplicaCount: 6 #2x node count\nsplunk:\n    hec_url: \"https://xxx.xxx.xxx.xxx:8088/services/collector/event\"\n    hec_token: \"00000000-0000-0000-0000-000000000000\"\n    hec_verify_tls: \"yes\"\n

  2. Upgrade SC4S to apply the new configuration:

    microk8s helm3 upgrade sc4s splunk-connect-for-syslog/splunk-connect-for-syslog -f values.yaml\n

"},{"location":"gettingstarted/k8s-microk8s/#configure-your-sc4s-instances-through-valuesyaml","title":"Configure your SC4S instances through values.yaml","text":"

With helm-based deployment you cannot configure environment variables and context files directly. Instead, use the values.yaml file to update your configuration, for example:

sc4s:\n  # Certificate as a k8s Secret with tls.key and tls.crt fields\n  # Ideally produced and managed by cert-manager.io\n  existingCert: example-com-tls\n  #\n  vendor_product:\n    - name: checkpoint\n      ports:\n        tcp: [9000] #Same as SC4S_LISTEN_CHECKPOINT_TCP_PORT=9000\n        udp: [9000]\n      options:\n        listen:\n          old_host_rules: \"yes\" #Same as SC4S_LISTEN_CHECKPOINT_OLD_HOST_RULES=yes\n\n    - name: infoblox\n      ports:\n        tcp: [9001, 9002]\n        tls: [9003]\n    - name: fortinet\n      ports:\n        ietf_udp:\n          - 9100\n          - 9101\n  context_files:\n    splunk_metadata.csv: |-\n      cisco_meraki,index,foo\n    host.csv: |-\n      192.168.1.1,foo\n      192.168.1.2,moon\n

Use the config_files and context_files variables to specify configuration and context files that are passed to SC4S.

"},{"location":"gettingstarted/k8s-microk8s/#manage-resources","title":"Manage resources","text":"

You should expect your system to require two instances per node by default. Adjust requests and limits to allow each instance to use about 40% of each node, presuming no other workload is present.

resources:\n  limits:\n    cpu: 100m\n    memory: 128Mi\n  requests:\n    cpu: 100m\n    memory: 128Mi\n
"},{"location":"gettingstarted/podman-systemd-general/","title":"Install podman","text":"

See Podman product installation docs for information about working with your Podman installation.

Before performing the tasks described in this topic, make sure you are familiar with using IPv4 forwarding with SC4S. See IPv4 forwarding .

"},{"location":"gettingstarted/podman-systemd-general/#initial-setup","title":"Initial Setup","text":"

NOTE: Make sure to use the latest unit file, which is provided here, with the current release. By default, the latest container is automatically downloaded at each restart. As a best practice, check back here regularly and make sure any changes made to the latest template unit file are incorporated into production before you relaunch with systemd.

  1. Create the systemd unit file /lib/systemd/system/sc4s.service based on the following template:
[Unit]\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n\n[Install]\nWantedBy=multi-user.target\n\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n\n# Optional mount point for local overrides and configurations; see notes in docs\nEnvironment=\"SC4S_LOCAL_MOUNT=/opt/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/opt/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"\n\nTimeoutStartSec=0\n\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n\n# Note: /usr/bin/bash will not be valid path for all OS\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl set-environment SC4SHOST=$(hostname -s)\"\n\nExecStart=/usr/bin/podman run \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --env-file=/opt/sc4s/env_file \\\n        --health-cmd=\"/healthcheck.sh\" \\\n        --health-interval=10s --health-retries=6 --health-timeout=6s \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\n\nRestart=on-abnormal\n
  1. Execute the following command to create a local volume, which contains the disk buffer files in the event of a communication failure to the upstream destinations. This volume is also used to keep track of the state of syslog-ng between restarts, and in particular the state of the disk buffer.
sudo podman volume create splunk-sc4s-var\n

NOTE: Be sure to account for disk space requirements for the podman volume you create. This volume will be located in /var/lib/containers/storage/volumes/ and could grow significantly if there is an extended outage to the SC4S destinations (typically HEC endpoints). See the \u201cSC4S Disk Buffer Configuration\u201d section on the Configuration page for more info.

  1. Create the subdirectories: /opt/sc4s/local, /opt/sc4s/archive, and /opt/sc4s/tls.
  2. Create a file named /opt/sc4s/env_file and add the following environment variables and values:
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  1. Update SC4S_DEST_SPLUNK_HEC_DEFAULT_URL and SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN to reflect the correct values for your environment. Do not configure HEC Acknowledgement when deploying the HEC token on the Splunk side; the underlying syslog-ng http destination does not support this feature. The default value for SC4S_DEST_SPLUNK_HEC_<ID>_WORKERS is 10. Consult the community if you feel the number of workers (threads) should deviate from this.

NOTE: Splunk Connect for Syslog defaults to secure configurations. If you are not using trusted SSL certificates, be sure to uncomment the last line in the example above.

For more information about configuration refer to Docker and Podman basic configurations and detailed configuration.

"},{"location":"gettingstarted/podman-systemd-general/#configure-sc4s-for-systemd-and-start-sc4s","title":"Configure SC4S for systemd and start SC4S","text":"
sudo systemctl daemon-reload\nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#restart-sc4s","title":"Restart SC4S","text":"
sudo systemctl restart sc4s\n

If you have made changes to the configuration unit file, for example, in order to configure dedicated ports, you must first stop SC4S and re-run the systemd configuration commands:

sudo systemctl stop sc4s\nsudo systemctl daemon-reload \nsudo systemctl enable sc4s\nsudo systemctl start sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#stop-sc4s","title":"Stop SC4S","text":"
sudo systemctl stop sc4s\n
"},{"location":"gettingstarted/podman-systemd-general/#verify-proper-operation","title":"Verify Proper Operation","text":"

SC4S has a number of \u201cpreflight\u201d checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng configuration is correct. After this step is complete, verify SC4S is properly communicating with Splunk by executing the following search in Splunk:

index=* sourcetype=sc4s:events \"starting up\"\n

This should yield an event similar to the following when the startup process proceeds normally (without syntax errors).

syslog-ng starting up; version='3.28.1'\n

If you do not see this, try the following before proceeding to deeper-level troubleshooting:

podman logs SC4S\n

You should see events similar to those below in the output:

syslog-ng checking config\nsc4s version=v1.36.0\nstarting goss\nstarting syslog-ng\n

If the output does not display, see \u201cTroubleshoot sc4s server\u201d and \u201cTroubleshoot resources\u201d for more information.

"},{"location":"gettingstarted/podman-systemd-general/#sc4s-non-root-operation","title":"SC4S non-root operation","text":""},{"location":"gettingstarted/podman-systemd-general/#note","title":"NOTE:","text":"

Operating as a non-root user makes it impossible to use the standard ports 514 and 601, and many devices cannot alter their destination port, so this operation may only be appropriate for cases where accepting syslog data from the public internet cannot be avoided.

"},{"location":"gettingstarted/podman-systemd-general/#prequisites","title":"Prerequisites","text":"

Podman and slirp4netns must be installed.

"},{"location":"gettingstarted/podman-systemd-general/#setup","title":"Setup","text":"
  1. Increase the number of user namespaces. Execute the following with sudo privileges:

    $ echo \"user.max_user_namespaces=28633\" > /etc/sysctl.d/userns.conf      \n$ sysctl -p /etc/sysctl.d/userns.conf\n

  2. Create a non-root user from which to run SC4S and to prepare Podman for non-root operations:

    sudo useradd -m -d /home/sc4s -s /bin/bash sc4s\nsudo passwd sc4s  # type password here\nsudo su - sc4s\nmkdir -p /home/sc4s/local\nmkdir -p /home/sc4s/archive\nmkdir -p /home/sc4s/tls\npodman system migrate\n

  3. Load the new environment variables. To do this, temporarily switch to any other user, and then log back in as the SC4S user. When logging in as the SC4S user, don\u2019t use the \u2018su\u2019 command, as it won\u2019t load the new variables. Instead, you can use, for example, the command \u2018ssh sc4s@localhost\u2019.

  4. Create a unit file at ~/.config/systemd/user/sc4s.service with the following content:

    [Unit]\nUser=sc4s\nDescription=SC4S Container\nWants=NetworkManager.service network-online.target\nAfter=NetworkManager.service network-online.target\n[Install]\nWantedBy=multi-user.target\n[Service]\nEnvironment=\"SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\"\n# Required mount point for syslog-ng persist data (including disk buffer)\nEnvironment=\"SC4S_PERSIST_MOUNT=splunk-sc4s-var:/var/lib/syslog-ng\"\n# Optional mount point for local overrides and configuration\nEnvironment=\"SC4S_LOCAL_MOUNT=/home/sc4s/local:/etc/syslog-ng/conf.d/local:z\"\n# Optional mount point for local disk archive (EWMM output) files\nEnvironment=\"SC4S_ARCHIVE_MOUNT=/home/sc4s/archive:/var/lib/syslog-ng/archive:z\"\n# Map location of TLS custom TLS\nEnvironment=\"SC4S_TLS_MOUNT=/home/sc4s/tls:/etc/syslog-ng/tls:z\"\nTimeoutStartSec=0\nExecStartPre=/usr/bin/podman pull $SC4S_IMAGE\n# Note: The path /usr/bin/bash may vary based on your operating system.\n# when startup fails on running bash check if the path is correct\nExecStartPre=/usr/bin/bash -c \"/usr/bin/systemctl --user set-environment SC4SHOST=$(hostname -s)\"\nExecStart=/usr/bin/podman run -p 2514:514 -p 2514:514/udp -p 6514:6514  \\\n        -e \"SC4S_CONTAINER_HOST=${SC4SHOST}\" \\\n        -v \"$SC4S_PERSIST_MOUNT\" \\\n        -v \"$SC4S_LOCAL_MOUNT\" \\\n        -v \"$SC4S_ARCHIVE_MOUNT\" \\\n        -v \"$SC4S_TLS_MOUNT\" \\\n        --env-file=/home/sc4s/env_file \\\n        --health-cmd=\"/healthcheck.sh\" \\\n        --health-interval=10s --health-retries=6 --health-timeout=6s \\\n        --network host \\\n        --name SC4S \\\n        --rm $SC4S_IMAGE\nRestart=on-abnormal\n

  5. Create your env_file file at /home/sc4s/env_file

    SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088\nSC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx\n#Uncomment the following line if using untrusted SSL certificates\n#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\nSC4S_LISTEN_DEFAULT_TCP_PORT=8514\nSC4S_LISTEN_DEFAULT_UDP_PORT=8514\nSC4S_LISTEN_DEFAULT_RFC5426_PORT=8601\nSC4S_LISTEN_DEFAULT_RFC6587_PORT=8601\n

"},{"location":"gettingstarted/podman-systemd-general/#run-service","title":"Run service","text":"

To run the service as a non-root user, run the systemctl command with the --user flag:

systemctl --user daemon-reload\nsystemctl --user enable sc4s\nsystemctl --user start sc4s\n

The remainder of the setup can be found in the main setup instructions.

"},{"location":"gettingstarted/quickstart_guide/","title":"Quickstart Guide","text":"

This guide will enable you to quickly implement basic changes to your Splunk instance and set up a simple SC4S installation. It\u2019s a great starting point for working with SC4S and establishing a minimal operational solution. The same steps are thoroughly described in the Splunk Setup and Runtime configuration sections.

"},{"location":"gettingstarted/quickstart_guide/#splunk-setup","title":"Splunk setup","text":"
  1. Create the following default indexes that are used by SC4S:

  2. Create a HEC token for SC4S. When filling out the form for the token, leave the \u201cSelected Indexes\u201d pane blank and specify that a lastChanceIndex be created so that all data received by SC4S will have a target destination in Splunk.

"},{"location":"gettingstarted/quickstart_guide/#sc4s-setup-using-rhel","title":"SC4S setup (using RHEL)","text":"
  1. Set the host OS kernel to match the default receiver buffer of SC4S, which is set to 16MB.

a. Add the following to /etc/sysctl.conf:

```\nnet.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360\n```\n

b. Apply to the kernel:

```\nsysctl -p\n```\n
  1. Ensure the kernel is not dropping packets:

    netstat -su | grep \"receive errors\"\n
  2. Create the systemd unit file /lib/systemd/system/sc4s.service.

  3. Copy and paste from the SC4S sample unit file (Docker) or SC4S sample unit file (Podman).

  4. Install Podman or Docker:

    sudo yum -y install podman\n
    or
    sudo yum install docker-engine -y\n

  5. Create a Podman/Docker local volume that will contain the disk buffer files and other SC4S state files (choose one in the command below):

    sudo podman|docker volume create splunk-sc4s-var\n
  6. Create directories to be used as a mount point for local overrides and configurations:

    mkdir /opt/sc4s/local

    mkdir /opt/sc4s/archive

    mkdir /opt/sc4s/tls

  7. Create the environment file /opt/sc4s/env_file and replace the HEC_URL and HEC_TOKEN as necessary:

      SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088\n  SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\n  #Uncomment the following line if using untrusted SSL certificates\n  #SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no\n
  8. Configure SC4S for systemd and start SC4S:

    sudo systemctl daemon-reload

    sudo systemctl enable sc4s

    sudo systemctl start sc4s

  9. Check podman/docker logs for errors:

    sudo podman|docker logs SC4S\n
  10. Search on Splunk for successful installation of SC4S:

    index=* sourcetype=sc4s:events \"starting up\"\n
  11. Send sample data to default udp port 514 of SC4S host:

    echo \"Hello SC4S\" > /dev/udp/<SC4S_ip>/514\n
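As a quick sanity check, the kernel buffer change from step 1 can be verified programmatically. This is a sketch; the threshold 17039360 is the value written to /etc/sysctl.conf above:

```shell
# Sketch: verify the kernel receive buffer matches the value set in step 1.
expected=17039360
actual=$(sysctl -n net.core.rmem_max 2>/dev/null || echo 0)
if [ "$actual" -ge "$expected" ]; then
    echo "rmem_max OK ($actual)"
else
    echo "rmem_max too small ($actual); re-run: sysctl -p"
fi
```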
"},{"location":"sources/","title":"Introduction","text":"

When using Splunk Connect for Syslog to onboard a data source, the syslog-ng \u201capp-parser\u201d performs the operations that are traditionally performed at index-time by the corresponding Technical Add-on installed there. These index-time operations include linebreaking, source/sourcetype setting and timestamping. For this reason, if a data source is exclusively onboarded using SC4S then you will not need to install its corresponding Add-On on the indexers. You must, however, install the Add-on on the search head(s) for the user communities interested in this data source.

SC4S is designed to process \u201csyslog\u201d data: IETF RFC 5424, legacy BSD syslog, RFC 3164 (an informational document, not a standard), and many \u201calmost\u201d syslog formats.

When possible, data sources are identified and processed based on characteristics of the event that make them unique as compared to other events. For example, Cisco IOS devices include \u201c : %\u201d followed by a string, while Arista EOS devices use a valid RFC3164 header with a value in the \u201cPROGRAM\u201d position and \u201c%\u201d as the first character of the \u201cMESSAGE\u201d portion. This allows two similar event structures to be processed correctly.

When identification by message content alone is not possible (for example, the \u201csshd\u201d program field is commonly used across vendors), additional \u201chint\u201d or guidance configuration allows SC4S to better classify events. Hints can be applied by defining a specific port, which is then used as a property of the event, or by configuring a host name/IP pattern. For example, \u201cVMWARE VSPHERE\u201d products emit a number of \u201cPROGRAM\u201d fields that identify VMware-specific events in the syslog stream, and these can be sourcetyped automatically; however, because \u201csshd\u201d is not unique, such events will be treated as generic \u201cos:nix\u201d events until further configuration is applied. The administrator can refine the processing for VMware with either of these two hint mechanisms.
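For example, the port-based hint might look like the following entry in the SC4S env_file. The variable name below is illustrative; consult the documentation page for the specific source for the exact VENDOR_PRODUCT key:

```
# Illustrative only: route events arriving on UDP 5140 to the VMware log path
SC4S_LISTEN_VMWARE_VSPHERE_UDP_PORT=5140
```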

"},{"location":"sources/#supporting-previously-unknown-sources","title":"Supporting previously unknown sources","text":"

Many log sources can be supported using one of the flexible options available without specific code known as app-parsers.

New supported sources are added regularly. To request support for a new source, please submit an issue that includes a description of the vendor/product, configuration information, and a compressed pcap (.zip) captured in a non-production environment.

Many sources can be self-supported. While we encourage sharing new sources via the GitHub project to promote consistency and develop best practices, there is no requirement to engage with the community.

"},{"location":"sources/#almost-syslog","title":"Almost Syslog","text":"

Sources sending legacy, non-conformant RFC3164-like streams can be supported by creating an \u201cAlmost Syslog\u201d parser. The goal of such a parser is to process the syslog header so that other parsers can correctly parse and handle the event. The following example is taken from a currently supported format in which the source product uses an epoch value in the timestamp field.

    #Example event\n    #<134>1 1563249630.774247467 devicename security_event ids_alerted signature=1:28423:1 \n    # In the example note the vendor incorrectly included \"1\" following PRI defined in RFC5424 as indicating a compliant message\n    # The parser must remove the 1 before properly parsing\n    # The epoch time is captured by regex\n    # The epoch time is converted back into an RFC3306 date and provided to the parser\n    block parser syslog_epoch-parser() {    \n    channel {\n            filter { \n                message('^(\\<\\d+\\>)(?:1(?= ))? ?(\\d{10,13}(?:\\.\\d+)?) (.*)', flags(store-matches));\n            };  \n            parser {             \n                date-parser(\n                    format('%s.%f', '%s')\n                    template(\"$2\")\n                );\n            };\n            parser {\n                syslog-parser(\n\n                    flags(assume-utf8, expect-hostname, guess-timezone)\n                    template(\"$1 $S_ISODATE $3\")\n                    );\n            };\n            rewrite(set_rfc3164_epoch);                       \n\n    };\n    };\n    application syslog_epoch[sc4s-almost-syslog] {\n        parser { syslog_epoch-parser(); };   \n    };\n
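The timestamp handling in the parser above can be illustrated outside syslog-ng. A sketch using GNU date to perform the same epoch-to-ISO conversion that the date-parser() block provides:

```shell
# The epoch captured by the regex in the example event:
epoch="1563249630.774247467"
# Convert it to an ISO timestamp, as date-parser() does internally.
date -u -d "@${epoch}" +"%Y-%m-%dT%H:%M:%S"
# → 2019-07-16T04:00:30
```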
"},{"location":"sources/#standard-syslog-using-message-parsing","title":"Standard Syslog using message parsing","text":"

Syslog data conforming to RFC3164 or to the RFC standards mentioned above can be processed with an app-parser, allowing use of the default port rather than requiring a custom port. The following example, taken from a currently supported source, uses the value of \u201cprogram\u201d to identify the source, as this program value is unique. Take care to write filter conditions strictly enough that they do not conflict with similar sources.

block parser alcatel_switch-parser() {    \n channel {\n        rewrite {\n            r_set_splunk_dest_default(\n                index('netops')\n                sourcetype('alcatel:switch')\n                vendor('alcatel')\n                product('switch')\n                template('t_hdr_msg')\n            );              \n        };       \n\n\n   };\n};\napplication alcatel_switch[sc4s-syslog] {\n filter { \n        program('swlogd' type(string) flags(prefix));\n    }; \n    parser { alcatel_switch-parser(); };   \n};\n
"},{"location":"sources/#standard-syslog-vendor-product-by-source","title":"Standard Syslog vendor product by source","text":"

In some cases standard syslog is also generic and cannot be disambiguated from other sources by message content alone. When this happens and only a single sourcetype is desired, the \u201csimple\u201d option above is valid but requires managing a port. The following example allows use of a named port OR the vendor product by source configuration.

block parser dell_poweredge_cmc-parser() {    \n channel {\n\n        rewrite {\n            r_set_splunk_dest_default(\n                index('infraops')\n                sourcetype('dell:poweredge:cmc:syslog')\n                vendor('dell')\n                product('poweredge')\n                class('cmc')\n            );              \n        };       \n   };\n};\napplication dell_poweredge_cmc[sc4s-network-source] {\n filter { \n        (\"${.netsource.sc4s_vendor_product}\" eq \"dell_poweredge_cmc\"\n        or \"${SOURCE}\" eq \"s_DELL_POWEREDGE_CMC\")\n         and \"${fields.sc4s_vendor_product}\" eq \"\"\n    };    \n\n    parser { dell_poweredge_cmc-parser(); };   \n};\n
"},{"location":"sources/#filtering-events-from-output","title":"Filtering events from output","text":"

In some cases specific events may be considered \u201cnoise\u201d, and forwarding of these events to Splunk must be prevented. Version 2.0.0 of SC4S introduced a feature that makes this process easier and more efficient.

The following example will \u201cnull_queue\u201d (drop) Cisco IOS device events at the debug level. Note that Cisco does not use the PRI to indicate DEBUG, so a message filter is required.

block parser cisco_ios_debug-postfilter() {\n    channel {\n        #In this case the outcome is drop the event other logic such as adding indexed fields or editing the message is possible\n        rewrite(r_set_dest_splunk_null_queue);\n   };\n};\napplication cisco_ios_debug-postfilter[sc4s-postfilter] {\n filter {\n        \"${fields.sc4s_vendor}\" eq \"cisco\" and\n        \"${fields.sc4s_product}\" eq \"ios\"\n        #Note regex reads as\n        # start from first position\n        # Any atleast 1 char that is not a `-`\n        # constant '-7-'\n        and message('^%[^\\-]+-7-');\n    };\n    parser { cisco_ios_debug-postfilter(); };\n};\n
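The message() regex in the postfilter above can be checked in isolation with grep. A small sketch (the sample messages are illustrative):

```shell
# '^%[^-]+-7-' matches IOS messages whose mnemonic carries severity 7 (debug).
for msg in '%SYS-7-USERLOG_DEBUG: debug text' '%LINK-3-UPDOWN: Interface up'; do
    if echo "$msg" | grep -Eq '^%[^-]+-7-'; then
        echo "drop: $msg"
    else
        echo "keep: $msg"
    fi
done
# → drop: %SYS-7-USERLOG_DEBUG: debug text
# → keep: %LINK-3-UPDOWN: Interface up
```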
"},{"location":"sources/#another-example-to-drop-events-based-on-src-and-action-values-in-message","title":"Another example to drop events based on \u201csrc\u201d and \u201caction\u201d values in message","text":"
#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-checkpoint_drop\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() {    \n    channel {\n        rewrite(r_set_dest_splunk_null_queue);\n    };\n};\n\napplication app-dest-rewrite-checkpoint_drop-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n    filter {\n        match('checkpoint' value('fields.sc4s_vendor') type(string))\n        and match('syslog' value('fields.sc4s_product') type(string))\n\n        and match('Drop' value('.SDATA.sc4s@2620.action') type(string))\n        and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) );\n\n    };    \n    parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); };   \n};\n
"},{"location":"sources/#the-sc4s-fallback-sourcetype","title":"The SC4S \u201cfallback\u201d sourcetype","text":"

If SC4S receives an event on port 514 that matches no filter, that event is given a \u201cfallback\u201d sourcetype. If you see events in Splunk with the fallback sourcetype, you should determine which source the events are from and why they are not being sourcetyped correctly. The most common reason is the lack of an SC4S filter for that source; in some cases a misconfigured relay alters the integrity of the message format. In most cases this means a new SC4S filter must be developed. In this situation you can either build a filter or file an issue with the community to request help.

The \u201cfallback\u201d sourcetype is formatted in JSON to allow the administrator to see the constituent syslog-ng \u201cmacros\u201d (fields) that have been automatically parsed by the syslog-ng server. An RFC3164 (legacy BSD syslog) \u201con the wire\u201d raw message is usually (but unfortunately not always) comprised of the following syslog-ng macros, in this order and spacing:

<$PRI> $HOST $LEGACY_MSGHDR$MESSAGE\n

These fields can be very useful in building a new filter for that sourcetype. In addition, the indexed field sc4s_syslog_format is helpful in determining if the incoming message is standard RFC3164. A value of anything other than rfc3164 or rfc5424_strict indicates a vendor perturbation of standard syslog, which will warrant more careful examination when building a filter.
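When examining a fallback event, decoding the $PRI macro is often a useful first step in identifying the source: PRI encodes facility and severity as facility * 8 + severity. A sketch:

```shell
# Decode a PRI of 134 (as seen in a header like <134>):
pri=134
echo "facility=$((pri / 8)) severity=$((pri % 8))"
# → facility=16 severity=6  (local0.info)
```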

"},{"location":"sources/#splunk-connect-for-syslog-and-splunk-metadata","title":"Splunk Connect for Syslog and Splunk metadata","text":"

A key aspect of SC4S is to properly set Splunk metadata prior to the data arriving in Splunk (and before any TA processing takes place). The filters apply the proper index, source, sourcetype, host, and timestamp metadata automatically for each data source. Proper values for this metadata (including a recommended index) are included with all \u201cout-of-the-box\u201d log paths in SC4S and are chosen to interface properly with the corresponding TA in Splunk. If the defaults are not changed, the administrator will need to ensure that all recommended indexes are created to accept this data.

It is understood that default values will need to be changed in many installations. Each source documented in this section has a table entitled \u201cSourcetype and Index Configuration\u201d, which highlights the default index and sourcetype for each source. See the section \u201cSC4S metadata configuration\u201d in the \u201cConfiguration\u201d page for more information on how to override the default values in this table.

"},{"location":"sources/#unique-listening-ports","title":"Unique listening ports","text":"

SC4S supports unique listening ports for each source technology/log path (e.g. Cisco ASA), which is useful when the device is sending data on a port different from the typical default syslog port (UDP port 514). When a source device emits data that cannot be distinguished from other device types, a unique port may be required. The specific environment variables used for setting \u201cunique ports\u201d are outlined in each source document in this section.

Using the default ports as unique listening ports is discouraged since it can lead to unintended consequences. There were cases of customers using port 514 as the unique listening port dedicated for a particular vendor and then sending other events to the same port, which caused some of those events to be misclassified.

In most cases only one \u201cunique port\u201d is needed for each source. However, SC4S also supports multiple network listening ports per source, which can be useful for a narrow set of compliance use cases. When configuring a source port variable to enable multiple ports, use a comma-separated list with no spaces (e.g. SC4S_LISTEN_CISCO_ASA_UDP_PORT=5005,6005).

"},{"location":"sources/#filtering-by-an-extra-product-description","title":"Filtering by an extra product description","text":"

Because the unique listening port feature differentiates vendor and product based on the first two underscore-delimited tokens of the variable name, it is possible to filter events by an extra string appended to the product. For example, when several devices of the same type send logs over different ports, events can be routed to different indexes based only on the port value while retaining the proper vendor and product fields. In general, the convention is:

SC4S_LISTEN_{VENDOR}_{PRODUCT}_{PROTOCOL}_PORT={PORT VALUE 1},{PORT VALUE 2}...\n
But for special use cases it can be extended to:
SC4S_LISTEN_{VENDOR}_{PRODUCT}_{ADDITIONAL_STRING}_{PROTOCOL}_PORT={PORT VALUE},{PORT VALUE 2}...\n
This feature removes the need for complex pre/post filters.

Example:

SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001_UDP_PORT=18514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001\n
SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002_UDP_PORT=28514\n\nsets:\nvendor = < example vendor >\nproduct = < example product >\ntag = .source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-002\n
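The convention can be sketched in plain shell to show how a listener variable name decomposes (the names are the illustrative examples from above):

```shell
var='SC4S_LISTEN_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001_UDP_PORT'
body=${var#SC4S_LISTEN_}      # strip the common prefix
body=${body%_UDP_PORT}        # strip the protocol/port suffix
vendor=${body%%_*}            # first underscore-delimited token
rest=${body#*_}
product=${rest%%_*}           # second token; the remainder is the extra string
echo "vendor=$vendor product=$product tag=.source.s_${body}"
# → vendor=EXAMPLEVENDOR product=EXAMPLEPRODUCT tag=.source.s_EXAMPLEVENDOR_EXAMPLEPRODUCT_GROUP01-001
```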

"},{"location":"sources/base/cef/","title":"Common Event Format (CEF)","text":""},{"location":"sources/base/cef/#product-various-products-that-send-cef-format-messages-via-syslog","title":"Product - Various products that send CEF-format messages via syslog","text":"

Each CEF product should have their own source entry in this documentation set. In a departure from normal configuration, all CEF products should use the \u201cCEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the CEF log path handles all products sending events to SC4S in the CEF format. Examples of this include Arcsight, Imperva, and Cyberark. Therefore, the CEF environment variables for unique port, archive, etc. should be set only once.

If your deployment has multiple CEF devices that send to more than one port, set the CEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.

The source documentation included below is a reference baseline for any product that sends data using the CEF log path.

Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/base/cef/#splunk-metadata-with-cef-events","title":"Splunk Metadata with CEF events","text":"

The keys (first column) in splunk_metadata.csv for CEF data sources have a slightly different meaning than those for non-CEF ones. The typical vendor_product syntax is instead replaced by checks against specific columns of the CEF event \u2013 namely the first, second, and fourth columns following the leading CEF:0 (\u201ccolumn 0\u201d). These specific columns refer to the CEF device_vendor, device_product, and device_event_class, respectively. The third column, device_version, is not used for metadata assignment.

SC4S sets metadata based on the first two columns, and (optionally) the fourth. While the key (first column) in the splunk_metadata file for non-CEF sources uses an arbitrary \u201cvendor_product\u201d syntax, the key for CEF events is based on the actual contents of columns 1, 2, and 4 of the CEF event, namely:

device_vendor_device_product_device_class

The final device_class portion is optional. Therefore, CEF entries in splunk_metadata can have a key representing the vendor and product, and others representing a vendor and product coupled with one or more additional classes. This allows for more granular metadata assignment (or overrides).

Here is a snippet of a sample Imperva CEF event that includes a CEF device class entry (which is \u201cFirewall\u201d):

Apr 19 10:29:53 3.3.3.3 CEF:0|Imperva Inc.|SecureSphere|12.0.0|Firewall|SSL Untraceable Connection|Medium|\n

and the corresponding match in splunk_metadata.csv:

Imperva Inc._SecureSphere_Firewall,sourcetype,imperva:waf:firewall:cef\n
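The key construction can be sketched with awk, pulling columns 1, 2, and 4 after the leading CEF:0 (awk field numbers are offset by one because CEF:0 itself is field 1):

```shell
event='CEF:0|Imperva Inc.|SecureSphere|12.0.0|Firewall|SSL Untraceable Connection|Medium|'
echo "$event" | awk -F'|' '{print $2 "_" $3 "_" $5}'
# → Imperva Inc._SecureSphere_Firewall
```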
"},{"location":"sources/base/cef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/base/cef/#default-source","title":"Default Source","text":"source notes Varies Varies"},{"location":"sources/base/cef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/cef/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/base/cef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_CEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_CEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/leef/","title":"Log Extended Event Format (LEEF)","text":""},{"location":"sources/base/leef/#product-various-products-that-send-leef-v1-and-v2-format-messages-via-syslog","title":"Product - Various products that send LEEF V1 and V2 format messages via syslog","text":"

Each LEEF product should have their own source entry in this documentation set by vendor. In a departure from normal configuration, all LEEF products should use the \u201cLEEF\u201d version of the unique port and archive environment variable settings (rather than a unique one per product), as the LEEF log path handles all products sending events to SC4S in the LEEF format. Examples of this include QRadar itself as well as other legacy systems. Therefore, the LEEF environment variables for unique port, archive, etc. should be set only once.

If your deployment has multiple LEEF devices that send to more than one port, set the LEEF unique port variable(s) as a comma-separated list. See Unique Listening Ports for details.

The source documentation included below is a reference baseline for any product that sends data using the LEEF log path.

Some vendors implement LEEF v2.0 format events incorrectly, omitting the required \u201ckey=value\u201d separator field from the LEEF header, thus forcing the consumer to assume the default tab \\t character. SC4S will correctly process this omission, but will not correctly process other non-compliant formats.

The LEEF format allows for the inclusion of a field devTime containing the device timestamp and allows the sender to also specify the format of this timestamp in another field called devTimeFormat, which uses the Java Time format. SC4S uses syslog-ng strptime format which is not directly translatable to the Java Time format. Therefore, SC4S has provided support for the following common formats. If needed, additional time formats can be requested via an issue on github.

    '%s.%f',\n    '%s',\n    '%b %d %H:%M:%S.%f',\n    '%b %d %H:%M:%S',\n    '%b %d %Y %H:%M:%S.%f',\n    '%b %e %Y %H:%M:%S',\n    '%b %e %H:%M:%S.%f',\n    '%b %e %H:%M:%S',\n    '%b %e %Y %H:%M:%S.%f',\n    '%b %e %Y %H:%M:%S'  \n
Ref Link Splunk Add-on LEEF None Product Manual https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/com.ibm.dsm.doc/c_LEEF_Format_Guide_intro.html"},{"location":"sources/base/leef/#splunk-metadata-with-leef-events","title":"Splunk Metadata with LEEF events","text":"

The keys (first column) in splunk_metadata.csv for LEEF data sources have a slightly different meaning than those for non-LEEF ones. The typical vendor_product syntax is instead replaced by checks against specific columns of the LEEF event \u2013 namely the first and second columns following the leading LEEF:VERSION (\u201ccolumn 0\u201d). These columns refer to the LEEF device_vendor and device_product, respectively.

device_vendor_device_product

Here is a snippet of a sample LANCOPE event in LEEF 2.0 format:

<111>Apr 19 10:29:53 3.3.3.3 LEEF:2.0|Lancope|StealthWatch|1.0|41|^|src=192.0.2.0^dst=172.50.123.1^sev=5^cat=anomaly^srcPort=81^dstPort=21^usrName=joe.black\n

and the corresponding match in splunk_metadata.csv:

Lancope_StealthWatch,source,lancope:stealthwatch\n
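As with CEF, the key derivation can be sketched with awk; only the first two columns after the LEEF:VERSION header are used:

```shell
event='LEEF:2.0|Lancope|StealthWatch|1.0|41|^|src=192.0.2.0^dst=172.50.123.1'
echo "$event" | awk -F'|' '{print $2 "_" $3}'
# → Lancope_StealthWatch
```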
"},{"location":"sources/base/leef/#default-sourcetype","title":"Default Sourcetype","text":"sourcetype notes LEEF:1 Common sourcetype for all LEEF v1 events LEEF:2:<separator> Common sourcetype for all LEEF v2 events separator is the printable literal or hex value of the separator used in the event"},{"location":"sources/base/leef/#default-source","title":"Default Source","text":"source notes vendor:product Varies"},{"location":"sources/base/leef/#default-index-configuration","title":"Default Index Configuration","text":"key source index notes Vendor_Product Varies main none"},{"location":"sources/base/leef/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/base/leef/#options","title":"Options","text":"Variable default description SC4S_LISTEN_LEEF_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_LEEF_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_LEEF_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/nix/","title":"Generic *NIX","text":"

Many appliance vendors utilize Linux and BSD distributions as the foundation of their solutions. When configured to log via syslog, these devices' OS logs (from a security perspective) can be monitored using the common Splunk Nix TA.

Note: This is NOT a replacement for or alternative to the Splunk Universal forwarder on Linux and Unix. For general-purpose server applications, the Universal Forwarder offers more comprehensive collection of events and metrics appropriate for both security and operations use cases.

Ref Link Splunk Add-on https://splunkbase.splunk.com/app/833/"},{"location":"sources/base/nix/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes nix:syslog None"},{"location":"sources/base/nix/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes nix_syslog nix:syslog osnix none"},{"location":"sources/base/nix/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/base/nix/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/base/nix/#options","title":"Options","text":"Variable default description SC4S_DEST_NIX_SYSLOG_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_NIX_SYSLOG_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/simple/","title":"Simple Log path by port","text":"

The SIMPLE source configuration allows configuration of a log path for SC4S using a single port to a single index/sourcetype combination to quickly onboard new sources that have not been formally supported in the product. Source data must use RFC5424 or a common variant of RFC3164 formatting.

"},{"location":"sources/base/simple/#splunk-metadata-with-simple-events","title":"Splunk Metadata with SIMPLE events","text":"

Each key (first column) in splunk_metadata.csv for SIMPLE data sources is a user-created key using the vendor_product convention. For example, to onboard a new product first firewall using a sourcetype of first:firewall and index netfw, add the following two lines to the configuration file as shown:

first_firewall,index,netfw\nfirst_firewall,sourcetype,first:firewall\n
"},{"location":"sources/base/simple/#options","title":"Options","text":"

For the variables below, replace VENDOR_PRODUCT with the key (converted to upper case) used in the splunk_metadata.csv. Based on the example above, to establish a tcp listener for first firewall we would use SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT.
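The key-to-variable mapping is mechanical: upper-case the key and wrap it in the listener variable pattern. A sketch:

```shell
key='first_firewall'
upper=$(echo "$key" | tr '[:lower:]' '[:upper:]')
echo "SC4S_LISTEN_SIMPLE_${upper}_TCP_PORT"
# → SC4S_LISTEN_SIMPLE_FIRST_FIREWALL_TCP_PORT
```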

Variable default description SC4S_LISTEN_SIMPLE_VENDOR_PRODUCT_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_SIMPLE_VENDOR_PRODUCT_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_SIMPLE_VENDOR_PRODUCT_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_ARCHIVE_SIMPLE_VENDOR_PRODUCT no Enable archive to disk for this specific source SC4S_DEST_SIMPLE_VENDOR_PRODUCT_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source"},{"location":"sources/base/simple/#important-notes","title":"Important Notes","text":""},{"location":"sources/vendor/AVI/","title":"Common","text":""},{"location":"sources/vendor/AVI/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/AVI/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://avinetworks.com/docs/latest/syslog-formats/"},{"location":"sources/vendor/AVI/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes avi:events None"},{"location":"sources/vendor/AVI/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes avi_vantage avi:events netops none"},{"location":"sources/vendor/Alcatel/Switch/","title":"Switch","text":""},{"location":"sources/vendor/Alcatel/Switch/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Alcatel/Switch/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Alcatel/Switch/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes alcatel:switch None"},{"location":"sources/vendor/Alcatel/Switch/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes alcatel_switch alcatel:switch netops 
none"},{"location":"sources/vendor/Alsid/Alsid/","title":"Alsid","text":"

This product has been acquired by Tenable and republished under a new product name; this configuration is obsolete.

"},{"location":"sources/vendor/Alsid/Alsid/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Alsid/Alsid/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/5173/ Product Manual unknown"},{"location":"sources/vendor/Alsid/Alsid/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes alsid:syslog None"},{"location":"sources/vendor/Alsid/Alsid/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes alsid_syslog alsid:syslog oswinsec none"},{"location":"sources/vendor/Arista/","title":"EOS","text":""},{"location":"sources/vendor/Arista/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Arista/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Arista/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes arista:eos:* None"},{"location":"sources/vendor/Arista/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes arista_eos arista:eos netops none arista_eos_$PROCESSNAME arista:eosq netops The \u201cprocess\u201d field is used from the event"},{"location":"sources/vendor/Aruba/ap/","title":"Access Points","text":""},{"location":"sources/vendor/Aruba/ap/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Aruba/ap/#links","title":"Links","text":"Ref Link"},{"location":"sources/vendor/Aruba/ap/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes aruba:syslog Dynamically Created"},{"location":"sources/vendor/Aruba/ap/#index-configuration","title":"Index Configuration","text":"key index notes aruba_ap netops none"},{"location":"sources/vendor/Aruba/ap/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-aruba_ap.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-aruba_ap[sc4s-vps] {\n filter { \n        host("aruba-ap-*" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('aruba')\n            product('ap')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Aruba/clearpass/","title":"Clearpass","text":""},{"location":"sources/vendor/Aruba/clearpass/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Aruba/clearpass/#links","title":"Links","text":"Ref Link"},{"location":"sources/vendor/Aruba/clearpass/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes aruba:clearpass Dynamically Created"},{"location":"sources/vendor/Aruba/clearpass/#index-configuration","title":"Index Configuration","text":"key index notes aruba_clearpass netops none aruba_clearpass_endpoint-profile netops none aruba_clearpass_alert netops none aruba_clearpass_endpoint-audit-record netops none aruba_clearpass_policy-server-session netops none aruba_clearpass_post-auth-monit-config netops none aruba_clearpass_snmp-session-log netops none aruba_clearpass_radius-session netops none aruba_clearpass_system-event netops none aruba_clearpass_tacacs-accounting-detail netops none aruba_clearpass_tacacs-accounting-record netops none"},{"location":"sources/vendor/Aruba/clearpass/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-aruba_clearpass.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-aruba_clearpass[sc4s-vps] {\n filter { \n        host("aruba-cp-*" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('aruba')\n            product('clearpass')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Avaya/","title":"SIP Manager","text":""},{"location":"sources/vendor/Avaya/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Avaya/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Avaya/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes avaya:avaya None"},{"location":"sources/vendor/Avaya/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes avaya_sipmgr avaya:avaya main none"},{"location":"sources/vendor/Aviatrix/aviatrix/","title":"Aviatrix","text":""},{"location":"sources/vendor/Aviatrix/aviatrix/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Aviatrix/aviatrix/#product-switches","title":"Product - Switches","text":"Ref Link Splunk Add-on \u2013 Product Manual Link"},{"location":"sources/vendor/Aviatrix/aviatrix/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes aviatrix:cloudx-cli None aviatrix:kernel None aviatrix:cloudxd None aviatrix:avx-nfq None aviatrix:avx-gw-state-sync None aviatrix:perfmon None"},{"location":"sources/vendor/Aviatrix/aviatrix/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes aviatrix_cloudx-cli aviatrix:cloudx-cli netops none aviatrix_kernel aviatrix:kernel netops none aviatrix_cloudxd aviatrix:cloudxd netops none aviatrix_avx-nfq aviatrix:avx-nfq netops none aviatrix_avx-gw-state-sync aviatrix:avx-gw-state-sync netops none aviatrix_perfmon aviatrix:perfmon netops none"},{"location":"sources/vendor/Barracuda/waf/","title":"WAF (Cloud)","text":""},{"location":"sources/vendor/Barracuda/waf/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Barracuda/waf/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual 
https://campus.barracuda.com/product/WAAS/doc/79462622/log-export"},{"location":"sources/vendor/Barracuda/waf/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes barracuda:tr none"},{"location":"sources/vendor/Barracuda/waf/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes barracuda_waf barracuda:web:firewall netwaf None"},{"location":"sources/vendor/Barracuda/waf/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-barracuda_syslog.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-barracuda_syslog[sc4s-vps] {\n filter {      \n        netmask(169.254.100.1/24)\n        or host("barracuda" type(string) flags(ignore-case))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('barracuda')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Barracuda/waf_on_prem/","title":"Barracuda WAF (On Premises)","text":""},{"location":"sources/vendor/Barracuda/waf_on_prem/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Barracuda/waf_on_prem/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3776 Product Manual https://campus.barracuda.com/product/webapplicationfirewall/doc/92767349/exporting-log-formats/"},{"location":"sources/vendor/Barracuda/waf_on_prem/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes barracuda:system program(\u201cSYS\u201d) barracuda:waf program(\u201cWF\u201d) barracuda:web program(\u201cTR\u201d) barracuda:audit program(\u201cAUDIT\u201d) barracuda:firewall program(\u201cNF\u201d)"},{"location":"sources/vendor/Barracuda/waf_on_prem/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes barracuda_system barracuda:system netwaf None barracuda_waf barracuda:waf netwaf None barracuda_web barracuda:web netwaf None barracuda_audit barracuda:audit netwaf None barracuda_firewall barracuda:firewall netwaf None"},{"location":"sources/vendor/BeyondTrust/sra/","title":"Secure Remote Access (Bomgar)","text":""},{"location":"sources/vendor/BeyondTrust/sra/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/BeyondTrust/sra/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/BeyondTrust/sra/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes beyondtrust:sra None"},{"location":"sources/vendor/BeyondTrust/sra/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes beyondtrust_sra beyondtrust:sra infraops none"},{"location":"sources/vendor/BeyondTrust/sra/#options","title":"Options","text":"Variable default description SC4S_DEST_BEYONDTRUST_SRA_SPLUNK_HEC_FMT JSON Restructure data from vendor 
format to json for splunk destinations set to \u201cNONE\u201d for native format SC4S_DEST_BEYONDTRUST_SRA_SYSLOG_FMT SDATA Restructure data from vendor format to SDATA for SYSLOG destinations set to \u201cNONE\u201d for native format"},{"location":"sources/vendor/Broadcom/brightmail/","title":"Brightmail","text":""},{"location":"sources/vendor/Broadcom/brightmail/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/brightmail/#links","title":"Links","text":"Ref Link Splunk Add-on TBD Product Manual https://support.symantec.com/us/en/article.howto38250.html"},{"location":"sources/vendor/Broadcom/brightmail/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes symantec:smg Requires version TA 3.6"},{"location":"sources/vendor/Broadcom/brightmail/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes symantec_brightmail symantec:smg email none"},{"location":"sources/vendor/Broadcom/brightmail/#options","title":"Options","text":"Variable default description SC4S_SOURCE_FF_SYMANTEC_BRIGHTMAIL_GROUPMSG yes Email processing events generated by the bmserver process will be grouped by host+program+pid+msg ID into a single event SC4S_DEST_SYMANTEC_BRIGHTMAIL_SPLUNK_HEC_FMT empty if \u201cJSON\u201d and GROUPMSG is enabled format the event in json SC4S_DEST_SYMANTEC_BRIGHTMAIL_SYSLOG_FMT empty if \u201cSDATA\u201d and GROUPMSG is enabled format the event in rfc5424 sdata"},{"location":"sources/vendor/Broadcom/dlp/","title":"Symantec DLP","text":""},{"location":"sources/vendor/Broadcom/dlp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/dlp/#links","title":"Links","text":"Ref Link Splunk Add-on Symantec DLP https://splunkbase.splunk.com/app/3029/ Source doc https://knowledge.broadcom.com/external/article/159509/generating-syslog-messages-from-data-los.html"},{"location":"sources/vendor/Broadcom/dlp/#sourcetypes","title":"Sourcetypes","text":"sourcetype 
notes symantec:dlp:syslog None"},{"location":"sources/vendor/Broadcom/dlp/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes symantec_dlp symantec:dlp:syslog netdlp none"},{"location":"sources/vendor/Broadcom/dlp/#option-1-correct-source-syslog-formats","title":"Option 1: Correct Source syslog formats","text":""},{"location":"sources/vendor/Broadcom/dlp/#syslog-alert-response","title":"Syslog Alert Response","text":"

Log in to Symantec DLP and edit the Syslog Response rule. The default configuration will appear as follows:

$POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n

DO NOT replace the text; prepend the following literal:

SymantecDLPAlert: \n

Result (note the space between \u2018:\u2019 and \u2018$\u2019):

SymantecDLPAlert: $POLICY$^^$INCIDENT_ID$^^$SUBJECT$^^$SEVERITY$^^$MATCH_COUNT$^^$RULES$^^$SENDER$^^$RECIPIENTS$^^$BLOCKED$^^$FILE_NAME$^^$PARENT_PATH$^^$SCAN$^^$TARGET$^^$PROTOCOL$^^$INCIDENT_SNAPSHOT$\n
"},{"location":"sources/vendor/Broadcom/dlp/#syslog-system-events","title":"Syslog System events","text":""},{"location":"sources/vendor/Broadcom/dlp/#option-2-manual-vendor-product-by-source-parser-configuration","title":"Option 2: Manual Vendor Product by source Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-symantec_dlp.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-symantec_dlp[sc4s-vps] {\n filter {      \n        #netmask(169.254.100.1/24)\n        #host("-esx-")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('symantec')\n            product('dlp')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Broadcom/ep/","title":"Symantec Endpoint Protection (SEPM)","text":""},{"location":"sources/vendor/Broadcom/ep/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/ep/#product-symantec-endpoint-protection","title":"Product - Symantec Endpoint Protection","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2772/ Product Manual https://techdocs.broadcom.com/content/broadcom/techdocs/us/en/symantec-security-software/endpoint-security-and-management/endpoint-protection/all/Monitoring-Reporting-and-Enforcing-Compliance/viewing-logs-v7522439-d37e464/exporting-data-to-a-syslog-server-v8442743-d15e1107.html"},{"location":"sources/vendor/Broadcom/ep/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes symantec:ep:syslog Warning the syslog method of accepting EP logs has been reported to show high data loss and is not Supported by Splunk symantec:ep:admin:syslog none symantec:ep:agent:syslog none symantec:ep:agt:system:syslog none symantec:ep:behavior:syslog none symantec:ep:packet:syslog none symantec:ep:policy:syslog none symantec:ep:proactive:syslog none symantec:ep:risk:syslog none symantec:ep:scan:syslog none symantec:ep:scm:system:syslog none symantec:ep:security:syslog none symantec:ep:traffic:syslog none"},{"location":"sources/vendor/Broadcom/ep/#index-configuration","title":"Index Configuration","text":"key index notes symantec_ep epav none"},{"location":"sources/vendor/Broadcom/proxy/","title":"ProxySG/ASG","text":"

Symantec (now Broadcom) ProxySG/ASG was formerly known as the \u201cBluecoat\u201d proxy.

Broadcom products include products formerly marketed under the Symantec and Bluecoat brands.

"},{"location":"sources/vendor/Broadcom/proxy/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/proxy/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2758/ Product Manual https://support.symantec.com/us/en/article.tech242216.html"},{"location":"sources/vendor/Broadcom/proxy/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes bluecoat:proxysg:access:kv Requires version TA 3.8.1 bluecoat:proxysg:access:syslog Requires version TA 3.8.1"},{"location":"sources/vendor/Broadcom/proxy/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes bluecoat_proxy bluecoat:proxysg:access:syslog netops none bluecoat_proxy_splunkkv bluecoat:proxysg:access:kv netproxy none"},{"location":"sources/vendor/Broadcom/proxy/#setup-and-configuration","title":"Setup and Configuration","text":"
<111>1 $(date)T$(x-bluecoat-hour-utc):$(x-bluecoat-minute-utc):$(x-bluecoat-second-utc)Z $(s-computername) ProxySG - splunk_format - c-ip=$(c-ip) rs-Content-Type=$(quot)$(rs(Content-Type))$(quot)  cs-auth-groups=$(cs-auth-groups) cs-bytes=$(cs-bytes) cs-categories=$(cs-categories) cs-host=$(cs-host) cs-ip=$(cs-ip) cs-method=$(cs-method) cs-uri-port=$(cs-uri-port) cs-uri-scheme=$(cs-uri-scheme) cs-User-Agent=$(quot)$(cs(User-Agent))$(quot) cs-username=$(cs-username) dnslookup-time=$(dnslookup-time) duration=$(duration) rs-status=$(rs-status) rs-version=$(rs-version) s-action=$(s-action) s-ip=$(s-ip) service.name=$(service.name) service.group=$(service.group) s-supplier-ip=$(s-supplier-ip) s-supplier-name=$(s-supplier-name) sc-bytes=$(sc-bytes) sc-filter-result=$(sc-filter-result) sc-status=$(sc-status) time-taken=$(time-taken) x-exception-id=$(x-exception-id) x-virus-id=$(x-virus-id) c-url=$(quot)$(url)$(quot) cs-Referer=$(quot)$(cs(Referer))$(quot) c-cpu=$(c-cpu) connect-time=$(connect-time) cs-auth-groups=$(cs-auth-groups) cs-headerlength=$(cs-headerlength) cs-threat-risk=$(cs-threat-risk) r-ip=$(r-ip) r-supplier-ip=$(r-supplier-ip) rs-time-taken=$(rs-time-taken) rs-server=$(rs(server)) s-connect-type=$(s-connect-type) s-icap-status=$(s-icap-status) s-sitename=$(s-sitename) s-source-port=$(s-source-port) s-supplier-country=$(s-supplier-country) sc-Content-Encoding=$(sc(Content-Encoding)) sr-Accept-Encoding=$(sr(Accept-Encoding)) x-auth-credential-type=$(x-auth-credential-type) x-cookie-date=$(x-cookie-date) x-cs-certificate-subject=$(x-cs-certificate-subject) x-cs-connection-negotiated-cipher=$(x-cs-connection-negotiated-cipher) x-cs-connection-negotiated-cipher-size=$(x-cs-connection-negotiated-cipher-size) x-cs-connection-negotiated-ssl-version=$(x-cs-connection-negotiated-ssl-version) x-cs-ocsp-error=$(x-cs-ocsp-error) x-cs-Referer-uri=$(x-cs(Referer)-uri) x-cs-Referer-uri-address=$(x-cs(Referer)-uri-address) 
x-cs-Referer-uri-extension=$(x-cs(Referer)-uri-extension) x-cs-Referer-uri-host=$(x-cs(Referer)-uri-host) x-cs-Referer-uri-hostname=$(x-cs(Referer)-uri-hostname) x-cs-Referer-uri-path=$(x-cs(Referer)-uri-path) x-cs-Referer-uri-pathquery=$(x-cs(Referer)-uri-pathquery) x-cs-Referer-uri-port=$(x-cs(Referer)-uri-port) x-cs-Referer-uri-query=$(x-cs(Referer)-uri-query) x-cs-Referer-uri-scheme=$(x-cs(Referer)-uri-scheme) x-cs-Referer-uri-stem=$(x-cs(Referer)-uri-stem) x-exception-category=$(x-exception-category) x-exception-category-review-message=$(x-exception-category-review-message) x-exception-company-name=$(x-exception-company-name) x-exception-contact=$(x-exception-contact) x-exception-details=$(x-exception-details) x-exception-header=$(x-exception-header) x-exception-help=$(x-exception-help) x-exception-last-error=$(x-exception-last-error) x-exception-reason=$(x-exception-reason) x-exception-sourcefile=$(x-exception-sourcefile) x-exception-sourceline=$(x-exception-sourceline) x-exception-summary=$(x-exception-summary) x-icap-error-code=$(x-icap-error-code) x-rs-certificate-hostname=$(x-rs-certificate-hostname) x-rs-certificate-hostname-category=$(x-rs-certificate-hostname-category) x-rs-certificate-observed-errors=$(x-rs-certificate-observed-errors) x-rs-certificate-subject=$(x-rs-certificate-subject) x-rs-certificate-validate-status=$(x-rs-certificate-validate-status) x-rs-connection-negotiated-cipher=$(x-rs-connection-negotiated-cipher) x-rs-connection-negotiated-cipher-size=$(x-rs-connection-negotiated-cipher-size) x-rs-connection-negotiated-ssl-version=$(x-rs-connection-negotiated-ssl-version) x-rs-ocsp-error=$(x-rs-ocsp-error) cs-uri-extension=$(cs-uri-extension) cs-uri-path=$(cs-uri-path) cs-uri-query=$(quot)$(cs-uri-query)$(quot) c-uri-pathquery=$(c-uri-pathquery)\n
"},{"location":"sources/vendor/Broadcom/sslva/","title":"SSL Visibility Appliance","text":""},{"location":"sources/vendor/Broadcom/sslva/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Broadcom/sslva/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://knowledge.broadcom.com/external/article/168879/when-sending-session-logs-from-ssl-visib.html"},{"location":"sources/vendor/Broadcom/sslva/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes broadcom:sslva none"},{"location":"sources/vendor/Broadcom/sslva/#index-configuration","title":"Index Configuration","text":"key index notes broadcom_sslva netproxy none"},{"location":"sources/vendor/Brocade/switch/","title":"Switch","text":""},{"location":"sources/vendor/Brocade/switch/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Brocade/switch/#product-switches","title":"Product - Switches","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Brocade/switch/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes brocade:syslog None"},{"location":"sources/vendor/Brocade/switch/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes brocade_syslog brocade:syslog netops none"},{"location":"sources/vendor/Brocade/switch/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app_parsers/app-vps-brocade_syslog.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-brocade_syslog[sc4s-vps] {\n filter { \n        host("^test_brocade-")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('brocade')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Buffalo/","title":"Terastation","text":""},{"location":"sources/vendor/Buffalo/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Buffalo/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Buffalo/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes buffalo:terastation None"},{"location":"sources/vendor/Buffalo/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes buffalo_terastation buffalo:terastation infraops none"},{"location":"sources/vendor/Buffalo/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-buffalo_terastation.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-buffalo_terastation[sc4s-vps] {\n filter { \n        host("^test_buffalo_terastation-")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('buffalo')\n            product('terastation')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Checkpoint/firewallos/","title":"Firewall OS","text":"

The Firewall OS format is used by devices that support direct syslog output.

"},{"location":"sources/vendor/Checkpoint/firewallos/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual unknown"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log:fw:syslog None"},{"location":"sources/vendor/Checkpoint/firewallos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_fw cp_log:fw:syslog netops none"},{"location":"sources/vendor/Checkpoint/firewallos/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-checkpoint_fw.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-checkpoint_fw[sc4s-vps] {\n filter { \n        host("^checkpoint_fw-")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('checkpoint')\n            product('fw')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Checkpoint/logexporter_5424/","title":"Log Exporter (Syslog)","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#key-facts","title":"Key Facts","text":" Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4293 Product Manual https://sc1.checkpoint.com/documents/App_for_Splunk/html_frameset.htm"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log:syslog None"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_syslog cp_log:syslog netops none"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#source-and-index-configuration","title":"Source and Index Configuration","text":"

Checkpoint Software blades with a CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source metadata is left at its defaults.

key source index notes checkpoint_syslog_dlp dlp netdlp none checkpoint_syslog_email email email none checkpoint_syslog_firewall firewall netfw none checkpoint_syslog_sessions sessions netops none checkpoint_syslog_web web netproxy none checkpoint_syslog_audit audit netops none checkpoint_syslog_endpoint endpoint netops none checkpoint_syslog_network network netops checkpoint_syslog_ids ids netids checkpoint_syslog_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_5424/#source-configuration","title":"Source Configuration","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#splunk-side","title":"Splunk Side","text":""},{"location":"sources/vendor/Checkpoint/logexporter_5424/#checkpoint-side","title":"Checkpoint Side","text":"
  1. Go to the Check Point terminal and use the expert command to log in to expert mode.
  2. Ensure the built-in $EXPORTERDIR shell variable is defined with:
echo \"$EXPORTERDIR\"\n
  3. Create a new Log Exporter target in $EXPORTERDIR/targets with:
LOG_EXPORTER_NAME='SyslogToSplunk' # Name this something unique but meaningful\nTARGET_SERVER='example.internal' # The indexer or heavy forwarder to send logs to. Can be an FQDN or an IP address.\nTARGET_PORT='514' # Syslog defaults to 514\nTARGET_PROTOCOL='tcp' # IETF Syslog is specifically TCP\n\ncp_log_export add name \"$LOG_EXPORTER_NAME\" target-server \"$TARGET_SERVER\" target-port \"$TARGET_PORT\" protocol \"$TARGET_PROTOCOL\" format 'syslog'\n
  4. Make a global copy of the built-in Syslog format definition with:
cp \"$EXPORTERDIR/conf/SyslogFormatDefinition.xml\" \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\"\n
  5. Edit $EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml by modifying the start_message_body, fields_separatator, and field_value_separatator keys as shown below. a. Note: The misspelling of \u201cseparator\u201d as \u201cseparatator\u201d is intentional; it lines up with both Checkpoint\u2019s documentation and parser implementation.
<start_message_body>[sc4s@2620 </start_message_body>\n<!-- ... -->\n<fields_separatator> </fields_separatator>\n<!-- ... -->\n<field_value_separatator>=</field_value_separatator>\n
  6. Copy the new format config to your new target\u2019s conf directory with:
cp \"$EXPORTERDIR/conf/SplunkRecommendedFormatDefinition.xml\"  \"$EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf\"\n
  7. Edit $EXPORTERDIR/targets/$LOG_EXPORTER_NAME/targetConfiguration.xml by adding the reference to the $EXPORTERDIR/targets/$LOG_EXPORTER_NAME/conf/SplunkRecommendedFormatDefinition.xml under the key <formatHeaderFile>. a. For example, if $EXPORTERDIR is /opt/CPrt-R81/log_exporter and $LOG_EXPORTER_NAME is SyslogToSplunk, the absolute path will become:
<formatHeaderFile>/opt/CPrt-R81/log_exporter/targets/SyslogToSplunk/conf/SplunkRecommendedFormatDefinition.xml</formatHeaderFile>\n
  8. Restart the new log exporter with:
cp_log_export restart name \"$LOG_EXPORTER_NAME\"\n
  9. Warning: If you\u2019re migrating from the old Splunk Syslog format, make sure that the older format\u2019s log exporter is disabled, as running both would lead to data duplication.
"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/","title":"Log Exporter (Splunk)","text":"

The \u201cSplunk Format\u201d is legacy and should not be used for new deployments; see Log Exporter (Syslog).

"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#key-facts","title":"Key Facts","text":"

The Splunk host field will be derived as follows, using the first match:

If the host is in the format <host>-v_<bladename>, the bladename is used as the host.

"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4293/ Product Manual https://sc1.checkpoint.com/documents/App_for_Splunk/html_frameset.htm"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cp_log None"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes checkpoint_splunk cp_log netops none"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#source-and-index-configuration","title":"Source and Index Configuration","text":"

Checkpoint Software blades with a CIM mapping have been sub-grouped into sources to allow routing to appropriate indexes. All other source metadata is left at its defaults.

key source index notes checkpoint_splunk_dlp dlp netdlp none checkpoint_splunk_email email email none checkpoint_splunk_firewall firewall netfw none checkpoint_splunk_os program:${program} netops none checkpoint_splunk_sessions sessions netops none checkpoint_splunk_web web netproxy none checkpoint_splunk_audit audit netops none checkpoint_splunk_endpoint endpoint netops none checkpoint_splunk_network network netops checkpoint_splunk_ids ids netids checkpoint_splunk_ids_malware ids_malware netids"},{"location":"sources/vendor/Checkpoint/logexporter_legacy/#options","title":"Options","text":"Variable default description SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL no Suppress any duplicate product+loguid pairs processed within 2 seconds of the last matching event SC4S_LISTEN_CHECKPOINT_SPLUNK_OLD_HOST_RULES empty string when set to yes reverts host name selection order to originsicname\u2013>origin_sic_name\u2013>hostname"},{"location":"sources/vendor/Cisco/cisco_ace/","title":"Application Control Engine (ACE)","text":""},{"location":"sources/vendor/Cisco/cisco_ace/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ace/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Cisco/cisco_ace/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ace None"},{"location":"sources/vendor/Cisco/cisco_ace/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ace cisco:ace netops none"},{"location":"sources/vendor/Cisco/cisco_acs/","title":"Cisco Access Control System (ACS)","text":""},{"location":"sources/vendor/Cisco/cisco_acs/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1811/ Product Manual 
https://community.cisco.com/t5/security-documents/acs-5-x-configuring-the-external-syslog-server/ta-p/3143143"},{"location":"sources/vendor/Cisco/cisco_acs/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:acs Aggregation used"},{"location":"sources/vendor/Cisco/cisco_acs/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_acs cisco:acs netauth None"},{"location":"sources/vendor/Cisco/cisco_acs/#splunk-setup-and-configuration","title":"Splunk Setup and Configuration","text":"
EXTRACT-AA-signature = CSCOacs_(?<signature>\\S+):?\n# Note the value of this config is empty to disable\nEXTRACT-AA-syslog_message = \nEXTRACT-acs_message_header2 = ^CSCOacs_\\S+\\s+(?<log_session_id>\\S+)\\s+(?<total_segments>\\d+)\\s+(?<segment_number>\\d+)\\s+(?<acs_message>.*)\n
"},{"location":"sources/vendor/Cisco/cisco_asa/","title":"ASA/FTD (Firepower)","text":""},{"location":"sources/vendor/Cisco/cisco_asa/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_asa/#links","title":"Links","text":"Ref Link Splunk Add-on for ASA (No long supports FWSM and PIX) https://splunkbase.splunk.com/app/1620/ Cisco eStreamer for Splunk https://splunkbase.splunk.com/app/1629/ Product Manual https://www.cisco.com/c/en/us/support/docs/security/pix-500-series-security-appliances/63884-config-asa-00.html"},{"location":"sources/vendor/Cisco/cisco_asa/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:asa cisco FTD Firepower will also use this source type except those noted below cisco:ftd cisco FTD Firepower will also use this source type except those noted below cisco:fwsm Splunk has cisco:pix cisco PIX will also use this source type except those noted below cisco:firepower:syslog FTD Unified events see https://www.cisco.com/c/en/us/td/docs/security/firepower/Syslogs/b_fptd_syslog_guide.pdf"},{"location":"sources/vendor/Cisco/cisco_asa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_asa cisco:asa netfw none cisco_fwsm cisco:fwsm netfw none cisco_pix cisco:pix netfw none cisco_firepower cisco:firepower:syslog netids none cisco_ftd cisco:ftd netfw none"},{"location":"sources/vendor/Cisco/cisco_asa/#source-setup-and-configuration","title":"Source Setup and Configuration","text":""},{"location":"sources/vendor/Cisco/cisco_dna/","title":"Digital Network Area(DNA)","text":""},{"location":"sources/vendor/Cisco/cisco_dna/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_dna/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_dna/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:dna 
None"},{"location":"sources/vendor/Cisco/cisco_dna/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_dna cisco:dna netops None"},{"location":"sources/vendor/Cisco/cisco_dna/#sc4s-options","title":"SC4S Options","text":"Variable default description SC4S_SOURCE_CISCO_DNA_FIXHOST yes Current firmware incorrectly sends the syslog server host name (the destination) in the host field. If this is ever corrected, set this value back to no; until then we suggest yes."},{"location":"sources/vendor/Cisco/cisco_esa/","title":"Email Security Appliance (ESA)","text":""},{"location":"sources/vendor/Cisco/cisco_esa/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_esa/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1761/ Product Manual https://www.cisco.com/c/en/us/td/docs/security/esa/esa14-0/user_guide/b_ESA_Admin_Guide_14-0.pdf"},{"location":"sources/vendor/Cisco/cisco_esa/#esa-log-configuration","title":"ESA Log Configuration","text":"

If feasible, use the following log configuration on the ESA. SC4S can then easily parse the log name configured on the ESA.

ESA Log Name ESA Log Type sc4s_gui_logs HTTP Logs sc4s_mail_logs IronPort Text Mail Logs sc4s_amp AMP Engine Logs sc4s_audit_logs Audit Logs sc4s_antispam Anti-Spam Logs sc4s_content_scanner Content Scanner Logs sc4s_error_logs IronPort Text Mail Logs (Loglevel: Critical) sc4s_system_logs System Logs"},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:esa:http The HTTP logs of Cisco IronPort ESA record information about the secure HTTP services enabled on the interface. cisco:esa:textmail Text mail logs of Cisco IronPort ESA record email information and status. cisco:esa:amp Advanced Malware Protection (AMP) of Cisco IronPort ESA records malware detection and blocking, continuous analysis, and retrospective alerting details. cisco:esa:authentication These logs record successful user logins and unsuccessful login attempts. cisco:esa:cef The Consolidated Event Logs summarize each message event in a single log line. cisco:esa:error_logs Error logs of Cisco IronPort ESA record errors that occurred for ESA configurations or internal issues. cisco:esa:content_scanner Content scanner logs of Cisco IronPort ESA record the scanning of messages that contain password-protected attachments for malicious activity and data privacy. cisco:esa:antispam Anti-spam logs record the status of the anti-spam scanning feature of your system, including the status on receiving updates of the latest anti-spam rules. Also, any logs related to the Context Adaptive Scanning Engine are logged here. 
cisco:esa:system_logs System logs record the boot information, virtual appliance license expiration alerts, DNS status information, and comments users enter using the commit command."},{"location":"sources/vendor/Cisco/cisco_esa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_esa cisco:esa:http email None cisco_esa cisco:esa:textmail email None cisco_esa cisco:esa:amp email None cisco_esa cisco:esa:authentication email None cisco_esa cisco:esa:cef email None cisco_esa cisco:esa:error_logs email None cisco_esa cisco:esa:content_scanner email None cisco_esa cisco:esa:antispam email None cisco_esa cisco:esa:system_logs email None"},{"location":"sources/vendor/Cisco/cisco_esa/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_esa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_esa[sc4s-vps] {\n filter { \n        host(\"^esa-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('esa')\n        ); \n    };   \n};\n
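As an alternative to the host-based vps filter above, ESA traffic can be classified by receiving it on a dedicated port, following the same pattern shown elsewhere in these docs. This is a sketch: the variable name assumes SC4S's `SC4S_LISTEN_<VENDOR>_<PRODUCT>_UDP_PORT` convention for the `cisco_esa` key, and the port number is an example.

```shell
# /opt/sc4s/env_file
# Assumed variable name per the SC4S_LISTEN_<VENDOR>_<PRODUCT>_UDP_PORT
# convention; the port number is an example, not a requirement
SC4S_LISTEN_CISCO_ESA_UDP_PORT=5015
```

Point the ESA syslog destination at this port and restart SC4S so the listener is created.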
"},{"location":"sources/vendor/Cisco/cisco_imc/","title":"Cisco Integrated Management Controller (IMC)","text":""},{"location":"sources/vendor/Cisco/cisco_imc/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_imc/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ucm None"},{"location":"sources/vendor/Cisco/cisco_imc/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_cimc cisco:infraops infraops None"},{"location":"sources/vendor/Cisco/cisco_ios/","title":"Cisco Networking (IOS and Compatible)","text":"

Cisco network products of multiple types share common logging characteristics. The following types are known to be compatible:

"},{"location":"sources/vendor/Cisco/cisco_ios/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ios/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1467/ IOS Manual https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960/software/release/12-2_55_se/configuration/guide/scg_2960/swlog.html NX-OS Manual https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/system_management/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_System_Management_Configuration_Guide/sm_5syslog.html Cisco ACI https://community.cisco.com/legacyfs/online/attachments/document/technote-aci-syslog_external-v1.pdf Cisco WLC & AP https://www.cisco.com/c/en/us/support/docs/wireless/4100-series-wireless-lan-controllers/107252-WLC-Syslog-Server.html#anc8 Cisco IOS-XR https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/system-monitoring/73x/b-system-monitoring-cg-cisco8k-73x/implementing_system_logging.html"},{"location":"sources/vendor/Cisco/cisco_ios/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ios This source type is also used for NX-OS, ACI and WLC product lines cisco:xr This source type is used for Cisco IOS XR"},{"location":"sources/vendor/Cisco/cisco_ios/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ios cisco:ios netops none cisco_xr cisco:xr netops none"},{"location":"sources/vendor/Cisco/cisco_ios/#filter-type","title":"Filter type","text":""},{"location":"sources/vendor/Cisco/cisco_ios/#setup-and-configuration","title":"Setup and Configuration","text":"

Use this feature only if you want to send raw logs to Splunk (that is, without dropping any part of the message). Set the following property in the env_file:

SC4S_ENABLE_CISCO_IOS_RAW_MSG=yes\n
Restart SC4S; it will then forward the entire message without dropping anything.

"},{"location":"sources/vendor/Cisco/cisco_ise/","title":"Cisco ise","text":""},{"location":"sources/vendor/Cisco/cisco_ise/#cisco-identity-services-engine-ise","title":"Cisco Identity Services Engine (ISE)","text":""},{"location":"sources/vendor/Cisco/cisco_ise/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ise/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1915/ Product Manual https://www.cisco.com/c/en/us/td/docs/security/ise/syslog/Cisco_ISE_Syslogs/m_IntrotoSyslogs.html"},{"location":"sources/vendor/Cisco/cisco_ise/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ise:syslog Aggregation used"},{"location":"sources/vendor/Cisco/cisco_ise/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ise cisco:ise:syslog netauth None"},{"location":"sources/vendor/Cisco/cisco_meraki/","title":"Cisco meraki","text":""},{"location":"sources/vendor/Cisco/cisco_meraki/#meraki-mr-ms-mx","title":"Meraki (MR, MS, MX)","text":""},{"location":"sources/vendor/Cisco/cisco_meraki/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_meraki/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3018 Product Manual https://documentation.meraki.com/zGeneral_Administration/Monitoring_and_Reporting/Syslog_Server_Overview_and_Configuration"},{"location":"sources/vendor/Cisco/cisco_meraki/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes meraki:accesspoints Not compliant with the Splunk Add-on meraki:securityappliances Not compliant with the Splunk Add-on meraki:switches Not compliant with the Splunk Add-on meraki For all Meraki devices. 
Compliant with the Splunk Add-on"},{"location":"sources/vendor/Cisco/cisco_meraki/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes meraki_accesspoints meraki:accesspoints netfw meraki_securityappliances meraki:securityappliances netfw meraki_switches meraki:switches netfw cisco_meraki meraki netfw"},{"location":"sources/vendor/Cisco/cisco_meraki/#parser-configuration","title":"Parser Configuration","text":"
  1. Either by defining Cisco Meraki hosts:

    #/opt/sc4s/local/config/app_parsers/app-vps-cisco_meraki.conf\n#File name provided is a suggestion it must be globally unique\n\nblock parser app-vps-test-cisco_meraki() {\n    channel {\n        if {\n            filter { host(\"^test-mx-\") };\n            parser { \n                p_set_netsource_fields(\n                    vendor('meraki')\n                    product('securityappliances')\n                ); \n            };\n        } elif {\n            filter { host(\"^test-mr-\") };\n            parser { \n                p_set_netsource_fields(\n                    vendor('meraki')\n                    product('accesspoints')\n                ); \n            };\n        } elif {\n            filter { host(\"^test-ms-\") };\n            parser { \n                p_set_netsource_fields(\n                    vendor('meraki')\n                    product('switches')\n                ); \n            };\n        } else {\n            parser { \n                p_set_netsource_fields(\n                    vendor('cisco')\n                    product('meraki')\n                ); \n            };\n        };\n    }; \n};\n\n\napplication app-vps-test-cisco_meraki[sc4s-vps] {\n    filter {\n        host(\"^test-meraki-\")\n        or host(\"^test-mx-\")\n        or host(\"^test-mr-\")\n        or host(\"^test-ms-\")\n    };\n    parser { app-vps-test-cisco_meraki(); };\n};\n

  2. Or by a unique port:

    # /opt/sc4s/env_file\nSC4S_LISTEN_CISCO_MERAKI_UDP_PORT=5004\nSC4S_LISTEN_MERAKI_SECURITYAPPLIANCES_UDP_PORT=5005\nSC4S_LISTEN_MERAKI_ACCESSPOINTS_UDP_PORT=5006\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT=5007\n

"},{"location":"sources/vendor/Cisco/cisco_mm/","title":"Meeting Management","text":""},{"location":"sources/vendor/Cisco/cisco_mm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_mm/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_mm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:mm:system:* the final component is taken from the program field of the message header cisco:mm:audit Requires setup of vendor and product by source; see below"},{"location":"sources/vendor/Cisco/cisco_mm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_mm_system cisco:mm:system:* netops None cisco_mm_audit cisco:mm:audit netops None"},{"location":"sources/vendor/Cisco/cisco_mm/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_mm.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_mm[sc4s-vps] {\n filter { \n        host('^test-cmm-')\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('mm')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Cisco/cisco_ms/","title":"Meeting Server","text":""},{"location":"sources/vendor/Cisco/cisco_ms/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ms/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_ms/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ms None"},{"location":"sources/vendor/Cisco/cisco_ms/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ms cisco:ms netops None"},{"location":"sources/vendor/Cisco/cisco_ms/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_ms.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_ms[sc4s-vps] {\n filter { \n        host('^test-cms-')\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('ms')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Cisco/cisco_tvcs/","title":"TelePresence Video Communication Server (TVCS)","text":""},{"location":"sources/vendor/Cisco/cisco_tvcs/#links","title":"Links","text":"Ref Link Product Manual https://www.cisco.com/c/en/us/products/unified-communications/telepresence-video-communication-server-vcs/index.html"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:vcs none"},{"location":"sources/vendor/Cisco/cisco_tvcs/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_tvcs cisco:tvcs main none"},{"location":"sources/vendor/Cisco/cisco_ucm/","title":"Unified Communications Manager (UCM)","text":""},{"location":"sources/vendor/Cisco/cisco_ucm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ucm/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_ucm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ucm None"},{"location":"sources/vendor/Cisco/cisco_ucm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ucm cisco:ucm ucm None"},{"location":"sources/vendor/Cisco/cisco_ucshx/","title":"Unified Computing System (UCS)","text":""},{"location":"sources/vendor/Cisco/cisco_ucshx/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_ucshx/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_ucshx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:ucs None"},{"location":"sources/vendor/Cisco/cisco_ucshx/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_ucs cisco:ucs infraops 
None"},{"location":"sources/vendor/Cisco/cisco_viptela/","title":"Viptela","text":""},{"location":"sources/vendor/Cisco/cisco_viptela/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_viptela/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual multiple"},{"location":"sources/vendor/Cisco/cisco_viptela/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cisco:viptela None"},{"location":"sources/vendor/Cisco/cisco_viptela/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_viptela cisco:viptela netops None"},{"location":"sources/vendor/Cisco/cisco_wsa/","title":"Web Security Appliance (WSA)","text":""},{"location":"sources/vendor/Cisco/cisco_wsa/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cisco/cisco_wsa/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1747/ Product Manual https://www.cisco.com/c/en/us/td/docs/security/wsa/wsa11-7/user_guide/b_WSA_UserGuide_11_7.html"},{"location":"sources/vendor/Cisco/cisco_wsa/#sourcetypes","title":"Sourcetypes","text":"

| cisco:wsa:l4tm | The L4TM logs of Cisco IronPort WSA record sites added to the L4TM block and allow lists. | | cisco:wsa:squid | The access logs of Cisco IronPort WSA versions prior to 11.7 record Web Proxy client history in squid format. | | cisco:wsa:squid:new | The access logs of Cisco IronPort WSA versions 11.7 and later record Web Proxy client history in squid format. | | cisco:wsa:w3c:recommended | The access logs of Cisco IronPort WSA versions 12.5 and later record Web Proxy client history in W3C format. |

"},{"location":"sources/vendor/Cisco/cisco_wsa/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cisco_wsa cisco:wsa:l4tm netproxy None cisco_wsa cisco:wsa:squid netproxy None cisco_wsa cisco:wsa:squid:new netproxy None cisco_wsa cisco:wsa:w3c:recommended netproxy None"},{"location":"sources/vendor/Cisco/cisco_wsa/#filter-type","title":"Filter type","text":"

IP, Netmask or Host

"},{"location":"sources/vendor/Cisco/cisco_wsa/#source-setup-and-configuration","title":"Source Setup and Configuration","text":""},{"location":"sources/vendor/Cisco/cisco_wsa/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-cisco_wsa.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-cisco_wsa[sc4s-vps] {\n filter { \n        host(\"^wsa-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('cisco')\n            product('wsa')\n        ); \n    };   \n};\n
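If host-based filtering is not practical, WSA traffic can instead be directed to a dedicated listener, as other sections in these docs show for their sources. This sketch assumes the `SC4S_LISTEN_<VENDOR>_<PRODUCT>_UDP_PORT` naming convention applies to the `cisco_wsa` key; the port number is illustrative.

```shell
# /opt/sc4s/env_file
# Assumed variable name per the SC4S_LISTEN_<VENDOR>_<PRODUCT>_UDP_PORT
# convention; choose any free port on the SC4S host
SC4S_LISTEN_CISCO_WSA_UDP_PORT=5016
```

Configure the WSA syslog push target to use this port and restart SC4S.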
"},{"location":"sources/vendor/Citrix/netscaler/","title":"Netscaler ADC/SDX","text":""},{"location":"sources/vendor/Citrix/netscaler/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Citrix/netscaler/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2770/ Product Manual https://docs.citrix.com/en-us/citrix-adc/12-1/system/audit-logging/configuring-audit-logging.html"},{"location":"sources/vendor/Citrix/netscaler/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes citrix:netscaler:syslog None citrix:netscaler:appfw None citrix:netscaler:appfw:cef None"},{"location":"sources/vendor/Citrix/netscaler/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes citrix_netscaler citrix:netscaler:syslog netfw none citrix_netscaler citrix:netscaler:appfw netfw none citrix_netscaler citrix:netscaler:appfw:cef netfw none"},{"location":"sources/vendor/Citrix/netscaler/#source-setup-and-configuration","title":"Source Setup and Configuration","text":""},{"location":"sources/vendor/Clearswift/","title":"WAF (Cloud)","text":""},{"location":"sources/vendor/Clearswift/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Clearswift/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://clearswifthelp.clearswift.com/SEG/472/en/Content/Sections/SystemsCenter/SYCLogList.htm"},{"location":"sources/vendor/Clearswift/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes clearswift:${PROGRAM} none"},{"location":"sources/vendor/Clearswift/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes clearswift clearswift:${PROGRAM} email None"},{"location":"sources/vendor/Clearswift/#parser-configuration","title":"Parser Configuration","text":"

```c

"},{"location":"sources/vendor/Clearswift/#optsc4slocalconfigapp-parsersapp-vps-clearswiftconf","title":"/opt/sc4s/local/config/app-parsers/app-vps-clearswift.conf","text":""},{"location":"sources/vendor/Clearswift/#file-name-provided-is-a-suggestion-it-must-be-globally-unique","title":"File name provided is a suggestion it must be globally unique","text":"

application app-vps-clearswift[sc4s-vps] { filter { host(\"test-clearswift-\" type(string) flags(prefix)) }; parser { p_set_netsource_fields( vendor('clearswift') product('clearswift') ); }; };

"},{"location":"sources/vendor/Cohesity/cluster/","title":"Cluster","text":""},{"location":"sources/vendor/Cohesity/cluster/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cohesity/cluster/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Cohesity/cluster/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cohesity:cluster:audit None cohesity:cluster:dataprotection None cohesity:api:audit None cohesity:alerts None"},{"location":"sources/vendor/Cohesity/cluster/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes cohesity_cluster_audit cohesity:cluster:audit infraops none cohesity_api_audit cohesity:api:audit infraops none cohesity_cluster_dataprotection cohesity:cluster:dataprotection infraops none cohesity_alerts cohesity:alerts infraops none"},{"location":"sources/vendor/CyberArk/epv/","title":"Vendor - CyberArk","text":""},{"location":"sources/vendor/CyberArk/epv/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/CyberArk/epv/#product-epv","title":"Product - EPV","text":"Ref Link Splunk Add-on CyberArk https://splunkbase.splunk.com/app/2891/ Add-on Manual https://docs.splunk.com/Documentation/AddOns/latest/CyberArk/About"},{"location":"sources/vendor/CyberArk/epv/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cyberark:epv:cef None"},{"location":"sources/vendor/CyberArk/epv/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Cyber-Ark_Vault cyberark:epv:cef netauth none"},{"location":"sources/vendor/CyberArk/pta/","title":"PTA","text":""},{"location":"sources/vendor/CyberArk/pta/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/CyberArk/pta/#links","title":"Links","text":"Ref Link Splunk Add-on CyberArk https://splunkbase.splunk.com/app/2891/ Add-on Manual 
https://docs.splunk.com/Documentation/AddOns/latest/CyberArk/About Product Manual https://docs.cyberark.com/PAS/Latest/en/Content/PTA/CEF-Based-Format-Definition.htm"},{"location":"sources/vendor/CyberArk/pta/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cyberark:pta:cef None"},{"location":"sources/vendor/CyberArk/pta/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes CyberArk_PTA cyberark:pta:cef main none"},{"location":"sources/vendor/Cylance/protect/","title":"Protect","text":""},{"location":"sources/vendor/Cylance/protect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Cylance/protect/#links","title":"Links","text":"Ref Link Splunk Add-on CyberArk https://splunkbase.splunk.com/app/3709/"},{"location":"sources/vendor/Cylance/protect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes syslog_protect Catchall syslog_threat_classification None syslog_audit_log None syslog_exploit None syslog_app_control None syslog_threat None syslog_device None syslog_device_control None syslog_script_control None syslog_optics None"},{"location":"sources/vendor/Cylance/protect/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes cylance_protect syslog_protect epintel none cylance_protect_auditlog syslog_audit_log epintel none cylance_protect_threatclassification syslog_threat_classification epintel none cylance_protect_exploitattempt syslog_exploit epintel none cylance_protect_appcontrol syslog_app_control epintel none cylance_protect_threat syslog_threat epintel none cylance_protect_device syslog_device epintel none cylance_protect_devicecontrol syslog_device_control epintel none cylance_protect_scriptcontrol syslog_protect epintel none cylance_protect_scriptcontrol syslog_script_control epintel none cylance_protect_optics syslog_optics epintel 
none"},{"location":"sources/vendor/DARKTRACE/darktrace/","title":"Darktrace","text":""},{"location":"sources/vendor/DARKTRACE/darktrace/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/DARKTRACE/darktrace/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/DARKTRACE/darktrace/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes darktrace none darktrace:audit none"},{"location":"sources/vendor/DARKTRACE/darktrace/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes darktrace_syslog darktrace netids None darktrace_audit darktrace_audit netids None"},{"location":"sources/vendor/Dell/avamar/","title":"Dell Avamar","text":""},{"location":"sources/vendor/Dell/avamar/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/avamar/#links","title":"Links","text":"Ref Link Splunk Add-on na Add-on Manual https://www.delltechnologies.com/asset/en-us/products/data-protection/technical-support/docu91832.pdf"},{"location":"sources/vendor/Dell/avamar/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:avamar:msc None"},{"location":"sources/vendor/Dell/avamar/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_avamar_cms dell:avamar:msc netops none"},{"location":"sources/vendor/Dell/cmc/","title":"CMC (VRTX)","text":""},{"location":"sources/vendor/Dell/cmc/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/cmc/#links","title":"Links","text":"Ref Link Splunk Add-on na Add-on Manual https://www.dell.com/support/manuals/en-us/dell-chassis-management-controller-v3.10-dell-poweredge-vrtx/cmcvrtx31ug/overview?guid=guid-84595265-d37c-4765-8890-90f629737b17"},{"location":"sources/vendor/Dell/cmc/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:poweredge:cmc:syslog 
None"},{"location":"sources/vendor/Dell/cmc/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_poweredge_cmc dell:poweredge:cmc:syslog infraops none"},{"location":"sources/vendor/Dell/cmc/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-dell_cmc.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_cmc[sc4s-vps] {\n filter { \n        host(\"test-dell-cmc-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dell')\n            product('poweredge_cmc')\n        ); \n    };   \n};\n
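A unique-port alternative to the host-prefix filter above may also work for CMC, mirroring the pattern the Dell EMC Powerswitch N section documents. The variable name below is an assumption derived from the `dell_poweredge_cmc` key and the `SC4S_LISTEN_<VENDOR>_<PRODUCT>_UDP_PORT` convention; the port is an example.

```shell
# /opt/sc4s/env_file
# Assumed variable name derived from the dell_poweredge_cmc key;
# verify against your SC4S version before relying on it
SC4S_LISTEN_DELL_POWEREDGE_CMC_UDP_PORT=5017
```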
"},{"location":"sources/vendor/Dell/emc_powerswitchn/","title":"EMC Powerswitch N Series","text":""},{"location":"sources/vendor/Dell/emc_powerswitchn/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/emc_powerswitchn/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://dl.dell.com/manuals/common/networking_nxxug_en-us.pdf"},{"location":"sources/vendor/Dell/emc_powerswitchn/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:emc:powerswitch:n None"},{"location":"sources/vendor/Dell/emc_powerswitchn/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes dellemc_powerswitch_n all netops none"},{"location":"sources/vendor/Dell/emc_powerswitchn/#parser-configuration","title":"Parser Configuration","text":"
  1. Through sc4s-vps

    #/opt/sc4s/local/config/app-parsers/app-vps-dell_switch_n.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_switch_n[sc4s-vps] {\n filter { \n        host(\"test-dell-switch-n-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dellemc')\n            product('powerswitch_n')\n        ); \n    };   \n};\n

  2. Or through a unique port:

    # /opt/sc4s/env_file \nSC4S_LISTEN_DELLEMC_POWERSWITCH_N_UDP_PORT=5005\n

"},{"location":"sources/vendor/Dell/idrac/","title":"iDrac","text":""},{"location":"sources/vendor/Dell/idrac/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/idrac/#links","title":"Links","text":"Ref Link Splunk Add-on na Add-on Manual https://www.dell.com/support/manuals/en-au/dell-opnmang-sw-v8.1/eemi_13g_v1.2-v1/introduction?guid=guid-8f22a1a9-ac01-43d1-a9d2-390ca6708d5e&lang=en-us"},{"location":"sources/vendor/Dell/idrac/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:poweredge:idrac:syslog None"},{"location":"sources/vendor/Dell/idrac/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_poweredge_idrac dell:poweredge:idrac:syslog infraops none"},{"location":"sources/vendor/Dell/rsa_secureid/","title":"RSA SecureID","text":""},{"location":"sources/vendor/Dell/rsa_secureid/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/rsa_secureid/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2958/ Product Manual https://docs.splunk.com/Documentation/AddOns/released/RSASecurID/Aboutthisaddon"},{"location":"sources/vendor/Dell/rsa_secureid/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes rsa:securid:syslog Catchall; used if a more specific source type cannot be identified rsa:securid:admin:syslog None rsa:securid:runtime:syslog None nix:syslog None"},{"location":"sources/vendor/Dell/rsa_secureid/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes dell-rsa_secureid all netauth none dell-rsa_secureid_trace rsa:securid:trace netauth none dell-rsa_secureid nix:syslog osnix uses the os_nix key if not configured by host/ip/port"},{"location":"sources/vendor/Dell/rsa_secureid/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app_parsers/app-vps-dell_rsa_secureid.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-dell_rsa_secureid[sc4s-vps] {\n filter { \n        host(\"test_rsasecureid*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dell')\n            product('rsa_secureid')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Dell/sonic/","title":"Dell Networking SONiC","text":""},{"location":"sources/vendor/Dell/sonic/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/sonic/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual link"},{"location":"sources/vendor/Dell/sonic/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:sonic None"},{"location":"sources/vendor/Dell/sonic/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes dell_sonic dell:sonic netops none"},{"location":"sources/vendor/Dell/sonic/#parser-configuration","title":"Parser Configuration","text":"
  1. Through sc4s-vps

    #/opt/sc4s/local/config/app-parsers/app-vps-dell_sonic.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-dell_sonic[sc4s-vps] {\n filter { \n        host(\"sonic\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('dell')\n            product('sonic')\n        ); \n    };   \n};\n

  2. Or through a unique port:

    # /opt/sc4s/env_file \nSC4S_LISTEN_DELL_SONIC_UDP_PORT=5005\n

"},{"location":"sources/vendor/Dell/sonicwall/","title":"Sonicwall","text":""},{"location":"sources/vendor/Dell/sonicwall/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Dell/sonicwall/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/6203/"},{"location":"sources/vendor/Dell/sonicwall/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes dell:sonicwall None"},{"location":"sources/vendor/Dell/sonicwall/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes dell_sonicwall-firewall dell:sonicwall netfw none"},{"location":"sources/vendor/Dell/sonicwall/#options","title":"Options","text":"Variable default description SC4S_DEST_DELL_SONICWALL-FIREWALL_SPLUNK_HEC_FMT JSON Restructure data from vendor format to json for splunk destinations set to \u201cNONE\u201d for native format SC4S_DEST_DELL_SONICWALL-FIREWALL_SYSLOG_FMT SDATA Restructure data from vendor format to SDATA for SYSLOG destinations set to \u201cNONE\u201d for native format"},{"location":"sources/vendor/Dell/sonicwall/#note","title":"Note:","text":"

The sourcetype was changed in version 2.35.0 to make it compliant with the corresponding TA.

"},{"location":"sources/vendor/F5/bigip/","title":"BigIP","text":""},{"location":"sources/vendor/F5/bigip/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/F5/bigip/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2680/ Product Manual unknown"},{"location":"sources/vendor/F5/bigip/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes f5:bigip:syslog None f5:bigip:irule None f5:bigip:ltm:http:irule None f5:bigip:gtm:dns:request:irule None f5:bigip:gtm:dns:response:irule None f5:bigip:ltm:failed:irule None f5:bigip:asm:syslog None f5:bigip:apm:syslog None nix:syslog None f5:bigip:ltm:access_json User defined configuration via irule producing a RFC5424 syslog event with json content within the message field <111>1 2020-05-28T22:48:15Z foo.example.com F5 - access_json - {\"event_type\":\"HTTP_REQUEST\", \"src_ip\":\"10.66.98.41\"} This source type requires a customer specific Splunk Add-on for utility value"},{"location":"sources/vendor/F5/bigip/#index-configuration","title":"Index Configuration","text":"key index notes f5_bigip netops none f5_bigip_irule netops none f5_bigip_asm netwaf none f5_bigip_apm netops none f5_bigip_nix netops if f_f5_bigip is not set the index osnix will be used f5_bigip_access_json netops none"},{"location":"sources/vendor/F5/bigip/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-f5_bigip.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-f5_bigip[sc4s-vps] {\n filter { \n        \"${HOST}\" eq \"f5_bigip\"\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('f5')\n            product('bigip')\n        ); \n    };   \n};\n
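The filter above is an exact match on the event's HOST value. As a rough illustration (a shell stand-in, not syslog-ng), an exact match accepts only the literal hostname, unlike the prefix and glob filters used for other vendors:

```shell
# Sketch: exact-match semantics of the filter above, shown in shell.
# Only the literal hostname "f5_bigip" matches; "f5_bigip-01" does not.
match() {
  if [ "$1" = "f5_bigip" ]; then echo match; else echo nomatch; fi
}
a="$(match f5_bigip)"
b="$(match f5_bigip-01)"   # example hostname, not from the docs
echo "$a $b"
```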
"},{"location":"sources/vendor/FireEye/cms/","title":"CMS","text":""},{"location":"sources/vendor/FireEye/cms/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/FireEye/cms/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/cms/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fe_cef_syslog"},{"location":"sources/vendor/FireEye/cms/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes FireEye_CMS fe_cef_syslog fireeye"},{"location":"sources/vendor/FireEye/emps/","title":"eMPS","text":""},{"location":"sources/vendor/FireEye/emps/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/FireEye/emps/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/emps/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fe_cef_syslog"},{"location":"sources/vendor/FireEye/emps/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes FireEye_eMPS fe_cef_syslog fireeye"},{"location":"sources/vendor/FireEye/etp/","title":"etp","text":""},{"location":"sources/vendor/FireEye/etp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/FireEye/etp/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/etp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fe_etp source does not provide host name constant \u201cetp.fireeye.com\u201d is use regardless of region"},{"location":"sources/vendor/FireEye/etp/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes FireEye_ETP fe_etp fireeye"},{"location":"sources/vendor/FireEye/hx/","title":"hx","text":""},{"location":"sources/vendor/FireEye/hx/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/FireEye/hx/#links","title":"Links","text":"Ref Link Technology Add-On for FireEye https://splunkbase.splunk.com/app/1904/"},{"location":"sources/vendor/FireEye/hx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hx_cef_syslog"},{"location":"sources/vendor/FireEye/hx/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes fireeye_hx hx_cef_syslog fireeye"},{"location":"sources/vendor/Forcepoint/","title":"Email Security","text":""},{"location":"sources/vendor/Forcepoint/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Forcepoint/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual none"},{"location":"sources/vendor/Forcepoint/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes forcepoint:email:kv None"},{"location":"sources/vendor/Forcepoint/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes forcepoint_email forcepoint:email:kv email none"},{"location":"sources/vendor/Forcepoint/webprotect/","title":"Webprotect (Websense)","text":""},{"location":"sources/vendor/Forcepoint/webprotect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Forcepoint/webprotect/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2966/ Product Manual http://www.websense.com/content/support/library/web/v85/siem/siem.pdf"},{"location":"sources/vendor/Forcepoint/webprotect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes websense:cg:kv None"},{"location":"sources/vendor/Forcepoint/webprotect/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes forcepoint_webprotect websense:cg:kv netproxy none forcepoint_ websense:cg:kv netproxy if the log is in format of vendor=Forcepoint product= , the key will will be 
forcepoint_random"},{"location":"sources/vendor/Fortinet/fortimail/","title":"FortiWMail","text":""},{"location":"sources/vendor/Fortinet/fortimail/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Fortinet/fortimail/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3249"},{"location":"sources/vendor/Fortinet/fortimail/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fml:<type> type value is determined from the message"},{"location":"sources/vendor/Fortinet/fortimail/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes fortinet_fortimail_<type> fml:<type> email type value is determined from the message"},{"location":"sources/vendor/Fortinet/fortios/","title":"Fortios","text":""},{"location":"sources/vendor/Fortinet/fortios/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Fortinet/fortios/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2846/ Product Manual https://docs.fortinet.com/product/fortigate/6.2"},{"location":"sources/vendor/Fortinet/fortios/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fgt_log Catch-all sourcetype; not used by the TA fgt_traffic None fgt_utm None fgt_event None"},{"location":"sources/vendor/Fortinet/fortios/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes fortinet_fortios_traffic fgt_traffic netfw none fortinet_fortios_utm fgt_utm netfw none fortinet_fortios_event fgt_event netops none fortinet_fortios_log fgt_log netops none"},{"location":"sources/vendor/Fortinet/fortios/#source-setup-and-configuration","title":"Source Setup and Configuration","text":"
config log memory filter\n\nset forward-traffic enable\n\nset local-traffic enable\n\nset sniffer-traffic disable\n\nset anomaly enable\n\nset voip disable\n\nset multicast-traffic enable\n\nset dns enable\n\nend\n\nconfig system global\n\nset cli-audit-log enable\n\nend\n\nconfig log setting\n\nset neighbor-event enable\n\nend\n
"},{"location":"sources/vendor/Fortinet/fortios/#options","title":"Options","text":"Variable default description SC4S_OPTION_FORTINET_SOURCETYPE_PREFIX fgt Notice starting with version 1.6 of the fortinet add-on and app the sourcetype required changes from fgt_* to fortinet_* this is a breaking change to use the new sourcetype set this variable to fortigate in the env_file"},{"location":"sources/vendor/Fortinet/fortiweb/","title":"FortiWeb","text":""},{"location":"sources/vendor/Fortinet/fortiweb/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Fortinet/fortiweb/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4679/ Product Manual https://docs.fortinet.com/product/fortiweb/6.3"},{"location":"sources/vendor/Fortinet/fortiweb/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes fgt_log Catch-all sourcetype; not used by the TA fwb_traffic None fwb_attack None fwb_event None"},{"location":"sources/vendor/Fortinet/fortiweb/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes fortinet_fortiweb_traffic fwb_traffic netfw none fortinet_fortiweb_attack fwb_attack netids none fortinet_fortiweb_event fwb_event netops none fortinet_fortiweb_log fwb_log netops none"},{"location":"sources/vendor/Fortinet/fortiweb/#source-setup-and-configuration","title":"Source Setup and Configuration","text":"
config log syslog-policy\n\nedit splunk  \n\nconfig syslog-server-list \n\nedit 1\n\nset server x.x.x.x\n\nset port 514 (Example. Should be the same as default or dedicated port selected for sc4s)   \n\nend\n\nend\n\nconfig log syslogd\n\nset policy splunk\n\nset status enable\n\nend\n
"},{"location":"sources/vendor/GitHub/","title":"Enterprise Server","text":""},{"location":"sources/vendor/GitHub/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/GitHub/#links","title":"Links","text":"Ref Link Splunk Add-on Product Manual"},{"location":"sources/vendor/GitHub/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes github:enterprise:audit The audit logs of GitHub Enterprise server have information about audites actions performed by github user."},{"location":"sources/vendor/GitHub/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes github_ent github:enterprise:audit gitops None"},{"location":"sources/vendor/HAProxy/syslog/","title":"HAProxy","text":""},{"location":"sources/vendor/HAProxy/syslog/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HAProxy/syslog/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3135/"},{"location":"sources/vendor/HAProxy/syslog/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes haproxy:tcp Default syslog format haproxy:splunk:http Splunk\u2019s documented custom format. 
Note: detection is based on client_ip prefix in message"},{"location":"sources/vendor/HAProxy/syslog/#index-configuration","title":"Index Configuration","text":"key index notes haproxy_syslog netlb none"},{"location":"sources/vendor/HPe/ilo/","title":"ILO (4+)","text":""},{"location":"sources/vendor/HPe/ilo/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HPe/ilo/#links","title":"Links","text":""},{"location":"sources/vendor/HPe/ilo/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hpe:ilo none"},{"location":"sources/vendor/HPe/ilo/#index-configuration","title":"Index Configuration","text":"key index notes hpe_ilo infraops none"},{"location":"sources/vendor/HPe/jedirect/","title":"Jedirect","text":""},{"location":"sources/vendor/HPe/jedirect/#jetdirect","title":"JetDirect","text":""},{"location":"sources/vendor/HPe/jedirect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HPe/jedirect/#links","title":"Links","text":"Ref Link"},{"location":"sources/vendor/HPe/jedirect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hpe:jetdirect none"},{"location":"sources/vendor/HPe/jedirect/#index-configuration","title":"Index Configuration","text":"key index notes hpe_jetdirect print none"},{"location":"sources/vendor/HPe/procurve/","title":"Procurve Switch","text":"

HP Procurve switches use multiple log formats.

"},{"location":"sources/vendor/HPe/procurve/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/HPe/procurve/#links","title":"Links","text":"Ref Link Switch https://support.hpe.com/hpesc/public/docDisplay?docId=a00091844en_us Switch (A Series) (Flex) https://techhub.hpe.com/eginfolib/networking/docs/switches/12500/5998-4870_nmm_cg/content/378584395.htm"},{"location":"sources/vendor/HPe/procurve/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes hpe:procurve none"},{"location":"sources/vendor/HPe/procurve/#index-configuration","title":"Index Configuration","text":"key index notes hpe_procurve netops none"},{"location":"sources/vendor/IBM/datapower/","title":"Data power","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4662/"},{"location":"sources/vendor/IBM/datapower/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ibm:datapower:syslog Common sourcetype ibm:datapower:* * is taken from the event sourcetype"},{"location":"sources/vendor/IBM/datapower/#index-configuration","title":"Index Configuration","text":"key source index notes ibm_datapower na inifraops none"},{"location":"sources/vendor/IBM/datapower/#parser-configuration","title":"Parser Configuration","text":"

Parser configuration is conditional: it is only required if the device produces additional events that do not match the default configuration.

#/opt/sc4s/local/config/app-parsers/app-vps-ibm_datapower.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ibm_datapower[sc4s-vps] {\n filter { \n        host(\"^test-ibmdp-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('ibm')\n            product('datapower')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/ISC/bind/","title":"bind","text":"

This source type is often re-implemented by specific add-ons such as Infoblox or BlueCat. If a more specific source type is desired, see that source's documentation for instructions.

"},{"location":"sources/vendor/ISC/bind/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/ISC/bind/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2876/"},{"location":"sources/vendor/ISC/bind/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes isc:bind none"},{"location":"sources/vendor/ISC/bind/#index-configuration","title":"Index Configuration","text":"key index notes isc_bind isc:bind none"},{"location":"sources/vendor/ISC/dhcpd/","title":"dhcpd","text":"

This source type is often re-implemented by specific add-ons such as Infoblox or BlueCat. If a more specific source type is desired, see that source's documentation for instructions.

"},{"location":"sources/vendor/ISC/dhcpd/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/ISC/dhcpd/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3010/"},{"location":"sources/vendor/ISC/dhcpd/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes isc:dhcp none"},{"location":"sources/vendor/ISC/dhcpd/#index-configuration","title":"Index Configuration","text":"key index notes isc_dhcp isc:dhcp none"},{"location":"sources/vendor/ISC/dhcpd/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/ISC/dhcpd/#options","title":"Options","text":"

None

"},{"location":"sources/vendor/ISC/dhcpd/#verification","title":"Verification","text":"

An active site will generate frequent events. Use the following search to check for new events.

Verify that the timestamp and host values match as expected.

index=<asconfigured> sourcetype=\"isc:dhcp\"\n
"},{"location":"sources/vendor/Imperva/incapusla/","title":"Incapsula","text":""},{"location":"sources/vendor/Imperva/incapusla/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Imperva/incapusla/#links","title":"Links","text":"Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Splunk Add-on Source Specific https://bitbucket.org/SPLServices/ta-cef-imperva-incapsula/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/vendor/Imperva/incapusla/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Imperva/incapusla/#source","title":"Source","text":"sourcetype notes Imperva:Incapsula Common sourcetype"},{"location":"sources/vendor/Imperva/incapusla/#index-configuration","title":"Index Configuration","text":"key source index notes Incapsula_SIEMintegration Imperva:Incapsula netwaf none"},{"location":"sources/vendor/Imperva/waf/","title":"On-Premises WAF (SecureSphere WAF)","text":""},{"location":"sources/vendor/Imperva/waf/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Imperva/waf/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2874/ Product Manual https://community.microfocus.com/dcvta86296/attachments/dcvta86296/partner-documentation-h-o/22/2/Imperva_SecureSphere_11_5_CEF_Config_Guide_2018.pdf"},{"location":"sources/vendor/Imperva/waf/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes imperva:waf none imperva:waf:firewall:cef none imperva:waf:security:cef none"},{"location":"sources/vendor/Imperva/waf/#index-configuration","title":"Index Configuration","text":"key index notes Imperva Inc._SecureSphere netwaf none"},{"location":"sources/vendor/InfoBlox/","title":"NIOS","text":"

Warning: Despite the TA indicating that this data source is CIM compliant, all versions of NIOS, including the most recent available as of 2019-12-17, do not support the DNS data model correctly. For DNS security use cases, use Splunk Stream instead.

"},{"location":"sources/vendor/InfoBlox/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/InfoBlox/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2934/ Product Manual https://docs.infoblox.com/display/ILP/NIOS?preview=/8945695/43728387/NIOS_8.4_Admin_Guide.pdf"},{"location":"sources/vendor/InfoBlox/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes infoblox:dns None infoblox:dhcp None infoblox:threatprotect None nix:syslog None"},{"location":"sources/vendor/InfoBlox/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes infoblox_nios_dns infoblox:dns netdns none infoblox_nios_dhcp infoblox:dhcp netipam none infoblox_nios_threatprotect infoblox:threatprotect netids none infoblox_nios_audit infoblox:audit netops none infoblox_nios_fallback infoblox:port netops none"},{"location":"sources/vendor/InfoBlox/#options","title":"Options","text":"Variable default description SC4S_LISTEN_INFOBLOX_NIOS_UDP_PORT empty Vendor specific port SC4S_LISTEN_INFOBLOX_NIOS_TCP_PORT empty Vendor specific port"},{"location":"sources/vendor/InfoBlox/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-infoblox_nios.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-infoblox_nios[sc4s-vps] {\n filter { \n        host(\"infoblox-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('infoblox')\n            product('nios')\n        ); \n    };   \n};\n
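The `type(glob)` filter above matches any hostname beginning with `infoblox-`. As a rough illustration (a shell stand-in, not syslog-ng), shell case patterns use the same glob semantics:

```shell
# Sketch: the glob "infoblox-*" in the filter above matches any hostname
# with that prefix; shell case patterns behave the same way.
classify() {
  case "$1" in
    (infoblox-*) echo match ;;
    (*) echo nomatch ;;
  esac
}
a="$(classify infoblox-gm01)"   # example hostname, not from the docs
b="$(classify nios-gm01)"
echo "$a $b"
```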
"},{"location":"sources/vendor/Juniper/junos/","title":"JunOS","text":""},{"location":"sources/vendor/Juniper/junos/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Juniper/junos/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2847/ JunOS TechLibrary https://www.juniper.net/documentation/en_US/junos/topics/example/syslog-messages-configuring-qfx-series.html"},{"location":"sources/vendor/Juniper/junos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes juniper:junos:firewall None juniper:junos:firewall:structured None juniper:junos:idp None juniper:junos:idp:structured None juniper:junos:aamw:structured None juniper:junos:secintel:structured None juniper:junos:snmp None"},{"location":"sources/vendor/Juniper/junos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes juniper_junos_legacy juniper:legacy netops none juniper_junos_flow juniper:junos:firewall netfw none juniper_junos_utm juniper:junos:firewall netfw none juniper_junos_firewall juniper:junos:firewall netfw none juniper_junos_ids juniper:junos:firewall netids none juniper_junos_idp juniper:junos:idp netids none juniper_junos_snmp juniper:junos:snmp netops none juniper_junos_structured_fw juniper:junos:firewall:structured netfw none juniper_junos_structured_ids juniper:junos:firewall:structured netids none juniper_junos_structured_utm juniper:junos:firewall:structured netfw none juniper_junos_structured_idp juniper:junos:idp:structured netids none juniper_junos_structured_aamw juniper:junos:aamw:structured netfw none juniper_junos_structured_secintel juniper:junos:secintel:structured netfw none"},{"location":"sources/vendor/Juniper/netscreen/","title":"Netscreen","text":""},{"location":"sources/vendor/Juniper/netscreen/#netscreen","title":"Netscreen","text":""},{"location":"sources/vendor/Juniper/netscreen/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Juniper/netscreen/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2847/ Netscreen Manual http://kb.juniper.net/InfoCenter/index?page=content&id=KB4759"},{"location":"sources/vendor/Juniper/netscreen/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netscreen:firewall None"},{"location":"sources/vendor/Juniper/netscreen/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes juniper_netscreen netscreen:firewall netfw none"},{"location":"sources/vendor/Kaspersky/es/","title":"Enterprise Security RFC5424","text":""},{"location":"sources/vendor/Kaspersky/es/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Kaspersky/es/#links","title":"Links","text":"Ref Link Splunk Add-on non"},{"location":"sources/vendor/Kaspersky/es/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes kaspersky:syslog:es Where PROGRAM starts with KES kaspersky:syslog None"},{"location":"sources/vendor/Kaspersky/es/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes kaspersky_syslog kaspersky:syslog epav none kaspersky_syslog_es kaspersky:syslog:es epav none"},{"location":"sources/vendor/Kaspersky/es_cef/","title":"Enterprise Security CEF","text":"

The TA linked below has CEF support commented out as of 2022-03-18; manual edits are required.

"},{"location":"sources/vendor/Kaspersky/es_cef/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Kaspersky/es_cef/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4656/"},{"location":"sources/vendor/Kaspersky/es_cef/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes kaspersky:cef kaspersky:klaud kaspersky:klsrv kaspersky:gnrl kaspersky:klnag kaspersky:klprci kaspersky:klbl"},{"location":"sources/vendor/Kaspersky/es_cef/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes KasperskyLab_SecurityCenter all epav none"},{"location":"sources/vendor/Kaspersky/es_leef/","title":"Enterprise Security Leef","text":"

The LEEF format has not been tested; samples are needed.

"},{"location":"sources/vendor/Kaspersky/es_leef/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Kaspersky/es_leef/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4656/"},{"location":"sources/vendor/Kaspersky/es_leef/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes kaspersky:cef kaspersky:klaud kaspersky:klsrv kaspersky:gnrl kaspersky:klnag kaspersky:klprci kaspersky:klbl"},{"location":"sources/vendor/Kaspersky/es_leef/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes KasperskyLab_SecurityCenter all epav none"},{"location":"sources/vendor/Liveaction/liveaction_livenx/","title":"Liveaction - livenx","text":""},{"location":"sources/vendor/Liveaction/liveaction_livenx/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Liveaction/liveaction_livenx/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual None"},{"location":"sources/vendor/Liveaction/liveaction_livenx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes liveaction:livenx none"},{"location":"sources/vendor/Liveaction/liveaction_livenx/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes liveaction_livenx liveaction:livenx netops None"},{"location":"sources/vendor/McAfee/epo/","title":"EPO","text":""},{"location":"sources/vendor/McAfee/epo/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/McAfee/epo/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/5085/ Product Manual https://kc.mcafee.com/corporate/index?page=content&id=KB87927"},{"location":"sources/vendor/McAfee/epo/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes mcafee:epo:syslog none"},{"location":"sources/vendor/McAfee/epo/#source","title":"Source","text":"source notes policy_auditor_vulnerability_assessment Policy Auditor 
Vulnerability Assessment events mcafee_agent McAfee Agent events mcafee_endpoint_security McAfee Endpoint Security events"},{"location":"sources/vendor/McAfee/epo/#index-configuration","title":"Index Configuration","text":"key index notes mcafee_epo epav none"},{"location":"sources/vendor/McAfee/epo/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/McAfee/epo/#options","title":"Options","text":"Variable default description SC4S_LISTEN_MCAFEE_EPO_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_MCAFEE_EPO_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_MCAFEE_EPO_HEC no When Splunk HEC is disabled globally set to yes to enable this specific source SC4S_SOURCE_TLS_ENABLE no This must be set to yes so that SC4S listens for encrypted syslog from ePO"},{"location":"sources/vendor/McAfee/epo/#additional-setup","title":"Additional setup","text":"

You must create a certificate for the SC4S server to receive encrypted syslog from ePO. A self-signed certificate is fine. Generate a self-signed certificate on the SC4S host:

openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout /opt/sc4s/tls/server.key -out /opt/sc4s/tls/server.pem
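The command above can be exercised end-to-end in a scratch directory. This sketch generates the same kind of self-signed certificate, but non-interactively: the `-subj` value is an example assumption (the original command prompts for the subject fields), and the temp directory stands in for /opt/sc4s/tls:

```shell
# Sketch: generate a self-signed cert as above, non-interactively
# (-subj avoids the interactive prompts; the CN is an example value),
# into a temp dir standing in for /opt/sc4s/tls.
tlsdir="$(mktemp -d)"
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 \
  -subj '/CN=sc4s.example.com' \
  -keyout "$tlsdir/server.key" -out "$tlsdir/server.pem" 2>/dev/null
# Read the certificate back to confirm it was written correctly.
subject="$(openssl x509 -in "$tlsdir/server.pem" -noout -subject)"
echo "$subject"
```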

Uncomment the following line in /lib/systemd/system/sc4s.service to allow the docker container to use the certificate:

Environment=\"SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z\"
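Uncommenting the line can also be scripted. A minimal sketch with sed, operating on a temp copy standing in for the unit file (after editing the real file, `systemctl daemon-reload` is required for systemd to pick up the change):

```shell
# Sketch: uncomment the TLS mount line with sed; "unit" is a temp
# stand-in for /lib/systemd/system/sc4s.service.
unit="$(mktemp)"
printf '%s\n' '#Environment="SC4S_TLS_MOUNT=/opt/sc4s/tls:/etc/syslog-ng/tls:z"' > "$unit"
# Strip the leading "#" only from this specific Environment= line.
sed -i 's|^#\(Environment="SC4S_TLS_MOUNT=\)|\1|' "$unit"
cat "$unit"
```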

"},{"location":"sources/vendor/McAfee/epo/#troubleshooting","title":"Troubleshooting","text":"

From the command line of the SC4S host, run: openssl s_client -connect localhost:6514

The message:

socket: Bad file descriptor\nconnect:errno=9\n

indicates that SC4S is not listening for encrypted syslog. Note that netstat may show the port open even though it is not accepting encrypted traffic as configured.

It may take several minutes for the syslog option to be available in the registered servers dropdown.

"},{"location":"sources/vendor/McAfee/nsp/","title":"Network Security Platform","text":""},{"location":"sources/vendor/McAfee/nsp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/McAfee/nsp/#links","title":"Links","text":"Ref Link Product Manual https://docs.mcafee.com/bundle/network-security-platform-10.1.x-product-guide/page/GUID-373C1CA6-EC0E-49E1-8858-749D1AA2716A.html"},{"location":"sources/vendor/McAfee/nsp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes mcafee:nsp none"},{"location":"sources/vendor/McAfee/nsp/#source","title":"Source","text":"source notes mcafee:nsp:alert Alert/Attack Events mcafee:nsp:audit Audit Event or User Activity Events mcafee:nsp:fault Fault Events mcafee:nsp:firewall Firewall Events"},{"location":"sources/vendor/McAfee/nsp/#index-configuration","title":"Index Configuration","text":"key index notes mcafee_nsp netids none"},{"location":"sources/vendor/McAfee/wg/","title":"Wg","text":""},{"location":"sources/vendor/McAfee/wg/#web-gateway","title":"Web Gateway","text":""},{"location":"sources/vendor/McAfee/wg/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/McAfee/wg/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3009/ Product Manual https://kc.mcafee.com/corporate/index?page=content&id=KB77988&actp=RSS"},{"location":"sources/vendor/McAfee/wg/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes mcafee:wg:kv none"},{"location":"sources/vendor/McAfee/wg/#index-configuration","title":"Index Configuration","text":"key index notes mcafee_wg netproxy none"},{"location":"sources/vendor/Microfocus/arcsight/","title":"Arcsight Internal Agent","text":""},{"location":"sources/vendor/Microfocus/arcsight/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Microfocus/arcsight/#links","title":"Links","text":"Ref Link Splunk Add-on CEF 
https://github.com/splunk/splunk-add-on-for-cef/downloads/"},{"location":"sources/vendor/Microfocus/arcsight/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Microfocus/arcsight/#source","title":"Source","text":"source notes ArcSight:ArcSight Internal logs"},{"location":"sources/vendor/Microfocus/arcsight/#index-configuration","title":"Index Configuration","text":"key source index notes ArcSight_ArcSight ArcSight:ArcSight main none"},{"location":"sources/vendor/Microfocus/windows/","title":"Arcsight Microsoft Windows (CEF)","text":""},{"location":"sources/vendor/Microfocus/windows/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Microfocus/windows/#links","title":"Links","text":"Ref Link Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Splunk Add-on CEF https://bitbucket.org/SPLServices/ta-cef-microsoft-windows-for-splunk/downloads/ Product Manual https://docs.imperva.com/bundle/cloud-application-security/page/more/log-configuration.htm"},{"location":"sources/vendor/Microfocus/windows/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Microfocus/windows/#source","title":"Source","text":"source notes CEFEventLog:System or Application Event Windows Application and System Event Logs CEFEventLog:Microsoft Windows Windows Security Event Logs"},{"location":"sources/vendor/Microfocus/windows/#index-configuration","title":"Index Configuration","text":"key source index notes Microsoft_System or Application Event CEFEventLog:System or Application Event oswin none Microsoft_Microsoft Windows CEFEventLog:Microsoft Windows oswinsec none"},{"location":"sources/vendor/Microsoft/","title":"Cloud App Security (MCAS)","text":""},{"location":"sources/vendor/Microsoft/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Microsoft/#links","title":"Links","text":"Ref Link Splunk Add-on CEF 
https://bitbucket.org/SPLServices/ta-cef-for-splunk/downloads/ Splunk Add-on Source Specific none Product Manual https://docs.microsoft.com/en-us/cloud-app-security/siem"},{"location":"sources/vendor/Microsoft/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef Common sourcetype"},{"location":"sources/vendor/Microsoft/#source","title":"Source","text":"source notes microsoft:cas Common sourcetype"},{"location":"sources/vendor/Microsoft/#index-configuration","title":"Index Configuration","text":"key source index notes MCAS_SIEM_Agent microsoft:cas main none"},{"location":"sources/vendor/Mikrotik/routeros/","title":"RouterOS","text":""},{"location":"sources/vendor/Mikrotik/routeros/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Mikrotik/routeros/#links","title":"Links","text":""},{"location":"sources/vendor/Mikrotik/routeros/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes routeros none"},{"location":"sources/vendor/Mikrotik/routeros/#index-configuration","title":"Index Configuration","text":"key index notes mikrotik_routeros netops none mikrotik_routeros_fw netfw Used for events with forward:"},{"location":"sources/vendor/Mikrotik/routeros/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-mikrotik_routeros.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-mikrotik_routeros[sc4s-vps] {\n filter { \n        host(\"test-mrtros-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('mikrotik')\n            product('routeros')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/NetApp/ontap/","title":"OnTap","text":""},{"location":"sources/vendor/NetApp/ontap/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/NetApp/ontap/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3418/ Product Manual unknown"},{"location":"sources/vendor/NetApp/ontap/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netapp:ems None"},{"location":"sources/vendor/NetApp/ontap/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netapp_ontap netapp:ems infraops none"},{"location":"sources/vendor/NetApp/storage-grid/","title":"StorageGRID","text":""},{"location":"sources/vendor/NetApp/storage-grid/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/NetApp/storage-grid/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3895/ Product Manual unknown"},{"location":"sources/vendor/NetApp/storage-grid/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes grid:auditlog None grid:rest:api None"},{"location":"sources/vendor/NetApp/storage-grid/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netapp_grid grid:auditlog infraops none netapp_grid grid:rest:api infraops none"},{"location":"sources/vendor/NetScout/arbor_edge/","title":"Arbor Edge Defense","text":""},{"location":"sources/vendor/NetScout/arbor_edge/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/NetScout/arbor_edge/#links","title":"Links","text":"Ref Link TA https://github.com/arbor/TA_netscout_aed"},{"location":"sources/vendor/NetScout/arbor_edge/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netscout:aed"},{"location":"sources/vendor/NetScout/arbor_edge/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes NETSCOUT_Arbor Edge Defense netscout:aed netids 
NETSCOUT_Arbor Networks APS netscout:aed netids"},{"location":"sources/vendor/Netmotion/mobilityserver/","title":"Mobility Server","text":""},{"location":"sources/vendor/Netmotion/mobilityserver/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Netmotion/mobilityserver/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual unknown"},{"location":"sources/vendor/Netmotion/mobilityserver/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netmotion:mobilityserver:* The third segment of the source type is constructed from the sdid field of the syslog sdata"},{"location":"sources/vendor/Netmotion/mobilityserver/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netmotion_mobility-server_* netmotion:mobilityserver:* netops none"},{"location":"sources/vendor/Netmotion/reporting/","title":"Reporting","text":""},{"location":"sources/vendor/Netmotion/reporting/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Netmotion/reporting/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual unknown"},{"location":"sources/vendor/Netmotion/reporting/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netmotion:reporting None"},{"location":"sources/vendor/Netmotion/reporting/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netmotion_reporting netmotion:reporting netops none"},{"location":"sources/vendor/Netwrix/endpoint_protector/","title":"Endpoint Protector by CoSoSys","text":""},{"location":"sources/vendor/Netwrix/endpoint_protector/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Netwrix/endpoint_protector/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual na"},{"location":"sources/vendor/Netwrix/endpoint_protector/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes netwrix:epp 
None"},{"location":"sources/vendor/Netwrix/endpoint_protector/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes netwrix_epp netwrix:epp netops None"},{"location":"sources/vendor/Novell/netiq/","title":"NetIQ","text":""},{"location":"sources/vendor/Novell/netiq/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Novell/netiq/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Novell/netiq/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes novell:netiq none"},{"location":"sources/vendor/Novell/netiq/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes novell_netiq novell_netiq netauth None"},{"location":"sources/vendor/Nutanix/cvm/","title":"Nutanix_CVM_Audit","text":""},{"location":"sources/vendor/Nutanix/cvm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Nutanix/cvm/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Nutanix/cvm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes nutanix:syslog CVM logs nutanix:syslog:audit CVM system audit logs Considering the message host format is default ntnx-xxxx-cvm"},{"location":"sources/vendor/Nutanix/cvm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes nutanix_syslog nutanix:syslog infraops none nutanix_syslog_audit nutanix:syslog:audit infraops none"},{"location":"sources/vendor/Ossec/ossec/","title":"Ossec","text":""},{"location":"sources/vendor/Ossec/ossec/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ossec/ossec/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2808/ Product Manual 
https://www.ossec.net/docs/index.html"},{"location":"sources/vendor/Ossec/ossec/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ossec The add-on supports data from the following sources: File Integrity Management (FIM) data, FTP data, su data, ssh data, Windows data, including audit and logon information"},{"location":"sources/vendor/Ossec/ossec/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ossec_ossec ossec main None"},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/","title":"Cortex XDR","text":""},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2757/"},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pan:* pan:xsoar none"},{"location":"sources/vendor/PaloaltoNetworks/cortexxdr/#index-configuration","title":"Index Configuration","text":"key index notes Palo Alto Networks_Palo Alto Networks Cortex XSOAR epintel none"},{"location":"sources/vendor/PaloaltoNetworks/panos/","title":"panos","text":""},{"location":"sources/vendor/PaloaltoNetworks/panos/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/PaloaltoNetworks/panos/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2757/ Product Manual https://docs.paloaltonetworks.com/pan-os/9-0/pan-os-admin/monitoring/use-syslog-for-monitoring/configure-syslog-monitoring.html"},{"location":"sources/vendor/PaloaltoNetworks/panos/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pan:log None pan:globalprotect none pan:traffic None pan:threat None pan:system None pan:config None pan:hipmatch None pan:correlation None pan:userid None"},{"location":"sources/vendor/PaloaltoNetworks/panos/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes 
pan_panos_log pan:log netops none pan_panos_globalprotect pan:globalprotect netfw none pan_panos_traffic pan:traffic netfw none pan_panos_threat pan:threat netproxy none pan_panos_system pan:system netops none pan_panos_config pan:config netops none pan_panos_hipmatch pan:hipmatch netops none pan_panos_correlation pan:correlation netops none pan_panos_userid pan:userid netauth none"},{"location":"sources/vendor/PaloaltoNetworks/panos/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/PaloaltoNetworks/panos/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/PaloaltoNetworks/panos/#options","title":"Options","text":"Variable default description SC4S_LISTEN_PULSE_PAN_PANOS_RFC6587_PORT empty string Enable a TCP port using IETF framing (RFC6587) for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_PAN_PANOS_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_DEST_PAN_PANOS_ARCHIVE no Enable archive to disk for this specific source SC4S_DEST_PAN_PANOS_HEC no When Splunk HEC is disabled globally, set to yes to enable this specific source"},{"location":"sources/vendor/PaloaltoNetworks/panos/#verification","title":"Verification","text":"

An active firewall will generate frequent events. Use the following search to validate that events are present for each source device:

index=<asconfigured> sourcetype=pan:*| stats count by host\n
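The port and destination options in the table above are set as container environment variables. A minimal sketch of an env_file fragment (the `./env_file` path and port number 5514 are example assumptions; adjust both for your deployment — SC4S commonly reads `/opt/sc4s/env_file`):

```shell
# Minimal sketch: enable a dedicated TCP port and archive-to-disk for PAN-OS
# using the variables from the options table above. The env_file location is
# an assumption; adapt it to where your SC4S deployment reads its environment.
ENV_FILE="${ENV_FILE:-./env_file}"

# A comma-separated list of ports is accepted, per the options table.
printf '%s\n' \
  'SC4S_LISTEN_PAN_PANOS_TCP_PORT=5514' \
  'SC4S_DEST_PAN_PANOS_ARCHIVE=yes' >> "$ENV_FILE"

# Show the resulting PAN-OS settings.
grep 'PAN_PANOS' "$ENV_FILE"
```

After restarting SC4S, point the firewall's syslog destination at the dedicated port and rerun the verification search above.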
"},{"location":"sources/vendor/PaloaltoNetworks/prisma/","title":"Prisma SD-WAN ION","text":""},{"location":"sources/vendor/PaloaltoNetworks/prisma/#key-facts","title":"Key facts","text":" Ref Link Splunk Add-on none Product Manual https://docs.paloaltonetworks.com/prisma/prisma-sd-wan/prisma-sd-wan-admin/prisma-sd-wan-sites-and-devices/use-external-services-for-monitoring/syslog-server-support-in-prisma-sd-wan Product Manual https://docs.paloaltonetworks.com/prisma/prisma-sd-wan/prisma-sd-wan-admin/prisma-sd-wan-sites-and-devices/use-external-services-for-monitoring/syslog-server-support-in-prisma-sd-wan/syslog-flow-export"},{"location":"sources/vendor/PaloaltoNetworks/prisma/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes prisma:sd-wan:flow prisma:sd-wan:authentication prisma:sd-wan:event"},{"location":"sources/vendor/PaloaltoNetworks/prisma/#index-configuration","title":"Index Configuration","text":"key index notes prisma_sd-wan_flow netwaf none prisma_sd-wan_authentication netwaf none prisma_sd-wan_event netwaf none"},{"location":"sources/vendor/PaloaltoNetworks/traps/","title":"Traps","text":""},{"location":"sources/vendor/PaloaltoNetworks/traps/#traps","title":"TRAPS","text":""},{"location":"sources/vendor/PaloaltoNetworks/traps/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/PaloaltoNetworks/traps/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/2757/"},{"location":"sources/vendor/PaloaltoNetworks/traps/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pan:traps4 none"},{"location":"sources/vendor/PaloaltoNetworks/traps/#index-configuration","title":"Index Configuration","text":"key index notes Palo Alto Networks_Traps Agent epintel none"},{"location":"sources/vendor/Pfsense/firewall/","title":"Firewall","text":""},{"location":"sources/vendor/Pfsense/firewall/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Pfsense/firewall/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/1527/ Product Manual https://docs.netgate.com/pfsense/en/latest/monitoring/copying-logs-to-a-remote-host-with-syslog.html?highlight=syslog"},{"location":"sources/vendor/Pfsense/firewall/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pfsense:filterlog None pfsense:* All programs other than filterlog"},{"location":"sources/vendor/Pfsense/firewall/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes pfsense pfsense netops none pfsense_filterlog pfsense:filterlog netfw none"},{"location":"sources/vendor/Pfsense/firewall/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-pfsense_firewall.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-pfsense_firewall[sc4s-vps] {\n filter { \n        \"${HOST}\" eq \"pfsense_firewall\"\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('pfsense')\n            product('firewall')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Polycom/rprm/","title":"RPRM","text":""},{"location":"sources/vendor/Polycom/rprm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Polycom/rprm/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual unknown"},{"location":"sources/vendor/Polycom/rprm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes polycom:rprm:syslog"},{"location":"sources/vendor/Polycom/rprm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes polycom_rprm polycom:rprm:syslog netops none"},{"location":"sources/vendor/Powertech/interact/","title":"PowerTech Interact","text":""},{"location":"sources/vendor/Powertech/interact/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Powertech/interact/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Powertech/interact/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes PowerTech:Interact:cef CEF"},{"location":"sources/vendor/Powertech/interact/#source","title":"Source","text":"source notes PowerTech:Interact:cef None"},{"location":"sources/vendor/Powertech/interact/#index-configuration","title":"Index Configuration","text":"key source index notes PowerTech_Interact PowerTech:Interact netops none"},{"location":"sources/vendor/Proofpoint/","title":"Proofpoint Protection Server","text":""},{"location":"sources/vendor/Proofpoint/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Proofpoint/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3080/ Product Manual https://proofpointcommunities.force.com/community/s/article/Remote-Syslog-Forwarding"},{"location":"sources/vendor/Proofpoint/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pps_filter_log pps_mail_log This sourcetype will conflict with sendmail itself, so will require that the PPS send syslog on a dedicated port or be 
uniquely identifiable with a hostname glob or CIDR block if this sourcetype is desired for PPS."},{"location":"sources/vendor/Proofpoint/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes proofpoint_pps_filter pps_filter_log email none proofpoint_pps_sendmail pps_mail_log email none"},{"location":"sources/vendor/Proofpoint/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-proofpoint_pps.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-proofpoint_pps[sc4s-vps] {\n filter { \n        host(\"pps-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('proofpoint')\n            product('pps')\n        ); \n    };   \n};\n
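The hostname glob in the filter above can be sanity-checked against your device names before deploying; shell `case` patterns use the same glob semantics. A minimal sketch (the example hostnames `pps-mail01` and `mail01-pps` are assumptions):

```shell
# The parser above selects hosts with host("pps-*" type(glob)).
# Quick check of candidate device names against the same glob.
matches_pps_glob() {
  case "$1" in
    pps-*) echo "match" ;;
    *)     echo "no match" ;;
  esac
}

matches_pps_glob "pps-mail01"   # match
matches_pps_glob "mail01-pps"   # no match
```

Any device name that prints "no match" would fall through to the generic sendmail handling rather than the PPS parser.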
"},{"location":"sources/vendor/Pulse/connectsecure/","title":"Pulse","text":""},{"location":"sources/vendor/Pulse/connectsecure/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Pulse/connectsecure/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3852/ JunOS TechLibrary https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring%20Syslog.htm"},{"location":"sources/vendor/Pulse/connectsecure/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes pulse:connectsecure None pulse:connectsecure:web None"},{"location":"sources/vendor/Pulse/connectsecure/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes pulse_connect_secure pulse:connectsecure netfw none pulse_connect_secure_web pulse:connectsecure:web netproxy none"},{"location":"sources/vendor/PureStorage/array/","title":"Array","text":""},{"location":"sources/vendor/PureStorage/array/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/PureStorage/array/#links","title":"Links","text":"Ref Link Splunk Add-on None note TA published on Splunk base does not include syslog extractions Product Manual"},{"location":"sources/vendor/PureStorage/array/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes purestorage:array purestorage:array:${class} This type is generated from the message"},{"location":"sources/vendor/PureStorage/array/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes purestorage_array purestorage:array infraops None purestorage_array_${class} purestorage:array:class infraops class is extracted as the string following \u201cpurity.\u201d"},{"location":"sources/vendor/Qumulo/storage/","title":"Storage","text":""},{"location":"sources/vendor/Qumulo/storage/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Qumulo/storage/#links","title":"Links","text":"Ref Link Splunk Add-on none"},{"location":"sources/vendor/Qumulo/storage/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes qumulo:storage None"},{"location":"sources/vendor/Qumulo/storage/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes qumulo_storage qumulo:storage infraops none"},{"location":"sources/vendor/Radware/defensepro/","title":"DefensePro","text":""},{"location":"sources/vendor/Radware/defensepro/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Radware/defensepro/#links","title":"Links","text":"Ref Link Splunk Add-on Note this add-on does not provide functional extractions https://splunkbase.splunk.com/app/4480/ Product Manual https://www.radware.com/products/defensepro/"},{"location":"sources/vendor/Radware/defensepro/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes radware:defensepro Note some events do not contain host"},{"location":"sources/vendor/Radware/defensepro/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes radware_defensepro radware:defensepro netops none"},{"location":"sources/vendor/Raritan/dsx/","title":"DSX","text":""},{"location":"sources/vendor/Raritan/dsx/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Raritan/dsx/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual https://www.raritan.com/products/kvm-serial/serial-console-servers/serial-over-ip-console-server"},{"location":"sources/vendor/Raritan/dsx/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes raritan:dsx Note events do not contain host"},{"location":"sources/vendor/Raritan/dsx/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes raritan_dsx raritan:dsx infraops 
none"},{"location":"sources/vendor/Raritan/dsx/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-raritan_dsx.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-raritan_dsx[sc4s-vps] {\n filter { \n        host(\"raritan_dsx*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('raritan')\n            product('dsx')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Ricoh/mfp/","title":"MFP","text":""},{"location":"sources/vendor/Ricoh/mfp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ricoh/mfp/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Ricoh/mfp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ricoh:mfp None"},{"location":"sources/vendor/Ricoh/mfp/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ricoh_syslog ricoh:mfp printer none"},{"location":"sources/vendor/Ricoh/mfp/#sc4s-options","title":"SC4S Options","text":"Variable default description SC4S_SOURCE_RICOH_SYSLOG_FIXHOST yes Current firmware incorrectly sends the value of HOST in the program field. If this is ever corrected, this value will need to be set back to no; until then we suggest using yes"},{"location":"sources/vendor/Riverbed/","title":"Syslog","text":"

Used when a more specific SteelHead or SteelConnect source cannot be identified.

"},{"location":"sources/vendor/Riverbed/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Riverbed/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Riverbed/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes riverbed:syslog None"},{"location":"sources/vendor/Riverbed/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes riverbed_syslog riverbed:syslog netops none riverbed_syslog_nix_syslog nix:syslog osnix none"},{"location":"sources/vendor/Riverbed/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter {      \n        host(....)\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('riverbed')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Riverbed/steelconnect/","title":"Steelconnect","text":""},{"location":"sources/vendor/Riverbed/steelconnect/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Riverbed/steelconnect/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Riverbed/steelconnect/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes riverbed:steelconnect None"},{"location":"sources/vendor/Riverbed/steelconnect/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes riverbed_syslog_steelconnect riverbed:steelconnect netops none"},{"location":"sources/vendor/Riverbed/steelhead/","title":"SteelHead","text":""},{"location":"sources/vendor/Riverbed/steelhead/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Riverbed/steelhead/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Riverbed/steelhead/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes riverbed:steelhead None"},{"location":"sources/vendor/Riverbed/steelhead/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes riverbed_syslog_steelhead riverbed:steelhead netops none"},{"location":"sources/vendor/Riverbed/steelhead/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-riverbed_syslog.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-riverbed_syslog[sc4s-vps] {\n filter {      \n        host(....)\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('riverbed')\n            product('syslog')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Ruckus/SmartZone/","title":"Smart Zone","text":"

Some events may not match the source format; please report any issues found.

"},{"location":"sources/vendor/Ruckus/SmartZone/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ruckus/SmartZone/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Ruckus/SmartZone/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ruckus:smartzone None"},{"location":"sources/vendor/Ruckus/SmartZone/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ruckus_smartzone ruckus:smartzone netops none"},{"location":"sources/vendor/Schneider/apc/","title":"APC Power systems","text":""},{"location":"sources/vendor/Schneider/apc/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Schneider/apc/#links","title":"Links","text":"Ref Link Splunk Add-on none Product Manual multiple"},{"location":"sources/vendor/Schneider/apc/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes apc:syslog None"},{"location":"sources/vendor/Schneider/apc/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes schneider_apc apc:syslog main none"},{"location":"sources/vendor/Schneider/apc/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-schneider_apc.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-schneider_apc[sc4s-vps] {\n filter { \n        host(\"test_apc-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('schneider')\n            product('apc')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/","title":"SecureAuth IdP","text":""},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3008"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes secureauth:idp none"},{"location":"sources/vendor/SecureAuthIdP/secureauth_idp/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes secureauth_idp secureauth:idp netops None"},{"location":"sources/vendor/Semperis/DSP/","title":"Semperis DSP","text":""},{"location":"sources/vendor/Semperis/DSP/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Semperis/DSP/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Semperis/DSP/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes semperis:dsp none"},{"location":"sources/vendor/Semperis/DSP/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes semperis_dsp semperis:dsp netops None"},{"location":"sources/vendor/Solace/evenbroker/","title":"EventBroker","text":""},{"location":"sources/vendor/Solace/evenbroker/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Solace/evenbroker/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Solace/evenbroker/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes solace:eventbroker None"},{"location":"sources/vendor/Solace/evenbroker/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes solace_eventbroker solace:eventbroker main none"},{"location":"sources/vendor/Sophos/Firewall/","title":"
Firewall","text":""},{"location":"sources/vendor/Sophos/Firewall/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Sophos/Firewall/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/6187/ Product Manual unknown"},{"location":"sources/vendor/Sophos/Firewall/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes sophos:xg:atp None sophos:xg:anti_spam None sophos:xg:anti_virus None sophos:xg:content_filtering None sophos:xg:event None sophos:xg:firewall None sophos:xg:ssl None sophos:xg:sandbox None sophos:xg:system_health None sophos:xg:heartbeat None sophos:xg:waf None sophos:xg:wireless_protection None sophos:xg:idp None"},{"location":"sources/vendor/Sophos/Firewall/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes sophos_xg_atp sophos:xg:atp netdlp none sophos_xg_anti_spam sophos:xg:anti_spam netdlp none sophos_xg_anti_virus sophos:xg:anti_virus netdlp none sophos_xg_content_filtering sophos:xg:content_filtering netdlp none sophos_xg_event sophos:xg:event netdlp none sophos_xg_firewall sophos:xg:firewall netdlp none sophos_xg_ssl sophos:xg:ssl netdlp none sophos_xg_sandbox sophos:xg:sandbox netdlp none sophos_xg_system_health sophos:xg:system_health netdlp none sophos_xg_heartbeat sophos:xg:heartbeat netdlp none sophos_xg_waf sophos:xg:waf netdlp none sophos_xg_wireless_protection sophos:xg:wireless_protection netdlp none sophos_xg_idp sophos:xg:idp netdlp none"},{"location":"sources/vendor/Sophos/webappliance/","title":"Web Appliance","text":""},{"location":"sources/vendor/Sophos/webappliance/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Sophos/webappliance/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Sophos/webappliance/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes sophos:webappliance 
None"},{"location":"sources/vendor/Sophos/webappliance/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes sophos_webappliance sophos:webappliance netproxy none"},{"location":"sources/vendor/Sophos/webappliance/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-sophos_webappliance.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-sophos_webappliance[sc4s-vps] {\n filter { \n        host(\"test-sophos-webapp-\" type(string) flags(prefix))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('sophos')\n            product('webappliance')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Spectracom/","title":"NTP Appliance","text":""},{"location":"sources/vendor/Spectracom/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Spectracom/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Spectracom/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes spectracom:ntp None nix:syslog None"},{"location":"sources/vendor/Spectracom/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes spectracom_ntp spectracom:ntp netops none"},{"location":"sources/vendor/Spectracom/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-spectracom_ntp.conf\n#The file name provided is a suggestion; it must be globally unique\n\napplication app-vps-test-spectracom_ntp[sc4s-vps] {\n filter { \n        netmask(169.254.100.1/24)\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('spectracom')\n            product('ntp')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/","title":"Splunk Heavy Forwarder","text":"

In certain network architectures, such as those using data diodes or networks requiring \u201cin the clear\u201d inspection at network egress, SC4S can be used to accept specially formatted output from Splunk as RFC5424 syslog.

"},{"location":"sources/vendor/Splunk/heavyforwarder/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Splunk/heavyforwarder/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual unknown"},{"location":"sources/vendor/Splunk/heavyforwarder/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes spectracom:ntp None nix:syslog None"},{"location":"sources/vendor/Splunk/heavyforwarder/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"

Index, source, and sourcetype will be used as determined by the sending source/heavy forwarder (HWF).

"},{"location":"sources/vendor/Splunk/heavyforwarder/#splunk-configuration","title":"Splunk Configuration","text":""},{"location":"sources/vendor/Splunk/heavyforwarder/#outputsconf","title":"outputs.conf","text":"
# Because the audit trail is protected and cannot be transformed, we cannot use the default group; we must use TCP routing\n[tcpout]\ndefaultGroup = NoForwarding\n\n[tcpout:nexthop]\nserver = localhost:9000\nsendCookedData = false\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#propsconf","title":"props.conf","text":"
[default]\nADD_EXTRA_TIME_FIELDS = none\nANNOTATE_PUNCT = false\nSHOULD_LINEMERGE = false\nTRANSFORMS-zza-syslog = syslog_canforward, metadata_meta,  metadata_source, metadata_sourcetype, metadata_index, metadata_host, metadata_subsecond, metadata_time, syslog_prefix, syslog_drop_zero\n# The following applies for TCP destinations where the IETF frame is required\nTRANSFORMS-zzz-syslog = syslog_octal, syslog_octal_append\n# Comment out the above and uncomment the following for UDP\n#TRANSFORMS-zzz-syslog-udp = syslog_octal, syslog_octal_append, syslog_drop_zero\n\n[audittrail]\n# We can't transform this source type; it's protected\nTRANSFORMS-zza-syslog =\nTRANSFORMS-zzz-syslog =\n
"},{"location":"sources/vendor/Splunk/heavyforwarder/#transformsconf","title":"transforms.conf","text":"
[syslog_canforward]\nREGEX = ^.(?!audit)\nDEST_KEY = _TCP_ROUTING\nFORMAT = nexthop\n\n[metadata_meta]\nSOURCE_KEY = _meta\nREGEX = (?ims)(.*)\nFORMAT = ~~~SM~~~$1~~~EM~~~$0 \nDEST_KEY = _raw\n\n[metadata_source]\nSOURCE_KEY = MetaData:Source\nREGEX = ^source::(.*)$\nFORMAT = s=\"$1\"] $0\nDEST_KEY = _raw\n\n[metadata_sourcetype]\nSOURCE_KEY = MetaData:Sourcetype\nREGEX = ^sourcetype::(.*)$\nFORMAT = st=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_index]\nSOURCE_KEY = _MetaData:Index\nREGEX = (.*)\nFORMAT = i=\"$1\" $0\nDEST_KEY = _raw\n\n[metadata_host]\nSOURCE_KEY = MetaData:Host\nREGEX = ^host::(.*)$\nFORMAT = \" h=\"$1\" $0\nDEST_KEY = _raw\n\n[syslog_prefix]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT = <1>1 - - SPLUNK - COOKED [fields@274489 $0\nDEST_KEY = _raw\n\n[metadata_time]\nSOURCE_KEY = _time\nREGEX = (.*)\nFORMAT =  t=\"$1$0\nDEST_KEY = _raw\n\n[metadata_subsecond]\nSOURCE_KEY = _meta\nREGEX = \\_subsecond\\:\\:(\\.\\d+)\nFORMAT = $1 $0\nDEST_KEY = _raw\n\n[syslog_octal]\nINGEST_EVAL= mlen=length(_raw)+1\n\n[syslog_octal_append]\nINGEST_EVAL = _raw=mlen + \" \" + _raw\n\n[syslog_drop_zero]\nINGEST_EVAL = queue=if(mlen<10,\"nullQueue\",queue)\n
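The syslog_octal and syslog_octal_append transforms above implement octet-counting framing (RFC 6587): the byte length of the event, plus one for the trailing newline, is prepended to the message. A minimal shell sketch of the same framing, using a hypothetical sample event:

```shell
# Hypothetical example of a prefixed event as built by the transforms above
msg='<1>1 - - SPLUNK - COOKED [fields@274489 i="main" st="syslog"] hello'
# Mirror syslog_octal: length of _raw plus 1 for the trailing newline
mlen=$(( ${#msg} + 1 ))
# Mirror syslog_octal_append: prepend the count and a space
framed="$mlen $msg"
printf '%s\n' "$framed"
```

A receiver reads the leading integer and then consumes exactly that many bytes as one message, which is why zero-length events must be dropped (syslog_drop_zero) before framing.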
"},{"location":"sources/vendor/Splunk/sc4s/","title":"Splunk Connect for Syslog (SC4S)","text":""},{"location":"sources/vendor/Splunk/sc4s/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Splunk/sc4s/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4740/ Product Manual https://splunk-connect-for-syslog.readthedocs.io/en/latest/"},{"location":"sources/vendor/Splunk/sc4s/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes sc4s:events Internal events from the SC4S container and underlying syslog-ng process sc4s:metrics syslog-ng operational metrics that will be delivered directly to a metrics index in Splunk"},{"location":"sources/vendor/Splunk/sc4s/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes splunk_sc4s_events all main none splunk_sc4s_metrics all _metrics none splunk_sc4s_fallback all main none"},{"location":"sources/vendor/Splunk/sc4s/#filter-type","title":"Filter type","text":"

SC4S events and metrics are generated automatically and no specific ports or filters need to be configured for the collection of this data.

"},{"location":"sources/vendor/Splunk/sc4s/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/Splunk/sc4s/#options","title":"Options","text":"Variable default description SC4S_DEST_SPLUNK_SC4S_METRICS_HEC multi2 event produce metrics as plain text events; single produce metrics using Splunk Enterprise 7.3 single metrics format; multi produce metrics using Splunk Enterprise >8.1 multi metric format multi2 produces improved (reduced resource consumption) multi metric format SC4S_SOURCE_MARK_MESSAGE_NULLQUEUE yes (yes"},{"location":"sources/vendor/Splunk/sc4s/#verification","title":"Verification","text":"

SC4S will generate versioning events at startup. These startup events can be used to validate that HEC is configured properly on the Splunk side.

index=<asconfigured> sourcetype=sc4s:events | stats count by host\n

Metrics can be observed via the \u201cAnalytics\u2013>Metrics\u201d navigation in the Search and Reporting app in Splunk.

"},{"location":"sources/vendor/StealthWatch/StealthIntercept/","title":"Stealth Intercept","text":""},{"location":"sources/vendor/StealthWatch/StealthIntercept/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/StealthWatch/StealthIntercept/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4609/ Product Manual unknown"},{"location":"sources/vendor/StealthWatch/StealthIntercept/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes StealthINTERCEPT None StealthINTERCEPT:alerts SC4S Format Shifts to JSON override template to t_msg_hdr for original raw"},{"location":"sources/vendor/StealthWatch/StealthIntercept/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes stealthbits_stealthintercept StealthINTERCEPT netids none stealthbits_stealthintercept_alerts StealthINTERCEPT:alerts netids Note TA does not support this source type"},{"location":"sources/vendor/Tanium/platform/","title":"Platform","text":"

This source requires a TLS connection; in most cases enabling TLS and using the default port 6514 is adequate. The source is understood to require a valid certificate.
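A minimal env_file sketch for enabling the default TLS listener. The variable name is an assumption following the SC4S_LISTEN_*_TLS_PORT pattern shown elsewhere in this document; because Tanium is understood to validate the certificate, a certificate trusted by the Tanium platform must also be installed per the SC4S runtime documentation:

```shell
# Assumption: default-listener TLS variable; verify against your SC4S version
SC4S_LISTEN_DEFAULT_TLS_PORT=6514
```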

"},{"location":"sources/vendor/Tanium/platform/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tanium/platform/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4439/"},{"location":"sources/vendor/Tanium/platform/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tanium none"},{"location":"sources/vendor/Tanium/platform/#index-configuration","title":"Index Configuration","text":"key index notes tanium_syslog epintel none"},{"location":"sources/vendor/Tenable/ad/","title":"ad","text":""},{"location":"sources/vendor/Tenable/ad/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tenable/ad/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4060/ Product Manual"},{"location":"sources/vendor/Tenable/ad/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tenable:ad:alerts None"},{"location":"sources/vendor/Tenable/ad/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes tenable_ad tenable:ad:alerts oswinsec none"},{"location":"sources/vendor/Tenable/nnm/","title":"nnm","text":""},{"location":"sources/vendor/Tenable/nnm/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tenable/nnm/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4060/ Product Manual https://docs.tenable.com/integrations/Splunk/Content/Splunk2/ProcessWorkflow.htm"},{"location":"sources/vendor/Tenable/nnm/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tenable:nnm:vuln None"},{"location":"sources/vendor/Tenable/nnm/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes tenable_nnm tenable:nnm:vuln netfw none"},{"location":"sources/vendor/Thales/thales_vormetric/","title":"Thales Vormetric Data Security 
Platform","text":""},{"location":"sources/vendor/Thales/thales_vormetric/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Thales/thales_vormetric/#links","title":"Links","text":"Ref Link Splunk Add-on na Product Manual link"},{"location":"sources/vendor/Thales/thales_vormetric/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes thales:vormetric None"},{"location":"sources/vendor/Thales/thales_vormetric/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes thales_vormetric thales:vormetric netauth None"},{"location":"sources/vendor/Thycotic/secretserver/","title":"Secret Server","text":""},{"location":"sources/vendor/Thycotic/secretserver/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Thycotic/secretserver/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4060/ Product Manual"},{"location":"sources/vendor/Thycotic/secretserver/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes thycotic:syslog None"},{"location":"sources/vendor/Thycotic/secretserver/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes Thycotic Software_Secret Server thycotic:syslog netauth none"},{"location":"sources/vendor/Tintri/syslog/","title":"Syslog","text":""},{"location":"sources/vendor/Tintri/syslog/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Tintri/syslog/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Tintri/syslog/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes tintri none"},{"location":"sources/vendor/Tintri/syslog/#index-configuration","title":"Index Configuration","text":"key index notes tintri_syslog infraops none"},{"location":"sources/vendor/Trellix/cms/","title":"Trellix CMS","text":""},{"location":"sources/vendor/Trellix/cms/#key-facts","title":"Key 
facts","text":""},{"location":"sources/vendor/Trellix/cms/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Trellix/cms/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes trellix:cms CEF"},{"location":"sources/vendor/Trellix/cms/#source","title":"Source","text":"source notes trellix:cms None"},{"location":"sources/vendor/Trellix/cms/#index-configuration","title":"Index Configuration","text":"key source index notes trellix_cms trellix:cms netops none"},{"location":"sources/vendor/Trellix/mps/","title":"Trellix MPS","text":""},{"location":"sources/vendor/Trellix/mps/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Trellix/mps/#links","title":"Links","text":"Ref Link Splunk Add-on None"},{"location":"sources/vendor/Trellix/mps/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes trellix:mps CEF"},{"location":"sources/vendor/Trellix/mps/#source","title":"Source","text":"source notes trellix:mps None"},{"location":"sources/vendor/Trellix/mps/#index-configuration","title":"Index Configuration","text":"key source index notes trellix_mps trellix:mps netops none"},{"location":"sources/vendor/Trend/deepsecurity/","title":"Deep Security","text":""},{"location":"sources/vendor/Trend/deepsecurity/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Trend/deepsecurity/#links","title":"Links","text":"Ref Link Splunk Add-on CEF https://splunkbase.splunk.com/app/1936/"},{"location":"sources/vendor/Trend/deepsecurity/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes deepsecurity-system_events deepsecurity-intrusion_prevention deepsecurity-integrity_monitoring deepsecurity-log_inspection deepsecurity-web_reputation deepsecurity-firewall deepsecurity-antimalware deepsecurity-app_control"},{"location":"sources/vendor/Trend/deepsecurity/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Trend Micro_Deep Security Agent deepsecurity epintel Used 
only if a correct source type is not matched Trend Micro_Deep Security Agent_intrusion prevention deepsecurity-intrusion_prevention epintel Trend Micro_Deep Security Agent_integrity monitoring deepsecurity-integrity_monitoring epintel Trend Micro_Deep Security Agent_log inspection deepsecurity-log_inspection epintel Trend Micro_Deep Security Agent_web reputation deepsecurity-web_reputation epintel Trend Micro_Deep Security Agent_firewall deepsecurity-firewall epintel Trend Micro_Deep Security Agent_antimalware deepsecurity-antimalware epintel Trend Micro_Deep Security Agent_app control deepsecurity-app_control epintel Trend Micro_Deep Security Manager deepsecurity-system_events epintel"},{"location":"sources/vendor/Ubiquiti/unifi/","title":"Unifi","text":"

All Ubiquiti UniFi firewalls, switches, and access points share a common syslog configuration via the NMS.

"},{"location":"sources/vendor/Ubiquiti/unifi/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Ubiquiti/unifi/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/4107/ Product Manual https://https://help.ubnt.com/"},{"location":"sources/vendor/Ubiquiti/unifi/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes ubnt Used when no sub source type is required by add on ubnt:fw USG events ubnt:threat USG IDS events ubnt:switch Unifi Switches ubnt:wireless Access Point logs"},{"location":"sources/vendor/Ubiquiti/unifi/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes ubiquiti_unifi ubnt netops none ubiquiti_unifi_fw ubnt:fw netfw none"},{"location":"sources/vendor/Ubiquiti/unifi/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-ubiquiti_unifi_fw.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-ubiquiti_unifi_fw[sc4s-vps] {\n filter { \n        host(\"usg-*\" type(glob))\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('ubiquiti')\n            product('unifi')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/VMWare/airwatch/","title":"Airwatch","text":"

AirWatch is an enterprise mobility management (EMM) product that provides standalone management systems for content, applications, and email.

"},{"location":"sources/vendor/VMWare/airwatch/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/airwatch/#links","title":"Links","text":"Ref Link Product Manual https://docs.vmware.com/en/VMware-Workspace-ONE/index.html"},{"location":"sources/vendor/VMWare/airwatch/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:airwatch None"},{"location":"sources/vendor/VMWare/airwatch/#index-configuration","title":"Index Configuration","text":"key index notes vmware_airwatch epintel none"},{"location":"sources/vendor/VMWare/carbonblack/","title":"Carbon Black Protection","text":""},{"location":"sources/vendor/VMWare/carbonblack/#rfc-5424-format","title":"RFC 5424 Format","text":""},{"location":"sources/vendor/VMWare/carbonblack/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/carbonblack/#links","title":"Links","text":"Ref Link Splunk Add-on none"},{"location":"sources/vendor/VMWare/carbonblack/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:cb:protect Common sourcetype"},{"location":"sources/vendor/VMWare/carbonblack/#source","title":"Source","text":"source notes carbonblack:protection:cef Note this method of onboarding is not recommended for a more complete experience utilize the json format supported by he product with hec or s3"},{"location":"sources/vendor/VMWare/carbonblack/#index-configuration","title":"Index Configuration","text":"key source index notes vmware_cb-protect carbonblack:protection:cef epintel none"},{"location":"sources/vendor/VMWare/carbonblack/#legacy-cef-format","title":"Legacy CEF Format","text":""},{"location":"sources/vendor/VMWare/carbonblack/#key-facts_1","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/carbonblack/#links_1","title":"Links","text":"Ref Link Splunk Add-on none"},{"location":"sources/vendor/VMWare/carbonblack/#sourcetypes_1","title":"Sourcetypes","text":"sourcetype notes cef Common 
sourcetype"},{"location":"sources/vendor/VMWare/carbonblack/#source_1","title":"Source","text":"source notes carbonblack:protection:cef Note this method of onboarding is not recommended for a more complete experience utilize the json format supported by he product with hec or s3"},{"location":"sources/vendor/VMWare/carbonblack/#index-configuration_1","title":"Index Configuration","text":"key source index notes Carbon Black_Protection carbonblack:protection:cef epintel none"},{"location":"sources/vendor/VMWare/horizonview/","title":"Horizon View","text":""},{"location":"sources/vendor/VMWare/horizonview/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/VMWare/horizonview/#links","title":"Links","text":"Ref Link Splunk Add-on None Manual unknown"},{"location":"sources/vendor/VMWare/horizonview/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:horizon None nix:syslog When used with a default port this will follow the generic NIX configuration when using a dedicated port, IP or host rules events will follow the index configuration for vmware nsx"},{"location":"sources/vendor/VMWare/horizonview/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes vmware_horizon vmware:horizon main none"},{"location":"sources/vendor/VMWare/vsphere/","title":"Vsphere","text":""},{"location":"sources/vendor/VMWare/vsphere/#product-vsphere-esx-nsx-controller-manager-edge","title":"Product - vSphere - ESX NSX (Controller, Manager, Edge)","text":"

The VMware vSphere product line has multiple long-standing, known issues in its syslog output.

WARNING: use of a load balancer with UDP will cause \u201ccorrupt\u201d event behavior due to out-of-order message processing by the load balancer.

Ref Link Splunk Add-on ESX https://splunkbase.splunk.com/app/5603/ Splunk Add-on Vcenter https://splunkbase.splunk.com/app/5601/ Splunk Add-on nxs none Splunk Add-on vsan none"},{"location":"sources/vendor/VMWare/vsphere/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vmware:esxlog:${PROGRAM} None vmware:nsxlog:${PROGRAM} None vmware:vclog:${PROGRAM} None nix:syslog When used with a default port, this will follow the generic NIX configuration. When using a dedicated port, IP or host rules events will follow the index configuration for vmware nsx"},{"location":"sources/vendor/VMWare/vsphere/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes vmware_vsphere_esx vmware:esxlog:${PROGRAM} infraops none vmware_vsphere_nsx vmware:nsxlog:${PROGRAM} infraops none vmware_vsphere_nsxfw vmware:nsxlog:dfwpktlogs netfw none vmware_vsphere_vc vmware:vclog:${PROGRAM} infraops none"},{"location":"sources/vendor/VMWare/vsphere/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content when using the default configuration. SC4S will normalize the structure of VMware events from multiple incorrectly formed variants to RFC 5424 format to improve parsing.

"},{"location":"sources/vendor/VMWare/vsphere/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/VMWare/vsphere/#options","title":"Options","text":"Variable default description SC4S_LISTEN_VMWARE_VSPHERE_TCP_PORT empty string Enable a TCP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_VMWARE_VSPHERE_UDP_PORT empty string Enable a UDP port for this specific vendor product using a comma-separated list of port numbers SC4S_LISTEN_VMWARE_VSPHERE_TLS_PORT empty string Enable a TLS port for this specific vendor product using a comma-separated list of port numbers SC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG empty string empty/yes groups known instances of improperly split events set \u201cyes\u201d to return to enable"},{"location":"sources/vendor/VMWare/vsphere/#verification","title":"Verification","text":"

An active vSphere environment will generate frequent events. Use the following search to validate that events are present per source device:

index=<asconfigured> sourcetype=\"vmware:vsphere:*\" | stats count by host\n
"},{"location":"sources/vendor/VMWare/vsphere/#automatic-parser-configuration","title":"Automatic Parser Configuration","text":"

Enable the following options in the env_file

#Do not enable with a SNAT load balancer\nSC4S_USE_NAME_CACHE=yes\n#Combine known split events into a single event for Splunk\nSC4S_SOURCE_VMWARE_VSPHERE_GROUPMSG=yes\n#Learn vendor product from recognized events and apply to generic events\n#for example after the first vpxd event sshd will utilize vps \"vmware_vsphere_nix_syslog\" rather than \"nix_syslog\"\nSC4S_USE_VPS_CACHE=yes\n
"},{"location":"sources/vendor/VMWare/vsphere/#manual-parser-configuration","title":"Manual Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-vmware_vsphere.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-vmware_vsphere[sc4s-vps] {\n filter {      \n        #netmask(169.254.100.1/24)\n        #host(\"-esx-\")\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('vmware')\n            product('vsphere')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/Varonis/datadvantage/","title":"DatAdvantage","text":""},{"location":"sources/vendor/Varonis/datadvantage/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Varonis/datadvantage/#links","title":"Links","text":"Ref Link Technology Add-On for Varonis https://splunkbase.splunk.com/app/4256/"},{"location":"sources/vendor/Varonis/datadvantage/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes varonis:ta"},{"location":"sources/vendor/Varonis/datadvantage/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Varonis Inc._DatAdvantage varonis:ta main"},{"location":"sources/vendor/Vectra/cognito/","title":"Cognito","text":""},{"location":"sources/vendor/Vectra/cognito/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Vectra/cognito/#links","title":"Links","text":"Ref Link Technology Add-On for Vectra Cognito https://splunkbase.splunk.com/app/4408/"},{"location":"sources/vendor/Vectra/cognito/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes vectra:cognito:detect vectra:cognito:accountdetect vectra:cognito:accountscoring vectra:cognito:audit vectra:cognito:campaigns vectra:cognito:health vectra:cognito:hostscoring vectra:cognito:accountlockdown"},{"location":"sources/vendor/Vectra/cognito/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes Vectra Networks_X Series vectra:cognito:detect main Vectra Networks_X Series_accountdetect vectra:cognito:accountdetect main Vectra Networks_X Series_asc vectra:cognito:accountscoring main Vectra Networks_X Series_audit vectra:cognito:audit main Vectra Networks_X Series_campaigns vectra:cognito:campaigns main Vectra Networks_X Series_health vectra:cognito:health main Vectra Networks_X Series_hsc vectra:cognito:hostscoring main Vectra Networks_X Series_lockdown vectra:cognito:accountlockdown 
main"},{"location":"sources/vendor/Veeam/veeam/","title":"Veeam","text":""},{"location":"sources/vendor/Veeam/veeam/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Veeam/veeam/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes veeam:vbr:syslog"},{"location":"sources/vendor/Veeam/veeam/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes veeam_vbr_syslog veeam:vbr:syslog infraops none"},{"location":"sources/vendor/Wallix/bastion/","title":"Bastion","text":""},{"location":"sources/vendor/Wallix/bastion/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Wallix/bastion/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3661/"},{"location":"sources/vendor/Wallix/bastion/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes WB:syslog note this sourcetype includes program:rdproxy all other data will be treated as nix"},{"location":"sources/vendor/Wallix/bastion/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes wallix_bastion infraops main none"},{"location":"sources/vendor/Wallix/bastion/#parser-configuration","title":"Parser Configuration","text":"
#/opt/sc4s/local/config/app-parsers/app-vps-wallix_bastion.conf\n#File name provided is a suggestion it must be globally unique\n\napplication app-vps-test-wallix_bastion[sc4s-vps] {\n filter { \n        host('^wasb')\n    }; \n    parser { \n        p_set_netsource_fields(\n            vendor('wallix')\n            product('bastion')\n        ); \n    };   \n};\n
"},{"location":"sources/vendor/XYPro/mergedaudit/","title":"Merged Audit","text":"

XYPro Merged Audit, also called XYGate or XMA, is the de facto solution for syslog from HPE NonStop servers (Tandem).

"},{"location":"sources/vendor/XYPro/mergedaudit/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/XYPro/mergedaudit/#links","title":"Links","text":"Ref Link Splunk Add-on None Product Manual https://xypro.com/products/hpe-software-from-xypro/"},{"location":"sources/vendor/XYPro/mergedaudit/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes cef None"},{"location":"sources/vendor/XYPro/mergedaudit/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes XYPRO_NONSTOP cef infraops none"},{"location":"sources/vendor/Zscaler/lss/","title":"LSS","text":"

The Zscaler product manual includes an extensive section on configuring multiple Splunk TCP input ports (around page 26). When using SC4S these ports are not required and should not be used. Simply configure all outputs from the LSS to use the IP address or host name of the SC4S instance and port 514.

"},{"location":"sources/vendor/Zscaler/lss/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Zscaler/lss/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3865/ Product Manual https://community.zscaler.com/t/zscaler-splunk-app-design-and-installation-documentation/4728"},{"location":"sources/vendor/Zscaler/lss/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes zscalerlss-zpa-app None zscalerlss-zpa-bba None zscalerlss-zpa-connector None zscalerlss-zpa-auth None zscalerlss-zpa-audit None"},{"location":"sources/vendor/Zscaler/lss/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes zscaler_lss zscalerlss-zpa-app, zscalerlss-zpa-bba, zscalerlss-zpa-connector, zscalerlss-zpa-auth, zscalerlss-zpa-audit netproxy none"},{"location":"sources/vendor/Zscaler/nss/","title":"NSS","text":"

The Zscaler product manual includes an extensive section on configuring multiple Splunk TCP input ports (around page 26). When using SC4S these ports are not required and should not be used. Simply configure all outputs from the NSS to use the IP address or host name of the SC4S instance and port 514.

"},{"location":"sources/vendor/Zscaler/nss/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/Zscaler/nss/#links","title":"Links","text":"Ref Link Splunk Add-on https://splunkbase.splunk.com/app/3865/ Product Manual https://community.zscaler.com/t/zscaler-splunk-app-design-and-installation-documentation/4728"},{"location":"sources/vendor/Zscaler/nss/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes zscaler_nss_alerts Requires format customization add \\tvendor=Zscaler\\tproduct=alerts immediately prior to the \\n in the NSS Alert Web format. See Zscaler manual for more info. zscaler_nss_dns Requires format customization add \\tvendor=Zscaler\\tproduct=dns immediately prior to the \\n in the NSS DNS format. See Zscaler manual for more info. zscaler_nss_web None zscaler_nss_fw Requires format customization add \\tvendor=Zscaler\\tproduct=fw immediately prior to the \\n in the Firewall format. See Zscaler manual for more info."},{"location":"sources/vendor/Zscaler/nss/#sourcetype-and-index-configuration","title":"Sourcetype and Index Configuration","text":"key sourcetype index notes zscaler_nss_alerts zscalernss-alerts main none zscaler_nss_dns zscalernss-dns netdns none zscaler_nss_fw zscalernss-fw netfw none zscaler_nss_web zscalernss-web netproxy none zscaler_nss_tunnel zscalernss-tunnel netops none zscaler_zia_audit zscalernss-zia-audit netops none zscaler_zia_sandbox zscalernss-zia-sandbox main none"},{"location":"sources/vendor/Zscaler/nss/#filter-type","title":"Filter type","text":"

MSG Parse: This filter parses message content

"},{"location":"sources/vendor/Zscaler/nss/#setup-and-configuration","title":"Setup and Configuration","text":""},{"location":"sources/vendor/a10networks/vthunder/","title":"a10networks vthunder","text":""},{"location":"sources/vendor/a10networks/vthunder/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/a10networks/vthunder/#links","title":"Links","text":"Ref Link A10 Networks SSL Insight App https://splunkbase.splunk.com/app/3937 A10 Networks Application Firewall App https://splunkbase.splunk.com/app/3920 A10 Networks L4 Firewall App https://splunkbase.splunk.com/app/3910"},{"location":"sources/vendor/a10networks/vthunder/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes a10networks:vThunder:cef CEF a10networks:vThunder:syslog Syslog"},{"location":"sources/vendor/a10networks/vthunder/#source","title":"Source","text":"source notes a10networks:vThunder None"},{"location":"sources/vendor/a10networks/vthunder/#index-configuration","title":"Index Configuration","text":"key source index notes a10networks_vThunder a10networks:vThunder netwaf, netops none"},{"location":"sources/vendor/epic/epic_ehr/","title":"Epic EHR","text":""},{"location":"sources/vendor/epic/epic_ehr/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/epic/epic_ehr/#links","title":"Links","text":"Ref Link Splunk Add-on na"},{"location":"sources/vendor/epic/epic_ehr/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes epic:epic-ehr:syslog None"},{"location":"sources/vendor/epic/epic_ehr/#index-configuration","title":"Index Configuration","text":"key sourcetype index notes epic_epic-ehr epic:epic-ehr:syslog main none"},{"location":"sources/vendor/syslog-ng/loggen/","title":"loggen","text":"

Loggen is a tool used to load test syslog implementations.

"},{"location":"sources/vendor/syslog-ng/loggen/#key-facts","title":"Key facts","text":""},{"location":"sources/vendor/syslog-ng/loggen/#links","title":"Links","text":"Ref Link Product Manual https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/96#loggen.1"},{"location":"sources/vendor/syslog-ng/loggen/#sourcetypes","title":"Sourcetypes","text":"sourcetype notes syslogng:loggen By default, loggen uses the legacy BSD-syslog message format.BSD example:loggen --inet --dgram --number 1 <ip> <port>RFC5424 example:loggen --inet --dgram -PF --number 1 <ip> <port>Refer to above manual link for more examples."},{"location":"sources/vendor/syslog-ng/loggen/#index-configuration","title":"Index Configuration","text":"key index notes syslogng_loggen main none"},{"location":"troubleshooting/troubleshoot_SC4S_server/","title":"Validate server startup and operations","text":"

This topic helps you find the most common solutions to startup and operational issues with SC4S.

If you plan to run SC4S with standard configuration, we recommend that you perform startup out of systemd.

If you are using a custom configuration of SC4S with significant modifications, for example, multiple unique ports for sources, hostname/CIDR block configuration for sources, or new log paths, start SC4S with the container runtime command podman or docker directly from the command line as described in this topic. When you are satisfied with the operation, you can then transition to systemd.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-systemd-errors-occur-during-sc4s-startup","title":"Issue: systemd errors occur during SC4S startup","text":"

If you are running out of systemd, you may see this at startup:

[root@sc4s syslog-ng]# systemctl start sc4s\nJob for sc4s.service failed because the control process exited with error code. See \"systemctl status sc4s.service\" and \"journalctl -xe\" for details.\n
Most issues that occur with startup and operation of SC4S involve syntax errors or duplicate listening ports.

Try the following to resolve the issue:

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-your-sc4s-container-is-running","title":"Check that your SC4S container is running","text":"

If you start with systemd and the container is not running, check with the following:

journalctl -b -u sc4s | tail -100\n
This will print the last 100 lines of the system journal in detail, which should be sufficient to see the specific syntax or runtime failure and guide you in troubleshooting the unexpected container exit.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-sc4s-container-starts-and-runs-properly-outside-of-the-systemd-service-environment","title":"Check that the SC4S container starts and runs properly outside of the systemd service environment","text":"

As an alternative to launching with systemd during the initial installation phase, you can test the container startup outside of the systemd startup environment. This is especially important for troubleshooting or log path development, for example, when SC4S_DEBUG_CONTAINER is set to \u201cyes\u201d.

The following command launches the container directly from the command line. This command assumes the local mounted directories are set up as shown in the \u201cgetting started\u201d examples; adjust for your local requirements. If you are using Docker, substitute \u201cdocker\u201d for \u201cpodman\u201d as the container runtime command.

/usr/bin/podman run \\\n    -v splunk-sc4s-var:/var/lib/syslog-ng \\\n    -v /opt/sc4s/local:/etc/syslog-ng/conf.d/local:z \\\n    -v /opt/sc4s/archive:/var/lib/syslog-ng/archive:z \\\n    -v /opt/sc4s/tls:/etc/syslog-ng/tls:z \\\n    --env-file=/opt/sc4s/env_file \\\n    --network host \\\n    --name SC4S \\\n    --rm ghcr.io/splunk/splunk-connect-for-syslog/container3:latest\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#check-that-the-container-is-still-running-when-systemd-indicates-that-its-not-running","title":"Check that the container is still running when systemd indicates that it\u2019s not running","text":"

In some instances, particularly when SC4S_DEBUG_CONTAINER=yes, an SC4S container might not shut down completely when starting/stopping out of systemd, and systemd will attempt to start a new container when one is already running with the SC4S name. You will see this type of output when viewing the journal after a failed start caused by this condition, or a similar message when the container is run directly from the CLI:

Jul 15 18:45:20 sra-sc4s-alln01-02 podman[11187]: Error: error creating container storage: the container name \"SC4S\" is already in use by \"894357502b2a7142d097ea3ca1468d1cb4fbc69959a9817a1bbe145a09d37fb9\". You have to remove that container...\nJul 15 18:45:20 sra-sc4s-alln01-02 systemd[1]: sc4s.service: Main process exited, code=exited, status=125/n/a\n

To rectify this, execute:

podman rm -f SC4S\n

SC4S should then start normally.

Do not use systemd when SC4S_DEBUG_CONTAINER is set to \u201cyes\u201d; instead, use the CLI podman or docker commands directly to start and stop SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-hectoken-connection-errors-for-example-no-data-in-splunk","title":"Issue: HEC/token connection errors, for example, \u201cNo data in Splunk\u201d","text":"

SC4S performs basic HEC connectivity and index checks at startup and creates logs that indicate general connection issues and indexes that may not be accessible or configured on Splunk. To check the container logs that contain the results of these tests, run:

/usr/bin/<podman|docker> logs SC4S\n

You will see entries similar to the following:

SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful; checking indexes...\n\nSC4S_ENV_CHECK_INDEX: Checking email {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking epav {\"text\":\"Incorrect index\",\"code\":7,\"invalid-event-number\":1}\nSC4S_ENV_CHECK_INDEX: Checking main {\"text\":\"Success\",\"code\":0}\n

Note the specifics of the indexes that are not configured correctly, and rectify this in your Splunk configuration. If this is not addressed properly, you may see output similar to the following when data flows into SC4S:

Mar 16 19:00:06 b817af4e89da syslog-ng[1]: Server returned with a 4XX (client errors) status code, which means we are not authorized or the URL is not found.; url='https://splunk-instance.com:8088/services/collector/event', status_code='400', driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5'\nMar 16 19:00:06 b817af4e89da syslog-ng[1]: Server disconnected while preparing messages for sending, trying again; driver='d_hec#0', location='/opt/syslog-ng/etc/conf.d/destinations/splunk_hec.conf:2:5', worker_index='4', time_reopen='10', batch_size='1000'\n
This is an indication that the standard d_hec destination in syslog-ng, which is the route to Splunk, is being rejected by the HEC endpoint. A 400 error is commonly caused by an index that has not been created in Splunk. A single invalid index can cause the entire batch (in this case, 1000 events) to be rejected, preventing any of the data from being sent to Splunk. Make sure that the container logs are free of these kinds of errors in production. You can use the alternate HEC debug destination to help debug this condition by sending direct \u201ccurl\u201d commands to the HEC endpoint outside of the SC4S setting.
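For example, a direct \u201ccurl\u201d test against the HEC endpoint makes it easy to confirm whether a given index is accepted outside of SC4S. This is only a sketch; the hostname, token, and index below are placeholders you must replace with your own values:

```shell
# Hypothetical endpoint and token -- replace with your own values.
HEC_URL="https://splunk-instance.example.com:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# -k skips certificate verification; drop it if your HEC certificate is trusted.
curl -k "$HEC_URL" \
  -H "Authorization: Splunk $HEC_TOKEN" \
  -d '{"event": "sc4s connectivity test", "index": "main", "sourcetype": "sc4s:probe"}'
```

A reachable endpoint with a valid token and index returns {"text":"Success","code":0}; an index that is not configured returns the same {"text":"Incorrect index","code":7} response shown in the container logs.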

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-listening-ports","title":"Issue: Invalid SC4S listening ports","text":"

SC4S grants a port exclusively to a single source when SC4S_LISTEN_{vendor}_{product}_{TCP/UDP/TLS}_PORT={port} is set.

During startup, SC4S validates that listening ports are configured correctly, and shows any issues in container logs.

You will receive an error message similar to the following if listening ports for MERAKI SWITCHES are configured incorrectly:

SC4S_LISTEN_MERAKI_SWITCHES_TCP_PORT: Wrong port number, don't use default port like (514,614,6514)\nSC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT: 7000 is not unique and has already been used for another source\nSC4S_LISTEN_MERAKI_SWITCHES_TLS_PORT: 999999999999 must be integer within the range (0, 10000)\n
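A working listener definition, by contrast, assigns the source its own unique, non-default port in env_file. The port number below is only an illustration:

```shell
# env_file: a unique, non-default port for this source (5005 is an arbitrary example;
# avoid the default ports 514, 614, and 6514, and do not reuse a port already
# assigned to another source).
SC4S_LISTEN_MERAKI_SWITCHES_UDP_PORT=5005
```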

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-sc4s-local-disk-resource-issues","title":"Issue: SC4S local disk resource issues","text":"

The d_hec_debug and d_archive directories are organized by sourcetype; run the du -sh * command in each subdirectory to find the culprit.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-incorrect-sc4skernel-udp-input-buffer-settings","title":"Issue: Incorrect SC4S/kernel UDP Input Buffer settings","text":"

UDP Input Buffer Settings let you request a certain buffer size when configuring the UDP sockets. The kernel must have its parameters set to the same size or greater than what the syslog-ng configuration is requesting, or the following will occur in the SC4S logs:

/usr/bin/<podman|docker> logs SC4S\n
The following warning message is not a failure condition unless you are reaching the upper limit of your hardware performance.
The kernel refused to set the receive buffer (SO_RCVBUF) to the requested size, you probably need to adjust buffer related kernel parameters; so_rcvbuf='1703936', so_rcvbuf_set='425984'\n
Make changes to /etc/sysctl.conf, changing receive buffer values to 16 MB:

net.core.rmem_default = 17039360\nnet.core.rmem_max = 17039360 \n
Run the following command to apply your changes, then restart SC4S:
sysctl -p\n
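To confirm what the kernel actually allows before and after the change, you can read the limits back directly (values are in bytes):

```shell
# Current kernel receive-buffer default and maximum, in bytes.
cat /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_max

# Equivalent query via sysctl:
sysctl net.core.rmem_default net.core.rmem_max
```

If so_rcvbuf_set in the warning above matches net.core.rmem_max, the kernel limit is the bottleneck and raising it will let syslog-ng get the buffer it requested.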

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-invalid-sc4s-tls-listener","title":"Issue: Invalid SC4S TLS listener","text":"

To verify the correct configuration of the TLS server, use the following command. Replace the IP, FQDN, and port as appropriate:

<podman|docker> run -ti drwetter/testssl.sh --severity MEDIUM --ip 127.0.0.1 selfsigned.example.com:6510\n
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-unable-to-retrieve-logs-from-non-rfc-5424-compliant-sources","title":"Issue: Unable to retrieve logs from non RFC-5424 compliant sources","text":"

If a data source you are trying to ingest claims it is RFC-5424 compliant but you get an \u201cError processing log message:\u201d from SC4S, this message indicates that the data source still violates the RFC-5424 standard in some way. In this case, the underlying syslog-ng process will send an error event, with the location of the error in the original event highlighted with >@< to indicate where the error occurred. Here is an example error message:

{ [-]\n   ISODATE: 2020-05-04T21:21:59.001+00:00\n   MESSAGE: Error processing log message: <14>1 2020-05-04T21:21:58.117351+00:00 arcata-pks-cluster-1 pod.log/cf-workloads/logspinner-testing-6446b8ef - - [kubernetes@47450 cloudfoundry.org/process_type=\"web\" cloudfoundry.org/rootfs-version=\"v75.0.0\" cloudfoundry.org/version=\"eae53cc3-148d-4395-985c-8fef0606b9e3\" controller-revision-hash=\"logspinner-testing-6446b8ef05-7db777754c\" cloudfoundry.org/app_guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" cloudfoundry.org/source_type=\"APP\" security.istio.io/tlsMode=\"istio\" statefulset.kubernetes.io/pod-n>@<ame=\"logspinner-testing-6446b8ef05-0\" cloudfoundry.org/guid=\"f71634fe-34a4-4f89-adac-3e523f61a401\" namespace_name=\"cf-workloads\" object_name=\"logspinner-testing-6446b8ef05-0\" container_name=\"opi\" vm_id=\"vm-e34452a3-771e-4994-666e-bfbc7eb77489\"] Duration 10.00299412s TotalSent 10 Rate 0.999701 \n   PID: 33\n   PRI: <43>\n   PROGRAM: syslog-ng\n}\n

In this example the error can be seen in the snippet statefulset.kubernetes.io/pod-n>@<ame. RFC 5424 requires that the \u201cSD-NAME\u201d (the left-hand side of the name=value pairs) be no longer than 32 printable ASCII characters, and the indicated name exceeds that. Ideally you should address this issue with the vendor; as a workaround, you can add an exception to the SC4S filter log path or create an alternative workaround log path for the data source.

In this example, RAWMSG is not shown in the fields above because the error message comes from syslog-ng itself. In messages of the type Error processing log message: where the PROGRAM is shown as syslog-ng, your incoming message is not RFC-5424 compliant.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-terminal-is-overwhelmed-by-metrics-and-internal-processing-messages-in-a-custom-environment-configuration","title":"Issue: Terminal is overwhelmed by metrics and internal processing messages in a custom environment configuration","text":"

In non-containerized SC4S deployments, if you try to start the SC4S service, the terminal may be overwhelmed by internal and metrics logs. An example of the issue can be found here: GitHub Terminal abuse issue

To resolve this, set the following property in env_file:

SC4S_SEND_METRICS_TERMINAL=no\n

Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-cef-logs-that-are-not-rfc-compliant","title":"Issue: You are missing CEF logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:

    SC4S_DISABLE_DROP_INVALID_CEF=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-vmware-cb-protect-logs-that-are-not-rfc-compliant","title":"Issue: You are missing VMWARE CB-PROTECT logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:

    SC4S_DISABLE_DROP_INVALID_VMWARE_CB_PROTECT=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-cisco-ios-logs-that-are-not-rfc-compliant","title":"Issue: You are missing CISCO IOS logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:
    SC4S_DISABLE_DROP_INVALID_CISCO=yes\n
  2. Restart SC4S.
"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-vmware-vsphere-logs-that-are-not-rfc-compliant","title":"Issue: You are missing VMWARE VSPHERE logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:

    SC4S_DISABLE_DROP_INVALID_VMWARE_VSPHERE=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-raw-bsd-logs-that-are-not-rfc-compliant","title":"Issue: You are missing RAW BSD logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:

    SC4S_DISABLE_DROP_INVALID_RAW_BSD=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-raw-xml-logs-that-are-not-rfc-compliant","title":"Issue: You are missing RAW XML logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:

    SC4S_DISABLE_DROP_INVALID_XML=yes\n

  2. Restart SC4S.

"},{"location":"troubleshooting/troubleshoot_SC4S_server/#issue-you-are-missing-hpe-jetdirect-logs-that-are-not-rfc-compliant","title":"Issue: You are missing HPE JETDIRECT logs that are not RFC compliant","text":"
  1. To resolve this, set the following property in env_file:

    SC4S_DISABLE_DROP_INVALID_HPE=yes\n

  2. Restart SC4S. It will no longer drop invalid HPE JETDIRECT messages.

NOTE: Use these settings only as an exception; they are not supported by Splunk and might impact SC4S performance.

"},{"location":"troubleshooting/troubleshoot_resources/","title":"SC4S Logging and Troubleshooting Resources","text":""},{"location":"troubleshooting/troubleshoot_resources/#helpful-linux-and-container-commands","title":"Helpful Linux and container commands","text":""},{"location":"troubleshooting/troubleshoot_resources/#linux-service-systemd-commands","title":"Linux service (systemd) commands","text":""},{"location":"troubleshooting/troubleshoot_resources/#container-commands","title":"Container commands","text":"

All of the following container commands can be run with the podman or docker runtime.

"},{"location":"troubleshooting/troubleshoot_resources/#test-commands","title":"Test commands","text":"

Check your SC4S port using the nc command. Run this command where SC4S is hosted and check data in Splunk for success and failure:

echo '<raw_sample>' |nc <host> <port>\n
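As a concrete sketch, a minimal RFC5424 test event sent over UDP looks like the following. The listener address, port, and sample message are placeholders; point them at your own SC4S listener:

```shell
# Placeholder host and port -- substitute your SC4S listener.
SC4S_HOST=127.0.0.1
SC4S_PORT=514

# <165> is the PRI preamble (facility local4 = 20, severity notice = 5; 20*8+5 = 165).
# -u selects UDP; -w1 sets a one-second timeout so nc returns promptly.
echo '<165>1 2024-01-01T12:00:00Z testhost myapp - - - hello from nc' | nc -u -w1 "$SC4S_HOST" "$SC4S_PORT"
```

After sending, search Splunk for the sample text to confirm end-to-end delivery.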

"},{"location":"troubleshooting/troubleshoot_resources/#obtain-raw-message-events","title":"Obtain raw message events","text":"

During development or troubleshooting, you may need to obtain samples of the messages exactly as they are received by SC4S. These events contain the full syslog message, including the <PRI> preamble, and are different from messages that have been processed by SC4S and Splunk.

These raw messages help to determine that SC4S parsers and filters are operating correctly, and are needed for playback when testing. The community supporting SC4S will always first ask for raw samples before any development or troubleshooting exercise.

Here are some options for obtaining raw logs for one or more sourcetypes:

NOTE: Be sure to turn off the RAWMSG variable when you are finished, because it doubles the memory and disk requirements of SC4S. Do not use RAWMSG in production.

"},{"location":"troubleshooting/troubleshoot_resources/#run-exec-into-the-container-advanced-task","title":"Run exec into the container (advanced task)","text":"

You can confirm how the templating process created the actual syslog-ng configuration files by calling exec into the container and navigating the syslog-ng config filesystem directly. To do this, run

/usr/bin/podman exec -it SC4S /bin/bash\n
and navigate to /opt/syslog-ng/etc/ to see the actual configuration files in use. If you are familiar with container operations and syslog-ng, you can modify files directly and reload syslog-ng with the command kill -1 1 in the container. You can also run the /entrypoint.sh script, or a subset of it (such as everything but syslog-ng), and have complete control over the templating and underlying syslog-ng process. This is an advanced topic; further help is available through the GitHub issue tracker and Slack channels.
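For example, after editing configuration inside the container, the reload described above can be performed without a restart. This is a sketch; the syslog-ng binary location may differ in your image, and the commands assume podman (substitute docker as needed):

```shell
# Check the syntax of the modified configuration first (-s = syntax-only; no restart).
/usr/bin/podman exec SC4S syslog-ng -s

# Signal 1 (SIGHUP) to PID 1 tells syslog-ng to reload its configuration.
/usr/bin/podman exec SC4S kill -1 1
```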

"},{"location":"troubleshooting/troubleshoot_resources/#keeping-a-failed-container-running-advanced-topic","title":"Keeping a failed container running (advanced topic)","text":"

To debug a configuration syntax issue at startup, keep the container running after a syslog-ng startup failure. In order to facilitate troubleshooting and make syslog-ng configuration changes from within a running container, the container can be forced to remain running when syslog-ng fails to start (which normally terminates the container). To enable this, add SC4S_DEBUG_CONTAINER=yes to the env_file. Use this capability in conjunction with exec calls into the container.

NOTE: Do not enable the debug container mode while running out of systemd. Instead, run the container manually from the CLI, so that you can use the podman or docker commands needed to start, stop, and clean up cruft left behind by the debug process. Only when SC4S_DEBUG_CONTAINER is set to \u201cno\u201d (or completely unset) should systemd startup processing resume.

"},{"location":"troubleshooting/troubleshoot_resources/#fix-time-zones","title":"Fix time zones","text":"

Time zone mismatches can occur if SC4S and the log host are not in the same time zone. To resolve this, create a filter using sc4s-lp-dest-format-d_hec_fmt, for example:

#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-checkpoint_drop-d_fmt_hec_default() {    \n    channel {\n            rewrite { fix-time-zone(\"EST5EDT\"); };\n    };\n};\napplication app-dest-rewrite-fix_tz_something-d_fmt_hec_default[sc4s-lp-dest-format-d_hec_fmt] {\n    filter {\n        match('checkpoint' value('fields.sc4s_vendor') type(string))                 <- this must be customized\n        and match('syslog' value('fields.sc4s_product') type(string))                <- this must be customized\n        and match('Drop' value('.SDATA.sc4s@2620.action') type(string))              <- this must be customized\n        and match('12.' value('.SDATA.sc4s@2620.src') type(string) flags(prefix) );  <- this must be customized\n\n    };    \n    parser { app-dest-rewrite-checkpoint_drop-d_fmt_hec_default(); };   \n};\n

If destport, container, and proto are not available in indexed fields, you can create a post-filter:

#filename: /opt/sc4s/local/config/app_parsers/rewriters/app-dest-rewrite-fix_tz_something.conf\n\nblock parser app-dest-rewrite-fortinet_fortios-d_fmt_hec_default() {\n    channel {\n            rewrite {\n                  fix-time-zone(\"EST5EDT\");\n            };\n    };\n};\n\napplication app-dest-rewrite-device-d_fmt_hec_default[sc4s-postfilter] {\n    filter {\n         match(\"xxxx\", value(\"fields.sc4s_destport\") type(glob));  <- this must be customized\n    };\n    parser { app-dest-rewrite-fortinet_fortios-d_fmt_hec_default(); };\n};\n
Note that the filter match statements must be adapted to your data.

The parser accepts time zones in long form, such as \u201cAmerica/New_York\u201d, or POSIX form, such as \u201cEST5EDT\u201d, but not abbreviations such as \u201cEST\u201d.
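Before using a zone name in fix-time-zone(), you can verify it is valid on your host. This is only a convenience check (it assumes tzdata is installed) and is not part of SC4S:

```shell
# A valid long-form or POSIX-form zone prints a meaningful abbreviation; an unknown
# name typically falls back to UTC, depending on the platform.
TZ=America/New_York date +%Z
TZ=EST5EDT date +%Z
```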

"},{"location":"troubleshooting/troubleshoot_resources/#issue-cyberark-log-problems","title":"Issue: CyberArk log problems","text":"

When data is received on the indexers, all events are merged together into one event. Check the following link for CyberArk configuration information: https://cyberark-customers.force.com/s/article/00004289.

"},{"location":"troubleshooting/troubleshoot_resources/#issue-sc4s-events-drop-when-another-interface-is-used-to-receive-logs","title":"Issue: SC4S events drop when another interface is used to receive logs","text":"

When a second or alternate interface is used to receive syslog traffic, RPF (Reverse Path Forwarding) filtering in RHEL, which is configured as default configuration, may drop events. To resolve this, add a static route for the source device to point back to the dedicated syslog interface. See https://access.redhat.com/solutions/53031.

"},{"location":"troubleshooting/troubleshoot_resources/#issue-splunk-does-not-ingest-sc4s-events-from-other-virtual-machines","title":"Issue: Splunk does not ingest SC4S events from other virtual machines","text":"

When data is sent with an echo message from the same instance, it reaches Splunk successfully. However, when the echo is sent from a different instance, the data may not appear in Splunk, and no errors are reported in the logs. To resolve this issue, check whether an internal firewall is enabled. If it is, verify that the default port 514, or the custom port you configured, is not blocked. Here are some commands to check and configure your firewall:

#To list all the firewall ports\nsudo firewall-cmd --list-all\n#To enable 514/udp if it is not enabled\nsudo firewall-cmd --zone=public --permanent --add-port=514/udp\nsudo firewall-cmd --reload\n

"}]} \ No newline at end of file diff --git a/main/sources/vendor/Cisco/cisco_asa/index.html b/main/sources/vendor/Cisco/cisco_asa/index.html index e931545b69..3d2a7004d3 100644 --- a/main/sources/vendor/Cisco/cisco_asa/index.html +++ b/main/sources/vendor/Cisco/cisco_asa/index.html @@ -8272,7 +8272,6 @@

ASA/FTD (Firepower)

Key facts