A platform for searching, monitoring, and analyzing machine-generated data.
Used to collect, index, and correlate real-time data in a searchable repository, from which graphs, reports, alerts, dashboards, and visualizations can be generated.
Data can be ingested in several ways:
- Log files: Configure Splunk to monitor your application log files.
- HTTP Event Collector (HEC): Send data directly from your application to Splunk via HTTP/HTTPS (see the example after this list).
- Database connections: Splunk can pull data from various databases.
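- For example, a single JSON event could be sent to HEC with curl as sketched below; the host, port (8088 is the common HEC default), token, and sourcetype are placeholders for your own deployment:
# Send one JSON event to the HTTP Event Collector (hypothetical host and token)
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <YOUR_HEC_TOKEN>" \
  -d '{"event": {"message": "user login", "status": "ok"}, "sourcetype": "myapp_json"}'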
Key Components
- Forwarders: Lightweight agents installed on servers to collect and send data to Splunk indexers (see the sketch after this list).
- Indexers: Process and store incoming data, making it searchable.
- Search Heads: The interface where users can search and analyze data, create dashboards, and set up alerts.
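- As a rough sketch of wiring a universal forwarder to an indexer (the install path, hostname, log path, and the conventional receiving port 9997 below are assumptions; adjust for your deployment):
# Point the forwarder at an indexer, then monitor an application log file
/opt/splunkforwarder/bin/splunk add forward-server indexer.example.com:9997
/opt/splunkforwarder/bin/splunk add monitor /var/log/myapp/app.log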
Splunk Search Processing Language (SPL) is the powerful, flexible query language used to search and manipulate your data:
- search: Filter events
- stats: Calculate statistics
- timechart: Create time-based charts
- table: Format results into a table
search sourcetype=access_combined | stats count by status | sort -count
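- Another sketch combining these commands (assuming the access_combined events carry response_time and endpoint fields, as in the dashboard example below): average response time and request count per endpoint, slowest first.
sourcetype=access_combined | stats avg(response_time) AS avg_resp count AS requests BY endpoint | sort -avg_resp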
- Dashboards are used to visualize your data.
- They can be created using Simple XML or Dashboard Studio.
<!-- Example Simple XML -->
<dashboard>
  <label>Application Performance</label>
  <row>
    <panel>
      <title>Response Time</title>
      <chart>
        <search>
          <query>
            sourcetype=access_combined | timechart avg(response_time) by endpoint
          </query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
- After initial setup on Linux, Splunk is installed in /opt/splunk by default:
/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk stop
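- A few other commonly used commands from the same bin directory (exact behavior may vary slightly by Splunk version):
/opt/splunk/bin/splunk restart        # restart after configuration changes
/opt/splunk/bin/splunk status         # check whether splunkd is running
/opt/splunk/bin/splunk enable boot-start   # start Splunk automatically at boot (run with root/sudo)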