Security teams can’t defend what they can’t see. As organizations move more workloads to the cloud, security teams need added visibility into these new workloads or they risk having blind spots that lead to compromise. In this blog post, we demonstrate how to quickly and easily onboard data into Splunk Cloud.
Let’s Get Back to Basics
Organizations should understand their data in terms of its value to the business. When Splunk asks security teams "What are the most important assets and applications to the business?" the answers are often along the lines of "Routers, switches and firewalls." While these tools are critical, focusing only on tools and not the data the tools interact with reflects a static view of a much more dynamic environment. The data creates a whole new world of insights that can help to provide business value and further reduce risk.
By leveraging the correct data, security teams can better protect their organizations — not just their IT systems — by first reframing their perspectives to identify business-critical assets and applications. To get started, analysts need to be able to answer foundational questions about their environment, such as which assets they're defending and which databases sit behind those assets. This foundational knowledge enables security teams to answer more complex questions later on, establishing an effective and more proactive security program built on a strong understanding of their environment. This level of understanding will also help organizations of all sizes and industries identify the right data to onboard into Splunk.
Data Discovery Questions
We’ve outlined a few data discovery questions you can ask yourself to start getting valuable insights from your data:
- What is producing data? Is it an appliance, an application or a cloud service?
- Where is the data?
- Who needs access to the logs in Splunk?
- How long do I need to retain the logs?
- Do I have the right sourcetype?
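That last question deserves special attention, because a well-defined sourcetype is what tells Splunk how to break and timestamp your events. As a rough sketch, a custom sourcetype is defined in props.conf; the stanza name and log format below are hypothetical placeholders, not a specific product's configuration:

```ini
# props.conf — hypothetical sourcetype for a custom application log
# (stanza name and TIME_FORMAT are illustrative; match them to your data)
[acme:app:log]
# Events are single lines, so skip line merging for faster indexing
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Timestamp appears at the start of each line, e.g. 2024-01-15T09:30:00.123-0500
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Getting these settings right up front avoids broken events and misdated searches later.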
You might be surprised to find out that onboarding data into Splunk Cloud is not only fast and effective but also safe.
Data Source Methodology
Now that you’ve prepped and primed your data, it’s time to consider the different ways to onboard it. Splunk provides hundreds of ways to capture data, depending on where your data exists and how you’re bringing the data in. For example, when considering a data source methodology, you can use Splunk’s Universal Forwarder for local file monitoring, or you could use HTTP Event Collector (HEC) for mobile apps, IoT devices or applications where a forwarder can’t be installed. At the end of the day, these different methods will serve the same purpose – sending your data directly to Splunk Enterprise or Splunk Cloud for indexing.
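To make the HEC path concrete, here is a minimal sketch in Python using only the standard library. HEC accepts a JSON envelope POSTed to the `/services/collector/event` endpoint with a `Splunk <token>` authorization header; the URL, token and sourcetype below are placeholder values you would replace with your own:

```python
import json
import urllib.request

# Placeholder endpoint and token — substitute your own HEC values.
HEC_URL = "https://http-inputs-example.splunkcloud.com/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_payload(event, sourcetype="my:app:json", source="my-app"):
    """Wrap a raw event dict in the JSON envelope HEC expects."""
    return json.dumps({
        "event": event,
        "sourcetype": sourcetype,
        "source": source,
    }).encode("utf-8")

def send_event(event, **kwargs):
    """POST one event to HEC and return the parsed acknowledgement."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(event, **kwargs),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example usage (requires a reachable HEC endpoint):
# send_event({"action": "login", "user": "alice", "status": "success"})
```

The same envelope format works whether the sender is a mobile app, an IoT device or a server-side service, which is what makes HEC a good fit where a forwarder can't be installed.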
There Might be an App (and Add-On) For That!
With Splunk’s robust community and ecosystem, you might not need to do much heavy lifting to onboard and massage your data into the platform. Customers can take advantage of apps and add-ons on Splunkbase to onboard data quickly and start getting immediate value from it.
As you onboard multiple data sources into the platform, normalizing the data is a crucial next step. The Common Information Model (CIM) provides a predictable field schema regardless of data source and helps standardize fields when onboarding data. With the CIM, a given piece of information — such as a source address or an action — always lands in the same field name, no matter how the original source labeled it.
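As a rough illustration of how that normalization is wired up, field aliases and calculated fields in props.conf can map vendor-specific field names onto CIM names. The stanza name and vendor fields below are hypothetical; CIM-compliant add-ons on Splunkbase ship this mapping for you:

```ini
# props.conf — hypothetical mapping of vendor firewall fields to CIM names
[acme:fw:log]
# Alias the vendor's field names to the CIM Network Traffic fields
FIELDALIAS-cim_src  = source_address AS src
FIELDALIAS-cim_dest = destination_address AS dest
# Normalize the vendor's verdict values to the CIM "action" vocabulary
EVAL-action = if(disposition="PERMIT", "allowed", "blocked")
```

Once data is CIM-compliant, the same searches, dashboards and correlation rules work across every source that feeds the model.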
To learn more about Splunk products and solutions, please visit CDW.ca/Splunk