SPLK-1002 Splunk Core Certified Power User – Splunk Post-Installation Activities: Knowledge Objects

  1. Uploading Data to Splunk

In this module we will look at post-installation activities, that is, the configuration steps that are carried out in Splunk after it is installed. Throughout this module we will use three Splunk components: an indexer and a search head hosted on Amazon AWS, and a Universal Forwarder that is part of our local installation. This simulates the real-world experience of sending logs from our local PC to a cloud instance where they are indexed and searched. We will cover some of the most common and important activities carried out by a Splunk administrator or architect: getting data into Splunk; configuring source types, sources, hosts, and indexes; field extraction, which is one of the most important parts of Splunk; and configuring other knowledge objects such as tags, event types, lookups, and macros.

Finally, we will create some sample alerts, reports, and dashboards. Now let us jump right into our first topic, which is getting data into Splunk. We will use the Splunk tutorial data, which is available for basic search practice: tutorialdata.zip and prices.csv.zip. You can click the link provided to download these files. After downloading, we can either upload the two files (prices.csv and the tutorial data) individually, or extract them and feed them through a forwarder, which makes for a better lab exercise once you get lab access with the full course. Here we will upload the zip file directly to our Splunk instance, that is, the search head, and analyse how the data is indexed automatically by Splunk. To upload this data, first let us log in to the search head.

We have logged in to upload the data we just downloaded. Navigate to the Settings menu, click on “Add Data,” and then click on “Upload.” We have downloaded a zip file, and we can follow the on-screen instructions to finish uploading the tutorial data. I’ll select the file we downloaded recently, the tutorial data zip; note that with this method you can upload a maximum of 500 MB. This method is therefore not recommended for indexing data across the organization — we can’t just dump all of our data into Splunk file by file. It is, however, useful for troubleshooting and for verifying field extractions and how Splunk breaks events. Following the on-screen instructions, I select the tutorial data and click Next. I’ll let Splunk determine the source type automatically, and I’ll set the host field manually to a value that reflects this manual upload.

This is just for our understanding; you can give whatever host value you need, because we are only uploading the data, not collecting it continuously. For the index, the default is nothing but your main index. I could select the main index, but you can also create a new one here; we have seen how to create indexes in a previous post, so I’ll create a test index and save it. Whatever data we upload using this method will land in the test index, where we will check how the data is parsed, which fields are extracted, and which fields need our intervention to be parsed successfully. For the host field I used a constant value, but if the host value is present in your log data you can also use a regular expression, or derive it from the path: if you have a full location such as C:\Downloads\hostname\..., you can specify which segment of the path holds the host name. For example, if the host name is the third directory in the path, the segment number will be three. For now, let us keep the current value.

There are different methods for setting the host; since we are only uploading this manually for testing, let’s leave it as it is. Later we’ll see how to get the same logs from our agents, such as the universal forwarder. Click on Review — everything seems to be fine — and click on “Submit.” Once we click Submit, the file is uploaded to our search head. Click on “Start Searching.” Even though we uploaded the file only seconds ago, the data has already been indexed by Splunk, and it is able to understand the logs and extract a good amount of information from them. If you look at the source field, you’ll see it is the zip file we uploaded.

Inside the zip file, these are the files it has found and indexed successfully, and it has identified three source types. One is secure, which holds the Linux secure (SSH) log; one is access, which is nothing but your web server and application logs; and one is referred to as vendor sales. We’ll see how these have been broken down into events. Always remember that uploading data this way should only be used for learning purposes and for small amounts of data, less than 500 MB. If the data is large and continuous, as in typical real-time scenarios, we use a universal forwarder. With that, we have successfully uploaded our test data.

  1. Adding Data to Splunk via Configuration File Edit

Now that we have uploaded the file successfully, we’ll see how to collect logs from our universal forwarder, which is how it is typically done in most organizations. We will either edit the inputs.conf file or add the inputs using the Splunk CLI; we will see both methods. First, we’ll edit the configuration file to specify the folder or file that Splunk’s universal forwarder should continuously monitor and collect logs from. For that, we will use the universal forwarder installed on our local laptop. In my Downloads folder I’ve already extracted tutorialdata.zip into a folder, and I’ll be monitoring this folder on the laptop for continuous changes.

Let us see how we can add this data using the inputs.conf file. We’ll start with inputs.conf on our universal forwarder, which lives under C:\Program Files\SplunkUniversalForwarder — this is our local Splunk home — in etc\system\local. We already have an inputs.conf file there, so we’ll edit it. The syntax is a monitor stanza followed by the path we need to monitor continuously for changes. I need to open this file as administrator to edit it, so let me quickly do that and add the complete path. Now that we’ve added the path with a plain monitor stanza, we’ll look at other parameters that control how the logs are collected later. Since we have edited the configuration, let’s go ahead and restart our Splunk universal forwarder from its bin directory with splunk.exe restart.
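For reference, a minimal sketch of that monitor stanza (the download path and user name are placeholders for your own layout):

```
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf
[monitor://C:\Users\<your-user>\Downloads\tutorialdata]
disabled = false
```

After saving the file, the forwarder is restarted from its bin directory:

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk.exe restart
```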

Once it has restarted successfully, we should be able to see these logs on our search head; it usually takes no more than a minute or so for them to arrive. We know which host the logs are coming from, because the host is defined in our input configuration. So I’ll search across all indexes, add a host filter, and look at the last 24 hours. As you can see, we have about 2,500 events in the last 24 hours for this host, mostly Windows event logs that are collected by default after installing the universal forwarder, and we’ll check whether our newly added monitor of the tutorial data is coming in as well. Because the tutorial data is slightly backdated, let me change the time range to All time so we can see all of it. As you can see, we now have different sources: our access logs, secure log, vendor sales, and all the other logs that were part of our tutorial data location.
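A minimal sketch of that verification search, assuming a placeholder host name for the universal forwarder:

```
index=* host="MY-LAPTOP" earliest=0
| stats count by source
```

The earliest=0 time modifier is the equivalent of choosing All time in the time picker.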

  1. Adding Data to Splunk via Splunk CLI

As you can see, by default the source field now contains the full path of the log file from which the data was collected. You can rename it; in later tutorials we’ll see how to set the source type and other metadata and how to override them to add more value to our Splunk installation. So far we have seen how to add data with a universal forwarder by editing the configuration file. Now let us see how to add the same configuration using the CLI. The process is very similar on Windows and Linux. I’m in the bin directory of my installation.

I’ll be using splunk.exe add monitor, with the same directory we configured earlier as the path to monitor. Let me copy that path. For testing purposes I’ll add a “1” at the end, so the folder name is tutorialdata1, which gives us a new piece of configuration on the Splunk Universal Forwarder and lets us see how it gets added. The command asks for my Splunk universal forwarder credentials, and then it says the path does not exist — it has correctly determined that tutorialdata1 does not exist. So let’s create that folder. I created the path.

Let me try to add it again. This time it says it has successfully added the monitor path tutorialdata1. Once it has been added, you should be able to see it in inputs.conf, with a stanza similar to the previous one that clearly shows tutorialdata1 being monitored. Let us now check whether we are receiving data from tutorialdata1; that should not be an issue. This is how data is typically added in large organisations, either with the CLI method or by editing the configuration file, inputs.conf, directly.
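A hedged sketch of those CLI steps on Windows (the Downloads path and folder name are placeholders for your own layout):

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"

REM Prompts for the universal forwarder's admin credentials
splunk.exe add monitor "C:\Users\<your-user>\Downloads\tutorialdata1"

REM The new stanza is appended to etc\system\local\inputs.conf
type ..\etc\system\local\inputs.conf
```

On Linux the same commands work with ./splunk from the forwarder's bin directory.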

  1. Validation of Onboarded Data

We have now learned two methods of getting data into Splunk. The first is via the web, directly uploading a package of less than 500 MB. The second is to employ universal forwarders, for which we saw two approaches: editing the configuration file, and using the CLI.

A third method is to use a deployment server to push new data-collection configuration; we’ll see this in the next module. With a deployment server we can deploy the same configuration we edited — the universal forwarder’s inputs — without logging into the universal forwarder machine. Keep this in mind, as it will be useful in our upcoming discussion of the deployment server. Now that we have the data in Splunk, let us verify it with some basic search operations. Let me filter on the access log source type with a wildcard search, access*. I’m currently searching all indexes, but we have only one index in use.

That is the default main index (we also used a test index for the upload), but throughout this tutorial we’ll be using either the main index or other custom-created indexes like windows and linux. I’ve put three filters on the search bar: the index, narrowed down to main rather than a wildcard; the host, which is the universal forwarder’s PC name; and the source type of the logs I want, access*. As you can see, there is a lot of information in this access log — the source IP, the timestamp, and so on. Make sure all these events are parsed properly: when you look at an event, you should be able to break it down into its respective fields. If we cannot break each event down into its fields, the parsing is not correct and we need to work on it.

One simple example is the iplocation command, to which we pass the IP address field — here the client IP address — to find out where our customers or visitors are coming from, followed by top Country. Using just the IP address, we can determine the country from which they are accessing our site. As we can see, 26% of the overall traffic comes from the United States, and only 1% to 2% comes from Germany and Brazil. You can also quickly visualise the results using whichever recommended chart you choose, and present them. This confirms that we are able to parse these logs without any issues and that all our information is inside Splunk, delivered by our universal forwarder.
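A minimal sketch of that validation search, assuming the tutorial data’s usual clientip field and a placeholder host name:

```
index=main host="MY-LAPTOP" sourcetype=access*
| iplocation clientip
| top Country
```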

  1. Source, Sourcetype, and Host Configuration

Now that we have the data inside Splunk, let’s see how we can modify some of the configuration to make our analysis of the data much more meaningful and efficient. The next step after getting data is to enrich it by adding meaningful information that makes more sense than plain old host names and IP addresses. The first field that is part of our default selection is our source type. If you’re not sure what selected fields are, I recommend going back to our first chapter to learn about these default fields or selected fields, as well as interesting fields.

The source type can be renamed using inputs.conf when the configuration is deployed, or at the indexer level when the data is parsed. The source type field always holds information about the technology of the logs or the application that generated them. Let me give you an example: searching index=main for all logs over the last 24 hours, you can see the source types that are present. The source type is used extensively for filtering logs during searching. If I search with a source type of Perfmon CPU load, it acts as a filter and returns only those logs, not everything. You will use this constantly throughout your Splunk experience to filter logs.
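For instance, a hedged sketch of those two searches (the exact Perfmon source type name depends on how your Windows inputs are configured, so treat it as a placeholder):

```
index=main earliest=-24h
| stats count by sourcetype

index=main sourcetype="Perfmon:CPU Load"
```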

This is one of the important fields that gives more meaning to your data, and it is also one of the default pieces of metadata that should be present for any data fed to Splunk. We will go through a small lab exercise that gives you a clear picture of how to set or rename the source type, and we will then use that information for searching and narrowing down to specific logs. Since we have a universal forwarder installed on our local PC, let us go to inputs.conf. I’m in Splunk home for the universal forwarder, under etc\system\local. Let me open the inputs.conf file; this is our inputs file with the couple of monitor stanzas we added earlier.

I’ll just remove the second monitor stanza we no longer need. Under the remaining stanza I can add host = <new_host_value>, sourcetype = <new_sourcetype>, and source = <new_source>; this is how you typically define the new values. Let’s say the host is my universal forwarder laptop: instead of a generic name like Arun Kumar PC, it now says the universal forwarder installed on my laptop, which makes much more sense. Similarly, instead of some generic source type, it can say that the data consists of the tutorial data we uploaded, and the source can describe the download location. This adds much more meaning and removes confusion from your data, allowing you to analyse it much better.
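A hedged sketch of what the edited stanza might look like (the path and the three values are placeholders modelled on the ones described above):

```
# etc\system\local\inputs.conf on the universal forwarder
[monitor://C:\Users\<your-user>\Downloads\tutorialdata]
host = my_universal_forwarder_laptop
sourcetype = tutorial_data
source = tutorial_data_download
```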

Let us save this configuration and restart the universal forwarder. Once it has restarted, let me make some changes under the access log so that our continuous monitor picks up new entries; I’ll copy and paste a few lines so the timestamps vary and we get new events. I’ll search over All time, because our access logs are backdated. Since only the top ten values are shown by default, and we have just a couple of new events, let me look at the rare source types instead. As you can see, our newly added source type, tutorial data, pops up, and we can narrow it down using the new host value we set. We have eight events — the duplicated lines we pasted — and their source type, source, and host have been successfully renamed. Instead of the complete file location and a generic PC name, this makes much more sense. A source type of plain access log or secure log is fine for Splunk to parse the logs, but for an analyst searching in Splunk it is far more useful to provide specific terms that any user can understand. This configuration can also be deployed from our deployment server; we will soon see how to deploy it that way.
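A small sketch of those verification searches, reusing the placeholder values from the stanza above:

```
index=* earliest=0 | rare sourcetype

index=* host="my_universal_forwarder_laptop" sourcetype="tutorial_data" earliest=0
```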

  1. Source Parameter Explanation

The next default field is the source field, which is similar to the source type field but typically holds the location of the logs. It can, however, be renamed to hold far more meaningful information than just the file location — for example, the method used to collect the logs, such as a Bash or Python script, an API, or PowerShell. We’ll look at the Windows logs collected from our universal forwarder over the last 24 hours to see this.

As you can see, the source contains values like Perfmon CPU Load and Perfmon Available Memory, with similar values in the source type. Here the source represents the performance-monitoring method: we are collecting CPU load, memory, and network interface performance data, which is far more meaningful than simply holding a file location. We saw in our lab how source, source type, and host work together and how we can modify this information by editing the configuration file, inputs.conf.
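A quick hedged sketch of that check (the Perfmon* wildcard is an assumption about how your performance source types are named):

```
index=main sourcetype=Perfmon* earliest=-24h
| stats count by source, sourcetype
```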

  1. Field Extraction Using IFX

In this discussion we will go through one of the most important configurations in Splunk: field extraction. Assume you dump 100 GB of data into Splunk and it arrives as one big chunk; you will be unable to make sense of it, because without field extraction the data is meaningless. The most important configuration files that hold field extraction settings are props.conf and transforms.conf. We saw similar files, inputs.conf and outputs.conf, during data collection; props and transforms hold the field extraction information. The props file is also where you configure line breaking, character encoding, and binary file processing.

By default, binary file processing is disabled in Splunk. In props.conf you can also set timestamp options and field overrides for source, source type, and more, as well as field extractions, which we will see in further lectures. Field extraction can be implemented in Splunk in multiple ways, and we will see them one by one, starting with IFX. IFX stands for Interactive Field Extractor; it is aimed at complete beginners who want to quickly extract a field, and it is fully supported in Splunk Web, the GUI. Creating a few field extractions with IFX in the lab is a good exercise. You can access it by running any search from which you want to extract fields — I’ll search the default index, index=main — then clicking All Fields and the Extract New Fields button, or by scrolling down in the fields sidebar to the Extract New Fields link.

Either way, you enter IFX mode. Once there, make sure a source type is selected, because it is used for the extraction. Let me select one of the source types whose logs I want to extract fields from. We’ll go back a few days to the access log data, because it is one of the logs with a lot of detail that will add meaningful value; the source type is access combined, so I’ll go with that. Here I’ll show a sample of how to extract using delimiters. Later, when we discuss the rex and regex commands, we will cover how to use regular expressions to extract fields. For the sake of simplicity in understanding IFX, we’ll only use the delimiter option it offers. I’ll click on All Fields and then Extract New Fields to enter IFX mode.

In IFX, the source type we chose is shown along with sample events. I select one of the sample events — you can pick any of them — and click Next. Regular expressions will be covered in upcoming tutorials, where we’ll learn the fundamentals of the rex and regex commands; for now, choose Delimiters and click Next. Here you can select your delimiter: comma, space, tab, pipe, or any other delimiter you specify. We know it is space in this case, and as you can see, Splunk breaks the event into fields based on the space delimiter you provided. You can rename these fields and click Next, and it will save them all for review. Say I pick the field holding the IP address and rename it ip_address; I can then see all the values extracted for testing purposes. It samples only 1,000 events by default; if you want more, you can increase that.

You may notice it does not extract the IP address in the first two events but does in the third. That is a limitation of basic delimiter-based extraction; with the regex approach, where you write proper regex patterns, each field is extracted more accurately. This walkthrough is just for basic IFX understanding: choose a field, rename it to ip_address, and click Next. Give the extraction a name such as CustomField or my first field extraction — I’ll give it a simple name. If you keep the permissions to yourself, as the creator of this extraction only you will see it. If you choose App, anybody who uses the Search and Reporting app will see your newly added fields. If you choose All apps, it becomes global, so anybody who uses Splunk will see the new field. I’ll click Finish and return to my search to see how the newly created fields look. In the fields sidebar you can see the fields we just extracted using IFX; since we didn’t rename all of them, most are the space-delimited, auto-named fields produced by IFX.

  1. Field Extraction Using REX

In the previous lecture we saw how to extract fields using IFX, the Interactive Field Extractor. Now let us look at one more method of field extraction: the rex and regex commands, which you use as part of your search query. Fields extracted this way disappear once the search query changes, and that flexibility is actually one of the major benefits of Splunk.

That is, nothing is permanent in Splunk: anything you create — fields, alerts, reports, dashboards — can be deleted, which is a significant benefit for an administrator, since whatever is not adding value can be dropped and rebuilt from scratch. As part of the lab exercise we’ll see how to extract fields using rex and regex, and later how to make them permanent using props and transforms. Let’s go to our search head. Here is the basic scenario I follow: I open an access log and assume we are not parsing it — Splunk does not understand these logs, they look like raw text, we have no fields, and I need the IP address information to show up as a field here.

I regularly use a website called Rubular.com to practise regexes against test or sample data; it also has a quick reference that gives you a good idea of the syntax. Here I’ll work on the IP address. I know the log line always starts with the IP address, and there is a regular expression symbol that anchors to the start of the line: the caret (^), which I will type first. As you can see, nothing is matched yet — I’ve only said that my field begins at the start of the line; matches will be highlighted as we progress. The next step: an IP address is made of digits, so I’ll use backslash d (\d) to match a digit. As you can see, the first digit of the IP address is highlighted. Now I need one to three digits, since each octet of an IP address has between one and three. I can express that with curly braces: {1,3} means a minimum of one and a maximum of three repetitions. So, at the beginning of the line, a digit repeated one to three times — and as a result it matched 91.

Next comes the dot. In regex syntax a dot already means “any single character,” so if you want to match a literal dot you must escape it with a backslash, then type the dot. As you can see, we have now matched 91 followed by a dot. The next step is simple: copy the previous expression and paste it after the dot, and again it matches one to three digits. Now we have the second octet of the IP address, and we need to match the rest the same way, so I’ll copy the digit pattern together with the escaped dot for the remaining octets. As you can see, we have successfully matched the whole IP address, which is our primary field of interest. So how do we add this as a field?
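For reference, the full pattern assembled so far in Rubular is simply four one-to-three-digit groups separated by escaped dots:

```
^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
```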

  1. Adding Field Extraction to Search

To add this as a field, regex has a named capture group syntax: an opening parenthesis, a question mark, and the field name between less-than and greater-than signs, followed by the matching expression. Together with the beginning condition and the ending condition, this makes up the full field definition.

In our scenario of extracting an IP address, the beginning condition is the caret symbol indicating the start of the line. Then comes the field definition: an opening parenthesis, a question mark, the field name between less-than and greater-than signs, the IP-matching expression I’ll copy from Rubular, and a closing parenthesis. After the matching condition closes, the ending condition follows. Looking at the log, after the IP address there is a space followed by a hyphen, so I’ll use backslash s (\s), which matches any whitespace character, followed by the hyphen itself as the literal character to match. That is our ending condition, and with it we have the complete regex to extract IP addresses from our logs. Now let’s see how to use it with the search command rex.
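The complete expression described above looks roughly like this (ip_address is the field name we’ll use in the searches that follow):

```
^(?<ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s-
```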

I’ll use rex without any extra options. You can point it at a specific field and set other options, but for the sake of simplicity we’ll keep it plain: rex followed by the regex I built, including the field definition, pasted in quotes. With that, you have extracted an ip_address field and it matches this value.

So we have the beginning condition, the field name definition, the matching condition, and the ending condition. Make sure you extract these fields as part of your lab exercise. Once you are thorough with field extraction in Splunk, nobody can stop you from getting what you want out of the data, so make it a point to master it. If you have difficulties, leave a comment in the discussion and I’ll assist you and probably find an easier way to explain it. And always keep the regex quick reference handy to meet whatever matching criteria your logs require. Let us proceed: I put the regex I wrote inside quotation marks and pasted it after the rex command.

I’ll hit Enter and see whether we get a field named ip_address. It isn’t in the sidebar at first, but rather than scrolling through the massive list of fields, click on All Fields and type the field you’re looking for. As you can see, ip_address has been extracted successfully and we can see all the IP address values present. I’ll mark it as a selected field, and now we have our newly extracted ip_address field. I can run any command with it, such as stats count by ip_address, and we get a count per IP address. We can also run iplocation on the newly extracted field; by default iplocation adds fields such as Country, Region, and City. To visualise this information you can use geostats, which displays it nicely, so I’ll run geostats on the count. A pie chart is not the recommended visualisation here — the cluster map is — and on it you can see the locations for the ip_address field you extracted, a field Splunk did not extract by default. We’ve created a map based entirely on our custom field.
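A hedged sketch of the whole pipeline from this lecture, assuming the tutorial data’s access* source type and the ip_address capture group from earlier:

```
index=main sourcetype=access*
| rex "^(?<ip_address>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s-"
| iplocation ip_address
| geostats count
```

iplocation adds lat and lon fields by default, which is what geostats uses to place the counts on the cluster map.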

  1. REGEX Searching in Splunk

We have learned how to extract fields using rex, and now we will learn how to search using regular expressions with the regex command. This is our base search: a 30-day search for sourcetype=access*, and I only want the events where a specific item matches — let’s say action equals addtocart. I could simply click on that value, but to understand regex, I’ll build a pattern that matches addtocart using the \w word-character syntax we met earlier.

We already know that \w stands for any word character. For the word “add,” we start with the literal a, followed by a word character repeated a minimum of one and a maximum of two times, just as we did with the IP address octets. So my initial condition is: it begins with a and is followed by one or two word characters. As you can see, that has matched the “add” part; there can be any number of word characters in between, and it ends with a literal t.

Since t is a literal character here, we must not escape it — \t would mean a tab character — so we keep it as a plain t. Running this, I get all the events whose raw text matches my regex for addtocart. This is one of the methods you can use when you want to filter all logs that contain an IP address, or some specific string whose syntax you only partially know: instead of relying only on filters like sourcetype or index=main, you can write a regex pattern directly against your search criteria. As you can see, addtocart events were matched successfully, as well as purchase events whose referrer contains addtocart, so our regex is working fine: it filters the logs wherever the phrase addtocart appears. Now, since we have already built the pattern to match IP addresses, we can use the regex command to return only results containing an IP address. I’ve entered the IP-matching regex, and we match all the events that contain an IP address.

Just to prove the point, if I add a fifth octet to the pattern, which makes no sense because there is no five-octet IP address, we get no results: the regex is trying to match something that is not there. Going back, our regex filters all these IP addresses without specifying any particular address. Now suppose I want to search only for IP addresses that start with a particular prefix — a whole subnet. I can put the known octets in literally and leave the last part as a regular expression, and as you can see, the only events returned are from those subnets. This is how you can use regex to narrow down your searches in day-to-day operations.
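A hedged sketch of the regex searches in this lecture; the first pattern follows the addtocart construction above, and the subnet prefix in the last search is purely hypothetical, so substitute a prefix that actually exists in your data:

```
sourcetype=access* | regex _raw="a\w{1,2}\w*t"

sourcetype=access* | regex _raw="\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"

sourcetype=access* | regex _raw="192\.168\.1\.\d{1,3}"
```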

  1. Props Extract Command

We have seen how to extract fields using the interactive field extractor, how to use the rex command to extract fields on the fly on the search side, and how to use regex in our searches. Now let’s see how to make a field extraction permanent, so that any user can view these fields and build visualizations on them. To do so, we edit props.conf (and, for more advanced cases, transforms.conf). Let me go to our search head; I’m logged in as the Splunk user. I’ll go to Splunk home, increase the font size so it’s easier to read, change into etc/system/local, and create a new props.conf. As you can see, we’ve added two lines based on our previous tutorial: the stanza names the source type of our logs, followed by the regex we know matches the IP address, with a field name — which I’ve called ip_address_props to make clear it is extracted via props.conf.

As you can see, in the stanza we have mentioned the source type of these logs and that we’ll be extracting the IP address using our earlier regex. By now we recognise the parts: the beginning condition, the ending condition, the matching condition for the IP address, and the field name. Let me save this file. Once it is saved, we can rerun the search and look for a field named ip_address_props — as of now, there is nothing. There are two ways to make the newly added extraction take effect. One is to restart the Splunk instance so that the new props.conf is picked up when Splunk starts.

The better way, instead, is to use the search command extract reload=true, which ensures that all extractions in props and transforms are reloaded before the search runs. Just run a search with extract reload=true, and once it completes, look for the new field: ip_address_props. As you can see, our regex is correctly extracting the IP address from the logs, and we can use the field now that it has been extracted. If you rerun the search without the extract command, you will still see the same field, because props and transforms have already been reloaded, and we can add it to the selected fields. From now on, any user who searches this source type will get this field as part of their default extractions. This is one method — the EXTRACT setting in props.conf. There are other methods, such as REPORT in props.conf together with transforms.conf, and we will see how to add those in the next lecture.
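For reference, a hedged sketch of the props.conf stanza and the reload search used in this lecture, assuming the tutorial data’s access_combined_wcookie source type and the ip_address_props field name:

```
# $SPLUNK_HOME/etc/system/local/props.conf on the search head
[access_combined_wcookie]
EXTRACT-ip_address_props = ^(?<ip_address_props>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s-
```

```
index=main sourcetype=access* | extract reload=true
```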
