SPLK-1002 Splunk Core Certified Power User – Splunk Post Installation Activities: Knowledge Objects Part 2

  1. Props Report and Transforms

Now we understand how to extract fields and make them available to all users by placing the extraction under props.conf using the EXTRACT command. Now let us see how we can do the same using REPORT. The syntax is REPORT-&lt;name&gt;; that is what I'll call it, and you can call it whatever you want. Let's say I write REPORT-ip = &lt;value&gt;; this value will be the stanza name defined in the transforms configuration file. Let's say ip_extraction; you can name this anything, but make sure you remember the name. I'll copy it and save this file.
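As a rough sketch of what the lecture is describing (the source type and stanza name here are assumptions, since the transcript does not show them), the props.conf entry could look like this:

```
# props.conf -- tie a source type to a transforms stanza (names assumed)
[access_combined]
REPORT-ip = ip_extraction
```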

Now we need to edit one more file, transforms.conf, in order to define these fields. The name we used in props.conf becomes the stanza name here. After you've added the stanza name, set REGEX equal to the regular expression we created a couple of lectures before; I'll paste the same one. And there is a FORMAT option that says which field each capture should be assigned to. I'll change the field name to ip_address_transforms so that we'll recognize it in the GUI once it has been extracted. This is the common syntax: it says this is the match value, and whichever value matches the first capture group gets assigned to the field named ip_address_transforms.
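Again as a hedged sketch (the regex here is an assumption; use whatever expression you built in the earlier lecture), the matching transforms.conf stanza might be:

```
# transforms.conf -- define the regex and the field it populates
[ip_extraction]
REGEX = (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
FORMAT = ip_address_transforms::$1
```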

As you can see, in our previous lecture we performed the same field extraction with a single EXTRACT command in props, so why does REPORT involve so many extra steps? The EXTRACT command is good for extracting individual fields, or perhaps a couple of fields. But if you use REPORT, you can write a complete definition with the additional options available in transforms.conf. One of those is DELIMS, where you can specify a custom delimiter such as a pipe, a comma, or even whitespace, and keep assigning the split values to any number of fields. So if you want to extract fields in bulk, I recommend you go with REPORT; if you are extracting individual fields, the EXTRACT command is fine.
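For example, a delimiter-based transform (the stanza and field names here are purely illustrative) might look like:

```
# transforms.conf -- bulk extraction with a custom delimiter
[comma_separated_fields]
DELIMS = ","
FIELDS = src_ip, dest_ip, action, bytes
```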

Now that we have edited our props and transforms again, you can either restart your Splunk service so that the edited configuration takes effect, or you can simply append | extract reload=true to your search. Once these fields are extracted, we'll be able to see our newly transformed IP address. As you can see, we now have our newly added field from transforms, which carries the same values since we added the same regex; we simply extracted it using a different method. One approach was a single command and the other took multiple steps, but the results are identical. As previously stated, EXTRACT can be used for a single field or possibly a couple of fields, whereas REPORT can be used to extract the entire log in bulk.
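A quick way to verify the new field without a restart (the index and source type are assumed from earlier in the lecture):

```
index=main sourcetype=access_combined
| extract reload=true
| table _time ip_address_transforms
```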

  1. Props.conf Location

Now that we know about props and transforms and how to extract these fields, the next important thing is where to deploy these files. We will see how to deploy these field extractions in the next part of the lecture, where we'll be dealing entirely with managing configurations via a deployment server. Also, if you have a question about where to place these props and transforms, always use the local location: either system/local or the app's local directory, depending on the context you are using the field extraction for. These props.conf extractions can always be made available to all users in Splunk; all knowledge objects are private by default, but you can share them with all other Splunk users at any time.
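For reference, the typical locations look like this (paths assume a default installation under $SPLUNK_HOME):

```
$SPLUNK_HOME/etc/system/local/props.conf
$SPLUNK_HOME/etc/apps/<app_name>/local/props.conf
```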

  1. Eventtypes Creation and Permission

We have learned about Splunk's installation and components, as well as field extraction. Now let us understand the various knowledge objects of Splunk. The first knowledge object in our discussion is event types. Knowledge objects are nothing but a method of enriching your data in Splunk: users can add additional values to their data to get more information about the data present inside Splunk, and also to teach Splunk the context that a regular system administrator, or any person who regularly interacts with that data, would know much better. We'll be seeing how to add this information, which the respective individual already has, into Splunk.

Let's say I'm a system admin. I know to whom this IP belongs, in which part of my data center it sits, and which department is using it. The same information can be taught to Splunk so that anybody seeing this system in their logs will also know where this server has been placed. Now looking at an event type: it is a user-defined field that represents a group of events, grouped by the similarity of their technology or the conditions under which they occur. Let us jump into our lab and see how we handle these event types in Splunk. This is our Search app. As we've seen before, I'll write a simple query: index=main sourcetype=access_combined. Searching over the last 30 days, I get the results from the data I uploaded.

I know these are my access logs in Splunk. So I'll go to Save As > Event Type and give it the name access_logs, so that whenever I search eventtype=access_logs, I'll get these results, regardless of the tags we'll be discussing in the next section. You can also specify what colour these event types should be and their priority; for now, let us leave the defaults. I'll save this and click on Done. Once it is saved, I'll remove my search and just type eventtype=access_logs. As you can see, I didn't type my full query, but I still got the same results. This is because we saved it as an event type. In order to see all your event types, navigate to Settings and select Event types. I'll open it in a new tab; this is my general practice to ensure my search is not disrupted. These are some of the event types that are present by default. As you can see, we have created our own access_logs, for which the search string is the one above and the owner is admin. Presently, it is private.
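In other words (names follow the lecture), these two searches return the same events:

```
index=main sourcetype=access_combined    <- the original search
eventtype=access_logs                    <- the saved event type
```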

If you want to share this with other team members or other Splunk users, you can select which permissions they will have by clicking on Permissions: either read permission or write permission. I'll scope it to anybody using this app, which is the default app it was created in, our Search app; or you can choose All apps so it applies anywhere in Splunk. Everyone gets read permission, and the admin and power user roles get write privileges, so I'll choose those two roles for write access and save. As you can see, the sharing permission changed to Global. This event type was created in the Search app by the admin user who was logged in at the time. If you want to disable these event types, you can; you can also copy and modify them by cloning.

  1. Eventtypes Use Case

Similarly, let me go back to my previous search. If I run this search, I should be able to see a new field, eventtype, which we created just now. As you can see, I get an additional field that says it is an access log. Let me make some more event types so that we get a better understanding. There is a field called status that represents the HTTP status in our access log. I'll search status!=200; these are the events that did not return 200, and I'll save them as an event type for non-200 requests as well. Then I'll save an event type for 200 requests only, access_logs_200; those successfully received a response back. Now, let us go back to our main search, rerun it, and see how many event types we get. We still have one; let me refresh, because the page is reloading from my browser cache. Once I reload, you should be able to see more event types, because we have included status 200 and the other requests. Let us check that our event types were created successfully. They are indeed created: this is our non-200 and our status-200 event type.

Similarly, this can act as an additional field on which you can filter multiple events. Instead of writing long queries, you'll be able to write just one short one. Rather than writing out the full search for all 200 requests, I can simply type eventtype=access_logs_200 and get the same results: eventtype is the field name, and the value is the name you gave in your event type definition. Under the hood, Splunk expands the event type to its saved search and returns the matching events. Another aspect of event types is that they generate a file called eventtypes.conf whenever they are created. Let me quickly get into that folder; since we created these in the Search app, it should be under search/local in eventtypes.conf. As you can see, we have our access_logs entry: this is the name, and this is the search for that event type. So all the event types will be in your eventtypes.conf.
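A hedged sketch of what that file likely contains after this exercise (stanza names follow the lecture):

```
# eventtypes.conf -- one stanza per event type
[access_logs]
search = index=main sourcetype=access_combined

[access_logs_200]
search = index=main sourcetype=access_combined status=200

[access_logs_non200]
search = index=main sourcetype=access_combined status!=200
```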

  1. Tags Creation

The next knowledge object in our discussion is tags. Tags are also used to enrich the data in Splunk, but a tag can be used only with a field-value combination. We will see that creating a tag always requires a field name and the value for which the tag applies. It is always created for a field-value pair, and you can assign any number of tags to a given pair. In this lab, we will see how to create tags, share them, and search using our newly created tags. We'll also see how tags are stored in the configuration files. So let us go to our lab. This is our Search app. In order to see all the tags that are present, go to Settings, click on Tags, and you'll be able to see them.

You can list them by three different methods, but we'll list them by tag name just to see whether any tags are present by default in a plain Splunk installation. As you can see, by default there are none. We'll start creating them one by one. As I said, creating a tag always requires a field-value combination; unlike an event type, you cannot simply write a search and tag it. Since we created a new field, eventtype, in our previous lecture, let us use that. It is simply a new field in Splunk whose name is eventtype and whose value is, for example, access_logs. We can see we have three event types.

Now let's start tagging this field. This is the eventtype field we created previously. You can click on one of the field values to edit its tags. Let us select the first event type, access_logs, to which we can add any number of tag values separated by commas. I'll add tags like Apache logs, complete logs, and includes errors; any information that adds value to your event type can go here. I'll save it. Those are the three tags added to our access_logs event type. Next I'll go to access_logs_200, where we add a tag for the OK response, indicating that the request was successful.

So I'll tag it OK and Apache logs; again, this one doesn't contain errors, so I'll add a without errors tag and save. For the non-200 event type, these are again Apache logs, and these might contain some errors, such as the 5xx series of HTTP responses. These tags are specific to your environment: you can also add tags like production, prod machine, prod Apache logs, staging, or QA. Any knowledge you gained as a system admin or Splunk admin during the integration can add more information about these logs and devices, or about what this event type represents. We'll move ahead.

  1. Manual Creation of Tags

We have created a couple of tags here. Let's take a look at how they populate additional fields in our search: index=main sourcetype=access_combined. This was our base search before event types or tags. As we can see, the new event types show up here. I'll quickly pull up our tags as well; as you can see, the tags are displayed too. Now that we have our newly added tags included, let me select that field. You can use these to filter quickly: if I want any specific tag, I'll click on it, and I'll get only the success logs from my Apache server. So you can simply write your search as tag=OK.

Similarly, these tags can be technology-based: tag=web indicates all the web logs, and tag=windows indicates all your Windows logs. I can also combine multiple tags; tag=windows tag=error should give me Windows errors. Likewise, we can have multiple tag conditions covering events that include errors and those without. These are some of the ways you can use tags to narrow down the information you are looking for, or to enrich your data so that the added knowledge enhances its value. The last thing about tags is the file that is created for them. I'm on the indexer; let me go back to my search head. As you can see, we have a newly created file called tags.conf.
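A couple of illustrative tag searches (tag names follow the examples above; the windows/error tags are hypothetical):

```
tag=OK                    <- only the successful Apache responses
tag=windows tag=error     <- events tagged both windows and error
```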

In this file, the stanza is the field-value pair: eventtype=access_logs. Under it, each tag we added, such as Apache logs, is marked enabled; the complete logs and includes errors tags are enabled as well. Tags are not restricted to event types. If you go to Settings > Tags, you can add a new tag by entering a tag name; let's keep it manual creation of tags. We can tag host=&lt;some value we are familiar with&gt;, or sourcetype=access*. I mentioned a wildcard character, so any matching value will be tagged as manual creation of tags. Similarly, you can add multiple host or field values that match this criterion. I'll add one more. Sorry, this is not source; this should be sourcetype. You can also say: if the source location contains Apache, tag it as manual creation of tags, and I'll save this. You can add any number of field-value pairs.
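A hedged sketch of the resulting tags.conf (the tag names are normalized to underscores here, which is an assumption about how they were typed in the GUI):

```
# tags.conf -- one stanza per field=value pair, one line per tag
[eventtype=access_logs]
apache_logs = enabled
complete_logs = enabled
includes_errors = enabled

[sourcetype=access*]
manual_creation_of_tags = enabled
```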

So these are some of the tag names we have created, and we'll search for the newly created one: tag=manual*. I don't remember the complete name we gave, but we got the results we were expecting. Our tags, as you can see, also match some of the additional tagging we enabled here. The manual creation tag was applied because we said source containing Apache, or sourcetype matching access*, should be tagged. We don't have Apache matching in the source, but the source type matches the access* pattern we mentioned, so the tag was applied. This is all about tags. We should now understand that tags are created under the file tags.conf, and that they add more value by enhancing the information already present in our logs.

  1. Lookups Creation in Splunk

The next knowledge object is one of the most widely used and most important knowledge objects in Splunk: lookups. Lookups provide data enrichment by mapping selected fields or values in an event to other fields or information from an external source. For example, in our lab we will see how to map each status code in our Splunk access log to its HTTP response description, and we also have a prices CSV lookup file that was downloaded as part of our tutorial data. The sources for lookups can be a CSV file, the KV Store, or scripted lookups from an external database. In our lab, we will see how to create a lookup, share it, and use it in our search to enhance the data. First, let us go to our search head.

So, this is our search head. To see all the lookups that are available, go to Settings > Lookups, and you will see all the lookups present in our Splunk installation. You can list them in a specific order; I'll list them by lookup table files. As you can see, there are a couple of geographic lookups that are built into Splunk. To create a new lookup, I have downloaded prices.csv, which is part of our tutorial data, and here is the actual file. As you can see, it has a product ID, a product name, a price, a sale price, and a code.

We will check which of these fields already exist in the logs and which ones we need to add. First, we need to upload the lookup file: that is our prices.csv. Select the file, and I'll keep the same destination file name, prices.csv. Our prices.csv has been successfully added. If this file has to be used by other users of Splunk, click on Permissions next to the newly added prices.csv. As with the other permissions, I'll make it global across all apps: anyone who uses Splunk will have read access, and admin and power users will have write access. Those are common best practices, so I'll stick with admin and power users having edit permissions for our prices lookup.

Let's return to our Search app and see which fields from the CSV already exist in our logs. That is index=main sourcetype=access_combined, our access logs from the tutorial data. You can run the same queries to see how the data was parsed and how you can add more information. We'll see what fields are present; I'll deselect everything except the default selected fields: host, source, and sourcetype. So: do we have a product ID? Yes, we have productId. Do we have the product name? No. Do we have price information? No. The sale price? No. The code for the product? No. So we don't have any of this information except our product ID. Let's see how we can add all these fields to our Splunk events using the lookup file, prices.csv, which carries all of this information.

I'll use a command called lookup. With this command, I first need to mention which lookup table to use: the name we gave while uploading, prices.csv. The second argument is which field in the CSV file to match on, productId, and how it maps to the field that already exists in Splunk. Then I OUTPUT the new fields from the CSV into my events. This command says: go to the CSV file wherever the product ID matches. So wherever the product ID matches a given value, Splunk will add all the CSV values for that event; if the product ID matches another row, it will add that row's information instead.
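Putting that together, a sketch of the search (field names taken from the tutorial CSV; the lookup name assumes the file was uploaded as prices.csv):

```
index=main sourcetype=access_combined
| lookup prices.csv productId OUTPUT product_name price sale_price Code
```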

  1. Searching Using Lookups in Splunk

When we first wrote this lookup, we made the small assumption that the product ID field was all lowercase. However, it is not a lowercase i; in the CSV it is capitalized, so let us change it to a capital I: productId. Now we should be able to see all the product information we added. As you can see, we have a product name, which was never part of our log data. Similarly, we have a sale price, which came from the lookup along with the price. The final one is code. Once we have this information, we can estimate how much was purchased, considering the price and sale price, by a specific IP or user; similarly, what code was used when applying these prices and what products were purchased.
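Building on that, a hedged sketch of a search to rank the most purchased products (again assuming the tutorial field names):

```
index=main sourcetype=access_combined
| lookup prices.csv productId OUTPUT product_name
| top limit=10 product_name
```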

Let us see the top product names that were searched for or purchased, using these logs available in Splunk. This information was never part of our Splunk data, but we obtained it by including a lookup file, prices.csv, containing it. This tells us which was the most purchased product, as well as the top-ten and bottom product lists. Once we have added these products, you will also want to understand where these files are stored, specifically the prices.csv that we uploaded via Splunk Web. Let us go to our search head, into the Splunk directory; we uploaded this as part of our Search app. Under the Search app, we know there is a default location and a local location. Along with these two, there is another directory.

Allow me to exit local. Yes: alongside those directories, there is a directory called lookups. If you go into the lookups directory, you will see the newly uploaded file, prices.csv. This lookup file is the one containing the product name, price, sale price, and code that the search uses to fetch the information so quickly. We can also inspect the data in prices.csv; as you can see, it is the same information as in the CSV file we uploaded. In this section we learned what lookups are, how to upload them, where lookup files are stored, and how to surface information that is not present in the log by adding it through lookup files.
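For reference, the uploaded file should sit at a path like this (assuming the default Search app):

```
$SPLUNK_HOME/etc/apps/search/lookups/prices.csv
```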

  1. Lookups Use Case Example

As part of our lab exercise, we will create an additional lookup file to add descriptions to all of the HTTP statuses in our web server logs. When you purchase this program, you will most likely be able to repeat the same exercise. So we'll see how we can incorporate this.

I'll go to Settings > Lookups. For this exercise, I've created a file, http_status.csv. If you look inside this file, you'll see just two simple fields: status (the HTTP status code present in the logs) and description (the respective information). Let's see how we can incorporate this information via Splunk's lookup settings. I'll add the new file, http_status.csv, and keep the same file name; you can give it whatever name you choose. Once it is created, it will be entirely under the ownership of the user who uploaded it, and if you want this information to be shared, click on Permissions.

All apps can read it: everybody gets read, and admin and power users can edit these files. Let us quickly go back to our Search app to validate whether our newly created lookup adds new information. Once we are in the Search app, I'll again search index=main sourcetype=access_combined, which is my access log where all the statuses are present, and I'll check the last 30 days. The field name for the HTTP status in the logs is status. We know by now how the lookup command works: the lookup name is http_status.csv, the field we are matching is status (called status in both the CSV file and our logs), and we are outputting a new field, description, which is not part of any web server log.
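The corresponding search, sketched under the same assumptions as before:

```
index=main sourcetype=access_combined
| lookup http_status.csv status OUTPUT description
```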

As you can see, we have a new field called description. These details were never part of our original logs; we added them based on our own knowledge. Similarly, for prices we added the sale price, code, price, and other information. This information can span any number of columns: you can attach an unlimited number of pieces of information to your logs, keyed on a shared field. Take the host field: I have a universal forwarder installed on my PC, which I call Arun Kumar PC. In a CSV you could add latitude, longitude, owner, the applications running on this PC, and when the PC was last patched. The options here are unlimited; you can add any number of fields.
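A hypothetical host-enrichment CSV along those lines (every column and value here is invented for illustration):

```
host,owner,latitude,longitude,last_patched
ArunKumar-PC,Arun Kumar,12.97,77.59,2024-01-15
```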

  1. Creating Macros in Splunk

The next knowledge object in our discussion is macros. Splunk macros are similar to macros in Excel in that they are small pieces of reusable code. Instead of rewriting whole search queries, we will use macros to reuse the same searches multiple times in multiple places. In this example, we will see how to create a small macro and how to share it with other Splunk users so that they can start using a simple macro instead of creating separate, longer searches.

This is our search head. Let me write a short query: index=main sourcetype=access_combined status=200. Those are the successful results. I'll get the location of the IP address we extracted using props, and then the top countries using those IP addresses. Let's hit enter. I'm getting the top-ten countries from which my website was accessed with successful results. Now, every time I need these results, I don't want to rewrite this search query wherever it is required. So I'll just copy this search and go to Settings > Advanced search.

In this place, you can also see whether any macros already exist that you can use. By default, there are a few macros that Splunk itself uses for searching. Here we have two menus: Search macros and Search commands. Search commands will come at a later time; we'll go look at macros. Here we can see some of the internal macros created by Splunk for its own internal searching purposes; these are searches that Splunk's internal search users run on a regular basis, so Splunk has stored them as macros, and even as a regular user you are able to use them. We'll create our own macro, top_countries, from the long query we just ran. I'll enter the name top_countries and paste my search query as the definition. If I had any arguments, I could mention them here so they can be passed into the macro; for now, our simple search query has none.

So I'll leave it as is and click Save. Once it is saved, as you can see, it is private by default. If I want to share it with other Splunk users, I can go to the sharing settings and select whether to share it in this app or globally, that is, with everyone using Splunk. Everyone gets read permission, and the admin and power user roles get edit permission. I'll click Save, and once it is saved, go back to your search. Instead of writing the complete command, you can call the macro we just created. To call a macro, there is a symbol on the same key as your tilde, but you don't press shift for it. It sits just below Escape on most keyboards and is called a backtick. Let me see if I can find it on my on-screen keyboard. No, not this one.

So it is basically on the tilde key: to type the tilde you press shift, but the backtick is on the same key without shift. Let me show it to you in Notepad. This is the symbol: the backtick, which shares the tilde key. Type a backtick, then the macro name we just created, and close with another backtick. Once you have entered this, your search will load without you writing out the full query. See? We just called our macro. We defined it as top_countries, and by wrapping it in backticks we told Splunk that this is a macro rather than a literal search term. Splunk invoked the macro in turn to give us the actual results we were looking for. Now that we have created macros using the GUI, you should also know where these files are located. Let us go to our search head.
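The invocation looks like this (assuming the macro was saved as top_countries):

```
`top_countries`
```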

So this is our search head, and I am in the local directory. As you can see, there is a file called macros.conf that holds the name of the macro and the search it runs. If you had additional arguments and other settings, they would appear on subsequent lines: an args line, and it can continue with eval-based expansion and validation options. Ours is a simple macro that holds only the macro's name and its search definition.
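A hedged sketch of that file (the iplocation step and the ip_address field name are assumptions about how the lecture's query derived countries from the extracted IP):

```
# macros.conf -- a simple macro with no arguments
[top_countries]
definition = index=main sourcetype=access_combined status=200 | iplocation ip_address | top Country
```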

To summarize a couple of knowledge objects: event types are stored in eventtypes.conf, tags in tags.conf, and macros in macros.conf. All the names are relatively simple, following the idea that whatever we call the knowledge object, a configuration file of the same name stores it. So whenever you make these changes and want to see what has been stored, you know where to look; a special character or symbol may differ here and there, but make sure you know where to find this information from the CLI.
