In our latest episode of Test. Learn. Grow., our hosts, Myles and Allyn, go behind the scenes of dashboard creation with Level’s Data Manager, Jordan Grace.
Rather listen than read? In this podcast episode, Myles and Jordan talk about what happens behind the scenes here at Level when it comes to analyzing and learning from data.
Understanding What Data Clients Are Utilizing
Jordan talks about how his team at Level likes to dive in and understand what kinds of data the client is utilizing, so that they can provide the client with a series of questions to answer during their first data kick-off. During the kick-off, they find out whether there is a way to extract the data from the client’s sources. If there isn’t a way to extract the data or, in a sense, set up an API, they take the time during the meeting to identify a process that will work for both parties. The goal is a concrete handoff procedure: getting the data from the client in a secure fashion so it can be added to our system and organized dashboards.
At Level, we love to provide data transparency, and to do that we use a tool called Qlik Sense. Within the dashboards, we can provide a sheet that gives you a detailed breakdown of your ad, keyword, and geo-specific metrics. Depending on what the client needs, Level can provide more detailed data or general summaries.
Behind The Scenes
Data might seem like a simple process, but there are a lot of moving parts working behind the scenes. Jordan explains that it’s like a giant funnel: one piece extracts the data, another cleans it up, and another stores it. On a technical level in the data world, this is called ETL (extract, transform, load) or ELT (extract, load, transform), depending on the order of the steps.
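The funnel Jordan describes can be sketched in a few lines of Python. This is a minimal illustration of the ETL pattern, not Level’s actual pipeline; all the field names and values are made up.

```python
# Minimal ETL sketch: extract raw rows, transform (clean and cast), load.
# Everything here is illustrative, standing in for real API calls and a
# real warehouse.

def extract():
    # In practice this would call an ad-platform API; here we use static rows.
    return [
        {"campaign": " Brand ", "clicks": "120", "spend": "45.50"},
        {"campaign": "Generic", "clicks": "80", "spend": "30.00"},
    ]

def transform(rows):
    # Clean up: trim stray whitespace, cast strings to numbers.
    return [
        {
            "campaign": r["campaign"].strip(),
            "clicks": int(r["clicks"]),
            "spend": float(r["spend"]),
        }
        for r in rows
    ]

def load(rows, warehouse):
    # "Load" here just appends to an in-memory list standing in for a database.
    warehouse.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse[0]["campaign"])  # → Brand
```

In an ELT setup, the same pieces run in a different order: the raw rows land in the warehouse first, and the transform step happens there.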
The easiest way to envision this is the following: imagine you are setting up dinner, and the first thing you do is think about what you want to make, right? Then you need to identify what recipe you want to follow, which is essentially what the kickoff meeting is. With that, we have built a layer called a Data Mart. Data marts are your one-stop shop for all reporting, where all the data is finalized; this is what you would pull for a report, or what is utilized by Qlik Sense or any other dashboards we use here at Level. So, back to the dinner analogy: next you need to ask yourself, what ingredients do I need? This takes us back to the kickoff. We identified what we are trying to cook, and now we must do the extraction, or the shopping, in a sense.
Then we extract the data, using two different systems. The first, Adverti (a third-party tool), is a low-code platform that can extract from various sources and update automatically. The second tool we use is Jenkins, a CI (continuous integration) tool that runs Python scripts for more custom extractions.
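A custom extraction script of the kind a Jenkins job might run nightly could look roughly like this. The API call is stubbed out with static rows, and the file layout is an assumption; a real script would use an HTTP client and the platform’s own report endpoint.

```python
# Hypothetical shape of a custom extraction script scheduled by a CI tool
# like Jenkins: fetch a report, write it to CSV for the staging area.
import csv
import io

def fetch_report():
    # Stand-in for an ad-platform API response.
    return [
        {"date": "2023-01-01", "channel": "search", "impressions": 1000},
        {"date": "2023-01-02", "channel": "search", "impressions": 1200},
    ]

def write_staging_csv(rows):
    # Write the raw rows exactly as received; cleanup happens later,
    # downstream of the staging area.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["date", "channel", "impressions"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = write_staging_csv(fetch_report())
print(csv_text.splitlines()[0])  # → date,channel,impressions
```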
The data mart is the final piece of the funnel; at the start of that funnel sits the staging area, a raw environment where you store all your unorganized data, which makes it easier for data analysts to come back to. From the staging area, the analyst cleans the data up and organizes it better. Organizing it into specific channels or categories helps when you are trying to pull data from multiple sources that don’t relate: you can easily make them relate.
For example, when you extract and merge data from Google Ads and Bing Ads through the two advertising systems, both contain similar information, but the fields aren’t lined up one-to-one. It’s then up to the Level team to reach out across the agency to identify what each field means. When it comes to cleaning up data sources, we want to combine them through specific filters and segment them into what the client wants to see, or what the team at Level wants to report.
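Lining up two ad platforms often comes down to a mapping per source that renames each platform’s columns to one shared set. A small sketch of that idea, with illustrative field names (the real exports differ):

```python
# Two ad platforms report similar metrics under different column names;
# one rename map per source makes the rows line up one-to-one.
# Field names here are illustrative, not the platforms' real export columns.

GOOGLE_MAP = {"Campaign": "campaign", "Clicks": "clicks", "Cost": "spend"}
BING_MAP = {"CampaignName": "campaign", "Clicks": "clicks", "Spend": "spend"}

def standardize(row, mapping, source):
    out = {std: row[raw] for raw, std in mapping.items()}
    out["source"] = source  # keep track of where each row came from
    return out

google_rows = [{"Campaign": "Brand", "Clicks": 120, "Cost": 45.5}]
bing_rows = [{"CampaignName": "Brand", "Clicks": 40, "Spend": 12.0}]

merged = (
    [standardize(r, GOOGLE_MAP, "google") for r in google_rows]
    + [standardize(r, BING_MAP, "bing") for r in bing_rows]
)
# All rows now share the same column names and can be reported together.
```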
Once you get through these stages, you can start to analyze the information and figure out what is working and what is not. Say you want to change what data you are seeing and analyzing: you will want to create a new data mart that breaks the information down by the specific ad channel. This lessens the amount of unnecessary data you are looking at. For example, you may have around 300,000 rows of sourced data but only need the 100,000 rows relevant to your research. Creating a data mart is the best approach because it filters out all the other information.
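In this framing, a channel-specific data mart is just a pre-filtered slice of the combined data. A toy sketch (the channel names and row shape are made up):

```python
# A data mart as a filtered slice: keep only the rows for the one ad
# channel this mart serves, so reports never scan the irrelevant rows.
rows = [
    {"channel": "search", "clicks": 120},
    {"channel": "social", "clicks": 200},
    {"channel": "search", "clicks": 80},
]

def build_mart(rows, channel):
    return [r for r in rows if r["channel"] == channel]

search_mart = build_mart(rows, "search")
print(len(search_mart))  # → 2
```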
If you’re not sure how to start diving into and understanding data, you don’t need much to get going. You can use a program like Microsoft Excel: extract the data, look through it to see what is important to you, and flag those dimensions or metrics. You can run a pivot table within Excel to filter your data source, pivot the information down to the lowest form you need, and then pass the data along to those who need it.
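The same pivot-table step can also be scripted once you outgrow Excel. Here is the grouping-and-summing that a pivot table does, sketched in plain Python (the dimension and metric names are made up):

```python
# What a pivot table does under the hood: group rows by a dimension
# (here, keyword) and sum a metric (here, clicks).
from collections import defaultdict

rows = [
    {"keyword": "shoes", "clicks": 10},
    {"keyword": "boots", "clicks": 5},
    {"keyword": "shoes", "clicks": 7},
]

pivot = defaultdict(int)
for r in rows:
    pivot[r["keyword"]] += r["clicks"]

print(dict(pivot))  # → {'shoes': 17, 'boots': 5}
```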
Want to hear how Level can help you? Schedule a consultation with us.