Each tag placed on a website has its own unique loading time, which can be impacted by factors entirely outside of your control. With each additional tag added, the potential for reduced site performance increases.
“How can we optimize? What can we be doing in order to improve the performance of the site?”
In this webinar, we focused on the building blocks of performance metrics to create a solid foundation for protecting performance and conversions of your websites.
Specifically, we discussed:
- How and why tags impact website performance
- How to use Tag Inspector’s Performance Module
- How to be ready to use the performance metrics to inform performance initiatives as they relate to tags
Did you miss the webinar? Don’t worry! We have a recap for you!
All right. Hello, everyone, and welcome to part three in our tags and performance webinar series.
Today it’s going to be a shorter webinar, a 30-minute training webinar, just on the Tag Inspector platform itself. I want to get into, within the UI and within export reports, exactly how we can evaluate tag performance using Tag Inspector. I want to go through, just at a high level, some of those different metrics that we’ve discussed in our past two webinars: what to really look for, how to evaluate tag performance, and how to optimize the performance of tags, and as a result, the user experience on the website as well as data collection with the different platforms that you’re using.
Just to start out here with a little bit of general housekeeping: as always, these webinars are brought to you by Tag Inspector.
Tag Inspector is a product of InfoTrust. On the InfoTrust side of the house, we do digital analytics and tag management consulting, as well as a number of educational programs. As you can see there, over 2,000 sites analyzed and supported annually, a number of different programs, and offices in both the US and Dubai.
Then your presenter today: me, Lucas. I am the Tag Inspector Product Manager, and also a tag management consultant here at InfoTrust and Tag Inspector.
Also, as always, if you have any questions, please, we encourage you to let us know throughout. Just drop them in that questions pane. They’ll come through. We’ll answer anything as we see them. Then anything that does not get answered, we can always circle back with you at the end or after the webinar. If you have any questions following the webinar, please reach out to us. We’re more than happy to help.
Just a quick agenda.
Like I mentioned, we’re going through, first, a quick overview of some of the things that we’ve discussed before: some of those key metrics, the key things we want to look at to evaluate where we are right now with the tag deployment in the context of performance. Then, where we can go to optimize, where we really want to circle the wagons, if you will, and try to improve those results by improving the tag architecture on our website.
We’ll look at it from the perspective of Tag Inspector scans first: what we can get out of the regular UI reports, then what we can get out of exports, both from the UI as well as API exports. Then we’ll hop over into Tag Inspector Realtime. The focus there today is going to be primarily on what we can get out of the UI reporting. As was announced on our blog last week, we did release an API for Tag Inspector Realtime, so you do have access to a lot of the data points that we’ll go through today in a raw, aggregate form, and you can really do some cool analysis there. At the end, we’ll do a quick summary, and then just answer any questions that have come up throughout.
Perfect. To start, just an overall overview of what we’ve talked about over the past couple of months, over the past few webinars.
When it comes to tag performance and optimization, these are the primary pieces that we want to look at. Number one, just how does a tag load? For all the different tags that are on your website: first, what’s there? How many different tags, how many different platforms are there, and how are they loading? Are they loading from the page source? Are they loading through other tags? Are they loading through your tag management system? What does that behavior look like, and is it optimal for that particular platform, for the construction of your site, and for what you have going on?
Getting into the individual tags themselves, we want to start looking at latency. How long does a request take; how long does it take for a particular platform to return that response? As we’ve touched on, latency matters in a few different ways. One, if a tag is loading synchronously, the entire time that request is out there and you’re waiting on a response, nothing else can load, which means that any content is going to be blocked from loading on the page, and any other tags are going to be blocked from executing. If something has really poor latency and it’s loading synchronously, you could be in a situation where other critical tags aren’t even starting to fire, starting to load, until six, seven, eight seconds into the overall page load. At that point, good luck getting complete data collection.
In the asynchronous context, it’s just that requests would be outstanding due to high latency, taking up browser resources for an unnecessary period of time, and again, potentially leading to some of those blocking sorts of situations.
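To make the latency point concrete, here is a toy back-of-the-envelope model, not Tag Inspector output; the tag names and latency numbers are made up. With synchronous loading, blocking time compounds as the sum of tag latencies, while asynchronous requests overlap, so the wait trends toward the slowest single tag:

```python
# Toy model of blocking time; tag names and latencies (ms) are hypothetical.
tag_latencies_ms = {"tag_a": 800, "tag_b": 1200, "slow_vendor": 6500}

# Synchronous tags load one after another, so blocking time is the sum.
sync_blocking_ms = sum(tag_latencies_ms.values())

# Asynchronous requests overlap, so the wait trends toward the slowest tag,
# and page content isn't blocked while they are outstanding.
async_wait_ms = max(tag_latencies_ms.values())

print(sync_blocking_ms, async_wait_ms)  # 8500 6500
```

In this toy scenario, one 6.5-second vendor costs you 6.5 seconds either way, but synchronously it also drags every tag behind it past the 8-second mark.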
The next thing we want to look at is individual request characteristics.
A couple of things that we’ve covered before, in the context of why tag performance matters, are the volume of requests that can be sent by different tags, and by all tags on your page, and the size of those requests. Both of those factor into how much bandwidth users need and how quickly you can even expect, in a perfect scenario, your web page to load, along with all the different tags and pixels on your page. We want to look at request size and request volume and try to minimize those pieces, as well as overall page size. The smaller a page is, the fewer resources need to be loaded, and the faster your pages have a shot at loading. Then, finally, the processing necessary: what exactly are those tags doing, and how much CPU, how much machine power, from your users is that actually taking? Could that potentially be slowing down and hindering the experience of your users?
Now that we’ve covered, again, these different points that we want to really analyze and really optimize for, let’s hop into Tag Inspector to see what this looks like within the tool, and then how exactly we can access this information.
As mentioned, I want to start here with our scan reports. This is what everyone has access to; if you’ve just started with the tool, if you just have a free account, you have access to the scan reports. You can run a scan on a website or on a domain. Depending upon your account level, you might be able to run 50 pages within one scan, or you might be able to run 10,000. Regardless, when you run a scan, the standard output is the user interface report here. What we can really get from the UI report in the performance context is an understanding of what’s on the site at a high level: what tags are there, how many are there, where are they, and how is everything loading. We can also start getting into piggybacking and see which tags are our worst offenders, which ones are loading in the most additional third parties. Down the line, as we get a little bit more granular, those piggybacking third parties mean, again, more requests, more size, more potential latency concerns.
In the UI report, first we have, obviously, the report overview. This is going to be great for just being able to come in here and see all the different tags around the website, and how each one is loading. I can see, again, anything loading through, say, my tag management system. Is there anything loading that’s hard-coded that should be migrated to the tag management system, or are there any instances of piggybacking here that I’m unaware of and want to address, where additional tags are being loaded in? I can see all of that here with a quick look just at my overview report.
You might come into the report details to get a comprehensive list of all the different tags that were found loading on the website. I can see that list of the different tags and platforms, and I can really start evaluating what’s necessary, what’s not, what should potentially be removed, and what needs to be migrated to our tag management system.
In many cases, in the performance context, we’ll have people reach out to us saying, “Hey, there are a couple of pages that I’ve noticed are performing really poorly.” Or, “We just pushed some sort of an update to one section of the website, and it’s performing really poorly.” One thing we can do here with the Tag Inspector reports is see, for these different tags, the pages that contain that tag. If I see something that’s just showed up, something that was newly implemented, I can check whether the pages on which that tag appears correspond to the pages that are performing poorly. I can start doing that for a number of different tags and combinations, and really try to dig down into what the offender is and why this is happening.
For a more granular look at this: like I mentioned, the UI is perfect for the high-level information — sorry, the UI is perfect for the high-level information: what’s there, what can we remove, how is everything loading, where is piggybacking happening. As a first step, we can take a look at the really bad offenders and clean all that up just from the information in the UI.
What’s really helpful, though, is to take another step forward and, again, going back to optimization, look at the number of requests, the size of those requests, and then overall general page-size information.
We can get those from exports. If you export all from the UI, or if you do a network log export via the API, you’re going to get an output that looks similar to this. What this is, basically, with the export all and the network log export, is an export of all the tag information that’s collected in the database in every scan. In every scan, all these different pieces are collected, and then you can just export them to have access.
Within here, we’re going to see the tag request.
This is the tag request for every single tag that we’ve collected information on across the website, across every single page. For each request, I can see the latency of that specific request. I can see where it started, where it’s actually deployed within the page, the start time and end time, as well as the tag size.
I can start with this and pull out metrics just by going through and breaking things down based upon the tag name here. If I filter here for a tag name, for one specific tag, it doesn’t matter which, say 360yield with its 203 instances, I could then figure out the average tag size. I can do that for every single request for each tag across my entire website to start really seeing, okay, what are the really large ones, what are the big requests. As well, because each of these requests is listed, I can just get a count: for these different pages, on average, what is the total number of requests sent by different platforms? You might be really surprised. Due to the functionality and how some of these different tags and platforms work, you might see that one tag you’re using just for remarketing, so just collecting information about a user or cookieing a user, sends five, six, eight different requests from that one individual platform.
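As a sketch of that kind of analysis, here is how you might compute per-tag request counts and average request sizes from an export in Python. The column names (`tag_name`, `size_bytes`) and the sample rows are hypothetical stand-ins; check your actual export’s headers before adapting this.

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Hypothetical rows standing in for a Tag Inspector network log export;
# the real column names may differ from these.
export_csv = """tag_name,page_url,latency_ms,size_bytes
360yield,https://example.com/,420,1850
360yield,https://example.com/products,390,1790
Google Analytics,https://example.com/,95,620
Google Analytics,https://example.com/products,110,640
360yield,https://example.com/cart,510,1900
"""

sizes = defaultdict(list)  # tag_name -> list of request sizes in bytes
for row in csv.DictReader(io.StringIO(export_csv)):
    sizes[row["tag_name"]].append(int(row["size_bytes"]))

# Request count and average request size per platform.
for tag, values in sorted(sizes.items()):
    print(f"{tag}: {len(values)} requests, avg {mean(values):.0f} bytes")
```

The same grouping works for any column in the export: swap `size_bytes` for `latency_ms` and you have per-tag latency averages instead.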
Again, that is a situation where it’s just taking up additional browser resources and potentially having an effect. From an optimization standpoint, if we see something like that, we could maybe deprioritize that tag for loading. If nothing else, have it load a little bit later in the page load, so that some of your other critical tags have a chance to actually get their requests out before that thing starts executing a whole bunch of back and forth with its servers.
In addition here, we also have this latency metric.
Again, you can see the average latency for an individual tag, an individual platform. A lot of our clients are starting to put standards in place around the maximum latency that’s allowed for a particular tag or platform, and really holding their vendors accountable for that. It should be built into their SLAs, and it’s part of that security review. All that information is available in that export.
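A simple way to operationalize that kind of standard is to compute average latency per tag from the export and flag anything over your budget. The 300 ms threshold and the latency figures below are placeholders; the real budget would come from your own SLAs.

```python
from statistics import mean

# Hypothetical per-request latencies (ms) grouped by tag, as pulled from an export.
latencies_ms = {
    "360yield": [420, 390, 510],
    "Google Analytics": [95, 110],
}

LATENCY_BUDGET_MS = 300  # example SLA threshold; set per vendor agreement

# Tags whose average latency exceeds the budget, with their averages.
over_budget = {
    tag: mean(values)
    for tag, values in latencies_ms.items()
    if mean(values) > LATENCY_BUDGET_MS
}
for tag, avg in over_budget.items():
    print(f"{tag} exceeds the {LATENCY_BUDGET_MS} ms budget (avg {avg:.0f} ms)")
```

Run against each fresh export, this gives you a standing list of vendors to hold accountable, rather than a one-off spot check.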
We also have, if you have access to the API, a general pages export from the scans, where for every single unique URL that was scanned, you can see the overall page size. Again, that’s something we’re optimizing for: the page size determines whether you even have a chance to load within a reasonable time period, depending upon the user’s device. Then we can see the raw request count: how many tags, how many requests are being made on each page. If we have some pages that are really poor performers, we can come in here, and we have seen in analysis some correlation between the overall number of requests and the overall timing figures for pages. We can see here what the low-hanging fruit is, what we can look to optimize, which pages we really want to dig into first. Then, also, we have some general page timing events for each URL here.
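To sanity-check that relationship on your own data, you can compute the correlation between per-page request counts and load timings from the pages export. The URLs and figures below are invented for illustration; the Pearson calculation itself is standard.

```python
# Hypothetical pages-export rows: (URL, total request count, load time in ms).
pages = [
    ("/", 85, 3100),
    ("/products", 140, 5200),
    ("/checkout", 60, 2400),
    ("/blog", 110, 4100),
]

counts = [p[1] for p in pages]
timings = [p[2] for p in pages]

# Pearson correlation between request count and load time, computed by hand.
n = len(pages)
mean_c = sum(counts) / n
mean_t = sum(timings) / n
cov = sum((c - mean_c) * (t - mean_t) for c, t in zip(counts, timings))
var_c = sum((c - mean_c) ** 2 for c in counts)
var_t = sum((t - mean_t) ** 2 for t in timings)
r = cov / (var_c * var_t) ** 0.5

# Pages sorted by request count, heaviest first: candidate low-hanging fruit.
heaviest_first = sorted(pages, key=lambda p: p[1], reverse=True)
print(f"r = {r:.2f}; start with {heaviest_first[0][0]}")
```

A strong positive `r` on your real export supports prioritizing the request-heavy pages first; a weak one suggests latency or payload size, not raw count, is the driver.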
These reports and exports can give you a couple of things.
One, they can help you identify where you need to start: in the pages report, which sections and areas of the site really need addressing first. Then, with the overall network log export of the individual tag information, the export all, basically, we can get in there and start identifying, okay, which tags are the worst offenders, which tags can we really look to optimize. In the case of piggybacking, we can see the true effect of a tag that’s piggybacking in a bunch more tags, a bunch more platforms. We might see only one request from the thing that is piggybacking in and loading five more, but with those additional five, again, you’re adding requests, you’re adding size. What is the true effect of having that tag on the site that is then loading in additional ones?
Perfect. That is the Tag Inspector scan reports. A lot there when it comes to performance.
We also then have the Tag Inspector Realtime module, where you can get even more information.
Now, again, Tag Inspector Realtime, as many of you know, works a little bit differently. It is a tag-based solution; it’s our own tag. You can deploy it however you would like, through a tag management system or directly in the page. What we do is record tag behavior in the live environment. Here you can get a look at what is really happening on the site. It’s super helpful when it comes to optimizing the timing and ordering of firing, as well as looking at actual tag behavior from a latency standpoint.
To quickly run through this reporting: the first thing you’ll see when you hop on over into Realtime is just a dashboard. In that first graph, you’ll see the average latency for all tags across the entire site. Now this is, again, aggregated for each individual tag across all pages and all page loads of a site. This is a true look at the actual latency, in the live environment, of the different tags.
I can get a little bit more granular here. I can start looking at individual URLs and see the average DOM complete time for each URL. I might want to dig into some of those that are performing really poorly. For each page, I can then come in and see all of the page loads that have happened for that page. Typically, the use case here would be that we’re trying to optimize performance of our order confirmation page. It’s the end of the funnel, so not necessarily somewhere there’s a huge, huge emphasis from a user experience standpoint, just because users are typically going to leave from that page anyway, but from a data collection perspective, it’s the most important page on our website. It’s where all those conversion tags are most likely firing. It’s where all of that transaction and conversion information is being collected by things like analytics and whatever advertising or media platforms we’re using, so that we can evaluate effectiveness and properly allocate our marketing and advertising budget.
On a page like that, we want to go in there and be able to see the tag behavior for all different users, for users of different types, coming from different locations. We can do that here within Realtime. I can see all the different page loads, and the complete time for each. For individual page loads, I can come here and see all tag behavior. Now what’s really useful here is the hit timings for each of the different tags and the different hits that are being sent. I can see the start time: when was that tag first initiated? Then also the end time: when did that response complete, when was a response sent back?
What this allows me to do is see, “Hey, I might have Google Analytics implemented perfectly on the website. I know that. In my manual test, everything works. I looked at the configuration. Everything’s right with the tags. My data layer’s there. It’s populating as I’m expecting it to. But for some reason, I still have a big discrepancy between the data I’m seeing in GA and what I’m seeing in the backend.”
Oftentimes, it’s due to performance and just other tags. I can look to see: are there a bunch of other requests happening before Google Analytics? Once I identify what those tags are, I can start looking into where they’re loading. Is it something where they’re also loading in my tag management system, so I can deprioritize them? Is it something where they’re loading from the page source, so I’m going to need our development team to get my tag management system, or this particular tag, up higher in the page, so the browser is reading and executing it first? All of that type of information is possible because we can see these different hit timings.
Also helpful here, and I’m not going to dig into it too deeply today, but as I mentioned at the outset, last week we released the API for Tag Inspector Realtime, so we can start seeing that information in aggregate as well. What is the average start time of my transaction hit for my analytics platform on a confirmation page? We can now answer that question. You can now look at it from the standpoint of: for this type of user, for this location, for this browser, for this type of device. That becomes, again, really powerful for troubleshooting any issues as well as identifying where to optimize.
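For example, once you have aggregate hit-timing records in hand, answering “average transaction-hit start time by device” is only a few lines. The field names and values here are illustrative, not the actual Realtime API schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical hit-timing records; the real Realtime API field names may differ.
hits = [
    {"tag": "Google Analytics", "hit": "transaction", "device": "mobile", "start_ms": 2900},
    {"tag": "Google Analytics", "hit": "transaction", "device": "desktop", "start_ms": 1400},
    {"tag": "Google Analytics", "hit": "transaction", "device": "mobile", "start_ms": 3300},
    {"tag": "Google Analytics", "hit": "transaction", "device": "desktop", "start_ms": 1600},
]

# Average transaction-hit start time on the confirmation page, split by device.
by_device = defaultdict(list)
for h in hits:
    if h["tag"] == "Google Analytics" and h["hit"] == "transaction":
        by_device[h["device"]].append(h["start_ms"])

avg_start_ms = {device: mean(times) for device, times in by_device.items()}
print(avg_start_ms)
```

The same grouping key could just as easily be location or browser, which is what makes the segmented troubleshooting described above practical.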
With that, I want to just turn it over to any questions that might be out there.
Again, if there are any questions that didn’t get answered, please don’t hesitate to reach out to us. We’re happy to help out in any way, happy to help you take a look at your website and really think through how you can optimize, what you can be doing to improve the performance of the site, as well as of the tags and data collection.
All right, I’m not seeing any questions come through, which is fine. As I mentioned, if you think of anything moving forward, please reach out and let me know. I’ll go back through and see if maybe we missed anything, and if so, follow up with emails to answer any of those additional questions as well. Thank you all again very much for joining us and learning a little bit more about tag performance and how Tag Inspector can help you analyze as well as optimize the performance of the tags on your website. Until next time, we will see you later.