Webinar Wrap-Up: Evaluating the Performance Impact of Tags with Tag Inspector

Courtney Morgan | Performance

Evaluating Performance Impact of Tags

Yet again, Lucas hosted another fantastic webinar! Did you get a chance to join us?

If you did, share what you liked best about the webinar in the comments below!

If you were not able to join us, don’t worry! We have a recap recorded with you in mind.

In this webinar, we discussed in depth the performance metrics in Tag Inspector.

Specifically, we covered:

  • An introduction to the Performance Module
    • How to access the Performance Module
    • Metrics included in the Performance Module
  • Performance metrics found in Realtime reporting
  • How to use these metrics to inform performance initiatives as they relate to tags

Thank you all for joining our webinar today.

It’s part two of our two-part series on tags and performance. Today we’ll be covering how to, and what to look for when evaluating tag performance, specifically with Tag Inspector. So more in line with our training webinars, but I want to go through a quick overview of some of the different metrics and things to look for when evaluating performance. And then how exactly you can get down to that, and use Tag Inspector to find those different things.

So to start, it’s always a little bit of general housekeeping, just about us and myself.

So Tag Inspector is a tool for tag auditing, monitoring and validation. It is a product of InfoTrust, LLC.

On the InfoTrust side we do web analytics and tag management work, as well as, obviously, software development, which is where Tag Inspector comes in. We work with thousands of sites on a regular basis, out of a couple of different offices, both here in Cincinnati, US, as well as in Dubai. Then myself, your presenter, Lucas: I am our product manager for Tag Inspector, and I also work with a number of our clients in the tag management consulting space. So walking people through this entire process for optimizing tag performance.

We’re looking at tags as a part of overall site performance and user experience initiatives.

We’re going to go through what we’re going to cover here. First just a performance review. For those of you who missed it, we did have a full educational webinar last week on the performance impact of tags. I’m not getting into the weeds with that here today. But I do want to give a brief overview, just so we can go through some of the different factors, how do tags play into the performance of your website? And some of the different metrics that you can be looking at, in order to be able to optimize. I’ll then hop into the Tag Inspector scan reports, both within the UI, as well as what’s possible from exports and via the API.

Included in this section will be a brief overview of our new performance module, with some new, interesting, and exciting performance metrics that are now being collected and surfaced with the Tag Inspector scan reports. I’ll then hop into Tag Inspector Realtime, which is our live environment monitoring solution, and walk through the UI there, and some of the different places and ways you can go about finding information around tag performance, and finding some different metrics that you can then optimize for. At the end we’ll have a summary and Q&A. If you have any questions at all as I’m going through these different items, feel free to drop them in the questions pane, and I will come back to them at the end.

As always, if you have any questions after this, or anything comes up, never hesitate just to reach out. Let me know, happy to help out in any way.

Perfect. So to start here, just to get into the performance review … and for some of you this might be a refresh, because, again, we’ve covered a lot of these things last week.

But when it comes to the basics of performance, there’s three primary factors.

You have the client side, which is the user, the browser, the machine that they’re accessing your site with. You have the connection or the pipe, which is the Internet. So the means by which the information is sent and received from the client side, and then also the server side. Which is going to be, in the context of the tags, the third-party vendors, the services that they’re working with.

So when requests go up for information, where those are processed, and information is then returned. A few different factors here for each of these. On the client side, like I mentioned, the machine, so CPU. How much can the user’s machine handle? How fast can it execute the different functions that are being returned and asked to run by the tags, and how much can it handle at one time? So the processing time. There are also some browser limitations we’ll get into. There are limits to the number of concurrent requests that can be sent by individual browsers. Then, even further than that, limits to the number of requests that can be sent to the same domain.

So say you have a whole bunch of Google Analytics tags, all trying to send to the Google-Analytics.com domain. There are going to be some limits there, based upon your browser. Those different things can factor in, weighing down the browser and slowing down how quickly requests are able to be sent and received, which can then slow down performance. In terms of the connection, or the pipe in the middle, this is where bandwidth comes into play. So, what type of bandwidth, what type of speed or Internet connection are your clients working with? Part of the factor here is also, what type of device are they reaching your site with? You can think of bandwidth as the size of the pipe. So how much information is able to be sent and received at one time?
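The per-domain limit described above can be sketched with a small check: group a page’s outbound tag requests by hostname and flag any host asking for more concurrent connections than a typical HTTP/1.1 browser will service at once. This is a minimal sketch; the function name, the request list, and the limit of six are illustrative assumptions, not part of Tag Inspector.

```javascript
// Sketch: group outbound tag requests by host and flag any host that
// exceeds a typical per-host browser connection limit (assumed: 6,
// a common HTTP/1.1 default; actual limits vary by browser).
const PER_HOST_LIMIT = 6;

function hostsOverLimit(requestUrls, limit = PER_HOST_LIMIT) {
  const counts = new Map();
  for (const url of requestUrls) {
    const host = new URL(url).hostname;
    counts.set(host, (counts.get(host) || 0) + 1);
  }
  // Hosts requesting more connections than the browser will open at once;
  // the extra requests queue behind the first batch.
  return [...counts.entries()].filter(([, n]) => n > limit);
}
```

If google-analytics.com shows up with eight concurrent requests, the two beyond the limit wait in a queue, which is exactly the slowdown described above.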

So say we have a user using a nice, new MacBook Pro. They’re going to have much more bandwidth than someone accessing your site on, say, an old mobile device. The Internet connection also comes into play there, so we always want to optimize for the types of devices our users are using. We don’t want to be adding too much weight to our pages, and too much weight through the tags that we’re adding there. The third piece here, the server side, is on the vendor. And the primary metric that you would want to look at here is latency. So the amount of processing time that the platform you’re working with is taking, from the time that a request goes up until a response is returned.

Any time that a response is outstanding, in that sense, your browser could be waiting for that response to return prior to moving on to the next item. And it’s just clogging up the number of requests that are outstanding, which, as we’ve mentioned before, runs into those browser limitations. Quickly, how tags factor into this. One, you have synchronous versus asynchronous requests. Asynchronous requests are all the different tags, all the different requests, that can be sent up concurrently, or at the same time as others. Synchronous requests are the ones that are going to block anything else from happening. A lot of tags, like your A/B testing platforms, are sending synchronous requests, because they’re returning what content should be shown to the user. So those are going to affect performance quite a bit more than your asynchronous requests, which are able to be sent in parallel.
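A rough way to reason about the synchronous versus asynchronous distinction: synchronous tags block one after another, so their latencies add up, while asynchronous tags load in parallel, so only the slowest one matters. A minimal sketch, with illustrative tag names and timings that are assumptions, not measurements:

```javascript
// Sketch: rough load-time impact of synchronous vs. asynchronous tags.
// Synchronous tags execute serially, so their latencies sum; asynchronous
// tags load in parallel, so the longest one bounds the delay.
function estimateTagDelayMs(tags) {
  const syncBlockingMs = tags
    .filter(t => t.sync)
    .reduce((sum, t) => sum + t.latencyMs, 0);
  const asyncLongestMs = Math.max(
    0,
    ...tags.filter(t => !t.sync).map(t => t.latencyMs)
  );
  return { syncBlockingMs, asyncLongestMs };
}
```

With one 900 ms synchronous A/B test tag and two async tags at 120 ms and 300 ms, the page is held up roughly 900 ms by the sync tag alone, while the async pair together only add about 300 ms of parallel load time.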

You have request size and volume. So as mentioned before with bandwidth, there are bandwidth implications when it comes to request size, or the weight, quote unquote. You’ll hear that term thrown around quite a bit, as well as the volume of requests. So the pure number of requests being sent out. One, just in general, the more requests, the more your browser is having to handle, the more potential there is for latency issues, and then also those browser limitations in general. One big area where we see request volume coming into play is with piggybacking. A lot of times it’s not the number of tags that you have deployed directly on your site, that you’re aware of, but the number of tags that are being loaded in by those different platforms that you have on your site.

It’s those that are piggybacking, or being daisy-chained … or, in the ad ops world, the waterfall of different tags, different platforms being loaded; that’s what’s affecting performance on the website. The third big thing to keep in mind here is the nature of the request. Obviously, all the different tags, all the different pixels, there are different types. They’re all performing different functions. Some have nothing to do with the content that’s on the page. Some are scraping the page in order to collect the information necessary to send up to that particular platform. Others are fully interacting with the page and what the user is seeing: a personalization platform, a recommendation or advertising tool.

So based upon what their function is, you’re going to have some different standards and some different rules for those. Also keep in mind the amount of processing that’s required on the user side; that’s where those CPU limitations are going to come into play. Then finally, latency of vendor servers. That’s pretty easy to see, as I’ll show you. But you want to have some standards around what is the max latency that’s acceptable for your website, and for all the tags on your website. So quickly, to run through those metrics again: what tags are there piggybacking? How many tags do I have, where are they, how are they loading?

Got to know that first at a really high level, before I can really get into the specifics. Number two, latency of requests. What is my standard, 300 milliseconds, 500 milliseconds? I need these requests and the responses to be returned within this amount of time, and holding your vendors, holding your third-party tag partners responsible for those. Number of requests, do I have a limit to the volume that is being sent? What is that limit? And with the tags that are being sent, is it a concern? A lot of times, again, volume is not going to come into too much of a factor with the tags that you’re aware of, but if there’s a lot of piggybacking going on, volume can become a concern.

Size of the request, that’s the total weight of the page. There is a max, based upon the user’s bandwidth, of what can be sent and received at one time. Tags are playing a factor in that weight, along with the other elements on the page. So, knowing: what is the overall weight of my page? What are the factors contributing to that? How do tags contribute to that? And then, are there any particular platforms that are really, really adding to the weight of my page? At that point you need to start evaluating: is it worth it? Is there anything I can do to optimize? Is there anything this partner or this platform can do to optimize? And if not, I might need to start making some trade-offs, either removing this tag, in order to free up some excess capacity.
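One way to frame that evaluation is as a simple weight report: total up the bytes each platform contributes, compute the tags’ share of the page, and flag any single platform above a threshold. The field names and the 10 percent per-platform threshold here are assumptions for illustration, not Tag Inspector output:

```javascript
// Sketch: how much of a page's total weight comes from tags, and which
// platforms contribute more than a chosen share of the page.
function tagWeightReport(pageBytes, tagRequests, shareThreshold = 0.10) {
  const perTag = new Map();
  for (const r of tagRequests) {
    perTag.set(r.tag, (perTag.get(r.tag) || 0) + r.bytes);
  }
  const tagBytes = [...perTag.values()].reduce((a, b) => a + b, 0);
  // Platforms whose byte total exceeds the threshold share of the page
  const heavyPlatforms = [...perTag.entries()]
    .filter(([, b]) => b / pageBytes > shareThreshold)
    .map(([tag]) => tag);
  return { tagShare: tagBytes / pageBytes, heavyPlatforms };
}
```

A platform showing up in `heavyPlatforms` is a candidate for the trade-off discussion above: optimize it, ask the vendor to, or remove it.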

Or I need to start moving some other platforms, in order to make room for what this is doing. Your order of tags and hits, that’s more on the data collection side of things. Some things should be taking higher priority than others. You can see that within Reporting, and you can really start evaluating that there, thinking about it a little bit for performance. And then also, are there any scripts that are blocking others? And how we can go about seeing that. If there are, again, that can be an issue with data collection. So that was a quick, high-level review of some of the things that we had gone over, and how tags play into performance.

Now let’s look at how we can actually find this information.

How we can leverage Tag Inspector to find these different pieces, these different metrics, different bits of data, so that we can then start evaluating where we stand now, and optimizing poor performance. So let’s hop in first to our Tag Inspector Scan reports. As mentioned, and as many of you that have been in the platform are aware, within these scan reports we have the overview and the details. Run a scan on any website, take a look at the overview first. First thing you’re going to see here is those instances of piggybacking.

We need to know at a high level what’s going on. So I want to know, one, how many tags, volume-wise, are on my site? And then, are there any that are really bad offenders in terms of loading in a whole lot of other third parties, or piggyback tags that I’m not aware of? This scan right here is on our TagInspector.com site. No really egregious items, but here you can see, like [Disqus 00:13:42] for example, loading in three different platforms, piggybacking off of it. Google Tag Manager, which obviously we’re aware of, is loading in a number of different tags, as is YouTube. Three different platforms are piggybacking off of YouTube. That’s the first place to look, Overview. Are there any tags that are really piggybacking in, loading in a whole lot of other tags that I’m unaware of? Easy to pick those off.

If I identify that a tag is loading in five, six, seven, fifteen additional platforms, and I’m not really using it anymore, I can remove it from my site. Boom. You’ve already reduced the weight of those pages and made some optimizations. Low-hanging fruit. The second thing you’re going to want to look at here is within the details. If there are tags … as you’re trying to determine, okay, where are they, what are they doing? Within the details here we can see that. So you have the full list of the different tags that are on the site. How many pages are they on? Is it really a bad offender? If you know you’re having performance issues on a particular page, coming in here, and being able to identify: what tags are on there that could be contributing to the issues that I’ve identified?

A lot of the additional metrics here are now available either via exporting directly from the UI, by just exporting all here, or by hitting up our API, if you have access to that with your license. So the first thing I’m going to show you here, from an export, is essentially what that export looks like. Now, this is an API export, but it contains all that same information. Some key things here … let me zoom in, make it a little bit easier. Within any export report, you’re going to have the raw data that’s collected. So you have your page ID, which is going to be a unique page. You have the request that’s been sent. Over here on the end you’re going to have the actual URL to which that request is being sent, and then the tag name.

So one thing that’s unique here within the exports, as well as within the API exports, is that it’s not at the high-level tag basis, but at the hit level. So say you have something like Google Analytics. For every one page load hit that’s being sent, there are typically, on average, about three other hits going back and forth to the Google-Analytics.com domain. They have to load in their JavaScript library. They have to form the hit, and then the actual hit with the data needs to be sent out. So you can see each of those. And you can see latency figures for each hit. Now latency, for those unfamiliar with the term technically, is measured from the time that a request is executed … so actually sent up by the client or the browser.

Latency is the time until it then takes for it to receive the first response, which is going to be when the response begins to come in.

So latency is something you’re going to want to optimize. Again, that’s on the vendor, on the tag side. But everything here is in milliseconds, so this first request going up is actually loading in that JavaScript library super-fast, four milliseconds. But we can look through here for anything that is maybe above and beyond what our standard or our policy says for the website. So if our policy is, no tag should have a latency above 300 milliseconds, I can look here and see, in a simulated scan environment, what is surpassing that limit? I can see some more granular information here, with my DNS and connection start times, all that good information. But I can also then see, again, the page, and then the tag [theme 00:18:09] that this is loaded on.
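Checking hits against a latency policy like that can be sketched as a simple filter. The timing field names below mirror the Resource Timing API (requestStart, and responseStart, the point where the first byte of the response arrives); the records and the 300-millisecond default are illustrative, not an actual Tag Inspector export:

```javascript
// Sketch: compute per-hit latency (request sent until first response byte)
// and flag hits over a policy limit. Default limit of 300 ms matches the
// example policy discussed in the webinar; adjust to your own standard.
function latencyViolations(hits, maxLatencyMs = 300) {
  return hits
    .map(h => ({ ...h, latencyMs: h.responseStart - h.requestStart }))
    .filter(h => h.latencyMs > maxLatencyMs);
}
```

Anything this returns is a hit to take back to the vendor, since, as noted above, latency is primarily on their side.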

Primarily here in these exports, the biggest thing around performance is that latency number. Now, one thing that I am excited to announce and share is the new performance metrics that are now being collected by the Tag Inspector crawler. These are available in custom exports currently, which we can create directly from the database. But they will be available via the API here very, very soon, hopefully by the end of the week. So, what these are. Currently, within those exports, as you saw, everything was on the hit level for the different tags. This is now segmented out.

We have page performance information, as well as a few additional metrics for tag performance. So at the page level, the two things that I had mentioned previously that we’re looking to optimize for are the weight of pages, and then potentially the total volume of requests being sent out. For any page that it scans, you now have the page size: the total number of bytes that the page is. So the total weight of the page, and then the total requests, the total number of tag requests being sent up on that particular page. So again, we can come in here. We can start to optimize for the total volume of requests. If we notice that on a particular area of our website, or on a particular page, there seem to be a whole lot more hits, we can start looking at that individual page: what tags are on it?
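With page size and total requests in hand, finding the pages worth investigating is a simple threshold filter. The column names (url, pageBytes, tagRequests) and the limits here are assumptions for illustration; a real export’s fields may differ:

```javascript
// Sketch: scan page-level export rows for outliers in total weight or in
// tag-request volume. The default thresholds are arbitrary examples; set
// them from your own page-weight and request-volume policies.
function pageOutliers(pages, { maxBytes = 2000000, maxRequests = 50 } = {}) {
  return pages.filter(
    p => p.pageBytes > maxBytes || p.tagRequests > maxRequests
  );
}
```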

What do those requests look like?

What about something like the size, or the weight? Something like this multi-brand page on our InfoTrustLLC.com site, we’re talking about 300 percent larger than the others. So what is factoring into that? Is it the images and content that are actually on that page, or is it the tags that are on that page? And I can filter to be able to determine that. On an individual tag basis, I can now also dig into individual tag size. So for each hit that’s being sent up, what is the size of the request being sent? In my previous example, I noticed that on a particular page, there’s a lot of weight to that page. Okay. What hits are primarily factoring into that weight?

And then, from an optimization standpoint, what can I remove? And when I remove that, what type of gains am I going to see, from just a pure size standpoint? We can also start thinking about, as you look into your analytics reports, what are the sections of your site, what are the pages that are being accessed primarily via, say, mobile devices? And once we determine, what is our allocation …? What is the maximum size that we’re comfortable with for all requests being sent from that page, what is the allocation then for tags? And then, how do the different tags play into that budget?

Again, these are all available with the Tag Inspector scans via exports, soon to be via the API. And we can see those different metrics around the weight, the volume of requests, the number of requests, what the requests are, where exactly they’re loading, and what the latency of each is. And then we can start optimizing from there. The next piece I want to cover here is what is available within the Tag Inspector Realtime module. As many of you are familiar, Realtime differs a little bit from the scans in how we go about collecting the information. The scans are a simulated crawl of the website: it’s loading in a virtual browser, under optimal conditions. With Realtime, it’s our Tag Inspector tag.

It’s loading on each page as users come and interact with the page, click on events, the whole nine. It’s live tag monitoring, in the real user environment, so we can see what is actually happening with the tags. How many hits are actually being sent? What is the latency in the live environment for the different tags that are being loaded? So here within the Tag Inspector Realtime UI, you can come into the tags report. This is going to give me a coverage, a high-level overview, of what tags are being loaded on the page. How many unique pages, out of the total unique pages on my site, are these different tags being loaded on? Included here are also some average latency figures.

Now these are averages for a particular platform across all page loads. I can see here … again, obviously, I’m doing some simulation, monitoring four different requests and four different tags, with … it could be any number of tools. Or maybe I’m looking at it within those Tag Inspector scan exports. But I want to see: as I’m testing, everything looks good. How is it then actually performing? So I can come in here, tag latency. I can see there, Optimizely, up close to a second on average in terms of latency. I recognize Optimizely as a particular tag that loads synchronously. It’s loading at the very top of my page, because it is A/B testing, so it’s blocking my page content from loading.

So in some of those instances where my site, or the user’s browser, is waiting on that response from Optimizely, sometimes for almost a second, I know nothing else is loading on my web page. That’s a problem. So that’s something that I’m going to want to dig in, and really start identifying, where is Optimizely? What is it loading on? On what pages is this latency really an issue, and how can I optimize for it? Is it something where we reach out to that vendor? Is there anything they can help me with? Is it something with my implementation? But something needs to change, something needs to be addressed there. I can then start digging into, for these different tags, tag-specific information.

So by clicking on one of those tags, I’ll pull up this overlay where I can see all of those different, unique pages on which a tag was found. I can see the average start time, so when did the tag begin being initiated? I can see average download, so when is that download event happening for that particular page? Down at the bottom here, I can see individual page loads, containing the tags. So when a user accesses this particular page, and my page loaded, during the time that my user was on that page, this tag … in this case Google Universal Analytics, loaded. I can go in and then look at all the tag behavior, tag activity, during that particular instance. We’ll get to that here in a second.

But first, very important, is, again, some of the latency information. So here I can see, for this particular tag, on an hourly basis over the past week, what is the average latency for this particular platform? I can see, where are the spikes? Where is it optimal? Is it during a particular time of day? Is it maybe an instance where there are just a lot of hits going up, which is increasing latency? Is it high-traffic times? Exactly where are these issues happening? I can then go in and identify, say in my pages report, okay, during this particular hour obviously the latency of this tag was astronomical. I can come in and find a page load during that time period, to see what’s going on.

Is it something where there’s just a lot happening? Was it the download time? So maybe my user’s connection was super, super-slow, and it took a really long time for that request even to get up to the Google servers. There’s a number of factors that can play in there for latency. But this is at least going to tell you what to look at, and where to look. As I dig deeper here, again, I can go in and click on an individual page. I want to see how our tags are performing on this page on my actual site. When I look at a page, I can see all unique page loads, or all unique instances of a user accessing this page. So I can go through this and see, how are tags performing, and how is my page performing for users from different locations, using different browsers, using different browser versions?

And then when I choose a specific one, I can see all that tag behavior. Here I can say, for this page, when a user from the United States, using a Firefox browser, came … download time of the page, 3.77. Decent. And then I can see all hits that were being sent up. Now here within this report, you have the start time and then the response end. The difference there is going to be right around that latency figure. So I can see all hits that went up for Google Analytics here, Google Tag Manager. At what point did they initiate? And then when did that response end, fully executed? So I can get that latency, are there any issues? I can also see here, are there any instances of only a single tag loading at one point? And start to dig into, hey, is it maybe blocking others from firing?

I can obviously then also see the order here. So if I know in my reporting I’m missing a large chunk of information for a particular platform, and I know it’s implemented correctly, is it maybe a performance issue? Is it loading after a lot of other hits, and is it not being initiated until really late on the page, causing that sort of discrepancy? So between the Tag Inspector scans and Tag Inspector Realtime, we can really get all those different metrics: around the weight of requests, around the latency of requests, around what tags are on the page, around what order they’re firing in and how they’re loading, and then, are they loading in any other tags that could be causing other issues?

And that is about all we have time for today. We always keep these training webinars right around 30 minutes. We are at just about 12:30 at this point.

So now we’ll turn it over to any questions that you all might have.

One question here, it’s just around, are you supposed to use a tag management system to load all different tags, especially pixels and GA Universal, specifically in the context of Google Tag Manager? Yes. I always recommend loading as many tracking tags as you possibly can through your tag management system. One, just from a pure management standpoint. But two, a lot of tag management systems, especially if you’re using their templates, they’re already optimized for performance.

Even with custom HTML, by leveraging the functionality of your tag management system … especially a combination of your firing rules. You can conditionally fire tags, and it’s much easier to add in logic. So you don’t even have to execute tags on pages where they’re not necessary. For that, as well as for the purpose of … typically when using a tag management system, you’re able to use macros, or dynamic variables, in order to populate data points, as opposed to using jQuery, or something like that, to scrape the page to collect that information. So, yes. I always recommend adding and using tracking, [inaudible 00:31:24] as many tags as you can within your tag management system, as opposed to directly on the page. There was a little bit of confusion due to the examples shown.
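The conditional firing described here can be sketched as a rule table: each tag gets a predicate on the page path, and only tags whose rule matches are executed. The rule shape and tag names are hypothetical illustrations, not Google Tag Manager’s actual trigger API:

```javascript
// Sketch: TMS-style conditional firing. Each tag is paired with a
// predicate on the page path; a tag only fires where its rule matches,
// so unneeded pages carry no weight from it. Rules here are hypothetical.
const firingRules = {
  'checkout-pixel': path => path.startsWith('/checkout'),
  'blog-recommendations': path => path.startsWith('/blog'),
};

function tagsToFire(path, rules = firingRules) {
  // Return only the tags whose firing rule matches this page
  return Object.keys(rules).filter(tag => rules[tag](path));
}
```

This is the performance win of firing rules: a checkout pixel contributes zero requests and zero bytes on every non-checkout page.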

So in some instances you’re going to have … I think that’s in reference to the Tag Inspector scan report here on our website. So here you can see, if looking for … and a great question, even, for migrating tags. As I just mentioned, I definitely recommend loading as many tags as possible via the tag management system, and migrating over those that are on the physical page but not in the tag management system. So when you see instances here, for example, a Facebook pixel, in some cases you’re unable to load a tag through the tag management system. It might be because it’s connected to a specific form on the page.

In the case of a lot of things, even your A/B testing platforms, it might load synchronously. That’s why Optimizely is loading here directly from the page, as opposed to within the tag management system. Sometimes you will see tags like DoubleClick loading through YouTube. YouTube is piggybacking that in; we don’t have access to that DoubleClick tag. YouTube is using DoubleClick in order to get their own statistics from our website. I guess that could be a security issue; those are things that are a little bit difficult to optimize. Now, if you see an instance here of, say, that Facebook pixel not being loaded through Google Tag Manager, then you want to see where that’s happening, in order to be able to migrate it. I can also see that within those exports that I showed you.

And then also here within the UI. So if I click on a specific tag, a Facebook pixel, I can see how this tag is being loaded in these different instances. And then I can come in here to the stack trace, and I can export all instances of it loading directly from my page source. So I can start identifying: why is it loading from the page source on this page? And if it shouldn’t be, this is where it is, so that I can work on migrating it: removing it from the source, and then loading it through my tag management system. The same thing can also be said, like I mentioned, for why some of those tags would not be loaded through a tag management system, but have to be embedded in the page.

There’s a number of items there. It looks like that’s all of the questions here today. As always, we do record this. We’ll be sending out, probably tomorrow, the full recording, along with the deck. But as always, if you have any questions at all, always feel free to reach out to us here at Tag Inspector. Very happy to help with anything. If you have any questions … or you’re working on a performance initiative and want to bounce some ideas off of us, and start to determine, what exactly should we be looking at? I’m more than happy to see if we can help in that. See if we can help grab some of those different metrics.

As always, thank you all very, very much.

We’ll be coming back with another educational webinar next week, doing a deep dive in the data layer. We’re going to be joined by one of our developers, who’s really going to be able to get into some of the specifics, and some of the technical details around that. It should be very interesting. Don’t miss that. In the meantime, thank you all very much, and have a great rest of the week.
