When you send website data to a vendor, your tag management system has to source that data from somewhere. If there is no structured, centralized location for your data, that often means scraping it from wherever it happens to live in the source code: paragraph text, image file names, linked URLs, and all kinds of other messy locations.
However, scraping data from various places (known as DOM scraping when done on the website page) is labor-intensive and can set your data collection initiatives up for failure down the road when something changes. A basic, structured data layer on your site (typically written as a JSON-formatted JavaScript object) centralizes your data and gives your tag manager a single, reliable source from which to gather information.
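A minimal data layer might look like the sketch below. The name `dataLayer` follows Google Tag Manager's convention (in a browser it would live on `window`); the property names and values are illustrative assumptions, not a standard schema.

```javascript
// A minimal structured data layer, declared near the top of the page's
// source before any tags load. Field names here (pageType, product, etc.)
// are examples only; agree on a schema with your own team.
var dataLayer = [];
dataLayer.push({
  event: 'pageView',
  pageType: 'product',
  product: {
    id: 'SKU-123',      // hypothetical product ID
    name: 'Blue Widget',
    price: 24.99,
    currency: 'USD'
  }
});
```

The tag manager reads these values directly from `dataLayer`, so changing the page's visible copy or layout never breaks your data collection.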
When you rely on DOM scraping to get your data, you never know when someone might unwittingly pull the rug out from under you by making a change to your website or app. With a data layer, reviewing and maintaining your data requires very little understanding of any programming language and far less constant developer support. Additionally, it's easy for your developers to deploy quick fixes, since the data layer is clearly labeled in the page's source.
Installing a structured data layer in the form of a JSON object is a simple, clean method for maintaining the data on your site and ensuring it's always available when you need it. Although DOM scraping may seem to save you time, since implementing a data layer can be time-consuming at first, it has the potential to cost you far more time down the road as your site and its data availability change. A data layer is like a high-interest investment in your future: the rewards will absolutely be greater than the cost.