Over the many years I’ve been working with clients, I’ve come to the conclusion that any successful content, KM, or DAM initiative is going to have to address a strikingly wide range of issues.
These problems can’t be solved with a technology-first approach. On the contrary, if you apply a tech-first mindset to these types of initiatives, they will inevitably fail.
Instead, we have to evaluate content, KM, and DAM projects based on the characteristics they need in order to succeed. The following six characteristics, in no particular order, are critical.
- Easy-to-use interfaces that fit into existing workflows
Whatever system you’re using, it has to fit into people’s flow of work.
There are two different extremes to account for here.
One is the person who goes into that system once, maybe twice a month to update something. For that person, the system has to be extremely easy to use and use-case focused because they’re not going to develop the muscle memory that tells them exactly what to do every time they go in. If the system isn’t incredibly simple and intuitive with on-demand help, there’s a good chance that you’ll lose data quality because that employee is going to be accomplishing their task a different way each time.
On the other end of the spectrum is the person whose work is almost entirely performed in that system. If the tagging interfaces are clunky and hard to use, or it simply doesn’t fit into that person’s workflow, they’re going to find workarounds. They’ll take shortcuts around steps that take too long or that they don’t see as valuable. This also leads to a loss of data quality, plus it will make adoption much more difficult.
Any solution you design will need to consider both of these scenarios, and to treat them as separate design problems.
- Auto-categorization for either tagging or tag suggestion
Given the volume of content that’s being generated and the need to make sure that content is properly tagged, there’s no way to move forward at scale without auto-categorization.
The variable here is that whatever approach you take needs to be flexible enough to support both unmoderated tagging (tags applied automatically) and moderated tagging (tags suggested for human review).
Technology is at the forefront of this problem, but whatever tool you choose still needs to fit into users' workflows. It will also need its own staffing and oversight to make sure the auto-categorization is effective. If it's not effective, you'll be both getting bad data and wasting people's time.
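The moderated/unmoderated split can be sketched in a few lines. This is a minimal, hypothetical example: the keyword-matching "model" and the taxonomy below are stand-ins for whatever classifier or taxonomy service your tool actually provides; only the two workflow modes are the point.

```python
# Hypothetical taxonomy: tag -> trigger keywords. A real system would
# call a trained classifier or a taxonomy-aware tagging service here.
TAXONOMY = {
    "onboarding": ["orientation", "new hire", "first day"],
    "benefits": ["insurance", "401k", "pto"],
}

def suggest_tags(text):
    """Return candidate tags whose keywords appear in the text."""
    text = text.lower()
    return [tag for tag, keywords in TAXONOMY.items()
            if any(kw in text for kw in keywords)]

def tag_content(item, moderated=True):
    """Apply tags directly (unmoderated) or stage them for review."""
    suggestions = suggest_tags(item["body"])
    if moderated:
        item["pending_tags"] = suggestions  # curator approves later
    else:
        item["tags"] = suggestions          # applied automatically
    return item

doc = {"body": "Schedule orientation for the new hire.", "tags": []}
tag_content(doc, moderated=True)
print(doc["pending_tags"])  # ['onboarding']
```

The design choice worth noting: both modes share the same suggestion logic, so you can start moderated, measure how often the curator accepts the suggestions, and only then decide where unmoderated tagging is safe.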
- A content curator whose job it is to oversee and/or do the actual tagging
Think of this person as the content Lorax. This person will speak for the content itself.
It’s their job to deeply understand how the system works, what content is in there, how it’s getting tagged, and if and why it’s getting tagged in different ways.
This is not just a job function to give someone who does other things for 90 percent of their day. You want this person to be like an accountant: an accountant's primary focus is your books, all day. You're not going to ask them to go perform a different job function during their downtime, because their entire job is to know your finances inside and out.
Not only that, but a good accountant also knows the context and history of your finances, and is able to differentiate between what’s important and what’s not, and why.
Similarly, your content curator should have the time and support to be able to become the expert on your content, the content systems, the needs of the different content creators, and, most importantly, your users. While this setup is still less common than it should be, more organizations are seeing the value of a position like this. It’s not yet showing up as a line item at the beginning of most projects, but it should.
- An organizational structure that supports the tagging effort
This refers to the larger organizational structure and job descriptions. If you want folks to tag content appropriately, it has to be in their job description and it has to be part of their monthly and quarterly reviews. They need the tools to do this work, and the tags need to be relevant to them.
This goes back to the content curator, and to having a content team whose responsibilities don't overlap.
- Analytics and feedback that report on overall usage as well as impact of the tagging
What type of analytics is your project going to need for success? These need to be part of the process up front, before you choose a tool or begin implementation. Most tools will report on content performance, because that's what everyone wants to know. But feedback on how often different terms are being used, which metadata fields are being populated consistently, and so on is often treated as an afterthought even though it's just as critical.
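Those "afterthought" metrics are cheap to compute if the tool exposes the tagged records. A minimal sketch, assuming items are plain dictionaries and using hypothetical field names (`owner`, `expires`):

```python
from collections import Counter

# Hypothetical tagged records exported from a content system.
items = [
    {"tags": ["onboarding", "benefits"], "owner": "hr", "expires": "2025-01-01"},
    {"tags": ["benefits"], "owner": "hr", "expires": None},
    {"tags": [], "owner": None, "expires": None},
]

def term_usage(items):
    """Count how often each tag is applied across the corpus."""
    return Counter(tag for item in items for tag in item["tags"])

def field_coverage(items, fields):
    """Fraction of items in which each metadata field is populated."""
    n = len(items)
    return {f: sum(1 for i in items if i.get(f)) / n for f in fields}

print(term_usage(items))  # Counter({'benefits': 2, 'onboarding': 1})
print(field_coverage(items, ["owner", "expires"]))
```

Even a report this simple surfaces the problems the article is describing: terms nobody uses (candidates for pruning) and fields nobody fills in (candidates for redesign or training).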
- A tech stack that can handle metadata well
During one of Factor’s engagements with a multinational client, we had to come to a screeching halt because the tool they were using couldn’t support the taxonomies the project required.
Just like the analytics and feedback, your tech stack is one of those things that tends to get de-emphasized even though it really is a first-order issue. Often, limitations won’t make themselves known until it’s too late to switch, so identifying the right tech tools at the start of the process is key.
That was part of the issue during the aforementioned engagement. Once it became clear that the tool wasn’t going to be up to what we needed, we had to stop work until the vendor was able to come up with a solution. That slowdown could have been avoided if the vendor selection process included the tagging and metadata requirements.
Content initiatives are complex animals because they rely on people as much as they do on technology. If you’re embarking on a content initiative, ask yourself: Are we ready to invest in a content curator and a full-time content team? Do we have the executive support and strategy to move forward?
Without those pieces in place, any content initiative will likely reach only a fraction of its full potential.