So, you did all this work and built these fantastic taxonomies and navigation models. Aren’t you done yet?
Not really. As with architectural plans or backcountry trip plans, the realities on the ground will require modifications and adjustments to the information architecture. The taxonomies, metadata, content models, and navigation that make up the IA will all be tested during implementation.
As an organization moves through implementation, a thousand small decisions and a few really big ones will be necessary. If you’ve done your due diligence by assessing your situation, basing your decisions on users, and building the right models, the work will stand up to these realities. We know, though, from hard-earned experience that seemingly small, inconsequential decisions can have huge impacts later in the project. In particular, the realities of the technical environment and its ability to implement the models at scale will probably be the biggest area where decisions need to be made. This is often also the case for workflow and governance. User testing during this phase can also have a big impact on the models, but for this post we will focus on the other aspects of the implementation.
Problems often arise during implementation because that is when details regarding the capabilities of systems, integrations between systems, internal workflows, and content management typically emerge. As the impacts of these details surface, teams commonly identify areas where parts of the models cannot be implemented for technical reasons, or where implementation is too expensive, either in development cost or management cost. While a well-planned project will dig into these areas early, there will always be on-the-ground details that require refinements and small changes.
These are some of the more common bumps in the road:
Technical design and modeling

Perhaps the biggest area where these issues arise is in how the models are designed and implemented in the technical systems. Different modeling approaches affect scalability, latency, and the cost of change going forward. For example, a proposed navigation model may rely heavily on facets to drive the experience, and each facet represents a query of some sort on the backend. We have seen projects where caching capabilities or database table design had real impacts on the taxonomies and navigation. If these queries are expensive from a performance perspective, changes to the information model are often required. Where a Content or Product Management System is in use, it often limits the number of attributes that can be searched quickly, while the rest are stored as a blob somewhere. Identifying which attributes belong where is essential in this case.
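To make the "each facet is a query" point concrete, here is a minimal sketch. The product data, field names, and counting approach are all invented for illustration; real systems would push these aggregations to a search engine or database, which is exactly where the performance constraints bite.

```python
# Hypothetical sketch: each facet shown on a navigation page is, in
# effect, a separate aggregation query against the backend. All data
# and field names below are invented for illustration.
from collections import Counter

PRODUCTS = [
    {"type": "jacket", "color": "red",  "brand": "Acme"},
    {"type": "jacket", "color": "blue", "brand": "Acme"},
    {"type": "boot",   "color": "red",  "brand": "Summit"},
]

def facet_counts(products, facet_fields):
    """Build one Counter per facet field -- i.e., one 'query' per facet.

    A page with ten facets triggers ten of these aggregations, which is
    why caching and table design end up shaping the navigation model.
    """
    return {
        field: Counter(p[field] for p in products if field in p)
        for field in facet_fields
    }

counts = facet_counts(PRODUCTS, ["type", "color"])
```

If each aggregation is slow at production scale, the pressure lands back on the information model: fewer facets, precomputed counts, or restructured attributes.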
Search implementation

The implementation of search often has an impact on the model. The capabilities and limitations of the search infrastructure should have been identified in the assessment phase and taken into account during modeling. However, during implementation, the impact of managing and then integrating synonyms and broader, narrower, or associated terms will come into play. Conceptually, these are all part of one model, but in practice they may have to be modeled in different systems. If so, the taxonomy design and integration will need to be built around these limitations.
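As a sketch of that split, consider one taxonomy term that is conceptually a single record but has to be projected into two systems: a synonym ring for the search engine and a parent/child hierarchy for the CMS. The term, field names, and output formats here are assumptions for illustration, not any particular product's API.

```python
# Hypothetical sketch: one conceptual taxonomy term, split across two
# systems. Field names and formats are invented for illustration.

TERM = {
    "label": "Hiking Boots",
    "synonyms": ["trail boots", "walking boots"],
    "broader": "Footwear",
    "narrower": ["Waterproof Hiking Boots"],
}

def to_search_synonyms(term):
    """Flatten the term into a comma-separated synonym ring, the style
    many search engines accept in a synonyms file."""
    return ", ".join([term["label"]] + term["synonyms"])

def to_cms_hierarchy(term):
    """Keep only the broader/narrower links a CMS taxonomy can store."""
    return {"parent": term["broader"], "children": list(term["narrower"])}
```

Once the model is split like this, keeping the two projections in sync becomes a governance and integration task of its own.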
Meeting UX requirements
These requirements assume that information can be presented in different ways and be available across screens, channels, and locations. Supporting the particulars of these different retrieval and publication processes will often require adding context-related metadata. For example, different channels may require slightly different copy or descriptions for products, or content may need to be chunked with greater fidelity than expected to support a responsive environment.
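A minimal sketch of that context-related metadata: a content item carries per-channel variants of its description, with a default as fallback. The channel names and field layout are invented for illustration.

```python
# Hypothetical sketch: channel-specific copy stored as context-related
# metadata on a content item. Channel names are invented.

CONTENT = {
    "title": "Trail Jacket",
    "description": {
        "default": "A lightweight, waterproof trail jacket.",
        "email":   "Waterproof. Lightweight. Ready for the trail.",
        "mobile":  "Lightweight waterproof jacket.",
    },
}

def description_for(item, channel):
    """Return the channel-specific copy, or the default if none exists."""
    copy = item["description"]
    return copy.get(channel, copy["default"])
```

The fallback is the important design choice: every new channel should degrade gracefully to the default rather than require new copy up front.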
Governance and security
This is often a surprising driver of refinements during implementation. Unfortunately, governance is often overlooked (or given only a cursory review) during the assessment phase. During implementation (even if governance was properly assessed), people start to realize the essential governance implications of managing taxonomies, metadata, navigation, and the like in a more transparent and efficient process. We have seen projects bog down because essential reviews by legal, compliance, or HR teams, which had previously been completed via e-mail or in person, were now visible to a wider range of people. As it turns out, some of this review needs to occur out of sight. In one extreme example, a client needed a set of terms in a taxonomy to tag documents, but the governance process required that they be hidden from all but a few people in the organization. This requirement was identified with a client many Aprils ago, and one of the terms in question was “October Layoffs.”
Workflows

There are two types of workflows that commonly impact the information or navigation models: workflows that are driven by metadata, and workflows required to manage the models themselves. In either case, it is a common mistake to over-architect the workflows in the design process. This is understandable, as a well-designed system opens up many capabilities: content workflows can be rationalized, notifications can be sent just in time, and taxonomies can be updated frequently based on input from users, search logs, or other metrics.
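A metadata-driven workflow can stay very simple, as in this sketch. The routing rules, field names, and queue names are invented for illustration; the point is that the workflow logic reads the same metadata the IA defines, so changes to one ripple into the other.

```python
# Hypothetical sketch: metadata driving a review workflow. Rules and
# queue names are invented for illustration.

def route_for_review(item):
    """Route a content item to a review queue based on its metadata."""
    if item.get("contains_pii"):
        return "legal-review"
    if item.get("audience") == "external":
        return "brand-review"
    return "auto-publish"
```

Starting with a handful of rules like these, and adding more only when a real need appears, is one way to avoid the over-architecting trap described above.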
This is as it should be
Planning for and embracing these types of changes is an essential part of a successful implementation. Just as a building plan needs to account for unknowns like site conditions, material availability, and coordination of subcontractors, an implementation of information and navigation models needs to account for the common pitfalls above. Define your business goals, do your due diligence with assessments, and you can avoid the common mistakes and move on to the interesting ones.