I hesitate even to write this post, but I want to emphasize that there are existing, known, and effective information architecture solutions to the problem of supplying information to information-seeking users.
As you have probably heard–article after article after article has appeared summarizing the fiasco–New York City’s LLM-based chatbot is a disaster. Unsurprisingly to anyone who understands the underlying technology, it provides incorrect and/or nonsensical answers to queries.
The problem to be solved, as I understand it, is to provide a natural-language-parsing portal [that is: a query parser] to answer questions about small business laws in NYC, which comprise a corpus of, I believe, several thousand documents.
To put this into context: this is a very, very small corpus. Consider that even medium-sized scholarly publishers have corpora numbering hundreds of thousands of articles, or that major corporations may have webpages numbering in the millions. It’s small enough that you could, for a fraction of the cost of standing up and running an LLM-based chatbot, hire information specialists to extract and structure the relevant facts for retrieval.
Regardless of the specifics, the goal of the work is, as with all information systems, to move information across a wire: to provide relevant information to information-seeking users.
Given a set of documents, can we point a user to the one containing the information/answer they seek? Yes, but this requires the user to read, parse, and understand the document to find the answer to one specific question.
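To make that baseline concrete, here is a minimal sketch of the classic document-pointing approach: an inverted index over a handful of documents, ranked by how many query terms each contains. The document ids, texts, and query below are invented for illustration; a real system would add stemming, proper relevance ranking, and curated metadata.

```python
from collections import defaultdict

# Toy corpus; document ids and texts are invented for illustration.
documents = {
    "doc-sidewalk-signs": "Rules for sidewalk signs and sandwich boards outside storefronts.",
    "doc-tipped-wages": "Minimum wage and tip-credit rules for tipped restaurant employees.",
    "doc-food-permits": "How to apply for a mobile food vending permit in the city.",
}

# Build an inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term.strip(".,?")].add(doc_id)

def search(query: str) -> list[str]:
    """Rank documents by how many query terms they contain."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term.strip(".,?"), ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("Do I need a permit for food vending?"))
# 'doc-food-permits' ranks first.
```

Pointing at whole documents this way avoids both loss and distortion–the user gets the complete source–at the cost of making them read it.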
Instead of guiding users to a document with relevant information, the idea of interactive query environments, such as chatbots, is to provide snippets of information matching a query, like “My employee violated our company policy by being a woman. Am I able to terminate their employment?”
Disregarding the trolling nature of the question and the utter irrelevance of the answer [the chatbot said that, yes, you can fire her; this seems…bad?], the problems with extracting information from documents–as with all digital information environments–are:
- There is potential for information loss [not all information is moved]
- There is potential for information distortion [noise]
Reducing loss is hard work: if you do not provide complete documents, you must understand what information is relevant and ensure it survives; the distillation of facts, the limitations of interfaces, and the variable contexts of users are among the issues to navigate.
This work is done, every day, by information architects.
It is also hard work to mitigate noise: making sure the required information is preserved without adding confusing or unnecessary information–or other distractions. Noise, in information theory, can come from a number of sources, and mitigating it is critical to moving information across a wire clearly and effectively.
This work, also, is done, every day, by information architects. There are known solutions to these problems, and an entire [if small] field of professionals specializing in them.
But why hire professionals to solve problems when we can throw money at magical solutions?
Enter LLMs. Generative AI, admittedly, features cool stuff like very advanced natural-language query parsing [good], finding patterns in very large content sets [good], and outputting kind-of-impressive, mostly human-sounding [if anodyne] answer-shaped text strings. LLMs also promise to be magical solutions that replace the hard work of solving information problems and designing information environments.
Spoiler: they are not.
The issue–besides the well-documented ethical and environmental problems–is that GenAI does not preserve or deliver information; it provides statistically generated chunks of words in the shape of answers, which are all the more convincing since they “seem” like human, or human-curated, speech. So not only is the information coming across the wire incomplete: it’s not even information.
And the potential for noise is…I mean, it’s almost impossible to characterize how much noise is introduced. Perhaps it’s more accurate to say: all GenAI does is produce plausible-sounding noise. Coming back to the NYC chatbot: the answers used, rightly, as ammunition against the chatbot are excellent examples of the misinformation thus generated.
That’s why there are innumerable articles decrying this “service,” which is, by the way, still up and running.
Almost all of the output of that chatbot, I claim, is noise. There is no question that the machine generates answer-looking text strings! But it is inconceivably bad at, like, providing genuine information.
Why is this happening?
If the problem space is “given a corpus of a few thousand documents, distill facts and provide a chat interface to answer questions”–this is a solved problem, and one that can be solved without relying on planet-destroying, garbage-spewing copyright-infringement machines. It takes work, yes, but it is more sustainable, ethical, and I daresay effective to hire people who know how to do it than to throw “AI” at the problem [and have the solution fail publicly].
Information architecture can provide these solutions, from knowledge graphs to retrieval-augmented generation [RAG] over graphs [which tries to “fix,” or perhaps use the useful parts of, GenAI by applying ontologies–which seems like attaching a steam shovel to your breakfast cereal spoon] to rigorously organized and curated metadata schemas. Even just well-tagged documents plus a good search interface would work; a sketch of that last option follows below.
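As an illustration of the tagged-documents-plus-search option, here is a minimal sketch. The documents, tag names, and the `find` helper are all hypothetical; a real system would rest on a curated metadata schema and a proper search engine rather than substring matching.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    tags: set[str] = field(default_factory=set)

# Hypothetical corpus: each document carries curated metadata tags.
corpus = [
    Document("doc-tipped-wages",
             "Tip-credit and minimum wage rules for restaurant staff.",
             {"wages", "restaurants", "employment"}),
    Document("doc-termination-rules",
             "Anti-discrimination law: sex is a protected class in employment decisions.",
             {"employment", "discrimination", "termination"}),
]

def find(tag: str, keyword: str) -> list[Document]:
    """Filter by curated tag first, then do a simple keyword match."""
    return [doc for doc in corpus
            if tag in doc.tags and keyword.lower() in doc.text.lower()]

for doc in find("termination", "protected class"):
    print(doc.doc_id)  # prints: doc-termination-rules
```

The point is not the twenty lines of Python; it is that the hard part–curating the tags and distilling the texts–is exactly the work information architects already do every day.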
Again: there is no free lunch; all of these solutions require work–more specifically, they require paying people who know what they are doing to do the work. This is clearly unacceptable; instead, we should throw at the problem a bunch of technology completely unsuited to the task, costing much, much more than it would to, you know, pay people to do it right the first time, just to prove that we are implementing AI.
There are doubtless applications for GenAI; hopefully, we will soon discover what they are and stop trying to apply it as a panacea for all information problems. For applications like the chatbot discussed above it is, I propose, the wrong tool for the job. Even just a page of FAQs would be better than this environmental, ethical, and informational disaster.