Modern AI systems are "black boxes": they can produce useful output, but there is no way to look inside the box and ask why they answered the way they did.
At Neocortix, we've invented a new technology called Deep Attribution Networks, which allows modern AI systems to explicitly remember their training data, so that they can identify the source of what they are talking about. The source can be shown directly to the user as a citation and footnote linking to the original document, which may be desirable as a product feature in some application areas. You can see it working in the demo above.
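To make the idea concrete, here is a minimal sketch of attribution by similarity search over an indexed corpus. It is an illustration only, not the Deep Attribution Networks implementation, which we have not published here: the corpus entries, the URLs, and the simple word-overlap scoring are hypothetical stand-ins for learned attribution.

```python
import re
from dataclasses import dataclass


@dataclass
class SourceDoc:
    text: str
    url: str  # hypothetical link rendered to the user as a footnote


# Toy stand-in for an indexed training corpus; entries are invented.
CORPUS = [
    SourceDoc("The Eiffel Tower was completed in 1889 in Paris.",
              "https://example.com/eiffel-tower"),
    SourceDoc("Mount Everest is the highest mountain above sea level.",
              "https://example.com/everest"),
]


def tokens(text: str) -> set[str]:
    """Lowercased word set; a real system would use learned embeddings."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def attribute(statement: str) -> SourceDoc:
    """Return the corpus document with the highest word-overlap score."""
    def overlap(doc: SourceDoc) -> float:
        a, b = tokens(statement), tokens(doc.text)
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(CORPUS, key=overlap)


def with_citation(statement: str) -> str:
    """Render the statement with a footnote linking to its best source."""
    src = attribute(statement)
    return f"{statement} [1]\n[1] {src.url}"


print(with_citation("The Eiffel Tower was completed in 1889."))
```

In a production system the footnote rendering would be the same, but the attribution step would come from the network itself rather than from an after-the-fact search.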
But it can also be useful to identify the source internally, even when it is not displayed to the user. This capability can be used to reduce hallucinations, since the system now "knows what it is talking about" and can make better decisions about what to say next. Can you imagine holding an intelligent conversation without being able to remember anything you have learned or experienced? Naturally, the more you can remember of what you have learned and experienced, the more intelligently you can converse: intelligent conversation depends on being able to access the memories of what you know.
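The same attribution score can be used internally as a hallucination gate. The sketch below is again illustrative, under the assumption of a simple support threshold; the threshold value, the source list, and the word-overlap scoring are hypothetical, not our actual method. A candidate statement is passed through only if some known source supports it strongly enough.

```python
import re

# Toy stand-in for known training sources; entries are invented.
KNOWN_SOURCES = [
    "The Eiffel Tower was completed in 1889 in Paris.",
    "Mount Everest is the highest mountain above sea level.",
]


def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def support_score(statement: str) -> float:
    """Best word-overlap score against any known source; a toy
    stand-in for a learned attribution score."""
    a = tokens(statement)
    best = 0.0
    for src in KNOWN_SOURCES:
        b = tokens(src)
        if a | b:
            best = max(best, len(a & b) / len(a | b))
    return best


def vet(statement: str, threshold: float = 0.5) -> str:
    """Emit a statement only if some source supports it; the 0.5
    threshold is an arbitrary illustrative choice."""
    score = support_score(statement)
    if score >= threshold:
        return f"supported ({score:.2f}): {statement}"
    return f"flagged as unsupported ({score:.2f}): {statement}"


print(vet("The Eiffel Tower was completed in 1889."))  # supported
print(vet("The Eiffel Tower was built on the Moon."))  # flagged
```

A flagged statement need not be shown to the user at all; the system can revise it or say less, which is what "making better decisions about what to say next" means in practice.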
Deep Attribution Networks by Neocortix are an essential technology for providing citations and for detecting, monitoring, and eliminating hallucinations in Large Language Models.