Natural language is designed to be a good way to represent internal mental states. And internal mental states are where we exploit the brain's amazing capability to do parallel search for interconnections. So natural language has to be at the core of communicating clear thought. However, when you get a large amount of natural language, like a big textbook, I wonder how easy it is to get it into a good internal brain structure.
Anyway, this set me wondering whether one might try to copy the brain's internal structures a bit. The idea is to have nodes that are connected in multiple ways and amenable to computer processing. The text is unambiguous (as far as possible) because the ontology and the parsing are specified. Nodes can link to other nodes in various ways (a sketch of such a node follows the list), including:
- a (parameterized) Bayesian-network link specifying the probability of one node given another (when meaningful);
- software module interaction for nodes with associated software;
- just links;
- ...
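To make that concrete, here is a minimal sketch of such a node in Python. Everything here is illustrative: the field names, the way conditional probabilities are encoded, and the idea of attaching a callable as the node's software are my assumptions, not a worked-out design.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, Set


@dataclass
class Node:
    """One unambiguous statement (parsed against a fixed ontology) plus typed links."""
    node_id: str
    statement: str
    # Bayesian-network links: P(this node | parent), where that is meaningful.
    bayes_parents: Dict[str, float] = field(default_factory=dict)
    # Software attached to this node, e.g. a calculation over linked data.
    software: Optional[Callable[..., object]] = None
    # Plain, untyped links to other nodes.
    links: Set[str] = field(default_factory=set)


# Hypothetical example: a claim node linked to a data node that carries code.
gdp = Node("gdp-series", "Quarterly GDP figures",
           software=lambda xs: sum(xs) / len(xs))   # e.g. an average
claim = Node("claim-1", "Deficit spending raises inflation",
             bayes_parents={"gdp-series": 0.7},     # made-up P(claim | data)
             links={"gdp-series"})
```

The only point is that each kind of link in the list above gets its own slot, so a program can traverse them in different ways.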
The hope would be that you could put in a statement (like the economics one given partially above) and the system would search around, find other relevant nodes, find data that might bear on the matter, code that might let you do relevant calculations on that data, and other useful material. This would be linked to information relevant to the individual: individuals can specify how well they understand each node and how much they agree with it. If you want to understand something new, it would lead you through the other things you need to understand first. And it could do lots of other useful things to help you understand the subject...
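That "lead you through what you need first" part is essentially a topological walk over prerequisite links, filtered by what the individual already understands. A hedged sketch, assuming a simple node-id to prerequisites mapping (the graph format and function name are mine):

```python
def study_order(graph: dict[str, set[str]], target: str,
                understood: set[str]) -> list[str]:
    """Return prerequisites of `target` in the order they should be studied."""
    order: list[str] = []
    seen: set[str] = set()

    def visit(node: str) -> None:
        if node in seen or node in understood:
            return                       # already studied or already queued
        seen.add(node)
        for prereq in graph.get(node, set()):
            visit(prereq)                # learn the dependencies first
        order.append(node)

    visit(target)
    return order


# Toy example: 'prices' depends on 'money'; 'inflation' depends on both.
graph = {"inflation": {"prices", "money"}, "prices": {"money"}, "money": set()}
print(study_order(graph, "inflation", understood=set()))
# -> ['money', 'prices', 'inflation']
```

Nodes the individual has marked as understood simply drop out of the walk, so the path the system proposes shrinks as you learn.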