I really like that it talks about the ontological systems that are completely and utterly disregarded by the models. But then the article whiffed: it forgot all about how those systems could inform models and talked only about how they constrain them. The reality is the models do NOT consider any ontological basis beyond what is encoded in the language used to train them. What needs to be done is to let the LLMs somehow tap into ontological models as part of the process of generating responses. Then you could plug in different ontologies to make specialized systems.
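To make the "plug in an ontology" idea concrete, here's a minimal sketch of one way it could work: before generation, look up facts about entities mentioned in the query and prepend them to the prompt as grounding context. Everything here is illustrative and assumed, not a real API — the toy dict stands in for what would be an OWL/RDF store queried via SPARQL, and the names `TINY_ONTOLOGY`, `facts_for`, and `ground_prompt` are made up for this example.

```python
# Toy ontology: subject -> list of (relation, object) triples.
# A real system would query an RDF/OWL store via SPARQL instead.
TINY_ONTOLOGY = {
    "aspirin": [("is_a", "NSAID"), ("treats", "pain")],
    "NSAID": [("is_a", "drug_class")],
}

def facts_for(term, ontology, depth=2):
    """Collect triples reachable from `term`, following is_a links upward."""
    out, frontier = [], [term]
    for _ in range(depth):
        next_frontier = []
        for t in frontier:
            for rel, obj in ontology.get(t, []):
                out.append((t, rel, obj))
                if rel == "is_a":
                    next_frontier.append(obj)
        frontier = next_frontier
    return out

def ground_prompt(query, ontology):
    """Prepend ontology facts for any known entity mentioned in the query.

    The grounded prompt would then be handed to the LLM; swapping in a
    different ontology yields a differently specialized system.
    """
    facts = []
    for word in query.lower().split():
        facts += facts_for(word, ontology)
    if not facts:
        return query
    context = "\n".join(f"{s} {r} {o}" for s, r, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {query}"

print(ground_prompt("Does aspirin help?", TINY_ONTOLOGY))
```

The point of the sketch is that the ontology informs generation (it injects entailed facts, like the superclass chain aspirin → NSAID → drug_class) rather than merely filtering outputs after the fact.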
In theory something similar could be done with enough training. Guess what that would cost. Does enough clean water and energy exist to train it? Probably best not to find out, but techbros will try.
I don’t think a logical system like an ontology is really capable of being represented in neural networks with any real fidelity.
Well it does great with completely illogical systems. I wonder if one can be used for a random seed? 🤔