Curious what people think about next steps: would you keep it ontology-light, or try to add ontology-aware reasoning (subclasses, property chains, constraints) on top of these hubs before/after retrieval?
This “hub” idea also seems like a smarter alternative to random text chunks in vanilla RAG. Precomputing small, meaningful subgraphs around papers/contributions lets embeddings see multi-hop structure, not just isolated sentences.
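To make that concrete, here's a rough sketch of what precomputing a hub could look like: BFS out to a fixed hop radius from a paper node and linearize the collected triples into text for the embedder. The example triples, the 2-hop radius, and the "s p o" linearization are all my own illustrative assumptions, not the paper's actual pipeline.

```python
from collections import deque

def hub_subgraph(triples, seed, hops=2):
    """Collect all triples reachable within `hops` of `seed` (undirected BFS)."""
    adj = {}
    for s, p, o in triples:
        adj.setdefault(s, []).append((s, p, o))
        adj.setdefault(o, []).append((s, p, o))
    seen_nodes = {seed}
    seen_triples = set()
    hub = []
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # stop expanding at the hub boundary
        for t in adj.get(node, []):
            if t not in seen_triples:
                seen_triples.add(t)
                hub.append(t)
            for neighbor in (t[0], t[2]):
                if neighbor not in seen_nodes:
                    seen_nodes.add(neighbor)
                    frontier.append((neighbor, depth + 1))
    return hub

def linearize(hub):
    """Serialize the hub into text an embedding model can consume."""
    return " . ".join(f"{s} {p} {o}" for s, p, o in hub)

# Toy scholarly KG (made-up entities, just for illustration)
triples = [
    ("Paper1", "hasContribution", "Contribution1"),
    ("Contribution1", "usesMethod", "TransformerXL"),
    ("Contribution1", "evaluatedOn", "WikiText-103"),
    ("TransformerXL", "extends", "Transformer"),
]
hub = hub_subgraph(triples, "Paper1", hops=2)
print(linearize(hub))
```

The point of the sketch is the retrieval unit: the embedder sees the paper together with its 2-hop neighborhood (contribution, method, dataset) as one string, so multi-hop structure survives into the vector, whereas the 3-hop triple (`TransformerXL extends Transformer`) falls outside the hub.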
What I really like here is that it’s RDF-based but schema-agnostic – no NL→SPARQL pipeline, no heavy TBox logic. Just using triples as graph-shaped context and for provenance, which feels very pragmatic for scholarly QA.