One Web. One API.

Retire the CRUD Monkey: Knowledge Graphs & REST for Web-Native Architecture

August 2, 2025

Most developers are CRUD monkeys. They just make systems that create, read, update, or delete rows in a database.

David Heinemeier Hansson

The legacy foundation we still code on

Relational databases and SQL were invented in the 1970s—well before hyperlinks, URIs, or JSON. The MVC pattern was formalised for desktop GUIs in the 1980s. Today we still bolt these technologies onto HTTP and call it a web stack.

  • Every HTTP request is routed through a controller that translates URIs to primary keys.
  • Object-relational mappers duplicate the database schema in code, then break whenever that schema evolves (sketched after this list).
  • Each micro-service publishes its own JSON contract.
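
A sketch of that translation tower, with hypothetical Flask and SQLAlchemy names standing in for whatever framework a team happens to use: the same person record ends up described three times, as a table, as a mapped class, and as a hand-built JSON payload.

```python
# Hypothetical sketch of the classic translation tower:
# URI -> controller -> ORM class -> SQL table -> hand-built JSON.
from flask import Flask, jsonify
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

app = Flask(__name__)
engine = create_engine("sqlite:///app.db")
Base = declarative_base()

class Person(Base):                        # duplicates the table definition in code
    __tablename__ = "people"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String)

@app.route("/people/<int:person_id>")      # controller translates the URI into a primary key
def get_person(person_id):
    with Session(engine) as session:
        person = session.get(Person, person_id)
        if person is None:
            return jsonify({"error": "not found"}), 404
        # hand-crafted JSON contract, repeated (slightly differently) in every service
        return jsonify({"id": person.id, "name": person.name, "email": person.email})
```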

This complexity is part of what The Atlantic dubbed “the coming software apocalypse”: bugs hiding in layers of glue code, outages costing billions, and developers spending careers shuffling boilerplate instead of solving business problems.

A web-native alternative

The web-native solution involves two fundamental principles that replace the whole translation tower:

  • Common, web-native data model — global identifiers and first-class links
  • Uniform interface — standard protocols that work consistently across resources

In practical terms, this means:

Principle          | Implementation        | What it gives you
Common data model  | RDF                   | Global, web-native identifiers (URIs) and first-class links
Uniform interface  | SPARQL 1.1 protocols  | Standard query (SPARQL Protocol) and CRUD (Graph Store Protocol) over HTTP
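
As a rough illustration, here is what both protocols look like from a client. The endpoint URLs, graph name, and data below are hypothetical; any store that implements the SPARQL 1.1 protocols is addressed the same way.

```python
# Sketch of the two standard protocols (hypothetical endpoint URLs).
import requests

SPARQL_ENDPOINT = "https://example.org/sparql"   # SPARQL 1.1 Protocol (query)
GRAPH_STORE = "https://example.org/data"         # SPARQL 1.1 Graph Store Protocol (CRUD)

# Read: a standard query, POSTed as application/sparql-query.
query = """
SELECT ?name WHERE {
  <https://example.org/people/alice> <http://xmlns.com/foaf/0.1/name> ?name .
}
"""
resp = requests.post(
    SPARQL_ENDPOINT,
    data=query,
    headers={"Content-Type": "application/sparql-query",
             "Accept": "application/sparql-results+json"},
)
print(resp.json())

# Write: PUT a Turtle document into a named graph (this is the whole write API).
turtle = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<https://example.org/people/alice> foaf:name "Alice" .
"""
requests.put(
    GRAPH_STORE,
    params={"graph": "https://example.org/people/alice"},
    data=turtle.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
# Delete uses the same interface:
# requests.delete(GRAPH_STORE, params={"graph": "https://example.org/people/alice"})
```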

A request URI is the identifier used inside the triplestore. No mapping layer, no ORM, no duplicated "DTOs".
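
For example, assuming a store that dereferences its resource URIs with content negotiation (the URI below is hypothetical), the identifier you fetch over HTTP is the very identifier used as the subject in the graph.

```python
# Sketch: the URI you dereference is the URI stored in the triplestore.
import requests
from rdflib import Graph, URIRef

person = "https://example.org/people/alice"        # hypothetical resource URI
resp = requests.get(person, headers={"Accept": "text/turtle"})

g = Graph().parse(data=resp.text, format="turtle")
for p, o in g.predicate_objects(URIRef(person)):   # same URI, no ID translation
    print(p, o)
```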

Engineering & business impact

Dimension            | MVC / SQL stack                                        | Knowledge Graph + REST
API surface          | Dozens of hand-crafted, app-specific endpoints         | One semantic, URI-based API for all resources
Schema change        | Manual migrations, fragile mappings, app redeploys     | Add, update, or deprecate terms without breaking consumers
Integration cost     | ETL pipelines, brittle gateways, custom sync logic     | Seamless federation across domains; link instead of sync
Developer hours      | Controller boilerplate duplicated across services      | One generic controller; reuse logic across the stack
Operations & cost    | Multiple stacks, version drift, gateway maintenance    | Zero-gateway architecture; HTTP caching and invalidation
Agent / AI readiness | Isolated silos, non-linkable IDs, hardcoded contracts  | Globally dereferenceable data; machine-readable semantics out of the box

Data-first, future-proof

Traditional stacks revolve around code and tables: databases define schemas, which shape JSON payloads, which are routed through controller logic. The data model is an implementation detail—tightly coupled to apps and rewritten every few years.

RDF flips this. The graph comes first—global, self-describing, and independent of any single application. Software is designed around the data, not the other way around.

  • Data lives longer than code — RDF models persist across app rewrites, frameworks, and org charts.
  • One model, many consumers — agents, APIs, UIs all draw from the same source of truth.
  • Standards, not snowflakes — RDF and SPARQL are W3C standards with decades of tooling and adoption.

Investing in RDF is investing in the long game. It’s a stack that doesn’t assume your framework, cloud, or team will stay the same. It assumes only this: the data will still matter.

But "my team doesn't know RDF"

True—SPARQL isn't yet mainstream. Organisations therefore face a choice:

  1. Continue funding CRUD boilerplate and accept the high integration costs that come with maintaining dozens of bespoke APIs.
  2. Invest in — or hire — RDF expertise. A small team learns SPARQL once, then eliminates controller code permanently. In most cases the training cost is a fraction of annual API maintenance.

Migration isn't all-or-nothing

Teams can adopt RDF incrementally:

  • Read-only virtual graphs — expose existing SQL data through a Virtual Knowledge Graph (VKG) such as Ontop, following the zero-ETL approach (see the sketch after this list).
  • Progressive write paths — new services write natively to the triplestore; legacy tables retire over time.
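
As a sketch of how the two stages coexist, a single federated query can join the native triplestore with the virtual graph exposed over the legacy SQL database, so nothing has to be synchronised. The endpoint URLs and the vocabulary term below are assumptions, not a prescribed setup.

```python
# Sketch: one federated query spanning the native triplestore and a
# virtual graph over the legacy SQL database (hypothetical URLs and term).
import requests

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person ?name ?orderTotal WHERE {
  ?person foaf:name ?name .                        # native RDF data
  SERVICE <https://legacy.example.org/sparql> {    # SPARQL 1.1 federation
    ?person <https://example.org/vocab/orderTotal> ?orderTotal .  # rows mapped by the VKG
  }
}
"""
resp = requests.post(
    "https://graph.example.org/sparql",
    data=query,
    headers={"Content-Type": "application/sparql-query",
             "Accept": "application/sparql-results+json"},
)
for row in resp.json()["results"]["bindings"]:
    print(row["name"]["value"], row["orderTotal"]["value"])
```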

Agents need Linked Data

The personal data locker is David Siegel's 2009 concept for a personal cloud repository containing all your data—identity, preferences, transaction history—with granular privacy controls.

The AI agent revolution isn't coming—it's here. But today's agents hit the same integration walls that plague human developers: incompatible APIs, brittle JSON contracts, and endless authentication flows.

Consider an AI travel agent that needs to book flights, reserve hotels, and update your calendar. In today's push-driven web, it must navigate dozens of proprietary APIs, each with its own authentication scheme, data format, and error handling. One API change breaks the entire chain.

Linked Data solves this through three architectural principles that make the web agent-ready:

  • Global identifiers (URIs) — every resource has a web-native address that's both human-readable and machine-actionable
  • Machine-readable semantics (RDF vocabularies) — data describes itself, eliminating guesswork about field meanings and relationships
  • Predictable read/write interface (REST) — one protocol for all CRUD operations, regardless of domain or vendor

RDF + REST eliminates the controller layer that currently sits between agents and data. A huge step towards the personal data locker would be agents pulling exactly the data they need from globally addressable, self-describing resources.
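
A minimal sketch of that pull model, assuming resources are published as dereferenceable RDF: the agent fetches a resource, reads its self-describing triples, and follows a link to the next resource it needs. The URIs below are hypothetical; the vocabulary terms are schema.org's.

```python
# Sketch: an agent pulling self-describing Linked Data instead of calling
# bespoke JSON APIs (hypothetical URIs).
import requests
from rdflib import Graph, Namespace, URIRef

SCHEMA = Namespace("https://schema.org/")

def fetch(uri: str) -> Graph:
    """Dereference a URI and parse the RDF it returns."""
    resp = requests.get(uri, headers={"Accept": "text/turtle"})
    return Graph().parse(data=resp.text, format="turtle")

booking = URIRef("https://example.org/bookings/42")   # hypothetical starting point
g = fetch(str(booking))

# The data describes itself: no out-of-band API docs are needed to find the hotel link.
for hotel in g.objects(booking, SCHEMA.reservationFor):
    hotel_graph = fetch(str(hotel))                   # follow the link ("follow your nose")
    for name in hotel_graph.objects(hotel, SCHEMA.name):
        print("Reserved at:", name)
```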

This isn't a new idea. In 2001, Tim Berners-Lee envisioned exactly this in his foundational Scientific American article "The Semantic Web"—agents that could automatically book medical appointments by discovering doctors, checking insurance networks, and coordinating calendars without human intervention. What seemed like science fiction then becomes practical reality when LLM-powered agents meet semantic web infrastructure.

Conclusion

The web is global, but much server code still targets local, table-oriented storage. RDF for data and REST for CRUD access replace layers of translation with one standards-based API.

It's 2025. Retire the CRUD monkey and redirect talent to the higher-value layers where innovation happens.

Further reading