Wednesday, November 13, 2013

Why Graph Databases are the best tool for handling connected data like in Diaspora

Handling connected domains with the “right tool for the job”

Michael Hunger
Sarah Mei recently wrote a great blog post describing the problems she and her colleagues ran into when managing highly connected data using document databases.

Document databases (like other aggregate-oriented databases) do a good job at storing a single representation of an aggregate entity but struggle to handle use-cases that require multiple, different views of the domain. Handling connections between documents tends to be an afterthought that isn’t covered well by the aggregate data model.

Real world use-cases

Sarah described how she worked on a TV show application at Pivotal and discussed the modeling and data management implications that surfaced when the application’s use-case evolved.

The same applied to the Diaspora project, which started out as a Ruby on Rails application using MongoDB.

In both projects, evolving requirements caused difficulties with the chosen data model and database, which triggered the move to PostgreSQL. A relational database was chosen because it allowed some of the fidelity of the domain model to return.

Unfortunately this comes at the cost of queries with a high number of JOINs, which can cause performance issues.

Fortunately there is a data model that embraces rich connections between your domain entities: graph databases.

Live Graph data models of Diaspora and the TV-Show

To show how a graph database would handle these use-cases, we created two live graph data-models: one of a social network like Diaspora and one of the TV-show domain. For each we set up a small example data set and then expressed the use-cases she mentioned as graph search queries in the Cypher graph query language. These GraphGists allow easy modeling discussions and live exploration of the dataset and use-cases, and provide a good starting point for your own (forked) variant of the domain model.

Example graph model - TV Shows

To develop the models quickly, we sketch the typical patterns that we look for in the graph when answering the use-cases described. We call it whiteboard-friendliness :)

Shows, seasons and episodes
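One way to sketch this hierarchy as a graph pattern (the HAS_SEASON and HAS_EPISODE relationship names are our illustrative choice, not taken from the live model):
(:Show)  -[:HAS_SEASON ]->(:Season),
(:Season)-[:HAS_EPISODE]->(:Episode)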

Characters played by actors, featured in an episode
(:Episode)  -[:FEATURED_CHARACTER]->(:Character),
(:Character)<-[:PLAYED_CHARACTER ]-(:Actor)

Users writing reviews for individual episodes
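A possible pattern for this use-case (the WROTE_REVIEW and REVIEW_OF relationship names are again illustrative assumptions):
(:User)  -[:WROTE_REVIEW]->(:Review),
(:Review)-[:REVIEW_OF   ]->(:Episode)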

Using these basic patterns we can quickly create sample data for the domain and also develop the queries used to solve the use-cases. For example:

Listing all the episodes (the filmography) of an actor across shows

MATCH (actor:Actor)-[:PLAYED_CHARACTER]->(character),
      (character)<-[:FEATURED_CHARACTER]-(episode)
WHERE = "Josh Radnor"
RETURN episode
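For completeness, a small slice of sample data behind such a query could be created with a single Cypher CREATE statement (the HAS_SEASON and HAS_EPISODE relationship names and the concrete property values are illustrative, not taken from the live model):

CREATE (show:Show {name:"How I Met Your Mother"}),
       (season:Season {number:1}),
       (episode:Episode {number:1, title:"Pilot"}),
       (ted:Character {name:"Ted Mosby"}),
       (josh:Actor {name:"Josh Radnor"}),
       (show)-[:HAS_SEASON]->(season),
       (season)-[:HAS_EPISODE]->(episode),
       (episode)-[:FEATURED_CHARACTER]->(ted),
       (josh)-[:PLAYED_CHARACTER]->(ted)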

Please check it out in more detail in the live graph model.

Example graph model - Social Network

Users, friends, posts
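As graph patterns, using the same FRIEND and POSTED relationship types that appear in the queries below:
(:User)-[:FRIEND]-(:User),
(:User)-[:POSTED]->(:Post)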


Posts, comments and commenters
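A plausible sketch of this part of the model (the COMMENTED and ON relationship names are our assumption):
(:User)   -[:COMMENTED]->(:Comment),
(:Comment)-[:ON       ]->(:Post)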


Users like posts
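As a pattern, using the LIKED relationship type from the query below:
(:User)-[:LIKED]->(:Post)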


Find the posts made by Rachel’s friends

MATCH (u:User)-[:FRIEND]-(f)-[:POSTED]->(post)
WHERE = "Rachel Green"
RETURN AS friend, post.text AS content

List people who liked posts made by Rachel’s friends

MATCH (u:User)-[:FRIEND]-(f)-[:POSTED]->(post)<-[:LIKED]-(liker)
WHERE = "Rachel Green"
RETURN AS friend, post.text AS content,
       COLLECT( AS liked_by

Please check it out in more detail in the live graph model.

Graph Databases as a niche technology?

As you can see, it is incredibly easy to model these use-cases with a graph database. So why weren’t they considered? To quote from the article:

But what are the alternatives? Some folks say graph databases are more natural, but I’m not going to cover those here, since graph databases are too niche to be put into production.

This is an interesting observation, as Neo4j is the most widely used graph database and has been running in production setups for more than 10 years now. Neo Technology has more than 100 paying customers (30 of which are Global 2000 companies), and tens of thousands of community users have deployed Neo4j as a database backing production applications. These use-cases span industries from network management, gaming, social, finance, and job search to logistics and dating sites.

We can understand why some people may have felt that graph databases were a niche technology in 2010 when Diaspora got started - we actually backed Diaspora on Kickstarter and offered our help at the time - but now the landscape has changed and graph databases are an uncontroversial choice.

Judge for yourself

If you work in a domain with richly connected data, we encourage you to try to model it as a graph and manage it with a graph database. For more insight into how this works, feel free to check out the freely available book “Graph Databases” from O’Reilly.

Also, the offer to support Diaspora still stands! We’re happy to help so please reach out to us if you’re interested. You can also follow the discussion with Sarah on Twitter. Feel free to jump in!


Michael Hunger (@mesirii) 

with help from Kenny Bastani, Mark Needham and Peter Neubauer


Ludovic Urbain said...

Bullshit gold.

Joins or subqueries can be optimized into one another and vice-versa depending on the specific characteristics (SQL Server for example requires you to write only subqueries and auto-optimizes into joins where applicable - supposedly).

"Graph" databases are just a simplified case of SQL (arguably, the most useful one) and are subject to the exact same slowdown due to the number of relations.

There is no magic that prevents any database from doing a lookup or a hashjoin or an index join to actually get to that relationship and the supposed benefits of neo4j are thus null.

Additionally, pretending Neo4j is better than PostgreSQL when you discard a mature and advanced language like SQL for a simplified incomplete semi-specific language is ludicrous.

Once again, NoSQL proponents show just how little they know about databases, congratulations.

Philip Rathle said...

Ludovic: what you say may be true of some graph databases--like FlockDB which is just a thin layer atop MySQL-- but it's not true of Neo4j.

Here's the tech secret (not magic, even if it can sometimes feel that way to users): records are stored on disk & in memory in fixed-length buffers, and point to each other using (offset) pointers. This technique, which amounts to storing how data is (not only logically, but physically) related at write time, rather than at read time, allows millions of hops per second per thread. It also means that queries like ShortestPath (aka Kevin Bacon queries) take the same amount of time whether your data set has 1000 people or 1B people (which some of our production customers do).

By contrast, queries like this take progressively longer to run in RDBMSs as the database grows, because the b-tree lengthens and deepens as the database grows in size. Semantically ShortestPath is a great example of an arbitrary path length query where a specialized language can be very helpful. I've seen four-line Cypher queries that require 50+ lines of SQL.

If it were a question of hype, the companies below, especially the large, conservative ones, wouldn't have bothered moving from RDBMSs.

PostgreSQL and other RDBMSs have their place--like you, I made a career of working with them until 18 months ago. Graph databases have their place though, and can do amazing things for the right kinds of data & problems.

Facebook was forced to build its own graph database to handle features like graph search. LinkedIn and Twitter did the same: built their own, because there was no off-the-shelf option when they got started. For people who don't have the time, money, or wherewithal to build & maintain their own (graph) database, there's Neo4j.

Ludovic Urbain said...

Philip Rathle: You can't set pointers at write time that will also be correct when loading in-memory, unless of course you load the whole block at once, which doesn't make much sense.

According to you, the particularity with Neo4j is that it stores the relations with the item, which is basically just another way to model your data, that does provide faster access to relations.

Your queries don't take the same amount of time whether your data set has 1000 or 1B people, just accessing the nearby nodes is already going to be slower.

RDBMS's indexes aren't limited to B-trees, and I'm sure there are problems that are easier to solve with this tool.

It's just obviously wrong to present it as a general tool when it only serves very specific purposes.

If I understand you correctly, what neo4j does that is interesting is a) load data in-memory in a relation-efficient way and b) provide a simpler relation querying language.

If that is correct, it would be much better as a plugin to a real database than as a standalone tool.

DB > in-memory graph > result

Kenny Bastani said...

Ludovic: There are no statements here that claim Neo4j is "better" than PostgreSQL in a general sense. The article simply articulates that graph databases allow you to do fairly complex things very simply.

Take a look at

You'll find a set of interactive tutorials that show you the benefit of using Neo4j to solve some complex problems.



pandahands said...

I would like to learn this stuff for handling recruitment data. It looks good and seems intuitive. One problem I have is building the web interface, as I haven't got much experience of that, and also finding the best way to model the data.