In the last few weeks since we announced Neo4j 1.4 GA, we’ve been busy working on improvements to the codebase for more predictability, better backup performance, and improved scripts for the server. Ordinarily we’d roll these improvements into a milestone, but this time around we think they’re important enough to warrant a stable release, and so today we’re announcing the release of Neo4j 1.4.1 GA.
Predictable commit semantics
When working with indexes, there has been some confusion about when index data becomes visible relative to the corresponding graph data. In this release we've taken a firm stance on predictability: in a two-phase commit, the graph data source always commits first, followed by the index providers.
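To make the guarantee concrete, here is a minimal sketch of that ordering in a generic two-phase commit: all participants prepare, then commit in a fixed order with the graph data source ahead of the index providers. This is an illustration of the contract, not Neo4j's actual transaction manager; the names here are hypothetical.

```python
class Resource:
    """A hypothetical participant in a two-phase commit."""

    def __init__(self, name):
        self.name = name
        self.prepared = False

    def prepare(self):
        # Phase one: each participant votes to commit.
        self.prepared = True
        return True

    def commit(self, log):
        # Phase two: record the commit order so it can be inspected.
        assert self.prepared
        log.append(self.name)


def two_phase_commit(graph_source, index_providers):
    """Commit the graph data source first, then each index provider."""
    resources = [graph_source] + list(index_providers)
    if not all(r.prepare() for r in resources):
        return None  # any "no" vote would abort the transaction
    log = []
    for r in resources:  # fixed order: graph first, indexes after
        r.commit(log)
    return log


order = two_phase_commit(Resource("graph"), [Resource("lucene-index")])
```

With this ordering, `order` comes back as `["graph", "lucene-index"]`: index entries can never become durable ahead of the graph data they point at.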
Large backup support
In previous versions of Neo4j, very large online backups running over many hours could cause the online backup tool to throw out-of-memory errors, making for an unreliable backup process. We've now hardened the tool and made the chunk size and client read timeout configurable, so large backups should run much more smoothly.
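The reason a bounded chunk size keeps memory flat is that the backup stream is consumed one fixed-size piece at a time rather than buffered whole. The sketch below illustrates the idea with an in-memory stream; the helper name and the 1 MiB chunk size are illustrative assumptions, not Neo4j's implementation or defaults (and the configurable client read timeout, which bounds how long the client waits for each chunk, is not modelled here).

```python
import io


def stream_backup(source, destination, chunk_size=1024 * 1024):
    """Copy a backup stream in fixed-size chunks, returning bytes copied."""
    copied = 0
    while True:
        chunk = source.read(chunk_size)  # holds at most one chunk in memory
        if not chunk:
            break
        destination.write(chunk)
        copied += len(chunk)
    return copied


# Usage: a 10 MiB "store" copied 1 MiB at a time.
src = io.BytesIO(b"\x00" * (10 * 1024 * 1024))
dst = io.BytesIO()
total = stream_backup(src, dst)
```

However large the store, peak memory use stays proportional to the chunk size rather than to the store size.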
Server scripts made more cross-platform
In the 1.4 GA release we removed the 3rd-party server wrappers from the codebase, since they'd caused so much pain. Instead, we provided bash scripts and batch files to run the Neo4j server. Even though we thought we had some leet bash skills, it turned out that some of the scripts we'd written didn't play well with certain bash variants. This time around we're confident that our server management scripts will work in pretty much any environment, so give them a spin.
Bug fixes and improvements
A big thanks to our community for finding and reporting their experiences with the database. Thanks to those efforts, we've fixed a number of bugs and annoyances in this release, including incorrect relationship counts, a possible null pointer exception when adding properties, and the intricacies of file handling across different operating system and file system combinations. We also took the opportunity to improve our logging of critical exceptions within the transaction manager.