Ready? Grab the tarball or deb from http://aphyr.github.com/riemann/
0.1.3 is a consolidation release, comprising 2812 insertions and 1425 deletions. It includes numerous bugfixes, performance improvements, features (especially integration with third-party tools), and clearer code. This release includes the work of dozens of contributors over the past few months, who pointed out bugs, cleaned up documentation, smoothed over rough spots in the codebase, and added whole new features. I can’t say thank you enough, to everyone who sent me pull requests, talked through designs, or just asked for help. You guys rock!
I also want to say thanks to Boundary, Blue Mountain Capital, Librato, and Netflix for contributing code, time, money, and design discussions to this release. You’ve done me a great kindness.
- streams/tagged-all and tagged-any now accept a single string as well as a vector of tags to match.
- The bin/riemann script now execs java directly, instead of launching it in a subprocess.
- Servers bind to 127.0.0.1 by default, instead of (possibly) the IPv6 localhost only.
- Fixed the use of the obsolete :metric_f in the default package configs.
- Thoroughly commented and restructured the default configs to address common points of confusion.
- Deb packages no longer overwrite /etc/riemann/riemann.config by default; it is treated as a conffile.
- Librato metrics adapter is now built in.
- riemann.graphite includes a graphite server which can accept events sent via the graphite protocol.
- Scheduled tasks are more accurate and consume fewer threads. Riemann’s clock can switch between wall-clock and virtual modes, which allows for much faster, more reliable tests.
- Unified stream window API provides fixed and moving windows over time and events.
- riemann.time: controllable centralized clocks and task schedulers.
- riemann.pool: a threadsafe fixed-size bounded-latency regenerating resource pool.
- where*: like where, but takes a function instead of an expression.
- smap: streaming map.
- sreduce: streaming reduce.
- fold-interval: reduce over time periods.
- fixed-time-window: stream which passes on a set of events every n seconds.
- fixed-event-window: pass on disjoint sets of n events each.
- moving-time-window: pass on the last n seconds of events.
- moving-event-window: pass on the last n events.
- (where) can take an (else) clause whose streams are called when the expression does not match a given event.
- Converted useless multimethods to regular methods.
- TCP and UDP servers are roughly 15% faster.
- New Match protocol for matching predicates like functions, values, and regexes. Used in (where) and (match).
- streams/match is simpler and more powerful, thanks to Match.
- Numerous concurrency improvements.
- Pagerduty adapter is included in config by default.
- Graphite adapter includes a connection pool, reconnects properly, and has bounded latency.
- Email formatting shows more state information in the body.
- Indexes are seqable.
- Travis-CI support.
- Unified protocol buffer parsing paths.
- Clearer, faster tests, especially in streams.
- New tasks for packaging under lein2.
- riemann.deps provides an experimental dependency resolver. API subject to change. If you’re working with dependent services in Riemann, I’d like your feedback.
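To give a feel for how the new stream primitives fit together, here is a small riemann.config sketch combining where*, the (where ... (else ...)) clause, fixed-time-window, and smap. The service names and threshold are invented for illustration; this isn't from the release itself.

```clojure
; Hypothetical config sketch showing some of the 0.1.3 streams.
(streams
  ; where* takes a predicate function instead of a where expression.
  (where* (fn [e] (and (:metric e) (< 0.9 (:metric e))))
    prn)

  ; (where) with an (else) clause: matching events flow to the first
  ; children; everything else flows to the streams inside (else).
  ; The Match protocol lets (service ...) match against a regex.
  (where (service #"^api ")
    ; Every 5 seconds, pass a vector of the window's events downstream.
    (fixed-time-window 5
      ; smap transforms each incoming value; here, fold a window of
      ; events into a single rate event and index it.
      (smap (fn [events]
              {:service "api event rate"
               :metric  (count events)
               :time    (:time (last events))})
        index))
    (else index)))
```

Swap fixed-time-window for moving-event-window (or the other window streams above) to get sliding rather than tumbling semantics; the downstream smap sees a vector of events either way.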
What’s next for Riemann?
We have quite a few new features in riemann-tools master, so that release should be coming up shortly. The dashboard is in a poor state right now, halfway between the old and next-gen interfaces: I need to reach feature parity with the old UI and finish styling, then make a release of riemann-dash. I’m also going to rework the web site to be friendlier to beginners, and add a howto section with cookbook-style directions for solving specific problems in Riemann.
In Riemann itself, I have plans to improve Netty performance, and I want to write some Serious Benchmarks to explore concurrency tuning. After that I plan to tackle a Big Project: either persistent indexes or high availability. Those two features will comprise 0.1.4 and 0.1.5.
If you’re interested in funding any of this work, please let me know. :)