WHAT'S NEW The check types available in Circonus offer many protocol-specific methods of data collection for our various integrations. In addition to these pre-made integrations, Circonus has custom checks. These include the JSON check, which allows Circonus to collect data from virtually anywhere, and the CAQL check, which enables users to create checks based on data analysis. Of course, the API unlocks even more flexibility. This week, Circonus added automatic broker selection via the API for CAQL checks.
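As a rough illustration, here is what creating a CAQL check bundle through the API might look like. This is a hedged sketch, not official documentation: the field layout follows the general shape of a Circonus `check_bundle`, but the specific CAQL query, the metric name, and the assumption that an empty `brokers` list triggers the new automatic broker selection are ours for illustration.

```python
import json

def build_caql_check_bundle(caql_query, period=60):
    """Build an illustrative check_bundle payload for a CAQL check.

    Assumption for this sketch: leaving "brokers" empty lets the new
    automatic broker selection pick a broker for the CAQL check.
    """
    return {
        "type": "caql",
        "config": {"query": caql_query},
        "period": period,              # evaluation period in seconds
        "brokers": [],                 # assumed: empty -> auto broker selection
        "metrics": [{"name": "output[1]", "type": "numeric"}],
    }

bundle = build_caql_check_bundle('metric:average("example-check-uuid", "latency")')
print(json.dumps(bundle, indent=2))

# The payload would then be POSTed to the API's check_bundle endpoint
# with your auth token, e.g. (not executed here):
# requests.post("https://api.circonus.com/v2/check_bundle",
#               headers={"X-Circonus-Auth-Token": TOKEN,
#                        "X-Circonus-App-Name": "my-app"},
#               json=bundle)
```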
Want to see more innovations happening in Circonus? Read More »
THE PRACTICE Last time, we revealed a few things we’re working on in the Circonus Roadmap. We mentioned how our changes to ZFS defragmentation will reduce or eliminate the need for certain maintenance procedures for On-Premise users of Circonus and IRONdb. Part of that improvement comes from another big change we’ve been planning for a long time, one that will be very helpful to both our On-Premise and SaaS users.
Until now, there have been two approaches to data collection: collect raw data and store it forever (which is inefficient and severely limits how much you can collect), or collect data and roll it up into averages (which lets you store more data, but costs you resolution). Circonus stores the complete distribution of your data at 1-minute resolution indefinitely. This is the best-case scenario for the second approach, since we’re alerting on the data in real-time before the rollup, rolling it up only once at a relatively high resolution, and storing a distribution instead of just an average.
Circonus will be the first monitoring platform to adopt a third, hybrid approach. Soon, Circonus will begin collecting and storing raw data, with millisecond resolution, for 4 weeks. After that, the complete distribution will be stored for each 1-minute interval at 1-minute resolution indefinitely, just like it is now. This gives you the visualization accuracy benefits of being able to zoom in on millisecond resolution data, while retaining the visualization performance benefits of rollups that load years of 1-minute resolution data in seconds. It’s the best of both worlds, whether you’re debugging your system after you get one of our real-time alerts or performing analysis on years of historical data.
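To make the rollup idea concrete, here is a toy sketch (our own simplification, not Circonus internals) of turning raw millisecond-resolution samples into per-minute distributions. The key point is that each 1-minute interval keeps a histogram of values rather than a single average, so the shape of the data survives the rollup.

```python
from collections import defaultdict

def rollup_to_minute_histograms(samples):
    """Roll raw samples up into per-minute value distributions.

    samples: iterable of (timestamp_ms, value) pairs.
    Returns {minute_start_seconds: {value: count}} -- a simple
    value-count histogram for each 1-minute interval.
    """
    hists = defaultdict(lambda: defaultdict(int))
    for ts_ms, value in samples:
        minute = (ts_ms // 60_000) * 60   # start of the minute, in seconds
        hists[minute][value] += 1
    return {m: dict(h) for m, h in hists.items()}

# Four raw samples: three in the first minute, one in the second.
raw = [(0, 10), (500, 10), (59_999, 12), (60_000, 11)]
print(rollup_to_minute_histograms(raw))
# → {0: {10: 2, 12: 1}, 60: {11: 1}}
```

An average would have reported roughly 10.7 for the first minute; the histogram instead records that two samples were 10 and one was 12, which is what makes percentile math and anomaly detection on rolled-up data possible.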
If you’re running IRONdb or Circonus on premises and you have the disk space, you’ll be able to tweak that 4-week storage limit for raw data at the cost of retrieval performance. We can’t stop you, so instead we’re developing a way to store the raw data and rollup data side-by-side, so that with enough disk space there would be no performance loss. Our hybrid approach really will be the best of both worlds.
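The read side of a hybrid scheme like this can be sketched in a few lines. This is our illustrative model, not the actual IRONdb implementation: queries that fall entirely inside the raw-data retention window are served from raw samples, older ranges come from the 1-minute rollups, and ranges that straddle the boundary need both.

```python
# Assumed default from the announcement: raw data is retained for 4 weeks.
RAW_RETENTION_SECONDS = 4 * 7 * 24 * 3600

def choose_store(query_start, query_end, now):
    """Pick which store(s) can satisfy a time-range query (all in seconds)."""
    raw_horizon = now - RAW_RETENTION_SECONDS
    if query_start >= raw_horizon:
        return "raw"       # whole range still within raw retention
    if query_end < raw_horizon:
        return "rollup"    # whole range older than raw retention
    return "both"          # range straddles the retention boundary

now = 10_000_000
print(choose_store(now - 3600, now, now))  # last hour → 'raw'
print(choose_store(0, 100, now))           # ancient range → 'rollup'
```

With raw and rollup data stored side-by-side, the "both" case is what stays cheap: the boundary crossing doesn't force a scan of cold raw data to answer a mostly historical query.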
Stay tuned as more big changes are underway. We’re all very excited about our plans because these ideas came from both our experts’ insights and feedback from users like you.
DID YOU KNOW? Many of you are familiar with RUM. Not the beverage; we’re talking about Real User Monitoring. Some of you may remember the old-school synthetic monitoring that predated RUM, which used samples of simulated users. Synthetic monitoring still has its uses, but obviously you’d rather monitor the real user data directly. Of course, the main reason that RUM wasn’t always the industry standard is that we didn’t always have the power to track and store that volume of data.
But what about Systems Monitoring? The situation isn’t that different when the “users” accessing your system are other systems; it’s just a matter of scale. Most of the industry is still using the synthetic monitoring model for this reason, but what if we had the power to do for systems what RUM did for user monitoring? Is the industry ready for RSM: Real Systems Monitoring? In this blog post, Theo Schlossnagle says “yes,” Systems Monitoring is Ripe for a Revolution.