The huge amount of data that IoT systems will generate will call for several different types of analysis. What follows is a brief review of some systems and what they are used for.
Apache Spark: Uses distributed memory abstractions for primarily in-memory processing. Written in Scala. Useful for finding clusters in data and for detecting statistical anomalies by measuring a point's distance from its cluster. Ships with a machine learning library (MLlib) on top. Does not come with its own file system (use NFS or HDFS). Useful for complex processing where state needs to be maintained across the event stream for correlations. Often described as batch processing with micro-batch streaming, but it looks headed towards covering fully streaming analyses as well.
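As a rough illustration of that clustering/anomaly pattern, here is a minimal sketch in Java using Spark's RDD-based MLlib KMeans: cluster the readings, then flag points whose distance from their cluster centre exceeds a threshold. The sample data, threshold, and class name are invented for illustration.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class SensorAnomalies {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SensorAnomalies").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Toy sensor readings (temperature, humidity); in practice these would
        // be loaded from HDFS/NFS rather than hard-coded.
        List<Vector> readings = Arrays.asList(
                Vectors.dense(20.1, 55.0),
                Vectors.dense(20.4, 54.2),
                Vectors.dense(19.8, 56.1),
                Vectors.dense(35.0, 10.0));   // an obvious outlier
        JavaRDD<Vector> points = sc.parallelize(readings).cache();

        // A single cluster is enough for this toy data; real data would use more.
        KMeansModel model = KMeans.train(points.rdd(), 1, 20);

        // Flag readings that sit far from their nearest cluster centre.
        double threshold = 20.0;  // illustrative distance threshold
        JavaRDD<Vector> anomalies = points.filter(p -> {
            Vector centre = model.clusterCenters()[model.predict(p)];
            return Math.sqrt(Vectors.sqdist(p, centre)) > threshold;
        });

        anomalies.collect().forEach(p -> System.out.println("Anomalous reading: " + p));
        sc.stop();
    }
}
```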
Apache Storm: Real-time streaming data analysis. Developed at Twitter, written in Clojure. Unlike Hadoop, which has two processing stages (map and reduce), Storm can have N stages arranged in a flexible topology of spouts (data source units) and bolts (data processing units). At Twitter, Storm has been superseded by Heron, which improves on its performance. IBM Streams is a commercial offering, also for stream processing.
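A minimal sketch of the spout/bolt idea in Java against the Storm 1.x-style core API; the sensor spout and threshold bolt are invented for illustration.

```java
import java.util.Map;
import java.util.Random;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SensorTopology {

    /** Spout: the data source unit, here emitting simulated sensor readings. */
    public static class SensorSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final Random random = new Random();

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            // A real deployment would read from a queue such as Kafka.
            Utils.sleep(100);
            collector.emit(new Values("sensor-" + random.nextInt(10), 15 + random.nextDouble() * 20));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sensorId", "temperature"));
        }
    }

    /** Bolt: a data processing unit, here flagging readings above a threshold. */
    public static class ThresholdBolt extends BaseRichBolt {
        private OutputCollector collector;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            double temperature = tuple.getDoubleByField("temperature");
            if (temperature > 30.0) {
                System.out.println("Alert: " + tuple.getStringByField("sensorId") + " at " + temperature);
            }
            collector.ack(tuple);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt; emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        // Wire spouts and bolts into a topology; more bolts can be chained freely.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sensors", new SensorSpout(), 1);
        builder.setBolt("threshold", new ThresholdBolt(), 2).shuffleGrouping("sensors");

        // Runs in-process; StormSubmitter would be used on a real cluster.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("sensor-topology", new Config(), builder.createTopology());
    }
}
```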
Ayasdi: Topological data analysis allows one to discover the interesting features of a data set without knowing what to look for in advance. This is in contrast to most systems, where one needs to know what one is looking for. Claims insight discovery.
Hadoop: Used for batch processing of large amounts of data, using map/reduce primitives. Comes with HDFS. Cloudera (and others) have made significant improvements to it, such as Impala, an interactive SQL interface with usability improvements for BI.
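A classic map/reduce sketch in Java, counting readings per sensor; the input format ("sensorId,timestamp,value" lines sitting in HDFS) is an assumption made for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReadingsPerSensor {

    /** Map: each line is assumed to be "sensorId,timestamp,value"; emit (sensorId, 1). */
    public static class SensorMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text sensorId = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            sensorId.set(fields[0]);
            context.write(sensorId, ONE);
        }
    }

    /** Reduce: sum the counts for each sensor. */
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "readings-per-sensor");
        job.setJarByClass(ReadingsPerSensor.class);
        job.setMapperClass(SensorMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));     // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));   // output directory in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The same job runs unchanged on one node or a large cluster; Hadoop splits the HDFS input across mappers and shuffles map output to the reducers.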
InfluxDB: A time-series database for events and metrics. Optimized for writes and claims to scale to IoT workloads.
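A minimal write sketch in Java, assuming an InfluxDB 1.x instance on localhost with a database named "iot"; it posts one point in the line protocol to the HTTP /write endpoint.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class InfluxWrite {
    public static void main(String[] args) throws Exception {
        // Assumes a local InfluxDB 1.x instance with a database named "iot".
        URL url = new URL("http://localhost:8086/write?db=iot");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        // Line protocol: measurement,tag_set field_set [timestamp]
        String line = "temperature,sensor=s1 value=23.5";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(line.getBytes(StandardCharsets.UTF_8));
        }

        // 204 No Content indicates a successful write.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```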
ZooKeeper: A coordination service for distributed applications.
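A small sketch in Java of one common coordination pattern, presence tracking with ephemeral znodes, using the native ZooKeeper client; the paths, node names, and connection string are assumptions.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class DeviceRegistry {
    public static void main(String[] args) throws Exception {
        // Connect and wait until the session is established.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Parent path for registrations (persistent), created if absent.
        if (zk.exists("/devices", false) == null) {
            zk.create("/devices", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Ephemeral node: removed automatically when this client's session ends,
        // which is what makes it useful for membership and liveness tracking.
        zk.create("/devices/gateway-1", "online".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        List<String> members = zk.getChildren("/devices", false);
        Collections.sort(members);
        System.out.println("Registered devices: " + members);

        zk.close();
    }
}
```

Because the registration node is ephemeral, it vanishes if the process dies, so other clients watching /devices learn about failures without any explicit cleanup.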