A few weeks ago, I wrote about the recent rise in popularity of open data, and how these public data sets can be easily processed with Essentia. All the examples in that post are based on Amazon's AWS Public Data Sets, which are (for the most part) large databases put together by organizations for public access and use. However, because the AWS data sets are voluntarily published by each organization, many are not regularly updated. In the US Transportation database available on AWS, aviation records and statistics are provided from 1988 to 2008. More recent data (through April 2016) can be found on the US Department of Transportation's website, but in a format different from that of the data provided on AWS. Other open data are not prepackaged at all: for example, the US Census Bureau has information on state tax collections from 1992 to 2014, but on the website, recent data is separated from historical data, and from there, visitors can only view data for one year at a time. Furthermore, while tables for recent years can be downloaded as CSV or Excel workbooks, older tables are only available as Excel files.

How do these issues affect people seeking to work with open data? Added to the complexity of processing large amounts of data is the challenge of first collecting all of the available files, then processing each one (separately, if they come in different file types and data formats) before putting everything together. Read on to see how Essentia rises to the occasion.
Open data as a term is relatively new, but the concept is not. The idea that information should be freely available for unrestricted use has been around for a while, but it didn't really take off until the rise of the Internet made it feasible to share data quickly and globally. Add in the recent popularity of big data, and it makes sense that public datasets are on the rise as well. Enormous amounts of valuable data, covering everything from climate projections to genome sequences, have been made available by the organizations that own them and are now free to download on Amazon Public Data Sets, Data.gov, and more. The possibilities are endless, as researchers, businesses, and citizens from around the world gain access to data that would otherwise be extremely expensive and time-consuming to collect. The challenge that follows is how those researchers and businesses are going to handle these large datasets, including storage, organization, and analysis.
Essentia on AWS Marketplace
There’s a new version of Essentia available on the AWS Marketplace! The upgrade from version 2.0.21 to 2.1.7 includes a few key changes.
First, Essentia now runs on HVM instances instead of PV instances. This lets users take advantage of Amazon's newer generation of instance types and makes security management much easier. Second, the new version of Essentia adds many features, including the ability to stream, clean, and move massive amounts of data from S3 data stores directly into a Redshift data warehouse. This streaming capability works regardless of compression type, with no need to generate intermediate files or write complicated code.
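To see why built-in streaming matters, consider what one step of the conventional route looks like without it. The sketch below is plain standard-library Python, not Essentia's mechanism; the sample payload is made up. It decompresses a gzipped CSV entirely in memory, the sort of handling that otherwise tends to involve writing a decompressed copy to disk first:

```python
import csv
import gzip
import io

def rows_from_gzipped_csv(raw_bytes):
    """Decompress a gzipped CSV held in memory and yield its rows,
    without writing a decompressed file to disk."""
    with gzip.open(io.BytesIO(raw_bytes), mode="rt", newline="") as handle:
        yield from csv.reader(handle)

# Example: bytes as they might arrive from an S3 GET request.
payload = gzip.compress(b"id,state,amount\n1,CA,100\n2,NY,250\n")
for row in rows_from_gzipped_csv(payload):
    print(row)
```

Each compression format would need its own branch like this in hand-rolled code; Essentia's appeal is that the format detection and streaming are handled for you.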
The AWS community site contains a free version of the AMI that limits use to a single instance at a time.
Amazon developed its Redshift service to accommodate data warehousing needs on a reliable, scalable platform. With a highly efficient SQL engine that executes queries in parallel, users can gain insight into their data quickly. But to access that power, the data first needs to be loaded into the service.
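Loading from S3 is normally done with Redshift's COPY command, which reads files from a bucket in parallel. As a rough illustration, the snippet below composes such a statement; the table name, bucket path, and IAM role are hypothetical placeholders:

```python
def build_copy_statement(table, s3_prefix, iam_role):
    """Compose a Redshift COPY statement that loads gzipped CSV
    files from an S3 prefix."""
    return (
        f"COPY {table} "
        f"FROM '{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"CSV GZIP;"
    )

sql = build_copy_statement(
    "flight_stats",
    "s3://example-bucket/aviation/2016/",
    "arn:aws:iam::123456789012:role/RedshiftLoadRole",
)
print(sql)
```

The statement itself is simple; the work that precedes it, getting clean, well-formed files onto S3 in the first place, is where Essentia comes in.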
Transforming raw data into the form an application (in this case Redshift) expects is one of Essentia's main strengths. Let's focus on one of the most common scenarios: data is stored in its raw form on S3, where it can be accessed by Redshift or any other relevant service. Typically, the data is 'dirty', containing missing, irrelevant, or otherwise unneeded values.
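To make 'dirty' concrete, here is a minimal cleanup sketch in plain Python (not Essentia's actual mechanism; the column names and sample rows are invented). It drops a column the warehouse doesn't need and skips records with missing values:

```python
import csv
import io

RAW = """carrier,flights,notes,delay_minutes
AA,1200,internal-memo,340
DL,,n/a,210
UA,980,,125
"""

KEEP = ["carrier", "flights", "delay_minutes"]  # discard the 'notes' column

def clean(text):
    """Keep only the needed columns and skip rows with empty fields."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(text)):
        slim = {key: row[key] for key in KEEP}
        if all(slim.values()):  # drop rows with any missing value
            cleaned.append(slim)
    return cleaned

for record in clean(RAW):
    print(record)
```

Here the DL row is dropped because its flight count is missing, while the UA row survives: its only empty field is in the discarded 'notes' column.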