“Look, Ma! No Server!” Analytics in the Age of Data Acceleration

Aug 15, 2017

For more than a decade, CVP has been using the Amazon Web Services (AWS) cloud platform to provide solutions to our clients. Over that time, AWS has grown its offerings to more than 90 services. CVP has increased its own use of AWS services for corporate purposes and recently became a Standard Consulting Partner in the AWS Partner Network (APN).

Today, we have scores of developers and architects tinkering away on the AWS platform, using it for data storage, computing power, training, pilots, and development, testing, staging, and production environments – all delivered as a utility: on-demand, scalable, available in seconds, with pay-as-you-go pricing.

Now think of our AWS bill. It is worse than an old-fashioned phone bill, but with the equivalent of 90 long-distance services, and delivered as a data dump of codes and numbers. AWS gives you the total to pay, but the bill is hard to decipher.

But in this age of data acceleration, it’s sometimes better to skim off the cream of the data while it’s fresh, rather than let it pile up in a warehouse for monthly or quarterly batch jobs. If a developer starts racking up charges accidentally, we don’t want to wait until tomorrow to find out.

That’s where serverless analytics comes in. Serverless does not mean “no servers.” AWS (like other players such as Google Cloud and Microsoft Azure) offers a set of sophisticated functions, either built into the cloud service or customized by us, that are useful for quickly parsing events and data streams to give a read on what is happening. You no longer worry about server capacity, unpredictable traffic flow, or who might access that server.
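To make the idea concrete, here is a minimal sketch of the kind of event-parsing function described above, written in the style of a serverless handler. The event shape, field names, and alert threshold are illustrative assumptions, not an actual AWS payload or CVP’s implementation:

```python
# Sketch of a serverless-style handler that parses a stream of billing events
# and flags charges over a threshold. Event shape and threshold are hypothetical.

COST_ALERT_THRESHOLD = 50.0  # dollars; illustrative per-record alert level

def handler(event, context=None):
    """Scan billing-style records in an event and flag any that exceed the threshold."""
    alerts = []
    for record in event.get("records", []):
        cost = float(record.get("cost", 0))
        if cost > COST_ALERT_THRESHOLD:
            alerts.append({
                "user": record.get("user", "unknown"),
                "service": record.get("service", "unknown"),
                "cost": cost,
            })
    return {"alert_count": len(alerts), "alerts": alerts}
```

Because the platform invokes a function like this on each incoming batch of events, the flagged charges surface in near real time rather than at the end of a billing cycle.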

To solve the AWS billing issue, CVP developers set up an AWS usage dashboard, data lake, and query engine that harvests cost data as it flows in and analyzes it to show, for instance, which CVP developer is using which services and how heavily; which data sets might reside more cheaply on a different AWS service; which data stores are not being accessed; and which pilots have served their purpose and should be dismantled.
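The core of that analysis is an aggregation over billing records. The sketch below groups a CSV-style cost export by user and service; the column names are hypothetical stand-ins, since an actual AWS Cost and Usage Report has many more columns:

```python
# Sketch: aggregate total cost per (user, service) from a CSV-style billing export.
# Column names ("user", "service", "cost") are illustrative assumptions.

import csv
import io
from collections import defaultdict

def cost_by_user_service(csv_text):
    """Return a dict mapping (user, service) to total cost from billing CSV text."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[(row["user"], row["service"])] += float(row["cost"])
    return dict(totals)
```

In practice a managed query engine would run an equivalent GROUP BY over the data lake, so no server needs to be provisioned to get this answer.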

Other enterprises face this challenge every day, dealing with a deluge of data coming from multiple sources – internet of things (IoT) devices, advanced sensors, and user click streams from mobile applications and websites. These data sources generate a high volume and variety of data. They also provide the opportunity to analyze and understand what is happening on the monitored services, giving organizations feedback in near real time.

Compare this approach to the traditional data warehouse environment, where you would have to set up, maintain, and back up large servers, and data refreshes happened once a day or once a week. Forecasts may be based on stale data, and you may have spent more on the IT infrastructure than the services themselves cost.

At CVP, we help customers determine the appropriate way to approach data analytics with serverless and managed data options. Our in-depth data acceleration assessment looks at the possible return on investment of performing near real-time analysis and suggests architectural options such as traditional on-premises, managed database, or fully serverless.

The decision model weighs many variables – the potential benefits of analyzing the data, the time value of the data, which technologies best fit the data source, capacity, growth, and infrastructure costs – to help customers make an informed build, buy, or rent decision. Depending on the circumstances, we have found scenarios where the serverless model was cheaper in the long run because of intermittent use. In other cases, we suggested pricing the analytical solution with reserved capacity, which provides predictable pricing for a fixed throughput at a cost of only $200/year. In either case, we show the business case weighing the costs against the benefits.
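The build-buy-rent comparison above often comes down to a break-even calculation between pay-per-use and reserved pricing. The sketch below uses an illustrative per-query rate (not an actual AWS price) alongside the $200/year reserved figure mentioned above:

```python
# Sketch of a break-even comparison between pay-per-use and reserved pricing.
# The per-use rate is an illustrative assumption, not an actual AWS price.

def breakeven_units(pay_per_use_rate, reserved_annual_cost):
    """Annual usage volume at which reserved capacity becomes the cheaper option."""
    return reserved_annual_cost / pay_per_use_rate

# At a hypothetical $0.10 per query and a $200/year reserved price,
# reserved capacity wins once usage passes 2,000 queries per year.
```

Intermittent workloads that stay well below the break-even point favor the pay-per-use serverless model; steady, predictable throughput above it favors reserved capacity.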
