Guest Column | June 20, 2018

Leveraging Smart Availability To Enhance Data Analytics And Decision-Making Precision

By Don Boxley, DH2i

Cloud And On-Premises Decisions

Today, it is widely accepted that decision-making precision depends on accurate and timely data analytics. Seemingly every day, new technologies and methodologies are introduced promising data analytics nirvana for any number of use cases.

As the motives for applying analytics to business problems have multiplied, so has the complexity of technology deployments. Organizations across virtually every industry routinely encounter situations in which data is spread across a multitude of environments, making it onerous to centralize for a single use case. Even more prevalent is the situation in which it would be beneficial to deploy in alternative settings (such as on Linux platforms, in the cloud, or in containers), but monetary, human-resource, or technological constraints make it impossible.

In today’s ever-shifting data space, however, enterprise agility for analytics is as essential as it is for any other aspect of competitive advantage. Such processing is optimized by performing analytics as close to the data as possible, and that data may need to switch locations for disaster recovery, scheduled downtime, or limited-time pricing offers in the cloud.

By adopting an agile approach predicated on Smart Availability (as opposed to traditional high availability, or HA), organizations can dynamically provision analytics in a multitude of environments to satisfy business use cases, seamlessly transferring data between on-premises environments (including both Windows and Linux machines), the cloud, and containers.

Consequently, they reap decreased infrastructure costs, effective disaster recovery (DR), and a greater overall yield from analytics, and from data in general.

Analytics In The Cloud

One of the more prevalent ways in which Smart Availability improves analytics is with cloud deployments. There are several advantages to going to the cloud for analytics, not the least of which are the pay-per-use pricing model, decreased infrastructure overhead, and the elastic scalability of cloud resources. There are also several software as a service (SaaS) and platform as a service (PaaS) options (some of which offer advanced analytics capabilities for machine learning and neural networks) for users without data science teams. Nevertheless, the most compelling reason for running analytics in the cloud is the alternative: attempting to scale on premises. Historically, scaling in physical environments followed a steep cost curve laden with fixed expenses for hardware and licensing, which commonly limited enterprise agility. By scaling in the cloud and with other modern measures, however, organizations experience a far more affordable linear cost curve.
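
The contrast is easy to see with a quick back-of-the-envelope model. The sketch below uses purely hypothetical prices and capacity increments; the point is the shape of the two curves, not the specific figures.

```python
# Illustrative sketch with hypothetical numbers: the stepped, up-front cost
# of buying physical capacity versus linear, pay-per-use cloud pricing.
import math

def on_prem_cost(tb_needed: float) -> float:
    """Capacity is bought in fixed bundles (servers plus licenses), so cost
    climbs in large steps regardless of how much is actually used."""
    UNIT_CAPACITY_TB = 50      # hypothetical: each bundle adds 50 TB
    UNIT_COST = 120_000        # hypothetical: hardware + licensing per bundle
    units = math.ceil(tb_needed / UNIT_CAPACITY_TB)
    return units * UNIT_COST

def cloud_cost(tb_needed: float, months: int = 12) -> float:
    """Pay-per-use: cost grows linearly with the capacity actually consumed."""
    COST_PER_TB_MONTH = 95     # hypothetical blended storage + compute rate
    return tb_needed * COST_PER_TB_MONTH * months

for tb in (10, 60, 150):
    print(f"{tb:>4} TB  on-prem ${on_prem_cost(tb):>9,.0f}"
          f"  cloud/yr ${cloud_cost(tb):>9,.0f}")
```

At 10 TB the on-premises buyer has already paid for a full 50 TB bundle; the cloud user pays only for what is consumed, which is the agility argument in miniature.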

This point is best illustrated by an example in which a large healthcare group was using SQL Server on premises for its OLTP, but wanted to deploy a cloud model for business intelligence (BI). The choice was clear: either ignore budget constraints by splurging on additional physical infrastructure (with all the requisite costs for licenses and servers), or deploy to the cloud for real-time access to data from its existing systems. The latter option decreased costs and maximized operational efficiency, as will most well-implemented cloud analytics solutions.

Optimizing Cloud Analytics

In this example and countless others, optimizing cloud analytics involves continually replicating on-premises data to the cloud. Shrewd organizations minimize these costs by opting for asynchronous replication; the aforementioned healthcare entity did so with about a second of latency, giving near real-time access to its healthcare data. Replication to the cloud is often inexpensive or even free, making the data transfer component highly cost-effective. By making this data available for BI in the cloud, the organization enjoyed several advantages. The most prominent was the reusability of a single dataset for multiple purposes. Business users (in this case physicians, nurses, clinicians, and so on) are able to access this read-only data for intelligence that informs diagnosis and treatment options. Moreover, they do so while the original data remains accessible to additional on-premises users for OLTP functions.
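
For teams running this pattern on SQL Server Always On availability groups (version 2016 or later) with an asynchronous-commit cloud replica, replication lag can be watched from the primary. The following is a minimal sketch under that assumption; the server name and database are hypothetical placeholders.

```python
# A minimal sketch, assuming SQL Server 2016+ Always On availability groups
# with an asynchronous-commit secondary replica in the cloud.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=onprem-sql-primary;"   # hypothetical on-premises primary
    "DATABASE=master;"
    "Trusted_Connection=yes;"
)

# sys.dm_hadr_database_replica_states exposes per-database replication
# health; secondary_lag_seconds approximates how far the cloud replica
# trails the on-premises primary.
rows = conn.execute("""
    SELECT ar.replica_server_name,
           db_name(drs.database_id)  AS database_name,
           drs.synchronization_state_desc,
           drs.secondary_lag_seconds
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar
      ON ar.replica_id = drs.replica_id
    WHERE drs.is_primary_replica = 0
""").fetchall()

for r in rows:
    print(f"{r.replica_server_name}/{r.database_name}: "
          f"{r.synchronization_state_desc}, lag ~ {r.secondary_lag_seconds}s")
```

A lag reading hovering around one second matches the near real-time experience described above.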

This latter point is critical. With this paradigm, there are no performance issues compromising the work of those using on-premises data because of reporting, which might occur if each group were accessing the same copy of the data for their respective uses. Instead, each party benefits from this model. The healthcare group is aided by the primary data being stored on premises, which is important for compliance in this highly regulated industry. It’s also vital to note the flexibility of this architecture, which most immediately affects cloud users. Organizations can establish clusters in any of the major cloud providers (such as Amazon Web Services or Microsoft Azure), as well as in private or hybrid clouds. They can also readily transition resources between these providers as they see fit, whether according to use case or for discounted pricing. Better yet, when they no longer require those analytics, they can quickly and painlessly halt those deployments, or simply transfer them to other environments, such as containers.
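
One common way to realize this read/write separation on SQL Server is read-only routing: a connection that declares read-only intent is sent to a readable secondary, while OLTP sessions stay on the primary. A minimal sketch follows; the listener and database names are hypothetical placeholders.

```python
# A minimal sketch of splitting traffic so BI reads never compete with OLTP.
# With SQL Server read-only routing, ApplicationIntent=ReadOnly sends a
# session to a readable secondary replica.
import pyodbc

LISTENER = "ag-listener.example.internal"   # hypothetical AG listener name

# OLTP traffic: routed to the primary replica on premises.
oltp = pyodbc.connect(
    f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={LISTENER};"
    "DATABASE=ClinicalDB;Trusted_Connection=yes;"   # hypothetical database
)

# BI/reporting traffic: ApplicationIntent=ReadOnly routes the session to the
# read-only cloud replica, leaving the primary's resources to OLTP users.
bi = pyodbc.connect(
    f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={LISTENER};"
    "DATABASE=ClinicalDB;ApplicationIntent=ReadOnly;Trusted_Connection=yes;"
)
```

Because both groups connect through the same listener, neither application needs to know which physical server, or which environment, is serving its requests.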

Automatic Failover In The Cloud

The aforementioned healthcare group gains a third advantage from the Smart Availability approach to running analytics in the cloud: automatic failover. In the event of any sort of downtime for on-premises infrastructure (whether scheduled maintenance or a catastrophic event), its active workloads automatically fail over to the cloud using Smart Availability techniques. The resulting continuity enables both groups of users to keep accessing data with no downtime: the primary workloads simply transfer to cloud servers and continue running. This advantage typifies the agility of the Smart Availability approach. Workloads continue running through downtime events, and they run wherever users specify to create the most meaningful competitive advantage. Most HA methods don’t give users the flexibility of choosing between Linux and Windows settings. Smart Availability solutions also simplify management and strengthen resiliency for SQL Server Availability Groups, provisioning resources where they’re needed without downtime.
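
On the client side, continuity through a failover largely comes down to reconnecting to the availability group listener, which resolves to whichever replica (on premises or in the cloud) is now primary. The sketch below shows one hedged way to wrap queries in retry logic; the listener, database, and table names are hypothetical placeholders.

```python
# A minimal sketch of client-side resilience during an automatic failover:
# if the active connection drops, retry against the same listener, which
# resolves to whichever replica is now primary.
import time
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=ag-listener.example.internal;"   # hypothetical AG listener
    "DATABASE=ClinicalDB;Trusted_Connection=yes;"
    "MultiSubnetFailover=yes;"               # speeds up post-failover reconnects
)

def query_with_retry(sql: str, attempts: int = 5, backoff: float = 2.0):
    """Retry transient connection failures while the workload fails over."""
    for attempt in range(1, attempts + 1):
        try:
            with pyodbc.connect(CONN_STR, timeout=10) as conn:
                return conn.execute(sql).fetchall()
        except pyodbc.OperationalError:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)   # linear backoff between retries

rows = query_with_retry("SELECT COUNT(*) FROM dbo.Encounters")  # hypothetical table
```

From the user’s perspective the brief reconnect is the only visible symptom; the query resumes against whichever environment now hosts the workload.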

Smart Availability And Recurring Advantages

Smart Availability methods enable users to maximize analytics output by extracting recurring advantages from what is essentially the same dataset. They allow users to move copies of that data to, and between, cloud providers for low-latency analytics with some of the most advanced techniques in use today. Moreover, this approach maintains critical governance, compliance, legal, and performance requisites for on-premises deployments. Best of all, it preserves these benefits while automatically failing over to offsite locations, safeguarding the continuity of workflows in an era in which information technology is anything but predictable.

About The Author

Don Boxley, DH2i

Don Boxley Jr. is a DH2i co-founder and CEO. Prior to DH2i, Don held senior marketing roles at Hewlett-Packard, where he was instrumental in sales and marketing strategies that resulted in significant revenue growth in the scale-out NAS business. Don has spent more than 20 years in management positions for leading technology companies, including Hewlett-Packard, CoCreate Software, Iomega, TapeWorks Data Storage Systems, and Colorado Memory Systems. Don earned his MBA from the Johnson School of Management, Cornell University.