Demystifying on-premise cloud deployments


By Vivek Sharma, Managing Director, DCG, Lenovo India

 

In a digitally driven era, India stands on the brink of a digital revolution. Spurred by the Government’s Digital India initiative, digital products and services created directly through the use of technologies like Cloud, IoT and AI contributed 4% of GDP in 2017. IDC predicts that digital transformation will add $154 billion to India’s GDP by 2021. In another IDC survey, 1,560 business decision makers across 15 economies said they expect over 40% improvement in areas such as productivity, customer advocacy and profit margin as a result of digital transformation.

To say that digital transformation is a hot topic in IT would be an understatement of epic proportions. Digital transformation initiatives are top of mind for CEOs and CIOs, and one of the most critical yet overlooked aspects of a successful project is the speed of application development. Rapid application deployment can be the defining difference that gives an organization a competitive edge, turning IT from a cost center into a profit center for the business.

Hybrid data center infrastructure engineers must keep up with this demand or be left in the dust. They must identify the resources each application needs, configure the hardware, and provision virtual machines and containers. So what questions should they ask as they try to keep pace with increased development requests? Here are a few factors worth considering:

1. What resources are available to deploy new applications? Resources can be distributed anywhere within the data center, or anywhere in the world. How do you keep track of managed resources, and how do you know how each one is being used?

2. How are current conditions affecting existing resources? In both public cloud and on-premise environments, customers expect a dependable service, and maintaining stable services requires reliable hardware. Can you monitor all of the hardware that is in use? Do you understand how existing conditions affect running services?

3. How can I leverage all of my resources to maintain stability? Understanding the impact of existing conditions on the infrastructure is step one. Step two may involve migrating workloads so that the infrastructure can be updated or reconfigured. Can you identify suitable infrastructure to receive the migrated workloads? (A simple sketch of the bookkeeping behind these questions follows this list.)
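To make these questions concrete, the following is a minimal, illustrative sketch in Python of the bookkeeping they imply: an inventory of hosts with their capacity, usage and health, plus a check for a suitable migration target. The host names, sites and figures are invented for illustration; a real deployment would draw this data from its own inventory and monitoring tools.

```python
# Illustrative only: a toy host inventory and a migration-target check.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    site: str
    cpu_total: int      # physical cores
    cpu_used: int
    mem_total_gb: int
    mem_used_gb: int
    healthy: bool       # e.g. derived from fan/PSU/disk alerts

@dataclass
class Workload:
    name: str
    cpu: int
    mem_gb: int

def migration_target(hosts, workload):
    """Return a healthy host with enough spare CPU and memory, or None."""
    candidates = [
        h for h in hosts
        if h.healthy
        and h.cpu_total - h.cpu_used >= workload.cpu
        and h.mem_total_gb - h.mem_used_gb >= workload.mem_gb
    ]
    # Prefer the host with the most free memory to keep placements balanced.
    return max(candidates, key=lambda h: h.mem_total_gb - h.mem_used_gb, default=None)

inventory = [
    Host("rack1-node3", "mumbai", 64, 60, 512, 480, True),
    Host("rack2-node1", "mumbai", 64, 20, 512, 128, True),
    Host("rack4-node2", "pune",   48, 10, 256, 64,  False),  # flagged for maintenance
]

print(migration_target(inventory, Workload("billing-api", cpu=8, mem_gb=32)))
```

Even a toy model like this answers all three questions at once: what is available, what is degraded, and where a workload could safely land.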

Services deployed to a cloud provider hide the machinery that answers these questions, and application owners want the same level of invisibility when those services run on-premise. Keeping up with demand in an on-premise environment involves delivering:

• Infrastructure as a Service – A self-service portal for specifying the needs of the application. The machinery behind the portal determines the virtual platforms that are required and locates the hardware that supports them, providing the right resources at the right time to meet the needs of the application.

• Timely Reaction to Infrastructure Changes – Seamless management of maintenance on the underlying infrastructure with minimal impact on running applications, migrating workloads to adjacent resources while configurations are modified.

• Manage Capacity and Utilization – Understand how the existing infrastructure is used, but do not stop there. Use that insight to predict future use and make strategic investments based on historical trends, so the infrastructure is put to its best use. (A simple trend-based sketch follows this list.)
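As a rough illustration of the third point, the sketch below fits a simple linear trend to made-up monthly CPU utilization figures and flags when projected demand approaches capacity. The sample values and the 80% expansion threshold are purely illustrative assumptions; a real capacity model would use the metrics your monitoring stack already collects.

```python
# Illustrative only: extrapolate a linear trend from historical utilization.
from statistics import mean

monthly_cpu_utilization = [0.48, 0.51, 0.55, 0.58, 0.63, 0.66]  # fraction of total capacity

def forecast(history, months_ahead):
    """Extrapolate a least-squares linear trend a few months forward."""
    n = len(history)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(history)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return y_bar + slope * (n - 1 - x_bar + months_ahead)

for m in (3, 6, 12):
    projected = forecast(monthly_cpu_utilization, m)
    flag = "  <- plan expansion" if projected > 0.8 else ""
    print(f"+{m:2d} months: {projected:.0%} of current capacity{flag}")
```

The point is not the specific model but the habit: turn historical utilization into a forward-looking signal that drives purchasing and placement decisions before capacity runs out.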

Bridging the gap between physical and virtual infrastructure
Supporting this environment requires a deep understanding of the deployed virtual infrastructure and the physical infrastructure beneath it. In the past, spreadsheets or wiki pages kept track of physical resources, and these static representations also attempted to map virtual machines to the hardware. Understanding the impact of hardware failures and maintenance involved manual extrapolation. In today’s data center, it is increasingly important to automate those extrapolations, using the data to make informed decisions and to drive action across managed resources.
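A small illustrative sketch of what that automation can look like: instead of a spreadsheet, a queryable mapping from virtual machines to physical hosts and business services makes the impact of a host failure or planned maintenance a one-line question. The host, VM and service names below are hypothetical.

```python
# Illustrative only: a queryable VM-to-host and VM-to-service mapping.
vm_to_host = {
    "web-01": "rack1-node3",
    "web-02": "rack2-node1",
    "db-01": "rack1-node3",
    "cache-01": "rack2-node1",
}
vm_to_service = {
    "web-01": "storefront", "web-02": "storefront",
    "db-01": "orders", "cache-01": "storefront",
}

def impact_of(host):
    """Which VMs and business services are affected if this host goes down?"""
    vms = [vm for vm, h in vm_to_host.items() if h == host]
    services = sorted({vm_to_service[vm] for vm in vms})
    return vms, services

vms, services = impact_of("rack1-node3")
print(f"Maintenance on rack1-node3 affects VMs {vms} and services {services}")
```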

A centralized dashboard, with customizable gauges and metrics, can provide the right level of insight. Dashboards present the deployment of virtual infrastructure, the status of network resources and the availability of physical servers for on-premise deployments. The dashboard can also serve as a single pane of glass for quickly viewing metrics and making decisions about deployment requests, running services and service migrations, helping you make sense of the abundance of available data so that you get the maximum benefit from managed resources.
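As a rough sketch of the roll-ups behind such a dashboard, the snippet below aggregates per-site capacity and health figures into a few summary gauges. The sites and numbers are invented; in practice they would come from your monitoring and inventory systems.

```python
# Illustrative only: compute a few dashboard gauges from per-site figures.
sites = {
    "mumbai": {"cpu_total": 640, "cpu_used": 410, "hosts": 10, "hosts_down": 0},
    "pune":   {"cpu_total": 384, "cpu_used": 250, "hosts": 8,  "hosts_down": 1},
}

def gauges(sites):
    cpu_total = sum(s["cpu_total"] for s in sites.values())
    cpu_used = sum(s["cpu_used"] for s in sites.values())
    return {
        "global_cpu_utilization": round(cpu_used / cpu_total, 2),
        "hosts_needing_attention": sum(s["hosts_down"] for s in sites.values()),
        "busiest_site": max(sites, key=lambda k: sites[k]["cpu_used"] / sites[k]["cpu_total"]),
    }

print(gauges(sites))
```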

Conclusion
The acceleration of application development and deployment requires efficient resource management. IT leaders must understand capacity and utilization across global resources while matching the speed and flexibility of cloud-based deployments. To keep pace, it is imperative to leverage management dashboards and use their insights to manage the relationship between virtual and physical resources. Effective use of this data will prevent impact to critical services and highlight resource availability, allowing operators to deliver infrastructure as a readily consumable service.


