Digital Resilience in the Banking Industry: Don't let downtime have the upper hand

The digital revolution has accelerated since the Covid-19 outbreak, and data has become the most valuable commodity. The potential threat of downtime keeps enterprises on their toes, as it can jeopardize their goodwill and market reputation and have a long-lasting impact on revenue, productivity, and overall customer experience. A disruption like this can even pose a threat to their very existence.

However, despite being informed and aware of the consequences of downtime, we keep hearing about incidents across the globe where power or IT outages have wreaked havoc on organisations. And it is not a new phenomenon: such incidents have been taking place for over a decade now. The most surprising part is that industries like BFSI, the flag-bearers of digital transformation, have also been downtime victims.

A case in point here is India’s leading private sector bank, which recently suffered an unexpected power outage at its primary data center. The outage impacted several of its services for a few hours, leading to a string of unhappy customers, the loss of millions in revenue, and damage to its brand reputation. And this was not the first time – the bank faced outages in 2018 and 2019 as well. In December 2019, technical glitches in one of the bank’s data centers affected its digital banking transactions.

Mitigating the risk of downtime

Such disruptions in the digital operations of the country's leading bank point to the enterprise segment's lack of preparedness for downtime. The banking industry has a lot of catching up to do on the technological front.

Indian banks’ digital transformation exercise gained momentum during the current unprecedented situation. Thanks to scalable data center infrastructure being the backbone of their operations, all digital banking channels have remained open for customers in these times of uncertainty. Banks certainly realise the critical role played by data centers, which not only help accelerate their digitalisation journey and power their mission-critical facilities but also keep them functional and boost digital engagement with customers (the foundation of customer experience starts with the data center).

As data centers are essential for the Indian banking industry to remain resilient, banks need to look at them strategically to continue innovating without facing any downtime. For instance, a short power flicker in a data center can bring down the entire banking system for at least a couple of hours. Hence, apart from increasing technology investment, banks need a plan to mitigate all kinds of risks.

The results of the Uptime Institute Annual Data Center Survey indicate that outages are becoming more damaging and expensive. A single outage can cost over $1 million, and power failures – which impact everything on-site and can cause knock-on effects – are the most likely cause of major outages.

The right colocation partner can make all the difference

It is a known fact that data centers are extremely demanding and complex pieces of infrastructure to manage. At the same time, enterprises understand the inherent risks of a power outage. Hence, they are gradually moving away from captive setups to third-party data centers as part of their risk mitigation strategies.

This holds especially true for banking organisations, which must ensure 100% uptime of all their critical infrastructure and systems. We live in an era with a strong push towards digital payments, and frequent outages do no good to either banks or their customers. That is why many Indian banks partner with multi-tenant data centers, which deliver superior uptime compared to a captive data center.

While selecting a colocation partner, banks need to look at the data center infrastructure and how it is designed, built, and operated against the highest global standards for resiliency and reliability. At the same time, the colocation provider needs to assure banks of guaranteed performance: the SLA should cover the uptime of the server racks and IT equipment. And in case of a disaster or crisis, is the colocation provider equipped to ensure business continuity?

Key considerations

The focus should be on real redundancy. The ideal resilient and scalable colocation facility should:

Be able to withstand any single point of failure.

Be truly fault-tolerant.

Be resilient in all respects – electrical, cooling, building structure, accessibility, fiber redundancy, 48 hours of backup via generators, and a stay facility for the client’s IT staff in case of urgent deployments.

In light of what happened with the leading private bank – or with one of the largest cloud companies in the not-so-distant past – and many similar examples, I would say that whether you are hosted at your own captive data center or at a third-party facility, it needs a serious audit of its fault tolerance.

Additionally, the facility should meet the scaling needs of the bank and still be able to deliver rack and power capacity 25 years down the line. It should also allow you to scale down without any capital or operational cost implications.

Hence, BFSI companies need to ensure that they host at an Uptime Institute design-certified Tier IV data center. Such a facility can function uninterrupted through power outages and disasters: a failure in the power or cooling systems, or in any other parameter, will not bring down a customer’s rack or any other infrastructure at any point in time, ensuring customers’ operational continuity. If you are hosting or planning to host at any data center, check the provider’s Tier IV certification status with the Uptime Institute.
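The tier levels translate directly into an annual downtime budget. As a rough illustration (the availability percentages below are the commonly cited industry figures for each tier, not numbers stated in this article), a few lines of arithmetic show what each level permits:

```python
def allowed_downtime_minutes(availability_pct, minutes_per_year=365.25 * 24 * 60):
    """Annual downtime budget implied by an availability percentage."""
    return (1 - availability_pct / 100) * minutes_per_year

# Commonly cited tier availability targets (illustrative, not from this article):
for tier, avail in [("Tier II", 99.741), ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {avail}% -> {allowed_downtime_minutes(avail):.0f} min/year")
```

Under these figures, Tier IV's 99.995% works out to roughly 26 minutes of downtime a year, versus about 1.6 hours for Tier III – a gap that matters enormously for transaction-heavy banking workloads.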

The most viable option

By now, it is evident that no organisation is immune to the threat of downtime. Notably, the banking sector has faced the wrath of these outages more than any other industry. One-off incidents may only temporarily disrupt services or cause other intermittent issues. But if banks continue to grapple with frequent downtime, it not only causes serious inconvenience to customers but also exposes the weakness of their digital infrastructure and operational resilience. It also puts their brand reputation at risk and increases the chances of customers switching to other banks.

With the Indian government pushing digital transactions, the IT infrastructure that supports the digital delivery of financial services must be reliable. Looking at the significant rise in failure rates, industry experts are calling for greater investment by banks to overhaul their infrastructure and keep pace with growing customer demand.

In the wake of these system outages and lapses in digital banking services at the country’s major banks, even the Reserve Bank of India has urged banks and financial institutions to increase investments and strengthen their IT systems and technology.

In this endeavour, banks must perform thorough due diligence regarding reliability, redundancy, resiliency, and scalability before selecting their digital infrastructure partner. A close look at the causes of the outage incidents that rocked the services of India's leading banks shows that they could have been better prepared with a robust supporting infrastructure. Banking organisations must keep in mind that a colocation provider that is not Uptime Institute Tier IV-certified cannot deliver 100% uptime, which exposes a direct vulnerability in their business.

Protecting Data in the Face of Disasters and Calamities

In recent years, there has been a marked increase in the number of adverse incidents and contingencies that organizations prepare for as part of their business continuity planning (BCP). There have been plenty of fires and other natural calamities that destroyed offices, IT equipment, data servers, and more. In 2012, a fire at Mantralaya (the administrative HQ of the Government of Maharashtra in Mumbai) caused the loss of ten terabytes of data, including crucial and sensitive records, and more recently, a fire at a business hub in South Mumbai caused widespread damage to IT equipment, including captive data servers. In the digital age we live in, data is the most valuable resource, so any outage in accessing it comes with a hefty price tag.

According to recent research by the Uptime Institute on the cost of downtime, approximately 33% of all incidents cost enterprises over US$250,000, with 15% of downtime episodes costing more than US$1 million. So it is no surprise that ensuring the physical safety and security of all data within the network has emerged as a top agenda item for the custodians of infrastructure and operations – and it has made CIOs who invest in captive data centers rethink their data center strategy.

Limitations of captive/on-premise data centers

Traditionally, businesses have preferred captive data storage solutions. While a captive setup provides a sense of safety, any threat that restricts physical access to the data center becomes a business-critical risk. This includes natural calamities – a cyclone, an earthquake, or floods like those experienced by Mumbai and Chennai in recent years that result in power outages – as well as fire or a situation like the COVID-19-induced lockdown. Besides the loss of access to data, some of the other key issues with the traditional approach include:

High cost of data center downtime: Outages caused by faults in the data center, or any other factor that hinders access to data, can cripple business operations. If technical support staff are unable to attend to a fault at an on-premise data center due to a lockdown, operations come to a standstill. And in today's competitive world, the cost of such a dent in reputation can be quite high.

Lack of agility: In-house data centers are slower to respond to business requirements and to external factors such as changes in technology, given the capital outlay and intensive planning involved. This lack of agility, and the inability to scale up or down with business demand, places businesses with captive data centers at a distinct disadvantage.

Incompatibility with the BCP of the future: While we try to figure out all the nuances of a post-pandemic world, the ability of an organization to enable work from home has become the starting point of business continuity. This includes providing all employees with secure and reliable access to all data, at all times. So, for organizations looking ahead, a diversified enterprise data policy will form an essential part of their future BCP.

Inaccessibility due to lockdown: Amidst this pandemic, with organizations running on minimum strength or under lockdown, they face a volatile situation in case of a disaster, as they cannot access their premises for preventive or regular maintenance. An MTDC, on the other hand, operates with specialized technicians and support staff to maintain the facility. Data centers are also categorized as essential services, so they work without disruption during the lockdown and ensure business continuity for you and your customers.

Why moving to a Multi-Tenant Data Center (MTDC) makes business sense

As more businesses accept the new realities and look to upgrade their IT infrastructure for tomorrow, multi-tenant data centers (MTDCs) have emerged as a viable solution. MTDCs come in all shapes and forms, however, and while most offer a set of basic advantages, the following parameters are useful for evaluating the best option for your requirements:

Reliability of service: The primary purpose of adopting an MTDC approach is reliable service with assured near-zero downtime and continued operability even in the worst-case scenario. Tier IV certified MTDCs like the Yotta NM1 data center in Panvel are designed to ride through power outages and are equipped with redundant backup facilities that ensure uninterrupted service for up to 48 hours at full IT load without any supply from the primary electrical grid.

Physical safety and security: Third-party data centers have dedicated technicians and support staff for smooth functioning and the quick redressal of issues. Additionally, MTDCs provide multiple layers of protection – security guards, biometrics and access management to prevent unauthorized access – as well as disaster recovery protocols for severe worst-case scenarios.

Tier IV certification: The Uptime Institute’s certification for data centers has become a leading benchmark for the industry. It ranges from Tier I through Tier IV, with Tier IV being the highest. Listed below are some of the service criteria you could use to evaluate a provider, along with the benchmarks that Tier IV certified MTDCs offer:

Concurrent maintenance: In traditional systems, planned or unplanned maintenance on any equipment meant that the data center had to go offline. While some older systems offer redundancies for some functions, Tier IV MTDCs provide a minimum of N+1 redundancy at all levels, which allows administrators to carry out concurrent maintenance without disrupting IT loads or any operation.

Fault tolerance: Power and cooling systems play a crucial role in the normal functioning of a data center. A sudden defect in either system – a leak in the chilled-water pipeline or a glitch in a related component – can cause a shutdown. Tier IV certification requires automatic detection, isolation, arrest and containment of such situations; automatic switching to standby equipment through redundant distribution paths avoids any downtime.

Protection against fire: Compartmentalisation provides fire separation between working and standby equipment for power, cooling and BMS, including the active distribution paths that control those functions. Additional physical measures and the use of fire-rated components also shield against consequential damage and ensure that server racks suffer no downtime, with IT cooling loads continuing to operate at full capacity.

Consequential effects of fire protection: Uptime Institute design-certified Tier IV data centers also continue to function despite any consequential damage to equipment from the heavy spray of water from the fire protection systems. Thanks to the concurrent ingress-protection and fire ratings of bus ducts and all control cables that limit the damage, all systems – including server racks, full IT load and cooling load – remain operational in the aftermath of a fire incident.
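The N+1 redundancy behind concurrent maintenance can be quantified with basic probability. A minimal sketch, using purely hypothetical per-unit availability figures, shows why one spare unit makes such a difference:

```python
from math import comb

def redundant_availability(n_needed, n_installed, unit_avail):
    """Probability that at least n_needed of n_installed independent
    units are up, each available with probability unit_avail (binomial)."""
    p_fail = 1 - unit_avail
    return sum(
        comb(n_installed, up) * unit_avail**up * p_fail**(n_installed - up)
        for up in range(n_needed, n_installed + 1)
    )

# Hypothetical: 4 units of capacity required, each unit 99% available.
no_redundancy = redundant_availability(4, 4, 0.99)  # N configuration
n_plus_1      = redundant_availability(4, 5, 0.99)  # N+1 configuration
```

With these toy numbers, the N configuration is available about 96.1% of the time, while the N+1 configuration exceeds 99.9% – and the spare unit also lets any one unit be taken offline for maintenance without dropping below N.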

Business continuity, reliability, and peace of mind

As the lifeline of any modern-day organization, data is priceless, and any downtime in access to data has a real-world impact in the form of a tarnished reputation or outright financial losses. So, despite a marginally higher cost, MTDCs make for a better business decision than on-premise data centers. From an operational perspective they offer better business continuity, and an MTDC that provides end-to-end management also delivers reliability and, most importantly, peace of mind.

Smart data storage: How MSMEs, startups can channelise limited resources for maximum benefits amid Covid

Technology for MSMEs: Outsourcing to a third-party data center and cloud service provider helps small businesses utilize modern IT infrastructure and up-to-date IT services. Startups have long depended on data centers and cloud solutions to run smoothly without having to establish additional technology infrastructure.

Technology for MSMEs: India is proving to be a land of promise for new-age companies after the Government of India’s massive push on the Startup India campaign. The year 2019 was a big one for Indian startups, with technology startups in the country raising a record $14.5 billion from Indian and international investors, according to a report by Tracxn. However, the global Covid-19 pandemic has brought many unexpected challenges for the startup ecosystem, and startups are now looking to technology solutions to combat these challenges and come out of the downturn successfully.

Over the years, startups have depended on data centers and cloud solutions to run smoothly without having to establish additional technology infrastructure of their own. In the current scenario, startups must channelize their limited resources intelligently to reap maximum benefits: continuity of operations, customer acquisition, and expansion. Outsourcing to a third-party data center and cloud service provider helps small businesses utilize modern IT infrastructure and up-to-date IT services. Here are some reasons why data centers and cloud solutions are paramount for small businesses and startups:

Scalable data storage

Data generation is increasing at a rapid pace. Storing data on their own servers can be difficult and calls for additional investment every time there is a need for more storage capacity. When a startup uses a colocation data center to fulfil its storage needs, it can increase capacity whenever required and as quickly as the need arises, simply by altering its current plan in line with the estimated increase. What’s equally important is the ability to scale down when demand is low, so that the company saves on infrastructure costs when demand diminishes.

Reliable connectivity

A robust multi-carrier network ensures that startups have 24×7 access to their stored data and workloads. Multi-tenant data centers enable small businesses and startups to enjoy the features of a modern data center that traditionally only big companies could afford. Startups gain technology stability, boosted performance and high-end hosting capabilities with the latest software applications.

Improved security & compliance

Data is an asset for every company. As operations grow, businesses must take appropriate measures to secure their data. Colocating at a third-party data center helps startups safeguard their own data as well as client data, thanks to the strict access protocols and industry standards adhered to by data centers.

Additional managed services

A business needs much more than data storage and data management for a robust IT infrastructure. Multi-tenant colocation providers serve as integrated managed IT providers alongside their storage facilities, offering, for example, cloud computing, managed security and managed IT services. These services allow small businesses to work with big data analytics and extract potential insights at a small price, which ultimately leads to improved efficiency and productivity. Businesses are planning to explore innovative and cost-effective BCP solutions and are inclined to move to ‘Anything as a Service’ (XaaS) to harness new technologies, giving them a strong suite of services and support from service providers to drive their business growth.

Enhanced cost efficiencies

Last but not least, opting to outsource data center services helps startups save a lot of money. One of the most common reasons startups cease operations is a shortfall of funds, and building and maintaining an on-premise data center takes investments of money, manpower, and time that small businesses may not be able to afford. Data center service providers let them pay as per usage and work on an OPEX model instead of blocking funds in a capital investment.
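The CAPEX-versus-OPEX trade-off can be sketched with a back-of-envelope comparison. The figures below are purely hypothetical and only illustrate the shape of the decision, not real market prices:

```python
def cumulative_cost_captive(years, build_cost, annual_opex):
    """Own server room: large upfront build cost plus yearly running costs."""
    return build_cost + annual_opex * years

def cumulative_cost_colo(years, annual_fee):
    """Colocation: pay-as-you-go yearly fee, no upfront capital blocked."""
    return annual_fee * years

# Hypothetical figures for a small server footprint:
BUILD = 500_000   # one-time build-out of an on-premise server room
RUN   = 60_000    # yearly power, cooling and staff for the captive setup
COLO  = 120_000   # all-inclusive yearly colocation fee

for yr in (1, 3, 5):
    print(yr, cumulative_cost_captive(yr, BUILD, RUN), cumulative_cost_colo(yr, COLO))
```

Under these toy numbers, colocation stays cheaper for roughly the first eight years, and, just as importantly for a cash-strapped startup, no capital is locked up on day one.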

A startup is known for its innovative ideas, raw energy, incredible passion, amazing talent, and hunger to reach the top. But the disruption that Covid-19 has brought to an already highly competitive market requires startups to be leaner than ever and to maximize their resources. Hence, reaching out to a colocation data center service provider for a highly scalable and efficient infrastructure layer and best-in-class cloud services will enable them to succeed in the ecosystem.

Source: https://www.financialexpress.com/industry/msme-tech-smart-data-storage-how-msmes-startups-can-channelise-limited-resources-for-maximum-benefits-amid-covid/2022351/

Is India the Next Hyperscale Data Center Destination?

Data consumption and data generation in India are growing exponentially. We have seen unprecedented growth in mobile internet penetration owing to cheap data tariffs: the country’s internet penetration has crossed 30% and is rising rapidly. As per a recent study by Ericsson, data traffic per month will grow at a CAGR of 23%, from 4.6 exabytes in 2018 to 16 exabytes in 2024. That means almost 18 GB of data per user per month, fuelled largely by rich video content.
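Ericsson's projection is straightforward compound growth, and the figures above can be sanity-checked in a couple of lines:

```python
def project_cagr(start_value, cagr_pct, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return start_value * (1 + cagr_pct / 100) ** years

# 4.6 exabytes/month in 2018 at 23% CAGR over 6 years (2018 -> 2024):
projected_2024 = project_cagr(4.6, 23, 6)
```

This yields roughly 15.9 exabytes per month, consistent with the cited figure of 16 exabytes by 2024.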

Data explosion is further driven by various digitisation initiatives of the Indian government such as Smart Cities and Digital India, and rapid digital transformation of various industries such as financial services, telecom, online food delivery apps, e-commerce and even the manufacturing sector.

On the enterprise side, public cloud adoption is disrupting traditional data storage and management practices. IDC reports that by 2022, 40% of new enterprise applications developed in India will be cloud-native, and by 2023, the top 4 clouds ("mega-platforms") in the country will be the destination of choice for 50% of workloads. In addition, as the Internet of Things, Artificial Intelligence and Machine Learning get increasingly woven into the enterprise fabric, there is an added load on applications, which is driving the need for data centres.

As the volume of generated data rises, enterprises, OTT and cloud players will need a robust backend infrastructure that can effectively cater to user demand. Availability, scalability and reliability – the age-old tenets still apply, but the magnitude of dependence on them has increased significantly.

Adding to this demand is data localisation: companies are now required to store critical data of Indian users within national boundaries. This regulatory requirement will help with better management, access and sovereignty of the data, but it will also require OTT players, cloud service providers, social media, e-commerce, global offshore centres and search engine players to partner with Indian data center service providers to meet their infrastructure needs.

Across the globe, data is exploding at high velocity, with no signs of data generation slowing down in the near future. Scalability is thus an absolute necessity for data infrastructure, and hyperscale data centers as a phenomenon are gaining popularity worldwide. In 2015, there were 259 hyperscale DCs globally; the count currently stands at approximately 450 and is projected to cross 628 by 2021. Nearly half of hyperscale data center operators are located inside the U.S. India has much catching up to do, but the outlook is very positive.

In the present scenario, the existing players have been reacting to demand, adding capacity (both space and power) as the need arises rather than building purpose-built hyperscale data center parks.

However, the situation will improve drastically, as some of the biggest Indian conglomerates, such as the Hiranandani Group and Adani, have announced plans for integrated hyperscale data center parks. These organizations bring strengths such as ownership of large land parcels across India, construction capabilities and scalable power generation and distribution infrastructure.

Adani Enterprises has announced plans to build large data center parks in Andhra Pradesh over the next 20 years. International data center firm Colt has announced an upcoming 100MW hyperscale data centre facility in Mumbai. Ascendas-Singbridge Group will also be investing $1bn in new data centre builds across India over a period of five years. Many others are certain to join the bandwagon.

At Yotta Infrastructure, we have laid out a plan to build 3 data center parks comprising 11 hyperscale data center buildings with a combined capacity of 60,000 racks over the next 5-7 years. Our first data center will go live by the end of December 2019.

Market analysis firm BroadGroup believes that the data centre capacity in India is set to increase by almost 68% from 2018 to the end of 2020. Over the next few years, Mumbai, Bangalore, Chennai and Hyderabad will witness major investments in DC infrastructure by local and international players in the market.

As India goes through a complete digital transformation, from its public sector to private companies, and with internet users increasing at breakneck speed, the majority of this growth will be linked to hyperscale data centers that can accommodate and process the large volumes of data generated.

Decoding Hyperscale Data Centers

You might not realize it, but the amount of data we consume and create is leading to a data explosion. According to the latest ‘Data Never Sleeps’ report by DOMO, by 2020, 1.7 MB of data will be created every second for every person on earth. Storage giant EMC claims that there will be around 40 trillion gigabytes of data by next year. These staggering numbers almost feel unreal and abstract – much like the data centers where all this data is physically stored.

Data Centers – The Unsung Heroes

Data centers across the world have been in the background doing their work round the clock while we have been busy surfing the internet, using instant messengers or binge-watching on Netflix. Not too long ago, data centers were treated as little more than processing and storage space. However, with the advent of cloud, big data and analytics, data centers are finally taking center stage in the IT world.

Hyperscale Data Centers are the cool kids on the block.

But What Exactly Are Hyperscale Data Centers?

Well, as the name suggests, hyperscale is the ability to scale at hyper speed to meet hyper demand – the ability to add capacity quickly and efficiently, with speed to market as a priority. Increased space, power, computing ability, memory, networking infrastructure and storage resources with optimized performance is how one would generally define a hyperscale data center.

For example, while a regular data center (DC) may support hundreds of physical servers and thousands of virtual machines, a hyperscale facility will support thousands of physical servers and millions of virtual machines. IDC defines a facility as hyperscale if it has at least 5,000 servers and no less than 10,000 square feet of space, but hyperscale data centers are generally much larger.

To give you perspective, Microsoft’s hyperscale DC in Quincy, Washington has 24,000 miles of network cable – nearly enough to go around the earth – and its Azure data center in Singapore has twice that, as well as enough concrete to build a sidewalk from London to Paris. Facebook is planning a mega-hyperscale data center in Singapore that will be 11 stories tall and span 1.8 million square feet. Yotta is going live with India’s largest data center at 8.2 lakh sq. ft. and 7,200 racks.

Going Beyond Scale

Apart from sheer size, one of the biggest advantages of a hyperscale DC is scalability. For a legacy system, scaling up at a rapid pace is a big challenge; a hyperscale data center, on the other hand, can handle horizontal or vertical scaling efficiently with minimum fuss. It improves uptime and load times for end users and easily runs high-volume workloads that demand substantial power. A top layer of analytics and machine learning is typically added in a hyperscale DC.

As efficiency is the mantra of a hyperscale DC, automation is inevitable. Companies that build and operate these DCs generally focus heavily on automation and self-healing processes: the resulting system is so controlled and automated that the inevitable breaks and delays in the environment correct themselves, driving significant operational efficiency.

Power efficiency is another pillar of a hyperscale data center. A hyperscale facility maximally optimizes its power architecture, bringing down costs and environmental impact significantly. It also optimizes airflow throughout the structure, ensuring that hot air flows in one direction and often reclaiming the heat from that exhaust flow for reuse. The power usage effectiveness (PUE) of a hyperscale facility is thus much lower – and much greener – than that of traditional DCs.
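PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, so a value of 1.0 would mean zero cooling and power-distribution overhead. A quick illustration with hypothetical meter readings (the readings and the typical ranges in the comments are illustrative, not figures from this article):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings:
legacy     = pue(18_000, 10_000)  # chillers, lighting etc. add 80% overhead
hyperscale = pue(11_500, 10_000)  # optimized airflow and heat reuse, 15% overhead
```

With these readings, the legacy-style facility lands at 1.8 while the hyperscale-style one lands at 1.15 – broadly the kind of gap reported between traditional and highly optimized facilities.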

A Gold Standard: Here to Stay

According to a Linesight whitepaper, ‘Hyperactive Hyperscale: The next step of the digital revolution’, these facilities are expected to account for more than half of all data centre traffic within the next two years, as data storage requirements grow by 40% annually. JLL reports that the hyperscale market is expected to grow at a compound annual rate of 26.3 percent to $80.6 billion by 2022.

Currently, the hyperscale market is dominated by giants like Google, Microsoft, Amazon and Facebook. However, with prominent Indian conglomerates joining the data center bandwagon, hyperscale DCs will become the norm rather than a trend.

How will Data localization impact the Data Center Market in India?

India – the Land of Rising Data

India is currently one of the largest generators of data. Thanks to our young demographic and deep technology penetration, our data consumption is expected to grow at a rate of 72.6% by 2020, according to a study by Assocham-PwC. Digital data in India was around 40,000 petabytes in 2010 and is likely to shoot up to 2.3 million petabytes by 2020, twice as fast as the global rate. There is an ongoing debate in the country about storing this enormous amount of data within national borders.

Data Localization – Gathering Momentum

The Data Protection Act suggested by the Srikrishna committee aims to protect the data of citizens by storing it locally. Another reason for data localization is to help the government form better domestic policies for its citizens; the RBI has already mandated that companies store all financial data locally.

This move has led many companies to ramp up their data center capacity in the country. Amazon has invested around $197 million (Rs 1,380 crore) in its data services arm in India. Similarly aggressive plans have been announced by ByteDance, Google, Microsoft and many financial institutions. Flipkart, too, has been strengthening its technology infrastructure, especially after its acquisition by Walmart: it opened its third data center, in Hyderabad, in April this year, after Chennai and Mumbai. The Securities and Exchange Board of India (SEBI) has also announced its intention to issue guidelines mandating foreign entities to store data pertaining to India locally.

This has generated a lot of interest in the data center business, among large conglomerates and global tech giants.

Rush for the Data Center Pie

The Hiranandani Group recently entered the data center space with Yotta Infrastructure, with plans to build 3 data center parks across Mumbai, Navi Mumbai and Chennai with a capacity of 60,000 racks. The Adani Group has committed to developing large data center parks in Andhra Pradesh over the next 20 years. Existing data center players like Sify, STT, CtrlS and NTT are planning to ramp up capacities, and international players like Colt and Bridge have announced their first data center projects in India.

Most of the players have stated in the media that the government's decision to move forward with data localization is one of the major reasons they are bullish on the data center market. India needs to ramp up its data center capacity by at least 15 times in the next 7 to 8 years to handle the massive data influx that localization will bring within its borders.

How Does this Help Local Businesses?

The next logical question is: will data localization help Indian businesses? It certainly will. Storing data locally will reduce network latency and improve speed. With all data stored locally, and with many other strong market drivers at work – growth of user data, e-commerce, the rise of cloud and so on – companies can expect the availability of quality talent at lower cost. Some of the newer providers with resource ownership will be able to build massive data center capacity with much higher scalability and quality, yet at much lower cost and with round-the-clock personal service. Big Basket, the online grocery store, shifted its data centre from Singapore to Mumbai and saw up to 10 per cent improvement in transaction efficiency.

Comparing the cost of manpower, real estate and bandwidth, India is at least 60% cheaper than the US or Singapore, and these savings ultimately go to the customers looking for rack space. With large corporate houses that own power generation and distribution capacities entering the market, the cost of data centers should also reduce significantly. Some providers will also utilise selective government benefits on the duties and taxes levied on power and on imported equipment and services.

India is a viable and economical place to build and operate large-scale data centers. Hopefully, the government will stick with its decision to go ahead with data localization, and we will soon be storing our data in our own land.