The ColoPulse Knowledge Base

What is Colocation?

Looking for general information about colocation? Start Here.

What Options Are Available?

Colocation is the practice of outsourcing data center services to third party providers who are in the business of building and operating data centers. Companies may consider colocation to offset risk and cost or reallocate a company's resources to target its core competencies.

Choosing a Location

When considering location, organizations often prioritize based on proximity to their operations. What will the response time be to the new facility? If a task is something the colocation provider cannot handle, how quickly can your staff fly in and arrive at the data center? Should air travel be unavailable, how accessible is the location by highway? Beyond accessibility, latency - the time it takes for data to travel from an origin to a destination - has a large impact on the location of a data center. What applications are running, and where are their users? A data center supporting a West Coast sales force would be better suited to Arizona than Virginia, for example. The user base is the most important user of the data center. While having a support staff within a safe distance of the data center is also key, if the users aren't getting a good experience with applications, then the data center is in the wrong location (or doesn't have big enough pipes!).

Some organizations may choose a location in order to diversify their geographic footprint. This practice has numerous benefits, such as increased disaster protection. A hurricane crashing into the Northeast is unlikely to affect Houston or Phoenix data centers. Many large financial institutions observe this practice by spreading their data centers among multiple regions, where some sites serve as primary locations and others serve as disaster recovery locations.


What are Service Level Agreements?

One of the most fundamental features of any data center design is how it protects against service outages and other interruptions to the landlord's essential services. Various redundancies are employed to protect against failures and to maintain uptime reliability for the tenant. These redundancies are often quantified in terms of "N", where "N" represents the number of components necessary to support the data center. For example, an "N+1" cooling infrastructure setup means the data center has one spare piece of cooling equipment on standby in case of a failure. A "2N" electrical infrastructure (also called "A/B" feeds, at the rack level) would mean that for every electrical component, a duplicate piece exists in a separate, discrete lineup. Some of the most mission critical data centers employ "2N+2" setups -- duplicate redundancy and two spares -- however this setup is very costly and may be more than what is truly needed by the customer.
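The "N" notation above translates directly into equipment counts. A minimal sketch (the function name and the example of a four-unit cooling plant are illustrative, not from any provider's specification):

```python
# Units of equipment required under common redundancy topologies,
# where `n` is the number of units needed to carry the full load.
def units_required(n: int, topology: str) -> int:
    """Hypothetical helper showing how N, N+1, 2N, and 2N+2 scale."""
    return {
        "N": n,             # no redundancy: any failure means an outage
        "N+1": n + 1,       # one spare unit on standby
        "2N": 2 * n,        # a full duplicate lineup (A/B feeds)
        "2N+2": 2 * n + 2,  # duplicate lineup plus two spares
    }[topology]

# e.g. a cooling plant that needs 4 units at full load:
print(units_required(4, "N+1"))   # 5
print(units_required(4, "2N"))    # 8
print(units_required(4, "2N+2"))  # 10
```

This makes the cost trade-off concrete: moving the same four-unit plant from N+1 to 2N+2 more than doubles the equipment purchased and maintained.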

Regardless of technology and levels of redundancy, every data center will guarantee its uptime in a Service Level Agreement (SLA). The SLA specifies exactly how much downtime a data center can have in a year. Data centers employing an "N+1" topology will likely have a lower SLA than a "2N" facility. Offerings range from "utility grade power" with a lower SLA, to a 99.999% (five nines) SLA promising five minutes and fifteen seconds of downtime or less per year, all the way to 100% uptime SLAs. These SLAs give a user a contractual guarantee of their uptime. Should this guarantee be broken, the data center will offer various credits or other means of refunding the user.
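The downtime figures quoted in SLAs follow directly from the uptime percentage. A quick sketch (assuming a 365-day year; the function name is illustrative):

```python
# Convert an SLA uptime percentage into the maximum downtime it permits.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 (ignoring leap years)

def max_downtime_seconds(uptime_percent: float) -> float:
    """Maximum downtime per year allowed by an uptime SLA."""
    return SECONDS_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.9, 99.99, 99.999):
    minutes = max_downtime_seconds(sla) / 60
    print(f"{sla}% uptime -> {minutes:.2f} minutes of downtime per year")
```

Five nines works out to about 315 seconds per year, which is where the "five minutes and fifteen seconds" figure comes from; each extra nine cuts the allowance by a factor of ten.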

For most mission-critical companies, a few hours of downtime may mean a severe inconvenience for their customer base and, consequently, millions of lost dollars. In most cases, no amount of remedies paid out under an SLA will cover a customer's losses, as the impact can go far beyond the servers themselves.

Having a strong SLA does not provide much security if the data center isn't built to support the SLA. Further, verifying a data center provider's claims is important. Do they really offer 2N, or is that simply two whips to the cabinet with a less redundant system employed upstream? Understanding the power configuration throughout the facility is important when making your data center decisions.

A Variety of Options

As the colocation industry matures, there are myriad options to consider. The first consideration is likely location. Your organization may want geo-diversity among its installations; major financial institutions often have a primary data center location and a backup facility across the country to mitigate their disaster risk. There may also be a human element to consider when choosing a location. An organization might try to locate near its IT team to allow for a rapid response time. Also, what kind of latency will the location provide to the primary users of the facility? For example, is the data center powering a sales force based on the West Coast? There are several options to consider when choosing where to locate IT.

Another major factor is at what scale to deploy. Tenants can deploy anything from a single 1U server, to an entire rack, cage space, dedicated pods or rooms, all the way to entire data halls and full-scale data centers. No requirement is too large or too small. Scale comes down to identifying what an organization needs and matching a provider to meet that current need and future growth.

Moving past location and scale, the data centers themselves are all built a little differently. As technology advances at a rapid pace, there is no single best-practice design. For example, raised floor is the most common design choice among data centers, though strong arguments can be made that slab is a better choice. Going further, some data centers support modular deployments, where, for example, a structure resembling a shipping container houses the servers, power, and cooling equipment. Each solution fits different needs.

One of the most fundamental features of any data center design is how it protects against outages. For example, some facilities may employ an "A/B" or "primary/redundant" power supply for redundancy, further backed by backup generators. Regardless of technology, every data center will offer a Service Level Agreement (SLA). Offerings range from "utility grade power" with a lower SLA to a 99.999% (five nines) SLA promising five minutes and fifteen seconds of downtime or less per year, all the way to 100% uptime SLAs. For some, a few hours of downtime may mean a severe inconvenience; for others it may mean millions of lost dollars.

Users also have flexibility in the level of service the facility provides in managing their equipment. Some tenants may choose to have no extra service by using a "pure play" provider that simply supplies power and cooling, leaving the tenant responsible for power distribution, switches, server maintenance, and so on. Many providers offer a "remote hands" service, where onsite IT technicians are available to perform certain updates and maintenance when necessary. Other managed services layered on top may involve increased outsourcing of common IT functions such as firewall management or tape backups.

Different Data Center Technologies

All data centers are built a little differently. The technology behind data center infrastructure changes at a rapid pace; there is no single best practice for building a data center. The result is that a potential user has a wide range of data center technologies to choose from. This flexibility is beneficial but can make deciding among different providers and locations difficult.

Raised floor is an industry standard, but other technologies exist with their own merits. Some data centers are built on concrete slab. In fact, very strong arguments can be made favoring slab over raised floor for reasons such as cooling capacity and security. Which method is best depends on specific needs. At the extreme, some data centers support modular systems resembling shipping containers. These vary in that some contain just IT equipment while others have electrical and cooling infrastructure built in.

While all data centers are built a little differently, there are some common industry standards. One of the most commonly used design practices is the use of a raised floor. Cold air is pumped underneath the raised floor, and special tiles let it surface only where servers need it. Additionally, electrical and network wiring is often routed below a raised floor, allowing for a clean-looking data center. One of the main benefits of raised floor is the ease of expansion or modification. Cooling new server racks is as simple as changing a solid tile to a vented tile where new cooling is needed.

Another popular method is a modular or pod-based setup. Discrete container-like structures house IT equipment with built-in electrical and cooling infrastructure. These modular solutions allow for easy, rapid deployment. In very little time, a pod can be purchased fully loaded with servers and delivered via flatbed truck to even remote locations. These modular setups are very customizable and offer users a wide variety of environmental controls over their systems.

Other data centers opt for a concrete slab design. There is certainly a wide debate about which design is more efficient or preferable. In many cases, slab environments are used for high density deployments that opt for other cooling methods that don't use air. While higher densities can be reached, there is a certain level of commitment to rack layouts and configurations in a slab environment based on pre-existing ducting or more "hard-coded" cooling solutions whereas raised floor provides more flexibility with the option to move perforated tiles.

How has colocation changed in the past decade?

The history of colocation is tied to the history of the data center. At the start of the 21st century, the term "data center" may have referred to a set of servers sitting in an office building or a fortress-like building that was built, owned, and operated by large enterprises. At this time, colocation did not exist, at least not in the way it does today. Enterprises were investing in large data centers with 20-year life expectancies but finding that their technology was obsolete and inefficient within 5 years. Enterprises discovered that running their own data centers was a costly and complex endeavor taking focus away from their core activities.

The cost of building a new data center, the need for more data centers, and the difficulty of keeping technology up to date spurred a new industry focused solely on operating data centers. These data center providers were housing multiple tenants in the same data center: colocation. The industry slowly grew as enterprises started to become comfortable with outsourcing their data needs to these providers. Once institutional buyers became aware of the returns available in the data center industry, developers flocked to an industry that was primarily real estate centered at that time.


As it stands today, the colocation industry has developed myriad deployment options, almost to the point of commoditization. This flexibility empowers users, but gives them more to consider. The single most important goal of anyone seeking out colocation is to determine the best fit for their organization's needs; a colocation provider will become a long-term partner, and strategic alignment is paramount to a company's success.

What industry trends are driving the next wave of data center technologies?

One of the most dominant trends in the colocation industry is a flexible implementation strategy. A common practice is to build as little as possible, with infrastructure in place to scale up in the future. The goal is to deploy capital only when necessary and build modularly over time in order to support the growing demand for IT infrastructure and facilities. Analysts such as Gartner expect worldwide mobile device shipments to keep climbing, and the connection between pervasive data collection and big data analytics has left industry experts wondering whether IT infrastructure will be able to keep pace with accelerating business demands. Incremental additions also help avoid technological obsolescence, as each update phases in the newest technologies on a just-in-time basis.

Increasing power densities and higher performance servers are other trends that continue to be discussed in the industry. Ten years ago, one server rack could only consume so much power and many data centers were designed to cool that load. Today, however, newer equipment and engineering breakthroughs, such as cold-aisle containment, have allowed for more efficiency within the data center. While many customers do not have high-density requirements, providers who can support those higher loads will be more future-proof and retain a competitive advantage.

Increasing energy efficiency is trending for both environmental and economic reasons. Power Usage Effectiveness (PUE) measures how much total energy is used by a data center (power distribution and transformation losses, cooling loads, IT loads, and other associated facility loads) relative to the energy used specifically for the IT equipment. To improve their PUE, many new data center providers are leveraging "free cooling" - using outside air to dissipate thermal energy - making it advantageous for enterprise data center operators like Google and Facebook to identify regions where free cooling is abundant, regardless of proximity to local markets. Newer techniques are opening up new regions to tremendous efficiency gains.
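The PUE ratio described above can be sketched in a couple of lines (the function name and the kilowatt figures are hypothetical examples, not benchmarks):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.
    1.0 is the theoretical ideal, where every watt reaches the IT equipment;
    the overhead (cooling, distribution losses) is everything above 1.0."""
    return total_facility_kw / it_load_kw

# Hypothetical facility: 1,500 kW total draw, 1,000 kW at the IT load.
print(pue(1500, 1000))  # 1.5 -> 0.5 W of overhead per watt of IT power
```

Free cooling improves this number by shrinking the numerator: less mechanical chilling means less facility power consumed for the same IT load.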

Above all else, the demand for flexibility in the colocation arena shapes the way data center developers are thinking about new builds. The colocation industry, at its roots, is a means of turning capital costs into operating costs. Why build an expensive data center when data center construction is not a core competency? The conversation turns to gaining a higher return on dollars reinvested in a business's core competencies rather than on the large capital outlays necessary to operate a data center. With the rapid development of technology, that expensive capital investment may well become outdated if expansion for future technologies or deployment methods was not taken into consideration. Today, colocation providers are building to account for these challenges. Building data centers is their core competency; construction costs are modularized to defer future expenses - build what is needed today while planning for what could be needed tomorrow.

In simple terms, how does cloud affect colocation?

Enterprises that are considering more than colocation may look to managed services or cloud services to augment the "brick and mortar" aspects of the space. For the enterprise that does not want to own its own hardware or manage the operating systems and back-up processes of its equipment, managed service providers' offerings allow a greater degree of flexibility in freeing up resources. Cloud solutions can provide additional flexibility to the enterprise. Ask ten different people to define "cloud" and you will get ten different definitions. In the most basic terms, cloud services allow enterprises to expand or contract resources on demand to handle spikes in IT demand that may occur cyclically or unpredictably. Cloud services come in private/dedicated, public/open, and hybrid flavors, each with varying advantages and risks.

The What, Where & Why of Top Data Center Markets

Data centers are part of a rapidly changing environment. Data centers built ten years ago are, with few exceptions, technologically obsolete. How many of us keep computers for longer than five years, let alone cell phones? This trend is only going to accelerate.

The enterprise customer - meaning a large Fortune 1000 customer in dedicated IT space - is migrating out of older, legacy space, which is often operated in-house, into a colocation facility. Colocation facilities are massive "data warehouses" that are custom-built, ensuring there is enough power, Internet connectivity, and cooling to support the efficient operation of servers. Colocation facilities - "colos" for short - are built to meet the demands of today's IT users. Increasingly, more colos are supporting over 1,000 Watts per square foot, allowing more compute power to be operated in a smaller amount of space.

The "instant on" mentality of IT today drives the business models of the colo developers. Gone are the days when an operator would build out 100,000 square feet of space for IT equipment. Modularity is the buzzword. The largest data center REIT in the world, Digital Realty Trust, builds most of their new facilities in modular fashion by prefabricating the components of the data center off-site and delivering them in a "just in time" system. Modularity also has an impact in the construction of the building itself. Developers of this product type are more likely to build out a data center campus in phases, driven by demand, instead of erecting a massive behemoth hoping that customers follow suit. Another new model in the data center industry is containerization.

Containerization, another form of modularity, is the delivery of data center space in containers. These containers range from the standard shipping container one might expect to see arriving in the Port of Long Beach to a customized and optimized "module" of roughly the same size as a shipping container. There are nearly two dozen manufacturers in the containerized data center market. Some recognizable names include Dell and HP, as well as the Phoenix-based IO Data Centers. Containerization allows customers to buy what they need today and preserve some of their capital. Large companies, like eBay, have begun to embrace modularity and containerization as part of their IT strategy.

While raising the bar in the data center with new technologies, operators are pressed to lower the cost. End users are pushing systems manufacturers to increase the efficiencies in hardware. Current standards mandate that data center temperatures be maintained at no more than 80 degrees Fahrenheit. The next wave of technology will look at pushing that threshold higher. Intel, for example, has experimented in New Mexico with increasing the temperatures to as much as 92 degrees Fahrenheit, while exerting no control on humidity, and the semiconductor giant reported an estimated power savings of 67 percent. Like the hardware manufacturers, data center operators are tasked to push down the cost of construction. One avenue, as discussed, is through modularity. However, the larger question is "do we really need this much redundancy?" Data centers sometimes have as many as four extra systems to support the primary system in the event of a failure. This adds both capital and operating costs. Some operators are now choosing to build systems with less redundancy, and therefore fewer points of failure, at a lower cost. Data center customers are questioning whether a "Tier IV" system topology is still required as it once was.

Help me choose a location for my IT. How do I find a place for a data center? What makes a good data center location? These are questions that data center users ask as they begin to make the transition from older, legacy space to a colo environment. The answer depends on a company's industry. The data center industry has grown up around the companies in a market. Media and entertainment companies flock to Los Angeles. Entrepreneurial web start-ups tend to open up shop in Silicon Valley with easy access to capital. Opening their data center nearby is a logical first step. Financial services firms tend to locate near the exchanges in Chicago and New York.

Going further, an organization's application is a driving factor. A financial services firm is going to have drastically different requirements than a retail company, an e-commerce start-up, or a gaming platform. Take the finance vertical, for example. Financial firms that focus on options trading will likely end up in Chicago or New York, simply because they cannot afford to be far from the trading algorithms. Latency, or the time it takes to transmit data, plays a key role in the decision-making process. A few milliseconds can translate to millions on the balance sheet. Gaming companies that offer online networks must operate with geographic diversity. With locations in a few targeted markets, they can offer a smooth user experience to gamers.
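Why do a few milliseconds force firms into particular cities? Distance sets a hard floor on latency: light in optical fiber travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum). A rough back-of-envelope sketch, assuming a straight-line route (real fiber paths are longer, and switching adds further delay):

```python
# Back-of-envelope fiber latency floor.
# Light in glass covers roughly 200 km per millisecond.
FIBER_SPEED_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber for a given distance.
    Actual RTT is always higher: routes are not straight lines,
    and routers and switches add processing delay."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Roughly 1,150 km separate New York and Chicago as the crow flies:
print(f"{min_round_trip_ms(1150):.1f} ms")  # 11.5 ms best case
```

No engineering can beat this physical floor, which is why latency-sensitive trading firms colocate next to the exchanges rather than optimize from a distance.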

How much risk can an organization tolerate? A national defense contractor will require absolute redundancy in geographic diversity and within the facilities. However, a company website is not nearly as mission-critical an application. For a Bay Area start-up that wants a website, locating within San Francisco or in Silicon Valley to the south is not a bad option. The cost of power and risk of seismic events do not outweigh the convenience of being close to the IT equipment.

We are in a global market: does that affect data centers? Absolutely. For international companies operating their facilities within the United States, the networking and connectivity component plays a large role. The fiber optic network connecting the world lands at a few key locations on the coasts in the country. On the East Coast, New York and Miami are the key entry points to Europe and South America. West Coast cities like Los Angeles, San Luis Obispo, San Francisco, and Seattle as well as Bandon, Oregon connect the U.S. to the Far East - Tokyo, Guam, Taipei, Hong Kong, and Sydney. Locating near the key cities is important to maintain low latency. Within the country, a few key cities fall on the "backbone" of the Internet. As a rule of thumb, cities that have an NFL presence are considered key cities. These include LA, Phoenix, Dallas, Atlanta, Miami, San Francisco, Seattle, Denver, Chicago, and New York. Other cities fall on the backbone, but these are the "key" markets.

What else goes into choosing a location? There are a number of other drivers, including tax incentives, risk of natural disasters, cost of power, fuel mix, access to airports, availability of free cooling, and long-term water considerations.

Companies are looking for tax incentives and abatements to help ease the operating expenses of these facilities. The cost of purchasing servers and related equipment and the cost of annual software licenses far exceed the cost of building a facility. Some enterprise users gravitate to areas with tax-friendly climates specifically for this reason. Oregon has made recent headlines with the announcement of Amazon, Apple, and Facebook building facilities in the central and western regions of the state.

The cost of power causes a significant swing in the total cost of operations. Certain regions in California, for example, are nearly twice as expensive as the national average power rates. These savings are passed through to the tenants in a colo as well. As organizations seek to reduce their carbon footprint, the easiest way to do so is through their data center, typically the most "power hungry" operation for any company. Operations in the Pacific Northwest are heavy in hydroelectric power and are attractive for the environmentally conscious. Further, looking for the lowest cost of power is not the full story. While Denver offers fairly inexpensive power, the fuel mix is predominantly coal.


The most attractive data center markets in the country each have a unique history. Data center users gravitate to these markets for a variety of reasons.

Silicon Valley has a rich history of data center operations and is home to the most active data center market in the country. With an entrepreneurial aura, the Valley is home to many of the Web 2.0 startups of today like Facebook and Zynga as well as the tech titans that started the industry. The market is distinctly separated - one market on the peninsula and another in the Santa Clara / San Jose area. With over 400 tenants in the facility, 365 Main, a Digital Realty Trust-operated colo facility in downtown San Francisco, represents the quintessential "retail colocation" offering for small and medium-sized businesses, whereas Digital Realty Trust, DuPont Fabros Technology, CoreSite, Quality Tech, and Vantage Data Centers represent the wholesale market for larger users South of the Bay. The Bay Area, despite the increased cost of power and heightened risk of earthquake, serves as an incubator for data center operations. As the local companies mature, they tend to look to other markets outside the state to mitigate risk and scale operations.

Los Angeles is the media and entertainment hub for data centers on the West Coast. Online streaming giants Netflix and Hulu have data center operations within the city. Unlike Santa Clara, Los Angeles grew up as a telecom market. Many of the buildings in downtown LA operate as data centers, but their primary purpose was originally hosting networking gear. The prime example, One Wilshire, represents, perhaps, one of the most important connected buildings in the country. One Wilshire has over 200 connections via the Transpacific fiber cables to the Far East, enabling low latency for international operations. The Los Angeles market, unlike the wholesale giants in Silicon Valley, is predominantly a retail market focusing on smaller businesses looking for a few cabinets of IT space as opposed to entire rooms. The El Segundo market near LAX and Irvine to the south are growth areas with new space coming online.

Located along the long haul fiber network, Dallas is an attractive and active data center market. Bolstered by short flight times and low latency to most of the country, the market in Dallas has grown significantly. A number of the nation's top colocation operators have facilities in Dallas, including Digital Realty Trust, CyrusOne, Equinix, and more. The market has embraced the existing Fortune 500 companies with headquarters in the Dallas/Fort Worth area. Many of the colocation facilities are home to the oil and gas conglomerates.

Phoenix has rapidly emerged as a top market for colocation in the United States. The Valley of the Sun has had a longstanding history of enterprise and corporate data centers with companies like American Express and Charles Schwab building their first facilities here over twenty years ago. The lack of natural disasters, inexpensive and reliable power, and position on the nation's Internet backbone make it an attractive market. With the addition and expansion of national colocation operators like San Francisco-based Digital Realty Trust, Dallas-based CyrusOne, and activity by CenturyLink Technologies Solutions and ViaWest, eyes have shifted to the city as a national player in the colocation arena. Other data center start-ups have recently broken ground on new builds for the same reasons. Other operations within the city include IO and Phoenix NAP. IO manufactures their modular data center product in Phoenix.

In the Midwest, the Chicago data center market has established itself as one of the primary hubs in the country for financial institutions and high-volume trading firms. Latency is a key metric within and outside of the city. Proximity to the exchanges and trading partners is key. The Chicago Board of Trade has built its own colocation facility to offer premium access to traders. Colocation facilities and private data centers tend to gather in clusters around the fiber in both downtown and suburban Chicago.

Along the East Coast, the Northern Virginia market has made a name for itself. A cluster of cities near the nation's capital, including Reston, Chantilly, Herndon, Sterling, Ashburn, Vienna, and others, are home to campus-like data center environments. Both retail and wholesale colocation operators have a presence in the market, including: CoreSite, CyrusOne, Equinix, Digital Realty Trust, SunGard, DuPont Fabros Technology, Savvis, Sabey, RagingWire, Net2EZ, Telx, Rackspace, and Latisys. The market was originally thought to be a boon for government-based IT operations. However, budgetary concerns have led the government to undertake a massive consolidation of data centers, leaving substantial supply in the market. However, a number of larger enterprises have made the decision to place their IT in this market due to the availability of free cooling, relatively low cost of power, as well as an attractive tax climate.

The New Jersey market is a natural complement to New York. In a post-9/11 world, companies that originally housed IT operations in Manhattan sought a less risky alternative. With close proximity to Manhattan and a robust infrastructure, Northern New Jersey rose as one of the top data center markets in the country. With a long roster of enterprise facilities, Northern Jersey caters to the financial sector much like Chicago. More recently, the market has established itself as a wholesale and retail colocation hub with the addition of groups like Net2EZ, IO, Sentinel, and new construction from DuPont Fabros Technology. The expensive power and lack of attractive tax programs are obstacles for companies choosing to locate there, but the tie to Manhattan outweighs the hurdle for some. As Los Angeles has its One Wilshire, New York is home to several carrier hotels. Recognizable addresses in Manhattan include 111 8th Avenue, which was purchased by Google in late 2010 for $1.9 billion, and 60 Hudson, another carrier hotel. 111 8th Avenue is among the most connected buildings in the world, as it serves as a significant landing point for fiber in New York.

The term "best data center market" is relative. The best market is not always the "top" market. IT departments, historically, have chosen markets and sought out data center space. However, the process should be strongly aligned with their goals as an organization, running through the checklist of their priorities and then determining what market best fits their requirement.