At CtrlTech, we have long experience in a rapidly changing environment, providing quality services throughout the Middle East and the world.
For clients from different vertical sectors, we have a long history of delivering successful business results.
Our slogan, "Excellence," drives this success, along with a culture of going above and beyond, no matter what!
Our company consists of passionate engineers and certified professionals able to handle every aspect of the environment industry, from portable to industrial dehumidifiers. We all live by our #Excellence motto and know what it takes to succeed.
Standards change at every level of the data centre, especially in design, installation, and maintenance. The most recent development is distributing electricity from the data centre UPS to the PDU (power distribution unit) via connected cable assemblies. Using connected cable assemblies simplifies the entire data centre, lowering costs and installation time while increasing profitability.
Had the fire been allowed to burn unchecked, it could have spread throughout the facility, inflicting catastrophic damage on the computer system and the structure and destroying the customer's manufacturing unit. Data protection was a top priority: the system held all donor records and donor test results, as well as financial and personnel details for all Victorian branches, including mobile blood banks.
Smoke detectors spotted the fire in its early stages, triggering the FM-200 fire suppression system. The FM-200 discharged in less than ten seconds, rapidly extinguishing the fire and averting further damage.
The computer system was unharmed and continued to work normally, with no data loss. Within 12 hours, Kidde's after-hours emergency service specialists were able to recharge and re-start the system.
A data centre is a long-term investment, and locating it involves consideration of both the business's and the neighbouring community's demands.
With the data centre sector experiencing rapid expansion, the imperative to find the ideal location and quickly build capacity has never been higher. To deploy capacity where digital businesses require it, site selection combines financial modelling, community relations, engineering design, construction planning, and even a little fortune reading.
The site selection process must take a range of elements into account, not all of which are straightforward.
As you begin the process of locating a new data centre, keep these five rules in mind.
Over a third of businesses estimate that server downtime costs over $1 million per hour, and 15% put the cost at more than $5 million. Data centres are vulnerable to the same forces of nature as any other structure, but the consequences of a catastrophic failure can be far higher. Physical resilience is critical, and so is disaster response. Consider the following factors.
Climate change has increased the frequency of extreme temperatures, record precipitation, and damaging storms over the last several years. The impact of climatic forces extends beyond the wind and rain damage of the storm itself. Hurricane Harvey's 2017 landfall on the Texas coast produced storm surges of up to 12 feet, and after severe rainfall in 2019 the Mississippi River remained at flood stage for nearly three months. Areas once deemed safe may no longer be so in years to come.
Climate can also affect the surrounding environment, which in turn affects the data centre. Heat waves, for example, can cause brownouts, forcing data centre managers to rely on backup generators, and hurricane- and tornado-related blackouts have a similar effect. Extreme weather may also affect key personnel's ability to report to work. Site selection must account for these concerns and incorporate suitable backups and redundancy.
While the west coast of the United States is notorious for earthquake activity, pockets of frequent seismic activity exist throughout North America.
Indeed, Missouri and South Carolina are two of the most seismically active states in the United States. Regular shaking also occurs in portions of the southeast, northeast, and northern Rocky Mountains.
Seismic activity is often highly localised. For instance, the US Geological Survey puts the probability of a magnitude 7 earthquake affecting the San Francisco Bay Area in the next 30 years at 51 percent, yet Sacramento, less than a 90-minute drive away, is considered one of the state's least hazardous areas. Beyond property damage, seismic events can cause interruptions such as power outages, water line ruptures, and damage to roads and bridges. Risk assessments and proper failover measures must be incorporated into site selection.
Utilize this data centre checklist to establish a resilient IT power infrastructure. Additionally, apply these recommendations to create a faultless IT power infrastructure architecture.
When it comes to constructing IT power infrastructure in a place where electricity is scarce, the more you require, the less seems to be available. This makes it critical to get the IT power infrastructure design right from the start. The business case should be developed methodically and is typically strengthened by the promise of significant financial and energy savings, as well as gains in efficacy and efficiency. The following data centre checklist will help you maximise the efficiency and productivity of your organisation's IT power infrastructure design.
Power backup is a vital component of ensuring the data center's 100 percent availability. The information technology power infrastructure should be constructed in accordance with the following specifications:
Tier-2: A tier-2 data centre configuration has two UPSs (uninterruptible power supplies) running in parallel to provide redundancy. Thus, if one fails, the other automatically takes over via a bypass.
Tier-3: This configuration includes three UPSes to assist the company in ensuring redundancy and concurrent maintenance. It requires at least n+1 redundancy, which means that when one path is active, the other is inactive.
Tier-4: This configuration includes four uninterruptible power supplies to ensure concurrent maintenance and fault tolerance. In addition to the supply-side redundancy, starting from the first upstream point you must have two sets of input power: two DGs (diesel generators) and four UPSes (2+1 for the fault-tolerant UPS, n+1 for the redundant UPS that can be maintained concurrently).
Tier-3 and tier-4 configurations must have zero potential difference (PD) and no voltage variation between earth and neutral. The transformer integrated into the UPS should be situated within 75 feet of the load; otherwise it can introduce a harmonic PD in the voltage, which results in noise. Voltage fluctuations can be catastrophic for the servers and, indirectly, for the IT power infrastructure.
The configuration of the IT power infrastructure is determined by the IT workload, i.e., the power factor of the IT servers, storage, and networking equipment. Server configurations aid in the distribution of power and load by changing load configurations to work in conjunction with power and cooling requirements.
Ideally, choose a UPS with energy optimizers. This yields an intelligent integrated infrastructure (III) that senses and adapts dynamically to the load, increasing overall efficacy and efficiency. The lower the load, the lower the product's efficiency: if the load is evenly distributed among four UPSes, efficiency drops to roughly 92 percent, whereas energy optimizers built into the UPSes will consolidate 80 percent of the load on a single UPS and keep the remaining three in idle mode, raising efficiency to 96 percent.
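The consolidation effect can be sketched numerically. The efficiency curve below is purely illustrative, not data for any real UPS model; it simply encodes the general rule that a UPS is least efficient at light load, and it assumes idle units draw no power (real eco-mode units draw a little).

```python
def ups_efficiency(load_fraction):
    """Illustrative (made-up) efficiency curve: low at light load,
    plateauing toward 96 percent as the unit approaches full load."""
    return 0.96 * load_fraction / (load_fraction + 0.05)

def overall_efficiency(unit_load_fractions):
    """Delivered power divided by input power across all UPS units.
    Idle units are assumed to draw nothing (a simplification)."""
    delivered = sum(unit_load_fractions)
    consumed = sum(f / ups_efficiency(f) for f in unit_load_fractions if f > 0)
    return delivered / consumed

# Same total load (0.8 of one unit's capacity), two strategies:
spread = [0.2, 0.2, 0.2, 0.2]        # evenly across four UPSes
consolidated = [0.8, 0.0, 0.0, 0.0]  # 80% on one unit, three idle

print(f"spread:       {overall_efficiency(spread):.1%}")
print(f"consolidated: {overall_efficiency(consolidated):.1%}")
```

Under this toy curve, consolidating the same load onto one unit comes out markedly more efficient than spreading it thin, which is the behaviour the energy optimizers exploit.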
A completely dust- and water-free environment is required to support the data centre's IT power infrastructure, even beneath the raised floor. Cleaning should be performed with vacuum cleaners fitted with high-efficiency particulate air (HEPA) filters, as blowers are hazardous. Ensure there are no water leaks in the data centre, as water destroys printed circuit boards (PCBs).
Consider the entire IT power infrastructure when developing it. The stronger the foundation, the more durable the system's eco-setup.
This is when adaptive architecture enters the picture. Three levels comprise adaptive architecture: business, services, and IT infrastructure. The applications or ERP are operated on the uppermost business layer, followed by the service layer, which comprises mail services, print services, and RDBMS services. The final layer is the information technology infrastructure, which includes servers, storage, networking, power, and cooling equipment.
Facility vendors should examine the following when enhancing the IT power infrastructure: How is the power infrastructure currently performing? Is it a greenfield venture or a legacy build-up? Are you looking to migrate, or to graduate from a modest setup to a larger one?
A design for an adaptive architecture must be flexible and scalable without incurring incremental losses from existing power systems. System availability and reliability should be 99.999 percent.
It is vital to reduce the total cost of ownership. CAPEX refers to the equipment that will be installed in your data centre, providing room for future power needs to scale up; whereas OPEX is directly related to the data center's operational efficiency.
Finally, a Power Quality Audit is critical to ensuring the incoming power is of high quality.
UPS systems should be located away from the server room to protect it from electromagnetic interference. PDUs should be located adjacent to the IT load, preferably attached to the rack, to minimise their physical footprint. The fundamental rule is zero PD between earth and neutral at the load; otherwise noise may occur, potentially rebooting the system, as noted previously.
Using a blade architecture reduces the footprint: a blade chassis with a massive compute capability of 14 servers can be housed in 7U of a standard 19-inch (48.26 cm wide) rack, where 1U of height is 4.445 cm. Heat density rises as you cram maximum compute into the smallest possible form factor, so do not overlook the cooling requirement when developing the IT power infrastructure.
Finally, even the best data centre may become obsolete without proper maintenance. Once power and data cables are routed beneath the raised floor, make certain they do not run parallel to each other, as this can produce an electromagnetic field that impairs the data centre's operation. Ideally they should be separated by at least 60 cm.
Aisle-by-aisle approach
A hot-aisle/cold-aisle data centre is constructed by alternating rows of server racks with cold-air intakes facing one aisle and hot-air exhausts facing the other aisle.
This concludes the data centre checklist. The data centre's servers must be cooled, as they are designed to operate at temperatures only up to 24 degrees Celsius. Adopt the cost-effective free-cooling methodology. Clearly delineating and arranging hot and cold aisles increases energy efficiency: cool air is drawn into the server racks via perforated tiles in the cold aisle's raised floor. Another approach is to distribute the load uniformly across the racks and the cubic feet of cooling space available in the data centre.
Organizations are also implementing high density servers to optimise technologies such as virtualization. Additionally, extreme dense cooling and high sensible cooling are two methods for achieving an effective IT power infrastructure.
Another consideration is Moore's Law, which holds that a processor's computational performance doubles every 18 months; by the same token, the power requirements of a data centre are said to quadruple roughly every 18 months. If servers are not refreshed every two years, power consumption and heat density rise, driving up operational costs, so the IT power infrastructure should be designed to support this refresh cycle.
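The refresh-cycle arithmetic can be sketched as a simple exponential projection. The 18-month doubling period is the article's Moore's-Law assumption, and treating power demand as tracking compute demand one-for-one in an unrefreshed estate is a deliberate simplification, not a measured relationship.

```python
def projected_power_kw(initial_kw, months, doubling_period_months=18):
    """Projected power demand if demand doubles every doubling period
    and the hardware estate is never refreshed (simplifying assumption)."""
    return initial_kw * 2 ** (months / doubling_period_months)

# A 100 kW load, left unrefreshed for three years (two doubling periods):
print(projected_power_kw(100, 36))  # 400.0
```

The point of the sketch is only the shape of the curve: without refreshes, capacity planning has to absorb exponential rather than linear growth.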
This data centre checklist demonstrates how paying attention to little issues can result in huge energy savings in data centres.
Data centre uptime refers to a data centre's guaranteed annual availability, and it is one of a data centre's primary commercial objectives. To meet modern business objectives, data centres must achieve exceptional uptime, and providers invest significant time and money in redundancy, processes, and certifications to meet these uptime requirements.
Uptime is essentially the computation of how frequently a particular resource is available across a given year's minutes or seconds. While it is a very straightforward notion in general, it can become convoluted in the data centre.
Typically, uptime is expressed in "nines," the industry norm for figures starting at 99 percent. The figure is the ratio of the minutes in a year that a particular system or platform is available and functioning properly to the total minutes in that year, and it helps optimise the operational performance of a data centre portfolio, an individual data centre, or a single system or component.
The data center's uptime is often classified as follows:
Tiers of data centres are an effective way to characterise the infrastructure components used in a data centre. While a Tier 4 data centre is more complex than a Tier 1 data centre, that does not always make it the best fit for a business's needs: investing in Tier 1 infrastructure may expose a corporation to risk, while investing in Tier 4 infrastructure may be excessive.
Tier 1: data centres have a single power and cooling path and few, if any, redundant and backup components. They are designed for 99.671 percent availability (28.8 hours of downtime annually).
Tier 2: data centres have a single power and cooling path, plus some redundant and backup components. They are designed for 99.741 percent availability (22 hours of downtime annually).
Tier 3: data centres have multiple power and cooling paths, with procedures in place to update and maintain them without taking them offline. They are designed for 99.982 percent availability (1.6 hours of downtime annually).
Tier 4: data centres are designed to be completely fault tolerant, with redundancy throughout. They are designed for 99.995 percent availability (26.3 minutes of downtime annually).
The primary components of a data centre are as follows:
Power - The most critical component of any data centre's operation is power, and servers, storage, and networking equipment are not the only consumers: HVAC (heating, ventilation, and air conditioning) systems also require a significant amount of energy. Modern data centres consume enormous quantities of electricity and frequently have backup generators on-site to provide power in the event of a utility outage.
Cooling - Another critical component of a data centre is cooling, which keeps the IT equipment cool and operational. To minimise the amount of energy necessary to cool equipment, the majority of data centres employ some variation of the hot/cold aisle approach. Racks are built in this type of hot/cold aisle design so that all racks in a row face the same direction and opposing rows face each other back-to-back. This means that neighbouring rows of equipment are always faced back-to-back or front-to-front. Modern IT equipment generates a great deal of heat and, if not kept cool, will fail. Separating hot and cold air can significantly lower the cost of cooling the data centre.
Racks — All of the data center's IT equipment is housed on standardised 19-inch racks. These racks will be stocked with a variety of computer, storage, and networking components.
All data centres feature raised flooring. A raised floor is a man-made floor, typically constructed of floor tiles resting on pedestals, that lets engineers lift the tiles and access the void between the raised floor and the solid floor beneath. This void is frequently used to route network and power cables and to channel cold air.
Rows - Racks are typically organised in rows in data centres. This way, racks may be easily located by specifying both their row and rack locations.
Cabling Structures - Cabling is a critical component of the data centre. Make no compromises when it comes to structured cabling, and ensure that the cable solution you develop and deploy is scalable and will meet the needs of your data centre in ten years.
To create the greatest possible environment for servers, storage, and networking equipment, we must build the data centre using the most efficient racks, power units, and cooling machines feasible.
Generally, data centres are managed by specially trained employees, and access to the data centre is restricted according to company policies.
Breakered amp power is a term frequently used in retail colocation leasing to describe the economic model under which a customer is charged for power. In a breakered amp colocation model, the client receives a specified breaker size and voltage and pays the same rate for that power distribution whether or not they use it ("use it or lose it"). Generally, the provider retains the ability to adjust unit pricing in response to increases in the underlying utility's unit price per kWh.
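A sketch of the "use it or lose it" billing model described above. The breaker size, voltage, rate, and the 80 percent continuous-load derating are all hypothetical example values, not terms from any particular provider.

```python
def monthly_breakered_charge(breaker_amps, volts, rate_per_kw, derating=0.8):
    """Bill on provisioned breaker capacity, regardless of actual draw.
    The 0.8 derating reflects a common continuous-load practice (assumption)."""
    billable_kw = breaker_amps * volts * derating / 1000
    return billable_kw * rate_per_kw

# A 30 A / 208 V circuit at a hypothetical $150 per billable kW:
# the charge is the same whether the customer draws 1 kW or full capacity.
print(monthly_breakered_charge(30, 208, 150))
```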
Planning for business continuity and resilience is crucial to an enterprise's continued operation and success. Data centres and network architecture are critical components of an enterprise's disaster recovery plan in the event of a catastrophic geopolitical event. How much does downtime cost a business each second, hour, day, or even week? Enterprises are becoming increasingly reliant on information technology systems for purchasing, selling, and creating products. While it may seem self-evident to prepare for natural disasters such as hurricanes, tornadoes, and earthquakes, are you prepared for extended electricity outages or fibre cuts? A robust business continuity strategy will adequately prepare the firm for the most improbable worst-case scenarios.
1. What if my data centre was down for ten minutes, ten hours, or ten days?
2. What would happen if employees were unable to access my corporate headquarters?
3. What if one of my existing carriers suffered a fibre cut?
4. What would happen if a key person in the information technology department resigned?
5. What would we do if our business doubled in size during the following month?
In a retail colocation model, cages and cabinets denote the type of space a colocation provider offers a customer. Cages are movable walls mounted on raised floors, used to separate one customer's space from another's. Cabinets, on the other hand, are typically lockable single-rack enclosures used to house server, storage, or communications equipment. Cages are normally reserved for larger retail colocation clients, while cabinets are available in 1/3, 1/2, or full sizes. Both are used in shared-space settings alongside other customers.
A data centre is said to be carrier-neutral if it enables customers to order cross connections or communications services from any existing provider and the data centre provider actively pursues additional carriers for the facility.
Colocation is the term used to describe the arrangement of several users on the same mechanical and electrical systems. Typically, these users will share the same four walls and HVAC system. Colocation enables a data centre provider to give significant economies of scale to consumers that are not large enough to benefit from the lower cost per crucial KW offered by a scaled data centre. Typically, more redundancy and dependability may be accomplished per unit of expense than a single user could achieve on her own.
IT managers have been deploying servers and IT equipment in hot and cold aisles for years. The front sides of two rows of equipment racks face each other, drawing cool air into each rack's equipment intake; the rear sides of two rows each vent hot air into the hot aisle. While this is an efficient approach, it may fall short when dealing with higher power loads. A solution for increasing the efficiency of the data centre is to implement either hot or cold aisle containment. Cold aisle containment entails enclosing the cold aisles to effectively "trap" cold air within them, enabling the data centre operator to raise the air-handling set points while still cooling the server intakes effectively. Hot aisle containment, on the other hand, isolates the hot exhaust air generated in a hot aisle. In both cases, the objective is to prevent air at very different temperatures from mixing, and both strategies can significantly reduce PUE.
Understanding your total critical load is crucial for appropriately sizing your data centre. To determine total wattage, multiply the voltage by the amps consumed (Volts x Amps = Watts). If you have three-phase electricity and the amps on each phase are identical, multiply the resulting wattage by the square root of three.
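The sizing arithmetic above as a small helper. The 208 V / 30 A figures are just example inputs, not values from the text.

```python
import math

def critical_load_watts(volts, amps, three_phase=False):
    """Single-phase: W = V x A. Balanced three-phase (equal amps per
    phase, line-to-line voltage): W = V x A x sqrt(3)."""
    watts = volts * amps
    return watts * math.sqrt(3) if three_phase else watts

print(critical_load_watts(208, 30))                    # 6240
print(critical_load_watts(208, 30, three_phase=True))  # ~10808
```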
"Critical" power or "IT load" is frequently used to refer to the amount of data centre power utilised by or dedicated to IT equipment such as servers, storage, and communications switches and routers. Lighting and cooling electricity for the data centre are not considered "essential" power. It is crucial for an end user to understand their critical load, as the data centre - whether operated domestically or outsourced - will be sized based on the quantity of critical power consumed currently or anticipated in the future.
Cross connections are most frequently used to connect two networks at layer 1, the physical layer. They are often categorised by the cabling type used to make the connection: copper, coaxial, or fibre. The data centre provider typically bills cross connections as a non-recurring charge (NRC) plus a monthly recurring charge (MRC).
Remote hands is a collection of on-site, physical IT administration and maintenance services offered by data centre and colocation operators for a fee.