
Server Room Solutions

We Design Server Rooms

CtrlTech – Best Server Room Solutions Company in UAE

We provide complete IT server room services, including rack layouts, cooling, fire suppression, monitoring, and security system design.

Server Room Design

Choose CtrlTech as Your Server Room Maintenance Company

  • CtrlTech's infrastructure, network, and team will protect customers' most critical assets so you can rest easy – that's how we earn our customers' trust.
  • Collaboration is the key element in the mutual success of our clients, our partners and our team.

Designing a server room

Server room layout and construction

Uptime, security, and safety are three critical requirements for reliable data centre operation. The data centre hosts the various applications that manage the organisation's fundamental business and operational data.

In today's world, data centres have become the backbone of virtually every corporation. A continuous power supply is a critical requirement for ensuring the maximum possible uptime for data centre operations.

Along with being continuous, the power supply should be free of disturbances such as overvoltage or undervoltage, excess or insufficient current, sags, swells, and impulses. Such disturbances could damage computer hardware or, in the worst-case scenario, cause the power distribution system to fail.
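
To make the idea of power-quality monitoring concrete, here is a minimal sketch in Python that classifies RMS voltage readings as sags, swells, or normal against a nominal supply voltage. The 230 V nominal value and the ±10% tolerance band are assumptions chosen purely for illustration, not CtrlTech specifications.

```python
# Illustrative power-quality check: classify RMS voltage readings as
# sag, swell, or normal relative to a nominal supply voltage.
# The 230 V nominal and +/-10% tolerance are assumed values for this sketch.

NOMINAL_VOLTAGE = 230.0   # volts (assumed nominal supply)
TOLERANCE = 0.10          # +/-10% band treated as acceptable here

def classify_voltage(v_rms: float) -> str:
    """Label a single RMS voltage reading."""
    if v_rms < NOMINAL_VOLTAGE * (1 - TOLERANCE):
        return "sag (undervoltage)"
    if v_rms > NOMINAL_VOLTAGE * (1 + TOLERANCE):
        return "swell (overvoltage)"
    return "normal"

# Example readings (volts) that might come from a monitoring device.
readings = [231.0, 196.5, 229.8, 259.3, 230.4]
for v in readings:
    print(f"{v:6.1f} V -> {classify_voltage(v)}")
```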

Power conditioning is the process of purifying the power supply. Additionally, in the event of grid or local generator failure, a backup power device such as an uninterruptible power supply (UPS) should be installed to carry the load. To assure an uninterrupted supply, we must therefore build a power distribution system that provides both continuous and clean electricity.

CtrlTech offers a turnkey system for ensuring uninterrupted, clean power supply and distribution to your data centre. Our product line comprises UPS with redundancy options, generators, automatic transfer switches (ATS), and isolation transformers, among others, for establishing a stable power supply for data centres. Along with continuous electricity, efficient cooling is a critical component of a data centre's continuous operation.

Data centre equipment generates a significant quantity of heat, and the issue becomes more acute with higher-density IT equipment (e.g. blade servers). It is therefore critical to employ the appropriate type of cooling, capable of precise temperature and humidity control, and to understand the air movement requirements within the room. Failure to do so can have serious consequences, with hot spots occurring precisely where you do not want them - around your mission-critical servers.

CtrlTech supplies Computer Room Air Conditioners (CRAC), also known as Precision Air Conditioners (PAC) or close control units (CCU). CRAC systems provide effective heat removal, superior humidity control, increased airflow, improved air filtration, greater flexibility and expandability, and a variety of alarm and redundancy options. They are offered in upflow or downflow configurations depending on the direction of air throw, and are broadly divided into two types based on the coolant and coil arrangement: chilled water (CW) and direct expansion (DX).

To maintain the safety and security of your data centre, CtrlTech offers a variety of protective equipment, including water leak detection systems, FM200 fire suppression systems, raised flooring, and access control systems.

Profitability Is Ensured Through Novel Data Centre Solution Architectures

Over the last decade, the data centre business has grown at an unparalleled rate. With the widespread adoption of the Internet of Things (IoT) in all spheres of business and leisure, the demand for processing power continues to grow - even more so as the allure of big data gains traction.

Standardization of Design

The task of designing a fortress-like data centre capable of safely and securely storing and managing mission-critical data and applications under any possible condition, while also allowing for both short- and long-term growth, is daunting. In general, this calls for more adaptable and scalable electrical designs. These designs should feature a common power block that is repeated throughout the design to accommodate future growth. Such design concepts represent substantial advances over earlier data centre models.

Electrical Topology Selection Is a Function-Dependent Decision

Conventional electrical topologies, which are frequently used in data centres, can be configured in a variety of ways to meet the most rigorous project needs and site constraints. The actual layout is determined by several factors, including the load kilowatts (kW), the available utility service voltages, and the initial cost.

A UPS was chosen because it is more effective at meeting reduced power requirements and can scale as GIGA grows. Lithium-ion batteries were the ideal choice since they are smaller and lighter than lead-acid batteries and can operate at higher temperatures, obviating the need for an additional cooling system.

FAQs

The power supply is a vital component of any data centre. Like anything else that uses energy, a data centre relies on it for almost everything; without power, there is no data centre.

To guarantee that everything operates well in the data centre at all times, facility managers must ensure that important equipment receives a continual supply of clean, uninterrupted electricity - all without increasing monthly utility expenses.

Power Infrastructure for a Typical Data Center

The majority of data centres receive their principal power from the larger municipal electric grid. The facility then employs one or more transformers to receive this energy and ensure that the incoming power is at the proper voltage and type of current.

Certain data centres supplement or eliminate their reliance on the grid entirely with on-site electrical generation equipment – either stand-alone generators or alternative energy sources such as solar photovoltaic panels and wind turbines.

The electricity is subsequently distributed to the Main Distribution Boards (MDBs). According to engineer Hans Vreeburg, they are "panels or enclosures that contain fuses, circuit breakers, and ground leakage protection devices that deliver low-voltage energy to a variety of endpoints, such as UPS systems or load banks."

Not only does a UPS assist in "cleaning up" the electricity flowing through by ensuring that surges do not damage equipment, but each one is also responsible for powering a number of breakers. In a typical data centre setup, no more than seven or eight servers are connected to a single breaker, but this number is dependent on both the breaker's capacity and the server's efficiency.
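
As a rough illustration of why only a handful of servers share a breaker, the sketch below estimates how many servers fit on one branch circuit. The 208 V circuit voltage, 30 A breaker rating, and 80% continuous-load derating are assumptions chosen for the example; the 600 W per-server figure echoes the typical maximum mentioned later on this page.

```python
# Rough estimate of how many servers one breaker can support.
# Circuit voltage, breaker rating, and derating factor are assumed values.

def servers_per_breaker(breaker_amps: float,
                        circuit_volts: float,
                        derating: float,
                        server_watts: float) -> int:
    """Return how many servers fit on one breaker after derating."""
    usable_watts = breaker_amps * circuit_volts * derating
    return int(usable_watts // server_watts)

# Assumed example: 30 A breaker on a 208 V circuit, loaded to 80% continuously,
# feeding servers that draw up to 600 W each.
count = servers_per_breaker(breaker_amps=30, circuit_volts=208,
                            derating=0.8, server_watts=600)
print(f"Approximately {count} servers per breaker")  # ~8 servers
```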

Additionally, UPS systems act as an initial backup in the event of a power outage or other comparable occurrence. A standard UPS can power servers and breakers for up to five minutes; this provides ample time to start a backup generator immediately following an outage or other disruption to the larger electric system.

Power Backup in Data Centers

To ensure continuous uptime and keep outages to a minimum, the majority of data centres have a backup power supply on-site or nearby. Typically, backup power is provided by a generator fuelled by gasoline or diesel.

Keeping data centres running continuously and without interruption consumes a large amount of electricity. According to one report, the data centre business as a whole consumes more than 90 billion kilowatt-hours of electricity every year - approximately the output of 34 coal-fired power plants.

On a worldwide basis, data centres consume 3% of all electricity used. These 416 terawatt-hours are significantly more than the total electricity consumption of the United Kingdom.

There are several causes for the high - and escalating - energy consumption in data centre facilities. Not only do servers and other essential pieces of IT equipment need a significant amount of energy to operate, but so does all associated equipment. Electricity is required for lighting, cooling systems, monitors, and humidifiers, among other things, and can occasionally result in increased energy expenses.

Power Usage Effectiveness (PUE)

To assess how much electricity is used by servers versus non-IT equipment, facilities measure their power usage effectiveness (PUE): the ratio of total facility energy to the energy delivered to IT equipment. A score of 1 indicates that every iota of energy in a data centre is dedicated entirely to servers, while a score of 2 indicates that ancillary equipment consumes as much energy as the servers and other IT components.
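
A PUE figure is simply the ratio of total facility power to the power delivered to IT equipment. The short sketch below computes it from two metered values; the numbers are made up for illustration.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# A value of 1.0 means all power goes to IT; 2.0 means overhead equals IT load.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute PUE from two metered power readings (same units)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative readings: 1,500 kW drawn by the whole facility,
# of which 1,000 kW reaches servers, storage, and network gear.
print(f"PUE = {pue(1500.0, 1000.0):.2f}")  # 1.50
```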

According to the Uptime Institute's newest survey, almost one in five racks had a density of 30 kilowatts (kW) or greater, demonstrating the expanding prominence of high density computing. The majority of respondents said that their current rack density was between 10 and 29 kW. On an individual server level, the majority are set to operate at a maximum of 600 watts.

A power outage can bring an entire data centre operation to a halt. Power outages are detrimental to information technology systems because they can result in the loss of data, corrupted files, and damaged equipment. A data centre that is located on-premises requires a backup system that is connected directly into the power infrastructure to ensure that vital systems remain operational even when the power goes out.

The data center's power infrastructure should be capable of operating in two modes: normal and emergency. A data center's normal operation requires it to connect to the main power grid and purchase electricity from one or more utility firms. Certain groups may supplement their energy demands with generators.

Generators, on the other hand, frequently perform another critical function in the power infrastructure: they offer backup power during emergencies. In this arrangement, the data centre is powered by one or more generators that deliver electricity via alternating current (AC), similar to the main power system.

The critical nature of UPS hardware

A data center's primary and secondary power sources are often connected via one or more automatic transfer switches. When the primary power source fails, the transfer switch sends a startup order to the associated generator and then switches to secondary power once the generator begins producing electricity. When primary power is restored, the transfer switch reverts to primary power and sends a shutdown command to the generator.
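
The transfer-switch behaviour described above can be summarised as a small state machine: detect loss of primary power, start the generator, transfer the load, then transfer back and stop the generator when the utility returns. The sketch below models that sequence in simplified form; the class and method names are invented for illustration and do not correspond to any particular product.

```python
# Simplified model of an automatic transfer switch (ATS) as described above.
# Class and method names are illustrative only.

class AutomaticTransferSwitch:
    def __init__(self):
        self.source = "primary"          # current feed: "primary" or "generator"
        self.generator_running = False

    def on_primary_failure(self):
        """Primary power lost: start the generator, then transfer the load."""
        self.generator_running = True    # send start command to the generator
        self.source = "generator"        # transfer once the generator is producing power

    def on_primary_restored(self):
        """Primary power back: transfer the load, then shut the generator down."""
        self.source = "primary"
        self.generator_running = False   # send shutdown command to the generator

ats = AutomaticTransferSwitch()
ats.on_primary_failure()
print(ats.source, ats.generator_running)   # generator True
ats.on_primary_restored()
print(ats.source, ats.generator_running)   # primary False
```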

The power infrastructure of a data centre often includes one or more uninterruptible power supply (UPS) appliances. The UPS is critical for two reasons: it protects equipment from voltage spikes and provides emergency power during an outage.

Another frequent component of the power infrastructure is the power distribution unit (PDU), which is a high-quality power outlet that accepts power from a UPS and distributes it to IT systems. Because a PDU does not generate electricity or defend against power surges, it is typically used in conjunction with a UPS.

Switchgear or main distribution boards, as well as transformers, are used to regulate and route power. This guarantees that the voltage and current of the power source are correct. The actual arrangement of the equipment is determined by the size, quantity, and kind of IT systems in the data centre.

Ensure uninterrupted operations

Power infrastructure for data centres must support facility systems such as lights, alarms, sensors, fire monitors, cooling units, and dehumidifiers. Additionally, the infrastructure must deliver power to IT systems such as servers, storage devices, and network components to ensure that they continue to operate uninterrupted in the event of a power outage.

During normal operation, power directed towards IT systems is continuously routed through the UPS. The UPS guarantees that the batteries linked to it stay fully charged and prepared to assist with emergency operations. The batteries provide sufficient power to keep IT systems – and possibly some facilities systems – functioning for a brief period of time, as little as five minutes, depending on the amount and kind of batteries.
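
As a back-of-the-envelope check on how long a battery string can carry the load, the sketch below estimates ride-through time from battery capacity and the power drawn by the IT (and possibly some facility) systems. The capacity, load, and inverter-efficiency figures are assumptions chosen so the result lands near the few-minute window mentioned above.

```python
# Rough UPS ride-through estimate: minutes of runtime from battery capacity.
# Battery capacity, load, and inverter efficiency below are assumed values.

def runtime_minutes(battery_wh: float, load_w: float,
                    inverter_efficiency: float = 0.9) -> float:
    """Estimate runtime in minutes for a constant load."""
    usable_wh = battery_wh * inverter_efficiency
    return usable_wh / load_w * 60

# Assumed example: 40 kWh of batteries feeding a 400 kW critical load.
print(f"{runtime_minutes(40_000, 400_000):.1f} minutes")  # ~5.4 minutes
```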

Several UPSs employ flywheels or supercapacitors in lieu of batteries. Flywheels store kinetic energy that can be converted to electricity; supercapacitors use electrostatics to provide high-density energy storage when needed.

A UPS can be classified as offline (standby), line-interactive, or online. An offline UPS passes primary power through to the load; if the primary source fails, it automatically switches to battery power, converts it to alternating current, and distributes it to the PDUs. A line-interactive UPS operates in a similar fashion, except that it also conditions primary power as it passes through the device, preventing voltage spikes from reaching the load.

An online (double-conversion) UPS continuously converts incoming AC power to DC, using it to keep the batteries charged, and then conditions and converts it back to alternating current for PDU output. Large data centres that run mission-critical workloads frequently install online UPS equipment to obtain the highest level of protection, even though it increases running costs.

Regardless of the UPS type or how an organisation stores and designs backup power, the objective is the same: to allow sufficient time to shut down IT systems or restart backup generators. Ideally, the generators will come online within a minute after a detected failure, restoring normal operation to the UPS systems.

Generally, backup generators run on diesel or gasoline. The majority of data centres maintain sufficient fuel on-site to run systems for 24 to 48 hours. The number of generators and the total capacity required are determined by the unique power requirements of the data centre.
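
To make the 24-to-48-hour fuel figure concrete, the sketch below estimates how much diesel a facility might need to keep on site. The litres-per-kilowatt-hour figure is an assumed round number for illustration; real generator datasheets give consumption curves by load fraction.

```python
# Rough on-site fuel estimate for backup generators.
# The litres-per-kWh figure is an assumed round value, not a datasheet number.

def fuel_required_litres(load_kw: float, hours: float,
                         litres_per_kwh: float = 0.3) -> float:
    """Estimate fuel needed to carry a given load for a given duration."""
    return load_kw * hours * litres_per_kwh

# Assumed example: an 800 kW load carried for 24 and 48 hours.
for hours in (24, 48):
    print(f"{hours} h -> {fuel_required_litres(800, hours):,.0f} litres")
```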

Businesses must guarantee that their generators function properly, in accordance with current environmental rules, and with administrators monitoring for exhaust gases such as carbon monoxide or nitric oxide.

Prepare for power disruptions in advance

Each data centre should have a disaster recovery plan in place that details the steps to take and the roles assigned to each individual in the case of a power failure; adhering to this plan is critical during an outage.

This includes shutting down systems in a defined order, ensuring generators are running and adequately ventilated, monitoring temperatures to avoid overheating, and confirming that emergency systems such as pumping equipment are working.

Additionally, there must be a plan in place for once power is restored. Systems should be restarted in the proper order and tested to ensure everything is operating normally, and emergency equipment should be shut down and readied for the next event. Above all, participants should maintain effective communication during and after a power outage, informing anybody affected of the state of the infrastructure.

Additionally, routine equipment maintenance and testing are required for data centre power infrastructure. When the lights go out, a UPS with faulty batteries or a generator with polluted fuel provides little assistance to the data centre and can significantly slow down troubleshooting time.

Heat is a natural byproduct of the operation of any electrical equipment, including data centre equipment. In a data centre, however, excessive heat buildup can damage your equipment. Servers may shut down if temperatures rise too high, and operating above the acceptable temperature range can shorten the life of your equipment.

A related issue is humidity. If the humidity level is too low, electrostatic discharge - a rapid transfer of electricity between two objects - can occur, harming the equipment. When the humidity level gets too high, condensation and corrosion of your equipment can occur. Additionally, contaminants such as dust are more prone to accumulate on equipment during periods of high humidity, decreasing heat transfer. To help avoid these problems, maintain the proper temperature and humidity in your data centre, which can be accomplished by installing a suitable cooling system.

Standards for Data Center Cooling

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidelines on the temperature and humidity ranges required to operate a data centre reliably. For the majority of IT equipment, the most recent recommended envelope is a temperature range of 18 to 27 degrees Celsius (64 to 81 degrees Fahrenheit), a dew point (DP) of -9 to 15 degrees Celsius, and a maximum relative humidity (RH) of 60%. These recommendations apply to equipment classified as A1 to A4 by ASHRAE.

Additionally, ASHRAE provides specific allowable ranges for each of its equipment classes. These apply only to powered-on information technology equipment, not to power equipment.

Temperatures between 15 and 32 degrees Celsius are recommended for class A1. The recommended dew point and relative humidity ranges are -12 degrees Celsius dew point and 8% relative humidity to 17 degrees Celsius dew point and 80% relative humidity.

Temperatures between 10 and 35 degrees Celsius are recommended for class A2. The recommended dew point and relative humidity ranges are -12 degrees Celsius dew point and 8% relative humidity to 21 degrees Celsius dew point and 80% relative humidity.

Temperatures between 5 and 40 degrees Celsius are suggested for class A3. The recommended dew point and relative humidity ranges are -12 degrees Celsius dew point and 8% relative humidity to 24 degrees Celsius dew point and 85% relative humidity.

Temperatures between 5 and 45 degrees Celsius are recommended for class A4. The recommended dew point and relative humidity ranges are -12 degrees Celsius dew point and 8% relative humidity to 24 degrees Celsius dew point and 90% relative humidity.

Temperatures between 5 and 35 degrees Celsius are recommended for class B. The recommended humidity range is 8% relative humidity to a 28 degrees Celsius dew point and 80% relative humidity.

Temperatures between 5 and 40 degrees Celsius are recommended for class C. The recommended humidity range is 8% relative humidity to a 28 degrees Celsius dew point and 80% relative humidity.

Prior versions of the ASHRAE guidelines recommended a tighter temperature range, placing a higher premium on reliability and uptime than on energy costs. As data centres became more concerned with energy efficiency, ASHRAE established classes that allow for a wider temperature range.

Certain older equipment may have been built to an earlier version of the ASHRAE standard. When a data centre contains a mix of older and newer equipment, determining which recommendations to employ might be more difficult. If your facility contains a variety of equipment, you must choose a temperature and humidity range that is suitable for all of the equipment.
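
When a room contains a mix of equipment classes, the operating window is the intersection of their allowable ranges. The sketch below encodes the class temperature ranges listed above and computes that intersection; only dry-bulb temperature is handled, to keep the example short, and the dictionary values simply restate the figures given in this section.

```python
# Find the temperature range acceptable to every ASHRAE class present in a room.
# The ranges below restate the class figures listed in this section (degrees C).

ALLOWABLE_TEMP_C = {
    "A1": (15, 32),
    "A2": (10, 35),
    "A3": (5, 40),
    "A4": (5, 45),
    "B":  (5, 35),
    "C":  (5, 40),
}

def common_temperature_range(classes):
    """Return the (low, high) range satisfying all given equipment classes."""
    lows, highs = zip(*(ALLOWABLE_TEMP_C[c] for c in classes))
    low, high = max(lows), min(highs)
    if low > high:
        raise ValueError("No common temperature range for these classes")
    return low, high

# Example: a room with A1, A2, and B equipment must stay within 15-32 degrees C.
print(common_temperature_range(["A1", "A2", "B"]))  # (15, 32)
```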

Calculating a Data Center's Total Cooling Requirements

Once you've determined an optimal temperature range, you'll need to establish your system's heat production in order to determine how much cooling capacity you'll require. This is accomplished by estimating the amount of heat generated by all of the IT equipment and other heat sources in your data centre.

This information will help you select a cooling system that reliably satisfies your needs without splurging on capacity you do not require. Anyone can determine a data centre's cooling requirements, and thus help protect its equipment and data, using the method given below.

Measuring Heat Emission

Heat, or energy, is quantifiable in a variety of ways, including British thermal units (BTU), calories, and joules. Heat output is energy per unit of time, so BTU per hour, tons of refrigeration, and joules per second can all be converted to watts.

Having so many distinct ways to quantify heat and heat output can be perplexing, even more so when multiple measuring units are employed concurrently. At the moment, there is a push to make the watt the standard unit of heat output measurement, with the BTU and ton units being phased out.

You may still have data that is based on alternative measures. If your data contains numerous units, you'll need to convert it to a standardised format. You can convert them to the watt standard, or you can convert them to the most frequent measurement in your data. Here's how to perform some necessary conversions:

  1. Multiply BTU per hour by 0.293 to convert to watts.
  2. Multiply tons by 3,530 to convert to watts.
  3. Multiply watts by 3.41 to convert to BTU per hour.
  4. Multiply watts by 0.000283 to convert to tons.

The power drawn from the AC power supply of IT equipment is almost all converted to heat, whereas the power transmitted through data lines is insignificant. As a result, the thermal output of a piece of equipment is equal to its power consumption in watts. Occasionally, data sheets include a BTU per hour value for heat output, but you only need to utilise one of these values in your calculations. Generally, it is more convenient to utilise watts.

There is one exception to this rule: routers that support voice over internet protocol (VoIP). As much as one-third of the power consumed by these routers may be sent to remote terminals, resulting in a heat output that is less than the power consumed. The difference between the heat production and power consumption of VoIP routers is normally insignificant, but you can include it if you want a more precise result.
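
Putting the conversion factors and the power-equals-heat rule together, the sketch below sums a data centre's IT heat output in watts, handling datasheet values given in BTU per hour or tons and applying the optional VoIP-router adjustment mentioned above. The sample inventory is invented purely for illustration.

```python
# Total IT heat output in watts, using the conversion factors listed above.
# The equipment inventory is invented purely for illustration.

BTU_PER_HOUR_TO_WATTS = 0.293
TONS_TO_WATTS = 3_530

def to_watts(value: float, unit: str) -> float:
    """Convert a heat/power figure to watts."""
    if unit == "watts":
        return value
    if unit == "btu_per_hour":
        return value * BTU_PER_HOUR_TO_WATTS
    if unit == "tons":
        return value * TONS_TO_WATTS
    raise ValueError(f"Unknown unit: {unit}")

# (description, value, unit, is_voip_router) -- sample inventory only.
inventory = [
    ("rack of servers", 12_000, "btu_per_hour", False),
    ("storage array",    2_400, "watts",        False),
    ("core switch",        800, "watts",        False),
    ("VoIP router",        300, "watts",        True),
]

total_w = 0.0
for name, value, unit, is_voip in inventory:
    heat_w = to_watts(value, unit)
    if is_voip:
        # Up to one third of a VoIP router's power may feed remote terminals,
        # so its heat output can be lower than its power draw (optional adjustment).
        heat_w *= 2 / 3
    total_w += heat_w

print(f"Estimated IT heat load: {total_w:,.0f} W")
```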

HVAC stands for heating, ventilation, and air conditioning. HVAC systems regulate the ambient environment (temperature, humidity, airflow, and air filtering) in computing facilities and, more specifically, in enterprise data centres. They must be planned for and operated in conjunction with other data centre components such as computing hardware, cabling, data storage, fire protection, physical security systems, and power. The selection of an HVAC contractor is a critical stage in the data centre planning process.

Almost all physical hardware devices have environmental criteria, which include temperature and humidity limits that are acceptable. Typically, environmental needs are specified in a product specification document or a physical planning guide.

A separate space, referred to as a plenum, is frequently reserved to contain and circulate air for HVAC and communication cables. This space is typically positioned between the structural ceiling and a drop-down ceiling or beneath a raised floor.


Request a Call Back Now