
Start Campus and Legrand: A Strategic Alliance for Sustainable Innovation in Next-Generation Data Centers

Start Campus partnered with Legrand to create one of Europe’s most sustainable hyperscale data centers in Sines, Portugal. Powered entirely by renewable energy, the 1.2 GW campus integrates Legrand’s ColdLogik rear door cooling and seawater-based thermal management to achieve exceptional efficiency, with a PUE target of 1.1 and WUE of 0. This long-term alliance combines innovation, lifecycle support, and AI-ready infrastructure to set a new standard for sustainable data center design.

NorthC and Legrand: a new class of AI-enabled infrastructure

Blog 22/10/2025
Legrand | Usystems | Data Center White Space | Cooling

Read Time: Approx. 6 Min


What do you do with a low-density environment when high density is required? This was the challenge facing data centre operator NorthC. Its response? Find the right partner.

With Legrand, NorthC upgraded its data centre to deliver scalable, energy-efficient, and reliable high-density infrastructure tailored for the demands of artificial intelligence (AI), big data, and high-performance computing (HPC) workloads. Today, NorthC is an AI enabler.

 

The Challenge

NorthC, one of Europe's leading providers of regional data centres, operates Swiss data centres in Münchenstein (Basel), Biel/Bienne (Bern) and Winterthur (Zurich). When a customer requested AI and HPC capacity, this meant NorthC needed to rapidly upgrade its existing low-density environment at its Münchenstein site near Basel. NorthC Site Manager Wolfgang Voigt was faced with a challenge: how could NorthC convert the data centre within the given timeframe of six months to meet the customer's requirements?

AI servers can consume several hundred watts per unit, with peak loads pushing this energy demand even higher. This presents providers with completely new challenges, especially in terms of power supply and cooling. While traditional air cooling is sufficient in many conventional data centres, this is often inadequate for AI workloads. So how can cooling capacities be increased to cope with the requirements of AI and HPC applications, and within such a tight timeframe?


Wolfgang Voigt: "The traditional concept of cooling, where cooling and power are separated and you try to keep water out of the data hall, is no longer valid today. In addition, the new cooling concept presents DC operators with new challenges that call for changes to processes and employee behaviour. Therefore, we needed to find a supplier that could deliver innovative cooling systems quickly and reliably. That's how our great partnership with Legrand began."


Why AI and HPC require new cooling technologies

The scale and complexity of AI and HPC models are growing rapidly. These workloads increase thermal loads and generate far more heat than conventional servers, meaning that cooling has become a key factor in infrastructure performance and reliability, requiring customised and flexible cooling solutions.

  • Rising energy consumption due to powerful hardware: AI models run on specialised hardware (e.g. NVIDIA GPUs or Google TPUs) that draws significantly more power than standard servers.
  • Sustained high loads with intensive peaks: AI training and HPC simulations require high-performance computing over long periods of time, with sudden load peaks when processing large datasets.
     

In conventional data centres, energy consumption typically ranges between 3 and 12 kW per rack. In AI environments, this can easily rise to 100 kW per rack. This leads to a higher continuous load as well as peak loads during intensive AI training phases. In this complex environment, innovative cooling solutions are at the top of the sustainability agenda.
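The jump from conventional to AI rack densities can be made concrete with a quick calculation. The rack counts below are illustrative assumptions, not figures from NorthC's site; only the per-rack densities come from the text above.

```python
# Rough comparison of hall-level heat load at conventional vs AI rack
# densities. Rack count (100) is an assumption for illustration.

def hall_heat_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT heat load of a data hall; essentially all input power
    ends up as heat that the cooling system must remove."""
    return racks * kw_per_rack

conventional = hall_heat_load_kw(racks=100, kw_per_rack=8)   # mid-range of 3-12 kW
ai_cluster = hall_heat_load_kw(racks=100, kw_per_rack=100)   # AI/HPC density

print(f"Conventional hall: {conventional:.0f} kW")             # 800 kW
print(f"AI hall:           {ai_cluster:.0f} kW")               # 10000 kW
print(f"Increase factor:   {ai_cluster / conventional:.1f}x")  # 12.5x
```

At the same floor footprint, the cooling system must therefore remove more than ten times as much heat, which is why air cooling alone quickly runs out of headroom.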
 

The Solution

Market research led NorthC to select ColdLogik rear door heat exchangers (RDHx), an advanced, energy-efficient cooling solution from Legrand's specialist brand, USystems. This cooling method involves attaching a heat exchanger to the rear of a server rack to dissipate the heat generated by the servers. The hot air from the servers passes through the heat exchanger, where it is cooled by water. Since water can absorb over 3,000 times more heat than air per unit volume, this results in huge efficiency gains. The cooled air is then fed back into the room.
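The "over 3,000 times" figure can be sanity-checked by comparing the volumetric heat capacity of water and air; the property values below are standard textbook figures at roughly 20 °C.

```python
# Back-of-the-envelope check: how much more heat does a cubic metre of
# water store per degree of temperature rise than a cubic metre of air?

water_density = 998.0   # kg/m^3, liquid water at ~20 C
water_cp = 4186.0       # J/(kg*K), specific heat of water

air_density = 1.204     # kg/m^3, air at ~20 C, sea level
air_cp = 1005.0         # J/(kg*K), specific heat of air

water_volumetric = water_density * water_cp  # J/(m^3*K)
air_volumetric = air_density * air_cp        # J/(m^3*K)

ratio = water_volumetric / air_volumetric
print(f"Water stores ~{ratio:.0f}x more heat per unit volume than air")
```

The ratio comes out around 3,400, consistent with the claim, which is why a modest water flow through a rear door can replace a very large volume of moving air.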

For NorthC, adopting the energy-efficient RDHx technology has been a game-changer: enabling high-density computing, while reducing energy consumption, making it a compelling choice to meet AI and HPC requirements.

The Benefits

Colin Rowlands, European Technical Support, USystems, explains: "Installing the basic infrastructure for our cooling solutions in a data centre, whether in the entire data centre or just part of it, makes upgrading easy. The joint solution, which we are very proud of, provides NorthC with future-proof and flexible infrastructure."

Wolfgang Voigt praises the rapid, efficient cooperation with Legrand. The solution was also modified and tailored to NorthC's needs. Colin Rowlands: "This specific focus on customer needs has led to many deep and lasting relationships around the world. In addition to NorthC, in Switzerland CERN has also placed its trust in us, for example." The rear door heat exchangers were installed while operations were ongoing. Looking back, Wolfgang Voigt says it was like "open-heart surgery". Despite this challenge, the project was completed to everyone's full satisfaction and without any major issues. Rowlands recalls: "Well, we managed to juggle everything on their behalf. That’s part and parcel of our partnership!"

 


 

Colin Rowlands & Wolfgang Voigt

NorthC’s Münchenstein (Basel) 1 site is now AI- and HPC-enabled. The upgraded infrastructure supports GPU-driven clusters, hybrid cloud environments and advanced connectivity for data-intensive operations. This means that NorthC can now offer its customers a high-speed environment with low latency, in complete compliance with regulatory requirements (e.g., data residency) and Swiss data laws. Customers consequently have access to a core-optimised data centre that supports high-density environments, alternative cooling methods, fast networks, and a sustainable power supply.

The infrastructure can grow alongside customers' needs. Since cooling accounts for the bulk of a data centre's non-IT power consumption, this is where the greatest improvements in power usage effectiveness (PUE) can be achieved.

Colin Rowlands highlights the energy-saving potential: "If you equip a data centre with rear door heat exchangers, you should be able to reduce energy consumption by around 80%. And don't forget that a data centre runs 24 hours a day, 365 days a year. If you can cut 80% of your cooling costs, that's huge, isn't it?" He concludes: "Our collaboration with NorthC has been and continues to be excellent. Not only is our system extremely robust, but our partnership is too."
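To give a feel for what an 80% cut in cooling energy means over a year of continuous operation, here is a minimal sketch. The 500 kW cooling load and the €0.15/kWh tariff are assumptions chosen for illustration, not NorthC figures; only the 80% reduction and 24/365 operation come from the text.

```python
# Illustrative annual savings from an 80% cut in cooling energy.

cooling_load_kw = 500.0    # average cooling power draw (assumed)
hours_per_year = 24 * 365  # a data centre runs continuously
price_per_kwh = 0.15       # EUR per kWh (assumed tariff)
reduction = 0.80           # savings cited by Colin Rowlands

baseline_kwh = cooling_load_kw * hours_per_year
saved_kwh = baseline_kwh * reduction

print(f"Baseline cooling energy: {baseline_kwh:,.0f} kWh/year")  # 4,380,000
print(f"Energy saved:            {saved_kwh:,.0f} kWh/year")     # 3,504,000
print(f"Cost saved:              EUR {saved_kwh * price_per_kwh:,.0f}/year")
```

Even at these modest assumed numbers, the continuous-operation multiplier turns a percentage saving into a substantial annual figure.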

 

And the key results?

  • Energy consumption reduced by around 80%
  • Improved PUE
  • Smaller carbon footprint
  • Use of existing infrastructure
  • Rapid rollout
  • Further expansion possible
  • More flexibility for customers

 

Next steps

At the Münchenstein (Basel) 1 site, the first racks have already been equipped with rear door heat exchangers, with more to follow as customer demands grow. As Wolfgang Voigt explains: "New requirements from our customers are coming thick and fast these days, almost always ad hoc in nature. We're ready to step up whenever a customer needs us."

Looking ahead, he sees a combination of rear door and direct chip cooling as the future. NorthC data centres will be designed with flexibility in mind to meet all customer needs, whether conventional setups, rear door cooling, or direct chip cooling solutions. Here too, NorthC will continue to rely on its successful partnership with Legrand for market insight and future-ready solutions.



About NorthC

The NorthC Group operates data centres in Switzerland, the Netherlands, and Germany. NorthC is characterised by a strong local presence in various regions, high-quality services, and customised connectivity and hybrid cloud solutions. NorthC aims to be completely climate-neutral by 2030, based on its sustainability pillars: green hydrogen, 100% green energy, optimal use of residual heat, and modular construction. Head to the NorthC Datacenters website to learn more.

 

About Legrand - Data Center Solutions

Managing a data centre is complex. It requires careful consideration of a range of factors to ensure performance, reliability, scalability and sustainability. To help you navigate these challenges, we offer tailored grey and white space solutions at scale to meet individual needs through our specialist brands, including Minkels, Raritan, Server Technology, Starline, and USystems.

Our approach combines technical expertise with a deep commitment to sustainability, helping you optimise data centre performance while reducing environmental impact.

We build lasting partnerships with our clients based on trust, collaboration, and shared aspirations. Whether improving reliability or increasing efficiency, we work to ensure that every detail is correct.
With our support, you can approach the future with confidence, knowing that your data centre is ready for what's next. With Legrand, we go further.
 

Cybersecurity Awareness Month: Building Resilient Data Center Infrastructure

Blog 22/10/2025
Legrand | Raritan | Servertech | Data Center White Space | Data Center Grey Space | PDUs | UPS | Busway


October marks Cybersecurity Awareness Month, a global initiative that raises awareness about online safety and encourages organizations to strengthen their defenses against cybercrime.

Launched in 2004 by the U.S. Department of Homeland Security and the National Cybersecurity Alliance, this campaign reminds us that security is not solely the responsibility of end users; it also falls on those who design and maintain the physical infrastructure that supports our digital systems.

Each year, the campaign focuses on a different theme. This year’s theme, “Stay Safe Online”, emphasizes practical steps such as stronger password management, multi-factor authentication, regular software updates, and timely incident reporting.

However, beyond the individual steps we can all take to protect our own personal data lies another critical layer of defense: the physical infrastructure that keeps digital systems operational. In data centers, power interruptions, environmental fluctuations, and unauthorized access can all create vulnerabilities that cybercriminals can exploit. In an era where downtime and breaches can cost millions, building infrastructure resilience is a cybersecurity imperative.

 

Protecting the Physical Infrastructure

When people think about cybersecurity, they often envision software firewalls, encryption protocols, or identity management tools. However, modern threats are no longer confined to the digital layer; they increasingly target the physical systems that house, power, cool, and connect IT infrastructure.

 

Consider a few examples:

  • A power outage or surge can disable defenses or create opportunities for malicious access during recovery.
  • Inadequate environmental monitoring might allow tampering or sabotage to go unnoticed.
  • Poorly managed racks or cabling can lead to accidental cross-connections, data leakage, or unauthorized device insertion.

In today’s interconnected world, cybersecurity must extend beyond software to include the physical infrastructure that keeps operations running securely.

 

How Legrand Solutions Support a Secure and Resilient Data Center

Legrand’s data center solutions portfolio is built around one key principle: resilience through intelligent infrastructure. By designing systems that deliver visibility, control, and reliability across power, cooling, and connectivity, we support organizations in maintaining both operational continuity and security integrity.

 

Uninterruptible Power Supply (UPS)


UPS systems protect sensitive equipment from power fluctuations and outages, ensuring data centers remain operational. However, their increasing interconnectivity also heightens their exposure to cyber threats that can severely disrupt data center operations.  

With UPS systems embedded in IT and operational technology (OT) networks, this connectivity creates new opportunities for risk, making it vital for organizations to adopt integrated security approaches. As cyber threats evolve, protecting UPS systems against attacks is essential to minimize downtime, safeguard data, and maintain service reliability.  

Common threats include malware, ransomware, DDoS attacks, and phishing. Vulnerabilities such as default credentials, unpatched software, and unsecured network exposure make these systems potential entry points for cyber intrusions.  

Recognizing this, Legrand’s UPS portfolio - including the Keor FLEX - is designed with security, scalability, and resilience at its core.  

 

Some of the features of the Keor FLEX UPS include:

  • Multiple communication interfaces: Includes SNMP, TCP/IP, USB, Modbus, and dry contacts for secure integration with network management and monitoring platforms.
  • Predictive diagnostics and remote monitoring: Detects abnormal operating conditions early, helping prevent failures that could expose vulnerabilities.
  • Hot-swappable modular design: Ensures service continuity and reduces risk during maintenance or module replacement.
  • Front-access maintenance design: Limits physical exposure by allowing all servicing from the front, supporting controlled and secure operation.
  • Compliance with international standards: Includes IEC 62040-1/-2/-3/-4, IEC 62443-4-2, and CE marking, certifying compliance with stringent safety and performance requirements.

 

Cybersecurity Regulations and Standards

Legrand’s UPS systems comply with European cybersecurity standards EN IEC 62443-4-1  and EN IEC 62443-4-2, which specify secure product development lifecycle requirements and technical security criteria for industrial automation and control systems (IACS).

These standards cover key security principles, including identification and authentication control, data confidentiality, system integrity, restricted data flow, timely response to events, user control, and resource availability.

Security assurance is reinforced through formal certification processes conducted by accredited testing laboratories and National Certification Bodies (NCBs) within Europe.

By implementing these standards, Legrand ensures UPS systems remain resilient to cyberattacks throughout their lifecycle by emphasizing secure design, implementation, defect and patch management, and incident response.

 

Intelligent PDUs


Rack power distribution units (PDUs) have evolved far beyond basic power strips. Today, they function as intelligent, networked devices that provide advanced management and monitoring capabilities, delivering real-time visibility, reporting, and alerting of power metrics and events. However, this connectivity also introduces new opportunities for cyber-attacks. Many legacy PDUs and infrastructure components still lack basic cybersecurity protections and are vulnerable to attack.

Legrand has embedded advanced cybersecurity features across its Raritan PX4 and Server Technology PRO4X intelligent rack PDUs powered by the Xerus™ firmware platform.

 

Why Xerus™ Stands Out:  

  • Secure Boot: Ensures only verified firmware runs on your PDU, preventing tampering or malicious code execution.
  • Vulnerability Testing (VAPT): Each firmware version undergoes rigorous internal and third-party testing, including penetration testing with industry-standard tools such as Nessus.
  • Encrypted Communications: All devices use AES 128b/256b encryption with firewall support and strong password policies.
  • SB 327 & NISTIR 8259 Compliance: Meets or exceeds the requirements of leading security regulations for IoT devices.
  • Stringent Internal Standards: Ensures all connected products meet the LNCA security policies for IoT devices.
  • Frequent Firmware Updates: Two major and six minor releases annually, with urgent patches delivered fast.
  • Customizable Alerts: Intelligent features such as SmartLock™ and webcam triggers deliver real-time visual alerts and automated response protocols for unauthorized access events.

 

Overhead Busway


As an integral part of the broader data center power distribution system, overhead track busways also help secure operational infrastructure. The Starline M70 Critical Power Monitor is a next-generation power monitoring device designed to optimize electrical infrastructure in mission-critical data centers. It delivers real-time, revenue-grade power monitoring with unparalleled precision and control to ensure peak performance, balance loads, and prevent failures.  

 

Key cybersecurity-supporting features include:

  • Encrypted password storage: Protects login credentials from unauthorized access.
  • Role-based access control: Admin privileges restrict configuration rights for enhanced security.
  • Read-only display mode: Limits configuration access for certain users.
  • Firmware vulnerability scans: Identifies and addresses potential weaknesses.
  • Support for secure communication protocols: HTTPS and SSH protect data in transit.
  • Multiple communication protocols: Includes SNMPv1/v2c, BACnet, Modbus TCP/RTU, etc., for secure integration and interoperability.
  • Strong default credentials: Randomly generated 15-character passwords that must be changed upon first use.
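A strong default credential like the M70's randomly generated 15-character password can be produced with a cryptographically secure random source. This is an illustrative sketch of the general technique, not Starline's actual implementation; the alphanumeric alphabet is an assumption.

```python
# Sketch: generating a random 15-character default password using
# Python's secrets module (a CSPRNG, unlike the random module).
import secrets
import string

def generate_password(length: int = 15) -> str:
    """Return a cryptographically secure random password drawn from
    letters and digits (alphabet choice is an assumption here)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)  # e.g. a string like 'k3J9xQw7Lm2pR8s'
```

Forcing a change of even a strong generated password on first use, as the M70 does, closes the well-known default-credential attack vector entirely.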

 

Looking Ahead: Securing Data Center Infrastructure 

Data center power systems, ranging from UPSs to PDUs and busways, are critical infrastructure that require vigilant cybersecurity practices to prevent costly disruptions. A comprehensive approach that combines risk assessment, employee training, and advanced technologies can help organizations protect these systems against evolving threats.

By embedding cybersecurity into every layer of infrastructure, data centers can ensure continuity, data protection, and operational trust, reinforcing resilience where it matters most.

Are you looking to protect your critical infrastructure from evolving cyber threats? Contact our team to learn how Legrand's data center solutions can strengthen your data center from the inside out. Contact us here.

 

Immersion Cooling: What It Is and Key Considerations

Blog 01/10/2025
Legrand | Data Center White Space | Cooling

What Is Immersion Cooling?

Immersion cooling is a cutting-edge thermal management method where entire servers or IT components are submerged in a thermally conductive, dielectric liquid. This liquid directly absorbs the heat from the hardware, eliminating the need for air cooling, heat sinks, or traditional server fans.

 

There are two primary types of immersion cooling:

  • Single-phase: The liquid stays in liquid form as it absorbs heat, which is then removed via a heat exchanger.
  • Two-phase: The liquid evaporates when it absorbs heat, forming a gas. The gas then condenses back into liquid in a closed loop.

Both methods outperform air and even direct-to-chip cooling in terms of thermal efficiency and rack density potential.

 

Advantages

  • Extreme Density Support: Enables ultra-high-density deployments well beyond what air or even direct-to-chip systems can handle.
  • Superior Thermal Efficiency: Liquid removes heat far more effectively than air. With direct contact cooling, server components remain consistently cool under full load.
  • Silent Operation: Fans are no longer needed in the server, reducing power usage and noise in the data hall.
  • Reduced Mechanical Complexity: Fewer moving parts mean less wear and lower failure rates.
  • Improved Energy Use: Immersion systems often enable Power Usage Effectiveness (PUE) < 1.05, especially in optimized environments.
  • Potential for Heat Reuse: Captured heat can be reused for district heating, industrial processes, or converted into chilled water, aiding sustainability goals.
  • Space Efficiency: Immersion tanks can consolidate massive compute power into a much smaller physical footprint.

 

Considerations and Trade-Offs

Despite its technical appeal, immersion cooling requires a rethinking of how data centers are designed, operated, and serviced:

  • Server Compatibility: Not all IT equipment is immersion-ready. Servers often need to be purpose-built or modified to operate submerged in dielectric fluid.
  • Physical Access and Maintenance: Servicing submerged components can be more complex and time-consuming compared to traditional racks.
  • Operational Culture Shift: Technicians need new workflows, tools, and safety training to handle immersion systems effectively.
  • Fluid Management: Dielectric fluids must be monitored and maintained over time. Disposal and environmental considerations also apply.
  • Limited Industry Standardization: While gaining traction, immersion cooling still lacks some of the maturity and interoperability seen in air or D2C systems.
  • Upfront Investment: Initial CapEx for immersion infrastructure is high, though often offset by long-term energy and space savings.

 

Ideal Use Cases

  • AI & Machine Learning Clusters: Workloads that generate intense, sustained heat benefit from immersion’s consistent cooling performance.
  • HPC Environments: Where performance and density outweigh other concerns, immersion is often the best fit.
  • Edge Computing & Harsh Environments: Immersion systems offer sealed, ruggedized designs ideal for dusty, remote, or temperature-variable locations.
  • New Data Center Designs: Immersion cooling shines in greenfield builds, where infrastructure can be optimized from day one.

 

Conclusion

Immersion cooling represents a transformative leap in data center thermal management, though it requires a shift in operations and hardware readiness.

Data Center Installation: How to Design and Install Data Centers

Blog 01/10/2025
Legrand | Data Center White Space | Cooling

Data center installation requires meticulous planning, specialized expertise, and comprehensive understanding of complex infrastructure systems. Successful data center installation projects demand careful coordination between multiple engineering disciplines, from electrical systems to network infrastructure. This guide addresses the most critical aspects of data center installation, covering everything from initial design considerations to final commissioning processes.


Modern data center installation involves integrating sophisticated IT systems, power distribution networks, cooling infrastructure, and security components into a cohesive facility. The data center installation process requires specialist teams working together to deliver reliable, scalable solutions that meet current business requirements while providing capacity for future growth.


What are the fundamental requirements for data center installation design?


Data center installation begins with comprehensive capacity planning and infrastructure assessment. Engineers must evaluate current power requirements, cooling needs, and network connectivity demands while ensuring the design provides adequate headroom for future expansion. The installation process involves creating a structured approach that ensures all components work seamlessly together.


Key design considerations include:


Power Infrastructure: Determining electrical supply requirements, UPS capacity, and backup generator specifications. The power system must deliver reliable electricity to all equipment while maintaining redundancy levels appropriate for business continuity requirements.


Cooling Infrastructure: Calculating thermal loads and designing advanced cooling systems (such as CRAC/CRAH, in-row cooling, or liquid-based systems) that maintain optimal environmental conditions. Proper cooling design prevents equipment overheating and ensures consistent performance across all server racks.


Network Architecture: Planning structured cabling systems that support current applications while providing scalability for future growth. This includes both copper and fiber optic cabling infrastructure.


Physical Security: Implementing access control systems and secure rack installations that protect critical hardware from unauthorized access while allowing authorized maintenance teams to perform necessary services.


Cable Management: Higher bandwidth demands and denser connectivity require structured, well-planned cable management. Effective routing and separation of power and data cables ensure performance, simplify maintenance, and support future scalability.




How do you plan cabling infrastructure for data center installation?


Cabling forms the backbone of any data center installation, requiring careful planning to ensure optimal performance and maintenance accessibility. Structured cabling systems must accommodate both current requirements and future expansion needs while meeting industry standards for reliability and performance.
 

Cable Management Strategy: Implementing organized cable routing systems that separate power and data cables while maintaining proper bend radii and avoiding electromagnetic interference. Overhead cable tray systems provide flexible routing options that support future modifications without disrupting existing infrastructure.


Pathway Design: Creating clear pathways for different cable types, including power distribution, network connectivity, and management systems. Proper pathway design ensures maintenance teams can access individual cables without disrupting adjacent systems or compromising operational performance.


Labeling and Documentation: Establishing comprehensive labeling standards that enable quick identification of individual cables and connections. Documentation must include cable specifications, routing information, and connection details for all network and power systems.


Legrand's comprehensive cabling solutions provide the complete infrastructure needed for professional data center installation, including cable management systems, connectivity products, and structured cabling components that exceed industry standards.


What IT systems require integration during data center installation?


Modern data center installation involves integrating multiple IT systems that work together to deliver reliable services. Each system requires careful coordination during the installation process to ensure proper functionality and performance across the entire facility.


Server Infrastructure: Installing and configuring server hardware within properly designed rack systems. This includes ensuring adequate power supply, cooling, and network connectivity for each server while maintaining organized cable management that supports future modifications.


Network Equipment: Implementing switches, routers, and other network hardware that provide connectivity between servers and external networks. Network equipment requires both power and data connections, plus environmental monitoring to ensure optimal performance levels.


Storage Systems: Installing storage arrays and backup systems that provide data protection and performance optimization. Storage systems often have specific power, cooling, and cabling requirements that must be addressed during the installation process.


Management Systems: Deploying monitoring and management software that provides visibility into system performance, environmental conditions, and security status. These systems require network connectivity and integration with existing business management processes.


How do you ensure reliable energy sources during data center installation?


Energy infrastructure represents the most critical component of any data center installation project. Reliable power supply ensures continuous operation and protects against business disruption from electrical failures, making power system design a fundamental consideration for all installation projects.


Primary Power Systems: Installing main electrical distribution equipment that receives utility power and distributes it throughout the facility. This includes transformers, switchgear, and distribution panels that must meet strict electrical standards and provide adequate capacity for current and future loads.


Backup Power Solutions: Implementing UPS systems and backup generators that provide emergency power during utility outages. UPS systems deliver immediate backup power while generators provide long-term emergency supply for extended outages, ensuring continuous operation under all conditions.


Power Distribution: Installing power distribution units (PDUs) that deliver electricity to individual server racks. PDUs must provide adequate capacity for current loads while supporting future expansion requirements and maintaining high quality power delivery.


Monitoring Systems: Deploying power monitoring equipment that tracks electrical consumption, identifies potential issues, and provides data for capacity planning. Real-time monitoring enables proactive maintenance and prevents unexpected failures that could disrupt operations.


Legrand's power distribution solutions provide the complete range of equipment needed for reliable data center power installation, from UPS systems to rack-level power distribution components.


What infrastructure components are essential for data center installation?


Data center installation requires numerous infrastructure components that work together to create a reliable, secure, and efficient facility. Each component must be carefully selected and installed to ensure optimal performance while meeting strict industry standards for reliability and safety.


Rack Systems: Installing server racks that provide secure mounting for IT equipment while ensuring proper airflow and cable management. Racks must accommodate different equipment form factors while maintaining structural integrity and accessibility for maintenance teams.


Environmental Systems: Implementing cooling, humidity control, and air circulation systems that maintain optimal conditions for electronic equipment. Environmental systems must operate efficiently while providing adequate capacity for current and future heat loads generated by IT hardware.


Security Infrastructure: Installing access control systems, surveillance equipment, and intrusion detection systems that protect critical hardware and data. Security systems must provide comprehensive protection while allowing authorized personnel to perform necessary maintenance services.


Cable Management: Deploying organized cable routing systems that separate different cable types while maintaining accessibility for maintenance and modifications. Proper cable management prevents interference and simplifies troubleshooting processes for technical teams.


How do you manage the data center installation process?


Successful data center installation requires coordinated project management that ensures all systems are installed correctly and on schedule. The installation process involves multiple specialist teams working together to deliver a complete facility that meets all performance requirements and industry standards.


Project Planning: Developing detailed installation schedules that coordinate the work of different teams while ensuring critical dependencies are met. Planning must account for equipment delivery schedules, installation sequences, and testing requirements to ensure smooth project execution.


Quality Control: Implementing testing and inspection procedures that verify all systems meet specifications and performance requirements. Quality control ensures that installed equipment operates correctly and meets reliability standards expected in mission-critical environments.


Team Coordination: Managing electrical engineers, network specialists, and other technical teams to ensure all installation work is completed correctly. Coordination prevents conflicts between different installation activities and ensures optimal results across all system components.


Documentation: Creating comprehensive documentation that includes installation procedures, system configurations, and maintenance requirements. Proper documentation enables future maintenance and system modifications while ensuring compliance with industry standards.


What are the key success factors for data center installation?


Delivering successful data center installation requires attention to multiple factors that influence project outcomes. Understanding these factors helps ensure projects meet performance, schedule, and budget requirements while delivering reliable, scalable infrastructure solutions.


Specialist Expertise: Engaging experienced engineers and installation teams who understand the complexities of data center infrastructure. Specialist knowledge ensures all systems are installed correctly and operate reliably throughout their expected service life.


Quality Components: Selecting high-quality equipment and materials that meet industry standards and provide long-term reliability. Quality components reduce maintenance requirements and prevent unexpected failures that could compromise business operations.


Proper Planning: Developing comprehensive plans that address all aspects of the installation process, from initial design through final commissioning. Thorough planning prevents delays and ensures all requirements are met within established timeframes.


Testing and Validation: Implementing comprehensive testing procedures that verify all systems operate correctly under various conditions. Testing ensures the installation meets performance requirements and operates reliably under normal and emergency conditions.


Ongoing Support: Establishing maintenance and support services that ensure continued reliable operation after installation completion. Ongoing support maximizes system uptime and extends equipment life while maintaining optimal performance levels.


Conclusion


Data center installation represents a complex undertaking that requires careful planning, specialist expertise, and high-quality components working together as a single integrated system. Success depends on understanding the requirements for power systems, cooling infrastructure, network connectivity, and security systems while ensuring all components meet strict industry standards.


By following structured installation processes and working with experienced teams, organizations can deliver data centers that meet current requirements while providing flexibility for future growth. Proper data center installation ensures reliable operation, efficient resource utilization, and the capacity to support evolving business needs.


Legrand's comprehensive data center solutions provide the complete range of products and services needed for successful data center installation, from initial design consultation through ongoing maintenance support. Our experienced teams help ensure projects deliver reliable, efficient facilities that support critical business operations.
 

Hyperscale Data Centers: The Backbone of Modern Digital Infrastructure

Blog 01/10/2025
LegrandData Center White SpaceCooling

What is a hyperscale data center?


A hyperscale data center is a massive facility designed to support the enormous computing and storage requirements of cloud-based services and applications. These facilities represent the largest scale of data center infrastructure, typically housing thousands of servers across tens of thousands of square meters. The sheer size of these operations enables them to deliver computing resources, storage solutions, and network services to millions of users worldwide.


Unlike traditional enterprise data centers that serve specific organizations, hyperscale facilities are built to support global cloud providers and technology companies that require unprecedented capacity for their operations. The scale of these facilities allows for significant efficiency gains in energy consumption, management systems, and operational costs compared to smaller, distributed data centers.


How do major cloud providers utilize hyperscale data centers?


Leading technology companies like Google, Amazon Web Services (AWS), Microsoft Azure, and Meta have invested heavily in hyperscale data center infrastructure to support their global services and applications. These providers operate networks of hyperscale facilities strategically located around the world to ensure optimal performance and access for their customers.


Google's Hyperscale Infrastructure

Google operates some of the most advanced hyperscale data centers globally, with facilities designed to support their search services, cloud computing platform, and artificial intelligence processing requirements. Their data centers feature custom-designed servers and cooling systems that maximize efficiency while minimizing environmental impact.


Amazon Web Services (AWS)

Amazon Web Services has built an extensive network of hyperscale facilities to support their cloud infrastructure, providing computing capacity and storage solutions to enterprise customers and individual developers. Their facilities are designed with redundancy and security as primary considerations, ensuring reliable service delivery across global markets.


Microsoft Azure

Microsoft Azure's hyperscale data centers enable the company to deliver cloud services, productivity applications, and AI-based solutions to businesses worldwide. Their facilities incorporate advanced technologies for energy management and operational efficiency, supporting the massive demand for cloud computing resources.


What are the key characteristics of hyperscale data centers?


Hyperscale data centers are distinguished by several critical characteristics that enable them to operate at unprecedented scale and efficiency. The facility design focuses on maximizing computing density while maintaining optimal environmental conditions for equipment operation.


Energy efficiency is a fundamental consideration in hyperscale designs, with advanced cooling systems, power management technologies, and renewable energy sources integrated throughout the infrastructure. These facilities often consume as much power as small cities, making energy optimization essential for both operational costs and environmental sustainability.
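
Efficiency at this scale is usually quantified with Power Usage Effectiveness (PUE), the ratio of total facility power to the power reaching IT equipment, where 1.0 is the theoretical ideal. A minimal sketch in Python, using illustrative figures rather than measurements from any real facility:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power.

    1.0 is the theoretical ideal (every watt reaches IT equipment);
    leading hyperscale operators commonly target values near 1.1-1.2.
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical facility: 50 MW total draw, 42 MW delivered to IT equipment.
print(round(pue(50_000, 42_000), 2))  # 1.19
```

Everything drawn beyond the IT load (cooling, power conversion losses, lighting) pushes the ratio above 1.0, which is why cooling and power-distribution efficiency dominate hyperscale design decisions.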


Key characteristics include:

  • Massive scale - Facilities often exceed 10,000 square meters with thousands of servers
  • Standardized infrastructure - Consistent designs enable efficient management and maintenance
  • High density computing - Optimized server configurations maximize processing power per square meter
  • Advanced cooling systems - Sophisticated environmental control manages heat from dense equipment
  • Redundant systems - Multiple layers of backup ensure continuous operation
  • Automated management - AI and machine learning optimize facility operations
  • Global connectivity - High-speed networks connect facilities worldwide
  • Scalable architecture - Modular designs allow rapid capacity expansion


How do hyperscale data centers handle artificial intelligence and machine learning workloads?


The growth of artificial intelligence and machine learning applications has significantly increased demand for specialized computing infrastructure. Hyperscale data centers are uniquely positioned to support these requirements through their massive processing capacity and advanced hardware configurations.


Specialized Processing Requirements

AI workloads require specialized processors, including graphics processing units (GPUs) and tensor processing units (TPUs), which generate substantial heat and require sophisticated cooling solutions. Hyperscale facilities incorporate these specialized systems while maintaining the environmental controls necessary for optimal performance.


High-Performance Storage and Networking

Machine learning applications also demand high-speed storage systems and network connectivity to process vast amounts of data efficiently. The scale of hyperscale facilities allows providers to implement cutting-edge storage technologies and network infrastructure that would be cost-prohibitive in smaller facilities.


What operational challenges do hyperscale data centers face?


Operating hyperscale data centers presents unique challenges due to their massive scale and complexity. These facilities must address multiple operational areas simultaneously to maintain reliable service delivery.


Power and Energy Management

Power management becomes critical as these facilities can consume tens (in the largest cases, hundreds) of megawatts of electricity, requiring sophisticated distribution systems and backup power sources to ensure continuous operation. Managing such massive electrical loads requires advanced monitoring and control systems.


Security and Access Control

Security considerations are amplified in hyperscale environments due to the concentration of valuable data and computing resources. These facilities implement multiple layers of physical and digital security measures, including advanced access controls, surveillance systems, and intrusion detection technologies.


Automation and Maintenance

Maintenance and management of thousands of servers across massive facilities requires advanced automation and monitoring systems. Traditional manual approaches are impractical at hyperscale, necessitating AI-driven management solutions that can predict equipment failures and optimize resource allocation.


Additional Operational Considerations

Common operational challenges include:

  • Power distribution - Managing massive electrical loads across large facilities
  • Cooling management - Maintaining optimal temperatures with dense equipment configurations
  • Equipment lifecycle - Coordinating maintenance and replacement of thousands of components
  • Network optimization - Ensuring high-performance connectivity across global locations
  • Staff coordination - Managing operations teams across multiple facility locations
  • Regulatory compliance - Meeting data protection and industry requirements across different markets


How do hyperscale data centers compare to traditional enterprise and colocation facilities?


Hyperscale data centers operate at a fundamentally different scale compared to traditional enterprise or colocation facilities, with distinct advantages and characteristics that set them apart from conventional data center approaches.


Scale and Purpose Differences

While enterprise data centers typically serve single organizations and colocation facilities house multiple customers in shared spaces, hyperscale facilities are purpose-built for massive cloud services and global applications. This fundamental difference in approach affects every aspect of their design and operation.


Economic and Operational Advantages

The economic advantages of hyperscale operations include significant cost reductions through economies of scale, standardized designs, and automated management systems. These facilities can achieve far greater energy efficiency and lower operational costs than smaller alternatives.
 

Infrastructure Design

Infrastructure requirements also differ substantially, with hyperscale facilities incorporating custom-designed systems optimized for specific workloads, while traditional data centers often rely on standard commercial equipment and solutions.


What role does location play in hyperscale data center deployment?


Location selection for hyperscale data centers involves complex considerations that significantly impact operational efficiency, performance, and costs. Strategic positioning of these facilities requires careful analysis of multiple factors.


User Proximity and Performance

Providers strategically position facilities to minimize latency for their services while optimizing operational costs. Proximity to major population centers and business hubs ensures optimal user experience and access to services.


Power and Energy Considerations

Access to reliable power sources has become increasingly important as hyperscale operators seek to reduce their environmental impact. Many facilities are located near solar, wind, or hydroelectric power generation to support sustainable operations and reduce energy costs.


Climate and Environmental Factors

Climate considerations also influence location decisions, as cooler environments can reduce cooling costs and improve overall energy efficiency. Some hyperscale operators have built facilities in northern climates to take advantage of natural cooling opportunities.


Regulatory and Market Access

Location selection must also consider regulatory requirements, data sovereignty laws, and market access requirements that vary by region and industry.


How are hyperscale data centers evolving to meet future demands?


The hyperscale industry continues to evolve rapidly as demand for cloud services, AI applications, and digital transformation increases globally. Emerging technologies like edge computing are driving the development of smaller, distributed hyperscale facilities that bring processing closer to end users.


Sustainability initiatives are becoming central to hyperscale operations, with providers investing in renewable energy sources, advanced cooling technologies, and circular economy principles for equipment lifecycle management. These efforts address both environmental concerns and operational efficiency requirements.


The integration of artificial intelligence into facility management systems is improving operational efficiency and enabling predictive maintenance capabilities. AI-based solutions can optimize energy consumption, predict equipment failures, and automatically adjust system parameters for optimal performance.


Future developments in hyperscale data centers will likely focus on:

  • Edge computing integration - Distributed processing to reduce latency
  • Sustainable operations - Renewable energy and efficient cooling systems
  • AI-driven management - Automated optimization and predictive maintenance
  • Specialized hardware - Custom processors for AI and machine learning workloads
  • Advanced connectivity - High-speed networks supporting global services
  • Modular designs - Flexible infrastructure for rapid deployment and scaling


Hyperscale data centers represent the pinnacle of modern computing infrastructure, enabling the digital services and applications that power today's global economy. Their massive scale, advanced technologies, and operational efficiency make them essential for supporting the ever-increasing demand for cloud computing, artificial intelligence, and digital transformation across industries. To learn more about how Legrand's infrastructure solutions support hyperscale data center operations, contact our team of specialists who understand the unique requirements of these massive facilities.
 

Adiabatic Cooling: Considerations Before You Invest In It

Blog 01/10/2025
LegrandData Center White SpaceCooling

What is an adiabatic cooling system?


An adiabatic cooling system is an energy-efficient cooling solution that leverages the natural process of evaporation to reduce air temperature without the need for traditional refrigeration. This technology works by introducing water into hot air streams, where the evaporation process occurs naturally, creating a cooling effect that reduces the overall temperature of the medium being cooled.


Unlike conventional cooling systems that rely on mechanical refrigeration, adiabatic cooling achieves temperature control with minimal energy consumption. The process is termed adiabatic because no heat is exchanged with the surrounding environment: the sensible heat removed from the air is converted into latent heat of evaporation within the air stream itself, making this an efficient solution for a wide range of industrial and commercial applications.


How do adiabatic cooling systems work?


Adiabatic cooling systems operate on the fundamental principle that when water evaporates, it absorbs heat from the surrounding air, effectively reducing the temperature. The process begins when hot air enters the system, where it encounters water through various delivery methods such as spray nozzles or wetted media.


As the air passes through the system, evaporation occurs when water molecules absorb energy from the hot air and transform into vapor. This transformation removes heat from the air stream, resulting in cooler, more humid air exiting the system. The fan units within the system ensure proper air circulation and pressure management throughout the cooling process.


The effectiveness of an adiabatic system depends on several factors, including ambient temperature, humidity levels, and the design of the equipment. In environments with lower humidity, the evaporation process is more efficient, allowing for greater temperature reductions with minimal water consumption.
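
The humidity dependence described above is captured by the standard direct evaporative cooling effectiveness model, T_supply = T_db − ε·(T_db − T_wb): the wet-bulb temperature is the theoretical floor, so drier air (a larger wet-bulb depression) yields more cooling. A simplified Python sketch with illustrative temperatures and an assumed effectiveness of 0.85:

```python
def evaporative_supply_temp(t_dry_bulb: float, t_wet_bulb: float,
                            effectiveness: float = 0.85) -> float:
    """Supply-air temperature from the direct evaporative cooling
    effectiveness model:

        T_supply = T_db - eps * (T_db - T_wb)

    The wet-bulb temperature is the theoretical lower limit; the
    effectiveness (typically ~0.7-0.95) reflects the equipment design.
    """
    return t_dry_bulb - effectiveness * (t_dry_bulb - t_wet_bulb)

# Dry climate: 35 C dry bulb, 18 C wet bulb -> strong cooling
print(round(evaporative_supply_temp(35.0, 18.0), 2))  # 20.55
# Humid climate: same dry bulb, 28 C wet bulb -> far less cooling
print(round(evaporative_supply_temp(35.0, 28.0), 2))  # 29.05
```

The two calls make the climate dependency concrete: identical equipment delivers roughly 14 K of cooling in the dry case but only about 6 K in the humid one.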


What are the advantages of adiabatic cooling?


Adiabatic cooling offers significant benefits for business operations, particularly in terms of cost management and energy efficiency. The system requires substantially less energy compared to traditional mechanical cooling methods, as it relies primarily on natural evaporation rather than energy-intensive compressors and refrigeration units.


The operating costs are typically lower due to reduced electricity consumption and the use of water as the primary cooling medium. This makes adiabatic cooling an attractive solution for businesses looking to optimize their cooling requirements while maintaining effective temperature control in their facilities.


Key advantages include:

  • Energy efficiency - Significantly lower power consumption compared to traditional cooling
  • Cost-effective operation - Reduced electricity bills and maintenance requirements
  • Environmental benefits - Uses water instead of refrigerants, reducing environmental impact
  • Scalable design - Can be adapted for various facility sizes and applications
  • Reliable performance - Consistent cooling with proper system management


Where are adiabatic cooling systems commonly used?


Adiabatic cooling finds application across a wide range of industries and environments where efficient temperature control is essential. Data centers increasingly rely on these systems to manage heat generated by server equipment, ensuring optimal operating conditions while minimizing energy costs.


Manufacturing facilities utilize adiabatic cooling to maintain comfortable working environments and protect sensitive equipment from overheating. The technology is particularly effective in industrial settings where large volumes of air require cooling, such as warehouses, production facilities, and processing plants.


Common applications include:

  • Data centers - Cooling server rooms and IT equipment

  • Manufacturing facilities - Maintaining optimal production environments

  • Commercial buildings - Providing cost-effective climate control

  • Industrial processes - Managing heat in production operations

  • Outdoor cooling - Creating comfortable spaces in hot climates


What maintenance considerations are important for adiabatic cooling systems?


Proper maintenance is crucial for ensuring the long-term performance and efficiency of adiabatic cooling systems. Regular attention to water quality and system cleanliness helps prevent issues such as Legionella growth and mineral buildup that can affect system operation.


Water treatment and filtration are essential components of maintenance programs, as they help control biological growth and minimize scaling within the system. The fan units require periodic inspection and cleaning to maintain optimal air flow and pressure levels throughout the cooling process.


Essential maintenance practices include:

  • Water quality management - Regular testing and treatment to prevent contamination

  • System cleaning - Periodic cleaning of components to prevent buildup

  • Filter replacement - Maintaining clean air filters for optimal performance

  • Leak detection - Monitoring for water leaks that could affect efficiency

  • Performance monitoring - Tracking system output to identify potential issues


How does adiabatic cooling compare to other cooling methods?


When compared to conventional mechanical cooling systems, adiabatic cooling offers distinct advantages in terms of energy consumption and operating costs. Traditional systems rely on refrigeration cycles that require significant electrical power, while adiabatic systems use the natural cooling properties of water evaporation.


The initial investment for adiabatic cooling systems is often lower than traditional alternatives, and the ongoing operating costs are reduced due to minimal energy requirements. However, the effectiveness of adiabatic cooling varies based on local climate conditions, with optimal performance occurring in environments with lower humidity levels.


Comparison factors include:

  • Energy consumption - Adiabatic systems can use 75-90% less energy than traditional mechanical cooling

  • Installation costs - Generally lower initial investment requirements

  • Climate dependency - Performance varies with local humidity and temperature conditions

  • Water usage - Requires water supply but eliminates refrigerant needs

  • Maintenance requirements - Maintenance focuses on water quality rather than mechanical components


What are the design considerations for implementing adiabatic cooling?


Successful implementation of adiabatic cooling requires careful consideration of environmental factors and system design parameters. The local climate conditions, including temperature and humidity ranges, directly impact the system's cooling capacity and efficiency.


Proper sizing of units and fan systems ensures adequate cooling performance while minimizing water and energy consumption. The design must account for air flow patterns, water distribution systems, and control mechanisms that optimize the evaporation process under varying operating conditions.


Key design considerations include:

  • Climate assessment - Evaluating local temperature and humidity conditions

  • Capacity requirements - Determining cooling loads and system sizing

  • Water supply planning - Ensuring adequate water availability and quality

  • Air flow design - Optimizing air circulation for maximum efficiency

  • Control systems - Implementing monitoring and control technologies
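
The capacity-requirements step above comes down to a heat balance: rearranging Q = ρ·V̇·c_p·ΔT gives the airflow the system must deliver for a given load. A simplified sizing sketch (sensible load only; real designs also account for latent loads, altitude, and safety margins):

```python
def required_airflow_m3_per_s(heat_load_kw: float, delta_t_k: float,
                              air_density: float = 1.2,     # kg/m^3, near sea level
                              cp_air_kj: float = 1.006) -> float:
    """Volumetric airflow needed to absorb a sensible heat load:

        V = Q / (rho * cp * dT)
    """
    if delta_t_k <= 0:
        raise ValueError("temperature rise must be positive")
    return heat_load_kw / (air_density * cp_air_kj * delta_t_k)

# Hypothetical 100 kW equipment room with a 10 K air-side temperature rise:
print(round(required_airflow_m3_per_s(100.0, 10.0), 2))  # 8.28 m^3/s
```

The same relation also shows the trade-off between fan energy and temperature rise: doubling the allowable ΔT halves the required airflow.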


How can businesses evaluate if adiabatic cooling is right for them?


Businesses considering adiabatic cooling should evaluate their specific cooling requirements, environmental conditions, and operational priorities. The technology is particularly well-suited for operations in moderate to low humidity environments where traditional cooling costs are a significant concern.


Factors to consider include current energy costs, facility size and layout, available water resources, and maintenance capabilities. A thorough analysis of these elements helps determine whether adiabatic cooling aligns with business objectives and provides the expected return on investment.


Evaluation criteria include:

  • Current cooling costs - Assessing potential savings from reduced energy consumption

  • Environmental conditions - Determining suitability based on local climate

  • Facility requirements - Matching system capabilities to cooling needs

  • Resource availability - Ensuring adequate water supply and maintenance support

  • Long-term benefits - Considering operational efficiency and cost management


Understanding adiabatic cooling principles and applications helps businesses make informed decisions about their cooling infrastructure. The technology offers an efficient, cost-effective solution for managing heat in various environments while supporting sustainable operations. 
 

Data Center Infrastructure

Blog 01/10/2025
LegrandData Center White SpaceCooling

Why is data center infrastructure so critical?


A data center is a specialized facility designed to house computer systems, networking equipment, and related components that store, process, and distribute digital information. These facilities serve as the backbone of our connected world, enabling everything from cloud computing and business applications to the software that drives modern industry operations.


Data center infrastructure encompasses all the physical and digital systems required to support continuous operations. This includes power supply systems, cooling equipment, security technologies, and network components that work together to create a controlled environment for critical computing processes. Without robust infrastructure, even the most advanced server technologies cannot deliver reliable service to users worldwide.


What are the essential components of data center infrastructure?


Power and Electrical Systems


Power infrastructure forms the foundation of any data center facility. Uninterruptible Power Supply (UPS) systems provide instant backup during utility outages while conditioning electricity to protect sensitive equipment. These systems must deliver consistent energy to servers, networking gear, and cooling systems without interruption.


Key power components include:

  • UPS systems - Provide backup power and electrical conditioning
  • Power distribution units (PDUs) - Route electricity to individual server racks
  • Generators - Supply long-term backup power during extended outages
  • Transformers - Convert utility power to appropriate voltage levels
  • Switchboards - Safely distribute and control electrical power across the facility
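
The redundancy behind these components can be made concrete with a small N+R sizing helper. This is a hypothetical sketch: the 90% loading cap, module rating, and N+1 arrangement are illustrative assumptions, not Legrand specifications:

```python
import math

def ups_module_count(it_load_kw: float, module_kw: float,
                     redundancy: int = 1,
                     max_load_fraction: float = 0.9) -> int:
    """Modules needed for an N+R UPS configuration: enough units to carry
    the load with each loaded at or below max_load_fraction, plus R spares
    so one module can fail without interrupting power.
    """
    if it_load_kw <= 0 or module_kw <= 0:
        raise ValueError("loads and ratings must be positive")
    n = math.ceil(it_load_kw / (module_kw * max_load_fraction))
    return n + redundancy

# Hypothetical 800 kW IT load on 250 kW modules in an N+1 arrangement:
print(ups_module_count(800, 250))  # 5 modules (4 carry the load + 1 spare)
```

The headroom factor matters: capping each module at 90% load keeps capacity in reserve for transients and lets the remaining modules absorb the load when one is taken offline for maintenance.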


Cooling and Environmental Control


Data center equipment generates significant heat that must be managed to prevent failures and maintain optimal performance. Cooling systems remove excess heat while controlling humidity and air quality throughout the facility.


Essential cooling components include:

  • CRAC/CRAH units, in-row cooling and FanWall systems - Deliver precision temperature and humidity control
  • Advanced cooling such as rear-door heat exchangers (RDHX), direct-to-chip (D2C) and immersion - Manage high-density loads efficiently
  • Heat rejection systems - Remove heat from internal cooling systems
  • Environmental monitoring - Track temperature, humidity and leakage conditions


Physical Security and Access Control


Security systems protect valuable equipment and sensitive data from unauthorized access. Physical security measures control who can enter the facility and monitor all activities within the data center.


Critical security components include:

  • Access control systems - Manage entry to different facility areas
  • Surveillance cameras - Monitor all areas for security threats
  • Biometric scanners - Verify identity before granting access
  • Intrusion detection - Alert operators to unauthorized entry attempts


Network and Connectivity Infrastructure


Network infrastructure enables data centers to connect with the outside world and facilitate communication between internal systems. This includes both physical cabling and networking equipment that route data efficiently.


Key network components include:

  • Fiber optic cables - Provide high-speed data transmission
  • Network switches - Route data between connected devices
  • Routers - Direct traffic between different network segments
  • Cable management systems - Organize and protect network connections


How do cloud providers utilize data center infrastructure?


Cloud service providers rely on massive data center infrastructure to deliver computing resources, storage, and applications to customers worldwide. These facilities house thousands of servers that provide the processing power needed for cloud-based services.


Cloud infrastructure requires:

  • High-density server configurations - Maximize computing power per rack
  • Redundant systems - Ensure continuous service availability
  • Scalable architecture - Support rapid capacity expansion
  • Advanced cooling - Handle heat generated by dense equipment layouts


The infrastructure must support multiple types of cloud services, from basic storage to complex processing applications that serve business customers across various industries.


What role does Legrand play in data center infrastructure?


Legrand's comprehensive data center solutions provide essential infrastructure components that enable reliable facility operations. Our product range includes power distribution, cooling support, and physical infrastructure systems designed for mission-critical environments.


Legrand solutions address key infrastructure needs:

  • Critical power – UPS & STS solutions, switchgear, cast resin transformers, high-power busbars and Starline track busway for scalable, resilient energy distribution.
  • Physical infrastructure – Modular server and network racks & cabinets, hot/cold aisle containment, overhead cable management and fire-resistant EZ-Path devices.
  • IT infrastructure – Intelligent rack PDUs with sensors, structured cabling (copper & fibre), KVM & serial consoles, and connectivity fibre solutions for high-density environments.
  • Cooling solutions – In-row active cooling, rear door heat exchangers (RDHx), immersion and air-assisted liquid cooling for efficiency at any scale.
  • Management & monitoring – DCIM software integration with intelligent metering, environmental sensors and access control for full visibility and control.


How is data center infrastructure evolving for the future?


The future of data center infrastructure is being shaped by emerging technologies and changing business requirements. Edge computing is driving demand for smaller, distributed facilities that bring processing closer to end users.


Key trends include:

  • Rising rack densities – AI workloads are pushing power demands beyond 100 kW per cabinet, requiring new approaches to cooling and distribution.
  • Shift to liquid cooling – Direct-to-chip and rear-door heat exchangers are replacing traditional air-based systems for high-density environments.
  • Smarter power distribution – Higher voltage architectures and modular busway systems deliver greater efficiency and scalability.
  • Modular design – Standardised, prefabricated blocks enable faster deployment and stepwise scaling without downtime.
  • Sustainability focus – Solutions aim to cut energy use and water consumption while integrating renewables and circular practices.
  • Intelligent infrastructure – AI-driven monitoring, automation and adaptive systems improve resilience, efficiency and predictive maintenance.
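
The shift to liquid cooling in the list above follows directly from heat-capacity arithmetic. This hypothetical comparison (standard water properties, simplified single-phase loop, illustrative 100 kW cabinet) shows why dense racks are far easier to serve with water than with air:

```python
def coolant_flow_l_per_min(heat_kw: float, delta_t_k: float,
                           density: float = 998.0,   # kg/m^3, water ~20 C
                           cp_kj: float = 4.186) -> float:
    """Single-phase liquid flow needed to carry away a heat load:

        m_dot = Q / (cp * dT)   (kg/s), converted to litres per minute.
    """
    mass_flow = heat_kw / (cp_kj * delta_t_k)    # kg/s
    return mass_flow / density * 1000.0 * 60.0   # L/min

# Hypothetical 100 kW rack with a 10 K coolant temperature rise:
print(round(coolant_flow_l_per_min(100.0, 10.0)))  # 144 L/min
# Moving the same heat in air at the same dT takes roughly 8.3 m^3/s --
# on the order of 3,500x the volumetric flow.
```

Water's volumetric heat capacity is several thousand times that of air, so a modest pumped loop replaces an enormous volume of moving air, which is what makes direct-to-chip and rear-door approaches practical at 100 kW-per-cabinet densities.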


Legrand supports these shifts with solutions across critical power, liquid cooling, modular containment, intelligent PDUs and integrated DCIM platforms, helping operators design for density, sustainability and long-term resilience.


What are the different types of data center facilities?


Data centers vary significantly in size, purpose, and infrastructure requirements. Understanding these different types helps organizations choose the right infrastructure approach for their specific needs.


Enterprise Data Centers

Large organizations often operate their own facilities to house critical business systems and applications. These data centers require comprehensive infrastructure to support diverse computing needs and ensure business continuity.


Colocation Facilities

Colocation providers offer shared data center space and infrastructure services to multiple customers. These facilities must support various equipment types and provide flexible power and cooling options.


Cloud Data Centers

Cloud providers operate massive facilities designed to deliver computing resources and software services to customers worldwide. These facilities require highly scalable infrastructure that can handle rapid capacity changes.


Edge Data Centers

Edge facilities bring computing resources closer to end users, reducing latency for time-sensitive applications. These smaller facilities require efficient infrastructure that can operate with minimal on-site support.


How do organizations plan data center infrastructure investments?


Successful data center infrastructure planning requires careful analysis of current needs and future growth projections. Organizations must balance performance requirements with cost considerations while ensuring adequate capacity for business-critical operations.


Key planning considerations include:

  • Capacity requirements – Current and projected computing needs
  • Power and cooling – Infrastructure needed to support equipment
  • Security requirements – Physical and digital protection measures
  • Scalability – Ability to expand infrastructure as needs grow
  • Compliance – Meeting industry and regulatory requirements


Proper planning ensures that infrastructure investments provide long-term value while supporting evolving business needs and technological advances.


What makes data center infrastructure resilient and reliable?


Resilient data center infrastructure incorporates multiple layers of redundancy and protection to ensure continuous operations. This includes backup systems for power, cooling, and network connectivity that can maintain service during equipment failures or external disruptions.


Essential resilience features include:

  • Redundant power systems – Multiple UPS units and backup generators
  • Diverse network connections – Multiple internet service providers and routing paths
  • Environmental controls – Backup cooling systems and environmental monitoring
  • Physical security – Multiple access control and surveillance systems
  • Proper planning & capacity headroom – Designing for growth, with spare power, cooling and space to accommodate future demand without disruption


Building resilient infrastructure requires careful coordination between all facility systems and regular testing to ensure backup systems function properly when needed.


Understanding data center infrastructure is essential for any organization that depends on reliable computing resources. From power distribution and cooling systems to physical security and monitoring solutions, every component must work together seamlessly to ensure operational excellence. To learn more about how Legrand's comprehensive infrastructure solutions can support your data center requirements, contact our team of specialists today.
 

Uninterruptible Power Supply for Business: Ensuring Continuity, Protecting Operations

Blog 01/10/2025
Legrand | Data Center Grey Space | UPS

What is an Uninterruptible Power Supply?


An uninterruptible power supply (UPS) is an electrical system that provides immediate backup power when the main utility source fails. Designed to maintain energy flow during short-term outages and disturbances, a UPS protects sensitive equipment and ensures critical business operations continue without interruption. Beyond emergency backup, UPS systems also condition incoming power, filtering out surges, spikes, and other anomalies that could compromise system stability.


Whether the goal is safeguarding digital infrastructure, preventing production downtime, or complying with safety protocols, a UPS system acts as the first layer of resilience in your electrical infrastructure and distribution. By bridging the gap between utility power and long-term backup generators, it ensures that businesses avoid costly disruptions and maintain operational continuity.


Why Do Businesses Rely on UPS Systems?


A power loss—even momentary—can have serious operational consequences. In environments where uptime is non-negotiable, a UPS enables immediate response and smooth transition to alternate power sources such as generators. Downtime costs can range from thousands to millions of dollars per hour, depending on the industry, making UPS systems indispensable for risk management and operational resilience.


UPS systems deliver key advantages to businesses, including:

  • Continuity of critical operations during outages
  • Protection for sensitive devices from surges, spikes, and voltage sags
  • Controlled shutdowns to prevent data loss and hardware damage
  • Improved compliance with industry safety standards
  • Confidence in system availability across essential services
  • Support for digital transformation initiatives by providing a reliable power foundation


Industries such as healthcare, finance, industrial automation, and data center operations view UPS systems not as optional add-ons but as infrastructure essentials. Without them, the risks of financial loss, reputational damage, and even safety hazards rise sharply.


How Does a UPS System Work?


UPS systems function by storing energy in an internal battery and delivering it instantly when a disturbance is detected. Core components include:

  • Rectifier: Converts AC input power to DC for battery charging
  • Battery Bank: Stores energy for emergency use
  • Inverter: Converts stored DC power back to clean, stable AC output
  • Control Systems: Monitor, regulate, and optimize power flow


In online (double-conversion) systems, the inverter is always on—continuously powering the load and fully isolating it from raw utility input. This architecture ensures seamless power delivery, even during fluctuations. In line-interactive models, voltage regulation can reduce reliance on the battery, extending its lifespan. Offline models, while simpler, switch to battery only when disruptions occur.
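The practical difference between these topologies is the transfer gap the load must ride through when utility power fails. The sketch below compares typical transfer times against a load's ride-through tolerance; the millisecond figures and the helper function are illustrative assumptions, not vendor specifications:

```python
# Hypothetical sketch comparing typical UPS transfer times against a load's
# ride-through tolerance. Figures are indicative; consult vendor datasheets
# and IEC 62040-3 classifications for exact values.

TYPICAL_TRANSFER_MS = {
    "online_double_conversion": 0.0,   # inverter always carries the load
    "line_interactive": 4.0,           # brief switchover to battery
    "offline_standby": 10.0,           # longest break while battery engages
}

def tolerates(topology: str, ride_through_ms: float) -> bool:
    """True if the load can ride through the topology's transfer gap."""
    return TYPICAL_TRANSFER_MS[topology] <= ride_through_ms

# A typical server power supply holds up output for roughly 10-20 ms.
for topology in TYPICAL_TRANSFER_MS:
    print(topology, "ok" if tolerates(topology, 10.0) else "risky")
```

This is why zero-transfer-time online systems are specified for critical infrastructure, while standby designs suit less sensitive office equipment.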


Advanced models incorporate real-time diagnostics, bypass mechanisms, and environmental sensors to further enhance resilience and system flexibility. With intelligent management software, operators can monitor performance remotely, predict failures, and optimize energy efficiency across the entire infrastructure.


What Types of UPS Systems Are Available?


Different UPS technologies are suited to different applications. Choosing the right type depends on equipment sensitivity, availability requirements, and environmental conditions. Factors such as scalability, efficiency, and integration with renewable energy sources increasingly influence selection in modern facilities.


Offline (Standby) UPS

  • Simple and cost-effective
  • Engages battery power only during outages
  • Suited for non-critical devices such as PCs or peripheral office equipment
  • Not ideal for environments requiring zero transfer time


Line-Interactive UPS

  • Automatically corrects minor voltage fluctuations
  • Maintains regulated power supply without switching to battery
  • Common in small businesses, retail, or network cabinets
  • Provides an effective balance between performance and cost


Online (Double Conversion) UPS

  • Delivers continuous conditioned power
  • Eliminates transfer time, ideal for critical infrastructure
  • Used in data centers, industrial control systems, and healthcare facilities
  • Ensures the highest level of protection against all power anomalies


What Should Be Considered When Choosing a UPS?


Selecting a UPS involves more than sizing batteries or matching voltages. A well-designed solution accounts for both current requirements and future growth. Businesses must also weigh total cost of ownership, balancing capital expenditure with operating efficiency and maintenance needs.


Important selection factors include:

  • Power rating (kVA/kW) of protected equipment
  • Required runtime to cover transition to generator or safe shutdown
  • Redundancy needs (e.g., N+1, N+N configurations)
  • Environmental conditions including space, cooling, and airflow
  • Integration with existing infrastructure and monitoring systems
  • Battery type, lifespan, and replacement strategy
  • Efficiency levels and impact on sustainability targets


While typical runtimes range from 5 to 15 minutes under standard configurations, extended runtimes can be achieved through external battery packs or integration with standby generators. In mission-critical facilities, modular solutions allow for flexible expansion without major redesigns.
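Two of the sizing factors above, power rating and required runtime, can be related with a simple first-pass estimate. The sketch below converts a kVA rating to real power via an assumed power factor and estimates idealised battery runtime; both helper functions and the 0.9 / 0.95 figures are illustrative assumptions, not a substitute for vendor sizing tools:

```python
# Rough first-pass UPS sizing sketch (illustrative assumptions only):
# convert a kVA rating to real power via power factor, then estimate
# battery runtime from usable stored energy through the inverter.

def load_kw(rating_kva: float, power_factor: float = 0.9) -> float:
    """Real power (kW) drawn by a load rated in kVA."""
    return rating_kva * power_factor

def runtime_minutes(battery_wh: float, load_w: float,
                    inverter_efficiency: float = 0.95) -> float:
    """Idealised runtime: usable stored energy divided by load power."""
    return battery_wh * inverter_efficiency / load_w * 60

kw = load_kw(10)  # a 10 kVA UPS at power factor 0.9 carries 9 kW
print(f"{runtime_minutes(2000, kw * 1000):.1f} min on a 2 kWh battery")
```

Real runtimes fall short of this idealised figure as batteries age and derate under high discharge rates, which is one reason runtime verification appears in the maintenance practices below.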


UPS decisions should be aligned with wider electrical infrastructure and distribution strategy and business continuity planning. A properly sized and configured UPS not only protects equipment but also ensures compliance with service-level agreements and regulatory frameworks.


Where Are UPS Systems Commonly Used?


UPS systems are deployed across industries where stable power is essential:

  • Data Centers – Ensuring uptime for mission-critical server infrastructure
  • Healthcare Facilities – Protecting life-saving medical equipment
  • Industrial Operations – Securing automation systems and machinery
  • Telecommunications – Maintaining signal transmission and network uptime
  • Commercial Buildings – Supporting lighting, access control, and HVAC systems
  • Retail and Banking – Preventing transaction failures and ensuring customer trust


From facility-level installations to rack-mounted systems, UPS solutions scale to fit diverse operational needs. They form the backbone of modern infrastructure, enabling organizations to pursue innovation without fear of unexpected downtime.


Legrand UPS Solutions


Legrand offers a robust and extensive range of UPS technologies engineered to support demanding business environments. Here are some examples from the Legrand portfolio:


Keor HPE – Conventional Three-Phase UPS

  • On-line double-conversion system with PWM high-frequency design
  • Available in N+X configurations for increased resilience
  • Compact form factor ideal for industrial and data center applications
  • Provides high efficiency with low total cost of ownership


Keor FLEX – Modular High-Power UPS

  • Scalable up to 4.8 MW through hot-swappable 100 kW modules
  • Built with Silicon Carbide components for efficiency up to 98.4%
  • Supports Lithium-Ion batteries, predictive diagnostics, and Smart Grid integration
  • Designed for sustainability and reduced carbon footprint


When considering Lithium-Ion upgrades, it's important to account for their unique charge profiles and thermal management needs, which may require system-level adjustments. These solutions not only reduce maintenance but also contribute to energy savings and greener operations.


Both systems deliver high-performance protection with space-saving footprints and simplified maintenance, ensuring continuity for mission-critical services. Legrand’s portfolio spans from entry-level solutions to large-scale enterprise systems, giving businesses the flexibility to select the right fit for their operations.
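Modular scaling of the kind Keor FLEX offers can be sketched as a module-count calculation. The 100 kW module size comes from the description above; the helper itself and the N+X arithmetic are an illustrative assumption, not a Legrand sizing tool:

```python
# Sketch of N+X module sizing for a modular UPS built from 100 kW blocks
# (module size from the Keor FLEX description; helper is illustrative).
import math

MODULE_KW = 100

def modules_needed(load_kw: float, redundancy: int = 1) -> int:
    """Modules to carry the load, plus `redundancy` spare modules (N+X)."""
    return math.ceil(load_kw / MODULE_KW) + redundancy

print(modules_needed(850))      # 9 modules for the load, plus 1 spare
```

Because modules are hot-swappable, moving from, say, 10 to 12 modules as load grows does not require taking the protected load offline.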


What Maintenance Is Required for UPS Systems?


Ongoing maintenance is essential to ensure system reliability and extend operational life. Without it, the risk of sudden outages and costly repairs increases significantly.


Recommended practices include:

  • Battery inspection and performance monitoring
  • Firmware updates and system diagnostics
  • Cleaning and airflow management
  • Load testing and runtime verification
  • Monitoring of environmental factors (temperature, humidity, dust, etc.)
  • Periodic review of redundancy configurations


Modern UPS systems often integrate with SNMP or building management system platforms, providing real-time alerts, remote diagnostics, and performance analytics that support proactive maintenance and faster fault resolution. AI-enabled predictive monitoring is also emerging, allowing operators to prevent issues before they occur.
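The alerting side of such integration can be sketched as a simple threshold check over collected telemetry. The field names and limits below are hypothetical, not a real Legrand, SNMP UPS-MIB, or BMS schema:

```python
# Hypothetical monitoring sketch: evaluating UPS telemetry (however it is
# collected, e.g. via SNMP or a BMS) against simple alert bands.
# Field names and limits are illustrative assumptions.

ALERT_LIMITS = {
    "battery_temp_c": (15.0, 30.0),        # VRLA life degrades in heat
    "load_percent": (0.0, 80.0),           # keep headroom for redundancy
    "battery_charge_percent": (90.0, 100.0),
}

def check_telemetry(sample: dict) -> list:
    """Return alert messages for any reading outside its allowed band."""
    alerts = []
    for key, (low, high) in ALERT_LIMITS.items():
        value = sample.get(key)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{key}={value} outside [{low}, {high}]")
    return alerts

print(check_telemetry({"battery_temp_c": 34.0, "load_percent": 62.0,
                       "battery_charge_percent": 97.0}))
```

In practice such checks run in the monitoring platform rather than ad-hoc scripts, feeding the real-time alerts and predictive analytics described above.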


Scheduled preventive maintenance reduces the risk of unexpected failures and supports regulatory compliance in sensitive industries. Many organizations also adopt service contracts to guarantee response times and replacement parts availability.


Frequently Asked Questions


How long can a UPS provide backup power?


Typical runtime varies from 5 to 15 minutes depending on system size, battery type, and load. For extended runtimes, external battery cabinets or generator integration is recommended. In mission-critical industries, runtime planning is a cornerstone of business continuity strategies.


How often do UPS batteries need to be replaced?


VRLA (Valve-Regulated Lead-Acid) batteries typically last around 3–5 years, while Pure Lead Acid batteries can offer a slightly longer lifespan of approximately 5–8 years. Lithium-Ion batteries provide an even greater service life, often lasting 8–12 years. With the right predictive maintenance plan, battery lifetime can be significantly extended, maximizing performance and reducing unexpected failures. Advanced monitoring tools help track battery health and predict wear, enabling proactive maintenance before issues occur.
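Why battery temperature matters so much for these lifespans can be shown with the common rule of thumb that VRLA battery life roughly halves for every 10 °C of sustained operation above 25 °C. The sketch below is a back-of-envelope illustration of that rule; actual derating varies by chemistry and vendor data:

```python
# Back-of-envelope sketch of the common rule of thumb that VRLA battery
# life roughly halves per 10 C of sustained operation above 25 C.
# Illustrative only; consult battery vendor derating curves.

def derated_life_years(rated_years: float, ambient_c: float,
                       reference_c: float = 25.0) -> float:
    """Expected service life after temperature derating."""
    excess = max(0.0, ambient_c - reference_c)
    return rated_years * 0.5 ** (excess / 10.0)

print(f"{derated_life_years(5, 25):.1f} years at 25 C")  # full rated life
print(f"{derated_life_years(5, 35):.1f} years at 35 C")  # roughly halved
```

This is one reason environmental monitoring of the battery room is a standard maintenance practice.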


Can UPS systems be scaled for future growth?


Yes. Modular UPS designs like Keor FLEX allow businesses to expand capacity without replacing the entire system, supporting right-sizing from day one. This ensures capital efficiency and scalability, adapting to evolving operational demands.

