LinkedIn: The Open19 standard defines four brick form factors:
- Brick (½ wide 1RU)
- Double High Half Width (2RU)
- Double Wide Brick (1RU)
- Double High Brick (2RU)
All bricks scale power and data linearly with size, meaning the bigger the brick, the more power and data available to it. The baseline brick starts at 200W in an unmanaged system and is capped at a maximum of 400W in a managed system. Each brick also has a day-one 50GE network interface, with a cable capable of up to 200G.
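The linear scaling described above can be sketched as a small budget table. The baseline figures come from the text; the per-form-factor multipliers are an assumption based on "linear growth" with brick size:

```python
# Sketch of Open19 per-brick power and data budgets, scaled linearly
# from the half-width 1RU baseline described in the text.
# The multipliers are an assumption, not quoted specifications.

BASELINE = {
    "power_unmanaged_w": 200,   # baseline brick, unmanaged system
    "power_managed_w": 400,     # managed-system maximum
    "network_gbps": 50,         # day-one 50GE; cable capable of 200G
}

FORM_FACTOR_MULTIPLIER = {
    "brick (half-width 1RU)": 1,
    "double-wide brick (1RU)": 2,
    "double-high half-width (2RU)": 2,
    "double-high brick (2RU)": 4,
}

for name, mult in FORM_FACTOR_MULTIPLIER.items():
    budget = {k: v * mult for k, v in BASELINE.items()}
    print(f"{name}: {budget}")
```

Under these assumptions a managed double-high brick would top out at 1,600W, which matches the 2RU figure quoted later in the article.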
All of the Open19 servers are self-sustained, from EMI to safety to cooling. This means that even in an environment with no external assistance, a server will operate properly between 10°C and 40°C while remaining fully EMI-contained and safe.
LinkedIn: The Open19 cages come in two sizes: 12RU and 8RU. They are entirely passive, and every 2RU can be converted to be half-width or full-width. The cages are straightforward, cost-effective, and foundational to the setup of the standard form factors for the Open19 technology.
LinkedIn: The first generation of the Open19 cable system — power cable and data cable — has been designed to optimize for ease of use and to remain future-proof for the next three to five years.
The Open19 data cable has been optimized for speed and density.
Each server is connected with four bidirectional channels rated at 50G PSM4 on day one, which enables up to 200G per half-width server. Since the current switch has 3.2T of capacity, we enable only a 50G connection per server (growing linearly with brick size) so that one ToR switch can support 48 servers. Initially we use two 25G channels for data, with the other two channels carrying optional 1GE OOB network connectivity and an optional console connection.
In the initial configuration, the cable will support 100G per half-width server with an option to move to 200G when needed.
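The channel arithmetic above can be checked in a few lines. All figures are taken directly from the text; nothing here is a measured value:

```python
# Sketch of the Open19 data-cable channel arithmetic from the text:
# four bidirectional channels at 50G PSM4 give the 200G cable ceiling,
# while day one uses two 25G channels for data.

CHANNELS = 4          # bidirectional channels per half-width server
CHANNEL_RATE_G = 50   # 50G PSM4 rating per channel

cable_max_g = CHANNELS * CHANNEL_RATE_G   # 200G cable capability
initial_g = 2 * 25                        # two 25G data channels on day one
# The remaining two channels carry the optional 1GE OOB link and console.

print(cable_max_g, initial_g)   # 200 50
```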
LinkedIn: The Open19 power shelf is a combination of open standard form factor, connectors, external mechanical configuration, and management CPUs, combined with a proprietary smart e-fusing system and off-the-shelf power modules.
It is universal and can support most AC and DC feed configurations. It was designed with a standard power shelf input connector and a specific whip cable per input standard, while the power modules handle the AC and DC inputs. The shelf generates 12V output—yes, 12V, not 48V—and feeds the server motherboards directly.
For a 3+3 configuration, the power shelf can handle 9.6kW of total output power. For a 5+1 configuration, the shelf rating jumps to 15.5kW but loses the A/B feed redundancy.
All six module outputs are shared, so any module configuration is possible based on the user's needs and the level of redundancy they expect from the system. The power shelf has a per-server protection and monitoring function based on a dual Linux-based BMC module. It is multi-sourced, with the limitation that a supplier-specific shelf accepts only that supplier's power modules.
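The two shelf configurations can be modeled as follows. The total output ratings are the quoted figures; the per-module loading is derived here purely for illustration and is not an official specification:

```python
# Sketch of the two Open19 power-shelf module configurations from the text.
# total_kw values are quoted ratings; the per-module figure is an inference.

SHELF_SLOTS = 6  # six power-module slots with shared outputs

configs = {
    "3+3": {"active": 3, "redundant": 3, "total_kw": 9.6,  "ab_redundant": True},
    "5+1": {"active": 5, "redundant": 1, "total_kw": 15.5, "ab_redundant": False},
}

for name, c in configs.items():
    assert c["active"] + c["redundant"] == SHELF_SLOTS
    per_module_kw = c["total_kw"] / c["active"]   # implied module loading
    print(f"{name}: {per_module_kw:.2f} kW per active module, "
          f"A/B feed redundancy: {c['ab_redundant']}")
```

The trade-off is visible directly: 5+1 buys roughly 60% more output at the cost of losing the A/B feed redundancy.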
LinkedIn: The Open19 platform defines a 3.2T switch to terminate the special data cable and creates the blind-mate connectivity for all servers.
The Open19 switch is optimized for cost-effectiveness (it contains no power supplies) and combines the data path and OOB switching functions. The two functions remain separate from a management perspective and independent of a power supply, but they share the same chassis, aggregating the data path and the OOB function into one unit.
Like every other part of the Open19 platform, we have multiple sources for the Open19 switch.
The switch has the following features:
- 3.2T Switch
- Dual switch: data path and management (OOB)
- 50G per server data path
- 1G per server management (optional)
- Console port per server (optional)
- 12V input (no power supplies)
- Up to 8x100G uplinks or ports for non-Open19 gear
- Broadwell-DE CPU running ICOS and SONiC
- BMC running OpenBMC code
- LinkedIn white-box design (open sourced)
- Cost-optimized
This is a first-generation switch for Open19. The second-generation switch will follow with a 6.4T capacity to double the per-server bandwidth.
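The gen-1 numbers above add up neatly: 48 servers at 50G each plus the 8x100G uplinks exactly fill the 3.2T capacity. Treating the uplinks as part of the 3.2T budget is an inference from the listed figures, not a quoted statement:

```python
# Sketch of the ToR bandwidth budget implied by the gen-1 figures:
# 48 servers x 50G + 8 uplinks x 100G = 3,200 Gbps = 3.2T.

def tor_budget_g(servers, per_server_g, uplinks, uplink_g):
    """Total switch bandwidth consumed, in Gbps."""
    return servers * per_server_g + uplinks * uplink_g

gen1 = tor_budget_g(48, 50, 8, 100)
assert gen1 == 3200   # matches the 3.2T first-generation switch
print(gen1)           # 3200
```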
The design of the power cable takes into consideration a maximum configuration for a typical data center: a 19.2kW power feed accommodating two leaf zones. Each 9.6kW zone provides one level of power — a managed system of 200W per half-width 1RU server. We selected power connectors and pins that can handle 400W (35A at 12V) per server, which lets a power-managed system push each half-width server to 400W. As described above, that means 800W for a double-size brick and 1,600W for a 2RU system.
The power cable terminates into a high-density standard power connector that aggregates 12 servers. The connector leverages the same connectivity technology and mirrors the server side. The cable and connectors are all off-the-shelf with an open specification for any supplier who would like to produce it.
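The connector sizing above is a simple Ohm's-law check. The voltage, pin rating, and managed maximum all come from the text:

```python
# Sketch of the power-connector sizing math from the text: 35A pins at
# 12V give each half-width server headroom for the 400W managed maximum.

VOLTAGE = 12.0       # shelf output, volts
PIN_RATING_A = 35.0  # selected connector/pin rating per server, amps
MANAGED_MAX_W = 400  # per half-width server in a managed system, watts

pin_capacity_w = VOLTAGE * PIN_RATING_A   # 420W available per server
draw_a = MANAGED_MAX_W / VOLTAGE          # ~33.3A at the managed maximum

assert draw_a <= PIN_RATING_A             # the sizing holds with margin
print(pin_capacity_w, round(draw_a, 1))   # 420.0 33.3
```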
