Revolutionising AI Inference and Fine-Tuning
I/ONX High Performance Compute has unveiled Symphony SixtyFour, a new platform that promises to cut costs by 50% across AI inference and fine-tuning. The platform was announced on 22 April in Las Vegas.
Symphony SixtyFour supports up to 64 accelerators on a single node, significantly reducing the infrastructure overhead associated with traditional AI systems. This consolidation helps eliminate the ‘Host Tax’, cutting power usage by up to 30 kW per rack while also reducing the cost of rack-scale deployments by up to 70%.
According to I/ONX CEO Justyn Hornor, “Enterprise AI infrastructure is entering a new phase of maturity. With Symphony SixtyFour, we’ve reimagined the stack to be more fluid and fit for purpose, allowing organisations to master massive-scale inference while finally eliminating the unnecessary infrastructure waste that has hindered ROI.”
Key Features of Symphony SixtyFour
The Symphony SixtyFour platform maximises efficiency by collapsing multiple infrastructure nodes into one, removing up to 30 kW of wasted support power. It delivers zero-hop, near-deterministic performance by housing all 64 accelerators within a single operating system instance, eliminating inter-node network latency.
The platform is designed for heterogeneous flexibility, supporting a mix of high-end GPUs and low-power co-processors. This vendor-neutral setup allows enterprises to adapt to shifting market dynamics and future-proof their infrastructure.
Symphony SixtyFour also simplifies management tasks, reducing operational costs and collapsing the ‘Software Tax’. Enterprises can save up to $500,000 annually on software and licensing fees per cluster.
With inference now accounting for 90% of enterprise AI workloads, Symphony SixtyFour offers significant CapEx and OpEx reductions, addressing the limitations of deploying inference on hardware platforms designed for training.
In training comparisons, the I/ONX system recovers the 30 kW ‘Host Tax’ typically wasted on redundant CPUs, memory, and support hardware in multi-node clusters. Ongoing support tasks are also simplified.
For production-scale inference on alternative accelerators, the platform is transformative, drawing one-quarter the power of a traditional 64-device cluster. For inference-only deployments, this eliminates the need for liquid cooling entirely.
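To put the power claims in perspective, a back-of-envelope sketch can translate them into an annual electricity figure. The calculation below uses the article's stated ratio (one-quarter the draw of a traditional 64-device cluster); the traditional cluster's draw and the electricity price are illustrative assumptions, not figures published by I/ONX.

```python
# Back-of-envelope estimate of annual electricity savings from the
# article's "one-quarter the power" claim. The 40 kW baseline and
# $0.12/kWh price are assumptions for illustration only.

HOURS_PER_YEAR = 24 * 365        # 8,760 hours of continuous operation
PRICE_PER_KWH = 0.12             # assumed electricity price, $/kWh

traditional_kw = 40.0            # assumed draw of a 64-device multi-node cluster
symphony_kw = traditional_kw / 4 # article: one-quarter the power

saved_kw = traditional_kw - symphony_kw
annual_savings = saved_kw * HOURS_PER_YEAR * PRICE_PER_KWH

print(f"Power saved per cluster: {saved_kw:.0f} kW")
print(f"Estimated annual electricity savings: ${annual_savings:,.0f}")
```

Under these assumptions the platform saves 30 kW per cluster, in line with the 30 kW ‘Host Tax’ figure quoted above; actual savings will vary with the baseline cluster's draw and local electricity rates.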
Symphony SixtyFour is now available, providing organisations with an opportunity to enhance their AI infrastructure efficiency. I/ONX continues to lead the charge in delivering high-density solutions that unlock the full economic potential of AI technologies.