
Generative AI + Design Workflow Orchestration
Equus and Ravel have partnered to showcase a first-of-its-kind Generative AI + Design workflow for on-prem and remote teams, running on Supermicro’s new liquid-cooled AI SuperWorkstation and managed by RAVEL Orchestrate™. This complete solution is now available to partners and customers. Click below to schedule a demo.
Ravel Smart Assembly: Simplified Management, Simplified Assembly & Deployment, Simplified ROI Visibility, and Simplified Telemetry for Better Decision-Making
Configure
Create virtual workstation software images customized to meet the specific needs of your teams.
Deploy
Deploy your customized cloud and on-prem remote workstations without needing to be a cloud or virtualization guru.
Custom virtual workstation deployment and management made easy
Equus provides Optimized Compute Infrastructure (full stack) for your specific application.
Failing to assess the needs of your software and infrastructure applications (processing power, GPU usage, memory allocation, network bandwidth, and storage) leads to overspending and underperformance. Most off-the-shelf solutions offer either too much or too little computing power for your applications.
Optimizing your full software and hardware stack can deliver significant cost savings while equipping your workforce with the right tools to work efficiently, whether on premises or remote.
With today’s expanding Generative AI and Design landscape, you’ll need a trusted advisor to accelerate your growth while optimizing your expenses. Equus is here to help!

Key Benefits
Transportable
Supermicro’s AI SuperWorkstation with RAVEL Orchestrate is easily deployed at any on-premises location for immediate use.
Multi-Use in a Single Machine
A single machine that works like a fleet of workstations, providing your creative team with a single point of high compute power for heavy Gen AI/Design workloads.

SuperWorkstation SYS-751GE-TNRT
High-performance liquid-cooled tower/5U rackmount AI GPU workstation with dual 4th Gen Intel® Xeon® processors, supporting up to 4 NVIDIA A100 PCIe 80GB GPUs at a noise level under 30 dBA.
- High Performance Computing
- Generative AI
- Engineering/Scientific Research
- AI/Deep Learning Training
Key Features
- Acoustically optimized for quiet operation
- 4th Gen Intel® Xeon® Scalable processor support
- Intel® C741 Chipset
- 6 PCIe 5.0 x16 slots, 2 NVMe M.2 slots
- 8x 2.5″ peripheral drive bays, up to 8x NVMe SSD support
- Closed-loop liquid cooling for CPUs and GPUs
- Supports up to four liquid-cooled GPUs

The Solution Addresses Three Unique Use Cases for the M&E, AEC, PD&M, and Gaming Content Creation Industries
- A generative AI/Design working server (GPU Super Server)
- A clustered environment for Generative AI and Generative Design for SMBs or smaller installations, with the ability to scale and collaborate with other team members
- A core cluster that is ready to scale immediately for medium and larger organizations
These use cases also cover data science and heavy compute workloads for Oil & Gas, Medical, AI training, and more.