Arxys Launches New VideoX V4 with Intel 4th Gen Scalable

Intel’s latest 4th Gen Xeon Scalable CPUs

Intel has unveiled its 4th Gen Intel® Xeon® Scalable processors, the Intel® Xeon® CPU Max Series and the Intel® Data Center GPU Max Series. Built to take on and solve computing challenges at scale, Intel’s strategy pairs CPU cores with built-in accelerators optimized for specific workloads, delivering increased performance at higher efficiency for an optimal total cost of ownership. The 4th Gen Intel Xeon Scalable processors are Intel’s most sustainable data center processors ever, with a range of features for managing power and performance that make the best use of CPU resources to achieve key sustainability goals. In addition, the Xeon CPU Max Series and the Data Center GPU Max Series add high-bandwidth memory and maximum compute density to solve the world’s most challenging problems faster.

4th Gen Xeon Scalable processors are built on the Intel 7 manufacturing process and offer improved per-core performance over the prior generation, along with higher density, scaling up to 60 cores per socket. Unlike Alder Lake and Raptor Lake on the consumer side, these processors use Golden Cove “P-Cores” exclusively. The platform supports DDR5 memory, PCIe 5.0 and Compute Express Link (CXL) 1.1 to move high-throughput data.

With the most built-in accelerators of any CPU in the world for key workloads such as AI, analytics, networking, security, storage and high performance computing (HPC), the 4th Gen Intel Xeon Scalable and Intel Max Series families deliver leadership performance through a purpose-built, workload-first approach.

  • 4th Gen Intel Xeon Scalable processors are Intel’s most sustainable data center processors, delivering a range of features for optimizing power and performance, making optimal use of CPU resources to help achieve customers’ sustainability goals.
  • When compared with prior generations, 4th Gen Xeon customers can expect a 2.9x average performance-per-watt efficiency improvement for targeted workloads when utilizing built-in accelerators, up to 70 watts of power savings per CPU in optimized power mode with minimal performance loss for select workloads, and a 52% to 66% lower total cost of ownership (TCO).

Hardware Acceleration

Intel® Advanced Matrix Extensions (Intel® AMX)

With new Intel AMX, AI performance on the CPU expands to cover fine-tuning and the training of small and medium deep learning models. Intel AMX is a built-in accelerator that improves the performance of deep learning training and inference. It is ideal for workloads like natural language processing, recommendation systems and image recognition.
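
In practice, applications typically reach Intel AMX through oneDNN-backed deep learning frameworks rather than hand-written tile code, but it can be useful to confirm the capability is present. The following minimal sketch, not Intel sample code, uses the GCC/Clang <cpuid.h> helper to check the AMX feature flags that Intel documents in CPUID leaf 7, sub-leaf 0.

```c
/* Hedged sketch: report whether the CPU advertises the AMX features.
 * Assumes a GCC- or Clang-compatible compiler on x86-64. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, sub-leaf 0 carries the AMX feature bits in EDX. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }

    printf("AMX-TILE: %s\n", (edx & (1u << 24)) ? "yes" : "no");
    printf("AMX-BF16: %s\n", (edx & (1u << 22)) ? "yes" : "no");
    printf("AMX-INT8: %s\n", (edx & (1u << 25)) ? "yes" : "no");

    /* On Linux, a process must additionally request the AMX tile state via
     * arch_prctl(ARCH_REQ_XCOMP_PERM, ...) before executing tile instructions;
     * frameworks built on oneDNN handle this on the application's behalf. */
    return 0;
}
```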

Intel® QuickAssist Technology (Intel® QAT)

By offloading encryption, decryption and compression, Intel QAT, now integrated as a built-in accelerator, helps free up processor cores so systems can serve a larger number of clients or use less power. With Intel QAT, 4th Gen Intel Xeon Scalable processors are the highest-performance CPUs that can compress and encrypt in a single data flow.

Intel® Data Streaming Accelerator (Intel® DSA)

Intel DSA drives high performance for storage, networking and data-intensive workloads by improving streaming data movement and transformation operations. Designed to offload the most common data movement tasks that cause overhead in data center-scale deployments, Intel DSA helps speed up data movement across the CPU, memory and caches, as well as all attached memory, storage and network devices.

Intel® Dynamic Load Balancer (Intel® DLB)

Intel DLB helps improve system performance related to handling network data on multicore Intel® Xeon® Scalable processors. It enables the efficient distribution of network processing across multiple CPU cores/threads and dynamically distributes network data across multiple CPU cores for processing as the system load varies. Intel DLB also restores the order of networking data packets processed simultaneously on CPU cores.

Intel® In-Memory Analytics Accelerator (Intel® IAA)

Intel IAA helps run database and analytics workloads faster, with potentially greater power efficiency. This built-in accelerator increases query throughput and decreases the memory footprint for in-memory database and big data analytics workloads. Intel IAA is ideal for in-memory databases, open source databases, and data stores like RocksDB and ClickHouse.

Enhanced Performance

Intel® Advanced Vector Extensions 512 (Intel® AVX-512)

Intel AVX-512 is the latest x86 vector instruction set, with up to two fused multiply-add (FMA) units and other optimizations to accelerate performance for demanding computational tasks, including scientific simulations, financial analytics, and 3D modeling and analysis.
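
As an illustration only, the short C sketch below (an assumption-based example, not Arxys or Intel code) implements a SAXPY-style loop with AVX-512 intrinsics so that each iteration performs a single 512-bit fused multiply-add across 16 floats; the -mavx512f build flag assumes a GCC or Clang toolchain.

```c
/* Hedged sketch: y[i] = a * x[i] + y[i] using AVX-512 FMA.
 * Build (assumption): gcc -O2 -mavx512f saxpy512.c */
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

static void saxpy512(float a, const float *x, float *y, size_t n) {
    __m512 va = _mm512_set1_ps(a);            /* broadcast the scalar a */
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);   /* 16 floats from x */
        __m512 vy = _mm512_loadu_ps(y + i);   /* 16 floats from y */
        vy = _mm512_fmadd_ps(va, vx, vy);     /* one fused multiply-add */
        _mm512_storeu_ps(y + i, vy);
    }
    for (; i < n; i++)                        /* scalar tail */
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[32], y[32];
    for (int i = 0; i < 32; i++) { x[i] = (float)i; y[i] = 1.0f; }
    saxpy512(2.0f, x, y, 32);
    printf("y[31] = %.1f\n", y[31]);          /* expect 63.0 */
    return 0;
}
```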

Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for vRAN

Intel AVX-512 for virtualized radio access network (vRAN) is designed to deliver greater capacity at the same power envelope for vRAN workloads. This helps communications service providers increase their performance per watt to meet critical performance, scaling and energy efficiency requirements.

Intel® Crypto Acceleration

Intel Crypto Acceleration reduces the impact of implementing pervasive data encryption and increases the performance of encryption-sensitive workloads, like secure sockets layer (SSL) web servers, 5G infrastructure and VPNs/firewalls.
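
Workloads generally pick up this acceleration transparently through their crypto libraries. The sketch below is a hedged example rather than vendor sample code: it encrypts a buffer with AES-256-GCM through OpenSSL’s EVP API, and on CPUs that expose VAES and VPCLMULQDQ, recent OpenSSL builds are expected to select the accelerated code paths at runtime with no application changes.

```c
/* Hedged sketch: AES-256-GCM encryption via OpenSSL EVP.
 * Build (assumption): gcc aesgcm.c -lcrypto */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char key[32] = {0};              /* demo-only key and IV */
    unsigned char iv[12]  = {0};
    unsigned char msg[]   = "hello, accelerated crypto";
    unsigned char ct[64], tag[16];
    int len = 0, ctlen = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ct, &len, msg, (int)strlen((char *)msg));
    ctlen = len;
    EVP_EncryptFinal_ex(ctx, ct + len, &len);
    ctlen += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof tag, tag);
    EVP_CIPHER_CTX_free(ctx);

    printf("ciphertext bytes: %d\n", ctlen);
    return 0;
}
```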

Intel® Speed Select Technology (Intel® SST)

Intel SST is designed to grant more active and expansive control over CPU performance. Intel SST enables improved server utilization and reduced qualification costs by allowing customers to configure a single server to match fluctuating workloads. The result is one flexible server with multiple configurations, leading to improved TCO.

Intel® Data Direct I/O Technology (Intel® DDIO)

Intel DDIO helps remove inefficiencies by enabling direct communication between Intel® Ethernet controllers and adapters and host processor cache. Eliminating frequent visits to main memory can help reduce power consumption, provide greater I/O bandwidth scalability and reduce latency.

Increased Security

Intel® Software Guard Extensions (Intel® SGX)

With Intel SGX, organizations can unlock new opportunities for business collaboration and insights, even with sensitive or regulated data. Intel SGX is the most researched, updated and deployed confidential computing technology in data centers on the market today, with the smallest trust boundary. Confidential computing improves the isolation of sensitive data with enhanced hardware-based memory protections. On the Intel Xeon CPU Max Series, Intel SGX is supported in DDR flat mode only.

Intel® Trust Domain Extensions (Intel® TDX)

Intel TDX is a new capability available through select cloud providers in 2023 that offers increased confidentiality at the virtual machine (VM) level, enhancing privacy and control over data. Within an Intel TDX confidential VM, the guest OS and VM applications are isolated from access by the cloud host, hypervisor and other VMs on the platform.
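
For illustration, a guest can verify that it is actually running inside a trust domain. The minimal C sketch below mirrors the check performed by the public Linux TDX guest code, reading CPUID leaf 0x21 and comparing the vendor string; treat the leaf number and string as assumptions drawn from that code rather than from this announcement.

```c
/* Hedged sketch: detect whether this program runs inside an Intel TDX guest. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax = 0, sig[3] = {0};

    /* Leaf 0x21 is only populated with the TDX signature inside a TD guest. */
    if (!__get_cpuid_count(0x21, 0, &eax, &sig[0], &sig[2], &sig[1])) {
        puts("CPUID leaf 0x21 not available");
        return 1;
    }

    /* The signature is reported in EBX, EDX, ECX order. */
    if (memcmp(sig, "IntelTDX    ", 12) == 0)
        puts("running inside an Intel TDX trust domain");
    else
        puts("not a TDX guest");
    return 0;
}
```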

Intel® Control-Flow Enforcement Technology (Intel® CET)

Intel CET provides enhanced hardware-based protections against return-oriented and jump/call-oriented programming attacks, two of the most common software-based attack techniques. Using this technology helps shut down an entire class of system memory attacks that long evaded software-only solutions.
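
Intel CET is enabled at build time rather than in application logic. The sketch below is a hypothetical example showing an ordinary indirect call; compiling it with a CET-aware toolchain (for example, GCC or Clang with -fcf-protection=full) emits ENDBR landing pads for indirect branch tracking and marks the binary for shadow-stack enforcement, which supporting hardware and operating systems then apply at run time.

```c
/* Hedged sketch: nothing CET-specific appears in the source; the protection
 * comes from the build. Build (assumption): gcc -O2 -fcf-protection=full cet_demo.c
 * Check the markings with: readelf -n a.out  (look for IBT and SHSTK). */
#include <stdio.h>

static void handler(void) {            /* indirect-call target: gets an ENDBR pad */
    puts("indirect call landed on a valid target");
}

int main(void) {
    void (*fp)(void) = handler;        /* function pointer, i.e. an indirect call */
    fp();                              /* with SHSTK active, return addresses are
                                          also checked against the shadow stack */
    return 0;
}
```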

Key Advantages over 2nd Gen Scalable CPUs

Sustainability 
The expansive set of built-in accelerators in 4th Gen Xeon delivers platform-level power savings, lessening the need for additional discrete acceleration and helping customers achieve their sustainability goals. Additionally, the new Optimized Power Mode can deliver up to 20% socket power savings with a less than 5% performance impact for selected workloads. New innovations in air and liquid cooling further reduce total data center energy consumption, and 4th Gen Xeon is manufactured with 90% or more renewable electricity at Intel sites with state-of-the-art water reclamation facilities.

Artificial Intelligence 
In AI, compared to the previous generation, 4th Gen Xeon processors achieve up to 10x higher PyTorch real-time inference and training performance with built-in Intel® Advanced Matrix Extensions (Intel® AMX) accelerators. Intel’s 4th Gen Xeon unlocks new levels of performance for inference and training across a wide breadth of AI workloads. The Xeon CPU Max Series expands on these capabilities for natural language processing, with customers seeing up to a 20x speed-up on large language models. With the delivery of Intel’s AI software suite, developers can use their AI tool of choice while increasing productivity and speeding time to AI development. The suite is portable from the workstation, scaling out to the cloud and all the way to the edge, and it has been validated with over 400 machine learning and deep learning AI models across the most common AI use cases in every business segment.
