Akamai Expands Cloud Services with Dedicated GPUs for Media Workloads

Developer-focused cloud computing infrastructure provider Akamai Technologies Inc. today announced the availability of a new industry-optimized service running on Nvidia Corp.’s graphics processing units and aimed specifically at media workloads.

It uses the Nvidia RTX 4000 Ada generation GPU to give customers in the media industry maximum performance, efficiency and economics, encoding, decoding and processing video faster and at lower cost than traditional virtual machines. According to Akamai, internal benchmarks show that the new service can perform GPU-based encoding tasks up to 25 times faster than traditional central processing unit-based encoding.

This makes the new service ideal for video streaming providers. According to Akamai, the service is intended to give media providers a more scalable and resilient architecture for streaming video by leveraging its highly distributed edge network with integrated content delivery.

According to Akamai, there is a growing need for this type of industry-optimized GPU service. It says the media industry is underserved by today’s cloud computing providers, which have largely focused their extensive but still limited GPU resources on handling artificial intelligence workloads such as training and inference of large language models. But while Nvidia’s GPUs are ideal for AI, they can also be fine-tuned to meet the specific needs and requirements of the media and entertainment industry.

For example, the Nvidia RTX 4000 GPU is uniquely capable of transcoding live video streams faster than real-time, resulting in significantly better streaming performance by reducing buffering and enabling faster playback. That’s because the RTX 4000 GPUs feature Nvidia’s latest NVENC and NVDEC hardware, while also providing additional capacity for concurrent encoding and decoding tasks, allowing them to support higher throughput in video processing tasks.
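As a rough illustration of the kind of GPU-accelerated transcoding described above, the sketch below uses ffmpeg’s NVENC/NVDEC support to keep decode and encode on the GPU. It is a hypothetical example, not Akamai’s own tooling: it assumes an ffmpeg build compiled with NVENC support and an NVENC-capable GPU such as the RTX 4000, and the file names are placeholders.

```shell
# Hypothetical GPU transcode sketch (assumes ffmpeg built with NVENC
# support and an NVENC/NVDEC-capable GPU; input.mp4 is a placeholder).
# -hwaccel cuda                 decode on the GPU via NVDEC
# -hwaccel_output_format cuda   keep decoded frames in GPU memory
# -c:v h264_nvenc               encode with the NVENC hardware encoder
ffmpeg -hwaccel cuda -hwaccel_output_format cuda \
       -i input.mp4 \
       -c:v h264_nvenc -preset p4 -b:v 5M \
       -c:a copy \
       output.mp4
```

Because the frames stay in GPU memory end to end, a pipeline like this can run faster than real time on suitable hardware, which is what enables the reduced buffering and higher concurrent-stream throughput the article describes.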

Other media-focused use cases include virtual reality and augmented reality content that require high-quality, real-time rendering of 3D graphics and multimedia content.

Shawn Michels, vice president of cloud products at Akamai, said media companies need access to reliable, low-latency computing resources to ensure the portability of the workloads they run. “Nvidia’s GPUs offer excellent value when deployed on Akamai’s global edge network,” he said.

Akamai is also well-positioned to deliver its GPU services from hundreds of globally distributed edge locations because its cloud infrastructure leverages its content delivery network. The company started out as a CDN provider and remains a leader in that industry. Following its $900 million acquisition of Linode LLC, it has expanded into providing cloud computing services through its connected cloud infrastructure platform.

Akamai claims to offer the world’s most distributed cloud infrastructure, surpassing rivals such as Amazon Web Services Inc., Microsoft Corp. and Google Cloud, since it can host its services in the hundreds of locations its CDN spans.

Although Akamai says its GPU-based Nvidia RTX 4000 service is intended specifically for media workloads, that doesn’t mean it can’t be used for other tasks as well. For example, the new service can also support generative AI training and inference thanks to the inclusion of 20 gigabytes of GDDR6 memory, providing the extensive capacity that LLMs and their datasets require.

Other workloads it can support include data analysis, video gaming and graphics rendering, as well as high-performance computing tasks such as scientific simulations and calculations, the company said.

“To support a wide range of workloads, you need a wide range of compute instances,” Michels said. “What we are doing with industry-optimized GPUs is one of the many steps we are taking for our customers to increase instance diversity across the computing continuum to power and operate edge-native applications.”
