Akamai has announced the launch of Akamai Cloud Inference, a new solution that gives developers tools to build and run AI applications at the edge. According to Akamai, bringing data workloads closer to end users with this tool can deliver 3x higher throughput and reduce latency by up to 2.5x.

“Training an LLM is like creating a map, requiring you to gather data, analyze terrain, and plot routes,” said Adam Karon, chief operating officer and general manager of the Cloud Technology Group at Akamai. “It’s slow and resource-intensive, but once built, it’s highly useful. AI inference is like using a GPS, instantly applying that knowledge, recalculating in real time, and adapting to changes to get you where you need to go. Inference is the next frontier for AI.”

Akamai Cloud Inference offers a variety of compute types, from classic CPUs to GPUs to tailored ASIC VPUs. It offers integrations with Nvidia’s AI ecosystem, leveraging technologies such as Triton, TAO Toolkit, TensorRT, a...
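To make the inference side concrete: Nvidia’s Triton Inference Server, one of the technologies named above, exposes models over the open KServe v2 HTTP protocol. Below is a minimal sketch of what a client request to a Triton-style endpoint looks like. The endpoint URL, model name, and tensor names are hypothetical placeholders, not anything Akamai has published for this service.

```python
import requests

# Hypothetical edge endpoint and model name, used here purely for illustration.
TRITON_URL = "http://edge-inference.example.com:8000"
MODEL_NAME = "resnet50"

# Triton implements the KServe v2 inference protocol over HTTP: each input
# tensor is described by name, shape, and datatype, with data as a flat list.
payload = {
    "inputs": [
        {
            "name": "input__0",   # tensor name depends on the deployed model
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

resp = requests.post(
    f"{TRITON_URL}/v2/models/{MODEL_NAME}/infer",
    json=payload,
    timeout=10,
)
resp.raise_for_status()

# The response carries each output tensor's name, shape, datatype, and data.
for output in resp.json()["outputs"]:
    print(output["name"], output["shape"], output["data"][:5])
```

Because this protocol is plain HTTP and JSON, the same client code works whether the model is served from a central cloud region or from an edge location closer to the user, which is where the latency and throughput gains Akamai cites would come from.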