Microsoft says Windows ML is now production-ready, just in time for the next generation of AI PC processors. It's a significant step for local AI and the chips that power it.
Seven years after introducing the technology at Build, Microsoft quietly announced on Tuesday that Windows ML is now generally available.
Windows ML should address a recurring disconnect: consumers are interested in AI, but they don't care where it runs. Many people equate "AI" with ChatGPT, a cloud-based service. Meanwhile, AMD, Intel, and Qualcomm have all invested heavily in local AI, shipping powerful NPUs capable of dozens of TOPS (trillions of operations per second).
The catch, analysts note, is that applications have had to be coded specifically for NPUs. Windows ML changes that. As Microsoft describes it, "Windows ML is the integrated AI inferencing runtime built for on-device model inference and simplified model dependency management across CPUs, GPUs, and NPUs."
Microsoft continues, “With the ability to run models locally, developers can create AI experiences that are more cost-effective, private, and responsive, reaching users across the widest range of Windows hardware.”
In essence, Windows ML is designed to enumerate a computer's available silicon and route each AI task to the CPU, GPU, or NPU best suited for it. It's unclear whether the developer or Windows ML decides how to weigh raw GPU power against NPU efficiency, but that matters less than the outcome: everyone benefits if software can fully use the hardware that's actually present.
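To make the routing idea concrete, here is a minimal conceptual sketch, not the Windows ML API. The device names, TOPS figures, and the `pick_device` policy are all made up for illustration; they show how a runtime might trade raw throughput against power efficiency when choosing a processor for an inference workload:

```python
from dataclasses import dataclass

@dataclass
class Device:
    kind: str      # "CPU", "GPU", or "NPU"
    tops: float    # rough throughput, trillions of ops/sec
    watts: float   # rough power draw under load

def pick_device(devices, prefer="efficiency"):
    """Choose the processor best suited for an inference task.

    prefer="efficiency" favors TOPS per watt (NPU-style workloads);
    prefer="performance" favors raw TOPS (GPU-style workloads).
    """
    if prefer == "performance":
        return max(devices, key=lambda d: d.tops)
    return max(devices, key=lambda d: d.tops / d.watts)

# A hypothetical AI PC: modest CPU, fast GPU, efficient NPU.
machine = [
    Device("CPU", tops=2, watts=28),
    Device("GPU", tops=120, watts=150),
    Device("NPU", tops=45, watts=10),
]

print(pick_device(machine).kind)                        # efficiency favors the NPU
print(pick_device(machine, prefer="performance").kind)  # raw speed favors the GPU
```

In a real system the policy would be far richer (model size, quantization support, thermal state, what the developer requested), but the core idea is the same: one runtime abstracts the hardware so applications don't have to be rewritten per chip.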