Microsoft Edge might run Phi3 Mini AI model locally on Windows 11 and Windows 10



Microsoft appears to be testing a new feature that could allow the Edge browser to locally run a small language model called “Phi3 Mini”. This means you’ll be able to use Microsoft Edge to play with one of Microsoft’s small language models directly on Windows 11 (and Windows 10) PCs. This is according to new experimental flags spotted in Edge Canary.

As noticed by our reader Leo on X and confirmed by Windows Latest, there’s a new flag called “Prompt API for Phi3 mini”. Microsoft describes it as a way to turn on an “exploratory Prompt API” that lets you interact with a “built-in large language model” (Phi3 Mini in Edge).

Phi3 mini model in Microsoft Edge | Image Courtesy: WindowsLatest.com

“Enables the exploratory Prompt API, allowing you to send natural language instructions to a built-in large language model (Phi3 Mini in Edge). Exploratory APIs are designed for local prototyping to help discover potential use cases, and may never launch. These explorations will inform the built-in AI roadmap,” Microsoft confirmed the experiment via the Edge flags menu.

Is Edge really going to locally host an AI model?

What makes us think the model could be locally hosted?

The use of the term “exploratory” suggests that Edge’s small language model integration is still being worked on and is likely in the very early days of development.

Microsoft doesn’t appear to be hosting a full-fledged large language model locally within Edge, which wouldn’t be practical. It’s possible the company is trying to use a small language model to power new features in the browser.

“This API is primarily intended for natural language processing tasks such as summarizing, classifying, or rephrasing text. It is NOT suitable for use cases that require factual accuracy (e.g. answering knowledge questions),” Microsoft noted in the experimental flag description.

Microsoft is testing how it works directly through the Edge browser. The API is designed for simple tasks like summarizing or rephrasing text, and the company warns that it is not suitable for use cases that require factual accuracy, such as answering knowledge questions.

But how could Microsoft integrate local AI into Edge? For example, you could select a few hundred words of text, right-click, and then use a built-in Edge AI option to summarize or rephrase the text locally.
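To picture what such a Prompt API might look like to a web page, here’s a minimal, hypothetical sketch. The exact shape of Edge’s exploratory API is not public; the names below (`LanguageModel`, `create`, `prompt`, `destroy`) are assumptions modeled loosely on Chrome’s experimental built-in AI Prompt API and may differ from whatever Edge ships.

```javascript
// Hypothetical sketch — Edge's exploratory Prompt API surface is not public.
// LanguageModel, create(), prompt(), and destroy() are assumed names.
async function summarizeLocally(text) {
  // Feature-detect first: exploratory APIs can be absent, renamed, or removed.
  if (typeof LanguageModel === "undefined") {
    return null; // No built-in model available in this browser/build.
  }
  const session = await LanguageModel.create();
  try {
    // Send a natural-language instruction, as the flag description suggests.
    return await session.prompt(`Summarize the following text:\n\n${text}`);
  } finally {
    session.destroy();
  }
}
```

The feature-detection guard matters because, as Microsoft’s own description notes, exploratory APIs “may never launch” — any code written against them should degrade gracefully.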

You can already do that via Copilot, but when data is processed locally, responses are typically faster and more private.

Additionally, Microsoft mentions “local prototyping,” which is another sign that the company is experimenting with hosting the model locally. The term “built-in large language model” also suggests that the model is integrated into the Edge browser itself.

Why would Microsoft use the term “built-in” if that was not the case?

Gemini can also be used locally in Chrome… sort of

There are also ways to run Google Gemini Nano (a small language model) directly in Google Chrome, and you’ll be surprised how insanely fast it runs if you watch the video shared by Morten Just on X.

What’s particularly interesting is that these locally hosted models work entirely offline, so you don’t have to worry about your data leaving the device or a slow internet connection.

Of course, running AI models locally in Edge is an idea that requires a lot of work, and one can hope the integration won’t take up too many system resources.

In addition to local AI, Edge also wants to reduce clutter and integrate Windows 11’s energy saver to reduce power consumption.



