🔧 First look in Chrome Built-in AI [Early Preview] with Gemini Nano
🔗 Source: dev.to
In the past few weeks, I have been testing the Gemini Nano built into Google Chrome. The goal of this post is to provide some initial feedback on this new feature, which may be officially released in Chrome soon.
A little context
The main goal of integrating Gemini Nano into Google Chrome via a Web API is to promote and facilitate small AI features within web pages, without affecting performance or requiring server requests for simple responses. With this goal in mind, the Chrome team is working to make Gemini Nano available through an API in Chrome. This initial implementation has some limitations: it currently works only on desktop, requires a device with a GPU, and needs some available storage. To learn more about this project, you can view the official documentation here => https://developer.chrome.com/docs/ai/built-in
Note: It’s important to emphasize that this implementation is still in its early stages, and changes or improvements may be coming soon.
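Because the feature is still experimental and behind flags, a page should check that the API surface actually exists before trying to use it. Here is a minimal feature-detection sketch; the `ai.assistant` property names follow the Early Preview docs and may change in later builds:

```javascript
// Feature-detection sketch: returns true only if the Prompt API surface
// from the Early Preview (`ai.assistant`) is exposed on the given global.
// The exact property names are an assumption and may change between builds.
function isPromptApiAvailable(root) {
  return Boolean(root && root.ai && root.ai.assistant);
}

// In a page: if (isPromptApiAvailable(window)) { /* safe to create a session */ }
```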
My test
To build my test I used a web page from my lab repository, built in Next.js. I use this page for a variety of tests, so the project may look a little messy, but it is only for testing. If you want to take a look, check it out at => https://github.com/MichelAraujo/lab-playground/tree/master/my-app-next
For this test I used:
- MacBook Pro M3, 18 GB RAM
- macOS 15.0
- Google Chrome Canary 131.0
My goal here was to build a simple feature in order to understand how this built-in AI in Chrome works. For that, I built an English sentence feedback feature. In summary, it works like this: I enter a sentence in English, and the AI gives me feedback about my sentence and how I can improve it (all using the AI built into Chrome, without making any requests to a server).
A little bit about my prompt config (for this test I used the Prompt API; you can learn more about it in the official docs):
Note: In this built-in AI experiment, we also had access to other APIs, such as the Summarization API, Language Detection, and a few others.
The Prompt initialization example:
let session; // declared outside the function in the original code, so the model session is created only once

const createPromptSession = async () => {
  const { available } = await ai.assistant.capabilities();
  if (available !== "no") {
    if (!session) {
      console.log('## Create session ##');
      session = await ai.assistant.create({
        systemPrompt: "you're an English teacher",
        monitor(m) {
          m.addEventListener("downloadprogress", (e) => {
            console.log(`Downloaded ${e.loaded} of ${e.total} bytes.`);
          });
        }
      });
    }
    return session;
  }
  console.error('# Error - assistant is not available');
  return undefined;
}
The key point here is the context I provide in the prompt, like 'you're an English teacher,' to ensure that my feature delivers accurate feedback on English usage in the results.
Next, I create the prompt and send a string with context about what I want in the response, along with the user input from the page.
const streaming = await clonedSession.promptStreaming(`Give me feedback about the grammar on the following sentence in English. Sentence: ${sentence}`);
return streaming;
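The stream returned by `promptStreaming` can then be consumed with `for await`. In the Early Preview, each chunk carried the whole response generated so far, so keeping only the latest chunk is enough; treat that cumulative behavior as an assumption, since it may change. A consumer sketch that works on any async iterable (including the stream returned above):

```javascript
// Consumes a streaming prompt response. Works on any async iterable,
// including the ReadableStream returned by promptStreaming in Chrome.
// Assumes Early Preview semantics, where each chunk is the full text so far.
async function renderStream(stream, onUpdate) {
  let latest = '';
  for await (const chunk of stream) {
    latest = chunk;     // cumulative chunks: the newest replaces the previous
    onUpdate(latest);   // e.g. write the partial feedback into the page
  }
  return latest;
}

// Usage in the page (the element id is hypothetical):
// const text = await renderStream(streaming, (t) => {
//   document.getElementById('feedback').textContent = t;
// });
```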
Check out some of the feedback the AI gave me with this setup. This is a simple prompt setup, but you can already see the accuracy of the responses:
First, a simple sentence "I want to be a good guitar player!"
Second, I used the same sentence to check for a different result.
Third, a more complex sentence with a small mistake:
I like this feedback!
Fourth, the suggested sentence from the previous answer:
The average quality of the answers (in my opinion) is OK; some have small mistakes or lack context, but in general it is very good! Like I said, my prompt setup can be improved too.
In the code example in the repository, I've included some prompt configuration options that adjust the quality and context of the responses, if you'd like to check them out.
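For reference, the tuning options I mean are `temperature` and `topK`, which the Early Preview accepted in `ai.assistant.create()`, with defaults reported by `ai.assistant.capabilities()`. Treat these exact names as assumptions, since the API is still changing. A small sketch that merges user overrides with the reported defaults:

```javascript
// Builds the options object for ai.assistant.create(), falling back to the
// defaults reported by capabilities(). The option names (temperature, topK,
// defaultTemperature, defaultTopK) follow the Early Preview and may change.
function buildSessionOptions(capabilities, overrides = {}) {
  return {
    temperature: overrides.temperature ?? capabilities.defaultTemperature,
    topK: overrides.topK ?? capabilities.defaultTopK,
  };
}

// In the page (not runnable outside Chrome Canary with the flags enabled):
// const caps = await ai.assistant.capabilities();
// session = await ai.assistant.create(buildSessionOptions(caps, { temperature: 0.2 }));
```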
About performance
Here is the point that surprised me the most: the performance is very good. In all of my tests, I couldn't detect any lag or degradation in any aspect of my page. I ran the tests on a page with a running animation, and while the test executed I saw no drop in rendering performance.
Note: I tried running the same test on an old Windows notebook I have to check the performance, but it didn't have enough storage to run the experimental Chrome Canary with the built-in features installed. =(
For performance tests, I like to show things running live, so I made a quick video with some tests.
For these tests I simulated a 6x CPU slowdown and a Slow 4G connection, and the performance was still very good, as you can see in the video below:
https://youtu.be/DI2pAZ-N8tw
I made some recordings in the DevTools Performance panel to see how the main thread and the GPU thread behave when I run the AI prompts. The result was:
We can see heavy GPU usage, as expected, because the AI execution runs on the GPU.
Next test:
The main thread seems to be running very smoothly, with only small JS/render tasks that don't compromise the execution of other JS tasks.
Conclusion
This is a very early-stage feature, so it will see significant improvements in the future. However, it already looks very promising, and I can't wait to see how web pages will leverage it to create new features for users.
I hope this arrives for mobile too, let's wait and see.
Maybe I will share more tests.
Thanks!