Production Deployment (#20)
pawcoding authored Nov 3, 2024
2 parents d975cba + 46e20df commit e1a538e
Showing 2 changed files with 42 additions and 18 deletions.
src/components/posts/local-ai-prompt-demo.solid.tsx (30 additions, 7 deletions)
```diff
@@ -2,24 +2,30 @@ import { Show, createSignal, type JSX } from "solid-js";
 
 declare global {
   interface Window {
-    ai: {
-      createTextSession: () => Promise<ChromeAISession>;
+    ai?: {
+      languageModel?: ChromeLanguageModel;
+      assistant?: ChromeLanguageModel;
     };
   }
 }
 
-type ChromeAISession = {
+type ChromeLanguageModel = {
+  capabilities: () => Promise<{ available: 'no' | 'after-download' | 'readily' }>
+  create: () => Promise<ChromeAIAssistant>;
+}
+
+type ChromeAIAssistant = {
   promptStreaming: (prompt: string) => ReadableStream<string>;
 };
 
 export default function LocalAiPromptDemo(props: { children?: JSX.Element }) {
-  const isAvailable = !!window.ai;
+  const isAvailable = !!window.ai && (!!window.ai.languageModel || !!window.ai.assistant);
 
   const [prompt, setPrompt] = createSignal("");
   const [response, setResponse] = createSignal("");
   const [error, setError] = createSignal("");
   const [generating, setGenerating] = createSignal(false);
-  let session: ChromeAISession | undefined = undefined;
+  let session: ChromeAIAssistant | undefined = undefined;
 
   async function run() {
     if (generating()) return;
```
```diff
@@ -28,7 +34,25 @@ export default function LocalAiPromptDemo(props: { children?: JSX.Element }) {
 
     try {
       if (!session) {
-        session = await window.ai.createTextSession();
+        let model: ChromeLanguageModel;
+        if (window.ai?.languageModel) {
+          model = window.ai.languageModel;
+        } else if (window.ai?.assistant) {
+          model = window.ai.assistant;
+        } else {
+          throw new Error("no model detected");
+        }
+
+        const capabilities = await model.capabilities();
+        if (capabilities.available === "no") {
+          throw new Error("no model available");
+        } else if (capabilities.available === "after-download") {
+          if (!confirm("The model will be downloaded. This might take a while. Do you want to continue?")) {
+            throw new Error("download request was denied")
+          }
+        }
+
+        session = await model.create();
       }
 
       setResponse("");
```
```diff
@@ -38,7 +62,6 @@ export default function LocalAiPromptDemo(props: { children?: JSX.Element }) {
       const stream = session.promptStreaming(
         prompt() + "\n\nResponse in 3 to 5 sentences.",
       );
-      // @ts-ignore - This actually works so I don't know why it's complaining
       for await (const chunk of stream) {
         setResponse(chunk);
       }
```
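The model-selection and capability-gating logic this commit adds to the component can be sketched in isolation. This is an illustrative sketch, not part of the commit: `pickLanguageModel` and `nextStep` are made-up helper names, and the `ai` parameter stands in for Chrome's experimental `window.ai` object as typed above.

```javascript
// Illustrative helper: given an object shaped like Chrome's experimental
// `window.ai`, pick whichever language-model entry point is exposed,
// mirroring the languageModel -> assistant fallback in the component.
function pickLanguageModel(ai) {
  if (ai?.languageModel) return ai.languageModel;
  if (ai?.assistant) return ai.assistant;
  throw new Error("no model detected");
}

// Map a capabilities().available value onto the three outcomes the
// component handles: create a session, ask before downloading, or bail.
function nextStep(available) {
  switch (available) {
    case "readily":
      return "create-session";
    case "after-download":
      return "ask-user-to-download";
    default:
      return "unavailable";
  }
}
```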
src/content/posts/local-ai-in-the-browser.mdx (12 additions, 11 deletions)
```diff
@@ -3,6 +3,7 @@ title: "Integrating local AI into Rainbow Palette"
 description: "Google announced a local Gemini Nano model inside Chrome that can be used by websites directly in the browser. I tried to integrate this experimental feature into Rainbow Palette to generate color palettes. Let's see how it works and how you can use it yourself!"
 image: "/images/local-ai-in-the-browser.webp"
 pubDate: 2024-07-07T18:30:00
+modDate: 2024-11-03T13:00:00
 tags: ["web development", "rainbow palette", "ai", "llm", "google chrome"]
 ---
 
```
```diff
@@ -140,9 +141,11 @@ To then activate the model inside your browser you need to activate both of the
 - [`#prompt-api-for-gemini-nano`](chrome://flags/#prompt-api-for-gemini-nano) (Enabled)
 - [`#optimization-guide-on-device-model`](chrome://flags/#optimization-guide-on-device-model) (Enabled BypassPerfRequirement)
 
-After that you also need to go to [chrome://components](chrome://components/), search for `Optimization Guide On Device Model` and click on `Check for update`.
+~~After that you also need to go to [chrome://components](chrome://components/), search for `Optimization Guide On Device Model` and click on `Check for update`.
 Maybe you need to restart your browser in between for this to appear or maybe it doesn't appear at all (for me it didn't appear on my Linux machine, but only on my Windows laptop).
-But if you can see it, click on `Check for update` and wait for the model to be downloaded.
+But if you can see it, click on `Check for update` and wait for the model to be downloaded.~~
+
+**November update:** The component is not available anymore, but the model can be downloaded automatically now.
 
 Then you should be ready to go and can start using the model in your browser.
 
```
````diff
@@ -153,11 +156,11 @@ The prompt has to be sent to a session, so we first have to create one.
 This can be done via the `window.ai` object in the browser like this:
 
 ```javascript
-const session = await window.ai.createTextSession();
+const session = await window.ai.languageModel.create();
 ```
 
 This creates a new text session with some default options.
-You can also pass some options to the `createTextSession` function to customize the session, but we will not do this for now.
+You can also pass some options to the `create` function to customize the session, but we will not do this for now.
 
 But now that we have a session, we can start sending prompts to it:
 
````
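The response to such a prompt arrives as a `ReadableStream`. A runnable sketch of how the component consumes it, using a stand-in stream since `window.ai` only exists in Chrome: in this early version of the Prompt API each chunk carried the *entire* response so far (cumulative), which is why the component replaces the displayed text with each chunk instead of appending. `fakePromptStream` is an illustrative stand-in for what `session.promptStreaming()` returns.

```javascript
// Stand-in for session.promptStreaming(): a stream of cumulative chunks.
function fakePromptStream(chunks) {
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(chunk);
      controller.close();
    },
  });
}

// Consume the stream the same way the component does: each cumulative
// chunk overwrites the previous one, so the last chunk is the full answer.
async function readFullResponse(stream) {
  let latest = "";
  for await (const chunk of stream) {
    latest = chunk;
  }
  return latest;
}
```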
```diff
@@ -193,7 +196,7 @@ It's not the fastest model and depending on your hardware and the prompt it may
 But it's fun to play around with it.
 Since the model is running locally on your device, you can also use it offline!
 
-Until now, the only sort of documentation I found for this API is this [type declaration file](https://github.com/jeasonstudio/chrome-ai/blob/02defd3c2eb0e85c0770114b3e30ab29184cbe71/src/global.d.ts) used in the [`chrome-ai`](https://www.npmjs.com/package/chrome-ai) npm package.
+Until now, the only sort of documentation I found for this API is this [type declaration file](https://github.com/jeasonstudio/chrome-ai/blob/main/src/global.d.ts) used in the [`chrome-ai`](https://www.npmjs.com/package/chrome-ai) npm package.
 This package is a wrapper around the API to make it easier to use.
 This also allows you to easily integrate the model with the [`ai`](https://www.npmjs.com/package/ai) package, which is a more general wrapper around different AI models.
 For now though I'll stick with the raw API.
```
````diff
@@ -225,17 +228,15 @@ async function createSession(): Promise<ChromeAISession> {
   /*
    * Some other checks
    */
 
-  const canCreateTextSession = await window.ai.canCreateTextSession();
-  if (canCreateTextSession === "readily") {
-    const options = await window.ai.defaultTextSessionOptions();
-    return await window.ai.createTextSession({
-      ...options,
+  const capabilities = await window.ai.languageModel.capabilities();
+  if (capabilities.available === "readily") {
+    return await window.ai.languageModel.create({
       temperature: 0.6,
     });
   }
 
   /*
-   * Check for generic session as a fallback or throw error
+   * Some fallback stuff
    */
 }
 ```
````
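The capability check in the updated `createSession` can be factored so the browser globals are injected, which makes the flow runnable and testable outside Chrome. This is an illustrative sketch under assumptions: `languageModel` stands in for `window.ai.languageModel` and `confirmDownload` for the `confirm()` dialog; neither parameter name is part of the real API.

```javascript
// Dependency-injected sketch of the createSession flow: check whether the
// model is available, offer the download if needed, then create a session.
async function createSession(languageModel, confirmDownload) {
  const capabilities = await languageModel.capabilities();

  if (capabilities.available === "no") {
    throw new Error("no model available");
  }

  if (capabilities.available === "after-download") {
    // The model has to be fetched first; let the user opt out.
    if (!confirmDownload()) {
      throw new Error("download request was denied");
    }
  }

  return languageModel.create({ temperature: 0.6 });
}
```

Injecting the globals keeps the decision logic identical to the component's while letting unit tests supply mock models and a scripted confirmation dialog.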