fix(deps): update dependency openai to v4.80.1 #236
openai debug - [puLL-Merge] - openai/openai-node@v4.78.1..v4.79.4

Diff

diff --git .release-please-manifest.json .release-please-manifest.json
index 3218ab333..b1ab5c7b9 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.78.1"
+ ".": "4.79.4"
}
diff --git CHANGELOG.md CHANGELOG.md
index 320d00140..4254a9b8f 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,73 @@
# Changelog
+## 4.79.4 (2025-01-21)
+
+Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
+
+### Bug Fixes
+
+* **jsr:** correct zod config ([e45fa5f](https://github.com/openai/openai-node/commit/e45fa5f535ca74789636001e60e33edcad4db83c))
+
+
+### Chores
+
+* **internal:** minor restructuring ([#1278](https://github.com/openai/openai-node/issues/1278)) ([58ea92a](https://github.com/openai/openai-node/commit/58ea92a7464a04223f24ba31dbc0f7d0cf99cc19))
+
+
+### Documentation
+
+* update deprecation messages ([#1275](https://github.com/openai/openai-node/issues/1275)) ([1c6599e](https://github.com/openai/openai-node/commit/1c6599e47ef75a71cb309a1e14d97bc97bd036d0))
+
+## 4.79.3 (2025-01-21)
+
+Full Changelog: [v4.79.2...v4.79.3](https://github.com/openai/openai-node/compare/v4.79.2...v4.79.3)
+
+### Bug Fixes
+
+* **jsr:** export zod helpers ([9dc55b6](https://github.com/openai/openai-node/commit/9dc55b62b564ad5ad1d4a60fe520b68235d05296))
+
+## 4.79.2 (2025-01-21)
+
+Full Changelog: [v4.79.1...v4.79.2](https://github.com/openai/openai-node/compare/v4.79.1...v4.79.2)
+
+### Chores
+
+* **internal:** add test ([#1270](https://github.com/openai/openai-node/issues/1270)) ([b7c2d3d](https://github.com/openai/openai-node/commit/b7c2d3d9abd315f1452a578b0fd0d82e6ac4ff60))
+
+
+### Documentation
+
+* **readme:** fix Realtime API example link ([#1272](https://github.com/openai/openai-node/issues/1272)) ([d0653c7](https://github.com/openai/openai-node/commit/d0653c7fef48360d137a7411dfdfb95d477cdbc5))
+
+## 4.79.1 (2025-01-17)
+
+Full Changelog: [v4.79.0...v4.79.1](https://github.com/openai/openai-node/compare/v4.79.0...v4.79.1)
+
+### Bug Fixes
+
+* **realtime:** correct import syntax ([#1267](https://github.com/openai/openai-node/issues/1267)) ([74702a7](https://github.com/openai/openai-node/commit/74702a739f566810d2b6c4e0832cfa17a1d1e272))
+
+## 4.79.0 (2025-01-17)
+
+Full Changelog: [v4.78.1...v4.79.0](https://github.com/openai/openai-node/compare/v4.78.1...v4.79.0)
+
+### Features
+
+* **client:** add Realtime API support ([#1266](https://github.com/openai/openai-node/issues/1266)) ([7160ebe](https://github.com/openai/openai-node/commit/7160ebe647769fbf48a600c9961d1a6f86dc9622))
+
+
+### Bug Fixes
+
+* **logs/azure:** redact sensitive header when DEBUG is set ([#1218](https://github.com/openai/openai-node/issues/1218)) ([6a72fd7](https://github.com/openai/openai-node/commit/6a72fd736733db19504a829bf203b39d5b9e3644))
+
+
+### Chores
+
+* fix streaming ([379c743](https://github.com/openai/openai-node/commit/379c7435ed5d508458e9cdc22386039b84fcec5e))
+* **internal:** streaming refactors ([#1261](https://github.com/openai/openai-node/issues/1261)) ([dd4af93](https://github.com/openai/openai-node/commit/dd4af939792583854a313367c5fe2f98eea2f3c8))
+* **types:** add `| undefined` to client options properties ([#1264](https://github.com/openai/openai-node/issues/1264)) ([5e56979](https://github.com/openai/openai-node/commit/5e569799b9ac8f915b16de90d91d38b568c1edce))
+* **types:** rename vector store chunking strategy ([#1263](https://github.com/openai/openai-node/issues/1263)) ([d31acee](https://github.com/openai/openai-node/commit/d31acee860c80ba945d4e70b956c7ed75f5f849a))
+
## 4.78.1 (2025-01-10)
Full Changelog: [v4.78.0...v4.78.1](https://github.com/openai/openai-node/compare/v4.78.0...v4.78.1)
diff --git README.md README.md
index 3039857a1..3bd386e99 100644
--- README.md
+++ README.md
@@ -83,6 +83,93 @@ main();
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.
+## Realtime API beta
+
+The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a `WebSocket` connection.
+
+The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).
+
+This SDK supports accessing the Realtime API through the [WebSocket API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) or with [ws](https://github.com/websockets/ws).
+
+Basic text-based example with `ws`:
+
+```ts
+// requires `yarn add ws @types/ws`
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+
+const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+// access the underlying `ws.WebSocket` instance
+rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+});
+
+rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+});
+
+rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+});
+
+rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+rt.on('response.text.done', () => console.log());
+
+rt.on('response.done', () => rt.close());
+
+rt.socket.on('close', () => console.log('\nConnection closed!'));
+```
+
+To use the web API `WebSocket` implementation, replace `OpenAIRealtimeWS` with `OpenAIRealtimeWebSocket` and adjust any `rt.socket` access:
+
+```ts
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+
+const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+// ...
+rt.socket.addEventListener('open', () => {
+ // ...
+});
+```
+
+A full example can be found [here](https://github.com/openai/openai-node/blob/master/examples/realtime/websocket.ts).
+
+### Realtime error handling
+
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener, an `unhandled Promise rejection` error will be thrown.
+
+It is **highly recommended** that you register an `error` event listener and handle errors appropriately, as the underlying connection typically remains usable.
+
+```ts
+const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+});
+```
+
### Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
diff --git api.md api.md
index a885628a3..33ab95ef6 100644
--- api.md
+++ api.md
@@ -283,7 +283,7 @@ Types:
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">OtherFileChunkingStrategyObject</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategy</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObject</a></code>
-- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyParam</a></code>
+- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObjectParam</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStore</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStoreDeleted</a></code>
diff --git examples/package.json examples/package.json
index c8a5f7087..b8c34ac45 100644
--- examples/package.json
+++ examples/package.json
@@ -6,14 +6,15 @@
"license": "MIT",
"private": true,
"dependencies": {
+ "@azure/identity": "^4.2.0",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
- "zod-to-json-schema": "^3.21.4",
- "@azure/identity": "^4.2.0"
+ "zod-to-json-schema": "^3.21.4"
},
"devDependencies": {
"@types/body-parser": "^1.19.3",
- "@types/express": "^4.17.19"
+ "@types/express": "^4.17.19",
+ "@types/web": "^0.0.194"
}
}
diff --git a/examples/realtime/websocket.ts b/examples/realtime/websocket.ts
new file mode 100644
index 000000000..0da131bc3
--- /dev/null
+++ examples/realtime/websocket.ts
@@ -0,0 +1,48 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+
+async function main() {
+ const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/realtime/ws.ts b/examples/realtime/ws.ts
new file mode 100644
index 000000000..4bbe85e5d
--- /dev/null
+++ examples/realtime/ws.ts
@@ -0,0 +1,55 @@
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+
+async function main() {
+ const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['foo'] as any,
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git jsr.json jsr.json
index 257faa02d..e6d772116 100644
--- jsr.json
+++ jsr.json
@@ -1,7 +1,14 @@
{
"name": "@openai/openai",
- "version": "4.78.1",
- "exports": "./index.ts",
+ "version": "4.79.4",
+ "exports": {
+ ".": "./index.ts",
+ "./helpers/zod": "./helpers/zod.ts",
+ "./beta/realtime/websocket": "./beta/realtime/websocket.ts"
+ },
+ "imports": {
+ "zod": "npm:zod@3"
+ },
"publish": {
"exclude": [
"!."
diff --git package.json package.json
index ff6ec16bc..d7a5555e5 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
{
"name": "openai",
- "version": "4.78.1",
+ "version": "4.79.4",
"description": "The official TypeScript library for the OpenAI API",
"author": "OpenAI <support@openai.com>",
"types": "dist/index.d.ts",
@@ -36,6 +36,7 @@
"@swc/core": "^1.3.102",
"@swc/jest": "^0.2.29",
"@types/jest": "^29.4.0",
+ "@types/ws": "^8.5.13",
"@typescript-eslint/eslint-plugin": "^6.7.0",
"@typescript-eslint/parser": "^6.7.0",
"eslint": "^8.49.0",
@@ -52,6 +53,7 @@
"tsc-multi": "^1.1.0",
"tsconfig-paths": "^4.0.0",
"typescript": "^4.8.2",
+ "ws": "^8.18.0",
"zod": "^3.23.8"
},
"sideEffects": [
@@ -126,9 +128,13 @@
},
"bin": "./bin/cli",
"peerDependencies": {
+ "ws": "^8.18.0",
"zod": "^3.23.8"
},
"peerDependenciesMeta": {
+ "ws": {
+ "optional": true
+ },
"zod": {
"optional": true
}
diff --git a/src/beta/realtime/index.ts b/src/beta/realtime/index.ts
new file mode 100644
index 000000000..75f0f3088
--- /dev/null
+++ src/beta/realtime/index.ts
@@ -0,0 +1 @@
+export { OpenAIRealtimeError } from './internal-base';
diff --git a/src/beta/realtime/internal-base.ts b/src/beta/realtime/internal-base.ts
new file mode 100644
index 000000000..391d69911
--- /dev/null
+++ src/beta/realtime/internal-base.ts
@@ -0,0 +1,83 @@
+import { RealtimeClientEvent, RealtimeServerEvent, ErrorEvent } from '../../resources/beta/realtime/realtime';
+import { EventEmitter } from '../../lib/EventEmitter';
+import { OpenAIError } from '../../error';
+
+export class OpenAIRealtimeError extends OpenAIError {
+ /**
+ * The error data that the API sent back in an `error` event.
+ */
+ error?: ErrorEvent.Error | undefined;
+
+ /**
+ * The unique ID of the server event.
+ */
+ event_id?: string | undefined;
+
+ constructor(message: string, event: ErrorEvent | null) {
+ super(message);
+
+ this.error = event?.error;
+ this.event_id = event?.event_id;
+ }
+}
+
+type Simplify<T> = { [KeyType in keyof T]: T[KeyType] } & {};
+
+type RealtimeEvents = Simplify<
+ {
+ event: (event: RealtimeServerEvent) => void;
+ error: (error: OpenAIRealtimeError) => void;
+ } & {
+ [EventType in Exclude<RealtimeServerEvent['type'], 'error'>]: (
+ event: Extract<RealtimeServerEvent, { type: EventType }>,
+ ) => unknown;
+ }
+>;
+
+export abstract class OpenAIRealtimeEmitter extends EventEmitter<RealtimeEvents> {
+ /**
+ * Send an event to the API.
+ */
+ abstract send(event: RealtimeClientEvent): void;
+
+ /**
+ * Close the websocket connection.
+ */
+ abstract close(props?: { code: number; reason: string }): void;
+
+ protected _onError(event: null, message: string, cause: any): void;
+ protected _onError(event: ErrorEvent, message?: string | undefined): void;
+ protected _onError(event: ErrorEvent | null, message?: string | undefined, cause?: any): void {
+ message =
+ event?.error ?
+ `${event.error.message} code=${event.error.code} param=${event.error.param} type=${event.error.type} event_id=${event.error.event_id}`
+ : message ?? 'unknown error';
+
+ if (!this._hasListener('error')) {
+ const error = new OpenAIRealtimeError(
+ message +
+ `\n\nTo resolve these unhandled rejection errors you should bind an \`error\` callback, e.g. \`rt.on('error', (error) => ...)\` `,
+ event,
+ );
+ // @ts-ignore
+ error.cause = cause;
+ Promise.reject(error);
+ return;
+ }
+
+ const error = new OpenAIRealtimeError(message, event);
+ // @ts-ignore
+ error.cause = cause;
+
+ this._emit('error', error);
+ }
+}
+
+export function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
+ const path = '/realtime';
+
+ const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
+ url.protocol = 'wss';
+ url.searchParams.set('model', props.model);
+ return url;
+}
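The `buildRealtimeURL` helper added above turns the client's HTTP base URL into a `wss://` endpoint with the model as a query parameter. A standalone sketch of that logic (reproduced here for illustration, outside the SDK):

```typescript
// Standalone copy of the URL construction performed by `buildRealtimeURL`
// in internal-base.ts; not the SDK's public API.
function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
  const path = '/realtime';
  // Avoid a double slash when the base URL already ends with '/'.
  const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
  url.protocol = 'wss'; // http(s) and ws(s) are both "special" schemes, so this swap is allowed
  url.searchParams.set('model', props.model);
  return url;
}

const url = buildRealtimeURL({
  baseURL: 'https://api.openai.com/v1',
  model: 'gpt-4o-realtime-preview-2024-12-17',
});
console.log(url.toString());
// wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17
```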
diff --git a/src/beta/realtime/websocket.ts b/src/beta/realtime/websocket.ts
new file mode 100644
index 000000000..e0853779d
--- /dev/null
+++ src/beta/realtime/websocket.ts
@@ -0,0 +1,97 @@
+import { OpenAI } from '../../index';
+import { OpenAIError } from '../../error';
+import * as Core from '../../core';
+import type { RealtimeClientEvent, RealtimeServerEvent } from '../../resources/beta/realtime/realtime';
+import { OpenAIRealtimeEmitter, buildRealtimeURL } from './internal-base';
+
+interface MessageEvent {
+ data: string;
+}
+
+type _WebSocket =
+ typeof globalThis extends (
+ {
+ WebSocket: infer ws;
+ }
+ ) ?
+ // @ts-ignore
+ InstanceType<ws>
+ : any;
+
+export class OpenAIRealtimeWebSocket extends OpenAIRealtimeEmitter {
+ url: URL;
+ socket: _WebSocket;
+
+ constructor(
+ props: {
+ model: string;
+ dangerouslyAllowBrowser?: boolean;
+ },
+ client?: Pick<OpenAI, 'apiKey' | 'baseURL'>,
+ ) {
+ super();
+
+ const dangerouslyAllowBrowser =
+ props.dangerouslyAllowBrowser ??
+ (client as any)?._options?.dangerouslyAllowBrowser ??
+ (client?.apiKey.startsWith('ek_') ? true : null);
+
+ if (!dangerouslyAllowBrowser && Core.isRunningInBrowser()) {
+ throw new OpenAIError(
+ "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\n\nYou can avoid this error by creating an ephemeral session token:\nhttps://platform.openai.com/docs/api-reference/realtime-sessions\n",
+ );
+ }
+
+ client ??= new OpenAI({ dangerouslyAllowBrowser });
+
+ this.url = buildRealtimeURL({ baseURL: client.baseURL, model: props.model });
+ // @ts-ignore
+ this.socket = new WebSocket(this.url, [
+ 'realtime',
+ `openai-insecure-api-key.${client.apiKey}`,
+ 'openai-beta.realtime-v1',
+ ]);
+
+ this.socket.addEventListener('message', (websocketEvent: MessageEvent) => {
+ const event = (() => {
+ try {
+ return JSON.parse(websocketEvent.data.toString()) as RealtimeServerEvent;
+ } catch (err) {
+ this._onError(null, 'could not parse websocket event', err);
+ return null;
+ }
+ })();
+
+ if (event) {
+ this._emit('event', event);
+
+ if (event.type === 'error') {
+ this._onError(event);
+ } else {
+ // @ts-expect-error TS isn't smart enough to get the relationship right here
+ this._emit(event.type, event);
+ }
+ }
+ });
+
+ this.socket.addEventListener('error', (event: any) => {
+ this._onError(null, event.message, null);
+ });
+ }
+
+ send(event: RealtimeClientEvent) {
+ try {
+ this.socket.send(JSON.stringify(event));
+ } catch (err) {
+ this._onError(null, 'could not send data', err);
+ }
+ }
+
+ close(props?: { code: number; reason: string }) {
+ try {
+ this.socket.close(props?.code ?? 1000, props?.reason ?? 'OK');
+ } catch (err) {
+ this._onError(null, 'could not close the connection', err);
+ }
+ }
+}
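The constructor above resolves `dangerouslyAllowBrowser` through a nullish-coalescing chain: an explicit prop wins, then the client option, then a heuristic that allows browser use for ephemeral (`ek_`-prefixed) session keys. A minimal sketch of that fallback chain (the function name here is illustrative, not SDK API):

```typescript
// Sketch of the fallback chain used by `OpenAIRealtimeWebSocket` to decide
// whether browser usage is allowed.
function resolveAllowBrowser(
  propFlag: boolean | undefined,
  clientFlag: boolean | undefined,
  apiKey: string,
): boolean | null {
  // `??` only falls through on null/undefined, so an explicit `false`
  // at an earlier step wins over the ephemeral-key heuristic.
  return propFlag ?? clientFlag ?? (apiKey.startsWith('ek_') ? true : null);
}

console.log(resolveAllowBrowser(undefined, undefined, 'ek_abc')); // true
console.log(resolveAllowBrowser(false, undefined, 'ek_abc')); // false
console.log(resolveAllowBrowser(undefined, undefined, 'sk-abc')); // null
```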
diff --git a/src/beta/realtime/ws.ts b/src/beta/realtime/ws.ts
new file mode 100644
index 000000000..631a36cd2
--- /dev/null
+++ src/beta/realtime/ws.ts
@@ -0,0 +1,69 @@
+import * as WS from 'ws';
+import { OpenAI } from '../../index';
+import type { RealtimeClientEvent, RealtimeServerEvent } from '../../resources/beta/realtime/realtime';
+import { OpenAIRealtimeEmitter, buildRealtimeURL } from './internal-base';
+
+export class OpenAIRealtimeWS extends OpenAIRealtimeEmitter {
+ url: URL;
+ socket: WS.WebSocket;
+
+ constructor(
+ props: { model: string; options?: WS.ClientOptions | undefined },
+ client?: Pick<OpenAI, 'apiKey' | 'baseURL'>,
+ ) {
+ super();
+ client ??= new OpenAI();
+
+ this.url = buildRealtimeURL({ baseURL: client.baseURL, model: props.model });
+ this.socket = new WS.WebSocket(this.url, {
+ ...props.options,
+ headers: {
+ ...props.options?.headers,
+ Authorization: `Bearer ${client.apiKey}`,
+ 'OpenAI-Beta': 'realtime=v1',
+ },
+ });
+
+ this.socket.on('message', (wsEvent) => {
+ const event = (() => {
+ try {
+ return JSON.parse(wsEvent.toString()) as RealtimeServerEvent;
+ } catch (err) {
+ this._onError(null, 'could not parse websocket event', err);
+ return null;
+ }
+ })();
+
+ if (event) {
+ this._emit('event', event);
+
+ if (event.type === 'error') {
+ this._onError(event);
+ } else {
+ // @ts-expect-error TS isn't smart enough to get the relationship right here
+ this._emit(event.type, event);
+ }
+ }
+ });
+
+ this.socket.on('error', (err) => {
+ this._onError(null, err.message, err);
+ });
+ }
+
+ send(event: RealtimeClientEvent) {
+ try {
+ this.socket.send(JSON.stringify(event));
+ } catch (err) {
+ this._onError(null, 'could not send data', err);
+ }
+ }
+
+ close(props?: { code: number; reason: string }) {
+ try {
+ this.socket.close(props?.code ?? 1000, props?.reason ?? 'OK');
+ } catch (err) {
+ this._onError(null, 'could not close the connection', err);
+ }
+ }
+}
diff --git src/core.ts src/core.ts
index 972cceaec..3d2d029a5 100644
--- src/core.ts
+++ src/core.ts
@@ -1148,9 +1148,43 @@ function applyHeadersMut(targetHeaders: Headers, newHeaders: Headers): void {
}
}
+const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);
+
export function debug(action: string, ...args: any[]) {
if (typeof process !== 'undefined' && process?.env?.['DEBUG'] === 'true') {
- console.log(`OpenAI:DEBUG:${action}`, ...args);
+ const modifiedArgs = args.map((arg) => {
+ if (!arg) {
+ return arg;
+ }
+
+ // Check for sensitive headers in request body 'headers' object
+ if (arg['headers']) {
+ // clone so we don't mutate
+ const modifiedArg = { ...arg, headers: { ...arg['headers'] } };
+
+ for (const header in arg['headers']) {
+ if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
+ modifiedArg['headers'][header] = 'REDACTED';
+ }
+ }
+
+ return modifiedArg;
+ }
+
+ let modifiedArg = null;
+
+ // Check for sensitive headers in headers object
+ for (const header in arg) {
+ if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
+ // avoid making a copy until we need to
+ modifiedArg ??= { ...arg };
+ modifiedArg[header] = 'REDACTED';
+ }
+ }
+
+ return modifiedArg ?? arg;
+ });
+ console.log(`OpenAI:DEBUG:${action}`, ...modifiedArgs);
}
}
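The `debug()` change above redacts `authorization` and `api-key` headers (case-insensitively) before logging, cloning objects so the caller's data is never mutated. The core redaction step, extracted as a standalone helper for illustration (not part of the SDK's public API):

```typescript
// Illustrative sketch of the header redaction added to `debug()` in core.ts.
const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);

function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const out = { ...headers }; // clone so the caller's object isn't mutated
  for (const name of Object.keys(out)) {
    // compare lowercased so 'Authorization', 'AUTHORIZATION', etc. all match
    if (SENSITIVE_HEADERS.has(name.toLowerCase())) {
      out[name] = 'REDACTED';
    }
  }
  return out;
}

console.log(redactHeaders({ Authorization: 'Bearer sk-secret', Accept: 'application/json' }));
// { Authorization: 'REDACTED', Accept: 'application/json' }
```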
diff --git src/index.ts src/index.ts
index 2320850fb..cf6aa89e3 100644
--- src/index.ts
+++ src/index.ts
@@ -137,7 +137,7 @@ export interface ClientOptions {
* Note that request timeouts are retried by default, so in a worst-case scenario you may wait
* much longer than this timeout before the promise succeeds or fails.
*/
- timeout?: number;
+ timeout?: number | undefined;
/**
* An HTTP agent used to manage HTTP(S) connections.
@@ -145,7 +145,7 @@ export interface ClientOptions {
* If not provided, an agent will be constructed by default in the Node.js environment,
* otherwise no agent is used.
*/
- httpAgent?: Agent;
+ httpAgent?: Agent | undefined;
/**
* Specify a custom `fetch` function implementation.
@@ -161,7 +161,7 @@ export interface ClientOptions {
*
* @default 2
*/
- maxRetries?: number;
+ maxRetries?: number | undefined;
/**
* Default headers to include with every request to the API.
@@ -169,7 +169,7 @@ export interface ClientOptions {
* These can be removed in individual requests by explicitly setting the
* header to `undefined` or `null` in request options.
*/
- defaultHeaders?: Core.Headers;
+ defaultHeaders?: Core.Headers | undefined;
/**
* Default query parameters to include with every request to the API.
@@ -177,13 +177,13 @@ export interface ClientOptions {
* These can be removed in individual requests by explicitly setting the
* param to `undefined` in request options.
*/
- defaultQuery?: Core.DefaultQuery;
+ defaultQuery?: Core.DefaultQuery | undefined;
/**
* By default, client-side use of this library is not allowed, as it risks exposing your secret API credentials to attackers.
* Only set this option to `true` if you understand the risks and have appropriate mitigations in place.
*/
- dangerouslyAllowBrowser?: boolean;
+ dangerouslyAllowBrowser?: boolean | undefined;
}
/**
diff --git src/internal/decoders/line.ts src/internal/decoders/line.ts
index 1e0bbf390..34e41d1dc 100644
--- src/internal/decoders/line.ts
+++ src/internal/decoders/line.ts
@@ -1,6 +1,6 @@
import { OpenAIError } from '../../error';
-type Bytes = string | ArrayBuffer | Uint8Array | Buffer | null | undefined;
+export type Bytes = string | ArrayBuffer | Uint8Array | Buffer | null | undefined;
/**
* A re-implementation of httpx's `LineDecoder` in Python that handles incrementally
diff --git a/src/internal/stream-utils.ts b/src/internal/stream-utils.ts
new file mode 100644
index 000000000..37f7793cf
--- /dev/null
+++ src/internal/stream-utils.ts
@@ -0,0 +1,32 @@
+/**
+ * Most browsers don't yet have async iterable support for ReadableStream,
+ * and Node has a very different way of reading bytes from its "ReadableStream".
+ *
+ * This polyfill was pulled from https://github.com/MattiasBuelens/web-streams-polyfill/pull/122#issuecomment-1627354490
+ */
+export function ReadableStreamToAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
+ if (stream[Symbol.asyncIterator]) return stream;
+
+ const reader = stream.getReader();
+ return {
+ async next() {
+ try {
+ const result = await reader.read();
+ if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
+ return result;
+ } catch (e) {
+ reader.releaseLock(); // release lock when stream becomes errored
+ throw e;
+ }
+ },
+ async return() {
+ const cancelPromise = reader.cancel();
+ reader.releaseLock();
+ await cancelPromise;
+ return { done: true, value: undefined };
+ },
+ [Symbol.asyncIterator]() {
+ return this;
+ },
+ };
+}
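The new `stream-utils.ts` polyfill lets the SDK `for await` over a web `ReadableStream` even on runtimes that don't implement async iteration for it. A self-contained usage sketch (assumes a runtime with the `ReadableStream` global, e.g. Node 18+ or a browser):

```typescript
// Self-contained copy of the polyfill above plus a small usage example.
function ReadableStreamToAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
  // If the runtime already supports async iteration, use it directly.
  if (stream[Symbol.asyncIterator]) return stream;

  const reader = stream.getReader();
  return {
    async next() {
      try {
        const result = await reader.read();
        if (result?.done) reader.releaseLock(); // release lock when stream closes
        return result;
      } catch (e) {
        reader.releaseLock(); // release lock when stream errors
        throw e;
      }
    },
    async return() {
      const cancelPromise = reader.cancel();
      reader.releaseLock();
      await cancelPromise;
      return { done: true, value: undefined };
    },
    [Symbol.asyncIterator]() {
      return this;
    },
  };
}

async function demo() {
  // `globalThis` cast avoids requiring DOM lib types at compile time.
  const stream = new (globalThis as any).ReadableStream({
    start(controller: any) {
      controller.enqueue('hello');
      controller.enqueue('world');
      controller.close();
    },
  });

  const chunks: string[] = [];
  for await (const chunk of ReadableStreamToAsyncIterable<string>(stream)) {
    chunks.push(chunk);
  }
  console.log(chunks); // [ 'hello', 'world' ]
}

demo();
```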
diff --git a/src/lib/EventEmitter.ts b/src/lib/EventEmitter.ts
new file mode 100644
index 000000000..9adeebdc3
--- /dev/null
+++ src/lib/EventEmitter.ts
@@ -0,0 +1,98 @@
+type EventListener<Events, EventType extends keyof Events> = Events[EventType];
+
+type EventListeners<Events, EventType extends keyof Events> = Array<{
+ listener: EventListener<Events, EventType>;
+ once?: boolean;
+}>;
+
+export type EventParameters<Events, EventType extends keyof Events> = {
+ [Event in EventType]: EventListener<Events, EventType> extends (...args: infer P) => any ? P : never;
+}[EventType];
+
+export class EventEmitter<EventTypes extends Record<string, (...args: any) => any>> {
+ #listeners: {
+ [Event in keyof EventTypes]?: EventListeners<EventTypes, Event>;
+ } = {};
+
+ /**
+ * Adds the listener function to the end of the listeners array for the event.
+ * No checks are made to see if the listener has already been added. Multiple calls passing
+ * the same combination of event and listener will result in the listener being added, and
+ * called, multiple times.
+ * @returns this, so that calls can be chained
+ */
+ on<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners: EventListeners<EventTypes, Event> =
+ this.#listeners[event] || (this.#listeners[event] = []);
+ listeners.push({ listener });
+ return this;
+ }
+
+ /**
+ * Removes the specified listener from the listener array for the event.
+ * off() will remove, at most, one instance of a listener from the listener array. If any single
+ * listener has been added multiple times to the listener array for the specified event, then
+ * off() must be called multiple times to remove each instance.
+ * @returns this, so that calls can be chained
+ */
+ off<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners = this.#listeners[event];
+ if (!listeners) return this;
+ const index = listeners.findIndex((l) => l.listener === listener);
+ if (index >= 0) listeners.splice(index, 1);
+ return this;
+ }
+
+ /**
+ * Adds a one-time listener function for the event. The next time the event is triggered,
+ * this listener is removed and then invoked.
+ * @returns this, so that calls can be chained
+ */
+ once<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners: EventListeners<EventTypes, Event> =
+ this.#listeners[event] || (this.#listeners[event] = []);
+ listeners.push({ listener, once: true });
+ return this;
+ }
+
+ /**
+ * This is similar to `.once()`, but returns a Promise that resolves the next time
+ * the event is triggered, instead of calling a listener callback.
+ * @returns a Promise that resolves the next time given event is triggered,
+ * or rejects if an error is emitted. (If you request the 'error' event,
+ * returns a promise that resolves with the error).
+ *
+ * Example:
+ *
+ * const message = await stream.emitted('message') // rejects if the stream errors
+ */
+ emitted<Event extends keyof EventTypes>(
+ event: Event,
+ ): Promise<
+ EventParameters<EventTypes, Event> extends [infer Param] ? Param
+ : EventParameters<EventTypes, Event> extends [] ? void
+ : EventParameters<EventTypes, Event>
+ > {
+ return new Promise((resolve, reject) => {
+ // TODO: handle errors
+ this.once(event, resolve as any);
+ });
+ }
+
+ protected _emit<Event extends keyof EventTypes>(
+ this: EventEmitter<EventTypes>,
+ event: Event,
+ ...args: EventParameters<EventTypes, Event>
+ ) {
+ const listeners: EventListeners<EventTypes, Event> | undefined = this.#listeners[event];
+ if (listeners) {
+ this.#listeners[event] = listeners.filter((l) => !l.once) as any;
+ listeners.forEach(({ listener }: any) => listener(...(args as any)));
+ }
+ }
+
+ protected _hasListener(event: keyof EventTypes): boolean {
+ const listeners = this.#listeners[event];
+ return listeners && listeners.length > 0;
+ }
+}
diff --git src/resources/beta/beta.ts src/resources/beta/beta.ts
index ccd043243..df929b2f7 100644
--- src/resources/beta/beta.ts
+++ src/resources/beta/beta.ts
@@ -48,7 +48,7 @@ import {
OtherFileChunkingStrategyObject,
StaticFileChunkingStrategy,
StaticFileChunkingStrategyObject,
- StaticFileChunkingStrategyParam,
+ StaticFileChunkingStrategyObjectParam,
VectorStore,
VectorStoreCreateParams,
VectorStoreDeleted,
@@ -85,7 +85,7 @@ export declare namespace Beta {
type OtherFileChunkingStrategyObject as OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy as StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject as StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam as StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam as StaticFileChunkingStrategyObjectParam,
type VectorStore as VectorStore,
type VectorStoreDeleted as VectorStoreDeleted,
VectorStoresPage as VectorStoresPage,
diff --git src/resources/beta/index.ts src/resources/beta/index.ts
index aa2e52d4c..babca0016 100644
--- src/resources/beta/index.ts
+++ src/resources/beta/index.ts
@@ -46,7 +46,7 @@ export {
type OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam,
type VectorStore,
type VectorStoreDeleted,
type VectorStoreCreateParams,
diff --git src/resources/beta/vector-stores/index.ts src/resources/beta/vector-stores/index.ts
index 89fc0cde0..d587bd160 100644
--- src/resources/beta/vector-stores/index.ts
+++ src/resources/beta/vector-stores/index.ts
@@ -23,7 +23,7 @@ export {
type OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam,
type VectorStore,
type VectorStoreDeleted,
type VectorStoreCreateParams,
diff --git src/resources/beta/vector-stores/vector-stores.ts src/resources/beta/vector-stores/vector-stores.ts
index 35ad8c369..cbff2d562 100644
--- src/resources/beta/vector-stores/vector-stores.ts
+++ src/resources/beta/vector-stores/vector-stores.ts
@@ -116,7 +116,7 @@ export type FileChunkingStrategy = StaticFileChunkingStrategyObject | OtherFileC
* The chunking strategy used to chunk the file(s). If not set, will use the `auto`
* strategy. Only applicable if `file_ids` is non-empty.
*/
-export type FileChunkingStrategyParam = AutoFileChunkingStrategyParam | StaticFileChunkingStrategyParam;
+export type FileChunkingStrategyParam = AutoFileChunkingStrategyParam | StaticFileChunkingStrategyObjectParam;
/**
* This is returned when the chunking strategy is unknown. Typically, this is
@@ -154,7 +154,7 @@ export interface StaticFileChunkingStrategyObject {
type: 'static';
}
-export interface StaticFileChunkingStrategyParam {
+export interface StaticFileChunkingStrategyObjectParam {
static: StaticFileChunkingStrategy;
/**
@@ -397,7 +397,7 @@ export declare namespace VectorStores {
type OtherFileChunkingStrategyObject as OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy as StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject as StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam as StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam as StaticFileChunkingStrategyObjectParam,
type VectorStore as VectorStore,
type VectorStoreDeleted as VectorStoreDeleted,
VectorStoresPage as VectorStoresPage,
diff --git src/resources/chat/completions.ts src/resources/chat/completions.ts
index 31f5814cb..88c778036 100644
--- src/resources/chat/completions.ts
+++ src/resources/chat/completions.ts
@@ -163,8 +163,8 @@ export interface ChatCompletionAssistantMessageParam {
content?: string | Array<ChatCompletionContentPartText | ChatCompletionContentPartRefusal> | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: ChatCompletionAssistantMessageParam.FunctionCall | null;
@@ -198,8 +198,8 @@ export namespace ChatCompletionAssistantMessageParam {
}
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
@@ -360,8 +360,8 @@ export namespace ChatCompletionChunk {
content?: string | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: Delta.FunctionCall;
@@ -380,8 +380,8 @@ export namespace ChatCompletionChunk {
export namespace Delta {
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
@@ -620,8 +620,8 @@ export interface ChatCompletionMessage {
audio?: ChatCompletionAudio | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: ChatCompletionMessage.FunctionCall | null;
@@ -633,8 +633,8 @@ export interface ChatCompletionMessage {
export namespace ChatCompletionMessage {
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
diff --git src/resources/files.ts src/resources/files.ts
index 43708310b..67bc95469 100644
--- src/resources/files.ts
+++ src/resources/files.ts
@@ -168,13 +168,13 @@ export interface FileObject {
| 'vision';
/**
- * @deprecated: Deprecated. The current status of the file, which can be either
+ * @deprecated Deprecated. The current status of the file, which can be either
* `uploaded`, `processed`, or `error`.
*/
status: 'uploaded' | 'processed' | 'error';
/**
- * @deprecated: Deprecated. For details on why a fine-tuning training file failed
+ * @deprecated Deprecated. For details on why a fine-tuning training file failed
* validation, see the `error` field on `fine_tuning.job`.
*/
status_details?: string;
diff --git src/resources/fine-tuning/jobs/jobs.ts src/resources/fine-tuning/jobs/jobs.ts
index 44dd011aa..9be03c302 100644
--- src/resources/fine-tuning/jobs/jobs.ts
+++ src/resources/fine-tuning/jobs/jobs.ts
@@ -516,7 +516,7 @@ export interface JobCreateParams {
export namespace JobCreateParams {
/**
- * @deprecated: The hyperparameters used for the fine-tuning job. This value is now
+ * @deprecated The hyperparameters used for the fine-tuning job. This value is now
* deprecated in favor of `method`, and should be passed in under the `method`
* parameter.
*/
diff --git src/streaming.ts src/streaming.ts
index 2891e6ac3..6a57a50a0 100644
--- src/streaming.ts
+++ src/streaming.ts
@@ -1,6 +1,7 @@
import { ReadableStream, type Response } from './_shims/index';
import { OpenAIError } from './error';
import { LineDecoder } from './internal/decoders/line';
+import { ReadableStreamToAsyncIterable } from './internal/stream-utils';
import { APIError } from './error';
@@ -96,7 +97,7 @@ export class Stream<Item> implements AsyncIterable<Item> {
async function* iterLines(): AsyncGenerator<string, void, unknown> {
const lineDecoder = new LineDecoder();
- const iter = readableStreamAsyncIterable<Bytes>(readableStream);
+ const iter = ReadableStreamToAsyncIterable<Bytes>(readableStream);
for await (const chunk of iter) {
for (const line of lineDecoder.decode(chunk)) {
yield line;
@@ -210,7 +211,7 @@ export async function* _iterSSEMessages(
const sseDecoder = new SSEDecoder();
const lineDecoder = new LineDecoder();
- const iter = readableStreamAsyncIterable<Bytes>(response.body);
+ const iter = ReadableStreamToAsyncIterable<Bytes>(response.body);
for await (const sseChunk of iterSSEChunks(iter)) {
for (const line of lineDecoder.decode(sseChunk)) {
const sse = sseDecoder.decode(line);
@@ -363,36 +364,3 @@ function partition(str: string, delimiter: string): [string, string, string] {
return [str, '', ''];
}
-
-/**
- * Most browsers don't yet have async iterable support for ReadableStream,
- * and Node has a very different way of reading bytes from its "ReadableStream".
- *
- * This polyfill was pulled from https://github.com/MattiasBuelens/web-streams-polyfill/pull/122#issuecomment-1627354490
- */
-export function readableStreamAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
- if (stream[Symbol.asyncIterator]) return stream;
-
- const reader = stream.getReader();
- return {
- async next() {
- try {
- const result = await reader.read();
- if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
- return result;
- } catch (e) {
- reader.releaseLock(); // release lock when stream becomes errored
- throw e;
- }
- },
- async return() {
- const cancelPromise = reader.cancel();
- reader.releaseLock();
- await cancelPromise;
- return { done: true, value: undefined };
- },
- [Symbol.asyncIterator]() {
- return this;
- },
- };
-}
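The removed `readableStreamAsyncIterable` polyfill above was relocated to `src/internal/stream-utils.ts` as `ReadableStreamToAsyncIterable` rather than deleted. A standalone sketch of how such a wrapper is consumed — the helper body mirrors the removed code, while the surrounding driver is illustrative:

```typescript
import { ReadableStream } from 'node:stream/web';

// Standalone copy of the relocated helper, for illustration.
function toAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
  if (stream[Symbol.asyncIterator]) return stream;

  const reader = stream.getReader();
  return {
    async next() {
      try {
        const result = await reader.read();
        if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
        return result;
      } catch (e) {
        reader.releaseLock(); // release lock when stream becomes errored
        throw e;
      }
    },
    async return() {
      const cancelPromise = reader.cancel();
      reader.releaseLock();
      await cancelPromise;
      return { done: true, value: undefined };
    },
    [Symbol.asyncIterator]() {
      return this;
    },
  };
}

// Driver: a ReadableStream that emits two chunks then closes.
async function collect(): Promise<string[]> {
  const stream = new ReadableStream<string>({
    start(controller) {
      controller.enqueue('hello');
      controller.enqueue('world');
      controller.close();
    },
  });
  const chunks: string[] = [];
  for await (const chunk of toAsyncIterable<string>(stream)) {
    chunks.push(chunk);
  }
  return chunks;
}
```

On runtimes whose `ReadableStream` already implements `Symbol.asyncIterator` (e.g. recent Node), the wrapper returns the stream unchanged; otherwise it falls back to the reader-based iterator.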
diff --git src/version.ts src/version.ts
index a8ac58ba2..e8b9601ed 100644
--- src/version.ts
+++ src/version.ts
@@ -1 +1 @@
-export const VERSION = '4.78.1'; // x-release-please-version
+export const VERSION = '4.79.4'; // x-release-please-version
diff --git tests/index.test.ts tests/index.test.ts
index a6f0040a4..6227d6fbe 100644
--- tests/index.test.ts
+++ tests/index.test.ts
@@ -2,7 +2,7 @@
import OpenAI from 'openai';
import { APIUserAbortError } from 'openai';
-import { Headers } from 'openai/core';
+import { debug, Headers } from 'openai/core';
import defaultFetch, { Response, type RequestInit, type RequestInfo } from 'node-fetch';
describe('instantiate client', () => {
@@ -96,6 +96,15 @@ describe('instantiate client', () => {
expect(response).toEqual({ url: 'http://localhost:5000/foo', custom: true });
});
+ test('explicit global fetch', async () => {
+ // make sure the global fetch type is assignable to our Fetch type
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ apiKey: 'My API Key',
+ fetch: defaultFetch,
+ });
+ });
+
test('custom signal', async () => {
const client = new OpenAI({
baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
@@ -424,3 +433,95 @@ describe('retries', () => {
expect(count).toEqual(3);
});
});
+
+describe('debug()', () => {
+ const env = process.env;
+ const spy = jest.spyOn(console, 'log');
+
+ beforeEach(() => {
+ jest.resetModules();
+ process.env = { ...env };
+ process.env['DEBUG'] = 'true';
+ });
+
+ afterEach(() => {
+ process.env = env;
+ });
+
+ test('body request object with Authorization header', function () {
+ // Test request body includes headers object with Authorization
+ const headersTest = {
+ headers: {
+ Authorization: 'fakeAuthorization',
+ },
+ };
+ debug('request', headersTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ headers: {
+ Authorization: 'REDACTED',
+ },
+ });
+ });
+
+ test('body request object with api-key header', function () {
+ // Test request body includes headers object with api-key
+ const apiKeyTest = {
+ headers: {
+ 'api-key': 'fakeKey',
+ },
+ };
+ debug('request', apiKeyTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ headers: {
+ 'api-key': 'REDACTED',
+ },
+ });
+ });
+
+ test('header object with Authorization header', function () {
+ // Test headers object with authorization header
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ debug('request', authorizationTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+
+ test('input args are not mutated', function () {
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ defaultHeaders: authorizationTest,
+ apiKey: 'api-key',
+ });
+
+ const { req } = client.buildRequest({ path: '/foo', method: 'post' });
+ debug('request', authorizationTest);
+ expect((req.headers as Headers)['authorization']).toEqual('fakeValue');
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+
+ test('input headers are not mutated', function () {
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ defaultHeaders: authorizationTest,
+ apiKey: 'api-key',
+ });
+
+ const { req } = client.buildRequest({ path: '/foo', method: 'post' });
+ debug('request', { headers: req.headers });
+ expect((req.headers as Headers)['authorization']).toEqual('fakeValue');
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+});
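The new `debug()` tests above expect `Authorization`/`api-key` headers to be logged as `REDACTED` without mutating the caller's objects (the diff adds a matching `SENSITIVE_HEADERS` set in `src/core.ts`). A minimal sketch of that behavior — illustrative only, not the SDK's actual `debug` implementation:

```typescript
const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);

// Recursively copy a value, replacing any key whose lowercased name is
// sensitive with 'REDACTED'. Copying (rather than editing in place) is
// what keeps the caller's headers object unmutated.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [key, val] of Object.entries(value)) {
      out[key] = SENSITIVE_HEADERS.has(key.toLowerCase()) ? 'REDACTED' : redact(val);
    }
    return out;
  }
  return value;
}

function debugLog(action: string, body: unknown): void {
  if (process.env['DEBUG'] === 'true') {
    console.log(`OpenAI:DEBUG:${action}`, redact(body));
  }
}

// With DEBUG=true this would log the Authorization value as 'REDACTED'.
debugLog('request', { headers: { Authorization: 'Bearer fake-key' } });
```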
Description

This PR mainly focuses on introducing support for a new Realtime API in the OpenAI Node.js SDK, along with various bug fixes, chores, and documentation updates.

Changes

- Version Bump
- Documentation & Changelog
- Bug Fixes
- New Features
- Internal Restructuring
```mermaid
sequenceDiagram
    participant Dev as Developer
    participant API as OpenAI Node.js SDK
    Dev->>API: Bump version to 4.79.4
    Dev->>API: Update documentation for Realtime API
    Dev->>API: Add Realtime API through WebSocket and `ws` libraries
    Dev->>API: Enhance debug logging
    Dev->>API: Add unit tests for new features
    API-->>Dev: Updates integrated and ready for deployment
```
Security Hotspots
Notes
anthropic debug - [puLL-Merge] - openai/openai-node@v4.78.1..v4.79.4 Diff

diff --git .release-please-manifest.json .release-please-manifest.json
index 3218ab333..b1ab5c7b9 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.78.1"
+ ".": "4.79.4"
}
diff --git CHANGELOG.md CHANGELOG.md
index 320d00140..4254a9b8f 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,73 @@
# Changelog
+## 4.79.4 (2025-01-21)
+
+Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
+
+### Bug Fixes
+
+* **jsr:** correct zod config ([e45fa5f](https://github.com/openai/openai-node/commit/e45fa5f535ca74789636001e60e33edcad4db83c))
+
+
+### Chores
+
+* **internal:** minor restructuring ([#1278](https://github.com/openai/openai-node/issues/1278)) ([58ea92a](https://github.com/openai/openai-node/commit/58ea92a7464a04223f24ba31dbc0f7d0cf99cc19))
+
+
+### Documentation
+
+* update deprecation messages ([#1275](https://github.com/openai/openai-node/issues/1275)) ([1c6599e](https://github.com/openai/openai-node/commit/1c6599e47ef75a71cb309a1e14d97bc97bd036d0))
+
+## 4.79.3 (2025-01-21)
+
+Full Changelog: [v4.79.2...v4.79.3](https://github.com/openai/openai-node/compare/v4.79.2...v4.79.3)
+
+### Bug Fixes
+
+* **jsr:** export zod helpers ([9dc55b6](https://github.com/openai/openai-node/commit/9dc55b62b564ad5ad1d4a60fe520b68235d05296))
+
+## 4.79.2 (2025-01-21)
+
+Full Changelog: [v4.79.1...v4.79.2](https://github.com/openai/openai-node/compare/v4.79.1...v4.79.2)
+
+### Chores
+
+* **internal:** add test ([#1270](https://github.com/openai/openai-node/issues/1270)) ([b7c2d3d](https://github.com/openai/openai-node/commit/b7c2d3d9abd315f1452a578b0fd0d82e6ac4ff60))
+
+
+### Documentation
+
+* **readme:** fix Realtime API example link ([#1272](https://github.com/openai/openai-node/issues/1272)) ([d0653c7](https://github.com/openai/openai-node/commit/d0653c7fef48360d137a7411dfdfb95d477cdbc5))
+
+## 4.79.1 (2025-01-17)
+
+Full Changelog: [v4.79.0...v4.79.1](https://github.com/openai/openai-node/compare/v4.79.0...v4.79.1)
+
+### Bug Fixes
+
+* **realtime:** correct import syntax ([#1267](https://github.com/openai/openai-node/issues/1267)) ([74702a7](https://github.com/openai/openai-node/commit/74702a739f566810d2b6c4e0832cfa17a1d1e272))
+
+## 4.79.0 (2025-01-17)
+
+Full Changelog: [v4.78.1...v4.79.0](https://github.com/openai/openai-node/compare/v4.78.1...v4.79.0)
+
+### Features
+
+* **client:** add Realtime API support ([#1266](https://github.com/openai/openai-node/issues/1266)) ([7160ebe](https://github.com/openai/openai-node/commit/7160ebe647769fbf48a600c9961d1a6f86dc9622))
+
+
+### Bug Fixes
+
+* **logs/azure:** redact sensitive header when DEBUG is set ([#1218](https://github.com/openai/openai-node/issues/1218)) ([6a72fd7](https://github.com/openai/openai-node/commit/6a72fd736733db19504a829bf203b39d5b9e3644))
+
+
+### Chores
+
+* fix streaming ([379c743](https://github.com/openai/openai-node/commit/379c7435ed5d508458e9cdc22386039b84fcec5e))
+* **internal:** streaming refactors ([#1261](https://github.com/openai/openai-node/issues/1261)) ([dd4af93](https://github.com/openai/openai-node/commit/dd4af939792583854a313367c5fe2f98eea2f3c8))
+* **types:** add `| undefined` to client options properties ([#1264](https://github.com/openai/openai-node/issues/1264)) ([5e56979](https://github.com/openai/openai-node/commit/5e569799b9ac8f915b16de90d91d38b568c1edce))
+* **types:** rename vector store chunking strategy ([#1263](https://github.com/openai/openai-node/issues/1263)) ([d31acee](https://github.com/openai/openai-node/commit/d31acee860c80ba945d4e70b956c7ed75f5f849a))
+
## 4.78.1 (2025-01-10)
Full Changelog: [v4.78.0...v4.78.1](https://github.com/openai/openai-node/compare/v4.78.0...v4.78.1)
diff --git README.md README.md
index 3039857a1..3bd386e99 100644
--- README.md
+++ README.md
@@ -83,6 +83,93 @@ main();
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.
+## Realtime API beta
+
+The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a `WebSocket` connection.
+
+The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).
+
+This SDK supports accessing the Realtime API through the [WebSocket API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) or with [ws](https://github.com/websockets/ws).
+
+Basic text based example with `ws`:
+
+```ts
+// requires `yarn add ws @types/ws`
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+
+const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+// access the underlying `ws.WebSocket` instance
+rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+});
+
+rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+});
+
+rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+});
+
+rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+rt.on('response.text.done', () => console.log());
+
+rt.on('response.done', () => rt.close());
+
+rt.socket.on('close', () => console.log('\nConnection closed!'));
+```
+
+To use the web API `WebSocket` implementation, replace `OpenAIRealtimeWS` with `OpenAIRealtimeWebSocket` and adjust any `rt.socket` access:
+
+```ts
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+
+const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+// ...
+rt.socket.addEventListener('open', () => {
+ // ...
+});
+```
+
+A full example can be found [here](https://github.com/openai/openai-node/blob/master/examples/realtime/websocket.ts).
+
+### Realtime error handling
+
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener then an `unhandled Promise rejection` error will be thrown.
+
+It is **highly recommended** that you register an `error` event listener and handle errors appropriately as typically the underlying connection is still usable.
+
+```ts
+const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+});
+```
+
### Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
diff --git api.md api.md
index a885628a3..33ab95ef6 100644
--- api.md
+++ api.md
@@ -283,7 +283,7 @@ Types:
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">OtherFileChunkingStrategyObject</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategy</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObject</a></code>
-- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyParam</a></code>
+- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObjectParam</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStore</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStoreDeleted</a></code>
diff --git examples/package.json examples/package.json
index c8a5f7087..b8c34ac45 100644
--- examples/package.json
+++ examples/package.json
@@ -6,14 +6,15 @@
"license": "MIT",
"private": true,
"dependencies": {
+ "@azure/identity": "^4.2.0",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
- "zod-to-json-schema": "^3.21.4",
- "@azure/identity": "^4.2.0"
+ "zod-to-json-schema": "^3.21.4"
},
"devDependencies": {
"@types/body-parser": "^1.19.3",
- "@types/express": "^4.17.19"
+ "@types/express": "^4.17.19",
+ "@types/web": "^0.0.194"
}
}
diff --git a/examples/realtime/websocket.ts b/examples/realtime/websocket.ts
new file mode 100644
index 000000000..0da131bc3
--- /dev/null
+++ examples/realtime/websocket.ts
@@ -0,0 +1,48 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+
+async function main() {
+ const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/realtime/ws.ts b/examples/realtime/ws.ts
new file mode 100644
index 000000000..4bbe85e5d
--- /dev/null
+++ examples/realtime/ws.ts
@@ -0,0 +1,55 @@
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+
+async function main() {
+ const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['foo'] as any,
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git jsr.json jsr.json
index 257faa02d..e6d772116 100644
--- jsr.json
+++ jsr.json
@@ -1,7 +1,14 @@
{
"name": "@openai/openai",
- "version": "4.78.1",
- "exports": "./index.ts",
+ "version": "4.79.4",
+ "exports": {
+ ".": "./index.ts",
+ "./helpers/zod": "./helpers/zod.ts",
+ "./beta/realtime/websocket": "./beta/realtime/websocket.ts"
+ },
+ "imports": {
+ "zod": "npm:zod@3"
+ },
"publish": {
"exclude": [
"!."
diff --git package.json package.json
index ff6ec16bc..d7a5555e5 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
{
"name": "openai",
- "version": "4.78.1",
+ "version": "4.79.4",
"description": "The official TypeScript library for the OpenAI API",
"author": "OpenAI <support@openai.com>",
"types": "dist/index.d.ts",
@@ -36,6 +36,7 @@
"@swc/core": "^1.3.102",
"@swc/jest": "^0.2.29",
"@types/jest": "^29.4.0",
+ "@types/ws": "^8.5.13",
"@typescript-eslint/eslint-plugin": "^6.7.0",
"@typescript-eslint/parser": "^6.7.0",
"eslint": "^8.49.0",
@@ -52,6 +53,7 @@
"tsc-multi": "^1.1.0",
"tsconfig-paths": "^4.0.0",
"typescript": "^4.8.2",
+ "ws": "^8.18.0",
"zod": "^3.23.8"
},
"sideEffects": [
@@ -126,9 +128,13 @@
},
"bin": "./bin/cli",
"peerDependencies": {
+ "ws": "^8.18.0",
"zod": "^3.23.8"
},
"peerDependenciesMeta": {
+ "ws": {
+ "optional": true
+ },
"zod": {
"optional": true
}
diff --git a/src/beta/realtime/index.ts b/src/beta/realtime/index.ts
new file mode 100644
index 000000000..75f0f3088
--- /dev/null
+++ src/beta/realtime/index.ts
@@ -0,0 +1 @@
+export { OpenAIRealtimeError } from './internal-base';
diff --git a/src/beta/realtime/internal-base.ts b/src/beta/realtime/internal-base.ts
new file mode 100644
index 000000000..391d69911
--- /dev/null
+++ src/beta/realtime/internal-base.ts
@@ -0,0 +1,83 @@
+import { RealtimeClientEvent, RealtimeServerEvent, ErrorEvent } from '../../resources/beta/realtime/realtime';
+import { EventEmitter } from '../../lib/EventEmitter';
+import { OpenAIError } from '../../error';
+
+export class OpenAIRealtimeError extends OpenAIError {
+ /**
+ * The error data that the API sent back in an `error` event.
+ */
+ error?: ErrorEvent.Error | undefined;
+
+ /**
+ * The unique ID of the server event.
+ */
+ event_id?: string | undefined;
+
+ constructor(message: string, event: ErrorEvent | null) {
+ super(message);
+
+ this.error = event?.error;
+ this.event_id = event?.event_id;
+ }
+}
+
+type Simplify<T> = { [KeyType in keyof T]: T[KeyType] } & {};
+
+type RealtimeEvents = Simplify<
+ {
+ event: (event: RealtimeServerEvent) => void;
+ error: (error: OpenAIRealtimeError) => void;
+ } & {
+ [EventType in Exclude<RealtimeServerEvent['type'], 'error'>]: (
+ event: Extract<RealtimeServerEvent, { type: EventType }>,
+ ) => unknown;
+ }
+>;
+
+export abstract class OpenAIRealtimeEmitter extends EventEmitter<RealtimeEvents> {
+ /**
+ * Send an event to the API.
+ */
+ abstract send(event: RealtimeClientEvent): void;
+
+ /**
+ * Close the websocket connection.
+ */
+ abstract close(props?: { code: number; reason: string }): void;
+
+ protected _onError(event: null, message: string, cause: any): void;
+ protected _onError(event: ErrorEvent, message?: string | undefined): void;
+ protected _onError(event: ErrorEvent | null, message?: string | undefined, cause?: any): void {
+ message =
+ event?.error ?
+ `${event.error.message} code=${event.error.code} param=${event.error.param} type=${event.error.type} event_id=${event.error.event_id}`
+ : message ?? 'unknown error';
+
+ if (!this._hasListener('error')) {
+ const error = new OpenAIRealtimeError(
+ message +
+ `\n\nTo resolve these unhandled rejection errors you should bind an \`error\` callback, e.g. \`rt.on('error', (error) => ...)\` `,
+ event,
+ );
+ // @ts-ignore
+ error.cause = cause;
+ Promise.reject(error);
+ return;
+ }
+
+ const error = new OpenAIRealtimeError(message, event);
+ // @ts-ignore
+ error.cause = cause;
+
+ this._emit('error', error);
+ }
+}
+
+export function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
+ const path = '/realtime';
+
+ const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
+ url.protocol = 'wss';
+ url.searchParams.set('model', props.model);
+ return url;
+}
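The `buildRealtimeURL` helper above turns the client's `baseURL` into a `wss://` endpoint with the model as a query parameter. Copied out as a standalone function, its behavior can be checked directly:

```typescript
// Copy of the helper from the diff above, runnable standalone.
function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
  const path = '/realtime';

  const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
  url.protocol = 'wss'; // swap the http(s) scheme for a secure websocket scheme
  url.searchParams.set('model', props.model);
  return url;
}

const url = buildRealtimeURL({
  baseURL: 'https://api.openai.com/v1',
  model: 'gpt-4o-realtime-preview-2024-12-17',
});
console.log(url.toString());
// → wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-12-17
```

Note that reassigning `url.protocol` from `https` to `wss` is permitted because both are "special" schemes under the WHATWG URL standard.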
diff --git a/src/beta/realtime/websocket.ts b/src/beta/realtime/websocket.ts
new file mode 100644
index 000000000..e0853779d
--- /dev/null
+++ src/beta/realtime/websocket.ts
@@ -0,0 +1,97 @@
+import { OpenAI } from '../../index';
+import { OpenAIError } from '../../error';
+import * as Core from '../../core';
+import type { RealtimeClientEvent, RealtimeServerEvent } from '../../resources/beta/realtime/realtime';
+import { OpenAIRealtimeEmitter, buildRealtimeURL } from './internal-base';
+
+interface MessageEvent {
+ data: string;
+}
+
+type _WebSocket =
+ typeof globalThis extends (
+ {
+ WebSocket: infer ws;
+ }
+ ) ?
+ // @ts-ignore
+ InstanceType<ws>
+ : any;
+
+export class OpenAIRealtimeWebSocket extends OpenAIRealtimeEmitter {
+ url: URL;
+ socket: _WebSocket;
+
+ constructor(
+ props: {
+ model: string;
+ dangerouslyAllowBrowser?: boolean;
+ },
+ client?: Pick<OpenAI, 'apiKey' | 'baseURL'>,
+ ) {
+ super();
+
+ const dangerouslyAllowBrowser =
+ props.dangerouslyAllowBrowser ??
+ (client as any)?._options?.dangerouslyAllowBrowser ??
+ (client?.apiKey.startsWith('ek_') ? true : null);
+
+ if (!dangerouslyAllowBrowser && Core.isRunningInBrowser()) {
+ throw new OpenAIError(
+ "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\n\nYou can avoid this error by creating an ephemeral session token:\nhttps://platform.openai.com/docs/api-reference/realtime-sessions\n",
+ );
+ }
+
+ client ??= new OpenAI({ dangerouslyAllowBrowser });
+
+ this.url = buildRealtimeURL({ baseURL: client.baseURL, model: props.model });
+ // @ts-ignore
+ this.socket = new WebSocket(this.url, [
+ 'realtime',
+ `openai-insecure-api-key.${client.apiKey}`,
+ 'openai-beta.realtime-v1',
+ ]);
+
+ this.socket.addEventListener('message', (websocketEvent: MessageEvent) => {
+ const event = (() => {
+ try {
+ return JSON.parse(websocketEvent.data.toString()) as RealtimeServerEvent;
+ } catch (err) {
+ this._onError(null, 'could not parse websocket event', err);
+ return null;
+ }
+ })();
+
+ if (event) {
+ this._emit('event', event);
+
+ if (event.type === 'error') {
+ this._onError(event);
+ } else {
+ // @ts-expect-error TS isn't smart enough to get the relationship right here
+ this._emit(event.type, event);
+ }
+ }
+ });
+
+ this.socket.addEventListener('error', (event: any) => {
+ this._onError(null, event.message, null);
+ });
+ }
+
+ send(event: RealtimeClientEvent) {
+ try {
+ this.socket.send(JSON.stringify(event));
+ } catch (err) {
+ this._onError(null, 'could not send data', err);
+ }
+ }
+
+ close(props?: { code: number; reason: string }) {
+ try {
+ this.socket.close(props?.code ?? 1000, props?.reason ?? 'OK');
+ } catch (err) {
+ this._onError(null, 'could not close the connection', err);
+ }
+ }
+}
diff --git a/src/beta/realtime/ws.ts b/src/beta/realtime/ws.ts
new file mode 100644
index 000000000..631a36cd2
--- /dev/null
+++ src/beta/realtime/ws.ts
@@ -0,0 +1,69 @@
+import * as WS from 'ws';
+import { OpenAI } from '../../index';
+import type { RealtimeClientEvent, RealtimeServerEvent } from '../../resources/beta/realtime/realtime';
+import { OpenAIRealtimeEmitter, buildRealtimeURL } from './internal-base';
+
+export class OpenAIRealtimeWS extends OpenAIRealtimeEmitter {
+ url: URL;
+ socket: WS.WebSocket;
+
+ constructor(
+ props: { model: string; options?: WS.ClientOptions | undefined },
+ client?: Pick<OpenAI, 'apiKey' | 'baseURL'>,
+ ) {
+ super();
+ client ??= new OpenAI();
+
+ this.url = buildRealtimeURL({ baseURL: client.baseURL, model: props.model });
+ this.socket = new WS.WebSocket(this.url, {
+ ...props.options,
+ headers: {
+ ...props.options?.headers,
+ Authorization: `Bearer ${client.apiKey}`,
+ 'OpenAI-Beta': 'realtime=v1',
+ },
+ });
+
+ this.socket.on('message', (wsEvent) => {
+ const event = (() => {
+ try {
+ return JSON.parse(wsEvent.toString()) as RealtimeServerEvent;
+ } catch (err) {
+ this._onError(null, 'could not parse websocket event', err);
+ return null;
+ }
+ })();
+
+ if (event) {
+ this._emit('event', event);
+
+ if (event.type === 'error') {
+ this._onError(event);
+ } else {
+ // @ts-expect-error TS isn't smart enough to get the relationship right here
+ this._emit(event.type, event);
+ }
+ }
+ });
+
+ this.socket.on('error', (err) => {
+ this._onError(null, err.message, err);
+ });
+ }
+
+ send(event: RealtimeClientEvent) {
+ try {
+ this.socket.send(JSON.stringify(event));
+ } catch (err) {
+ this._onError(null, 'could not send data', err);
+ }
+ }
+
+ close(props?: { code: number; reason: string }) {
+ try {
+ this.socket.close(props?.code ?? 1000, props?.reason ?? 'OK');
+ } catch (err) {
+ this._onError(null, 'could not close the connection', err);
+ }
+ }
+}
diff --git src/core.ts src/core.ts
index 972cceaec..3d2d029a5 100644
--- src/core.ts
+++ src/core.ts
@@ -1148,9 +1148,43 @@ function applyHeadersMut(targetHeaders: Headers, newHeaders: Headers): void {
}
}
+const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);
+
export function debug(action: string, ...args: any[]) {
if (typeof process !== 'undefined' && process?.env?.['DEBUG'] === 'true') {
- console.log(`OpenAI:DEBUG:${action}`, ...args);
+ const modifiedArgs = args.map((arg) => {
+ if (!arg) {
+ return arg;
+ }
+
+ // Check for sensitive headers in request body 'headers' object
+ if (arg['headers']) {
+ // clone so we don't mutate
+ const modifiedArg = { ...arg, headers: { ...arg['headers'] } };
+
+ for (const header in arg['headers']) {
+ if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
+ modifiedArg['headers'][header] = 'REDACTED';
+ }
+ }
+
+ return modifiedArg;
+ }
+
+ let modifiedArg = null;
+
+ // Check for sensitive headers in headers object
+ for (const header in arg) {
+ if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
+ // avoid making a copy until we need to
+ modifiedArg ??= { ...arg };
+ modifiedArg[header] = 'REDACTED';
+ }
+ }
+
+ return modifiedArg ?? arg;
+ });
+ console.log(`OpenAI:DEBUG:${action}`, ...modifiedArgs);
}
}
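The redaction logic in this hunk can be exercised on its own; below is a minimal standalone sketch of the same approach (the `redactHeaders` name is illustrative, not part of the SDK):

```typescript
const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);

// Illustrative re-statement of the debug() change above: return a copy
// with sensitive header values replaced, never mutating the input.
function redactHeaders(arg: Record<string, any>): Record<string, any> {
  if (arg['headers']) {
    // clone so we don't mutate the caller's object
    const clone = { ...arg, headers: { ...arg['headers'] } };
    for (const header in arg['headers']) {
      if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
        clone['headers'][header] = 'REDACTED';
      }
    }
    return clone;
  }

  let modified: Record<string, any> | null = null;
  for (const header in arg) {
    if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
      modified ??= { ...arg }; // copy only when we actually need to
      modified[header] = 'REDACTED';
    }
  }
  return modified ?? arg;
}

const request = { headers: { Authorization: 'Bearer sk-secret' }, body: '{}' };
const redacted = redactHeaders(request);
console.log(redacted.headers.Authorization); // REDACTED
console.log(request.headers.Authorization); // Bearer sk-secret (input untouched)
```

Note the same two-pass shape as the diff: a request-like object carrying a `headers` property gets a deep-enough clone, while a bare headers object is copied lazily.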
diff --git src/index.ts src/index.ts
index 2320850fb..cf6aa89e3 100644
--- src/index.ts
+++ src/index.ts
@@ -137,7 +137,7 @@ export interface ClientOptions {
* Note that request timeouts are retried by default, so in a worst-case scenario you may wait
* much longer than this timeout before the promise succeeds or fails.
*/
- timeout?: number;
+ timeout?: number | undefined;
/**
* An HTTP agent used to manage HTTP(S) connections.
@@ -145,7 +145,7 @@ export interface ClientOptions {
* If not provided, an agent will be constructed by default in the Node.js environment,
* otherwise no agent is used.
*/
- httpAgent?: Agent;
+ httpAgent?: Agent | undefined;
/**
* Specify a custom `fetch` function implementation.
@@ -161,7 +161,7 @@ export interface ClientOptions {
*
* @default 2
*/
- maxRetries?: number;
+ maxRetries?: number | undefined;
/**
* Default headers to include with every request to the API.
@@ -169,7 +169,7 @@ export interface ClientOptions {
* These can be removed in individual requests by explicitly setting the
* header to `undefined` or `null` in request options.
*/
- defaultHeaders?: Core.Headers;
+ defaultHeaders?: Core.Headers | undefined;
/**
* Default query parameters to include with every request to the API.
@@ -177,13 +177,13 @@ export interface ClientOptions {
* These can be removed in individual requests by explicitly setting the
* param to `undefined` in request options.
*/
- defaultQuery?: Core.DefaultQuery;
+ defaultQuery?: Core.DefaultQuery | undefined;
/**
* By default, client-side use of this library is not allowed, as it risks exposing your secret API credentials to attackers.
* Only set this option to `true` if you understand the risks and have appropriate mitigations in place.
*/
- dangerouslyAllowBrowser?: boolean;
+ dangerouslyAllowBrowser?: boolean | undefined;
}
/**
diff --git src/internal/decoders/line.ts src/internal/decoders/line.ts
index 1e0bbf390..34e41d1dc 100644
--- src/internal/decoders/line.ts
+++ src/internal/decoders/line.ts
@@ -1,6 +1,6 @@
import { OpenAIError } from '../../error';
-type Bytes = string | ArrayBuffer | Uint8Array | Buffer | null | undefined;
+export type Bytes = string | ArrayBuffer | Uint8Array | Buffer | null | undefined;
/**
* A re-implementation of httpx's `LineDecoder` in Python that handles incrementally
diff --git a/src/internal/stream-utils.ts b/src/internal/stream-utils.ts
new file mode 100644
index 000000000..37f7793cf
--- /dev/null
+++ src/internal/stream-utils.ts
@@ -0,0 +1,32 @@
+/**
+ * Most browsers don't yet have async iterable support for ReadableStream,
+ * and Node has a very different way of reading bytes from its "ReadableStream".
+ *
+ * This polyfill was pulled from https://github.com/MattiasBuelens/web-streams-polyfill/pull/122#issuecomment-1627354490
+ */
+export function ReadableStreamToAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
+ if (stream[Symbol.asyncIterator]) return stream;
+
+ const reader = stream.getReader();
+ return {
+ async next() {
+ try {
+ const result = await reader.read();
+ if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
+ return result;
+ } catch (e) {
+ reader.releaseLock(); // release lock when stream becomes errored
+ throw e;
+ }
+ },
+ async return() {
+ const cancelPromise = reader.cancel();
+ reader.releaseLock();
+ await cancelPromise;
+ return { done: true, value: undefined };
+ },
+ [Symbol.asyncIterator]() {
+ return this;
+ },
+ };
+}
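A minimal usage sketch for the helper above, re-declared here so the snippet is self-contained (assumes Node 18+, where `ReadableStream` is a global):

```typescript
// Same shape as ReadableStreamToAsyncIterable in the diff above.
function toAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
  if (stream[Symbol.asyncIterator]) return stream; // native support, e.g. Node
  const reader = stream.getReader();
  return {
    async next() {
      try {
        const result = await reader.read();
        if (result?.done) reader.releaseLock(); // release lock when stream closes
        return result;
      } catch (e) {
        reader.releaseLock(); // release lock when stream errors
        throw e;
      }
    },
    async return() {
      const cancelPromise = reader.cancel();
      reader.releaseLock();
      await cancelPromise;
      return { done: true, value: undefined };
    },
    [Symbol.asyncIterator]() {
      return this;
    },
  };
}

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  },
});

const chunks: string[] = [];
for await (const chunk of toAsyncIterable<string>(stream)) {
  chunks.push(chunk);
}
console.log(chunks.join(' ')); // hello world
```

On runtimes whose streams already implement `Symbol.asyncIterator` (Node's do), the stream is returned as-is; the manual reader wrapper only kicks in for browsers without async-iterable support.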
diff --git a/src/lib/EventEmitter.ts b/src/lib/EventEmitter.ts
new file mode 100644
index 000000000..9adeebdc3
--- /dev/null
+++ src/lib/EventEmitter.ts
@@ -0,0 +1,98 @@
+type EventListener<Events, EventType extends keyof Events> = Events[EventType];
+
+type EventListeners<Events, EventType extends keyof Events> = Array<{
+ listener: EventListener<Events, EventType>;
+ once?: boolean;
+}>;
+
+export type EventParameters<Events, EventType extends keyof Events> = {
+ [Event in EventType]: EventListener<Events, EventType> extends (...args: infer P) => any ? P : never;
+}[EventType];
+
+export class EventEmitter<EventTypes extends Record<string, (...args: any) => any>> {
+ #listeners: {
+ [Event in keyof EventTypes]?: EventListeners<EventTypes, Event>;
+ } = {};
+
+ /**
+ * Adds the listener function to the end of the listeners array for the event.
+ * No checks are made to see if the listener has already been added. Multiple calls passing
+ * the same combination of event and listener will result in the listener being added, and
+ * called, multiple times.
+ * @returns this, so that calls can be chained
+ */
+ on<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners: EventListeners<EventTypes, Event> =
+ this.#listeners[event] || (this.#listeners[event] = []);
+ listeners.push({ listener });
+ return this;
+ }
+
+ /**
+ * Removes the specified listener from the listener array for the event.
+ * off() will remove, at most, one instance of a listener from the listener array. If any single
+ * listener has been added multiple times to the listener array for the specified event, then
+ * off() must be called multiple times to remove each instance.
+ * @returns this, so that calls can be chained
+ */
+ off<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners = this.#listeners[event];
+ if (!listeners) return this;
+ const index = listeners.findIndex((l) => l.listener === listener);
+ if (index >= 0) listeners.splice(index, 1);
+ return this;
+ }
+
+ /**
+ * Adds a one-time listener function for the event. The next time the event is triggered,
+ * this listener is removed and then invoked.
+ * @returns this, so that calls can be chained
+ */
+ once<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners: EventListeners<EventTypes, Event> =
+ this.#listeners[event] || (this.#listeners[event] = []);
+ listeners.push({ listener, once: true });
+ return this;
+ }
+
+ /**
+ * This is similar to `.once()`, but returns a Promise that resolves the next time
+ * the event is triggered, instead of calling a listener callback.
+ * @returns a Promise that resolves the next time given event is triggered,
+ * or rejects if an error is emitted. (If you request the 'error' event,
+ * returns a promise that resolves with the error).
+ *
+ * Example:
+ *
+ * const message = await stream.emitted('message') // rejects if the stream errors
+ */
+ emitted<Event extends keyof EventTypes>(
+ event: Event,
+ ): Promise<
+ EventParameters<EventTypes, Event> extends [infer Param] ? Param
+ : EventParameters<EventTypes, Event> extends [] ? void
+ : EventParameters<EventTypes, Event>
+ > {
+ return new Promise((resolve, reject) => {
+ // TODO: handle errors
+ this.once(event, resolve as any);
+ });
+ }
+
+ protected _emit<Event extends keyof EventTypes>(
+ this: EventEmitter<EventTypes>,
+ event: Event,
+ ...args: EventParameters<EventTypes, Event>
+ ) {
+ const listeners: EventListeners<EventTypes, Event> | undefined = this.#listeners[event];
+ if (listeners) {
+ this.#listeners[event] = listeners.filter((l) => !l.once) as any;
+ listeners.forEach(({ listener }: any) => listener(...(args as any)));
+ }
+ }
+
+ protected _hasListener(event: keyof EventTypes): boolean {
+ const listeners = this.#listeners[event];
+ return listeners && listeners.length > 0;
+ }
+}
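A compact illustration of the `on`/`once` dispatch semantics implemented above (`MiniEmitter` is a trimmed, hypothetical stand-in for the SDK class, not part of the diff):

```typescript
type Listener = (...args: any[]) => void;

class MiniEmitter {
  private listeners: Record<string, { listener: Listener; once?: boolean }[]> = {};

  on(event: string, listener: Listener): this {
    (this.listeners[event] ||= []).push({ listener });
    return this;
  }

  once(event: string, listener: Listener): this {
    (this.listeners[event] ||= []).push({ listener, once: true });
    return this;
  }

  emit(event: string, ...args: any[]) {
    const current = this.listeners[event];
    if (!current) return;
    // drop one-shot listeners before invoking, as _emit does above
    this.listeners[event] = current.filter((l) => !l.once);
    current.forEach(({ listener }) => listener(...args));
  }
}

const em = new MiniEmitter();
const seen: string[] = [];
em.on('msg', (m) => seen.push(`on:${m}`));
em.once('msg', (m) => seen.push(`once:${m}`));
em.emit('msg', 'a');
em.emit('msg', 'b');
console.log(seen); // ['on:a', 'once:a', 'on:b']
```

As in the SDK's `_emit`, one-shot listeners are removed from the registry before the current batch runs, so a `once` listener fires exactly once even if it re-emits the same event.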
diff --git src/resources/beta/beta.ts src/resources/beta/beta.ts
index ccd043243..df929b2f7 100644
--- src/resources/beta/beta.ts
+++ src/resources/beta/beta.ts
@@ -48,7 +48,7 @@ import {
OtherFileChunkingStrategyObject,
StaticFileChunkingStrategy,
StaticFileChunkingStrategyObject,
- StaticFileChunkingStrategyParam,
+ StaticFileChunkingStrategyObjectParam,
VectorStore,
VectorStoreCreateParams,
VectorStoreDeleted,
@@ -85,7 +85,7 @@ export declare namespace Beta {
type OtherFileChunkingStrategyObject as OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy as StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject as StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam as StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam as StaticFileChunkingStrategyObjectParam,
type VectorStore as VectorStore,
type VectorStoreDeleted as VectorStoreDeleted,
VectorStoresPage as VectorStoresPage,
diff --git src/resources/beta/index.ts src/resources/beta/index.ts
index aa2e52d4c..babca0016 100644
--- src/resources/beta/index.ts
+++ src/resources/beta/index.ts
@@ -46,7 +46,7 @@ export {
type OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam,
type VectorStore,
type VectorStoreDeleted,
type VectorStoreCreateParams,
diff --git src/resources/beta/vector-stores/index.ts src/resources/beta/vector-stores/index.ts
index 89fc0cde0..d587bd160 100644
--- src/resources/beta/vector-stores/index.ts
+++ src/resources/beta/vector-stores/index.ts
@@ -23,7 +23,7 @@ export {
type OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam,
type VectorStore,
type VectorStoreDeleted,
type VectorStoreCreateParams,
diff --git src/resources/beta/vector-stores/vector-stores.ts src/resources/beta/vector-stores/vector-stores.ts
index 35ad8c369..cbff2d562 100644
--- src/resources/beta/vector-stores/vector-stores.ts
+++ src/resources/beta/vector-stores/vector-stores.ts
@@ -116,7 +116,7 @@ export type FileChunkingStrategy = StaticFileChunkingStrategyObject | OtherFileC
* The chunking strategy used to chunk the file(s). If not set, will use the `auto`
* strategy. Only applicable if `file_ids` is non-empty.
*/
-export type FileChunkingStrategyParam = AutoFileChunkingStrategyParam | StaticFileChunkingStrategyParam;
+export type FileChunkingStrategyParam = AutoFileChunkingStrategyParam | StaticFileChunkingStrategyObjectParam;
/**
* This is returned when the chunking strategy is unknown. Typically, this is
@@ -154,7 +154,7 @@ export interface StaticFileChunkingStrategyObject {
type: 'static';
}
-export interface StaticFileChunkingStrategyParam {
+export interface StaticFileChunkingStrategyObjectParam {
static: StaticFileChunkingStrategy;
/**
@@ -397,7 +397,7 @@ export declare namespace VectorStores {
type OtherFileChunkingStrategyObject as OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy as StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject as StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam as StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam as StaticFileChunkingStrategyObjectParam,
type VectorStore as VectorStore,
type VectorStoreDeleted as VectorStoreDeleted,
VectorStoresPage as VectorStoresPage,
diff --git src/resources/chat/completions.ts src/resources/chat/completions.ts
index 31f5814cb..88c778036 100644
--- src/resources/chat/completions.ts
+++ src/resources/chat/completions.ts
@@ -163,8 +163,8 @@ export interface ChatCompletionAssistantMessageParam {
content?: string | Array<ChatCompletionContentPartText | ChatCompletionContentPartRefusal> | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: ChatCompletionAssistantMessageParam.FunctionCall | null;
@@ -198,8 +198,8 @@ export namespace ChatCompletionAssistantMessageParam {
}
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
@@ -360,8 +360,8 @@ export namespace ChatCompletionChunk {
content?: string | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: Delta.FunctionCall;
@@ -380,8 +380,8 @@ export namespace ChatCompletionChunk {
export namespace Delta {
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
@@ -620,8 +620,8 @@ export interface ChatCompletionMessage {
audio?: ChatCompletionAudio | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: ChatCompletionMessage.FunctionCall | null;
@@ -633,8 +633,8 @@ export interface ChatCompletionMessage {
export namespace ChatCompletionMessage {
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
diff --git src/resources/files.ts src/resources/files.ts
index 43708310b..67bc95469 100644
--- src/resources/files.ts
+++ src/resources/files.ts
@@ -168,13 +168,13 @@ export interface FileObject {
| 'vision';
/**
- * @deprecated: Deprecated. The current status of the file, which can be either
+ * @deprecated Deprecated. The current status of the file, which can be either
* `uploaded`, `processed`, or `error`.
*/
status: 'uploaded' | 'processed' | 'error';
/**
- * @deprecated: Deprecated. For details on why a fine-tuning training file failed
+ * @deprecated Deprecated. For details on why a fine-tuning training file failed
* validation, see the `error` field on `fine_tuning.job`.
*/
status_details?: string;
diff --git src/resources/fine-tuning/jobs/jobs.ts src/resources/fine-tuning/jobs/jobs.ts
index 44dd011aa..9be03c302 100644
--- src/resources/fine-tuning/jobs/jobs.ts
+++ src/resources/fine-tuning/jobs/jobs.ts
@@ -516,7 +516,7 @@ export interface JobCreateParams {
export namespace JobCreateParams {
/**
- * @deprecated: The hyperparameters used for the fine-tuning job. This value is now
+ * @deprecated The hyperparameters used for the fine-tuning job. This value is now
* deprecated in favor of `method`, and should be passed in under the `method`
* parameter.
*/
diff --git src/streaming.ts src/streaming.ts
index 2891e6ac3..6a57a50a0 100644
--- src/streaming.ts
+++ src/streaming.ts
@@ -1,6 +1,7 @@
import { ReadableStream, type Response } from './_shims/index';
import { OpenAIError } from './error';
import { LineDecoder } from './internal/decoders/line';
+import { ReadableStreamToAsyncIterable } from './internal/stream-utils';
import { APIError } from './error';
@@ -96,7 +97,7 @@ export class Stream<Item> implements AsyncIterable<Item> {
async function* iterLines(): AsyncGenerator<string, void, unknown> {
const lineDecoder = new LineDecoder();
- const iter = readableStreamAsyncIterable<Bytes>(readableStream);
+ const iter = ReadableStreamToAsyncIterable<Bytes>(readableStream);
for await (const chunk of iter) {
for (const line of lineDecoder.decode(chunk)) {
yield line;
@@ -210,7 +211,7 @@ export async function* _iterSSEMessages(
const sseDecoder = new SSEDecoder();
const lineDecoder = new LineDecoder();
- const iter = readableStreamAsyncIterable<Bytes>(response.body);
+ const iter = ReadableStreamToAsyncIterable<Bytes>(response.body);
for await (const sseChunk of iterSSEChunks(iter)) {
for (const line of lineDecoder.decode(sseChunk)) {
const sse = sseDecoder.decode(line);
@@ -363,36 +364,3 @@ function partition(str: string, delimiter: string): [string, string, string] {
return [str, '', ''];
}
-
-/**
- * Most browsers don't yet have async iterable support for ReadableStream,
- * and Node has a very different way of reading bytes from its "ReadableStream".
- *
- * This polyfill was pulled from https://github.com/MattiasBuelens/web-streams-polyfill/pull/122#issuecomment-1627354490
- */
-export function readableStreamAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
- if (stream[Symbol.asyncIterator]) return stream;
-
- const reader = stream.getReader();
- return {
- async next() {
- try {
- const result = await reader.read();
- if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
- return result;
- } catch (e) {
- reader.releaseLock(); // release lock when stream becomes errored
- throw e;
- }
- },
- async return() {
- const cancelPromise = reader.cancel();
- reader.releaseLock();
- await cancelPromise;
- return { done: true, value: undefined };
- },
- [Symbol.asyncIterator]() {
- return this;
- },
- };
-}
diff --git src/version.ts src/version.ts
index a8ac58ba2..e8b9601ed 100644
--- src/version.ts
+++ src/version.ts
@@ -1 +1 @@
-export const VERSION = '4.78.1'; // x-release-please-version
+export const VERSION = '4.79.4'; // x-release-please-version
diff --git tests/index.test.ts tests/index.test.ts
index a6f0040a4..6227d6fbe 100644
--- tests/index.test.ts
+++ tests/index.test.ts
@@ -2,7 +2,7 @@
import OpenAI from 'openai';
import { APIUserAbortError } from 'openai';
-import { Headers } from 'openai/core';
+import { debug, Headers } from 'openai/core';
import defaultFetch, { Response, type RequestInit, type RequestInfo } from 'node-fetch';
describe('instantiate client', () => {
@@ -96,6 +96,15 @@ describe('instantiate client', () => {
expect(response).toEqual({ url: 'http://localhost:5000/foo', custom: true });
});
+ test('explicit global fetch', async () => {
+ // make sure the global fetch type is assignable to our Fetch type
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ apiKey: 'My API Key',
+ fetch: defaultFetch,
+ });
+ });
+
test('custom signal', async () => {
const client = new OpenAI({
baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
@@ -424,3 +433,95 @@ describe('retries', () => {
expect(count).toEqual(3);
});
});
+
+describe('debug()', () => {
+ const env = process.env;
+ const spy = jest.spyOn(console, 'log');
+
+ beforeEach(() => {
+ jest.resetModules();
+ process.env = { ...env };
+ process.env['DEBUG'] = 'true';
+ });
+
+ afterEach(() => {
+ process.env = env;
+ });
+
+ test('body request object with Authorization header', function () {
+ // Test request body includes headers object with Authorization
+ const headersTest = {
+ headers: {
+ Authorization: 'fakeAuthorization',
+ },
+ };
+ debug('request', headersTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ headers: {
+ Authorization: 'REDACTED',
+ },
+ });
+ });
+
+ test('body request object with api-key header', function () {
+ // Test request body includes headers object with api-key
+ const apiKeyTest = {
+ headers: {
+ 'api-key': 'fakeKey',
+ },
+ };
+ debug('request', apiKeyTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ headers: {
+ 'api-key': 'REDACTED',
+ },
+ });
+ });
+
+ test('header object with Authorization header', function () {
+ // Test headers object with authorization header
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ debug('request', authorizationTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+
+ test('input args are not mutated', function () {
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ defaultHeaders: authorizationTest,
+ apiKey: 'api-key',
+ });
+
+ const { req } = client.buildRequest({ path: '/foo', method: 'post' });
+ debug('request', authorizationTest);
+ expect((req.headers as Headers)['authorization']).toEqual('fakeValue');
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+
+ test('input headers are not mutated', function () {
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ defaultHeaders: authorizationTest,
+ apiKey: 'api-key',
+ });
+
+ const { req } = client.buildRequest({ path: '/foo', method: 'post' });
+ debug('request', { headers: req.headers });
+ expect((req.headers as Headers)['authorization']).toEqual('fakeValue');
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+});
Description

This PR introduces Realtime API support to the OpenAI Node.js SDK and includes several bug fixes and improvements. The Realtime API enables low-latency, multi-modal conversational experiences through WebSocket connections, supporting text and audio as both input and output.

Security Hotspots

Changes
```mermaid
sequenceDiagram
    participant Client
    participant OpenAIRealtime
    participant WebSocket
    participant OpenAI API

    Client->>OpenAIRealtime: new OpenAIRealtimeWS()
    OpenAIRealtime->>WebSocket: Create connection
    WebSocket->>OpenAI API: Connect with API key
    OpenAI API-->>WebSocket: Connection established

    Client->>OpenAIRealtime: send(session.update)
    OpenAIRealtime->>WebSocket: Send event
    WebSocket->>OpenAI API: Update session

    Client->>OpenAIRealtime: send(conversation.item.create)
    OpenAIRealtime->>WebSocket: Send event
    WebSocket->>OpenAI API: Create conversation item
    OpenAI API-->>WebSocket: Stream responses
    WebSocket-->>OpenAIRealtime: Emit events
    OpenAIRealtime-->>Client: Handle response events

    Client->>OpenAIRealtime: close()
    OpenAIRealtime->>WebSocket: Close connection
    WebSocket->>OpenAI API: Terminate connection
```
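The client events in the diagram are plain JSON payloads; `send()` in both transports boils down to `JSON.stringify` over the socket. A sketch of the frames involved (event `type` names from the diff; field values are illustrative):

```typescript
// Illustrative client-event payloads for the flow in the sequence diagram.
const sessionUpdate = {
  type: 'session.update',
  session: { modalities: ['text'], model: 'gpt-4o-realtime-preview' },
};

const itemCreate = {
  type: 'conversation.item.create',
  item: {
    type: 'message',
    role: 'user',
    content: [{ type: 'input_text', text: 'Say hello!' }],
  },
};

const responseCreate = { type: 'response.create' };

// What actually goes over the wire:
const frames = [sessionUpdate, itemCreate, responseCreate].map((e) => JSON.stringify(e));
console.log(JSON.parse(frames[1]).item.role); // user
```

Server events travel back the same way; the `message` handlers in the diff `JSON.parse` each frame and dispatch on its `type` field.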
bedrock debug - [puLL-Merge] - openai/openai-node@v4.78.1..v4.79.4 Diffdiff --git .release-please-manifest.json .release-please-manifest.json
index 3218ab333..b1ab5c7b9 100644
--- .release-please-manifest.json
+++ .release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "4.78.1"
+ ".": "4.79.4"
}
diff --git CHANGELOG.md CHANGELOG.md
index 320d00140..4254a9b8f 100644
--- CHANGELOG.md
+++ CHANGELOG.md
@@ -1,5 +1,73 @@
# Changelog
+## 4.79.4 (2025-01-21)
+
+Full Changelog: [v4.79.3...v4.79.4](https://github.com/openai/openai-node/compare/v4.79.3...v4.79.4)
+
+### Bug Fixes
+
+* **jsr:** correct zod config ([e45fa5f](https://github.com/openai/openai-node/commit/e45fa5f535ca74789636001e60e33edcad4db83c))
+
+
+### Chores
+
+* **internal:** minor restructuring ([#1278](https://github.com/openai/openai-node/issues/1278)) ([58ea92a](https://github.com/openai/openai-node/commit/58ea92a7464a04223f24ba31dbc0f7d0cf99cc19))
+
+
+### Documentation
+
+* update deprecation messages ([#1275](https://github.com/openai/openai-node/issues/1275)) ([1c6599e](https://github.com/openai/openai-node/commit/1c6599e47ef75a71cb309a1e14d97bc97bd036d0))
+
+## 4.79.3 (2025-01-21)
+
+Full Changelog: [v4.79.2...v4.79.3](https://github.com/openai/openai-node/compare/v4.79.2...v4.79.3)
+
+### Bug Fixes
+
+* **jsr:** export zod helpers ([9dc55b6](https://github.com/openai/openai-node/commit/9dc55b62b564ad5ad1d4a60fe520b68235d05296))
+
+## 4.79.2 (2025-01-21)
+
+Full Changelog: [v4.79.1...v4.79.2](https://github.com/openai/openai-node/compare/v4.79.1...v4.79.2)
+
+### Chores
+
+* **internal:** add test ([#1270](https://github.com/openai/openai-node/issues/1270)) ([b7c2d3d](https://github.com/openai/openai-node/commit/b7c2d3d9abd315f1452a578b0fd0d82e6ac4ff60))
+
+
+### Documentation
+
+* **readme:** fix Realtime API example link ([#1272](https://github.com/openai/openai-node/issues/1272)) ([d0653c7](https://github.com/openai/openai-node/commit/d0653c7fef48360d137a7411dfdfb95d477cdbc5))
+
+## 4.79.1 (2025-01-17)
+
+Full Changelog: [v4.79.0...v4.79.1](https://github.com/openai/openai-node/compare/v4.79.0...v4.79.1)
+
+### Bug Fixes
+
+* **realtime:** correct import syntax ([#1267](https://github.com/openai/openai-node/issues/1267)) ([74702a7](https://github.com/openai/openai-node/commit/74702a739f566810d2b6c4e0832cfa17a1d1e272))
+
+## 4.79.0 (2025-01-17)
+
+Full Changelog: [v4.78.1...v4.79.0](https://github.com/openai/openai-node/compare/v4.78.1...v4.79.0)
+
+### Features
+
+* **client:** add Realtime API support ([#1266](https://github.com/openai/openai-node/issues/1266)) ([7160ebe](https://github.com/openai/openai-node/commit/7160ebe647769fbf48a600c9961d1a6f86dc9622))
+
+
+### Bug Fixes
+
+* **logs/azure:** redact sensitive header when DEBUG is set ([#1218](https://github.com/openai/openai-node/issues/1218)) ([6a72fd7](https://github.com/openai/openai-node/commit/6a72fd736733db19504a829bf203b39d5b9e3644))
+
+
+### Chores
+
+* fix streaming ([379c743](https://github.com/openai/openai-node/commit/379c7435ed5d508458e9cdc22386039b84fcec5e))
+* **internal:** streaming refactors ([#1261](https://github.com/openai/openai-node/issues/1261)) ([dd4af93](https://github.com/openai/openai-node/commit/dd4af939792583854a313367c5fe2f98eea2f3c8))
+* **types:** add `| undefined` to client options properties ([#1264](https://github.com/openai/openai-node/issues/1264)) ([5e56979](https://github.com/openai/openai-node/commit/5e569799b9ac8f915b16de90d91d38b568c1edce))
+* **types:** rename vector store chunking strategy ([#1263](https://github.com/openai/openai-node/issues/1263)) ([d31acee](https://github.com/openai/openai-node/commit/d31acee860c80ba945d4e70b956c7ed75f5f849a))
+
## 4.78.1 (2025-01-10)
Full Changelog: [v4.78.0...v4.78.1](https://github.com/openai/openai-node/compare/v4.78.0...v4.78.1)
diff --git README.md README.md
index 3039857a1..3bd386e99 100644
--- README.md
+++ README.md
@@ -83,6 +83,93 @@ main();
If you need to cancel a stream, you can `break` from the loop
or call `stream.controller.abort()`.
+## Realtime API beta
+
+The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a `WebSocket` connection.
+
+The Realtime API works through a combination of client-sent events and server-sent events. Clients can send events to do things like update session configuration or send text and audio inputs. Server events confirm when audio responses have completed, or when a text response from the model has been received. A full event reference can be found [here](https://platform.openai.com/docs/api-reference/realtime-client-events) and a guide can be found [here](https://platform.openai.com/docs/guides/realtime).
+
+This SDK supports accessing the Realtime API through the [WebSocket API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) or with [ws](https://github.com/websockets/ws).
+
+Basic text based example with `ws`:
+
+\`\`\`ts
+// requires `yarn add ws @types/ws`
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+
+const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+// access the underlying `ws.WebSocket` instance
+rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+});
+
+rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue procesing events regardless of any errors
+ throw err;
+});
+
+rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+});
+
+rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+rt.on('response.text.done', () => console.log());
+
+rt.on('response.done', () => rt.close());
+
+rt.socket.on('close', () => console.log('\nConnection closed!'));
+```
+
+To use the web API `WebSocket` implementation, replace `OpenAIRealtimeWS` with `OpenAIRealtimeWebSocket` and adjust any `rt.socket` access:
+
+```ts
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+
+const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+// ...
+rt.socket.addEventListener('open', () => {
+ // ...
+});
+```
+
+A full example can be found [here](https://github.com/openai/openai-node/blob/master/examples/realtime/websocket.ts).
+
+### Realtime error handling
+
+When an error is encountered, either on the client side or returned from the server through the [`error` event](https://platform.openai.com/docs/guides/realtime/realtime-api-beta#handling-errors), the `error` event listener will be fired. However, if you haven't registered an `error` event listener, an unhandled promise rejection will be raised instead.
+
+It is **highly recommended** that you register an `error` event listener and handle errors appropriately, as the underlying connection is typically still usable.
+
+```ts
+const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+});
+```
+
### Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
diff --git api.md api.md
index a885628a3..33ab95ef6 100644
--- api.md
+++ api.md
@@ -283,7 +283,7 @@ Types:
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">OtherFileChunkingStrategyObject</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategy</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObject</a></code>
-- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyParam</a></code>
+- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">StaticFileChunkingStrategyObjectParam</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStore</a></code>
- <code><a href="./src/resources/beta/vector-stores/vector-stores.ts">VectorStoreDeleted</a></code>
diff --git examples/package.json examples/package.json
index c8a5f7087..b8c34ac45 100644
--- examples/package.json
+++ examples/package.json
@@ -6,14 +6,15 @@
"license": "MIT",
"private": true,
"dependencies": {
+ "@azure/identity": "^4.2.0",
"express": "^4.18.2",
"next": "^14.1.1",
"openai": "file:..",
- "zod-to-json-schema": "^3.21.4",
- "@azure/identity": "^4.2.0"
+ "zod-to-json-schema": "^3.21.4"
},
"devDependencies": {
"@types/body-parser": "^1.19.3",
- "@types/express": "^4.17.19"
+ "@types/express": "^4.17.19",
+ "@types/web": "^0.0.194"
}
}
diff --git a/examples/realtime/websocket.ts b/examples/realtime/websocket.ts
new file mode 100644
index 000000000..0da131bc3
--- /dev/null
+++ examples/realtime/websocket.ts
@@ -0,0 +1,48 @@
+import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';
+
+async function main() {
+ const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+ // access the underlying `WebSocket` instance
+ rt.socket.addEventListener('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.addEventListener('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git a/examples/realtime/ws.ts b/examples/realtime/ws.ts
new file mode 100644
index 000000000..4bbe85e5d
--- /dev/null
+++ examples/realtime/ws.ts
@@ -0,0 +1,55 @@
+import { OpenAIRealtimeWS } from 'openai/beta/realtime/ws';
+
+async function main() {
+ const rt = new OpenAIRealtimeWS({ model: 'gpt-4o-realtime-preview-2024-12-17' });
+
+ // access the underlying `ws.WebSocket` instance
+ rt.socket.on('open', () => {
+ console.log('Connection opened!');
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['foo'] as any,
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+ rt.send({
+ type: 'session.update',
+ session: {
+ modalities: ['text'],
+ model: 'gpt-4o-realtime-preview',
+ },
+ });
+
+ rt.send({
+ type: 'conversation.item.create',
+ item: {
+ type: 'message',
+ role: 'user',
+ content: [{ type: 'input_text', text: 'Say a couple paragraphs!' }],
+ },
+ });
+
+ rt.send({ type: 'response.create' });
+ });
+
+ rt.on('error', (err) => {
+ // in a real world scenario this should be logged somewhere as you
+ // likely want to continue processing events regardless of any errors
+ throw err;
+ });
+
+ rt.on('session.created', (event) => {
+ console.log('session created!', event.session);
+ console.log();
+ });
+
+ rt.on('response.text.delta', (event) => process.stdout.write(event.delta));
+ rt.on('response.text.done', () => console.log());
+
+ rt.on('response.done', () => rt.close());
+
+ rt.socket.on('close', () => console.log('\nConnection closed!'));
+}
+
+main();
diff --git jsr.json jsr.json
index 257faa02d..e6d772116 100644
--- jsr.json
+++ jsr.json
@@ -1,7 +1,14 @@
{
"name": "@openai/openai",
- "version": "4.78.1",
- "exports": "./index.ts",
+ "version": "4.79.4",
+ "exports": {
+ ".": "./index.ts",
+ "./helpers/zod": "./helpers/zod.ts",
+ "./beta/realtime/websocket": "./beta/realtime/websocket.ts"
+ },
+ "imports": {
+ "zod": "npm:zod@3"
+ },
"publish": {
"exclude": [
"!."
diff --git package.json package.json
index ff6ec16bc..d7a5555e5 100644
--- package.json
+++ package.json
@@ -1,6 +1,6 @@
{
"name": "openai",
- "version": "4.78.1",
+ "version": "4.79.4",
"description": "The official TypeScript library for the OpenAI API",
"author": "OpenAI <support@openai.com>",
"types": "dist/index.d.ts",
@@ -36,6 +36,7 @@
"@swc/core": "^1.3.102",
"@swc/jest": "^0.2.29",
"@types/jest": "^29.4.0",
+ "@types/ws": "^8.5.13",
"@typescript-eslint/eslint-plugin": "^6.7.0",
"@typescript-eslint/parser": "^6.7.0",
"eslint": "^8.49.0",
@@ -52,6 +53,7 @@
"tsc-multi": "^1.1.0",
"tsconfig-paths": "^4.0.0",
"typescript": "^4.8.2",
+ "ws": "^8.18.0",
"zod": "^3.23.8"
},
"sideEffects": [
@@ -126,9 +128,13 @@
},
"bin": "./bin/cli",
"peerDependencies": {
+ "ws": "^8.18.0",
"zod": "^3.23.8"
},
"peerDependenciesMeta": {
+ "ws": {
+ "optional": true
+ },
"zod": {
"optional": true
}
diff --git a/src/beta/realtime/index.ts b/src/beta/realtime/index.ts
new file mode 100644
index 000000000..75f0f3088
--- /dev/null
+++ src/beta/realtime/index.ts
@@ -0,0 +1 @@
+export { OpenAIRealtimeError } from './internal-base';
diff --git a/src/beta/realtime/internal-base.ts b/src/beta/realtime/internal-base.ts
new file mode 100644
index 000000000..391d69911
--- /dev/null
+++ src/beta/realtime/internal-base.ts
@@ -0,0 +1,83 @@
+import { RealtimeClientEvent, RealtimeServerEvent, ErrorEvent } from '../../resources/beta/realtime/realtime';
+import { EventEmitter } from '../../lib/EventEmitter';
+import { OpenAIError } from '../../error';
+
+export class OpenAIRealtimeError extends OpenAIError {
+ /**
+ * The error data that the API sent back in an `error` event.
+ */
+ error?: ErrorEvent.Error | undefined;
+
+ /**
+ * The unique ID of the server event.
+ */
+ event_id?: string | undefined;
+
+ constructor(message: string, event: ErrorEvent | null) {
+ super(message);
+
+ this.error = event?.error;
+ this.event_id = event?.event_id;
+ }
+}
+
+type Simplify<T> = { [KeyType in keyof T]: T[KeyType] } & {};
+
+type RealtimeEvents = Simplify<
+ {
+ event: (event: RealtimeServerEvent) => void;
+ error: (error: OpenAIRealtimeError) => void;
+ } & {
+ [EventType in Exclude<RealtimeServerEvent['type'], 'error'>]: (
+ event: Extract<RealtimeServerEvent, { type: EventType }>,
+ ) => unknown;
+ }
+>;
+
+export abstract class OpenAIRealtimeEmitter extends EventEmitter<RealtimeEvents> {
+ /**
+ * Send an event to the API.
+ */
+ abstract send(event: RealtimeClientEvent): void;
+
+ /**
+ * Close the websocket connection.
+ */
+ abstract close(props?: { code: number; reason: string }): void;
+
+ protected _onError(event: null, message: string, cause: any): void;
+ protected _onError(event: ErrorEvent, message?: string | undefined): void;
+ protected _onError(event: ErrorEvent | null, message?: string | undefined, cause?: any): void {
+ message =
+ event?.error ?
+ `${event.error.message} code=${event.error.code} param=${event.error.param} type=${event.error.type} event_id=${event.error.event_id}`
+ : message ?? 'unknown error';
+
+ if (!this._hasListener('error')) {
+ const error = new OpenAIRealtimeError(
+ message +
+ `\n\nTo resolve these unhandled rejection errors you should bind an \`error\` callback, e.g. \`rt.on('error', (error) => ...)\` `,
+ event,
+ );
+ // @ts-ignore
+ error.cause = cause;
+ Promise.reject(error);
+ return;
+ }
+
+ const error = new OpenAIRealtimeError(message, event);
+ // @ts-ignore
+ error.cause = cause;
+
+ this._emit('error', error);
+ }
+}
+
+export function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
+ const path = '/realtime';
+
+ const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
+ url.protocol = 'wss';
+ url.searchParams.set('model', props.model);
+ return url;
+}
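The URL construction at the end of this hunk is self-contained enough to exercise on its own. The sketch below reproduces the diff's logic; the sample base URL is illustrative, not from the diff:

```typescript
// Standalone sketch of the `buildRealtimeURL` helper added in this diff.
// The `wss` protocol swap and `model` query parameter mirror the diff.
function buildRealtimeURL(props: { baseURL: string; model: string }): URL {
  const path = '/realtime';
  const url = new URL(props.baseURL + (props.baseURL.endsWith('/') ? path.slice(1) : path));
  url.protocol = 'wss'; // https -> wss is a special-scheme swap the URL parser allows
  url.searchParams.set('model', props.model);
  return url;
}

console.log(buildRealtimeURL({ baseURL: 'https://api.openai.com/v1', model: 'gpt-4o-realtime-preview' }).toString());
// → wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview
```

The trailing-slash check avoids producing a double slash when `baseURL` already ends in `/`.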
diff --git a/src/beta/realtime/websocket.ts b/src/beta/realtime/websocket.ts
new file mode 100644
index 000000000..e0853779d
--- /dev/null
+++ src/beta/realtime/websocket.ts
@@ -0,0 +1,97 @@
+import { OpenAI } from '../../index';
+import { OpenAIError } from '../../error';
+import * as Core from '../../core';
+import type { RealtimeClientEvent, RealtimeServerEvent } from '../../resources/beta/realtime/realtime';
+import { OpenAIRealtimeEmitter, buildRealtimeURL } from './internal-base';
+
+interface MessageEvent {
+ data: string;
+}
+
+type _WebSocket =
+ typeof globalThis extends (
+ {
+ WebSocket: infer ws;
+ }
+ ) ?
+ // @ts-ignore
+ InstanceType<ws>
+ : any;
+
+export class OpenAIRealtimeWebSocket extends OpenAIRealtimeEmitter {
+ url: URL;
+ socket: _WebSocket;
+
+ constructor(
+ props: {
+ model: string;
+ dangerouslyAllowBrowser?: boolean;
+ },
+ client?: Pick<OpenAI, 'apiKey' | 'baseURL'>,
+ ) {
+ super();
+
+ const dangerouslyAllowBrowser =
+ props.dangerouslyAllowBrowser ??
+ (client as any)?._options?.dangerouslyAllowBrowser ??
+ (client?.apiKey.startsWith('ek_') ? true : null);
+
+ if (!dangerouslyAllowBrowser && Core.isRunningInBrowser()) {
+ throw new OpenAIError(
+ "It looks like you're running in a browser-like environment.\n\nThis is disabled by default, as it risks exposing your secret API credentials to attackers.\n\nYou can avoid this error by creating an ephemeral session token:\nhttps://platform.openai.com/docs/api-reference/realtime-sessions\n",
+ );
+ }
+
+ client ??= new OpenAI({ dangerouslyAllowBrowser });
+
+ this.url = buildRealtimeURL({ baseURL: client.baseURL, model: props.model });
+ // @ts-ignore
+ this.socket = new WebSocket(this.url, [
+ 'realtime',
+ `openai-insecure-api-key.${client.apiKey}`,
+ 'openai-beta.realtime-v1',
+ ]);
+
+ this.socket.addEventListener('message', (websocketEvent: MessageEvent) => {
+ const event = (() => {
+ try {
+ return JSON.parse(websocketEvent.data.toString()) as RealtimeServerEvent;
+ } catch (err) {
+ this._onError(null, 'could not parse websocket event', err);
+ return null;
+ }
+ })();
+
+ if (event) {
+ this._emit('event', event);
+
+ if (event.type === 'error') {
+ this._onError(event);
+ } else {
+ // @ts-expect-error TS isn't smart enough to get the relationship right here
+ this._emit(event.type, event);
+ }
+ }
+ });
+
+ this.socket.addEventListener('error', (event: any) => {
+ this._onError(null, event.message, null);
+ });
+ }
+
+ send(event: RealtimeClientEvent) {
+ try {
+ this.socket.send(JSON.stringify(event));
+ } catch (err) {
+ this._onError(null, 'could not send data', err);
+ }
+ }
+
+ close(props?: { code: number; reason: string }) {
+ try {
+ this.socket.close(props?.code ?? 1000, props?.reason ?? 'OK');
+ } catch (err) {
+ this._onError(null, 'could not close the connection', err);
+ }
+ }
+}
diff --git a/src/beta/realtime/ws.ts b/src/beta/realtime/ws.ts
new file mode 100644
index 000000000..631a36cd2
--- /dev/null
+++ src/beta/realtime/ws.ts
@@ -0,0 +1,69 @@
+import * as WS from 'ws';
+import { OpenAI } from '../../index';
+import type { RealtimeClientEvent, RealtimeServerEvent } from '../../resources/beta/realtime/realtime';
+import { OpenAIRealtimeEmitter, buildRealtimeURL } from './internal-base';
+
+export class OpenAIRealtimeWS extends OpenAIRealtimeEmitter {
+ url: URL;
+ socket: WS.WebSocket;
+
+ constructor(
+ props: { model: string; options?: WS.ClientOptions | undefined },
+ client?: Pick<OpenAI, 'apiKey' | 'baseURL'>,
+ ) {
+ super();
+ client ??= new OpenAI();
+
+ this.url = buildRealtimeURL({ baseURL: client.baseURL, model: props.model });
+ this.socket = new WS.WebSocket(this.url, {
+ ...props.options,
+ headers: {
+ ...props.options?.headers,
+ Authorization: `Bearer ${client.apiKey}`,
+ 'OpenAI-Beta': 'realtime=v1',
+ },
+ });
+
+ this.socket.on('message', (wsEvent) => {
+ const event = (() => {
+ try {
+ return JSON.parse(wsEvent.toString()) as RealtimeServerEvent;
+ } catch (err) {
+ this._onError(null, 'could not parse websocket event', err);
+ return null;
+ }
+ })();
+
+ if (event) {
+ this._emit('event', event);
+
+ if (event.type === 'error') {
+ this._onError(event);
+ } else {
+ // @ts-expect-error TS isn't smart enough to get the relationship right here
+ this._emit(event.type, event);
+ }
+ }
+ });
+
+ this.socket.on('error', (err) => {
+ this._onError(null, err.message, err);
+ });
+ }
+
+ send(event: RealtimeClientEvent) {
+ try {
+ this.socket.send(JSON.stringify(event));
+ } catch (err) {
+ this._onError(null, 'could not send data', err);
+ }
+ }
+
+ close(props?: { code: number; reason: string }) {
+ try {
+ this.socket.close(props?.code ?? 1000, props?.reason ?? 'OK');
+ } catch (err) {
+ this._onError(null, 'could not close the connection', err);
+ }
+ }
+}
diff --git src/core.ts src/core.ts
index 972cceaec..3d2d029a5 100644
--- src/core.ts
+++ src/core.ts
@@ -1148,9 +1148,43 @@ function applyHeadersMut(targetHeaders: Headers, newHeaders: Headers): void {
}
}
+const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);
+
export function debug(action: string, ...args: any[]) {
if (typeof process !== 'undefined' && process?.env?.['DEBUG'] === 'true') {
- console.log(`OpenAI:DEBUG:${action}`, ...args);
+ const modifiedArgs = args.map((arg) => {
+ if (!arg) {
+ return arg;
+ }
+
+ // Check for sensitive headers in request body 'headers' object
+ if (arg['headers']) {
+ // clone so we don't mutate
+ const modifiedArg = { ...arg, headers: { ...arg['headers'] } };
+
+ for (const header in arg['headers']) {
+ if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
+ modifiedArg['headers'][header] = 'REDACTED';
+ }
+ }
+
+ return modifiedArg;
+ }
+
+ let modifiedArg = null;
+
+ // Check for sensitive headers in headers object
+ for (const header in arg) {
+ if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
+ // avoid making a copy until we need to
+ modifiedArg ??= { ...arg };
+ modifiedArg[header] = 'REDACTED';
+ }
+ }
+
+ return modifiedArg ?? arg;
+ });
+ console.log(`OpenAI:DEBUG:${action}`, ...modifiedArgs);
}
}
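The redaction this hunk adds to `debug()` can be sketched in isolation. `redactHeaders` below is a hypothetical helper name for illustration; the `SENSITIVE_HEADERS` set and the copy-on-write loop mirror the diff:

```typescript
// Hypothetical standalone sketch of the header redaction added to debug() above.
const SENSITIVE_HEADERS = new Set(['authorization', 'api-key']);

function redactHeaders(arg: Record<string, unknown>): Record<string, unknown> {
  let modified: Record<string, unknown> | null = null;
  for (const header in arg) {
    if (SENSITIVE_HEADERS.has(header.toLowerCase())) {
      // avoid making a copy until something actually needs redacting
      modified ??= { ...arg };
      modified[header] = 'REDACTED';
    }
  }
  return modified ?? arg;
}

console.log(redactHeaders({ Authorization: 'Bearer sk-...', Accept: 'application/json' }));
// → { Authorization: 'REDACTED', Accept: 'application/json' }
```

Note the case-insensitive match and that the original object is returned untouched when nothing is sensitive, so the common path allocates nothing.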
diff --git src/index.ts src/index.ts
index 2320850fb..cf6aa89e3 100644
--- src/index.ts
+++ src/index.ts
@@ -137,7 +137,7 @@ export interface ClientOptions {
* Note that request timeouts are retried by default, so in a worst-case scenario you may wait
* much longer than this timeout before the promise succeeds or fails.
*/
- timeout?: number;
+ timeout?: number | undefined;
/**
* An HTTP agent used to manage HTTP(S) connections.
@@ -145,7 +145,7 @@ export interface ClientOptions {
* If not provided, an agent will be constructed by default in the Node.js environment,
* otherwise no agent is used.
*/
- httpAgent?: Agent;
+ httpAgent?: Agent | undefined;
/**
* Specify a custom `fetch` function implementation.
@@ -161,7 +161,7 @@ export interface ClientOptions {
*
* @default 2
*/
- maxRetries?: number;
+ maxRetries?: number | undefined;
/**
* Default headers to include with every request to the API.
@@ -169,7 +169,7 @@ export interface ClientOptions {
* These can be removed in individual requests by explicitly setting the
* header to `undefined` or `null` in request options.
*/
- defaultHeaders?: Core.Headers;
+ defaultHeaders?: Core.Headers | undefined;
/**
* Default query parameters to include with every request to the API.
@@ -177,13 +177,13 @@ export interface ClientOptions {
* These can be removed in individual requests by explicitly setting the
* param to `undefined` in request options.
*/
- defaultQuery?: Core.DefaultQuery;
+ defaultQuery?: Core.DefaultQuery | undefined;
/**
* By default, client-side use of this library is not allowed, as it risks exposing your secret API credentials to attackers.
* Only set this option to `true` if you understand the risks and have appropriate mitigations in place.
*/
- dangerouslyAllowBrowser?: boolean;
+ dangerouslyAllowBrowser?: boolean | undefined;
}
/**
diff --git src/internal/decoders/line.ts src/internal/decoders/line.ts
index 1e0bbf390..34e41d1dc 100644
--- src/internal/decoders/line.ts
+++ src/internal/decoders/line.ts
@@ -1,6 +1,6 @@
import { OpenAIError } from '../../error';
-type Bytes = string | ArrayBuffer | Uint8Array | Buffer | null | undefined;
+export type Bytes = string | ArrayBuffer | Uint8Array | Buffer | null | undefined;
/**
* A re-implementation of httpx's `LineDecoder` in Python that handles incrementally
diff --git a/src/internal/stream-utils.ts b/src/internal/stream-utils.ts
new file mode 100644
index 000000000..37f7793cf
--- /dev/null
+++ src/internal/stream-utils.ts
@@ -0,0 +1,32 @@
+/**
+ * Most browsers don't yet have async iterable support for ReadableStream,
+ * and Node has a very different way of reading bytes from its "ReadableStream".
+ *
+ * This polyfill was pulled from https://github.com/MattiasBuelens/web-streams-polyfill/pull/122#issuecomment-1627354490
+ */
+export function ReadableStreamToAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
+ if (stream[Symbol.asyncIterator]) return stream;
+
+ const reader = stream.getReader();
+ return {
+ async next() {
+ try {
+ const result = await reader.read();
+ if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
+ return result;
+ } catch (e) {
+ reader.releaseLock(); // release lock when stream becomes errored
+ throw e;
+ }
+ },
+ async return() {
+ const cancelPromise = reader.cancel();
+ reader.releaseLock();
+ await cancelPromise;
+ return { done: true, value: undefined };
+ },
+ [Symbol.asyncIterator]() {
+ return this;
+ },
+ };
+}
diff --git a/src/lib/EventEmitter.ts b/src/lib/EventEmitter.ts
new file mode 100644
index 000000000..9adeebdc3
--- /dev/null
+++ src/lib/EventEmitter.ts
@@ -0,0 +1,98 @@
+type EventListener<Events, EventType extends keyof Events> = Events[EventType];
+
+type EventListeners<Events, EventType extends keyof Events> = Array<{
+ listener: EventListener<Events, EventType>;
+ once?: boolean;
+}>;
+
+export type EventParameters<Events, EventType extends keyof Events> = {
+ [Event in EventType]: EventListener<Events, EventType> extends (...args: infer P) => any ? P : never;
+}[EventType];
+
+export class EventEmitter<EventTypes extends Record<string, (...args: any) => any>> {
+ #listeners: {
+ [Event in keyof EventTypes]?: EventListeners<EventTypes, Event>;
+ } = {};
+
+ /**
+ * Adds the listener function to the end of the listeners array for the event.
+ * No checks are made to see if the listener has already been added. Multiple calls passing
+ * the same combination of event and listener will result in the listener being added, and
+ * called, multiple times.
+ * @returns this, so that calls can be chained
+ */
+ on<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners: EventListeners<EventTypes, Event> =
+ this.#listeners[event] || (this.#listeners[event] = []);
+ listeners.push({ listener });
+ return this;
+ }
+
+ /**
+ * Removes the specified listener from the listener array for the event.
+ * off() will remove, at most, one instance of a listener from the listener array. If any single
+ * listener has been added multiple times to the listener array for the specified event, then
+ * off() must be called multiple times to remove each instance.
+ * @returns this, so that calls can be chained
+ */
+ off<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners = this.#listeners[event];
+ if (!listeners) return this;
+ const index = listeners.findIndex((l) => l.listener === listener);
+ if (index >= 0) listeners.splice(index, 1);
+ return this;
+ }
+
+ /**
+ * Adds a one-time listener function for the event. The next time the event is triggered,
+ * this listener is removed and then invoked.
+ * @returns this, so that calls can be chained
+ */
+ once<Event extends keyof EventTypes>(event: Event, listener: EventListener<EventTypes, Event>): this {
+ const listeners: EventListeners<EventTypes, Event> =
+ this.#listeners[event] || (this.#listeners[event] = []);
+ listeners.push({ listener, once: true });
+ return this;
+ }
+
+ /**
+ * This is similar to `.once()`, but returns a Promise that resolves the next time
+ * the event is triggered, instead of calling a listener callback.
+ * @returns a Promise that resolves the next time given event is triggered,
+ * or rejects if an error is emitted. (If you request the 'error' event,
+ * returns a promise that resolves with the error).
+ *
+ * Example:
+ *
+ * const message = await stream.emitted('message') // rejects if the stream errors
+ */
+ emitted<Event extends keyof EventTypes>(
+ event: Event,
+ ): Promise<
+ EventParameters<EventTypes, Event> extends [infer Param] ? Param
+ : EventParameters<EventTypes, Event> extends [] ? void
+ : EventParameters<EventTypes, Event>
+ > {
+ return new Promise((resolve, reject) => {
+ // TODO: handle errors
+ this.once(event, resolve as any);
+ });
+ }
+
+ protected _emit<Event extends keyof EventTypes>(
+ this: EventEmitter<EventTypes>,
+ event: Event,
+ ...args: EventParameters<EventTypes, Event>
+ ) {
+ const listeners: EventListeners<EventTypes, Event> | undefined = this.#listeners[event];
+ if (listeners) {
+ this.#listeners[event] = listeners.filter((l) => !l.once) as any;
+ listeners.forEach(({ listener }: any) => listener(...(args as any)));
+ }
+ }
+
+ protected _hasListener(event: keyof EventTypes): boolean {
+ const listeners = this.#listeners[event];
+ return listeners && listeners.length > 0;
+ }
+}
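A minimal sketch of the once-listener semantics `_emit` implements above (event names elided; `TinyEmitter` is illustrative, not part of the SDK). Once-listeners are removed from the registry before the current batch is invoked, so they fire exactly once even if `emit` runs reentrantly:

```typescript
// Illustrative reduction of the EventEmitter above: once-listeners are
// filtered out of the registry before the current batch is invoked.
type Listener = (...args: any[]) => void;

class TinyEmitter {
  private listeners: Array<{ listener: Listener; once?: boolean }> = [];

  on(listener: Listener): this {
    this.listeners.push({ listener });
    return this;
  }
  once(listener: Listener): this {
    this.listeners.push({ listener, once: true });
    return this;
  }
  emit(...args: any[]): void {
    const batch = this.listeners;
    this.listeners = batch.filter((l) => !l.once); // drop once-listeners first
    batch.forEach(({ listener }) => listener(...args));
  }
}

const em = new TinyEmitter();
let fired = 0;
em.once(() => fired++);
em.emit();
em.emit();
console.log(fired); // → 1
```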
diff --git src/resources/beta/beta.ts src/resources/beta/beta.ts
index ccd043243..df929b2f7 100644
--- src/resources/beta/beta.ts
+++ src/resources/beta/beta.ts
@@ -48,7 +48,7 @@ import {
OtherFileChunkingStrategyObject,
StaticFileChunkingStrategy,
StaticFileChunkingStrategyObject,
- StaticFileChunkingStrategyParam,
+ StaticFileChunkingStrategyObjectParam,
VectorStore,
VectorStoreCreateParams,
VectorStoreDeleted,
@@ -85,7 +85,7 @@ export declare namespace Beta {
type OtherFileChunkingStrategyObject as OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy as StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject as StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam as StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam as StaticFileChunkingStrategyObjectParam,
type VectorStore as VectorStore,
type VectorStoreDeleted as VectorStoreDeleted,
VectorStoresPage as VectorStoresPage,
diff --git src/resources/beta/index.ts src/resources/beta/index.ts
index aa2e52d4c..babca0016 100644
--- src/resources/beta/index.ts
+++ src/resources/beta/index.ts
@@ -46,7 +46,7 @@ export {
type OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam,
type VectorStore,
type VectorStoreDeleted,
type VectorStoreCreateParams,
diff --git src/resources/beta/vector-stores/index.ts src/resources/beta/vector-stores/index.ts
index 89fc0cde0..d587bd160 100644
--- src/resources/beta/vector-stores/index.ts
+++ src/resources/beta/vector-stores/index.ts
@@ -23,7 +23,7 @@ export {
type OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam,
type VectorStore,
type VectorStoreDeleted,
type VectorStoreCreateParams,
diff --git src/resources/beta/vector-stores/vector-stores.ts src/resources/beta/vector-stores/vector-stores.ts
index 35ad8c369..cbff2d562 100644
--- src/resources/beta/vector-stores/vector-stores.ts
+++ src/resources/beta/vector-stores/vector-stores.ts
@@ -116,7 +116,7 @@ export type FileChunkingStrategy = StaticFileChunkingStrategyObject | OtherFileC
* The chunking strategy used to chunk the file(s). If not set, will use the `auto`
* strategy. Only applicable if `file_ids` is non-empty.
*/
-export type FileChunkingStrategyParam = AutoFileChunkingStrategyParam | StaticFileChunkingStrategyParam;
+export type FileChunkingStrategyParam = AutoFileChunkingStrategyParam | StaticFileChunkingStrategyObjectParam;
/**
* This is returned when the chunking strategy is unknown. Typically, this is
@@ -154,7 +154,7 @@ export interface StaticFileChunkingStrategyObject {
type: 'static';
}
-export interface StaticFileChunkingStrategyParam {
+export interface StaticFileChunkingStrategyObjectParam {
static: StaticFileChunkingStrategy;
/**
@@ -397,7 +397,7 @@ export declare namespace VectorStores {
type OtherFileChunkingStrategyObject as OtherFileChunkingStrategyObject,
type StaticFileChunkingStrategy as StaticFileChunkingStrategy,
type StaticFileChunkingStrategyObject as StaticFileChunkingStrategyObject,
- type StaticFileChunkingStrategyParam as StaticFileChunkingStrategyParam,
+ type StaticFileChunkingStrategyObjectParam as StaticFileChunkingStrategyObjectParam,
type VectorStore as VectorStore,
type VectorStoreDeleted as VectorStoreDeleted,
VectorStoresPage as VectorStoresPage,
diff --git src/resources/chat/completions.ts src/resources/chat/completions.ts
index 31f5814cb..88c778036 100644
--- src/resources/chat/completions.ts
+++ src/resources/chat/completions.ts
@@ -163,8 +163,8 @@ export interface ChatCompletionAssistantMessageParam {
content?: string | Array<ChatCompletionContentPartText | ChatCompletionContentPartRefusal> | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: ChatCompletionAssistantMessageParam.FunctionCall | null;
@@ -198,8 +198,8 @@ export namespace ChatCompletionAssistantMessageParam {
}
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
@@ -360,8 +360,8 @@ export namespace ChatCompletionChunk {
content?: string | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: Delta.FunctionCall;
@@ -380,8 +380,8 @@ export namespace ChatCompletionChunk {
export namespace Delta {
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
@@ -620,8 +620,8 @@ export interface ChatCompletionMessage {
audio?: ChatCompletionAudio | null;
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
function_call?: ChatCompletionMessage.FunctionCall | null;
@@ -633,8 +633,8 @@ export interface ChatCompletionMessage {
export namespace ChatCompletionMessage {
/**
- * @deprecated: Deprecated and replaced by `tool_calls`. The name and arguments of
- * a function that should be called, as generated by the model.
+ * @deprecated Deprecated and replaced by `tool_calls`. The name and arguments of a
+ * function that should be called, as generated by the model.
*/
export interface FunctionCall {
/**
diff --git src/resources/files.ts src/resources/files.ts
index 43708310b..67bc95469 100644
--- src/resources/files.ts
+++ src/resources/files.ts
@@ -168,13 +168,13 @@ export interface FileObject {
| 'vision';
/**
- * @deprecated: Deprecated. The current status of the file, which can be either
+ * @deprecated Deprecated. The current status of the file, which can be either
* `uploaded`, `processed`, or `error`.
*/
status: 'uploaded' | 'processed' | 'error';
/**
- * @deprecated: Deprecated. For details on why a fine-tuning training file failed
+ * @deprecated Deprecated. For details on why a fine-tuning training file failed
* validation, see the `error` field on `fine_tuning.job`.
*/
status_details?: string;
diff --git src/resources/fine-tuning/jobs/jobs.ts src/resources/fine-tuning/jobs/jobs.ts
index 44dd011aa..9be03c302 100644
--- src/resources/fine-tuning/jobs/jobs.ts
+++ src/resources/fine-tuning/jobs/jobs.ts
@@ -516,7 +516,7 @@ export interface JobCreateParams {
export namespace JobCreateParams {
/**
- * @deprecated: The hyperparameters used for the fine-tuning job. This value is now
+ * @deprecated The hyperparameters used for the fine-tuning job. This value is now
* deprecated in favor of `method`, and should be passed in under the `method`
* parameter.
*/
diff --git src/streaming.ts src/streaming.ts
index 2891e6ac3..6a57a50a0 100644
--- src/streaming.ts
+++ src/streaming.ts
@@ -1,6 +1,7 @@
import { ReadableStream, type Response } from './_shims/index';
import { OpenAIError } from './error';
import { LineDecoder } from './internal/decoders/line';
+import { ReadableStreamToAsyncIterable } from './internal/stream-utils';
import { APIError } from './error';
@@ -96,7 +97,7 @@ export class Stream<Item> implements AsyncIterable<Item> {
async function* iterLines(): AsyncGenerator<string, void, unknown> {
const lineDecoder = new LineDecoder();
- const iter = readableStreamAsyncIterable<Bytes>(readableStream);
+ const iter = ReadableStreamToAsyncIterable<Bytes>(readableStream);
for await (const chunk of iter) {
for (const line of lineDecoder.decode(chunk)) {
yield line;
@@ -210,7 +211,7 @@ export async function* _iterSSEMessages(
const sseDecoder = new SSEDecoder();
const lineDecoder = new LineDecoder();
- const iter = readableStreamAsyncIterable<Bytes>(response.body);
+ const iter = ReadableStreamToAsyncIterable<Bytes>(response.body);
for await (const sseChunk of iterSSEChunks(iter)) {
for (const line of lineDecoder.decode(sseChunk)) {
const sse = sseDecoder.decode(line);
@@ -363,36 +364,3 @@ function partition(str: string, delimiter: string): [string, string, string] {
return [str, '', ''];
}
-
-/**
- * Most browsers don't yet have async iterable support for ReadableStream,
- * and Node has a very different way of reading bytes from its "ReadableStream".
- *
- * This polyfill was pulled from https://github.com/MattiasBuelens/web-streams-polyfill/pull/122#issuecomment-1627354490
- */
-export function readableStreamAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
- if (stream[Symbol.asyncIterator]) return stream;
-
- const reader = stream.getReader();
- return {
- async next() {
- try {
- const result = await reader.read();
- if (result?.done) reader.releaseLock(); // release lock when stream becomes closed
- return result;
- } catch (e) {
- reader.releaseLock(); // release lock when stream becomes errored
- throw e;
- }
- },
- async return() {
- const cancelPromise = reader.cancel();
- reader.releaseLock();
- await cancelPromise;
- return { done: true, value: undefined };
- },
- [Symbol.asyncIterator]() {
- return this;
- },
- };
-}
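The polyfill removed above now lives in `src/internal/stream-utils.ts` as `ReadableStreamToAsyncIterable`; only the location changed, not the behavior. A minimal TypeScript sketch of the same adapter pattern, with an in-memory usage example (the names `toAsyncIterable` and `collect` here are illustrative, not the SDK's internal identifiers):

```typescript
// Adapter: expose a WHATWG ReadableStream as an AsyncIterableIterator.
// Mirrors the polyfill above: prefer native async iteration when available,
// otherwise drive the stream through its reader, releasing the lock on
// completion, error, or early return.
function toAsyncIterable<T>(stream: any): AsyncIterableIterator<T> {
  if (stream[Symbol.asyncIterator]) return stream;

  const reader = stream.getReader();
  return {
    async next() {
      try {
        const result = await reader.read();
        if (result?.done) reader.releaseLock(); // release lock when stream closes
        return result;
      } catch (e) {
        reader.releaseLock(); // release lock when stream errors
        throw e;
      }
    },
    async return() {
      const cancelPromise = reader.cancel(); // early exit: cancel, then unlock
      reader.releaseLock();
      await cancelPromise;
      return { done: true, value: undefined };
    },
    [Symbol.asyncIterator]() {
      return this;
    },
  };
}

// Usage: collect chunks from a small in-memory stream.
async function collect(): Promise<string[]> {
  const stream = new ReadableStream({
    start(controller) {
      controller.enqueue('hello');
      controller.enqueue('world');
      controller.close();
    },
  });
  const out: string[] = [];
  for await (const chunk of toAsyncIterable<string>(stream)) out.push(chunk);
  return out;
}
```

On modern Node the global `ReadableStream` is already async iterable, so the first branch short-circuits; the reader-based path is what matters in browsers that lack `Symbol.asyncIterator` support on streams.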
diff --git src/version.ts src/version.ts
index a8ac58ba2..e8b9601ed 100644
--- src/version.ts
+++ src/version.ts
@@ -1 +1 @@
-export const VERSION = '4.78.1'; // x-release-please-version
+export const VERSION = '4.79.4'; // x-release-please-version
diff --git tests/index.test.ts tests/index.test.ts
index a6f0040a4..6227d6fbe 100644
--- tests/index.test.ts
+++ tests/index.test.ts
@@ -2,7 +2,7 @@
import OpenAI from 'openai';
import { APIUserAbortError } from 'openai';
-import { Headers } from 'openai/core';
+import { debug, Headers } from 'openai/core';
import defaultFetch, { Response, type RequestInit, type RequestInfo } from 'node-fetch';
describe('instantiate client', () => {
@@ -96,6 +96,15 @@ describe('instantiate client', () => {
expect(response).toEqual({ url: 'http://localhost:5000/foo', custom: true });
});
+ test('explicit global fetch', async () => {
+ // make sure the global fetch type is assignable to our Fetch type
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ apiKey: 'My API Key',
+ fetch: defaultFetch,
+ });
+ });
+
test('custom signal', async () => {
const client = new OpenAI({
baseURL: process.env['TEST_API_BASE_URL'] ?? 'http://127.0.0.1:4010',
@@ -424,3 +433,95 @@ describe('retries', () => {
expect(count).toEqual(3);
});
});
+
+describe('debug()', () => {
+ const env = process.env;
+ const spy = jest.spyOn(console, 'log');
+
+ beforeEach(() => {
+ jest.resetModules();
+ process.env = { ...env };
+ process.env['DEBUG'] = 'true';
+ });
+
+ afterEach(() => {
+ process.env = env;
+ });
+
+ test('body request object with Authorization header', function () {
+ // Test request body includes headers object with Authorization
+ const headersTest = {
+ headers: {
+ Authorization: 'fakeAuthorization',
+ },
+ };
+ debug('request', headersTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ headers: {
+ Authorization: 'REDACTED',
+ },
+ });
+ });
+
+ test('body request object with api-key header', function () {
+ // Test request body includes headers object with api-key
+ const apiKeyTest = {
+ headers: {
+ 'api-key': 'fakeKey',
+ },
+ };
+ debug('request', apiKeyTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ headers: {
+ 'api-key': 'REDACTED',
+ },
+ });
+ });
+
+ test('header object with Authorization header', function () {
+ // Test headers object with authorization header
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ debug('request', authorizationTest);
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+
+ test('input args are not mutated', function () {
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ defaultHeaders: authorizationTest,
+ apiKey: 'api-key',
+ });
+
+ const { req } = client.buildRequest({ path: '/foo', method: 'post' });
+ debug('request', authorizationTest);
+ expect((req.headers as Headers)['authorization']).toEqual('fakeValue');
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+
+ test('input headers are not mutated', function () {
+ const authorizationTest = {
+ authorization: 'fakeValue',
+ };
+ const client = new OpenAI({
+ baseURL: 'http://localhost:5000/',
+ defaultHeaders: authorizationTest,
+ apiKey: 'api-key',
+ });
+
+ const { req } = client.buildRequest({ path: '/foo', method: 'post' });
+ debug('request', { headers: req.headers });
+ expect((req.headers as Headers)['authorization']).toEqual('fakeValue');
+ expect(spy).toHaveBeenCalledWith('OpenAI:DEBUG:request', {
+ authorization: 'REDACTED',
+ });
+ });
+});
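The new `debug()` tests above assert two properties: authorization-like header values are logged as `REDACTED`, and the caller's objects are never mutated. A minimal sketch of a redaction pattern that satisfies both (this is an illustration of the behavior under test, not the SDK's actual implementation):

```typescript
// Recursively copy a value, replacing sensitive header values with
// 'REDACTED'. Works on a copy, so the caller's object is never mutated.
const SENSITIVE_KEYS = new Set(['authorization', 'api-key']);

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [key, v] of Object.entries(value)) {
      out[key] = SENSITIVE_KEYS.has(key.toLowerCase()) ? 'REDACTED' : redact(v);
    }
    return out;
  }
  return value;
}

// Usage: mirrors the first test case above.
const input = { headers: { Authorization: 'fakeAuthorization' } };
const safe = redact(input) as { headers: { Authorization: string } };
```

Matching keys case-insensitively covers both the `Authorization` and `authorization` variants the tests exercise.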
Description
This PR adds support for the Realtime API beta to the OpenAI Node.js SDK. The Realtime API enables low-latency, multi-modal conversational experiences through WebSocket connections, supporting text and audio as both input and output. The PR also includes some bug fixes, documentation updates, and internal improvements.
Possible Issues
Security Hotspots
Changes
sequenceDiagram
participant Client
participant OpenAIRealtimeWS
participant WebSocket
participant OpenAI API
Client->>OpenAIRealtimeWS: new OpenAIRealtimeWS()
OpenAIRealtimeWS->>WebSocket: Create WebSocket connection
WebSocket->>OpenAI API: Connect with API key
Client->>OpenAIRealtimeWS: send(session.update)
OpenAIRealtimeWS->>OpenAI API: Update session config
OpenAI API-->>OpenAIRealtimeWS: session.created event
OpenAIRealtimeWS-->>Client: Emit session.created
Client->>OpenAIRealtimeWS: send(conversation.item.create)
OpenAIRealtimeWS->>OpenAI API: Send conversation item
Client->>OpenAIRealtimeWS: send(response.create)
OpenAI API-->>OpenAIRealtimeWS: response.text.delta events
OpenAIRealtimeWS-->>Client: Emit text deltas
OpenAI API-->>OpenAIRealtimeWS: response.done event
OpenAIRealtimeWS-->>Client: Emit response.done
Client->>OpenAIRealtimeWS: close()
OpenAIRealtimeWS->>WebSocket: Close connection
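The client events in the diagram above are plain JSON objects distinguished by a `type` field. A hedged sketch of the payload shapes that flow implies (field names beyond `type` are assumptions for illustration; the authoritative schemas are in the Realtime API reference):

```typescript
// Client -> server events from the sequence diagram, built as plain objects.
const sessionUpdate = {
  type: 'session.update',
  session: { modalities: ['text'] }, // session config fields are assumptions here
};

const itemCreate = {
  type: 'conversation.item.create',
  item: {
    type: 'message',
    role: 'user',
    content: [{ type: 'input_text', text: 'Say hello' }],
  },
};

const responseCreate = { type: 'response.create' };

// In the diagram, each of these is passed to OpenAIRealtimeWS#send(),
// which serializes it onto the WebSocket connection.
const outbound = [sessionUpdate, itemCreate, responseCreate];
```

Server events (`session.created`, `response.text.delta`, `response.done`) arrive the same way and are surfaced to the client as emitted events.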
This PR contains the following updates: openai 4.79.4 -> 4.80.1
Release Notes
openai/openai-node (openai)
v4.80.1
Compare Source
Full Changelog: v4.80.0...v4.80.1
Bug Fixes
Documentation
v4.80.0
Compare Source
Full Changelog: v4.79.4...v4.80.0
Features
Configuration
📅 Schedule: Branch creation - "* 0-4 * * 3" (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.