Add config schema and example auth call

calclavia committed Jan 11, 2025
1 parent eda6907 commit c8c88cd
Showing 7 changed files with 146 additions and 172 deletions.
74 changes: 41 additions & 33 deletions README.md
@@ -22,75 +22,83 @@ npm install @smithery/sdk

## Usage

```diff
-In this example, we'll connect use OpenAI client with Exa search capabilities.
+In this example, we'll connect to Exa search capabilities using either OpenAI or Anthropic.
```

```diff
-npm install @smithery/mcp-exa
+npm install @smithery/sdk @modelcontextprotocol/sdk
```

```diff
-The following code sets up OpenAI and connects to an Exa MCP server. In this case, we're running the server locally within the same process, so it's just a simple passthrough.
+The following code sets up the client and connects to an Exa MCP server:
```

```diff
 import { MultiClient } from "@smithery/sdk"
 import { OpenAIChatAdapter } from "@smithery/sdk/integrations/llm/openai"
-import * as exa from "@smithery/mcp-exa"
+import { AnthropicChatAdapter } from "@smithery/sdk/integrations/llm/anthropic"
+import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js"
 import { OpenAI } from "openai"
-import { createTransport } from "@smithery/sdk/registry"
+import Anthropic from "@anthropic-ai/sdk"
+import EventSource from "eventsource"
 
-const openai = new OpenAI()
-const exaServer = exa.createServer({
-  apiKey: process.env.EXA_API_KEY,
-})
+// Patch event source for Node.js environment
+global.EventSource = EventSource as any
 
-const sequentialThinking = await createTransport(
-  "@modelcontextprotocol/server-sequential-thinking",
-)
+// Create a new connection
+const exaTransport = new SSEClientTransport(
+  // Replace with your deployed MCP server URL
+  new URL("https://your-mcp-server.example.com/sse")
+)
 
 // Initialize a multi-client connection
 const client = new MultiClient()
 await client.connectAll({
-  exa: exaServer,
-  sequentialThinking: sequentialThinking,
+  exa: exaTransport,
+  // You can add more connections here...
 })
+
+// Configure and authenticate
+await client.clients.exa.request({
+  method: "config",
+  params: {
+    config: {
+      apiKey: process.env.EXA_API_KEY,
+    },
+  },
+})
```

```diff
-Now you can make your LLM aware of the available tools from Exa.
+Now you can use either OpenAI or Anthropic to interact with the tools:
```

```diff
-// Create an adapter
-const adapter = new OpenAIChatAdapter(client)
-const response = await openai.chat.completions.create({
+// Using OpenAI
+const openai = new OpenAI()
+const openaiAdapter = new OpenAIChatAdapter(client)
+const openaiResponse = await openai.chat.completions.create({
   model: "gpt-4o-mini",
-  messages: [{ role: "user", content: "In 2024, did OpenAI release GPT-5?" }],
-  // Pass the tools to OpenAI call
-  tools: await adapter.listTools(),
+  messages: [{ role: "user", content: "What AI events are happening in Singapore?" }],
+  tools: await openaiAdapter.listTools(),
 })
-// Obtain the tool outputs as new messages
-const toolMessages = await adapter.callTool(response)
+const openaiToolMessages = await openaiAdapter.callTool(openaiResponse)
```
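The new README imports `Anthropic` and `AnthropicChatAdapter` but the hunk above only demonstrates the OpenAI path. A sketch of the Anthropic counterpart, assuming the adapter mirrors the OpenAI adapter's `listTools`/`callTool` interface and that `ANTHROPIC_API_KEY` is set in the environment (the model name here is illustrative):

```typescript
// Using Anthropic (hypothetical counterpart to the OpenAI example above)
const anthropic = new Anthropic()
const anthropicAdapter = new AnthropicChatAdapter(client)
const anthropicResponse = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: [{ role: "user", content: "What AI events are happening in Singapore?" }],
  // The adapter is assumed to surface tools in Anthropic's tool format
  tools: await anthropicAdapter.listTools(),
})
const anthropicToolMessages = await anthropicAdapter.callTool(anthropicResponse)
```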

```diff
-Using this, you can easily enable your LLM to call tools and obtain the results.
-
-However, it's often the case where your LLM needs to call a tool, see its response, and continue processing output of the tool in order to give you a final response.
-
-In this case, you have to loop your LLM call and update your messages until there are no more toolMessages to continue.
-
-Example:
+For more complex interactions where the LLM needs to process tool outputs and potentially make additional calls, you'll need to implement a conversation loop. Here's an example:
```

```diff
 let messages = [
   {
     role: "user",
-    content:
-      "Deduce Obama's age in number of days. It's November 28, 2024 today. Search to ensure correctness.",
+    content: "What are some AI events happening in Singapore and how many days until the next one?",
   },
 ]
 const adapter = new OpenAIChatAdapter(client)
 let isDone = false
 
 while (!isDone) {
   const response = await openai.chat.completions.create({
     model: "gpt-4o-mini",
     messages,
     tools: await adapter.listTools(),
   })
 
   // Handle tool calls
   const toolMessages = await adapter.callTool(response)
```

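The loop body above is cut off at the hunk boundary. Its overall shape can be captured by a small SDK-agnostic driver — a hypothetical helper for illustration, not part of the SDK — that appends the assistant reply and tool outputs to the transcript and stops once the model makes no more tool calls:

```typescript
// Minimal message shape for illustration; real OpenAI/Anthropic messages carry more fields.
type Message = { role: string; content: string }

// One model turn: an assistant reply plus any executed tool results.
type ModelTurn = { reply: Message; toolMessages: Message[] }

// Drive the call-tools-then-continue loop until no tool calls remain.
async function runToolLoop(
  callModel: (messages: Message[]) => Promise<ModelTurn>,
  initial: Message[],
): Promise<Message[]> {
  const messages = [...initial]
  let isDone = false
  while (!isDone) {
    const { reply, toolMessages } = await callModel(messages)
    // Keep the transcript growing: assistant turn first, then tool outputs
    messages.push(reply, ...toolMessages)
    isDone = toolMessages.length === 0
  }
  return messages
}
```

In the README's loop, `callModel` corresponds to the `openai.chat.completions.create` call followed by `adapter.callTool(response)`.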
@@ -109,14 +117,14 @@ See a full example in the [examples](./src/examples) directory.
```
Error: ReferenceError: EventSource is not defined
```

```diff
-This event means you're trying to use EventSource API (which is typically used in the browser) from Node. You'll have to install the following to use it:
+This error means you're trying to use EventSource API (which is typically used in the browser) from Node. Install the following packages:
```

```bash
npm install eventsource
npm install -D @types/eventsource
```

```diff
-Patch the global EventSource object:
+Then patch the global EventSource object:
```

```typescript
import EventSource from "eventsource"

global.EventSource = EventSource as any
```
45 changes: 24 additions & 21 deletions package-lock.json

Some generated files are not rendered by default.

6 changes: 2 additions & 4 deletions package.json
@@ -9,9 +9,7 @@
```diff
     ".": "./dist/index.js",
     "./*": "./dist/*"
   },
-  "files": [
-    "dist"
-  ],
+  "files": ["dist"],
   "scripts": {
     "build": "tsc",
     "build:all": "npm run build -ws --include-workspace-root",
```

@@ -23,7 +21,7 @@
```diff
   "dependencies": {
     "@anthropic-ai/sdk": "^0.32.1",
     "@icons-pack/react-simple-icons": "^10.2.0",
-    "@modelcontextprotocol/sdk": "^1.0.3",
+    "@modelcontextprotocol/sdk": "^1.1.1",
     "openai": "^4.0.0",
     "uuid": "^11.0.3"
   },
```
50 changes: 49 additions & 1 deletion src/config.ts
@@ -1 +1,49 @@
```diff
-export const REGISTRY_URL = "https://registry.smithery.ai"
+import {
+  ProgressTokenSchema,
+  RequestSchema,
+  ResultSchema,
+} from "@modelcontextprotocol/sdk/types.js"
+import { z } from "zod"
+
+// Copied from MCP
+export const BaseRequestSchema = z
+  .object({
+    _meta: z.optional(
+      z
+        .object({
+          /**
+           * If specified, the caller is requesting out-of-band progress notifications for this request (as represented by notifications/progress). The value of this parameter is an opaque token that will be attached to any subsequent notifications. The receiver is not obligated to provide these notifications.
+           */
+          progressToken: z.optional(ProgressTokenSchema),
+        })
+        .passthrough(),
+    ),
+  })
+  .passthrough()
+
+/**
+ * A custom method to set the configuration of the server deployed on Smithery.
+ * This must be called after initialization and before using the SSE server.
+ */
+export const ConfigRequestSchema = RequestSchema.extend({
+  method: z.literal("config"),
+  params: BaseRequestSchema.extend({
+    config: z.any(),
+  }),
+})
+
+export type ConfigRequest = z.infer<typeof ConfigRequestSchema>
+
+/**
+ * The expected result schema for a config request.
+ */
+export const ConfigResultSchema = ResultSchema.extend({
+  error: z
+    .any()
+    .optional()
+    .describe(
+      "An object containing the error. If no error is present, it means the config succeeded.",
+    ),
+}).describe("The result of a config request.")
+
+export type ConfigResult = z.infer<typeof ConfigResultSchema>
```
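The README's raw `client.clients.exa.request({ method: "config", ... })` call doesn't validate the result. A sketch of a typed wrapper around these schemas — the helper name and error handling are illustrative, though `client.request(request, resultSchema)` is the MCP SDK calling convention:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js"
import { ConfigResultSchema, type ConfigResult } from "./config.js"

// Hypothetical helper: send the custom "config" request and surface failures.
async function configure(client: Client, config: unknown): Promise<ConfigResult> {
  const result = await client.request(
    { method: "config", params: { config } },
    ConfigResultSchema,
  )
  if (result.error) {
    throw new Error(`Config rejected: ${JSON.stringify(result.error)}`)
  }
  return result
}
```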
