
nidhinpd-YML/example-chat-app

 
 


Gemini chat app


Intro

This chat app lets the user converse with Gemini and use it as an intelligent, personal AI assistant. Two text-only chat modes are currently available in this app: non-streaming and streaming.

In non-streaming mode, a response is returned after Gemini completes the entire text generation process.

In streaming mode, Gemini's streaming capability is used to return partial results as they are generated, making interactions feel faster.

Frontend

The client for this app is written using React and served using Vite.

Backend

The app currently provides two backend servers for the user to choose from: Flask (Python) and Node.js.

API documentation

Endpoints available

POST chat/

This is the non-streaming POST method route. Use this to send the chat message and the history of the conversation to the Gemini model. The complete response generated by the model to the posted message is returned in the API's response.

Parameters

| Name    | Type     | Data type | Description                                                |
|---------|----------|-----------|------------------------------------------------------------|
| chat    | required | string    | Latest chat message from the user                          |
| history | optional | array     | Current chat history between the user and the Gemini model |

Response

| HTTP code | Content-Type     | Response         |
|-----------|------------------|------------------|
| 200       | application/json | {"text": string} |
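As an illustration, the non-streaming route can be called from any HTTP client. The sketch below uses only Python's standard library and assumes the backend is running on its default port 9000; the helper names (`build_payload`, `ask_gemini`) are ours for illustration, not part of the app.

```python
import json
from urllib import request

def build_payload(chat, history=None):
    """Assemble the JSON body described in the parameters table above."""
    payload = {"chat": chat}
    if history is not None:
        payload["history"] = history
    return payload

def ask_gemini(chat, history=None, base_url="http://localhost:9000"):
    """POST the latest message (plus optional history) to chat/ and
    return the model's complete reply from the response's "text" field."""
    req = request.Request(
        f"{base_url}/chat/",
        data=json.dumps(build_payload(chat, history)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]
```

For example, `ask_gemini("Hello!")` blocks until generation finishes and returns the full reply in one piece.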
POST stream/

This is the streaming POST method route. Use this to send the chat message and the history of the conversation to the Gemini model. The response generated by the model is streamed back, so partial results can be handled as they arrive.

Parameters

| Name    | Type     | Data type | Description                                                |
|---------|----------|-----------|------------------------------------------------------------|
| chat    | required | string    | Latest chat message from the user                          |
| history | optional | array     | Current chat history between the user and the Gemini model |

Response

| HTTP code | Content-Type     | Response |
|-----------|------------------|----------|
| 200       | application/json | string   |
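The streaming route can be consumed incrementally instead of waiting for the full reply. A minimal stdlib-only sketch, again assuming the backend's default port 9000 (the generator name `stream_gemini` is illustrative):

```python
import json
from urllib import request

def stream_gemini(chat, history=None, base_url="http://localhost:9000",
                  chunk_size=512):
    """POST to stream/ and yield decoded text chunks as they arrive."""
    payload = {"chat": chat}
    if history is not None:
        payload["history"] = history
    req = request.Request(
        f"{base_url}/stream/",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        # Read the chunked response piece by piece rather than all at once.
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            yield chunk.decode("utf-8", errors="replace")
```

A caller can then render partial output as it streams in, e.g. `for piece in stream_gemini("Hello!"): print(piece, end="", flush=True)`.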

Installation

For Linux/macOS, a setup bash script is available for easy installation. If you prefer to install manually, or are installing on Windows, skip to the Install manually (Linux/macOS/Windows) section.

Using the bash script (Linux/macOS)

Make the script executable

chmod +x setup.sh

Installation options

You can choose to install the required packages for

  • Frontend (React) plus a backend server of your choice (Flask or Node.js).
  • Frontend (React) and both backend servers (Flask and Node.js).

You can specify your choice using the BACKEND variable while running the script.

Install React along with the Node.js backend.

BACKEND=nodejs ./setup.sh

Install React along with the Python/Flask backend.

BACKEND=python source ./setup.sh

Install React along with both backends (Flask and Node.js).

BACKEND=all source ./setup.sh

Install manually (Linux/macOS/Windows)

nvm (Node Version Manager) installation

Linux/macOS
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
Windows

Visit the official npm docs for installation instructions.

Node.js installation

Linux/macOS/Windows

Install version 22.6.0 of Node.js.

nvm install 22.6.0
Install the package dependencies for Node.js

You can quickly install the required packages using the package.json file.

  1. Navigate to the app directory, server-js (i.e. where package.json is located).
  2. Run npm install. This will install all the required packages mentioned in package.json.

Flask installation

Create a new virtual environment.
Linux/macOS
python -m venv venv
source venv/bin/activate
Windows
python -m venv venv
.\venv\Scripts\activate
Install the required Python packages.
Linux/macOS/Windows
pip install -r requirements.txt

Run the app

To launch the app you have to perform the following steps:

  1. Run React client
  2. Run the backend server of your choice (Flask or Node.js)

Run React client

  1. Navigate to the app directory, client-react/.
  2. Run the application with the following command:
npm run start

The client will start on localhost:3000.

Run backend server

Grab an API Key

Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.


Refer to the instructions for your choice of backend in the following section.

Configure and run Node.js backend
Configuration
  1. Navigate to the app directory, server-js/.
  2. Copy the .env.example file to .env.
cp .env.example .env
  3. Specify the Gemini API key for the variable GOOGLE_API_KEY in the .env file.
GOOGLE_API_KEY=<your_api_key>
Running the Application

To run the Node.js chat app, use the following command.

node --env-file=.env app.js

The --env-file=.env flag tells Node.js where the .env file is located.

By default, the app will run on port 9000.

To specify a custom port, edit the PORT key in your .env file, PORT=xxxx.

**Note:** If you use a custom port, you must update the host URL in the React App.js.

Configure and run Python/Flask backend
Configuration
  1. Navigate to the app directory, server-python/.
  2. Copy the .env.example file to .env.
cp .env.example .env
  3. Specify the Gemini API key for the variable GOOGLE_API_KEY in the .env file.
GOOGLE_API_KEY=<your_api_key>
Running the Application

Run the application with the following command.

python app.py

The server will start on localhost:9000.

Usage

To start using the app, visit http://localhost:3000.
