This project is a demo application that showcases how to integrate the Hugging Face API with a Next.js 14 application using the App Router. The demo includes various NLP and image processing tasks like text completion, translation, image-to-text, and text-to-image generation.
- **Text Completion**: Utilize the `mistralai/Mistral-7B-Instruct-v0.2` model for generating text completions (see the example call below).
- **Translation**: Translate text using the `t5-base` model.
- **Image to Text**: Generate text descriptions from images with the `nlpconnect/vit-gpt2-image-captioning` model.
- **Text to Image**: Create images from text prompts using the `stabilityai/stable-diffusion-xl-base-1.0` model.
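Each of these tasks is served by the Hugging Face Inference API. As a rough illustration only (this code is not taken from the repository, and the function name `completeText` is an assumption), a text-completion request to the Mistral model listed above could look like this:

```typescript
// Illustrative sketch: call the Hugging Face Inference API for text completion.
// Assumes HF_TOKEN is available in the server-side environment.
async function completeText(prompt: string): Promise<string> {
  const response = await fetch(
    "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: prompt }),
    }
  );

  if (!response.ok) {
    throw new Error(`Hugging Face API error: ${response.status}`);
  }

  // Text-generation models typically return an array of { generated_text } objects.
  const data = (await response.json()) as Array<{ generated_text: string }>;
  return data[0].generated_text;
}
```

The other tasks follow the same pattern against their respective model endpoints, with the image tasks sending or receiving binary data instead of JSON.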
- **Node.js**: Ensure Node.js is installed (Next.js 14 requires Node.js 18.17 or later).
- **Hugging Face API Key**: Register at Hugging Face and obtain an API key.
- Clone the repository:

  ```bash
  git clone https://github.com/brown2020/huggingface-api.git
  cd huggingface-api
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Configure environment variables:

  - Copy `.env.example` to `.env.local`.
  - Add your Hugging Face API key to the `.env.local` file (the app reads it server-side; see the sketch after this list):

    ```bash
    HF_TOKEN=your_huggingface_token_here
    ```
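Because `HF_TOKEN` has no `NEXT_PUBLIC_` prefix, Next.js exposes it only to server-side code. As a small illustrative sketch (not code from the repository), a server module or API route can read and validate it like this:

```typescript
// Illustrative only: read the Hugging Face token on the server.
// The token is never exposed to the browser.
const hfToken = process.env.HF_TOKEN;
if (!hfToken) {
  throw new Error("HF_TOKEN is not set. Add it to .env.local before starting the app.");
}
```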
To start the development server, run:

```bash
npm run dev
```

Visit http://localhost:3000 in your browser.
Select a task (e.g., Text Completion, Translation, Image to Text, Text to Image) and submit your input. The app will display the corresponding output.
The API route for handling requests is located at `pages/api/hf.ts`, which interacts with the Hugging Face API to process the different tasks.
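The repository's exact handler isn't reproduced here, but a minimal sketch of such a route (assuming, purely for illustration, a JSON body with `task` and `input` fields and a simple task-to-model mapping) might look like this:

```typescript
// pages/api/hf.ts -- minimal sketch only; the real handler may accept
// different fields and support more tasks (only the text tasks are shown).
import type { NextApiRequest, NextApiResponse } from "next";

// Hypothetical task-to-model mapping for illustration.
const MODELS: Record<string, string> = {
  completion: "mistralai/Mistral-7B-Instruct-v0.2",
  translation: "t5-base",
};

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== "POST") {
    return res.status(405).json({ error: "Method not allowed" });
  }

  const { task, input } = req.body as { task: string; input: string };
  const model = MODELS[task];
  if (!model) {
    return res.status(400).json({ error: `Unknown task: ${task}` });
  }

  // Forward the request to the Hugging Face Inference API.
  const hfResponse = await fetch(
    `https://api-inference.huggingface.co/models/${model}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inputs: input }),
    }
  );

  const result = await hfResponse.json();
  return res.status(hfResponse.status).json(result);
}
```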
Deploy this project on Vercel for easy hosting of your Next.js application. Refer to the Next.js deployment documentation for more details.
Contributions are welcome! Please open an issue or submit a pull request for any improvements.
This project is licensed under the MIT License.