TransGPT is a GPT-powered comic translation tool that helps translators improve their efficiency by automating certain steps in their workflow.
Client: React, Next.js (13), Chakra UI
Server: Node.js (v16.17.1), Python (3.10)
Database: PostgreSQL
- GPT chat
- Glossary translation
- Translation
- OCR text extraction
- Speech bubble detection
- DeepL translation
- Google Translate
To run this project, you will need to add the following environment variables to your `.env` file.

Run `cp .env.example .env`

Read more about environment variables in Next.js 13 in the Next.js documentation.
```
OPENAI_API_KEY=xxxxx
NEXT_PUBLIC_SERVER_URL=xxxx
NEXT_PUBLIC_PORT=xxxxx
NEXT_PUBLIC_VERSION_DATE=Aug 2
DATABASE_URL=
EMAIL_VERIFICATION_SECRET=
EMAIL_FROM=
EMAIL_USER=
EMAIL_PASSWORD=
EMAIL_HOST=
EMAIL_PORT=
SECRET_KEY=
```
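Note that Next.js only exposes variables prefixed with `NEXT_PUBLIC_` to browser code; the others are available on the server only. A minimal illustration, using variable names from the list above:

```js
// Server-side only (API routes, server code): never bundled for the browser.
const openaiKey = process.env.OPENAI_API_KEY;

// Readable in both server and browser code thanks to the NEXT_PUBLIC_ prefix.
const serverUrl = process.env.NEXT_PUBLIC_SERVER_URL;
```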
Clone the project

```bash
git clone https://github.com/John-Oula/trans-gpt
```

Go to the project directory

```bash
cd trans-gpt
```

Install dependencies

```bash
npm install
```

Go to the server directory and install dependencies

```bash
cd server && npm install
```

Install Python packages

```bash
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
```

Start the Next.js development server

```bash
cd .. && npm run dev
```

Open a new terminal and start the backend server

```bash
cd server && node app.js
```
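If you prefer a single command, a script along these lines could be added to the root `package.json` (hypothetical, not part of the repository; it assumes the `concurrently` package is installed):

```json
{
  "scripts": {
    "dev:all": "concurrently \"npm run dev\" \"node server/app.js\""
  }
}
```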
To deploy this project, run

```bash
npm install pm2 -g
cd trans-gpt
```

Build the app

```bash
npm run build
```

Start pm2

```bash
pm2 start ecosystem.config.js
pm2 save
```
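The repository's `ecosystem.config.js` defines what pm2 starts. As a rough sketch only (app names, script paths, and env values here are assumptions, not the project's actual config), a pm2 ecosystem file for a Next.js frontend plus Node backend could look like:

```js
// Hypothetical sketch of an ecosystem.config.js; see the repository's file for the real one.
module.exports = {
  apps: [
    {
      name: "trans-gpt-client",   // assumed name: Next.js frontend
      script: "npm",
      args: "start",              // serves the production build from `npm run build`
      env: { NODE_ENV: "production" },
    },
    {
      name: "trans-gpt-server",   // assumed name: Node backend
      script: "./server/app.js",
      env: { NODE_ENV: "production" },
    },
  ],
};
```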
In the screenshot below, an additional requirement was added to instruct GPT to give an analysis of its translations. This shows that GPT's output is greatly influenced by system message and prompt design.
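As a rough sketch of the idea (not the project's actual prompt or code), adding such a requirement to the system message with the `openai` npm package (v4-style API) might look like this:

```js
// Hypothetical sketch with the openai npm package (v4 API); TransGPT's real
// prompts and model choice live in the repository.
const OpenAI = require("openai");
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function translateWithAnalysis(text) {
  const completion = await client.chat.completions.create({
    model: "gpt-3.5-turbo", // assumed model
    messages: [
      {
        role: "system",
        // The extra requirement: ask for an analysis alongside the translation.
        content:
          "You are a comic translator. Translate the user's text into English, " +
          "then briefly analyze your translation choices.",
      },
      { role: "user", content: text },
    ],
  });
  return completion.choices[0].message.content;
}
```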
- More focus should be put on prompt engineering to realize better GPT output
- Improve the speed of batch OCR text extraction
- Add an admin management panel to manage glossaries, preferably with react-admin
- Develop a mechanism to count and control the number of tokens used, especially with the "Translate All" feature (see the sketch after this list). Check this repository
- Support internationalization
- Support `.rar` file upload and extraction. Currently only `.zip` files are supported
- Fine-tune GPT. Fine-tuning lets you get more out of the models available through OpenAI's API by providing:
  - Higher quality results than prompt design
  - Ability to train on more examples than can fit in a prompt
  - Token savings due to shorter prompts
  - Lower latency requests
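For the token-counting item above, here is a minimal sketch using the `tiktoken` npm package (an assumption; the referenced repository may take a different approach):

```js
// Hypothetical token-counting sketch (npm install tiktoken).
const { encoding_for_model } = require("tiktoken");

function countTokens(text, model = "gpt-3.5-turbo") {
  const enc = encoding_for_model(model); // tokenizer matching the target model
  const count = enc.encode(text).length; // tokens the API would count for this text
  enc.free();                            // release the WASM-backed encoder
  return count;
}

// Example: gate the "Translate All" feature on an assumed per-request budget.
const batch = "All speech-bubble texts joined into one request...";
if (countTokens(batch) > 3000) {
  // split the batch into smaller requests before calling the API
}
```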
For support, email [email protected].