Merge pull request #68 from shivan2418/add_mock_api

add mocked api to speed up development
Niek van der Maas 2023-03-24 09:21:59 +01:00 committed by GitHub
commit fe99bfd1f5
6 changed files with 1741 additions and 0 deletions

.env Normal file (+2)

@@ -0,0 +1,2 @@
# Uncomment the following line to use the mocked API
#VITE_API_BASE=http://localhost:5174

README.md

@@ -44,6 +44,12 @@ git subtree pull --prefix src/awesome-chatgpt-prompts https://github.com/f/aweso
docker compose up -d
```
## Mocked API
If you don't want to wait for the real API to respond, you can use the mocked API instead. To do so, edit the `.env` file at the root of the project and uncomment the line `VITE_API_BASE=http://localhost:5174`. Then, run the `docker compose up -d` command above.
You can customize the mocked API's behavior through the message you send: `d` followed by a number delays the response by that many seconds, and `l` followed by a number returns a response with that many sentences.
For example, sending the message `d2 l10` results in a 2-second delay and a 10-sentence response.
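As an illustration (not part of the project), this short Python sketch assumes the stack is running locally via `docker compose up -d` and uses only the standard library to send that message and print the mocked reply:

```python
import json
import urllib.request

# Ask the mocked API for a reply delayed by 2 seconds with 10 sentences.
payload = {"messages": [{"role": "user", "content": "d2 l10"}]}
req = urllib.request.Request(
    "http://localhost:5174/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```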
## Desktop app
You can also use ChatGPT-web as a desktop app. To do so, [install Rust first](https://www.rust-lang.org/tools/install). Then, simply run `npm run tauri dev` for the development version or `npm run tauri build` for the production version of the desktop app. The desktop app will be built in the `src-tauri/target` folder.

docker-compose.yml

@@ -4,8 +4,23 @@ services:
  chatgpt_web:
    container_name: chatgpt_web
    restart: always
    depends_on:
      - mocked_api
    env_file:
      - .env
    ports:
      - 5173:5173
    volumes:
      - .:/app
    build:
      context: "."
      dockerfile: Dockerfile
  mocked_api:
    container_name: mocked_api
    build:
      context: "."
      dockerfile: mocked_api/Dockerfile-mockapi
    restart: always
    ports:
      - 5174:5174

mocked_api/Dockerfile-mockapi Normal file (+8)

@@ -0,0 +1,8 @@
FROM python:alpine
WORKDIR /work
RUN pip install fastapi uvicorn lorem-text
COPY mocked_api/mock_api.py .
COPY mocked_api/models_response.json .
CMD ["uvicorn", "mock_api:app", "--host", "0.0.0.0", "--port", "5174"]

mocked_api/mock_api.py Normal file (+73)

@@ -0,0 +1,73 @@
import json
import re
import time

from lorem_text import lorem
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Add CORS middleware to allow requests from any origin
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)


# Define a route to handle POST requests
@app.post("/v1/chat/completions")
async def post_data(data: dict):
    """Returns mock responses for testing purposes."""
    messages = data['messages']
    instructions = messages[-1]['content']
    delay = 0
    lines = None
    answer = 'Default mock answer from mocked API'
    # "d<n>" in the message delays the response by n seconds
    try:
        delay = re.findall(r'(?<=d)\d+', instructions)[0]
    except IndexError:
        pass
    # "l<n>" in the message returns a response of n lorem sentences
    try:
        lines = re.findall(r'(?<=l)\d+', instructions)[0]
    except IndexError:
        pass
    if delay:
        time.sleep(int(delay))
    if lines:
        answer = "\n".join([lorem.sentence() for _ in range(int(lines))])
    response = {
        "id": 0,
        "choices": [{
            "index": 0,
            "finish_reason": "stop",
            "message": {"content": answer, "role": "assistant"}
        }]
    }
    return response


@app.get('/v1/models')
async def list_models():
    """Returns a list of models to get the app to work."""
    with open('/work/models_response.json') as f:
        result = json.load(f)
    return result


@app.post('/')
async def echo_data(data: dict):
    """Basic route for testing that the API works."""
    result = {"message": "Data received", "data": data}
    return result
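To sanity-check these routes without Docker, here is a minimal test sketch using FastAPI's `TestClient`. It is an illustration only: it assumes the test dependencies (such as `httpx`) are installed and that the file sits next to `mock_api.py`; the `/v1/models` route is skipped because it reads a container-only path.

```python
# test_mock_api.py -- illustrative sketch, not part of this PR.
from fastapi.testclient import TestClient

from mock_api import app

client = TestClient(app)

# "l3" asks the mock for a 3-sentence lorem response with no delay.
resp = client.post(
    "/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "l3"}]},
)
assert resp.status_code == 200
answer = resp.json()["choices"][0]["message"]["content"]
assert len(answer.split("\n")) == 3
```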

mocked_api/models_response.json (file diff suppressed because it is too large)