add mocked api

This commit is contained in:
Emil Elgaard 2023-03-23 12:50:41 -04:00
parent fddde9424b
commit c96c9f15e5
5 changed files with 1735 additions and 0 deletions


@@ -44,6 +44,11 @@ git subtree pull --prefix src/awesome-chatgpt-prompts https://github.com/f/aweso
docker compose up -d
```
## Mocked API
If you don't want to wait for the real API to respond, you can use the mocked API instead. To use it, create a `.env` file at the root of the project containing the key `VITE_API_BASE=http://localhost:5174`.
You can customize the mocked API response by including `d` followed by a number in your prompt, which delays the response by that many seconds, and `l` followed by a number, which returns a response of that many sentences. For example, `d2 l10` produces a 2-second delay and a 10-sentence response.
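The `d`/`l` token convention described above can be sketched with a couple of regular expressions. This is a minimal illustration (the helper name `parse_mock_options` is hypothetical, not part of this commit):

```python
import re

def parse_mock_options(prompt: str):
    """Extract the optional d<seconds> and l<sentences> tokens from a prompt."""
    delay_match = re.search(r'(?<=d)\d+', prompt)
    length_match = re.search(r'(?<=l)\d+', prompt)
    delay = int(delay_match.group()) if delay_match else 0
    sentences = int(length_match.group()) if length_match else None
    return delay, sentences

print(parse_mock_options("d2 l10"))  # (2, 10)
```

So a prompt like `d2 l10` asks the mock server to sleep for 2 seconds and return 10 lorem-ipsum sentences.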
## Desktop app
You can also use ChatGPT-web as a desktop app. To do so, [install Rust first](https://www.rust-lang.org/tools/install). Then, simply run `npm run tauri dev` for the development version or `npm run tauri build` for the production version of the desktop app. The desktop app will be built in the `src-tauri/target` folder.


@@ -4,8 +4,19 @@ services:
  chatgpt_web:
    container_name: chatgpt_web
    restart: always
    env_file:
      - .env
    ports:
      - 5173:5173
    build:
      context: "."
      dockerfile: Dockerfile
  mocked_api:
    container_name: mocked_api
    build:
      context: "."
      dockerfile: mocked_api/Dockerfile-mockapi
    restart: always
    ports:
      - 5174:5174


@@ -0,0 +1,9 @@
FROM python:3.10-slim-buster
WORKDIR /work
RUN pip install fastapi uvicorn lorem-text
COPY mocked_api/mock_api.py .
COPY mocked_api/models_response.json .
CMD ["uvicorn", "mock_api:app", "--host", "0.0.0.0", "--port", "5174"]

mocked_api/mock_api.py Normal file

@@ -0,0 +1,73 @@
import json
import re
import time

from lorem_text import lorem
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Add CORS middleware to allow requests from any origin.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)


@app.post("/v1/chat/completions")
async def chat_completions(data: dict):
    """Return a mock chat-completion response for testing purposes."""
    messages = data['messages']
    instructions = messages[-1]['content']
    delay = 0
    lines = None
    answer = 'Default mock answer from mocked API'
    # Look for an optional "d<seconds>" token in the last message.
    try:
        delay = re.findall(r'(?<=d)\d+', instructions)[0]
    except IndexError:
        pass
    # Look for an optional "l<sentences>" token in the last message.
    try:
        lines = re.findall(r'(?<=l)\d+', instructions)[0]
    except IndexError:
        pass
    if delay:
        time.sleep(int(delay))
    if lines:
        answer = "\n".join(lorem.sentence() for _ in range(int(lines)))
    response = {
        "id": 0,
        "choices": [{
            "index": 0,
            "finish_reason": "stop",
            "message": {"content": answer, "role": "assistant"}
        }]
    }
    return response


@app.get('/v1/models')
async def list_models():
    """Return a canned list of models so the app can start up."""
    with open('/work/models_response.json') as f:
        result = json.load(f)
    return result


@app.post('/')
async def post_root(data: dict):
    """Basic route for checking that the API is up."""
    result = {"message": "Data received", "data": data}
    return result
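The mock server's response mirrors the OpenAI chat-completion shape, so a client reads the answer from `choices[0].message.content`. A minimal sketch, using the default response body from the mock above:

```python
# Sample response in the shape returned by the mocked /v1/chat/completions route.
response = {
    "id": 0,
    "choices": [{
        "index": 0,
        "finish_reason": "stop",
        "message": {"content": "Default mock answer from mocked API", "role": "assistant"},
    }],
}

# Extract the assistant's reply the way a chat client would.
answer = response["choices"][0]["message"]["content"]
print(answer)  # Default mock answer from mocked API
```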

File diff suppressed because it is too large