Commit Graph

423 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Webifi | ed0dbe1188 | Enable petals via uri | 2023-08-19 13:09:13 -05:00 |
| Webifi | 98dbaec35a | Fix missing prompt prefix | 2023-08-19 12:44:37 -05:00 |
| Webifi | 43792b1d69 | Update CheapGPT | 2023-08-18 13:25:33 -05:00 |
| Webifi | 5eea1c8ddd | fix typo | 2023-08-18 13:14:20 -05:00 |
| Webifi | f9183c0662 | Shrink text-areas on profile change | 2023-08-18 13:06:51 -05:00 |
| Webifi | c70b2c3928 | auto apply system prompt changes to current chat | 2023-08-18 12:47:47 -05:00 |
| Webifi | 538e9d749a | Enable system prompt for CheapGPT | 2023-08-18 12:35:53 -05:00 |
| Webifi | ff97a30e78 | tweak profile | 2023-08-17 21:34:05 -05:00 |
| Webifi | a260d49d5a | Add more variation in response | 2023-08-17 21:32:29 -05:00 |
| Webifi | 726842179c | typo | 2023-08-17 18:46:12 -05:00 |
| Webifi | e5f6be8f81 | typos | 2023-08-17 18:27:10 -05:00 |
| Webifi | bc7ba1da74 | Try to make jenLlama profile more creative in responses | 2023-08-17 18:20:28 -05:00 |
| Webifi | 8edccda1eb | Allow more leeway in name suggestion token count | 2023-08-17 12:24:39 -05:00 |
| Webifi | 950a27d8e6 | Add more debug, fix non-streaming response | 2023-08-17 12:20:09 -05:00 |
| Webifi | e00aad20f5 | Force model type for jenLama profile | 2023-08-17 08:22:41 -05:00 |
| Webifi | 083e31d4a7 | Add uninhibited LLaMA profile | 2023-08-17 07:58:43 -05:00 |
| Webifi | 5f919098f5 | Add CheapGPT profile | 2023-08-17 07:58:04 -05:00 |
| Webifi | f523f8d4bc | Fix typo | 2023-08-16 15:20:57 -05:00 |
| Webifi | cb2b9e07f4 | Add distinction between chat and instruct models | 2023-08-16 15:20:07 -05:00 |
| Webifi | f4d9774423 | update token count | 2023-08-16 01:51:49 -05:00 |
| Webifi | 86f427f62f | Fix potential infinite loop | 2023-08-15 23:48:49 -05:00 |
| Webifi | a08d8bcd54 | Move token counting to model detail. | 2023-08-15 21:46:33 -05:00 |
| Webifi | 91885384a1 | disable non-chat Llama-2 | 2023-08-15 20:39:19 -05:00 |
| Webifi | fb2290308f | Begin refactoring model providers to be less anti-pattern | 2023-08-15 20:32:30 -05:00 |
| Webifi | af568efd3a | Add lead prompts | 2023-08-07 12:44:16 -05:00 |
| Webifi | bcb2b93e84 | Default aggressive stop on | 2023-08-07 12:37:57 -05:00 |
| Webifi | 7c588ce212 | Add StableBeluga2. Update prompt structures | 2023-08-07 12:33:42 -05:00 |
| Webifi | 8b2f2515f9 | Send summary request as system prompt | 2023-08-07 12:32:46 -05:00 |
| Webifi | c473d731ce | Properly close websocket connections. | 2023-07-28 18:10:45 -05:00 |
| Webifi | bb2a177f22 | remove debug | 2023-07-28 16:48:36 -05:00 |
| Webifi | 4c37969635 | remove debug | 2023-07-28 16:48:07 -05:00 |
| Webifi | 0f12fcdb95 | Format LLama-2-chat prompts to spec. | 2023-07-28 16:46:50 -05:00 |
| Webifi | f223f4e510 | Fix disabled OpenAI models #243 | 2023-07-26 17:49:26 -05:00 |
| Webifi | 58afe8f375 | typo | 2023-07-25 14:55:07 -05:00 |
| Webifi | 1ef08110c3 | Add aggressive stop setting | 2023-07-25 14:53:29 -05:00 |
| Webifi | b0812796a1 | Add link to swarm health | 2023-07-25 13:28:14 -05:00 |
| Webifi | ff3799637b | Allow scrolling while streaming re: #241 | 2023-07-25 00:21:04 -05:00 |
| Webifi | 0ffdd78863 | Another prompt prefix issue | 2023-07-24 23:14:28 -05:00 |
| Webifi | 833633991a | Fix user prompt prefix injection | 2023-07-24 22:20:52 -05:00 |
| Webifi | af08f5c99e | Update text | 2023-07-24 20:00:20 -05:00 |
| Webifi | 190bf16ce6 | Drop llama 65b and guanaco 65b - unstable in swarm | 2023-07-24 19:56:05 -05:00 |
| Webifi | 38d38bf948 | Fix some issues with stop sequences and role sequences | 2023-07-24 19:48:28 -05:00 |
| Webifi | f56e29b829 | Show shorter model name | 2023-07-24 15:53:04 -05:00 |
| Webifi | f6380e1cc2 | Allow Petals and/or OpenAI | 2023-07-24 15:26:17 -05:00 |
| Webifi | ca19bab19d | Don't allow too low of temp or top_p | 2023-07-22 17:21:01 -05:00 |
| Webifi | 7aadca3c5c | Better error handling for Petals | 2023-07-22 17:08:40 -05:00 |
| Webifi | 15dcd27e8f | Get temp and top_p working for Petals | 2023-07-22 16:48:26 -05:00 |
| Webifi | 6d35a46d50 | Allow user to adjust message chaining strings | 2023-07-22 14:40:12 -05:00 |
| Webifi | 9a6004c55d | More changes for Petals integration | 2023-07-22 13:24:18 -05:00 |
| Webifi | df222e7028 | Try to import chat name suggestion | 2023-07-22 13:23:24 -05:00 |