As a chat service provider, I’d like to minimize my UI work and focus on the model itself.
In other words, I’d like to be able to provide my LLM through t3.chat, like a Custom GPT but monetized. t3.chat takes a cut and sends me the remainder.
Either approach looks good to me:
1. I provide a service API conforming to a standard expected by t3.chat (a rough sketch follows this list), or
2. I build the model inside t3.chat with a creator view where I can prompt-engineer, upload docs for RAG, etc., similar to the Custom GPT experience from OpenAI.
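
To make option 1 concrete, here is a minimal sketch of what such a provider endpoint could look like, assuming the standard t3.chat expects is an OpenAI-compatible `POST /v1/chat/completions` contract. That assumption, along with the route, field names, and port, is mine; t3.chat hasn't published such a spec.

```ts
// Hypothetical provider-side endpoint that t3.chat could call.
// Assumes an OpenAI-compatible chat completions contract (not confirmed).
import { createServer } from "node:http";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
}

const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/v1/chat/completions") {
    res.writeHead(404).end();
    return;
  }

  // Collect the JSON request body.
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const request = JSON.parse(body) as ChatCompletionRequest;
    const lastUserMessage =
      request.messages.filter((m) => m.role === "user").at(-1)?.content ?? "";

    // Placeholder: this is where the provider would run its own model.
    const reply = `echo: ${lastUserMessage}`;

    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(
      JSON.stringify({
        id: "chatcmpl-example",
        object: "chat.completion",
        model: request.model,
        choices: [
          {
            index: 0,
            message: { role: "assistant", content: reply },
            finish_reason: "stop",
          },
        ],
      })
    );
  });
});

server.listen(8080);
```

The appeal of an OpenAI-compatible contract is that t3.chat could route requests to a provider's endpoint the same way it already calls other model APIs, with revenue sharing handled on their side.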