At Superflows, our mission is to give everyone access to an expert in the software they're using.
While building the Alpha version, I was struck by how often I had to hold myself back from building massive new sections of the product.
We ruthlessly prioritised the features that would let us test our core hypothesis: that we could build a useful developer tool for building an AI assistant.
This page was born out of that process. Exploding with ideas, I wrote them down mostly so I wouldn't forget them all. It's also not quite a roadmap, since it isn't ordered yet - the order will depend on user feedback.
I've split them into 4 sections, based on the 4 main areas of the product:
- Action chaining: allow the LLM to output multiple API calls in a single LLM response, where later calls use the output of earlier ones.
- Scale to large API specifications: GPT's performance drops considerably when the number of API endpoints to select from grows. We have attempted several potential solutions, but none have worked satisfactorily so far. We believe a combination of intelligently selecting the most relevant endpoints and using fine-tuned models could be the solution.
- Enhance the OpenAPI schema provided
- This could be as simple as adding descriptions to parameters or endpoints that lack them
- Or updating the schema based on errors returned by the API where the schema contains mistakes
- Also, updating it based on user feedback on whether specific responses were helpful or not in answering their queries
- Add capability to answer questions based on the software's docs (standard question answering approach: scrape, chunk, embed in vector DB)
- E.g. How do I contact support?
- Can I speak to my account manager?
- We're at our usage limit, how do we upgrade?
- Enable the LLM to select non-HTTP actions:
- Calling functions in the frontend which are exposed by developers
- Enable redirects to other pages in the frontend
- Help the AI better understand how to use the API by providing relevant sections of the software's docs
- E.g. to create a report, you first must set up a segment and then add the report to that segment
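To make the action-chaining idea above concrete, here is a minimal sketch in which the LLM's planned calls carry placeholders like `{{0.user_id}}` that are filled in from earlier responses before execution. Everything here - the placeholder syntax, `run_chain`, the endpoints - is illustrative, not Superflows' actual implementation:

```python
import re

def run_chain(calls, execute):
    """Run planned API calls in order; later calls may reference earlier
    outputs via placeholders like "{{0.user_id}}" (the user_id field of
    call 0's response)."""
    outputs = []
    pattern = re.compile(r"\{\{(\d+)\.(\w+)\}\}")

    def fill(value):
        # Substitute each placeholder with the matching field from an
        # earlier call's output; non-string params pass through untouched.
        if isinstance(value, str):
            return pattern.sub(
                lambda m: str(outputs[int(m.group(1))][m.group(2)]), value)
        return value

    for call in calls:
        params = {k: fill(v) for k, v in call["params"].items()}
        outputs.append(execute(call["endpoint"], params))
    return outputs

# Fake executor standing in for real HTTP requests
def fake_execute(endpoint, params):
    if endpoint == "GET /user":
        return {"user_id": "u_42"}
    return {"sent_to": params["id"]}

results = run_chain(
    [
        {"endpoint": "GET /user", "params": {"email": "jo@example.com"}},
        {"endpoint": "POST /message", "params": {"id": "{{0.user_id}}"}},
    ],
    fake_execute,
)
```

The second call never sees the literal placeholder: by the time it executes, `{{0.user_id}}` has been replaced with the `user_id` from the first response.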
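The "intelligently selecting the most relevant endpoints" idea above could be sketched as ranking endpoint descriptions by similarity to the user's query and only showing the LLM the top few. In practice the vectors would come from an embedding model; word-count cosine similarity below is a toy stand-in, and all names are hypothetical:

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_endpoints(query, endpoints, k=3):
    """Rank endpoint descriptions by similarity to the user's query and
    keep only the top k, shrinking the prompt the LLM must choose from."""
    q = Counter(query.lower().split())
    scored = sorted(
        ((cosine(q, Counter(desc.lower().split())), name)
         for name, desc in endpoints.items()),
        reverse=True,
    )
    return [name for _, name in scored[:k]]

endpoints = {
    "GET /invoices": "list all invoices for a customer",
    "POST /users": "create a new user account",
    "GET /usage": "get current usage and plan limits",
}
best = top_endpoints("how much of our usage limit is left", endpoints, k=1)
```

The appeal of this shape is that it composes with fine-tuning: a smaller candidate set is easier for any model, fine-tuned or not, to select from accurately.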
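For docs question answering, the "scrape, chunk, embed" pipeline mentioned above starts with a chunking step. A minimal sketch - the chunk size and overlap are arbitrary illustrative values, not a recommendation:

```python
def chunk_words(text, size=200, overlap=40):
    """Split scraped docs into overlapping word-level chunks; each chunk
    would then be embedded and stored in a vector DB."""
    words = text.split()
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

# 500 distinct "words" so the chunk boundaries are easy to inspect
doc = " ".join(f"word{i}" for i in range(500))
chunks = chunk_words(doc)
```

The overlap means a sentence that straddles a chunk boundary still appears whole in at least one chunk, which tends to help retrieval.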
Large Language Model
As an Open Source company, we are really excited to get to grips with Llama 2.
We hope we can make meaningful contributions in the coming months to the Open Source community that will form around it.
We think it offers massive potential for improving the performance of our AI assistant.
- Improve the base model (fine-tune & LoRA on Llama 2)
- The hope: Make the model faster, cheaper, more accurate and able to be deployed on your own cloud infrastructure
- Customer-specific fine-tuning:
- Enable customers to fine-tune the model on their own data
- Persist chat history: enable developers to see the chat history in the playground
- AI assistant evaluation: enable developers to evaluate the model's ability to answer questions more thoroughly before deployment
- Generate potential user questions
- Evaluate the model's ability to answer those questions on a test database configuration
- AI Assistant version control
- Add development versions - so changes made on the dashboard don't affect production until that branch is set to be the main branch
- Analytics: enable developers to see how their AI assistant is being used at scale
- Add analytics section to the dashboard
- What questions are commonly asked?
- Integrations with other tools could be useful e.g. Segment
- Feedback: enable users to give feedback on the answers they receive
- What kinds of questions are not answered well?
- What are the most common questions?
- Frontend UI components - add new views other than the sidebar
- Add single page view
- Add command bar view
- Enable developers to add their own visual components (e.g. visualising a graph)
- Make frontend components fully stylable through setting CSS
- Component libraries for
- PHP ...
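Going back to the AI assistant evaluation idea above: the scoring loop could be as simple as running the assistant over question/expected-endpoint pairs and computing accuracy. The keyword "assistant" and hand-written cases below are hypothetical stand-ins for a real model and generated questions:

```python
def evaluate(assistant, test_cases):
    """Score an assistant (a question -> chosen-endpoint function)
    against question/expected-endpoint pairs."""
    results = [(q, assistant(q) == expected) for q, expected in test_cases]
    accuracy = sum(ok for _, ok in results) / len(results)
    return accuracy, results

# Trivial keyword matcher standing in for the real assistant
def keyword_assistant(question):
    return "GET /usage" if "usage" in question else "GET /invoices"

cases = [
    ("what's our usage limit?", "GET /usage"),
    ("show my invoices", "GET /invoices"),
    ("how do I upgrade?", "POST /upgrade"),
]
accuracy, details = evaluate(keyword_assistant, cases)
```

Keeping the per-question results alongside the headline accuracy matters: the failures are what tell you which endpoint descriptions or schema sections to improve before deployment.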
Tell me what you think - I added comments below (if it doesn't show, click here and scroll to the bottom).