Your <insert startup name> GPT
In the startup world, time and time again you have to describe your company or solution - only every time the questions are posed slightly differently, so you're seldom able to reuse a previous text straight off. Some tweaking is almost always needed, and if it's - for example - in the context of applying for a grant, a lot of creativity might be needed to answer a range of questions about your current and future business.
Now, the solution to all this is of course LLMs (like ChatGPT). Given a neat bunch of texts about your startup - what it's done and where it's going - an LLM is perfect for churning out texts to answer all manner of questions about your company. The more background material you can supply, the better the responses become.
What you need to do is make sure the LLM uses all your background material as a 'source of truth', or at least as a basis to spin creative sentences from. You can provide some context (e.g. in the form of a document) together with your question to steer the responses in the right direction. But if you have a lot of context - and you should, if you've saved all the pitches, presentations and applications you've produced over the years - you will feel limited by what you can attach to a single conversation with an LLM.
A very easy way to provide the LLM with a lot of context is to create a 'GPT' within ChatGPT (note that GPT creation requires a 'Plus' subscription at $20/month - however, ChatGPT users on a free plan can still use your GPT without paying). You give the GPT a name and supply it with background material simply by uploading relevant files. You can also give more detailed instructions on how the GPT should answer queries - as in 'be brief and to the point', 'elaborate and provide a lot of details', or 'always answer as if you are Yoda from Star Wars' (that last one might not play great with Vinnova, but it can be fun!). Experimentation here is key, but you'll likely get a lot of value even while keeping it barebones, without going in-depth on prompt engineering.
If a 'GPT' feels limited, the next step is to create a custom RAG application, with a database of vectorized background information that the LLM can use to Retrieve and Augment when Generating (there's your RAG acronym). It's a bit more involved, but if you're not inclined to build it from scratch there are RAG-as-a-service solutions out there. Or you can ask your LLM to generate the needed code for it.
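To make the Retrieve-Augment-Generate flow concrete, here's a minimal sketch in Python. A real RAG application would use vector embeddings and a vector database for the retrieval step; in this toy version a simple word-overlap score stands in for similarity search, and the example documents are made up, just to show the shape of the pipeline before the prompt is handed to an LLM.

```python
import re

def tokenize(text):
    """Lowercase and split into words, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, documents, k=2):
    """Return the k documents with the highest word overlap with the question.
    (A real system would rank by vector similarity instead.)"""
    q = tokenize(question)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, documents):
    """Augment the question with retrieved background before generation."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer using only the background material below.\n"
        f"Background:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical background snippets, standing in for your saved pitches etc.
docs = [
    "Acme AB was founded in 2021 and builds battery analytics software.",
    "Our pilot with two Nordic utilities reduced grid losses by 4%.",
    "The team consists of five engineers and one business developer.",
]

prompt = build_prompt("When was the company founded?", docs)
# `prompt` would now be sent to the LLM of your choice.
```

The point is the division of labour: retrieval narrows your whole archive down to the most relevant snippets, and only those snippets are packed into the prompt - which is how a RAG setup sidesteps the context limits of a single conversation.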
As always, give the answers a look before accepting them as the 'best answer'. Make sure the LLM hasn't taken to hallucinating. And if something's missing, you can often fix it with a follow-up question.
I guess many, or even most, of you have done this already. But I thought getting this out there could save some time for those of you who still haven't. And everyone is welcome to comment with more tips on how to accomplish these kinds of things - I know there are good ideas out there worth sharing!
Opinion piece by Petter Wolff.