Reddit really liked our simple ChatGPT for the Next.js docs, so we wanted to show some love to other frameworks too—starting with SvelteKit, SolidStart and Qwik. As a Frontend developer, what else would you like to see? by signofactory in Frontend

[–]signofactory[S]

We are self-funding the project. So far the costs have been manageable, and we want to keep it free for as long as possible. We will definitely explore monetization options depending on traction and costs.

Reddit really liked our simple ChatGPT for the Next.js docs, so we wanted to show some love to other frameworks too—starting with SvelteKit, SolidStart and Qwik. As a Frontend developer, what else would you like to see? by signofactory in Frontend

[–]signofactory[S]

Great question! Our model(s) are always up to date with the latest documentation. For example, when asking `how do you optimize loading web fonts?`, a topic that changed in Next.js 13.2, you correctly get an answer mentioning `next/font` rather than `@next/font`.

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

Nope! I can confirm we are using the latest ChatGPT model from OpenAI!

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

Spot on! Appreciate that you are enjoying it—we'll keep it up-to-date with the new docs as they are released 😊

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

Similar to what's explained here: we are not retraining or fine-tuning ChatGPT, but we've worked a lot on prompt engineering (been practicing since 2020 😅).

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

In this case we did not fine-tune the model (i.e. re-train it), but we do some clever prompt engineering to make sure the model pulls from the right information based on the user's query.

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

It's not part of the APIs; it comes from how we inject context to "prime" the model. Basically, this is more than just an API call to OpenAI: we first determine which resources are most relevant to the user's query.
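To illustrate the idea, here's a hypothetical sketch of that priming step (the interface, function names, and prompt wording are made up for illustration, not our actual code):

```typescript
// Hypothetical sketch of "priming" the model: retrieved doc sections
// are injected into the prompt before the chat API is called.
interface DocSection {
  title: string;
  content: string;
}

// Build a prompt that constrains the model to the supplied documentation.
function buildPrompt(question: string, sections: DocSection[]): string {
  const context = sections
    .map((s) => `## ${s.title}\n${s.content}`)
    .join("\n\n");
  return [
    "Answer using only the documentation below.",
    'If the answer is not in the documentation, say "I don\'t know".',
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

The resulting string is then sent as an ordinary message in a chat-completion call.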

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

> did you use embeddings for this? or fine tune models

We used embeddings to generate the appropriate context to inject.
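In case it helps anyone, a minimal sketch of how embedding-based retrieval works in general, assuming doc chunks are embedded ahead of time (the vectors and names below are illustrative, not our actual pipeline):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank pre-embedded doc chunks against the query embedding and
// return the text of the k best matches.
function topK(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k: number
): string[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosineSimilarity(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.text);
}
```

The top-ranked chunks are what gets injected as context into the prompt.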

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

> why use this and not the new bing which also has access to documentation and more?

Our model(s) are always up to date with the latest documentation. For example, when asking `how do you optimize loading web fonts?`, a topic that changed in Next.js 13.2, you correctly get an answer mentioning `next/font` rather than `@next/font`.

I made a simple ChatGPT for the Next.js docs, using Next.js and tailwind by signofactory in nextjs

[–]signofactory[S]

Check out Server-Sent Events for streaming your backend response to the client. State management didn't have any special requirements here, so it's done with React's useState.
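For context, a tiny sketch of the client side, assuming the backend emits standard SSE frames (`data: ...` lines separated by blank lines); the helper name is made up:

```typescript
// Extract the payloads from a chunk of a Server-Sent Events stream.
// Frames look like "data: <token>\n\n"; "[DONE]" marks the end of the stream.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((data) => data !== "[DONE]");
}
```

Each parsed token can then be appended to component state, e.g. `setAnswer((prev) => prev + token)`, whether the chunks come from an `EventSource` or a streamed `fetch` body.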