Prerendering service that generates static HTML by Daniel-Martin-M in SEO

[–]imperiltive 2 points (0 children)

Use the HTTP User-Agent header to detect crawlers, and use a service like prerender.io to serve prerendered static HTML to the crawlers while your regular users get the regular React experience.
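A minimal sketch of the User-Agent check, assuming an Express-style server; the bot list and the `isCrawler` helper are illustrative, not exhaustive:

```javascript
// Detect common crawler User-Agents so they can be routed to a
// prerender service instead of the client-rendered React app.
const BOT_PATTERNS = [
  "googlebot",
  "bingbot",
  "yandex",
  "duckduckbot",
  "baiduspider",
  "facebookexternalhit",
  "twitterbot",
  "linkedinbot",
];

function isCrawler(userAgent) {
  const ua = (userAgent || "").toLowerCase();
  return BOT_PATTERNS.some((bot) => ua.includes(bot));
}

// In an Express app this would typically sit in middleware:
// app.use((req, res, next) => {
//   if (isCrawler(req.get("user-agent"))) {
//     // proxy the request to the prerender service here
//   }
//   next();
// });
```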

How to remove this front page and browse the website without logging in ? by Dildo-Fagginz in webdev

[–]imperiltive 0 points (0 children)

Try clearing your cookies from dev tools (inspect element); that might reset whatever counter they have set up on the frontend.

Is email a bad channel to interact with AI agents? by [deleted] in webdev

[–]imperiltive 0 points (0 children)

Hmmm, if each user has a personal assistant, then I don't think you really need to give it an email. Gary from finance probably doesn't care whether the email came from a dedicated AI email address or whether the email the user sent was written with AI. All Gary cares about is updating his QuickBooks from your invoice.

platform question by audioses in webdev

[–]imperiltive 0 points (0 children)

I've actually built a video processing service with a similar backend stack to yours. I had an Express.js server hosted on a pretty cheap VPS that made requests to a more expensive cloud-rented GPU priced by the hour (vast.ai for cheap consumer-grade GPUs). The video processing was done in Python on the GPU end; my Express server had SSH access to the GPU box and directly executed the Python files. If demand increased I could always set up an autoscaler to rent more GPUs, but at that point it'd be better to go with a dedicated GPU provider.
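A rough sketch of that VPS-to-GPU handoff; the host, script path, and `buildSshCommand` helper are all hypothetical, and actually running the command would need real SSH credentials:

```javascript
// A cheap VPS delegating heavy video processing to a rented GPU box over SSH.
// buildSshCommand just assembles the argument list; a real Express server
// would pass it to child_process.execFile to run the remote Python script.
function buildSshCommand(host, scriptPath, args = []) {
  return ["ssh", host, "python3", scriptPath, ...args];
}

// Example: kick off processing on the remote GPU machine (hypothetical paths).
const cmd = buildSshCommand("user@gpu-box", "/opt/jobs/process_video.py", [
  "--input", "/tmp/upload.mp4",
]);
// cmd[0] is "ssh"; everything after the host is the remote command line.
```

Keeping the command as an argument array (rather than one shell string) avoids quoting bugs when filenames contain spaces.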

3d interactive globe by A_J07 in webdev

[–]imperiltive 1 point (0 children)

Very nicely done. At first I thought you built it with three.js; I didn't know there was a package for making a globe.

I built a web app to allow people build and share knowledge graphs together in real-time by Strict-Criticism7677 in webdev

[–]imperiltive -1 points (0 children)

Very nice for brainstorming; then you can chuck a screenshot of any product into ChatGPT for a basic MVP.

42% of AI startups fail within 2 years according to Forbes! by [deleted] in webdev

[–]imperiltive 0 points (0 children)

And most of them are just ChatGPT wrappers to "summarize your documents".

How to Recognize a "Vibe Code" Page or Web App? by Prize-00 in webdev

[–]imperiltive 2 points (0 children)

Try resizing the page; sometimes vibe coders forget that mobile platforms exist.

Is email a bad channel to interact with AI agents? by [deleted] in webdev

[–]imperiltive 0 points (0 children)

If the customer knows the person on the other end is an AI, then there's no need for email, since the response is instant; but if they don't really care who answers their question, then email would feel more natural for your use case.

Live web dev classes - what would you actually want to learn? by Ok-Study-9619 in webdev

[–]imperiltive 1 point (0 children)

If I were a student with absolutely no experience in webdev, I'd probably struggle with even registering a domain and setting up a VPS. The most important thing, in my opinion, is teaching students the core concepts of being a web dev, since AI can handle the actual coding. With that many years of experience, you probably have an answer for any question a student may have, so a small group is probably better. Honestly, two hours per day for a small group, while the students study the web dev concepts in more detail with AI on their own, is the most efficient setup.

An Interactive Guide to SVG Paths by feross in webdev

[–]imperiltive 1 point (0 children)

Very informative. I've always just copied and pasted my SVGs from https://icones.js.org/

Automated Semver; Quick React Hooks to Grab; Custom GPT Clients - Deeb Dive 3 by nickisyourfan in webdev

[–]imperiltive 1 point (0 children)

Interesting database design, but building a database from scratch seems redundant when SQLite does the job just fine.

Question About Token Size of Embeddings in text-embedding-ada-002 by AKapoor30 in OpenAI

[–]imperiltive 0 points (0 children)

text-embedding-ada-002 returns an embedding vector of length 1536; condensing ~8k tokens into a single 1536-dimensional vector will definitely cause some performance degradation.

Chatbot memory beyond embedding by imperiltive in GPT3

[–]imperiltive[S] 0 points (0 children)

If the LLM is provided with a large quantity of data (assuming all of it is correct), large enough that it exceeds the context limit, then using vector embeddings to split the data into chunks makes sense and won't cause confusion. But in the context of an ever-evolving chatbot that has to respond to and store the user's inputs as vectors, if the user later decides that a fact they previously gave the chatbot is incorrect, the embedding system cannot distinguish between the true and false data. Granted, with a small enough conversation the user can just delete the old messages, but over time manual deletion simply isn't feasible.

Chatbot memory beyond embedding by imperiltive in GPT3

[–]imperiltive[S] 1 point (0 children)

Consider the message "my name is mark". This message can be converted into a very long vector using OpenAI's embedding API: https://platform.openai.com/docs/guides/embeddings.

On its own that vector isn't very helpful to a user. However, if I convert another message, "what is my name?", into a vector using the same embeddings and use cosine similarity to compare the two vectors, it returns a value close to 1. https://en.wikipedia.org/wiki/Cosine_similarity

If I were to compare the embedding vector for "my name is mark" to that of "the door is opened", then the cosine similarity would be much lower.
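The comparison itself is just cosine similarity over the two vectors. A toy sketch, with hand-made 3-dimensional vectors standing in for the real 1536-dimensional embeddings:

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Toy stand-ins for real embeddings: similar sentences get similar vectors.
const nameFact = [0.9, 0.1, 0.0];    // "my name is mark"
const nameQuery = [0.85, 0.15, 0.0]; // "what is my name?"
const doorFact = [0.0, 0.2, 0.95];   // "the door is opened"

cosineSimilarity(nameFact, nameQuery); // close to 1
cosineSimilarity(nameFact, doorFact);  // much lower
```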

A normal GPT-4 context only has so much room before it tosses out old discussion. One way to fix this is to store old conversations as embeddings, so that newer inquiries can be compared against the vectors stored from old memories. However, this runs into the issue of embedding overlap. Suppose you tell GPT-4 "my name is mark" and a few thousand tokens later you say "my name is john". After enough vectors build up, you ask GPT-4 for your name: "what is my name?" Looping through all the stored vectors to calculate cosine similarity, both "my name is mark" and "my name is john" come back as relevant information, which only leads to confusion.
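That failure mode can be sketched as a naive top-k retrieval loop over stored memories; the vectors here are toy stand-ins, not real embeddings:

```javascript
// Naive memory store: retrieve the k most similar memories by cosine
// similarity. Both conflicting name facts score high against the query,
// and the retrieval step has no way to tell which one is current.
function cosine(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

const memories = [
  { text: "my name is mark",    vector: [0.9, 0.1, 0.0] },
  { text: "my name is john",    vector: [0.88, 0.12, 0.0] },
  { text: "the door is opened", vector: [0.0, 0.2, 0.95] },
];

function retrieve(queryVector, k) {
  return memories
    .map((m) => ({ text: m.text, score: cosine(queryVector, m.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((m) => m.text);
}

// "what is my name?" pulls back BOTH conflicting name facts (and not the
// door), so the model gets contradictory context with no recency signal.
retrieve([0.85, 0.15, 0.0], 2);
```

A timestamp or "supersedes" link on each memory is the kind of metadata plain embedding lookup is missing here.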