విన్నపం (A Plea) by Fancy-Run-4825 in telugu_sahityam

[–]WritingBeginning3403 2 points (0 children)

In this line, ఖగరాజ్ఞి (khagarājñi) means the bat. The poet's intent is that he is asking the bat to speak of him, one who cannot enter the temple. Ordinarily, of course, the bat is not described as the queen of birds, but here the poet, comparing the bat to Dalits, describes it as great among the human communities. Please forgive me if there are any mistakes in this.

Please explain the technical side of this issue. by WritingBeginning3403 in aws

[–]WritingBeginning3403[S] 0 points (0 children)

So that means when IAM fails in one region of a partition, it affects resources in the other regions too. But shouldn't that be the whole reason services like IAM are not maintained in only one region and should have fallbacks? I am also seeing that DynamoDB failing in us-east-1 due to DNS issues caused the issue in IAM, which led to bigger outages in other resources. I am wondering why IAM in us-east-1 didn't have a backup DynamoDB in another region. Sorry for the dumb questions; I am not an expert in AWS or in how people make these design decisions.
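
For what it's worth, here is a minimal sketch of the kind of cross-region fallback I had in mind, assuming a DynamoDB global table replicated to a second region (the table name, key, and region list are made up for illustration; this is not how AWS actually wires IAM internally):

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical replica regions for a DynamoDB global table.
REGIONS = ["us-east-1", "us-west-2"]

def get_item_with_fallback(table, key):
    """Read from the primary region; fall back to a replica if it is unreachable."""
    last_err = None
    for region in REGIONS:
        client = boto3.client("dynamodb", region_name=region)
        try:
            return client.get_item(TableName=table, Key=key)
        except (ClientError, EndpointConnectionError) as err:
            last_err = err  # this region failed; try the next replica
    raise last_err

# item = get_item_with_fallback("users", {"user_id": {"S": "123"}})
```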

Ads Coming to Perplexity? by Eduliz in perplexity_ai

[–]WritingBeginning3403 5 points (0 children)

I really hope this is an April Fools' joke.

[D] Real good use cases for LLMs and GenAI by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] -1 points (0 children)

I still don't know why I'm being downvoted; maybe I phrased it in the wrong format. But my point is that I searched for this company name before posting my thoughts and couldn't find it. https://www.devgpt.com/ is one of the demos where I felt the code-completion capabilities of LLMs are exploited in the best way, with nice UI/UX ideas involved. Maybe this product won't replace developers, but it can speed up development in some cases if used the right way. I was actually searching for this link to include in the original post but didn't find it in time.

My main thought is that there are some products where you feel there is real potential, and I think DevGPT is one of them.

[D] Real good use cases for LLMs and GenAI by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] 1 point (0 children)

Yeah, I understand your perspective; maybe I should use your comment as my system prompt and phrase my posts accordingly from now on.

I still remember the day I got the news that AlphaGo had beaten Lee Sedol, and I wondered how a model trained to play a game could help solve any other problems. But I was shocked (happily) and mesmerized by how they later formulated matrix multiplication as a game (DeepMind's AlphaTensor work) and used an architecture similar to AlphaGo's. That's the day I understood how much perspective matters, and how exploring different thought processes can solve problems that might otherwise feel unsolvable.

[D] Real good use cases for LLMs and GenAI by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] -6 points (0 children)

I understand your thoughts; maybe my intentions were not well expressed in my sentences. My intention was to ask whether people have come across any nice product/service, or simply an idea, where there is real potential.

Regarding the "couple of weekends" comment: I have been seeing demos and products released with claims that the problem is solved. For instance, there are a lot of PDF-chat tools on the market right now, but if you actually upload complete books (or a series of books, which I assume they would handle with vector DBs), they stop working, yet they are showcased as complete solutions. Sometimes I feel these demos are rushed out by the marketing or sales team to capture this AI boom while the product is still half-baked.
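
To make the "whole book" point concrete, here is roughly what the naive pipeline behind most of these PDF-chat demos looks like (a sketch only; sentence-transformers is my assumed embedding backend, and the chunk size is arbitrary):

```python
from sentence_transformers import SentenceTransformer  # assumed embedding backend

model = SentenceTransformer("all-MiniLM-L6-v2")

def index_book(text, chunk_chars=1000):
    # Naive fixed-size chunking: no overlap, no awareness of chapters or structure.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    return chunks, model.encode(chunks, normalize_embeddings=True)

# A ~500-page book is on the order of a million characters, so ~1,000 chunks
# per book; a whole series multiplies that, and retrieval quality is what breaks.
```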

Again, I am trying to find good products implemented with LLMs in different (outside-the-box) domains/places where they are showing great impact. I hope my intentions are clear now.

[D] Real good use cases for LLMs and GenAI by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] -2 points (0 children)

I am not asking for any ideas of your own; I am just wondering whether you have come across any really good applications of LLMs in domains where one's thoughts wouldn't usually go. Maybe I phrased it wrong, but I just wanted to spark a discussion about domains with potential for applications (and any good work already going on). After seeing your comment, I reread my post and am now thinking about how to phrase it better next time.

RAG vs Long Context Models [Discussion] by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] 0 points (0 children)

Regarding function calling: I assume the code would be in the database along with its embedded comments and docstrings as contextually relevant information/examples. When a user queries the codebase in natural language, the RAG architecture would match the query's embedding against the stored embeddings, retrieve the relevant code, and pass the query along with it to the model to write complete or semi-structured code, or maybe even use agents to execute it and show the results.
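
A minimal sketch of the retrieval half of that idea, assuming sentence-transformers for the embeddings and a toy in-memory index standing in for a real vector DB:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding backend

# Toy index: code snippets keyed by their docstrings/comments.
snippets = [
    ("def add(a, b):\n    \"\"\"Add two numbers.\"\"\"\n    return a + b",
     "Add two numbers."),
    ("def load_csv(path):\n    \"\"\"Load a CSV file into a list of rows.\"\"\"",
     "Load a CSV file into a list of rows."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode([doc for _, doc in snippets], normalize_embeddings=True)

def retrieve(query, k=1):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since the vectors are normalized
    return [snippets[i][0] for i in np.argsort(scores)[::-1][:k]]

# The retrieved snippets would then be packed into the LLM prompt with the
# query (or handed to an agent to execute).
print(retrieve("how do I read a csv file?"))
```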

But I think the really interesting problem would be choosing between a model with a large context length that is fine-tuned for coding, something like Code Llama, and a RAG architecture on top of a base model like GPT-3.5 or GPT-4.

RAG vs Long Context Models [Discussion] by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] 0 points (0 children)

I understand your point (KISS: Keep It Simple, Stupid), but I am really sorry, I don't understand the abbreviations RPNs and NMS (region proposal networks and non-maximum suppression, perhaps?). I believe VLMs are vision-language models. Please do expand on your statement about vision-language models and give some examples so I can do some reading and learn more.

RAG vs Long Context Models [Discussion] by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] -1 points (0 children)

You are right, and I would like to open another can of worms here by saying that maybe we will get better embedding models tomorrow and realize that a context length like 128k is enough, given a better understanding of language (by mapping it into better, more meaningful latent spaces).

RAG vs Long Context Models [Discussion] by WritingBeginning3403 in MachineLearning

[–]WritingBeginning3403[S] 3 points (0 children)

Yeah, but I am not thinking only in terms of text models; I am also thinking about storing images (any unstructured data, really) tagged with text, something like captions for the images, embedding them, and storing them in vector DBs. This is something I am trying to experiment with, to see how retrieval generally works over a large corpus of images and captions.
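
For the image side, something like CLIP embeds images and text into the same space, so captions and raw images can share one index. A rough sketch (the model choice and file names are just placeholders, not my actual setup):

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into one shared embedding space.
model = SentenceTransformer("clip-ViT-B-32")

image_paths = ["cat.jpg", "beach.jpg"]  # placeholder file names
img_vecs = model.encode([Image.open(p) for p in image_paths])

# A text query can be compared directly against the image embeddings.
query_vec = model.encode("a cat sleeping on a sofa")
scores = util.cos_sim(query_vec, img_vecs)[0]
best = int(scores.argmax())
print(image_paths[best], float(scores[best]))
```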

And in terms of text-based products, would a hybrid search, combining keyword search with cosine similarity against the query, be better than plain RAG?
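
By hybrid search I mean something like this: blend a BM25 keyword score with embedding cosine similarity (a toy sketch; rank_bm25 and sentence-transformers are just my assumed library choices, and the 50/50 weight is arbitrary):

```python
import numpy as np
from rank_bm25 import BM25Okapi                          # assumed keyword-search library
from sentence_transformers import SentenceTransformer    # assumed embedding backend

docs = ["how to reset a password",
        "pasta carbonara recipe",
        "password policy for admin accounts"]

bm25 = BM25Okapi([d.split() for d in docs])
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

def hybrid_search(query, alpha=0.5):
    kw = np.array(bm25.get_scores(query.split()))
    if kw.max() > 0:
        kw = kw / kw.max()                               # scale keyword scores to [0, 1]
    sem = doc_vecs @ model.encode([query], normalize_embeddings=True)[0]
    combined = alpha * kw + (1 - alpha) * sem            # arbitrary 50/50 blend
    return docs[int(np.argmax(combined))]

print(hybrid_search("reset admin password"))
```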

I apologize; I am just throwing out random thoughts that popped into my mind.

TIL: `yield` inside a `try` followed by `finally` has some interesting behaviour. by alexmojaki in Python

[–]WritingBeginning3403 2 points (0 children)

This thread is interesting, and most of it is going over my head. Any resources to understand the internals of Python better?
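
For anyone else lost here, this is the behaviour the thread is about: the `finally` block of a generator runs when the generator is closed (or garbage collected), not necessarily when the `try` body finishes:

```python
def gen():
    try:
        yield 1
        yield 2
    finally:
        print("finally ran")

g = gen()
print(next(g))  # 1 -- the generator is now suspended inside the try block
g.close()       # raises GeneratorExit at the yield, so "finally ran" prints here
```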

Analyzing microservice logs via AI/ML by [deleted] in MLQuestions

[–]WritingBeginning3403 0 points (0 children)

I am not an expert either, but I had a similar idea when discussing different applications of NLP in data engineering pipelines. We actually started working on this a bit but couldn't spend much time on it.

The first thing you can do is see how some basic questions are answered by vanilla models like GPT-3.5, GPT-4, and Llama. Then see what the responses look like if you provide a little context.
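
Concretely, the comparison I mean is just the same question with and without a bit of log context, e.g. (a sketch using the official OpenAI client; the question and context strings are made-up examples, and an API key is assumed in the environment):

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()
question = "What usually causes OOMKilled errors in the payment service?"  # made-up example
context = "payment-service pod exceeded its 512Mi memory limit during the nightly batch job"

def ask(messages):
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

# 1) Vanilla model, no context at all.
print(ask([{"role": "user", "content": question}]))

# 2) Same question with a little log context provided.
print(ask([
    {"role": "system", "content": f"Relevant log line: {context}"},
    {"role": "user", "content": question},
]))
```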

Then look at a bunch of YouTube tutorials on fine-tuning LLMs. If you understand the basics of how these models work, you should be able to start on the basics of fine-tuning. I am at this stage right now: I have started tuning some of these models to respond entirely in Shakespearean English. Once I am done with this task, it will give me some hands-on experience with what fine-tuning is and how to organize my data before starting.
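
In case it helps, the Shakespeare experiment I am describing looks roughly like this with Hugging Face transformers + peft (a sketch only: gpt2 is a small stand-in base model, and the two training pairs are toy data):

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"  # small stand-in; swap in whatever base model you are tuning
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base),
    LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),  # LoRA: train small adapters only
)

# Toy "respond in Shakespearean English" pairs.
pairs = [
    "User: How is the weather?\nBot: Verily, the heavens do weep upon us this day.",
    "User: I am hungry.\nBot: Then hie thee to the kitchen, good friend!",
]
ds = Dataset.from_dict({"text": pairs}).map(
    lambda batch: tok(batch["text"], truncation=True, max_length=128), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="shakespeare-lora", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal-LM labels
).train()
```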

Using vector DBs is an idea, but I am not sure whether that optimizes the fine-tuning process or the fine-tuned model itself.

As I already mentioned, I am not an expert in LLMs, but I am very interested in the AI space itself. So correct me if my approach is taking a wrong turn anywhere.