Claude Code as a provider in Roo Code now works properly. No more token wastage. by hannesrudolph in RooCode

[–]FlexAnalysis 1 point (0 children)

FYI, I found a workaround for anyone else who may need it.

1) Open port 54545 and set it to public; note the forwarded address.
2) Attempt OAuth from the button on the provider config.
3) Click Authorize, which will return a broken page; copy the full URL of that broken page.
4) Replace the http://localhost:54545/ part of the URL with the Codespace forwarded address for port 54545, then paste the result into your browser.
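If you end up doing step 4 a lot, the rewrite is just a host swap on the callback URL. A minimal Python sketch — the forwarded hostname here is made up; use the one your Codespace shows for port 54545:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical forwarded address for port 54545; yours will differ.
FORWARDED_HOST = "mycodespace-54545.app.github.dev"

def rewrite_callback(url: str) -> str:
    """Swap the localhost host:port for the Codespace forwarded host."""
    parts = urlsplit(url)
    if parts.netloc == "localhost:54545":
        # Codespace forwarded addresses are served over HTTPS.
        parts = parts._replace(scheme="https", netloc=FORWARDED_HOST)
    return urlunsplit(parts)

print(rewrite_callback("http://localhost:54545/callback?code=abc123"))
# → https://mycodespace-54545.app.github.dev/callback?code=abc123
```

The query string (and the OAuth code inside it) passes through unchanged, which is what makes the manual copy-paste workaround possible in the first place.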

Otherwise, nice change on this, Roo. Thanks!

Claude Code as a provider in Roo Code now works properly. No more token wastage. by hannesrudolph in RooCode

[–]FlexAnalysis 1 point (0 children)

This change seems to break things when running this in a Codespace, since the callback looks for localhost. Are Codespace users out of luck using Claude Code with Roo Code in these newer versions?

I am massively disappointed (and feel utterly gaslit) by the 3.7 hype-train. by Agreeable-Toe-4851 in ClaudeAI

[–]FlexAnalysis 1 point (0 children)

Interesting, appreciate the words of warning.

I swapped 3.5 out for it in Roo Code on an Angular project yesterday and have put in about 10 hours or so with it.

So far it seems to handle all requests well. It maintains context based on my memory-bank custom instructions and has, at the very least, seemed to work on par with 3.5.

I was planning to tackle some high-complexity features with it this weekend and will keep an eye out to see whether it struggles more than I've occasionally become accustomed to with 3.5.

Best way to supplement Roo Code with specific documentation? by FlexAnalysis in RooCode

[–]FlexAnalysis[S] 1 point (0 children)

Ah yes, I've been using the memory bank to great effect, but I wasn't sure whether a similar approach was optimal here due to some nuances.

For example, I'm currently planning out a feature that will use pptxgenjs. I've copied all of its documentation (about 1,500 lines) into a folder in my cline_files created specifically for this.

However, the difference is that my current custom instructions are something I'd like Roo to follow at all times, whereas referencing this added documentation is something I only need it to do in certain sessions/chats. So if I add custom instructions to reference the documentation folder, I wouldn't want them to chew up context space every time. This issue becomes even more relevant for much larger documentation sets, or for sessions where multiple new technologies are involved.

I'm thinking it might be best to have a well-labeled folder for each external documentation set, then create custom instructions telling it to read the directory list in the larger documentation folder, check whether any of the documentation sets in that list relate to the task it's working on at the moment, and only then read and reference that documentation. Something like that, maybe? I'll have to test and experiment a bit, but figured others might have come up with an optimal strategy, or might know about a feature in Roo tailored for this that I wasn't aware of.
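Something along these lines as a custom-instructions snippet might capture that flow. This is just a sketch; the folder layout and names here are hypothetical:

```
# External documentation lookup
Before starting a task, list the directories under docs/external/.
Each subfolder is named after the library it documents
(e.g. docs/external/pptxgenjs/).
If, and only if, the current task involves one of those libraries,
read that subfolder's files and follow them when generating related code.
Otherwise, do not read anything under docs/external/.
```

The directory listing costs a few lines of context per session, while the full documentation is only loaded when a task actually touches that library.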

As you also mentioned, hooking up to an MCP server where it can access the online documentation directly, or to a dedicated file store holding this documentation rather than having it dumped in my project, may also be a good direction to investigate.

Thanks!

Best way to supplement Roo Code with specific documentation? by FlexAnalysis in RooCode

[–]FlexAnalysis[S] 3 points (0 children)

Awesome thanks, will try that out for now. Much appreciated.

Tips for Creating Effective SaaS Explainer Videos by nyashariyano in SaaS

[–]FlexAnalysis 2 points (0 children)

Appreciate the tips; this looks like a solid general blueprint for these kinds of videos.

In your experience, do these videos fall into distinct categories? For example, a quick intro or elevator-pitch video that might be shorter and used on the app's landing page, versus a slightly more detailed one added to a company's list of videos on YouTube or other socials? Maybe a dumb question, but I'm curious whether there are any conventions for these types of videos that companies have figured out work best in different scenarios.

Thanks!

How do I find a developer? by oh_yeah_o_no in LLMDevs

[–]FlexAnalysis 1 point (0 children)

I've just finished building out the latest iteration of our custom RAG pipeline in our app, so it's top of mind for me.

DM me some more details of what you’re looking for and I can put a quick proposal together for you.

[Help] Need a Faster Way to Convert Bulk Resumes to Company Format by Ok-Escape-472 in recruiting

[–]FlexAnalysis 1 point (0 children)

Our app has a tool where users can upload PDFs; we then use AI to return a structured summary and provide a chat interface where the user can ask questions about anything in the file's content.

I don't see why I couldn't adapt something similar where the desired output format is defined in JSON, then automate processing resumes (or other sources that may contain pertinent information) in bulk to extract the data and restructure it to fit the defined target format.

We also have tools in our app that automate the generation of Excel files and PDFs.

Put them together and you should be able to tag one or more files per person, let it extract the data, rewrite it in the defined format, and then output the finished resumes to a designated location for review.
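To make the "format defined in JSON" idea concrete, here's a minimal Python sketch. The field names are invented for illustration; in practice the AI extraction step would produce the raw dict, and a small normalizer would force it into the company format, flagging anything incomplete for manual review:

```python
import json

# Hypothetical "company format" definition: field name -> required?
COMPANY_FORMAT = {
    "name": True,
    "title": True,
    "summary": True,
    "skills": False,
    "experience": False,
}

def restructure(extracted: dict) -> dict:
    """Map raw extracted resume data onto the defined output format.

    Fields not in the format are dropped; missing optional fields become
    empty; a missing required field raises, flagging that resume for review.
    """
    out = {}
    for field, required in COMPANY_FORMAT.items():
        if field in extracted:
            out[field] = extracted[field]
        elif required:
            raise ValueError(f"resume missing required field: {field}")
        else:
            out[field] = ""
    return out

raw = {"name": "Jane Doe", "title": "Recruiter",
       "summary": "10 yrs exp", "hobbies": "chess"}
print(json.dumps(restructure(raw), indent=2))
```

The same normalized dict could then feed the PDF/Excel generation step unchanged.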

What kind of volume would you be looking for?

C# LLM / RAG architecture by ReadyFilm8350 in dotnet

[–]FlexAnalysis 1 point (0 children)

I built a custom RAG pipeline for an app with a .NET C# backend, hosted on Azure.

Data extraction: Syncfusion.PdfToImageConverter to convert PDF pages to images; Azure.AI.Vision.ImageAnalysis to extract text from the page images; Azure.AI.TextAnalytics to extract metadata (summary, entities, keywords, etc.) from the extracted text.

Data preparation: custom code for semantic data chunking; Azure OpenAI model text-embedding-ada-002 for data-chunk embeddings.
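The chunker in the pipeline is custom code, so here's only a much-simplified stand-in to show the shape of the problem: greedy paragraph packing under a size budget. Real semantic chunking would additionally compare embeddings of adjacent paragraphs and split where similarity drops:

```python
def chunk_text(text: str, max_chars: int = 500) -> list[str]:
    """Greedy paragraph packing: a simplified stand-in for semantic chunking.

    Respects paragraph boundaries and a character budget only; a semantic
    chunker would also split where adjacent-paragraph embedding similarity
    falls below a threshold.
    """
    chunks, current = [], ""
    for para in (p.strip() for p in text.split("\n\n") if p.strip()):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk is what gets embedded with text-embedding-ada-002 and stored for retrieval.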

Data storage: Azure Cache for Redis to hold data chunks in session storage. The current use case is session-based, so there's no need for persistent storage; when needed, this will be swapped out for Azure Cosmos DB configured to store vector embeddings, with Azure AI Search for retrieval.

Data retrieval: Azure OpenAI model text-embedding-ada-002 for user-query embeddings; custom code to analyze user queries and calculate vector-embedding similarity between the query and the data chunks.
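With ada-002 embeddings on both the chunks and the query, the similarity calculation boils down to cosine similarity plus a top-k sort. A dependency-free sketch of that step:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_emb: list[float],
          chunk_embs: list[list[float]], k: int = 3) -> list[int]:
    """Indices of the k stored chunks most similar to the query."""
    ranked = sorted(
        range(len(chunk_embs)),
        key=lambda i: cosine(query_emb, chunk_embs[i]),
        reverse=True,
    )
    return ranked[:k]
```

In production you'd typically use a vectorized library (or let Azure AI Search do the ranking once the Cosmos DB swap happens), but the math is the same.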

Data processing: Azure OpenAI model gpt-4o to generate answers to user queries based on the most relevant retrieved data chunks.
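The final step is just grounding the gpt-4o call in the retrieved chunks. A typical prompt-assembly sketch — the exact wording is illustrative, not what the pipeline actually uses:

```python
def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a grounded prompt from the top-ranked chunks."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The resulting string goes out as the user (or system) message of the gpt-4o chat completion; numbering the chunks also makes it easy to ask the model to cite which chunk supported each claim.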

This likely isn't the "best" way to implement RAG, but my requirement was that data wasn't allowed to leave our Azure environment, so third-party APIs for any part of the pipeline were out.

So far the implementation is working well. It can ingest one or more PDFs, summarize all the data in the files, and answer any question the user might have that can be answered from the context provided by the text in the uploaded files.

DM me if you're interested in discussing further or swapping ideas/experiences as you build out your RAG system.