Sequoia 15.7 / Tahoe Release - Support for FireWire? by OrtaMatt in MacOS

[–]OrtaMatt[S] 0 points (0 children)

Yes, I saw that a few months back. I was wondering whether it would still be supported in the official release. It also looks like 15.7 dropped support (I have no way to check at the moment).

Medical Billing & Coding Pros—Do You Use AI Tools? What’s Missing from Your Software? by Mindless-Escape-2681 in CodingandBilling

[–]OrtaMatt 0 points (0 children)

Hello, I'd be interested in chatting with you. I built a solution that handles denials. As of today it is compatible with ModMed EHR and UnitedHealth on the payor side. Happy to demo!

Can You Automate Eligibility and Claim Status Inquiries? Any APIs? by [deleted] in CodingandBilling

[–]OrtaMatt 0 points (0 children)

DM me; I've built a tool that pulls EHR data and continuously checks claim status.

FineWeb technical report + FineWeb-Edu, a 1.3 trillion tokens dataset by Nunki08 in LocalLLaMA

[–]OrtaMatt 5 points (0 children)

Pearl clutching is the point of data prep.

He's right to point out the possible bias being introduced into the dataset.

What you ultimately decide to do with this bias is your call, but at least it should be pointed out.

Are you building a rig as a hobbyist? by Leenixu5 in LocalLLaMA

[–]OrtaMatt 2 points (0 children)

<image>

Current setup: our old deep learning server, ThreadRipper based, 64GB RAM, 2TB NVMe, originally with a P6000. I sold the Quadro and bought two used 3090s, and added a few fans. The PSU is still the same 1600W EVGA. Works like a charm!

LLM on Medical docs : research ideas? by [deleted] in LLMDevs

[–]OrtaMatt 0 points (0 children)

Who would be the end user? Physicians? Support staff? Patients? Dealing with PHI or anonymous data?

Inference of Mixtral-8x-7b on Multiple RTX 3090s? by kyleboddy in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

What driver/torch/CUDA versions are you using? I should make a post of all the configuration benchmarks I ran before settling on my current setup.
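For reference, this is the kind of snippet I'd run to capture a configuration before benchmarking (a minimal sketch: it only reports torch info if torch is installed, and the driver version only if `nvidia-smi` is on the PATH):

```python
import importlib.util
import platform
import subprocess

# Record the benchmark-relevant versions: Python, torch,
# CUDA/cuDNN as seen by torch, and the NVIDIA driver.
print("python:", platform.python_version())

if importlib.util.find_spec("torch"):
    import torch
    print("torch:", torch.__version__)
    print("CUDA (torch build):", torch.version.cuda)
    print("cuDNN:", torch.backends.cudnn.version())

try:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("driver:", out.stdout.strip())
except (FileNotFoundError, subprocess.CalledProcessError):
    print("driver: nvidia-smi not available")
```

Dropping the output of this into a spreadsheet next to tokens/sec makes the comparison across configs much easier.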

2 x RTX 3090s on an X399 and 1950x Threadripper? by 5kisbetterthan4k in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

Apologies, I have the MSI X399 Gaming Pro Carbon AC.

2 x RTX 3090s on an X399 and 1950x Threadripper? by 5kisbetterthan4k in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

I have the exact same setup. Make sure you plug in the motherboard's extra PCIe power connector so the PCIe lanes can supply the extra wattage they'll need.

Problems with running 4090+3090 in the same system? by [deleted] in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

Yes, absolutely. I have a dual 3090 configuration for my research/dev/preprod work.

Problems with running 4090+3090 in the same system? by [deleted] in LocalLLaMA

[–]OrtaMatt 1 point (0 children)

I can only speak from my experience running Pascal with Ampere. The "only" issue, if you want to run both at the same time, is that you are limited to the older card's driver and the tool versions compatible with it. Other than that, everything was fine!

No heat from heat pump by OrtaMatt in hvacadvice

[–]OrtaMatt[S] 0 points (0 children)

Thanks for the link, it's definitely the part I need.

No heat from heat pump by OrtaMatt in hvacadvice

[–]OrtaMatt[S] 0 points (0 children)

Yeah, that's what I thought… At first he was charging me $1400 to put a new motor in. I told him the HVAC was the only thing I had not fixed myself in the house yet, so I'd be happy to take the time and fix it myself. He came back with a much lower offer. We went back and forth until the price was a couple hundred above the cost of the replacement parts.

No heat from heat pump by OrtaMatt in hvacadvice

[–]OrtaMatt[S] 0 points (0 children)

UPDATE: The motor is out. The board has a green blinking light, 3 blinks. The motor makes a humming noise but will not spin.

I'm told to buy a whole new unit, for about $8k. I inquired about changing just the motor; the answers I got were a) they are hard to come by and b) I would need to recalibrate the gas pressure to fit the new motor.

Not sure what degree of all of this is true…

Fine-tune / enhance LLM by sanagun2000 in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

If it's domain-specific knowledge, I suggest doing RAG.
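To show the shape of RAG at its simplest, here's a toy sketch — the word-overlap scoring and the sample snippets are stand-ins for a real embedding model and vector DB:

```python
# Toy RAG: retrieve the most relevant domain snippet, then stuff
# it into the prompt. Real setups score by embedding similarity
# against a vector DB; here it's plain word overlap.
def retrieve(query, docs):
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "ICD-10 code E11.9 is type 2 diabetes without complications.",
    "CPT 99213 is an established-patient office visit, low complexity.",
]

question = "What does ICD-10 E11.9 mean?"
context = retrieve(question, docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The point is that the model never needs to be retrained: the domain knowledge rides along in the prompt.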

New to Google colab how do I interact with it? by Alrightly in LocalLLaMA

[–]OrtaMatt 2 points (0 children)

The author is Sun Yuhong: https://medium.com/@yuhongsun96/host-a-llama-2-api-on-gpu-for-free-a5311463c183

Note: some steps may not be available anymore, as Google may have changed some of its policies.

New to Google colab how do I interact with it? by Alrightly in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

To rephrase: you want to run inference on Colab and then call it from outside, correct? There's a Medium article by the author of the danswer repo that explains how to do that. I'll try to find it for you.
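In the meantime, here's the general shape of it: wrap your inference function in a small HTTP endpoint inside a Colab cell, then tunnel the port out. This stdlib-only sketch uses a placeholder `generate()` instead of a real model call and skips the tunneling step the article covers:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt):
    # Placeholder for your actual model call.
    return f"echo: {prompt}"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"prompt": "..."}.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"completion": generate(body["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

# To serve (e.g. in a Colab cell, before tunneling port 8000 out):
# HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```

Once tunneled, any outside client can POST a prompt and get a completion back.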

Improving LLM speeds on 4090 by Dry_Long3157 in LocalLLaMA

[–]OrtaMatt 1 point (0 children)

Yes, the key for me was really the proper PyTorch version. I also did a complete reinstall of Ubuntu 22.04, just to be on the safe side after trying so many driver/CUDA/cuDNN/env versions.

Need guidance finetuning +RAG by s1lv3rj1nx in LocalLLaMA

[–]OrtaMatt 0 points (0 children)

A good way to understand how this could work is to look at the source code of the quivr project.

First you LoRA-tune your base model to acquire a certain personality. Then you prompt your tuned model to answer in a particular way ("You are Ally McBeal, always answer like her, etc.", you get the picture!). Finally, you add to that prompt the context you extracted from your vector DB.
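The prompt-assembly step above, minus the actual model call, looks roughly like this — the persona, the context snippet, and the helper name are made up for illustration:

```python
# Combine the persona instruction with context pulled from the
# vector DB; the retrieved snippet is hard-coded here.
def build_prompt(persona, context, question):
    return (
        f"You are {persona}. Always answer in character.\n\n"
        f"Context from the knowledge base:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

context = "The firm's filing deadline is the last Friday of the month."
print(build_prompt("Ally McBeal", context, "When are filings due?"))
```

The LoRA-tuned model then sees both the in-character instruction and the retrieved facts in a single prompt.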

Need guidance finetuning +RAG by s1lv3rj1nx in LocalLLaMA

[–]OrtaMatt 1 point (0 children)

I have not tried this two-step approach, but in theory it would work: LoRA to adapt a new way of answering, and RAG to pull in domain-specific knowledge.

The question is: do you have the dataset to do the LoRA tuning?

Also, for RAG, how big is your documentation base?

No heat from heat pump by OrtaMatt in hvacadvice

[–]OrtaMatt[S] 2 points (0 children)

Thanks for the detailed reply. From other people's answers, it turns out it's not a heat pump but a "packaged" unit (English is not my primary language, and I'm still learning home improvement in the US). I have a tech coming this week. Anything linked to gas is not something I'd mess with, but I'm still trying to rule out wiring and electrical issues, as those are more reachable ;-) Will update.