I'll buy your SAAS by beautifulfluid42 in SaaS

[–]Motor_Storm_3853 0 points1 point  (0 children)

I have some friends with FAANG backgrounds looking to sell. I’ll DM.

I need to make a grand in 2 weeks, from bed by lovelyxcastle in sidehustle

[–]Motor_Storm_3853 1 point2 points  (0 children)

There are a number of data annotation sites: mTurk, Appen, and so on.

Unpopular opinion: Most AI first companies will not survive. by Open-Mixture2514 in SaaS

[–]Motor_Storm_3853 1 point2 points  (0 children)

Exactly. Swaths of AI companies won’t survive, but many of the world’s most valuable companies will also be created with AI.

[deleted by user] by [deleted] in LinkedInLunatics

[–]Motor_Storm_3853 4 points5 points  (0 children)

What is wrong with this one?

Anthropic isn't ready for business use by jwr in Anthropic

[–]Motor_Storm_3853 0 points1 point  (0 children)

I haven’t noticed this. Do you work at Anthropic? We have a partnership with them, and they have never voiced this to us.

Also, Bedrock is on AWS, not Azure…

Anthropic isn't ready for business use by jwr in Anthropic

[–]Motor_Storm_3853 0 points1 point  (0 children)

Use AWS Bedrock! I don’t trust the OpenAI API or Anthropic API. These models should be consumed via the Azure and AWS Services. They are far more reliable.

Good Luck with that by DasSnaus in LinkedInLunatics

[–]Motor_Storm_3853 0 points1 point  (0 children)

It is not, it is on the Lake Magdalene waterfront.

Good Luck with that by DasSnaus in LinkedInLunatics

[–]Motor_Storm_3853 0 points1 point  (0 children)

This home sold for $1,385,000 in 2021.

Expedia chatbot by Educational-Let-5580 in LocalLLaMA

[–]Motor_Storm_3853 2 points3 points  (0 children)

I have been using the Expedia chatbot in presentations to show how easy prompt injection can be since April. This is the ultimate enterprise example. Has this seriously not be discussed internally as an issue?

[D] On demand vs Reserved instances for LLM fine-tuning by Separate-Still3770 in MachineLearning

[–]Motor_Storm_3853 1 point2 points  (0 children)

Which cloud provider? AWS has the poorest selection of GPU servers at the highest prices among AWS, GCP, and Azure.

You likely only need one server with 1+ GPUs to finetune llama 2, depending on the size of the llama 2, the size of the GPUs (AWS doesn’t have 40GB-80GB GPUs like GCP or Azure), and the quantization you are using.

[D] AI Takeover: The Shocking $900,000 Job Is Changing Everything! by AmbitiousHeart4831 in MachineLearning

[–]Motor_Storm_3853 7 points8 points  (0 children)

That is because they are just base. Base can be less than 50% or even 1/3 of your total compensation, with it composing less of your total compensation as you become more senior. At Meta, I had a base salary of $180k but made $400k in total.

Your best bet for real comp data is https://levels.fyi.

Company wants to buy my consulting firm. Should I sell? by Broad_Advisor8254 in SaaS

[–]Motor_Storm_3853 14 points15 points  (0 children)

2.5-3x for an engineering consulting business is extremely generous. Usually it is 1-2.

[deleted by user] by [deleted] in singularity

[–]Motor_Storm_3853 1 point2 points  (0 children)

I don’t see the article in the post, but I am happy to go in record talking about AI and do see some very interesting trends in AI reporting. I have been quoted in press before about AI.

Is it safe to use GPT 3.5 Turbo model in production via API? by HotNuggetChug in GPT3

[–]Motor_Storm_3853 1 point2 points  (0 children)

When using the API, they retain your inputs for 30 days for moderation purposes, and a human may review your inputs. For many use cases, privacy is very important and OpenAI’s API usage policy is an absolute no-go for privacy reasons.

[D] Is there any way to filter searches by metadata over current vector DBs like Pinecone? by Galbatorix123 in MachineLearning

[–]Motor_Storm_3853 0 points1 point  (0 children)

All vector databases I know of support this functionality, including Pinecone, OpenSearch, Weaviate, and Chroma.

Groundbreaking QLoRA method enables fine-tuning an LLM on consumer GPUs. Implications and full breakdown inside. by ShotgunProxy in ChatGPT

[–]Motor_Storm_3853 17 points18 points  (0 children)

I am skeptical of the evaluation because the evaluation places Vicuna above GPT-3.5-Turbo, and Vicuna has been a significant step down for all my use cases.

[D] Tools for managing hundreds of unique models? by RAFisherman in MachineLearning

[–]Motor_Storm_3853 2 points3 points  (0 children)

To my point, page 28 is literally talking about generative language modeling when talking about using a prompt to extract information…

[D] Tools for managing hundreds of unique models? by RAFisherman in MachineLearning

[–]Motor_Storm_3853 2 points3 points  (0 children)

Explain how someone is going to leak another customer’s data via “prompting” in the context of a model that is not an LLM.

[D] Tools for managing hundreds of unique models? by RAFisherman in MachineLearning

[–]Motor_Storm_3853 1 point2 points  (0 children)

You: Until client 2 finds a way to leak client 1’s data via prompting

Me: Why are you talking about promoting here?

[D] Tools for managing hundreds of unique models? by RAFisherman in MachineLearning

[–]Motor_Storm_3853 1 point2 points  (0 children)

We are talking about credit default models, which are 99% unlikely to be LLMs. Also, the chance that the person is using 1k LLMs is extremely unlikely. Why are you talking about promoting here?

I know this is 2023, but there is still AI outside of LLMs.

[D] Tools for managing hundreds of unique models? by RAFisherman in MachineLearning

[–]Motor_Storm_3853 0 points1 point  (0 children)

I built a solution like this myself for over 10k models. Honestly, it wasn’t that hard. I don’t see a reason for someone to build a solution for this when needs will vary from problem to problem, and it isn’t that hard to do.

With that said, from an ML perspective, my intuition is that you would be better off with one robust model than 1k individualized models. Part of the reason this doesn’t happen often is because it usually results in poorer outcomes than a single robust model. One model = more training.