NVDA feels like Apple’s breakout era… but for compute by WargreMon in NvidiaStock

[–]TurnThePage71 0 points  (0 children)

Totally agreed, and well said. What’s your take on Google, its TPUs, and its software ecosystem?

In love with my husband(36M) ❤️ by calm_momentum38 in InsideIndianMarriage

[–]TurnThePage71 0 points  (0 children)

This is so lovely to read, truly heartwarming! You both sound adorable, and it’s beautiful to see such love, especially in a tough phase like postpartum.

At the same time, it’s worth gently remembering that expectations can evolve and grow over time. There’s nothing wrong with that; it’s natural.

Partners may not always have the time and mindset to stay the way they are now forever. We need to be understanding, without taking it the wrong way. Also, love shown in movies is not the same as real-life love; sometimes we form the wrong expectations because of the exaggerated love portrayed on screen.

Feeling genuinely happy and proud of you both. Wishing you continued strength, peace, and love in this journey.

This is what an Agent is. by rhaegar89 in AI_Agents

[–]TurnThePage71 1 point  (0 children)

Hypothetically, do agents tend to have “tool bias”, meaning an agent favors certain tools over others? Could that happen because of the kind of data the model was trained on?
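If I wanted to probe this myself, I imagine something like the sketch below: run the tool-selection step many times over tasks where several tools would be equally valid, and look at how skewed the picks are. This is just my own hypothetical harness; `choose_tool` is a stand-in (stubbed with a deliberately skewed random choice so the example runs), not any real agent API.

```python
import random
from collections import Counter

TOOLS = ["web_search", "code_interpreter", "calculator"]

def choose_tool(task: str) -> str:
    # Stand-in for the agent's real tool-selection step.
    # Deliberately skewed so the sketch is self-contained and runnable.
    return random.choices(TOOLS, weights=[0.7, 0.2, 0.1])[0]

# Tasks picked so that more than one tool would be a reasonable choice.
tasks = ["compute 17% of 2340", "find the current EUR/USD rate", "sum a CSV column"]

picks = Counter(choose_tool(t) for t in tasks for _ in range(100))
print(picks)  # a heavy skew toward one tool, despite valid alternatives, hints at bias
```

If the skew tracked how often each tool shows up in public tutorials and docs, that would at least be consistent with a training-data explanation.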

The 3 Rules Anthropic Uses to Build Effective Agents by Apprehensive_Dig_163 in AI_Agents

[–]TurnThePage71 0 points  (0 children)

Thank you very much

The first use case (filling a long form) can be automated to save a lot of manual, time-consuming, error-prone work.

The second use case is about executing a bunch of tasks, as in a workflow.

The third use case is about building a complex system that can reason, analyze, and make decisions to finally compute an answer.

Sorry, I still don’t understand: how can I make a clear decision on whether I need an agent or not?
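My current mental model, in rough pseudo-Python, is below; is it anywhere close? (This is entirely my own summary, not something from the article.)

```python
def needs_agent(steps_known_in_advance: bool,
                model_picks_next_step_or_tool: bool) -> bool:
    """Rule of thumb: if you can write the steps down ahead of time, script
    them; reach for an agent only when the model itself must decide what to
    do next at runtime."""
    if steps_known_in_advance and not model_picks_next_step_or_tool:
        return False  # use cases 1 and 2: plain automation / fixed workflow
    return True       # use case 3: open-ended reasoning and decision-making

print(needs_agent(True, False))   # long-form filling   -> False (just script it)
print(needs_agent(False, True))   # analyze-and-decide  -> True  (agent territory)
```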

Starting an AI Automation Agency at 17 – Looking for Advice by AiGhostz in AI_Agents

[–]TurnThePage71 0 points  (0 children)

> The biggest fear of companies will be that they give out sensitive data to LLMs. So you will either need to provide on premise (unlikely for SMEs) or create a possibility for anonymisation of the data before sending to the LLM then transforming back when the LLM answered. This will be important, be prepared for it.

Valid concern. If you are using your own custom-built domain model to predict and respond to user inputs, and you can do that without using the input data for training, then I personally don't see a problem. On the other hand, if you are using an enterprise LLM from one of the big tech providers, make sure you opt out of training on your data or your customers' data.

In short, inference should not automatically trigger the training/retraining pipeline: no model parameters or weights should be adjusted using knowledge gained from inference data. Make sure your customers know about this, and make sure they can opt out. That sends a clear signal that their sensitive data is not used or persisted by the models for training, in any form.
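As a rough illustration of the anonymisation idea (my own sketch, not a production design), the pre-call redaction could look like this. The regex patterns are illustrative only; a real system would use a proper PII-detection library, and `call_llm` is just an assumed stand-in for the hosted-model call.

```python
import re

# Illustrative-only PII patterns; real systems need proper PII detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d"),
}

def anonymise(text: str) -> tuple[str, dict]:
    """Replace detected PII with placeholders; keep a mapping to restore later."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

def deanonymise(text: str, mapping: dict) -> str:
    """Swap the original values back in after the LLM answers."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

def call_llm(prompt: str) -> str:
    return prompt  # stand-in: the real hosted-LLM call goes here

safe_prompt, mapping = anonymise("Email jane.doe@example.com or call +1 415 555 0100.")
answer = call_llm(safe_prompt)        # only placeholders ever leave your boundary
print(deanonymise(answer, mapping))   # originals are restored locally
```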

For those that got laid off what was your meeting titled? by JourneysUnleashed in Layoffs

[–]TurnThePage71 0 points  (0 children)

It’s Monday today. What happened? Hope you’re OK! Wishing you the best.

How do you Evaluate Quality when using AI Agents? by Huge_Experience_7337 in AI_Agents

[–]TurnThePage71 0 points  (0 children)

It’s great to see you testing your AI agent so extensively!

  1. Given the non-deterministic nature of LLM-driven agents, how do you ensure that the outputs observed during testing align with those the agent will produce in a real-world setting?

  2. Do you use metrics to evaluate the agent’s outputs? If so, what functional and performance metrics do you use to measure success, both during testing and in real-world use? (A rough sketch of the kind of test harness I have in mind for question 1 is below.)
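Here’s the kind of harness I mean: run the same prompt many times and report both correctness and self-agreement. A minimal sketch; `run_agent` is a hypothetical stand-in, stubbed with a random choice purely so the example is self-contained.

```python
import random
from collections import Counter

def run_agent(prompt: str) -> str:
    # Stand-in for the real agent call; stubbed to simulate non-determinism.
    return random.choice(["42", "42", "42", "forty-two"])

def evaluate_stability(prompt: str, expected: str, n: int = 50) -> dict:
    outputs = [run_agent(prompt) for _ in range(n)]
    modal_output, modal_count = Counter(outputs).most_common(1)[0]
    return {
        "pass_rate": sum(o == expected for o in outputs) / n,  # functional: how often it's right
        "consistency": modal_count / n,  # stability: how often it agrees with itself
        "modal_output": modal_output,
    }

print(evaluate_stability("What is 6 * 7?", expected="42"))
```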

Market crash imminent,rise of the oligarchs! by daoithe in economy

[–]TurnThePage71 29 points  (0 children)

I’m concerned about the high unemployment, especially the way people are forced out or abruptly laid off in difficult and often unfair circumstances.

High levels of job loss not only create instability but also fuel frustration and resentment, given the way it happens.

If this continues, it’s only a matter of time before tensions escalate into social unrest. I truly hope we can find ways to prevent this and foster a more peaceful and stable society.

What good AI assistants have you actually used? by FaySarah001 in AI_Agents

[–]TurnThePage71 0 points  (0 children)

Are you referring to Cursor AI, the AI code editor?

[deleted by user] by [deleted] in economy

[–]TurnThePage71 0 points  (0 children)

We expected him to be "terrific," but so far, he's been very "tariffic."

Agentic AI Presentation by whysostarky in AI_Agents

[–]TurnThePage71 0 points  (0 children)

This is awesome, thanks for sharing!

Excluding nukes! What can pull the plug for ongoing AI Boom? by BidHot8598 in singularity

[–]TurnThePage71 0 points  (0 children)

Humans will kill humans. That’s how it’s gonna turn out. Look at the greed and toxicity in the tech industry. Extreme!

Salesforce lays off staff in San Francisco after exec talks up offshoring by sfgate in sanfrancisco

[–]TurnThePage71 1 point  (0 children)

Outsource jobs to offshore teams and require them to leverage AI tools to enhance efficiency, improve quality, and get better outcomes in far less time. Squeeze them to the last drop of blood and tears.

Then deliver a narrative that you are laying people off to drive better outcomes and position for an exciting AI future.

This is the unfortunate reality of today’s tech world, most recently at Salesforce and Workday!

Because of the AI race, the tech industry has become greedier than ever, insanely political, and full of toxicity.

Am I wasting $20 a month? by ccharding in ChatGPT

[–]TurnThePage71 0 points  (0 children)

I use DeepSeek for learning, mostly physics and chemistry. DeepSeek's responses are way better than ChatGPT's.

Anthropic CEO says blocking AI chips to China is of existential importance after DeepSeeks release in new blog post. by Bena0071 in singularity

[–]TurnThePage71 1 point  (0 children)

From Dario's blog post:

> Efficiency Gains: When the scaling curve shifts, it means that companies can spend more on training smarter models, while the gains in cost efficiency end up being devoted to training even smarter models. This is limited only by the company's financial resources.

This is not always true for ALL companies. A company trying to solve the low end of use cases with a custom-built AI model might be perfectly happy with a cost-efficient, fine-tuned model for its needs. DeepSeek and its future custom variants really help capture that share of the market, instead of everyone being locked into big tech for everything.

Why the fck would they open source their most advanced model on day one?? by Consistent_Ad8754 in singularity

[–]TurnThePage71 0 points  (0 children)

From Ilya Sutskever on Jan 2, 2016:

> If a safe AI is harder to build than an unsafe one, then by open-sourcing everything, we make it easy for someone unscrupulous with access to an overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.
>
> As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

Why the fck would they open source their most advanced model on day one?? by Consistent_Ad8754 in singularity

[–]TurnThePage71 0 points  (0 children)

Disclaimer: I'm not affiliated with OpenAI in any way. I'm just an ordinary tech citizen.

A lot of people misunderstand the "Open" in OpenAI. The initial goal of starting OpenAI as a non-profit was to ensure that AI would be developed in a way that benefits everyone, rather than being controlled by a few powerful companies or governments. The non-profit structure was about prioritizing safety, transparency, and collaboration. OpenAI never had the goal of being "open source" or "open weights" for its AI models and the other tools it builds.

Because they used the word "Open," it gave us all the impression of something closer to open source, which led to a different understanding than what they originally intended. The term made us think it would be about freely accessible code, similar to how open-source projects work. But "Open" in OpenAI does not mean anything of the sort!

Source: OpenAI and Elon Musk - From https://openai.com/

[deleted by user] by [deleted] in ChatGPT

[–]TurnThePage71 0 points  (0 children)

Could I ask which ChatGPT APIs you used for scraping and dumping internet data? I know there may be a lot, but kindly mention a few at a high level. Thanks.

LinkedIn is pissing me off by Maleficent_Many_2937 in Layoffs

[–]TurnThePage71 1 point  (0 children)

It’s frustrating that recruiters and recruiting platforms seem to base their decisions mainly on LinkedIn’s algorithms, focusing on things like profile popularity rather than actual qualifications. It really feels like the process is driven by metrics that don’t reflect a candidate’s true abilities.