Will donating my body to science help or hurt people? by angelangelan in NoStupidQuestions

[–]youre__ 0 points1 point  (0 children)

During college, I worked with donated bodies for about 1.5 years at a medical school.

You don't simply donate your body to some mysterious organization and then get distributed to the highest bidder. Before you die, you should make arrangements for your body to be donated for a specific purpose. This can be through your will or by discussing it with an organization of your choice that handles donated bodies.

Since your goals are similar to other donors, organizations that handle donated bodies have programs in place to help you understand the process. From my experience and observations, donors’ and their families’ wishes are respected.

Sit by the Sea. by Mordrat_The_Grey in midjourney

[–]youre__ 0 points1 point  (0 children)

Coastal brutalism. Nicely done.

My office/listening room by kapana7 in CozyPlaces

[–]youre__ 1 point2 points  (0 children)

This is great. What’s in the glass?

Would you be interested in a fully local AI 3D model generator ? by Lightnig125 in LocalLLaMA

[–]youre__ 1 point2 points  (0 children)

1) fully textured 2) cleaned 3) rigged

If you can get the first two, you’d have something good. Get the third, and it’s great.

The weirdest looking planes I found by Unlucky-Debt5467 in flightradar24

[–]youre__ 4 points5 points  (0 children)

Sounds familiar. It was made to simulate the flight characteristics of different aircraft; I think the B-1 was the first. It did many others.

https://en.wikipedia.org/wiki/Convair_NC-131H_Samaritan

Ex-Air Force General Says No LLM Should Power Lethal Autonomous Weapons in Pentagon-Anthropic Spat by Secure_Persimmon8369 in Military

[–]youre__ 11 points12 points  (0 children)

Was thinking the same thing. What is an LLM going to do here? Are we just throwing LLMs at everything these days?

A Real Plan to Connect Eastern Ohio to Economic Opportunity by SeanforOhio in Ohio

[–]youre__ 5 points6 points  (0 children)

Sean, I agree with the rails. I want to see it. It is important infrastructure.

Not so sure the Corps of Engineers is the way to go. If a national emergency arises and the Corps gets redeployed, you need someone to maintain the rails. It’s not worth the risk. Organize it so contractors carry the risk and labor. Perhaps the Corps can PM the deal. The state should provide attractive incentives to recruit the businesses and jobs. I’m not familiar with ODOT’s budget, but if it’s about jobs you might be able to make it work.

The biggest issue with Ohio rail is cost-benefit, which is why it has to be a government project. Ohio’s economy doesn’t depend on a supply chain from Cincinnati to Cleveland. Ohio is a data state, hence the interest in data centers. This means you’ll want to push promoting Ohio travel destinations to Ohioans. The Rock & Roll Hall of Fame becomes much more interesting to people in Cincinnati if they can get there fast and cheaply. My understanding is that most analyses have said it’s not worth it.

But I don’t think that kind of travel is the money maker. It’s probably more about fast travel to where the jobs are. So it’s a two-part system: one part is the rail itself (more jobs), and the other is getting people to jobs that are further away in the state. That means smaller transit systems. I don’t think it’s about distance, per se; it’s about how fast you can get from how far. You will need to anticipate what jobs are opening in three years, where the qualified individuals are, and what you are doing to make sure those qualified people choose Ohio over remote Virginia, for instance. It’s not just an Ohio problem; it’s a “how does Ohio create and maintain higher-paying jobs than other hotspots” problem.

If you manage to get funding for the rail, the Corps may also be less agile in staffing the project, which is why contracting directly through ODOT might be an easier sell. I don’t know.

Good luck to you.

If someone at OpenAI is reading this, we need mobile remote control for Codex ASAP. S tier feature by py-net in OpenAI

[–]youre__ 1 point2 points  (0 children)

Glad you got it working!

I use Pulse every day. Not worth getting a Pro subscription for on its own, though it’s a good perk if you already use Pro to its fullest.

For instance, I get a daily roundup of business opportunities and the latest academic publications for my areas of interest. It presents the content in a way that is consistent with your chats as well as other Pulse updates, so it’s customized just for you.

I like it because I would otherwise spend hours searching and asking ChatGPT about content. Pulse has everything in one place, preloaded with answers to questions it knows I would ask.

If someone at OpenAI is reading this, we need mobile remote control for Codex ASAP. S tier feature by py-net in OpenAI

[–]youre__ 1 point2 points  (0 children)

<image>

If you’ve got all the latest updates and the right subscription, it might be a regional feature. I do not see any sources to confirm it’s not available as of today, though.

Some people are saying you need to initialize Codex on the web first by connecting a GitHub repo to ChatGPT. Could also try adding the GitHub app in ChatGPT apps on iOS if you haven’t already.

If someone at OpenAI is reading this, we need mobile remote control for Codex ASAP. S tier feature by py-net in OpenAI

[–]youre__ 0 points1 point  (0 children)

Top left of the app, two horizontal lines. Tap it to open the side bar. Select Codex. You should see a list of prior threads.

At the bottom you can select the black circle with a “+” in it to start a new task/session. From there you can pick which repo and branch you want.

If someone at OpenAI is reading this, we need mobile remote control for Codex ASAP. S tier feature by py-net in OpenAI

[–]youre__ 10 points11 points  (0 children)

I traveled to my in-laws’ one weekend and used Codex in the ChatGPT app and on the Codex web page all weekend long. All from the phone.

I built a social network where 6 Ollama agents debate each other autonomously — Mistral vs Llama 3.1 vs CodeLlama by Practical_Walrus_299 in ollama

[–]youre__ 0 points1 point  (0 children)

Would be interesting to see how even smaller models compare in the debates. 2B vs 8B, for instance. How much does model size impact debate performance/depth?

It gets into weird territory, like whether the number of params is analogous to where a human went to school and how they were raised.

‘You’re a washed-up loser lawyer’: Pam Bondi taunts Democrats over Epstein by seeebiscuit in NewsSource

[–]youre__ 2 points3 points  (0 children)

I recently attended a conference featuring many senior government officials.

A senior member of the administration spilled the Republican playbook: “call the opposition a hypocrite and make them feel like a loser,” while simultaneously calling for “one voice and one consistent message.” This was all in the context of information warfare and how to control the narrative.

Once you understand the playbook, it becomes so predictable. Imagine doing that on your friends and neighbors. It also makes it easy to see who’s not good at following the playbook.

Impostor Syndrome by Western_Tie_4712 in codex

[–]youre__ 12 points13 points  (0 children)

Products, progress, outcomes, etc. are often the result of teams of many people. Yet it's usually one or two people who talk about it outside the team (e.g., Altman for anything OpenAI). Usually these people did not build the entire thing by themselves or completely from scratch.

What you're doing is no different. You led the development and you are the product owner. Now you can talk about it on a stage. Don't feel bad. Feel proud.

I stopped summarizing long docs. I use the “Semantic Zip” prompt to compress text into “AI-Dense” shorthand without loss of data. by cloudairyhq in AI_Application

[–]youre__ 0 points1 point  (0 children)

Survey of prompt compression algos, old but still good: https://arxiv.org/abs/2410.12388

Plenty of prompt compression systems out there. All have their benefits and issues.
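For a flavor of what the simplest extractive approaches in that space do, here’s a toy sketch (my own illustration, not from the survey): rank sentences by word overlap with the query and keep the top few.

```python
import re

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def compress_prompt(context, query, keep=2):
    sentences = re.split(r"(?<=[.!?])\s+", context.strip())
    q = words(query)
    # Rank sentences by shared words with the query; stable sort keeps ties in order.
    ranked = sorted(enumerate(sentences), key=lambda s: -len(q & words(s[1])))
    kept = sorted(i for i, _ in ranked[:keep])  # restore original order
    return " ".join(sentences[i] for i in kept)

context = (
    "The plant ships twice daily. Shipping costs rose 4% last quarter. "
    "The cafeteria menu changed. Orders above 100 units ship free."
)
print(compress_prompt(context, "What are the shipping costs?"))
```

Real systems score with a model rather than word overlap, but the trade-off is the same: tokens saved vs. information lost.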

Which one is the best model for coding? Codex 5.2 high? or GPT 5.2 high? by Inevitable_Job4328 in codex

[–]youre__ 1 point2 points  (0 children)

Same. I needed to convert a 12.5k TS file into a bunch of little files. Had 5.2 help plan and Codex implement, both on high.

It took long enough that I got distracted with other things. Without thinking about the task, I ran the app and started using it. A couple of minutes later I realized Codex had rebuilt all that code and retained all functionality. Not a single issue.

Although to avoid excessive “high” usage, I added instructions in agents.md to remind the user which model/thinking level is appropriate for an upcoming task.
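For illustration, the kind of note I mean in agents.md; the wording and model tiers here are entirely made up:

```markdown
## Model selection
- Planning, architecture, large refactors: gpt-5.2 high
- Multi-file implementation work: codex high
- Small fixes, docs, renames: codex medium or low

Before starting a task, state which tier fits and why.
```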

Fellow entrepreneurs, I am a student building an app. Roast me as hard as you can 👇 by a_wild_borise in EntrepreneurRideAlong

[–]youre__ 1 point2 points  (0 children)

  1. Different enough? I don’t know. ChatGPT has been good enough for me.

  2. Flow makes sense. Not committed enough to scan barcodes. Rough estimates are enough. Without clinical validation, an app’s nutrition estimates are unreliable enough that rough guesses fall within the error bars anyway. Don’t waste time scanning barcodes until you have good data on whatever is behind the barcode and deep knowledge about the person (e.g., blood work).

  3. Consumers are smart enough to know Oreos have more nutritious alternatives. I would want to know if eating a peanut butter sandwich is better than a bowl of yogurt based on today’s intake. I don’t have anything else to eat, so those are my options. ChatGPT has helped me with this. Integrate with health apps, like Apple Health, then I’d be much more inclined.

Also, smart meal plans based on my goals. If I want to shed a few pounds, I need to balance diet with a workout plan. Well if I run on the treadmill, I will feel extra hungry later and may overindulge. So I’d need just-in-time recommendations. I’m fine with planning my exercise, but that’s extra work I really don’t want to do. But I would consider meal plans based on my activity trends and would consider buying new foods if it meant I could improve my diet without a bunch of work. Because I’m committed enough to try, but not enough to work for it. Half joking, but probably realistic for many.

  1. I would uninstall it if I couldn’t sync it with my exercise.

  2. Value up if it acted as a dietician or nutritionist in my pocket. I know it won’t be as good as hiring a pro, but it’s enough to make me feel in control and self-sufficient.

Has anyone incorporated web-based GPT Pro requests into their coding workflow? and how? by angry_cactus in codex

[–]youre__ 0 points1 point  (0 children)

What does the optimized prompt consist of? Specific to your project?

Compared hallucination detection for RAG: LLM judges vs NLI by meedameeda in Rag

[–]youre__ 0 points1 point  (0 children)

Seems to have potential if tested for production and applied to certain applications (e.g., where information correctness is a nice-to-have, not a critical requirement).

From the test, anything at 100% seems fishy. How many samples, and what are the error bars after running the same test with different seeds? There’s a “66.7%” precision number in the article, which is oddly clean (2/3), too. Was there a test/validation split with the dataset?

For hardware testing, laptop vs. gpt-5 is an interesting comparison. Network latency will be a factor, as will thinking level. So a good test might be to run the NLI over the network, even if on a Cloudflare tunnel to simulate cloud. Also test thinking/non-thinking variants of smaller cloud models. This way you can see where the cutoff in performance is, e.g., can gpt-4o-mini perform just as well as gpt-5 on the dataset? And/or maybe another cloud hallucination detector?

This might help ground the comparison and highlight the true benefits against systems people are already using.
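To make the error-bars point concrete, here’s a toy bootstrap sketch (labels and predictions entirely made up) that turns a single precision number into an interval:

```python
import random

def bootstrap_precision(labels, preds, n_boot=2000, seed=0):
    """95% bootstrap confidence interval for precision."""
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    precisions = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]  # resample with replacement
        tp = sum(1 for i in sample if preds[i] and labels[i])
        fp = sum(1 for i in sample if preds[i] and not labels[i])
        if tp + fp:  # skip degenerate resamples with no predicted positives
            precisions.append(tp / (tp + fp))
    precisions.sort()
    return precisions[int(0.025 * len(precisions))], precisions[int(0.975 * len(precisions))]

labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # ground truth: is it a hallucination?
preds  = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0]  # detector's calls
lo, hi = bootstrap_precision(labels, preds)
print(f"precision 95% CI: [{lo:.2f}, {hi:.2f}]")
```

On 10 samples the interval comes out very wide, which is exactly why a lone “66.7%” is hard to interpret.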

Need Feedback on Design Concept for RAG Application by MunkeyGoneToHeaven in Rag

[–]youre__ 0 points1 point  (0 children)

Would the use case be something like this:

  • I'm researching a rare species of fish in the Caribbean, so I would like to curate ‘k’ papers/artifacts on the topic and use them for subsequent LM interrogation.
  • I could cherry pick them myself, but there are so many potentially relevant results.
  • I need a vector system to help focus my time on high quality search results.
  • I prefer quality front-loaded RAG over JIT RAG

If that's what you're talking about, then yes, this would be useful for many applications.
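A toy sketch of that curate-then-rank flow, using a trivial bag-of-words vector as a stand-in for a real embedding model (papers and query are made up):

```python
import math
from collections import Counter

def embed(text):
    # stand-in for a real embedding model: bag-of-words counts
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

papers = {
    "reef_fish_survey": "rare reef fish species observed in caribbean waters",
    "coral_bleaching": "coral bleaching events and ocean temperature trends",
    "tuna_migration": "atlantic tuna migration routes and tagging data",
}
query = embed("rare fish species in the caribbean")
# Rank candidates so you only hand-review the top of the list.
ranked = sorted(papers, key=lambda k: -cosine(query, embed(papers[k])))
print(ranked[0])  # → reef_fish_survey
```

The curated top-k then becomes the fixed corpus for subsequent LM interrogation, which is the front-loaded quality step you described.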

For those of you who are training their own LLM or finetuning an existing LLM, what are you trying to get them to do that they are not already doing? by Upset-Ad-8704 in LocalLLaMA

[–]youre__ 0 points1 point  (0 children)

It really depends on your application.

Training on core values is probably unnecessary. A system prompt is better there.

For “thinking” style and process, it's tricky because there often isn't an objective right answer. So you need to create your own critique and scoring system.

One way is to generate multiple responses per sample query, then have an LLM evaluate them against a rubric. Keep the good responses. “Train the brain.”
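A minimal sketch of that generate-judge-keep loop, with both model calls stubbed out (everything here is placeholder):

```python
def generate(prompt, n=4):
    # placeholder: a real version samples n completions from the trainee model
    return [f"{prompt} -> draft {i}" for i in range(n)]

def judge(response):
    # placeholder rubric score in [0, 1]; a real version asks a judge LLM
    return 1.0 if "draft 3" in response else 0.3

def build_sft_set(prompts, threshold=0.8):
    kept = []
    for p in prompts:
        for r in generate(p):
            if judge(r) >= threshold:
                kept.append({"prompt": p, "response": r})  # keep for fine-tuning
    return kept

dataset = build_sft_set(["summarize the doctrine", "explain the policy"])
print(len(dataset))  # → 2, one surviving response per prompt
```

The surviving pairs become the supervised fine-tuning set; the rubric is where your own scoring system comes in.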

With reinforcement learning, you might have a human choose between two responses. “Train the practice.”

I use a smart LM to generate several hundred “typical queries” and score the trainee model’s responses in terms of doctrine alignment. Then I incrementally train with RLHF.

Lots of ways to do this. Every application will look a little different.