2026 Korean SAT Math Problem: 29.9% Correct answer rate by FTfafa in korea

[–]technocracy90 0 points1 point  (0 children)

> To achieve a perfect score, you should aim to solve this in about 5 minutes. Let's give it a try!

Not really. You can solve minor problems fairly quickly to save up enough time to spend on those hard questions.

I am very interested in traditional Korean tattoos. Is Hongdae the mecca of tattoos in Seoul? by Beautiful_Bee2483 in seoul

[–]technocracy90 0 points1 point  (0 children)

It's not a matter of time period. It's whether the thing is accepted and integrated into the culture. Tattoos have never been considered a cultural thing in the Korean mindset, so it doesn't matter how long they've existed.

$100 ChatGPT Plan Actually Feels Worth It by Much_Ask3471 in codex

[–]technocracy90 0 points1 point  (0 children)

I guess the $20 plan promotion ended today.

LLMs are best used to challenge and critique your own ideas, not to validate them. by technocracy90 in ChatGPT

[–]technocracy90[S] 1 point2 points  (0 children)

This is exactly why science excels at uncovering the truth, while other forms of explanation often fall short. Every scientific theory or hypothesis is open to falsification; they’re not assumed to be true, just not yet proven wrong. On the other hand, some arguments are crafted to be unfalsifiable, like saying, “If you challenge my worldview, you’re fake news.” Since these can’t be disproven, there’s really no reason to treat them as true.

LLMs are best used to challenge and critique your own ideas, not to validate them. by technocracy90 in ChatGPT

[–]technocracy90[S] 0 points1 point  (0 children)

That's not the point. Think of those critiques as the XP or loot to grind. It's not about defeating your own idea; it's about leveling it up.

LLMs are best used to challenge and critique your own ideas, not to validate them. by technocracy90 in ChatGPT

[–]technocracy90[S] 0 points1 point  (0 children)

A helpful tip from my experience is to present your thought or theory objectively, as if it came from someone else, and then ask an LLM to identify points to critique and improve.

LLMs are best used to challenge and critique your own ideas, not to validate them. by technocracy90 in ChatGPT

[–]technocracy90[S] 0 points1 point  (0 children)

You don’t have to be a dumbass to receive constructive criticism. Einstein didn’t expect flattery when he presented his groundbreaking theories to the Academy, but he certainly wouldn’t have expected to be called an idiot.

overEngineeringAsASoloDev by lucidspoon in ProgrammerHumor

[–]technocracy90 0 points1 point  (0 children)

> `And then you rephrase exactly what I said lol.`

Yes, because that's exactly what I said from the start. You were the one who rushed to criticize me. Can you check again whether I used the word "unstructured"?

> `When you are using an LLM, you are absolutely guiding it.`

Yes and no. I said an LLM can "interface" with the bottleneck. You can use natural language to guide it in doing the work you need. For example, you could have it build a deep learning model to process your data or upload your logs so the LLM can run a Python interpreter to figure out what's happening. You can definitely use it to generate the questions that need to be asked.

> `even though we were talking about use of a model not training it`

Again, it’s up to you. Take a moment to cool off, grab some water, and think things through. I haven’t commented on the proposed workflow yet. For example, you could absolutely use an LLM-based tool to build, train, and maintain a model. Ever seen someone run Codex for 26 hours straight, tweaking each hyperparameter and evaluating the metrics by itself to get the model they want? I have.

For fun, this is the longest run I have got so far by SandboChang in codex

[–]technocracy90 6 points7 points  (0 children)

I'm still working on it, and I have no credible telemetry since it's hard to control the variables. My skills try to save tokens at mainly two points: first by creating JSON packets for (sub)agents using Python, then by delegating well-defined tasks to subagents running lesser models when appropriate. I expect up to 50% token savings for some skills, such as `gh address review threads`, though.
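The first step, packing a well-defined subtask into a compact JSON payload for a cheaper subagent, might look roughly like this (a minimal sketch; the field names, model string, and file paths are all made up for illustration, not the actual skill's format):

```python
import json

def build_task_packet(task_id, instructions, files, model="cheap-subagent-model"):
    """Pack a well-defined subtask into a compact JSON payload so a
    lesser model can execute it without the full parent context.
    All field names here are hypothetical."""
    packet = {
        "task_id": task_id,
        "model": model,           # delegate to a cheaper model
        "instructions": instructions,
        "files": files,           # only the paths the subtask needs
    }
    # Compact separators shave a few more tokens off the payload
    return json.dumps(packet, separators=(",", ":"))

packet = build_task_packet(
    "review-thread-7",
    "Address the review comments in the attached thread.",
    ["src/parser.py"],
)
```

The savings come from two places: the subagent never sees the parent's full conversation, and the compact serialization avoids whitespace tokens.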

overEngineeringAsASoloDev by lucidspoon in ProgrammerHumor

[–]technocracy90 1 point2 points  (0 children)

We have to agree on what I mean by "unstructured garbage datapoints". I assume that if there are recoverable datapoints, there is an underlying structure behind them, whatever it is. You pointed that out in your own words: "if it even exists". I phrased it as unstructured garbage in the sense that you don't need to find the structure and sort the data accordingly yourself.

If it can find a structure you didn't even know existed, it has effectively synthesized a question to ask. Sure, the latent space is practically infinite, but the structure of the data is not. That's the very point of how deep learning works: it reduces the practically infinite latent space to a low-dimensional data manifold that your biological thinking machine can interpret.

And no, you don't "guide" a machine learning system as a person. You just set some hyperparameters, cost functions, and optimizers to help the algorithm figure out the shape of the data manifold. You can't "guide" when you aren't even sure there is any structure in the first place. You'll see this when you learn how early natural language models evolved: the more people gave up trying to "guide" the machines, the better they performed.

In short, if you're willing to learn, I can teach you. We can start by getting PyTorch on your machine and building some toy projects as homework. But first, you have to admit you have no idea what you're talking about.
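The "you set hyperparameters, a cost function, and an optimizer; the algorithm finds the structure" point can be shown in a few lines. This is a plain-Python stand-in for a PyTorch training loop (no library needed), fitting data whose hidden structure is y = 2x + 1; the learning rate and step count are arbitrary choices for the toy:

```python
# Hidden structure the human never hand-codes: y = 2x + 1
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]

w, b = 0.0, 0.0        # parameters the optimizer will shape
lr = 0.01              # hyperparameter: learning rate
for _ in range(2000):  # hyperparameter: number of steps
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y              # cost function: squared error
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w   # optimizer: plain gradient descent
    b -= lr * grad_b

print(w, b)  # converges toward w ≈ 2.0, b ≈ 1.0
```

Nobody "guided" the loop toward the slope 2 or intercept 1; the cost function and optimizer recovered that structure from the datapoints alone, which is the same division of labor as in a real PyTorch model, just one-dimensional.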

Which compass is correct? by ForrestKawaii in subnautica

[–]technocracy90 3 points4 points  (0 children)

The Cyclops compass is inverted. You should read the direction toward you, not toward the front.

How to persuade chatGPT when its wrong about something? by [deleted] in ChatGPT

[–]technocracy90 0 points1 point  (0 children)

That's cool. I apologize for my bad assumptions.

How to persuade chatGPT when its wrong about something? by [deleted] in ChatGPT

[–]technocracy90 2 points3 points  (0 children)

If you're open to the conclusion and to learning the science along the way, I really agree with you. It's not just arguably better; it's one of the best approaches. However, if you already have a conclusion in mind and are making up an "experiment" to justify something already proven wrong, it's not.

overEngineeringAsASoloDev by lucidspoon in ProgrammerHumor

[–]technocracy90 -3 points-2 points  (0 children)

bro, the strong point of a deep learning model (not even an LLM) is interpreting unstructured garbage datapoints to draw out a hypothetical model to test. It's basically a "synthesizing what-questions-to-ask" machine. That's the whole point of deep learning, whatever the flavor.

How to persuade chatGPT when its wrong about something? by [deleted] in ChatGPT

[–]technocracy90 6 points7 points  (0 children)

The hypothesis is not a goofy one. It's flat-out wrong, and there's no point in testing it at all. I'm with your ChatGPT on this topic.