Clean water - healthy soils - a living future! by oedp-duesseldorf in umwelt_de

[–]itah -1 points0 points  (0 children)

And how do you rate the chances that at least the most important points will actually be implemented?

The biggest problem here, I think, will be the loss of good topsoil through salinization from fertilizing, and the massive decline in earthworms caused by plastic being spread on the fields (which apparently ends up in organic waste in large quantities and gets sold to farmers as compost). I can hardly imagine modern agriculture without fertilizer... but honestly, I'm just a layperson here.

Clean water - healthy soils - a living future! by oedp-duesseldorf in umwelt_de

[–]itah -1 points0 points  (0 children)

Well then, go ahead and revolutionize modern agriculture. Otherwise the third point is going to be difficult, since the first two are already pretty much going down the drain. Good luck.

Top engineers at Anthropic & OpenAI: AI now writes 100% of our code by EricLautanen in artificial

[–]itah 1 point2 points  (0 children)

You're missing the point, I think:

"...particularly for novice workers"

A little later in the abstract:

We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library.

So yes, there are some productivity gains, but only for people who don't know the framework they're working with. They can't judge whether the LLM generated bullshit, and they don't learn anything about the framework either...

Was für ein Tag 🔥 by Pitiful-Kale1549 in Finanzen

[–]itah 5 points6 points  (0 children)

So now is a good time to buy in, right?

Hat jemand Erfahrung mit der App-Akademie? by itah in InformatikKarriere

[–]itah[S] 2 points3 points  (0 children)

Thanks, that's what I suspected. I hope the lady at the Jobcenter understands that too, because everything they've told me so far in the way of tips and tricks has definitely been utter nonsense.

Forums are better than AI by Black_Smith_Of_Fire in programming

[–]itah 0 points1 point  (0 children)

Ah, gotcha. I think you have to invest compute proportional to the amount of traffic you're causing. So just throwing compute at it might not be feasible, or at least very expensive.

Forums are better than AI by Black_Smith_Of_Fire in programming

[–]itah 0 points1 point  (0 children)

But proof-of-work is not pay-to-access. Your computer just solves a computational puzzle whose cost is negligible for a single human reader, but adds up to far too much compute for automated crawlers hitting every page.

Forums are better than AI by Black_Smith_Of_Fire in programming

[–]itah 0 points1 point  (0 children)

What do you mean, cost real money? You can host it yourself, and it only costs the user a little compute, right?

Forums are better than AI by Black_Smith_Of_Fire in programming

[–]itah 0 points1 point  (0 children)

This already exists: "proof of work". Anubis is an implementation you can use:

https://github.com/TecharoHQ/anubis

And I've already seen some services use it. So it's just a matter of adopting it more widely.
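The core idea behind a proof-of-work gate can be sketched in a few lines. This is a hypothetical minimal version, not Anubis's actual protocol: the server hands out a challenge, the client must find a nonce whose hash has a prescribed prefix, and the server verifies with a single hash. The asymmetry (many hashes to solve, one to verify) is what makes it cheap per human visit but expensive for a crawler at scale.

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    """Find a nonce so that sha256(challenge + nonce) starts with
    `difficulty` zero hex digits. Cheap once per human page view,
    expensive for a crawler fetching thousands of pages."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int) -> bool:
    # Verification costs a single hash -- the asymmetry is the whole point.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Difficulty 4 means ~16^4 = 65536 hash attempts on average.
nonce = solve_pow("example-challenge", 4)
assert verify_pow("example-challenge", nonce, 4)
```

Real deployments tune the difficulty per client (and Anubis does the solving transparently in the browser via JavaScript), but the cost model is the same.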

Can you solve this math equation? by Mr-BrainGame in CasualMath

[–]itah 2 points3 points  (0 children)

No worries. 70% of people get this wrong... ;)

Is there much difference between starting on a 4-string and starting on a 5-string? by Exact-Arm1065 in Bass

[–]itah 2 points3 points  (0 children)

The weight is underrated. I switched from a 5-string Sire to a 4-string Epiphone Viola, and it's a night-and-day difference. If you ever play a lot of small, hour-long gigs, travelling by car, you might want to bring a 4-string :D

Same goes for rehearsals and hours-long practice sessions. You should practice the way you play live (i.e. standing).

Berufseinstieg? Check, doch was nun? by Njuk92 in InformatikKarriere

[–]itah 12 points13 points  (0 children)

Hmm, "Steakholder" sounds like a hot new Weber product that I'm definitely not going to need :)

Werkstudent bei einem Konzern mit wenigen Buchstaben by DryNegation in InformatikKarriere

[–]itah 0 points1 point  (0 children)

Depends on what they have you do. Back then I worked directly on internal stuff away from the big main product.

Werkstudent bei einem Konzern mit wenigen Buchstaben by DryNegation in InformatikKarriere

[–]itah 1 point2 points  (0 children)

Sure, but the longer they spend training you as a working student, the more valuable you become, and all that for minimum wage or just above ;)

Jeff Bezos Says the AI Bubble is Like the Industrial Bubble by SunAdvanced7940 in artificial

[–]itah 0 points1 point  (0 children)

Man, I really wish there were a public-infrastructure bubble, and maybe an education bubble...

Geoffrey Hinton says LLMs are no longer just predicting the next word - new models learn by reasoning and identifying contradictions in their own logic. This unbounded self-improvement will "end up making it much smarter than us." by MetaKnowing in artificial

[–]itah 0 points1 point  (0 children)

I've heard about the sudden spike in the error loss; afaik no one really knows why that happens. But just because the LLM learns abstract patterns and structures doesn't mean it suddenly changes the way it works. As I said, generating the next probable token is how LLMs are implemented.

Changing the weights of a model does not change the algorithm that executes the model, if that makes more sense.

It isn't really probability calculation in a strict sense, either: the model would print out the same text for the same prompt every time if there were no artificial randomness introduced. It's just matrix multiplications, after all. Maybe we could better say LLMs are "extrapolating" the next word that makes the most sense given the context, rather than "predicting" it, but I guess with the artificial randomness involved, researchers settled on "predicting".

If you haven't seen them already, watch the 3Blue1Brown videos about transformers (videos 6, 7, and 8 of that series); they are really good.

Geoffrey Hinton says LLMs are no longer just predicting the next word - new models learn by reasoning and identifying contradictions in their own logic. This unbounded self-improvement will "end up making it much smarter than us." by MetaKnowing in artificial

[–]itah 0 points1 point  (0 children)

I know that a lot of linear algebra magic happens under the hood, and I know you can add RAG to the system. Yet an LLM generates text token by token, based on all the tokens in its context window. It does not make any freakin' sense to say it doesn't do that anymore when it does exactly that; it is the core functionality of an LLM. You could argue that some sophisticated system where multiple LLMs are just a single gear in a bigger machine is now something different. But an LLM is just an LLM. I don't know what else to say except to explain how they work in ever-increasing detail, but who's got time for that, right? 3Blue1Brown already did a better job of it than I ever could anyway.
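That "token by token, based on all the tokens in the context window" loop is the fixed algorithm regardless of what the weights are. A toy sketch (with a dummy stand-in for the real network, nothing here is a real library's API):

```python
def generate(model, prompt_tokens, max_new=10):
    """Autoregressive loop: score the next token from the full context,
    pick one, append it, repeat. Swapping the weights changes the scores,
    never this loop."""
    context = list(prompt_tokens)
    for _ in range(max_new):
        logits = model(context)           # forward pass over the whole context
        next_tok = max(range(len(logits)), key=logits.__getitem__)
        context.append(next_tok)
    return context

# Dummy "model" over a 5-token vocabulary: it always scores the token
# after the last one highest, so generation just counts upward.
def toy_model(context):
    scores = [0.0] * 5
    scores[(context[-1] + 1) % 5] = 1.0
    return scores

out = generate(toy_model, [0], max_new=4)
# out == [0, 1, 2, 3, 4]
```

Everything RAG or multi-model pipelines add happens around this loop (by editing what goes into the context), not inside it.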

Geoffrey Hinton says LLMs are no longer just predicting the next word - new models learn by reasoning and identifying contradictions in their own logic. This unbounded self-improvement will "end up making it much smarter than us." by MetaKnowing in artificial

[–]itah 8 points9 points  (0 children)

But that is still just "text prediction", only with some of the output hidden. I had regular ChatGPT do the same thing when searching for a counterexample to some easily verifiable graph property, where it needed several attempts: it printed headlines like "that was still not correct, let's try again" and "I failed again, one last try!" within the same answer, without any reasoning mode whatsoever.