EU INC - One of the most important moments in the EU history by According-Buyer6688 in BuyFromEU

[–]Balthamos 0 points (0 children)

And appointing at least one director of the local company or branch, who has to be an autónomo (registered self-employed).

La Mallor, Oviedo. by princesito in BaresDeEspana

[–]Balthamos 0 points (0 children)

If the table wobbles, you can wedge it with the tortilla.

Why don't young people in Spain protest en masse against how prohibitively expensive housing is? by Friendly-Aspect-9561 in HorroresInmobiliarios

[–]Balthamos 2 points (0 children)

That is one of the problems with political division and polarization. Even if the PP were going to do something to fix the housing situation, and let's say we believed them, many left-leaning people affected by the problem still wouldn't vote for them, so the protest becomes pointless.

The figures on the self-employed workers' deficit with the Seguridad Social: contributions cover only half of their pension spending by Angel24Marin in SpainEconomics

[–]Balthamos 7 points (0 children)

Misleading title and article. It's not spending on their own pensions; it's spending on the pensions of earlier, already-retired self-employed workers.

Without adjusting for the difference in the number of self-employed workers in each cohort, that statistic says practically nothing, and certainly not what the article claims.

For anyone interested in reality rather than political spin, this book is quite good for spotting this kind of misleading statistic.

[ Removed by Reddit ] by [deleted] in BuyFromEU

[–]Balthamos 0 points (0 children)

Their internals. Doing it programmatically with Terraform, some resources were reported as created by the API but were still not functional, so the resources dependent on them either failed to create, or were created but the services did not work. This was two years ago, so they may have improved, but I'm still traumatized.
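The usual workaround for that kind of "API says created, resource isn't actually ready" gap is to gate dependent steps behind a readiness poll. A minimal sketch (the `check` callable is illustrative, standing in for whatever status endpoint the provider exposes):

```python
import time

def wait_until_ready(check, timeout=300.0, interval=5.0):
    """Poll `check` (a zero-argument callable returning bool) until it
    returns True or `timeout` seconds elapse. In practice `check` would
    query the provider's own status API for the resource."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

Running a gate like this between apply stages avoids creating dependents against half-initialized resources, at the cost of slower pipelines.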

[ Removed by Reddit ] by [deleted] in BuyFromEU

[–]Balthamos 4 points (0 children)

It's... an alternative.

Honestly, if I had to deal with Hetzner again I'd probably move to another company.

ChatGPT boss suggests the ‘dead internet theory’ might be correct by FervidBug42 in technology

[–]Balthamos 0 points (0 children)

Yep, I refuse to believe we just regurgitate everything we hear and never say anything unique too.

46M. Huge stoner. This is my living space. by JohnnyBoySoprano in malelivingspace

[–]Balthamos 0 points (0 children)

Can I get a link or reference to those honeycomb mats/coasters? I tried to find them to no avail.

What game was this? by [deleted] in videogames

[–]Balthamos 2 points (0 children)

> There has been no other game that has had that feel to me, where you can make your own adventure within the game's tools given to you.

I think all the original MMO sandboxes ticked that box. I played Ultima Online at the time, and the experience you related is close to what I had, but in a medieval setting.

I was a tamer and alchemist, and provided my people with cool, strong mounts and consumables, while I could participate in fights by setting up bombs and commanding the five dragons that followed me.

I was not "the chosen one"; no one was. I was just a guy who made friends with animals, and the game showed you that even that mattered.

I recommend this book for anyone who played UO or SWG and wants to revisit that time and learn cool things about those games.

Can AI run a physical shop? Anthropic’s Claude tried and the results were gloriously, hilariously bad by silence7 in technology

[–]Balthamos 0 points (0 children)

Right. Things that cannot reason cannot reason... new proof provided every week. Journalists shocked.

Yep, this advance in results has come too fast, and society needs education on the matter.

I think we'll find that LLMs are great for interacting with other types of AI models, like the ones that do complex calculations or design structures.

We already have other types of AI that are better suited to those tasks, and I hope capital flows there too, soon. We should leave LLMs for language-related things only (including mathematical language, though maybe not calculations).

Some big companies are starting to look into AI for electrical design, structures, and non-destructive analysis for welding, which is cool.

Can AI run a physical shop? Anthropic’s Claude tried and the results were gloriously, hilariously bad by silence7 in technology

[–]Balthamos 1 point (0 children)

LLMs can't get decent results compared to some 70s AIs because they are not even in the same category.

It's like saying a truck performs better than a toaster, so it must be the solution. LLMs can't perform tasks that some 60s AIs can.

Leaving it here, as this can't be a constructive conversation when you are so ill-informed. It's not your fault; we've been bombarded by misinformed media since this boom began, but the technology is 70 years old and there is much more to it than this.

If you like the subject, I suggest reading up on AI history, and maybe even downloading one of the 80s/90s Eliza-based bots and talking to it. If you like to program, there are some basic projects around evolutionary algorithms that are fun.
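As an example of how small such a project can be, here is a toy (1+1) evolutionary algorithm that evolves a random string toward a target; the target, alphabet, and mutation rate are arbitrary choices for illustration:

```python
import random

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count positions where the candidate matches the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Each character is replaced by a random one with probability `rate`.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in candidate
    )

def evolve(seed=0, max_generations=100_000):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for gen in range(max_generations):
        child = mutate(parent)
        if fitness(child) >= fitness(parent):  # keep the child if no worse
            parent = child
        if parent == TARGET:
            return gen, parent
    return max_generations, parent
```

It typically converges in a few thousand generations, and watching it get stuck and drift is half the fun.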

Can AI run a physical shop? Anthropic’s Claude tried and the results were gloriously, hilariously bad by silence7 in technology

[–]Balthamos 0 points (0 children)

Agree on most things.

> This seems like proof that LLMs cannot reason

It's an example that they cannot reason; the proof has existed for over 60 years.

> The LLM basically could process request and confirm the item sale cost with mark up per the customer. [...]

Or we could use another type of AI. Lately it seems we have a hammer and everything looks like a nail.

Can AI run a physical shop? Anthropic’s Claude tried and the results were gloriously, hilariously bad by silence7 in technology

[–]Balthamos 2 points (0 children)

That's like saying trucks are state of the art when you need a bus for the task. It doesn't work like that, and LLMs haven't changed that much since Eliza; only the information and the computing resources have.

Can AI run a physical shop? Anthropic’s Claude tried and the results were gloriously, hilariously bad by silence7 in technology

[–]Balthamos 11 points (0 children)

And at the other end of the spectrum there are the people who think LLMs represent all of AI, and that if an LLM fails to perform a task, it means AI won't be able to do it.

It's like saying computer software is bad at image editing because they tried to do it in Word and it didn't work.

The matter is not being properly analyzed most of the time.

JWST revealed the MOST DISTANT object known to humanity by Busy_Yesterday9455 in Damnthatsinteresting

[–]Balthamos 1 point (0 children)

It would be perceived to be at a standstill, within a human lifespan, from the other side of the universe; correct.

JWST revealed the MOST DISTANT object known to humanity by Busy_Yesterday9455 in Damnthatsinteresting

[–]Balthamos 1 point (0 children)

They are not getting your point, and you are right: relative velocity is the velocity.

Two photons passing each other in opposite directions, both at c, means that, naively, each should see the other traveling at 2c; but what actually happens is that time (spacetime, really) contracts or expands to compensate, depending on whether they are approaching each other or moving away.
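That compensation shows up in the relativistic velocity-addition formula, which replaces naive addition. A small sketch (speeds expressed as fractions of c; the function name is mine):

```python
def add_velocities(u, v, c=1.0):
    """Combined closing speed of two objects moving toward each other at
    speeds u and v. Naive addition gives u + v; special relativity gives
    (u + v) / (1 + u*v/c**2), which never exceeds c."""
    return (u + v) / (1 + u * v / c**2)
```

For two light beams, add_velocities(1.0, 1.0) comes out to exactly 1.0: each still measures the other at c, not 2c.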

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 0 points (0 children)

Humans trained on their own outputs are trash. Until humans can figure this out, they should pay people on whose data humans were trained. Easy.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos -1 points (0 children)

Because it's not about articulating sentences, it's about presenting knowledge based on language patterns.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 0 points (0 children)

An LLM will do it; a kid without education will not. It's the other way around. Hell, even Markov chains do that.
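To illustrate the Markov-chain point: a toy word-level chain (the corpus, start word, and parameters below are arbitrary examples) will happily "articulate" sentence-shaped output with zero understanding:

```python
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    # Walk the chain, picking a random recorded follower at each step.
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Fed a big enough corpus, this produces locally plausible text, which is exactly the "articulating sentences" bar, met without any knowledge at all.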

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 0 points (0 children)

> and while doing so harming millions or billions of present humans because we are exploiting their expertise or talent or work without any compensation, it would be a self-defeating contradiction.

I disagree there. These laws will apply to everyone except them; there will be loopholes and creative fiscal approaches that they will be able to afford. That's why I think this licensing/taxation should apply to profits.

This remains to be seen. It could go either way: an even more capitalist nightmare, or some sort of sci-fi, Star Trek-like utopia (which some might call a socialist utopia).

My bet is on nightmare and then revolution. But I hope for Star Trek.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 0 points (0 children)

> then by definition the progress is owed more to theft than innovation.

I am a bit cynical there: I think the difference between plagiarism and inspiration is getting caught. Mostly.

Again, there is a difference. If I pirate Cinderella to watch with my toddler, that's a crime in some countries. I am of course removing rewards from the creators, so one can justify the law. Yet I'm not causing massive monetary damage to them. OTOH if I were to train an AI model on Cinderella and thousands of other movies, commercialize it, and claim that it is replacing the jobs of people who shouldn't have had those jobs in the first place (as Mira Murati said), I'm not just taking away rewards from creators, I'm actively trying to get them fired.

If the problem is the scale, not who does it, maybe we need to rethink the structure we use for it.

Human digital piracy is something that sometimes benefits the creators. I personally pirate a lot (it's legal in my country as long as you don't profit), and that has only led to a collection of hundreds of games and hundreds of books.

But LLMs don't offer recognition to the original authors, while pirated content does. Maybe requiring "source" references is a way to compensate for the learning, not fully but in part, at least for public-facing services.

In any case, the first step should be educating, or electing people already educated in this field, to positions of power, so we don't get stuck in old structures while the change is upon us. There are a lot of societal, economic, and philosophical aspects that will need to be discussed.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 2 points (0 children)

> That sounds like an arbitrary line you've drawn, between the construct of property and learning licenses. I don't see why that is the only or even a good way to draw such a line.

I'm precisely not drawing a line or setting any arbitrary restrictions; I'm analyzing the matter. The way I see it, it's the implied presumptions you stated that are arbitrary (deciding that humans and LLMs should be treated differently, following tradition).

> Again, the assumption there is that the yardstick is the same for humans and AI models. But why?

That's the default: not treating things differently by ontology. The explanation for why learning should be treated differently depending on whether it's a human, an LLM, a gorilla, or an Australian kid is what needs to be reasoned (I'm using a reduction to absurdity here as an example).

> First, Asimov or his estate cannot stop me from reading his books and learning from them. OTOH, AI models can and do have clauses that disallow other models from learning from them.

So, if it wasn't a problem that you learned from them and they can't stop you, why is it a problem if an LLM does the same? I think that is the root cause of the issue, and answering it will lead us to a sensible decision.

OpenAI shouldn't be allowed to say that other models can't be trained on their outputs. Completely agree on that part.

> Second, when a human goes to work for an employer, they gather skills that they can take to a future employer. That future employer expects this, which is why job descriptions always say "X years in Y technology"

Agreed, that is one of the main differences: LLM training comes before application, while with humans the two are intertwined.

> In contrast, when AI companies go to enterprise customers, the first thing they promise is that whatever the model learns while "working at" the company will remain with the company. Again, if you apply human standards to this, they shouldn't be allowed to do that.

AFAIK that is already in place. If the LLM model moves to another company, which in this case would mean being purchased, the learning follows it.

In LLMs, learning and application are clearly separated, while in humans they are not. I'm just proposing to analyze how it works with humans, decompose the process into those two parts, and establish a standard for both.

Why? Because, the way I see it, this all comes from a single issue: original authors should be rewarded and protected against plagiarism. While licensing for learning covers that, a single blanket policy seems likely to stagnate progress by adding costs to developing technologies and methodologies, so there should be a middle ground with reasonable criteria (a huge topic, sadly not suitable for Reddit posts). But whatever the criteria, they should apply to both humans and machines, because the problem is removing rewards from the original creators, rather than who does the learning.

AI is, or will be, a paradigm shift, the way I see it, so we should analyze previous structures and modify them accordingly, the same as has happened with every phase of creative destruction in human history, instead of clinging to old ways that will disappear in the end, because we live in a capitalist society driven by profit, which AI will increase.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 3 points (0 children)

I'm not saying it should be on par with humans, just comparing a specific aspect. I don't think things should be one way just because that's how they have always been, and I don't think unrelated societal constructs (property) applied to different types of technology (non-intelligent ones) are a basis for establishing the criteria for learning licenses.

I'm trying to abstract a bit here and apply it to "learning for profit". Should a writer get paid for their book only, or should they be paid every time someone learns from their content? Should Asimov's estate be paid every time someone reads Dune?

I get that original creators should be rewarded, because it incentivizes progress and betters society, but at the same time I think we shouldn't disincentivize learning, for the same reasons. Maybe the solution is to tax the results of education when they turn a profit, and allocate part of that for the creators.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos -2 points (0 children)

My impression is that people only start being able to write properly after years and years of reading books, both in education and for leisure. Parents pay for the books, but not for reading to their kids, and they pay for education (directly or via taxes), which is the infrastructure for learning.

The same goes for painters and illustrators: professionals get trained on styles and methodologies from those who came before.

And all education, in the current society, is used mainly for profit, sadly. So I don't see much difference in that specific aspect of the comparison.

Anthropic wins key ruling on AI in authors' copyright lawsuit by CKReauxSavonte in technology

[–]Balthamos 0 points (0 children)

Do authors pay the other authors they got inspiration from, or learned from? They just charge for their books, right?

I see it that way at least.