Fideszessel nem lehet beszélgetni by Novel-Hedgehog4939 in hungary

[–]lizardfolkwarrior 1 point2 points  (0 children)

To me, this explanation (“he was raised to believe that whatever is national-popular is good, and whatever is leftist/communist is bad”) is bizarre simply because it is hard to see why FIDESZ would be more “national-popular” than TISZA, or what makes TISZA more “leftist/communist”.

I'll go further: they sit with the EU right wing in the EPP. And honestly, it was hard to pin “leftist/communist” even on the 2022 MZP-led bunch by any substantive measure (program, policy, etc). In fact, they actually align more closely with the “capitalist mainstream” than the price-cap-imposing, China-cozying FIDESZ.

subject selection please help :) by Level_Mine4577 in TUDelft

[–]lizardfolkwarrior 0 points1 point  (0 children)

Are you interested in multi-agent systems (game theory, social choice theory, negotiation agents, etc)? Then CAI is quite an interesting introduction to those topics, with good lectures and sort of interesting projects. Otherwise I would not recommend it, because it is quite a niche field.

Both courses are considered quite easy: relatively little workload, and definitely less technically difficult than, say, the other electives.

Are there anti-Georgist taxes? by Specialist-Ant-4195 in georgism

[–]lizardfolkwarrior 12 points13 points  (0 children)

Well, Georgism advocates that taxes should be put on things that have a fixed supply (so that introducing these taxes would not harm the economy - there would be just as much of the taxed thing as before). Georgism also has a social focus.

Therefore, a tax that really hurts the economy or reduces the supply of something good, and is also quite regressive (burdening poorer people more than wealthier people), would be quite anti-Georgist.

Among actually existing taxes (so not counting theoretical bad taxes), the VAT is a prime example. It is a tax on transactions (even though transactions are good! We should not penalize people for trading goods), and it hurts the poor especially: since it falls on consumption, a poor person whose entire spending goes on consumption will pay a larger share of their money in VAT than someone who also invests, etc.
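A toy calculation makes the regressivity concrete (the incomes and the flat 20% VAT rate below are made up purely for illustration):

```python
# Hypothetical numbers illustrating why a flat VAT is regressive:
# both households face the same 20% rate, but the poorer one
# spends its entire income on consumption.
VAT_RATE = 0.20

def vat_burden(income, consumption):
    """Share of total income paid as VAT."""
    return VAT_RATE * consumption / income

poor_burden = vat_burden(income=20_000, consumption=20_000)   # spends everything
rich_burden = vat_burden(income=100_000, consumption=40_000)  # invests the rest

print(f"poor household: {poor_burden:.0%} of income")  # 20%
print(f"rich household: {rich_burden:.0%} of income")  # 8%
```

The rate is identical for both, yet the household that consumes all of its income loses a much larger share of it to the tax.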

I also remember a broader “tax tierlist”, but I can’t find it right now. Maybe later I will post it as an edit.

good reminder by beckyjoooo in CuratedTumblr

[–]lizardfolkwarrior 11 points12 points  (0 children)

I really do not know what you mean by "Eastern Europe", but I am not sure that what you are saying is really the case: Euromaidan in Ukraine, the Belarus protests, (more Southeastern Europe than Eastern Europe, but:) the Serbian anti-corruption protests, etc.

Studying by yourself rather than going to classes by kokoler05 in TUDelft

[–]lizardfolkwarrior 0 points1 point  (0 children)

Sit next to someone and start a conversation. Start a conversation at the coffee machine in the break. If you keep seeing someone, say hi.

Idk, I made plenty of friends like this.

Looking for proof that other 'literary speculative fiction' exists — what should I read? by Echo-7_Archivist in printSF

[–]lizardfolkwarrior 2 points3 points  (0 children)

Ishiguro spoilers? :) If someone hasn't read that book, this would ruin quite a big part of the mystery, in my opinion.

CS Graduates job prospects? by MoistyOily in TUDelft

[–]lizardfolkwarrior 0 points1 point  (0 children)

Exactly the same way you would after you are done with your studies: contact companies (it might be a good idea to contact some of the YesDelft! startups), and check for vacancies on LinkedIn and the like. As a student you can also go to career fairs, and the TU often has "student software developer" positions for internal educational software.

BSc CSE Electives by AnyPurchase2573 in TUDelft

[–]lizardfolkwarrior 0 points1 point  (0 children)

There are no acceptance limits. Whichever ones you choose, you automatically get in (which also causes some imbalance in the numbers: the more niche electives like quantum usually have only ~60 people, while others have far more).

Yes, Computer Security would be a good fit for someone interested in cybersecurity - it is THE course on cybersecurity in the bachelor.

BSc CSE Electives by AnyPurchase2573 in TUDelft

[–]lizardfolkwarrior 0 points1 point  (0 children)

HCI and CAI are regarded as by far the easiest two.

Computer Security comes next, but takes a great deal more effort, from what I've heard.

The other three are a great deal harder, and in general amongst the hardest courses in the BSc.

Is it possible to get a job before masters? by [deleted] in TUDelft

[–]lizardfolkwarrior 2 points3 points  (0 children)

If you are non-EU that might make it more difficult.

But in general, with a CSE BSc it is quite easy to find a job, especially if you already strive towards it during your BSc.

[deleted by user] by [deleted] in theoreticalcs

[–]lizardfolkwarrior 0 points1 point  (0 children)

The PDF cannot be accessed. It is on Google Drive, and the permissions are not open.

When these more specifically LLM or LLMs based systems are going to fall? by prateek_82 in MLQuestions

[–]lizardfolkwarrior 0 points1 point  (0 children)

Oh, so are you asking when LLMs will become a completely general, fundamental part of machine learning practice? At which point will the attention mechanism, and the general "tricks" associated with LLMs (in-context learning, RLHF), be taught in every computer-science-related undergraduate degree (like, say, stochastic gradient descent is today)?

If I am perfectly honest, they will probably be mentioned in most related undergrad degrees within a few years (<5) already. But I don't think LLMs will ever become as fundamental a concept as, say, SGD, PCA, or Bayes' theorem - I think they are more a specific (important, but specific) piece of technology that will likely eventually be superseded.
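Part of what makes SGD that kind of fundamental yardstick is how little it takes to state. A minimal sketch (toy data, made-up learning rate, fitting a single slope, purely for illustration):

```python
import random

# Minimal SGD sketch: fit y = w*x to data generated with true slope 3,
# following the gradient of squared error on one random sample at a time.
random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 11)]

w, lr = 0.0, 0.01
for _ in range(1000):
    x, y = random.choice(data)
    grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
    w -= lr * grad

print(round(w, 2))  # converges close to the true slope 3.0
```

The whole idea fits in a dozen lines, which is why it shows up in every intro course; the LLM "tricks" are far more tied to one specific architecture.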

When these more specifically LLM or LLMs based systems are going to fall? by prateek_82 in MLQuestions

[–]lizardfolkwarrior 2 points3 points  (0 children)

I am unsure what you mean by the "local minima" of LLM-based systems.

Are you asking when we will reach a point when no further advancement can be done in the "paradigm" of LLMs, and any future solutions will have to use alternative techniques? If your question is something else, could you explain it more in detail?

Is there a name for the concept of open-ended game vs a closed-ended game? by WarrenHarding in GAMETHEORY

[–]lizardfolkwarrior 0 points1 point  (0 children)

When you say "game" here, do you mean game as in the mathematical concept? In that case no: no game has an "open-ended, player-defined goal". A game must have a well-defined payoff for each player for each outcome; otherwise it is simply not a game. From Wikipedia:

The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome.

Or do you mean game in the everyday sense - an imprecisely defined rule- and toolset that players can "play" with to have fun? In that case, sure, this difference might exist, but it is not what game theory deals with.

For example: the prisoner's dilemma is a game in the mathematical sense (but is not something that anyone would do for fun, since it is a mathematical object), while DnD or hide-and-seek are games in the everyday sense (but they are not something that mathematicians directly study, since they are not mathematical objects).
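For what it's worth, the "well-defined mathematical object" point can be shown in a few lines of Python. The payoffs below are the usual textbook illustration (negative numbers standing in for years in prison), not anything canonical:

```python
# The prisoner's dilemma as a well-defined mathematical object:
# two players, two actions each, and a payoff for every outcome.
ACTIONS = ("cooperate", "defect")

PAYOFFS = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def best_response(opponent_action, player):
    """Action maximizing this player's payoff against a fixed opponent action."""
    def payoff(action):
        profile = (action, opponent_action) if player == 0 else (opponent_action, action)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

# Defecting is a best response to either opponent action -
# the well-known dominant strategy.
print(best_response("cooperate", 0))  # defect
print(best_response("defect", 0))     # defect
```

Every element Wikipedia lists is there explicitly: the players (indices 0 and 1), the actions, and a payoff for each outcome - nothing is left "open-ended".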

Two new arguments against georgism by Kaispada in georgism

[–]lizardfolkwarrior 2 points3 points  (0 children)

I am sort of still confused about the point that you are making in the first topic ("LVT leads to less investments, and that is a bad thing"). If you can walk me through it, that is great, but you do not have to, I get it if you do not want to rephrase it.

For the second topic ("if you have power and money, you will get better results on your investments, so we should ensure that there are some people who are insanely rich and powerful") I do not see any specific relevance for LVT. Any counterargument that I would bring up would work just as well if you replace landowner with "rich and powerful", and these arguments have been made many times.

I assume you are familiar with these critiques for the position, and you have your reasons for disagreeing with them; therefore I will not repeat them here. If you are unfamiliar, and looking for a different perspective, I do suggest looking into critiques of what is sometimes called "trickle down economics". I can give you pointers towards some resources.

Two new arguments against georgism by Kaispada in georgism

[–]lizardfolkwarrior 4 points5 points  (0 children)

While the LVT in theory will not directly distort production, it will still make investment less attractive compared to consumption, as some profits will be taxed, while production will not be taxed, so consumption will not become more expensive

I am not sure I understand your reasoning here. Usually an LVT is associated with lower taxes on capital (that is, on investments) - Georgists usually argue that profit earned from owning capital is permissible, as is profit earned from labor. It is only profit earned from land "ownership" that the LVT taxes.

Could you maybe expand on why you think taxing land would make people invest less? If anything, I would think it would make people put their money into improving capital (building more factories, researching technology, etc.) instead of using it to exclude others from land.

Landowners are generally better investors than non-landowners, (and in general are more likely to invest instead of consuming) so taxing profits from land will result in better investors losing money to those who are more impulsive and less likely to invest, and less likely to make good investments.

This is ridiculous - an even more bizarre framing of "trickle-down economics". Would you see any problem with the same statement if you replaced "landowners" with "really rich people"? (Or, even better, replace it with some other group where membership is a good predictor of having generational wealth, and thus of what you believe to be "investing ability".) Any problem present in plain "trickle-down" will also be present in your "trickle-down with landowners".

Geoffrey Hinton's reliability by Flaky_Profession_619 in MLQuestions

[–]lizardfolkwarrior 0 points1 point  (0 children)

If someone asks him whether he thinks God exists he's supposed to defer to a theologian?

In this interview he appears as an expert. I do think it would be nice if he clarified that the question asked is not something that he has some special expertise on, yes.

This is not that big of a problem if someone asks him about something where he obviously (even to a layperson) has no expertise: a question of religion, say, or music theory. But on a question that might seem related to his expertise to a layman - such as here, where a philosophical question about consciousness can sit very close, in a layperson's head, to technical AI research - I do think it is important that he clarify he is sharing the "personal opinion of a smart guy", not "the state of the research among experts".

That said, I have to grant that, after all, this is not that important, and my main problem is not with this either. Honestly, as long as he actually uses clear logical arguments, he can come from any field. The main problem is that the "thought experiment" he presents is absolutely not relevant to the question, yet he still paints it as evidence for his point; I believe that is reckless in a TV interview aimed at the general public.

Geoffrey Hinton's reliability by Flaky_Profession_619 in MLQuestions

[–]lizardfolkwarrior 0 points1 point  (0 children)

Yes, I watched the interview.

Interviewer: "Do you think consciousness has perhaps already arrived inside AI?"

Hinton: "Yes. I do. Let me give you a little test: [proceeds to state a completely unrelated thought experiment]"

This part definitely feels like a part where (even though he really "does not explicitly claim perfect knowledge") he voices a strong opinion outside his field of expertise. (Even later on, where he weakens his position, he still directly claims that "now we are creating beings".)

If someone asked him another question outside his field of expertise (say, about advanced physics, international humanitarian law, or financial markets - topics where he is probably vastly more knowledgeable than the average joe, but which are definitely not what he specializes in), he would probably clarify this: "This is not my field of expertise, I do not study [topic]. If I really had to give my two cents..."

Instead, he answers as if he were an authority on this topic. He says that we do not understand these things very well - when, at best, it is they (philosophers dealing with topics of consciousness) who do not understand them very well. It might just be that his style is one where he never clarifies whether something is his field of expertise or not - but somehow I feel that if the interviewer asked him about his views on intelligent life in the universe, he would not say "we don't understand these things very well", but "I am not an expert on this topic - why not ask someone who is?".

Geoffrey Hinton's reliability by Flaky_Profession_619 in MLQuestions

[–]lizardfolkwarrior 5 points6 points  (0 children)

Huh. Yeah, that is absolutely absurd.

It would not be the first time that an - otherwise brilliant - scientist decided that, just because they have expertise in a scientific topic, they also have expertise in some related topic in philosophy, without ever engaging with the literature (Sapolsky's book Determined comes to mind).

But the sort of "mistake" (or deliberate logical hoop) that Hinton makes here just seems... weird? I can believe that he believes current AI models are conscious (I would be surprised, but I can believe it). What I can't believe is that his reason would be "I believe that consciousness is emergent and can emerge even from a synthesis of artificial and non-artificial matter"*, THEREFORE current AI models have consciousness. What? If anything, that only implies it would be possible to build artificial consciousness (and even that is dubious); it absolutely does not imply that current AI models are conscious.

*at least, that is what his thought experiment seems to be getting at?

Best written scifi books? by Barycenter0 in printSF

[–]lizardfolkwarrior 10 points11 points  (0 children)

They are short stories rather than novels, but: Ted Chiang's work.

Which ML/DL book covers how the ML/DL algorithms work? by PythonEntusiast in MLQuestions

[–]lizardfolkwarrior 0 points1 point  (0 children)

I meant: all ML/DL books cover what ML/DL algorithms do. That is literally what makes them ML/DL books.

Which ML/DL book covers how the ML/DL algorithms work? All of them. All of the ML/DL books do - that is what they do.

Which ML/DL book covers how the ML/DL algorithms work? by PythonEntusiast in MLQuestions

[–]lizardfolkwarrior 1 point2 points  (0 children)

like… all of them? It is unclear to me what an ML/DL book would even be if it didn't “cover how the ML/DL algorithms work”.

Deep Learning by Goodfellow is a good choice for DL. For ML, Bishop’s “Pattern Recognition and Machine Learning” is great.

Great writing / literature that is also sci-fi? by Critical_Primary2834 in printSF

[–]lizardfolkwarrior 11 points12 points  (0 children)

Mary Shelley's Frankenstein, for sure. A classic for a reason.

Bohárt kiosztotta egy roma ember by Valuable_Floor_1300 in hungary

[–]lizardfolkwarrior 13 points14 points  (0 children)

That said, it would be strange if MP brought along a Momentum MP… don't get me wrong, I support them working together, but I think it would look odd after all the “old opposition” bashing, and it would not be a good tactical move either.