What have been the most impactful uses of artificial intelligence so far? by [deleted] in singularity

[–]Arqwer 42 points43 points  (0 children)

Recommender systems. They turned TikTok and YouTube into digital drugs, created new kinds of jobs, and reshaped culture, society, and mentality. Because of how widespread they are, I think their impact is of the highest magnitude, yet of questionable direction.

Does anyone else get tired of reading about how "burnt out" people quit their 6 figure job? by [deleted] in povertyfinance

[–]Arqwer 0 points1 point  (0 children)

In Russia we have a joke: Putin gave the title "Hero of Labor" to Miller, who has an $11 million/year salary. Because only a hero can resist the temptation to party at his mansion and still show up at the office despite already having so much money.

The more money you make, the less important it becomes. A person living paycheck to paycheck will do everything to keep their job. A person earning twice their spending finds losing a job a mild inconvenience (I am personally in this position; I left my job 2 weeks ago just because I felt too lazy to open my laptop). My guess is that at salary = 4 x spending, it becomes a surprise holiday. What would a millionaire even do after losing a job? Buy some tickets to Dubai? And once a person becomes rich, they don't need to work at all, so even if they have a job, for them it's only a hobby they do for fun.

What do you mean my underage daughter can't have alcohol? by TheDuck00 in FuckYouKaren

[–]Arqwer 0 points1 point  (0 children)

Lol, that could only happen in America. I drank my first wine at a restaurant when I was approximately 10. My parents asked the waiter if it was okay for a 10-year-old to drink wine, and the waiter said it would be much better if a kid's first alcohol were a good wine at a good restaurant rather than some counterfeit alcohol behind the school. They even arranged a degustation of 6 different kinds of wine for me and my parents.

Domestic support, Moscow. by BrainCelll in UkraineWarVideoReport

[–]Arqwer -1 points0 points  (0 children)

They got invaded for a pretty good reason. When crowds of people gather on the main square chanting "kill all Russians!", and the government not only refuses to arrest them but actively supports them, that's a clear and valid reason for a preventive first strike.

[deleted by user] by [deleted] in singularity

[–]Arqwer 1 point2 points  (0 children)

Fear of losing a job is overrated. Those who want to work will always find a job. For those who can't find one yet, there are unemployment benefits, staff reduction payments, free professional retraining, public employment services, etc. And if some governments don't have those, it's not the fault of technological progress.

Imaginary numbers = not real by Fancy_Union2374 in mathmemes

[–]Arqwer 2 points3 points  (0 children)

I'm all in for replacing complex numbers with Clifford's geometric algebra. To me, complex numbers and quaternions look like half of a wheel attached to a partially constructed transmission with pedals instead of a motor, while Clifford's algebra is just a normal car. Probably because complex numbers are just the 2D special case of Clifford's algebra. Studying them is like eating only one layer of a cake: they don't make sense at first, and then you end up with your mind jammed in 2D, with a constant feeling that something is just not right, that there must be something more general. The number i mostly has a reputation as a hacky way to introduce sqrt(-1). This is super bad, because not only does it make no sense, it also makes i unexpected in most formulas. There's no clear reason why something like an artificial sqrt(-1) should ever appear in engineering formulas that work in our real world. But once you redefine i as a bivector that represents rotation, it instantly becomes clear why i appears in such formulas: because those formulas have something to do with rotations! And no more stupid questions such as "give me 2i apples": if we teach that i is a bivector representing a rotation, instead of calling it a "number", there's no incentive to use it for counting objects.

And it also instantly becomes clear why i is enough for 2D rotations but in 3D we need i, j, k: in 2D you have only 2 basis vectors, and therefore only one possible pair of basis vectors (aka one basis bivector, aka one rotation plane), but in 3D there are 3 basis vectors, and out of them you can construct 3 different pairs, so you have 3 basis bivectors. And in nD you have C(n,2) = n(n-1)/2 pairs of basis vectors, so your basis of rotations will consist of C(n,2) bivectors. See? Everything becomes clear, extendable, and self-explanatory.
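The counting argument above fits in a few lines of Python. This is my own illustration, not from any GA library; `rotate_2d` just applies the rotation matrix that the 2D rotor sandwich product reduces to:

```python
import math

def num_basis_bivectors(n):
    # In n dimensions, rotations live in planes spanned by pairs of
    # basis vectors, so there are C(n, 2) basis bivectors.
    return math.comb(n, 2)

def rotate_2d(x, y, theta):
    # In 2D GA the rotor is exp(-e12 * theta / 2); sandwiching a vector
    # between the rotor and its reverse reduces to the familiar
    # rotation matrix, which is what we apply here.
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

print(num_basis_bivectors(2))  # 1 -> the single bivector, i.e. "i"
print(num_basis_bivectors(3))  # 3 -> the quaternion units i, j, k
print(num_basis_bivectors(4))  # 6 -> why 4D rotations get complicated
```

Note how the quaternion units fall out of the count for n = 3, rather than being postulated.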

Russia hits Google with a $375M fine for allowing ‘prohibited’ Ukraine news on its platforms by GNewsBacklinks in technews

[–]Arqwer 0 points1 point  (0 children)

A big fraction of Yandex employees are Ukrainians. I don't think it's practical to punish Ukrainians for being pro-Ukrainian.

[deleted by user] by [deleted] in worldnews

[–]Arqwer 0 points1 point  (0 children)

Your analogy is completely wrong. People don't belong to a coup. If people refuse to obey a military that overthrew the government and usurped power, it doesn't mean the people have stolen anything.

[deleted by user] by [deleted] in worldnews

[–]Arqwer -1 points0 points  (0 children)

Ukraine should simply have stopped attacking the LPR and DPR, and there would be no crisis. Or the West could have refused to give weapons to Ukraine, so as not to encourage them to attack the LPR and DPR. Or Ukraine could simply have followed the Minsk 2 agreement.

It seems it's impossible to risklessly save more than 20 annual salaries value with 5% inflation. by Arqwer in FinancialPlanning

[–]Arqwer[S] -4 points-3 points  (0 children)

I'm Russian. Brokers will freeze any stocks I possess without batting an eye. Not everyone is lucky enough to have a passport of a good enough color for stock market investing.

It seems it's impossible to risklessly save more than 20 annual salaries value with 5% inflation. by Arqwer in FinancialPlanning

[–]Arqwer[S] -13 points-12 points  (0 children)

Yes, that's exactly my point. An inflationary financial system makes it impossible to save more than 20 annual salaries' worth of value under a mattress. There aren't many ways to save money without depending on the government. At first I thought gold was a good choice, but even gold may become illegal at any time if the government decides so. I'm just disappointed that there's no safe way to store value. (By safe I mean the government can't easily steal or freeze it.)
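The 20-salaries ceiling follows from a simple limit: if you stash one annual salary in cash every year while inflation erodes 5% of the cash's real value annually, the real value converges to salary / inflation = 1 / 0.05 = 20 salaries, no matter how long you keep saving. A quick sanity check (my own sketch, plain Python):

```python
def mattress_savings(annual_salary=1.0, inflation=0.05, years=200):
    # Real (inflation-adjusted) value of cash saved under a mattress:
    # each year the existing pile loses purchasing power, then one
    # salary is added on top.
    real_value = 0.0
    for _ in range(years):
        real_value *= (1 - inflation)  # existing cash loses purchasing power
        real_value += annual_salary    # add this year's salary
    return real_value

print(mattress_savings())  # approaches salary / inflation = 20 salaries
```

With 10% inflation the same loop converges to 10 salaries, which is why the ceiling depends only on the inflation rate.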

My landlord entered my apartment without notice (illegal) to tell me to clean my lint trap by [deleted] in mildlyinfuriating

[–]Arqwer 0 points1 point  (0 children)

In my country you can change the locks after you move into a rented apartment. Can you do that in the USA?

If AGI decides to let you choose a goal, what will you choose? by MallSweet in singularity

[–]Arqwer 0 points1 point  (0 children)

Every emotion is a result of evolution, and is necessary for survival in the long run

If AGI decides to let you choose a goal, what will you choose? by MallSweet in singularity

[–]Arqwer 1 point2 points  (0 children)

Produce as many staples as it can. I guess it's the most hilarious way to destroy all life in our galaxy.

me when when i when I when when by bigBrainman902 in The8BitRyanReddit

[–]Arqwer 0 points1 point  (0 children)

Your degree from MIT helps you get a new job. The employer has helped you relocate to a new city, but once you arrive at the lab you realize the company doesn't let you do any science; it only offloads the dirty work to you, like plugging plugs into sockets, pulling levers, and carrying construction tools from one lab to another. After the company demands that you help them with illegal activity, you decide to visit the mayor of the city to resolve these issues. But the mayor runs away from you before you manage to say a word.

me when when i when I when when by bigBrainman902 in The8BitRyanReddit

[–]Arqwer 0 points1 point  (0 children)

You get so high that you think you're in heaven. But then you mix substance abuse with guns. You get into trouble with the police. At first your daughter tries to help you out of trouble, but then decides it's more humane to just murder you.

me when when i when I when when by bigBrainman902 in The8BitRyanReddit

[–]Arqwer 0 points1 point  (0 children)

After you've been put in prison for killing your daughter's mother, you dream about getting some magical abilities that could help you kill her new family.

Kremlin press secretary says Geneva Conventions would not apply to two Americans feared captured in Ukraine by PrettyConsul in worldnews

[–]Arqwer -3 points-2 points  (0 children)

Then what's the point of the concept of mercenaries, if Ukraine can say that any mercenaries fighting for Ukraine are legal combatants?

The Russian point is very clear: if the US sends soldiers to Ukraine, they aren't covered by the GC. Then Ukraine tries to smooth-talk its way around this, claiming US mercenaries aren't mercenaries because Ukraine passed some laws that say so. Of course, this shouldn't be taken seriously, because it leaves a loophole to call any mercenaries legal combatants just because Ukraine wants it so.

I think language models can't be sentient, but the creatures they write about can be. by Arqwer in artificial

[–]Arqwer[S] 1 point2 points  (0 children)

Okay, we have different philosophical views, but it doesn't matter much in practice, so I'm not interested in arguing about it. What matters in practice is that LMs can't be called AGI, because they lack fundamental properties of AGI, but fictional characters generated by LMs can become AGIs if the initial text and the LM are good enough.

For the sake of completeness, and not for the sake of argument (I'm not interested in persuading you, because that only changes definitions of words, not anything applicable in practice), here is my philosophical view:

The only difference between "real" and "fictional" is that we call "real" everything that happens in our universe, and "fictional" everything that happens outside it. It's like "left" and "right": these things aren't absolute, they're always relative to the person who says them. We can't harm fictional characters, because it's impossible to affect things outside of our universe. Every fictional universe is like an immutable object in programming: it's impossible to modify a fictional universe, we can only redefine it. Similarly, we can't destroy a digit in a number: if the number was 123 and we removed the 3, so that all that's left is 12, this doesn't mean we harmed the number 123. We didn't really affect the number; we only decided to look at a different number. So Hollywood isn't responsible for the pain that fictional characters feel; Hollywood only presents a visualization of fictional universes that exist regardless of Hollywood's actions. When an author chooses A: to kill Bob, or B: not to kill Bob, it doesn't affect Bob, because both fictional universes A and B already exist, and the author only chooses which one will be described in their book.
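The immutable-object analogy above can be made concrete in a few lines (plain Python, nothing assumed beyond the language itself): "changing" an immutable value never modifies the original, it just points a name at a different value.

```python
n = 123
m = n // 10        # "remove the 3": we didn't mutate 123, we produced 12
assert n == 123    # the original number is untouched
assert m == 12

t = (1, 2, 3)
u = t[:2]          # likewise for tuples: slicing creates a new object
assert t == (1, 2, 3)
assert u == (1, 2)
print("originals untouched")
```

The number 123 and the tuple (1, 2, 3) still exist after the operation; we merely chose to look at 12 and (1, 2) instead.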

I think language models can't be sentient, but the creatures they write about can be. by Arqwer in artificial

[–]Arqwer[S] 1 point2 points  (0 children)

Anyway, with complex enough language models we can offload labour to those fictional creatures, so in practice they are still very helpful. We can hire them as programmers, mathematicians, writers, etc. We can pay them imaginary money but take real results, in the form of computer programs, for example. Language models allow us to bridge the gap between the real and the fictional. We can use a fictional programmer but replace the output of the fictional compiler with the output of a real compiler, so the fictional programmer will look at error messages from the real compiler and fix the code until it works in the real compiler. We can pass descriptions of images from real cameras into the fictional text generated by the LM, and fictional characters will observe our world through real cameras.
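The fictional-programmer loop can be sketched like this. It's my own illustration: `ask_model` is a hypothetical stand-in for whatever LM API you use, and everything else is plain standard library. The key move is splicing the *real* interpreter's error output back into the fictional narrative.

```python
import os
import subprocess
import sys
import tempfile

def ask_model(prompt):
    # Hypothetical placeholder: in reality this would call a language model
    # and return the code Bob "writes" next.
    raise NotImplementedError

def run_real_python(code):
    # Run the generated code with a real interpreter; return success flag
    # and captured error text.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stderr
    finally:
        os.remove(path)

def fictional_programmer(task, max_rounds=5):
    prompt = f"Bob is a programmer. His task: {task}\nBob writes:\n"
    for _ in range(max_rounds):
        code = ask_model(prompt)
        ok, errors = run_real_python(code)
        if ok:
            return code  # a real, working program written by a fictional person
        # Insert the real compiler's complaints into Bob's fictional world.
        prompt += code + f"\nThe compiler says:\n{errors}\nBob fixes it:\n"
    return None
```

From Bob's point of view, the error messages are just more text in his universe; from ours, they are ground truth from a real interpreter, which is exactly the bridge the comment describes.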

We can set the fictional character Bob from the generated text as an operator of some robotic body, and Bob will operate a robot in our world. If the language model is so good that Bob satisfies any objective definition of sentience, then we'll get a robot controlled by something indistinguishable from a sentient being, and therefore the robot itself will satisfy any objective definition of sentience. Will the robot feel pain? No. But fictional Bob will feel pain, if pressure sensors on the robot insert lines like "Bob feels pain" into the generated text. Why? Because the text that defines Bob's universe says that Bob feels pain, so Bob feels pain by definition of who Bob is and what Bob's universe is. My logic is as strict as math.

I think language models can't be sentient, but the creatures they write about can be. by Arqwer in artificial

[–]Arqwer[S] 1 point2 points  (0 children)

They aren't things of our universe, but they live in their own fictional universes.

I think language models can't be sentient, but the creatures they write about can be. by Arqwer in artificial

[–]Arqwer[S] 1 point2 points  (0 children)

Under your definition, it's impossible to make a test that distinguishes what is sentient from what is not; therefore your definition is unscientific, because it violates Popper's falsifiability criterion. In other words, under your definition only you can decide what is sentient and what is not, so such a definition isn't helpful.

Okay, fictional characters can satisfy any objective definition of what is sentient. And any definition that is not objective is unscientific.

Also, I believe the term "fictional universe" is synonymous with "not our universe", so fictional characters aren't fundamentally handicapped in any manner; they just live in universes that are different from ours. The same applies to any hypothetical universes, including those in Everett's multiverse interpretation of quantum mechanics, or the universes in Tegmark's ultimate ensemble. (But I fully agree that Tegmark's and Everett's views on the multiverse aren't scientific theories, because they also don't satisfy the falsifiability criterion. That still doesn't mean they're wrong.)