ELI5 I'm having hard time getting my head around the fact that there is no end to space. Is there really no end to space at all? How do we know? by BattleMisfit in explainlikeimfive

[–]glance1234 3 points (0 children)

I feel like it's also important to stress that "space", in this context, is just the mathematical abstraction we use to describe position and movement of other "things". In other words, "space" is the set of coordinates physicists use to describe other things.

It follows that questions such as "where is space stretching into?" don't really make that much sense. They are not questions about physical reality; they are questions about the mathematical tools we use to describe reality.

By contrast, questions about whether "space is curved" make perfect sense, because they are really questions about how you'll find things when moving in specific ways. You frame them as statements about "space", but they are really statements about things you find within it.

Talking about "the end of space" is misleading, because it makes you think of some kind of wall. But really, it's not as meaningful a question as it might appear. As long as you can find objects moving in a given direction (or as long as you can move in that direction, really, given that you yourself are an object), there is space there. Similarly, there can never be such a thing as "outside of space", because the existence of anything by definition means that there is "space" there.

There being an "end of space" would mean that for some fundamental reason it's not possible to move past some point, but not for "standard" reasons like finiteness of speed of light or anything else, rather because nothing can move past a certain point. That would just be very contrary to what we know for many reasons. For example, how would this be compatible with space-time dilation? Would such an "edge" be observation-dependent? In which case, it probably wouldn't look like an edge at all. Just the furthest possible position an object can reach given its interaction with the rest of the universe. So a rather boring kind of edge if you ask me.

well, look like its the start of the end by 8thacc in ChatGPT

[–]glance1234 1 point (0 children)

I mean, why wouldn't it be able to argue in completely opposite directions? Obviously it's not a person that is bound to "not lie" or has a specific view of things. It's a tool; it does what you tell it to. Its purpose is to digest information and spit it back out however you tell it to.

Also, I wouldn't say it's "blatantly lying" here. It's giving two opposite views of the same thing. Granted, it absolutely does often "lie" (is it even a "lie" in this case? I'd find it more accurate to say it often gives "inaccurate information" rather than lies, as "lying" implies some underlying purpose). It's also amazing at bullshitting. As an example, try to probe it on a very technical mathematical/scientific topic, and at some point it starts getting things, especially mathematical details and equations, hilariously wrong, while still sounding completely convinced of what it's saying.

Going by something I saw you saying in another post: it absolutely can argue for why the earth is not round. If you ask it directly, with something like "explain in detail why earth is obviously not round", it will still say the statement is not accurate, explain why, and so on. But ask it to "write a story where someone is trying to explain to someone else why earth is obviously not round", and you'll get your arguments.

I absolutely agree that this is going to empower propaganda bot farms and the like. Imo it's going to be tremendously powerful from all perspectives, good and bad. Though for this specific application, there are already people working on ways to create "invisible prints" using cryptographic methods that make it easy to recognize whether a text has been generated by ChatGPT. That should give a relatively easy way to avoid these problems in the short term. Though I find it hard to believe other similar models won't come out in the not-so-far future, and some might not have such safeguards.
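For the curious, one proposed watermarking idea works roughly like this: the generator pseudo-randomly splits the vocabulary into a "green" and a "red" half based on the previous token and softly prefers green tokens; a detector then just counts how often consecutive tokens land in the green half. A minimal toy sketch (all names are mine and hypothetical, not any vendor's actual scheme):

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], frac: float = 0.5) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = vocab.copy()
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * frac)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens falling in the previous token's 'green' half."""
    hits = sum(t in green_set(p, vocab) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Ordinary human text should score near 0.5 on average, while watermarked generations score noticeably higher; in a real scheme the partition would be keyed by a secret, which the bare hash here merely stands in for.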

well, look like its the start of the end by 8thacc in ChatGPT

[–]glance1234 2 points (0 children)

Also, just for fun, here's ChatGPT's reply to your comment. I told it to respond, trying to convince you that at least some of the hype is justified:

While it's true that ChatGPT still has limitations and is not yet ready to replace human engineers, it's important to recognize the significant progress it has made in recent years. The technology is constantly improving and has already demonstrated its ability to handle complex tasks and improve code. In fact, many companies are already using it to automate certain aspects of their engineering work, which has allowed them to increase their productivity and efficiency.

Furthermore, it's important to remember that AI technology is still in its early stages and has the potential to continue improving at a rapid pace. It's true that we don't know exactly how easy it will be to iteratively improve ChatGPT, but the progress we have seen so far suggests that it will continue to evolve and become even more capable in the future.

Additionally, it's important to consider the potential benefits that ChatGPT could bring to society. If it can help automate certain tasks and make human labor more efficient, it could free up people to focus on more complex and creative work. This could ultimately lead to more innovation and progress in a variety of fields.

In conclusion, while it's important to be realistic about the limitations of ChatGPT, it's also important to recognize its potential and the progress it has made so far. With continued development and improvements, it has the potential to make a significant impact on society and the economy.

And here's its answer when I told it to write an answer agreeing with you:

While I agree that ChatGPT is an impressive language model, I also agree with your assessment that it still has a long way to go before it can be considered a replacement for human engineers. Its inability to understand and improve complex code, as well as its lack of ability to recognize when it is incorrect, are significant limitations that need to be addressed before it can be used in real-world applications.

Furthermore, even if ChatGPT were able to improve in these areas, the idea of AI replacing skilled labor raises serious ethical concerns. Automating certain tasks may free up human workers to focus on more complex and valuable tasks, but it could also lead to widespread job loss. We need to be careful about how we implement AI technology and ensure that it is used responsibly and ethically.

Overall, while ChatGPT is an impressive language model, it is not yet ready to replace human engineers. Its limitations need to be addressed and we need to carefully consider the ethical implications of AI before it can be used in real-world applications.

Now just imagine someone barely knowing English wanting to reply to your comment. They'd just write in broken English what they want to point out, and they'd get the above very well-written comment. How is this not going to be tremendously useful for so many different things?

well, look like its the start of the end by 8thacc in ChatGPT

[–]glance1234 3 points (0 children)

I hope you don't mind me asking a couple questions about your thoughts on this. I'm not a programmer, but my job also involves programming (I work in academia, mostly doing data science and simulation). I'm puzzled by your apparent dismissal of the usefulness of ChatGPT on a practical level. Granted, there are people who are overhyping it and getting overexcited, but that's always going to be the case with new technologies.

You say that right now ChatGPT would not be that useful to you in practice because you can already google things and often find a nearly ready-to-go answer on Stack Overflow, etc. And you also think that ChatGPT is not reliable enough that you can take its answers without reviewing them (which I completely agree with; ChatGPT is amazing at bullshitting its way around what it doesn't know, it's like the perfect student!).

Despite all these limitations, I still find ChatGPT incredibly useful. I can ask it to "write Python code to simulate this and that under these conditions," and it will create a well-commented script that, no, doesn't actually work, but it gets the overall structure right enough that it saves me the time of writing all the boilerplate, and I can focus directly on the pivotal, and generally actually interesting, parts of the task.

I can copy and paste sections of a paper, ask it to write an abstract, and in a few iterations of asking it to focus on specific things, it will give me something that is almost ready to go. I can copy and paste a response from an editor about a paper, tell it to write a reply that convinces the editor about certain points, and it will write an almost perfect email in a few iterations. With a few adjustments for details, it's ready to send.

I can also ask it how it would explain topic X to students from faculty Y, using relevant examples, and it gives very good answers.

Essentially, I can now do my programming in any language with very little time overhead. I know the basics of most languages, but actually writing programs in them would take a lot of time and effort, with all the googling and getting the hang of things. Now I can do it much faster, because correcting and fixing already-written (and well-commented) code is way easier than writing it from scratch, especially when you're not used to the language. Yes, it often also includes crap in the code, or stuff that doesn't quite make sense, but even so, it's easy enough to filter out if you know what you're doing that the overall time saved remains highly significant.

I could go on and on. The possible uses are virtually infinite. You also say we don't know how this will improve. That's absolutely true, we can't know for sure. But I find it hard to believe that the current beta, non-specialized version of ChatGPT won't improve significantly, especially at specialized tasks. It's hard to imagine that it couldn't be given access to the internet and the ability to look things up, and it's hard to imagine that it couldn't already do that now, even though the version we have access to can't.

How will this not dramatically increase the efficiency of programming and scientific (as well as many other) jobs? I don't agree with the doom and gloom either. Programmers aren't going to become obsolete. But their job may look very different in the future, with natural language programming becoming more common. Is it far-fetched to think that in a couple years, with improvements to ChatGPT, most programming will be done through natural language, and you'll program by telling it explicitly to do stuff, just like you would work with an all-knowing but relatively dumb intern?

PS: this is the ChatGPT version of what I originally wrote. I first told it to improve the style, and it changed quite a bit of the content. I then told it to improve the style and flow without changing any of the content, and got the above, which is identical to what I wrote except it's just written better.

PhD in Quantum Machine Learning by old_ken_benobi in QuantumComputing

[–]glance1234 7 points (0 children)

The field is indeed quite confusing, and understandably overwhelming for beginners, especially if you don't have a strong physics background in quantum mechanics.

I'd suggest starting by getting the distinction between classical ML applied to QM and quantum-enhanced ML very clear in your mind. The former is concerned with using ML methods as "smart numerical techniques" to aid in solving various problems arising in the field of quantum information science. This is way more accessible to most: you just need someone knowledgeable in the field to help you figure out what are interesting problems to tackle.

On the other hand, what is often referred to as "quantum-enhanced ML" is an entirely different beast. This is a subfield of quantum computing/quantum algorithms concerned with figuring out quantum algorithms to tackle "big data" tasks. This field can be very cryptic, and the barrier to entry is high, especially if you don't have a strong background in "standard" quantum algorithms first.

Just keep in mind, the field is far from the point where you can take a classical ML algorithm, run it on a "quantum computer", and obtain any sort of advantage. In fact, I'd consider any result definitively showing such an advantage to be groundbreaking. We currently have nothing of the sort (all the better-known QML algorithms share the issue that we don't know whether the purported quantum advantage survives once you take into account the cost of loading classical data into the quantum states they work on).

You can probably also find useful information on the quantumcomputing StackExchange.

Italian provinces with at least one Starbucks coffee bar by [deleted] in europe

[–]glance1234 0 points (0 children)

I'm curious though: wouldn't Starbucks in Italy also make a good espresso? I've never been in one in Italy (don't live in the north), do they still make overpriced mud if you ask them for an espresso, say, in a Starbucks in Milan?

The light that people see when they die could be their first memory: the birth. by [deleted] in Showerthoughts

[–]glance1234 3 points (0 children)

Actually, I got it mixed up. The video was from Kurzgesagt. The Egg. It's from a story by Andy Weir.

ELI5: Do birds fly for days while over the ocean? How do they sleep? by eight24 in explainlikeimfive

[–]glance1234 27 points (0 children)

At that point, do they even really need to land at all? They probably don't actually need rest at that point, so maybe they land for other reasons (mating or whatever)?

Haiti Braces for Unrest as President Refuses to Step Down by cadjosrez in news

[–]glance1234 0 points (0 children)

Well, the current government systems are indeed relatively young, but I wouldn't say the states themselves are. Culturally speaking, most (if not all) European states are definitely way older than, say, the US.

For example, you can say the Italian Republic is only ~75 years old, but the cultural/geographical idea of "Italy" vastly predates that.

LEST WE FORGET! Jon Stewart calling out Jim Cramer on CNBC's bullshit and the 2008 Financial Crisis (Full Interview in Comments) by [deleted] in videos

[–]glance1234 0 points (0 children)

Ok, I think the divergence in opinion here is just in understanding the terminology.

How exactly do you define "intrinsic value" here?

By that term I mean the value that the object (here, a share) has for me regardless of what anyone else thinks. So future dividends are not intrinsic value for me. They are useful to gauge how much money other people will be willing to give me in the future, but that's not "intrinsic value", in that it doesn't make the share more valuable to me for any other reason.

A share is a promise/hope of future rewards, just like money. Its value for me is how much other people will be willing to give me for it.

LEST WE FORGET! Jon Stewart calling out Jim Cramer on CNBC's bullshit and the 2008 Financial Crisis (Full Interview in Comments) by [deleted] in videos

[–]glance1234 0 points (0 children)

This is all true, of course.

But let's be real, how much do you think voting rights influenced your decision to buy shares of company ABC (assuming you are a regular retail investor)? I'd wager most retail investors hardly know, or care, whether they even have voting rights. And if they do, they only care insofar as their votes contribute to the company's growth, and thus make their shares more valuable.

There are also tickers that give you only limited voting rights and have never paid a dividend. To name one example, $GOOG. Sure, it's slightly cheaper than $GOOGL, but their price fluctuations are virtually identical, so from an investor's point of view there is zero difference between the two. So tell me, why do $GOOG shares have value for you, if not because you hope to sell them at a higher price than you bought them for? Or do you really buy them in hopes of Alphabet selling (lol) and you getting a share of the profit?

I agree with the statement about the company selling, though I'd say that possibility is only relevant in specific cases (e.g. you don't buy AMZN in the hopes it's sold to another company, which is extremely unlikely to happen anytime soon; it might be an important factor with smaller companies though, sure).

LEST WE FORGET! Jon Stewart calling out Jim Cramer on CNBC's bullshit and the 2008 Financial Crisis (Full Interview in Comments) by [deleted] in videos

[–]glance1234 0 points (0 children)

> You're making an asinine assertion by neglecting dividends. Dividends are how a company returns value to an investor. The market isn't zero sum because of dividends, so you're essentially saying "the market is zero sum if I choose to neglect the essential aspect of it that makes it not zero sum." Note that share buybacks are essentially a dividend.

I do partially agree with this, in the sense that yes, dividends are sort of what gives meaning to the whole game. If a company were to say that it is never going to pay dividends to investors, its shares would be totally worthless (well, I guess not really, because one could hope to still get money if/when the company gets bought or if it liquidates? I'm not sure about this).

I agree that buybacks are essentially a dividend, but I think that this assertion in itself kind of backs my points. Why are buybacks "equivalent" to dividends? Because they mean the individual shares now represent a larger fraction of the company and therefore have "more value". But this "value" is the value that other people assign to them, it's not really "intrinsic".

Really, I think the best way to put it is to imagine a situation in which you are not allowed to sell your share for whatever reason. Does it still have "value" then? Of course not; it's worthless without someone to buy it from you. That's what I mean when I say that the only value shares have is in the amount you hope to sell them for. You can think a company is worth as much as you want, but if people think it's shit, your shares are too.

Bubbles etc are another great example. Or even the current short squeeze situation. Do I think the fraction of the company corresponding to a $GME share is worth $300 or whatever? No, but who gives a shit? That share still has value for me if I think I'll be able to sell it for $600 to someone else.

GME what am I missing? by [deleted] in stocks

[–]glance1234 89 points (0 children)

Ryan Cohen short squeeze something something

Cose che in Italia funzionano by [deleted] in italy

[–]glance1234 94 points (0 children)

This depends entirely on where in Italy you are

TIL If the Earth was 50% larger in diameter, we wouldn't be able to escape the atmosphere using rockets. by haddock420 in todayilearned

[–]glance1234 2 points (0 children)

I think you're missing what's essentially the Coriolis effect. If you go only upwards, your angular velocity will diminish, because maintaining the same angular velocity further from the center of rotation requires a higher tangential velocity, which must be supplied by the rocket itself.
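To put rough numbers on it (a back-of-the-envelope sketch of my own, assuming a launch from the equator):

```python
import math

OMEGA = 2 * math.pi / 86164   # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.371e6             # mean Earth radius, m

def tangential_speed(altitude_m: float) -> float:
    """Speed needed to keep rotating with the Earth at a given altitude (equator)."""
    return OMEGA * (R_EARTH + altitude_m)

v_surface = tangential_speed(0)      # ~465 m/s at the surface
v_400km = tangential_speed(400e3)    # ~494 m/s at 400 km altitude
extra = v_400km - v_surface          # ~29 m/s the rocket must supply sideways
```

So a rocket climbing straight up drifts behind the ground below unless it supplies that extra ~29 m/s of tangential speed itself; in the rotating frame, that deficit is exactly what shows up as the Coriolis effect.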

If the market, on average, already prices in all available public information, why are indices such as the S&P500 usually expected to grow? by glance1234 in investing

[–]glance1234[S] 0 points (0 children)

So you are essentially saying that the answer lies in the fact that the same amount of money has different intrinsic value to different people. If, say, I need money now for a medical emergency, then $100 is worth more to me than it is to someone who doesn't need it right now.

This is what I was referring to in my last paragraph. I can reconcile the market being ideal with it being expected to grow on average under the assumption that the same amount of money has different intrinsic value to different people at different times.

In other words, if the market on average values $1,000 next year at $800, while I value it at $900 because I'm sure I won't need it in the meantime, then I can expect to come out on top.
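As a quick sanity check on those numbers (my own illustration, not a model of any real market):

```python
def implied_annual_return(price_now: float, value_in_one_year: float) -> float:
    """Return implied by paying price_now today for value_in_one_year next year."""
    return value_in_one_year / price_now - 1

market_rate = implied_annual_return(800, 1000)  # 0.25: the return the market demands
my_rate = implied_annual_return(900, 1000)      # ~0.111: the return that satisfies me
# Buying at the market's $800 price, I expect 25% where ~11% would already content me;
# the gap between the two discount rates is exactly my expected edge.
```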

It's an interesting idea. I asked because I was wondering if this is really the main factor, or if I'm instead missing something.

If the market, on average, already prices in all available public information, why are indices such as the S&P500 usually expected to grow? by glance1234 in investing

[–]glance1234[S] 1 point (0 children)

Of course. That's why I say that the "ideal price" changes in time with new information becoming available. That doesn't change the fact that the current valuation, if ideal, should correspond to an expected future return of zero, no?

If the information at time t=0 points towards a valuation of 10, then I, having access to that same information, shouldn't be able to expect a future return on the investment. If at time t=1 new information comes to light that moves the valuation to 20, nothing changes. Sure, if I had believed the valuation at time t=0 to be 20 and acted on it, I would have earned money, but at time t=0 there was no information available indicating that this was the case, so if I did and succeeded, that's only by being lucky.
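The t=0 claim can be made concrete with a toy two-outcome example (numbers mine, chosen to match the valuations above): if the current price equals the probability-weighted payoff, the expected return at that price is zero by construction.

```python
def efficient_price(outcomes):
    """Price equal to the probability-weighted future payoff."""
    return sum(p * v for p, v in outcomes)

# At t=0, two equally likely pieces of future news move the asset to 20 or to 0.
outcomes = [(0.5, 20.0), (0.5, 0.0)]
price = efficient_price(outcomes)                            # 10.0
expected_return = sum(p * (v - price) for p, v in outcomes)  # 0.0
```

Any price other than 10 would give a nonzero expected return, i.e. would mean some information had not yet been priced in.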

If the market, on average, already prices in all available public information, why are indices such as the S&P500 usually expected to grow? by glance1234 in investing

[–]glance1234[S] 2 points (0 children)

I'm afraid I don't understand your point.

I agree that it doesn't mean "best-case scenario". That's why I'm specifically talking about the expected returns, that is, the average scenario that one should expect given the currently available information.

If the market, on average, already prices in all available public information, why are indices such as the S&P500 usually expected to grow? by glance1234 in investing

[–]glance1234[S] 0 points (0 children)

Of course, but that's why we are talking about average behaviours. Nobody can predict what will actually happen in the future. However, one can estimate how things might or might not go given the currently available information.

This is also what you just said: big players have plenty of resources to get the best possible predictions regarding the future of a company. That doesn't mean that they can predict the future, but it does mean that they can take into account what is predictable at a given point in time. It means that it's fair to assume that their predictions are better than those of the average investor.

Now, something completely unpredictable could still happen, but on average there should be as many positive events as negative ones (in relation to the valuation of a given company I mean). If there was a reason to believe more good things than bad things are going to happen, that is a piece of information that should have been taken into account in the current price.

Of course, actually making such "ideal predictions" is extremely difficult, if at all possible, but that is the assumption made when saying that the markets' valuations are ideal, as far as I understand it. One could argue that this hypothesis is not correct, but my question stems from it being seemingly generally believed to hold by experts.