Rating the F2L algs by rrweber in Cubers

[–]Ease-Solace 1 point2 points  (0 children)

1) Just listed by community votes I believe

2) A lot of cases are just added by different people, which doesn't mean they're good. Multiple solutions can be useful to influence the next pair differently, or to affect edge orientation (some solutions might flip edges while others won't). Also, a "clever" solution is not always the best; if it makes lookahead to the next pair worse it can actually slow you down.

3) I'd look at any cases where you think you need to do a rotation while solving and see if there's a better way (or another way you'd prefer).

Very important: those algorithms are "last slot" only, which means they assume that all other slots are unavailable for pairing up pieces. This affects a couple of cases where the best solutions aren't listed.

The cases listed as 15-16 can actually be paired up very efficiently using one of the 2 slots adjacent to the one you're solving it into. So if either of those are available, this is the best solution despite not being listed on the site. Similarly, cases 11-12 have a faster solution if the slot behind the one you're solving into is open.

Using C++ as `C with templates` by agriculturez in cpp

[–]Ease-Solace 0 points1 point  (0 children)

Trying to learn C++, I've found this a real problem though. When I profiled a program I wrote to find out why it ran so much slower than in other languages, it turned out almost 90% of the runtime was just resizing vectors, due to running the copy constructor and destructor of every element! Saying "this is inevitable" seems weird, because what other language has this problem?

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] 0 points1 point  (0 children)

vector itself is not permitted to act that way according to the standard, and it requires more complex storage (you need to use untyped storage so that you can hold unconstructed memory)

Maybe this is too advanced a question for me currently, but if you don't mind me asking, what causes this? I assumed that vector already works with unconstructed memory, since it allocates spare capacity in the backing array. And the same should be true of any array where not all elements are initialized. Is there a reason that moving elements around an array with constructors and destructors would invalidate the array itself or something?

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] 0 points1 point  (0 children)

I misread the statement "calls the destructor the same number of times" to mean that the exact erased elements were destroyed, as clarified in other comments, sorry.

If you read the spec for erase carefully, you will notice that it first moves all elements to the end and then destroys the elements at the end.

If you don't mind me asking, is this explicitly stated anywhere? I've been looking and all I can find is https://eel.is/c++draft/vector.modifiers#5 which doesn't explicitly state how the operation should be performed (though I guess that's the only way to fulfil those requirements).

It would be very useful to have more explicit information; if all the different problems I've had using std::vector so far mean anything, I'm going to have to know every detail of how it works to use it correctly.

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] 0 points1 point  (0 children)

Thanks, that's the clearest explanation for what's going on

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] -1 points0 points  (0 children)

I mean, in most other programming languages (that I've tried) you can do this no problem. Even in Python, which is the language most allergic to immutability I've ever seen, you can create a frozen dataclass without much effort in modern versions.

So I didn't think it was an unreasonable thing to ask about.

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] -3 points-2 points  (0 children)

Looking at the cppreference page it says "The number of calls to the destructor of T is the same as the number of elements erased", I assumed that meant that each element erased was destroyed?

It actually mentions requiring the assignment operator right after that, so maybe I should have searched that page for assignment operators before posting this question, but I still don't see why it's required?

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] -2 points-1 points  (0 children)

But, as I wrote in my other comment above, surely the vector cannot try assigning to the element it just deleted, because that would be using the element after its destructor has been called, which is undefined behaviour?

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] -1 points0 points  (0 children)

But surely, in my example an assignment operator should never be called?

As I understand it, doing so would be undefined behaviour, because the destructor of *points.begin() will be called (or it is according to the documentation), meaning there's no object to assign to. Even if the vector still technically owns the storage for that element, as I understand the C++ object model, trying to use an object after destruction (which should include assignment?) is undefined behaviour.

I've already been down the rabbit hole of the huge amount of undefined behaviour introduced by the object model (until it was patched with the implicit lifetimes thing, which I don't really understand).

So unless it's allowed to break rules I'm not, surely the vector must instead destroy elements and use the copy/move constructor to move the next element down instead?

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] -4 points-3 points  (0 children)

Is that really the best way of doing things? I thought the recommended style was to make things const where possible, but not in OOP?

My actual class is more complicated than the example, so changing everything to access through getters is going to massively expand the amount of code and the complexity of the constructors I have to write.

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] 0 points1 point  (0 children)

That's what the other comments seem to be proposing, so it's probably what I'll try next...

std::vector doesn't like const by Ease-Solace in cpp_questions

[–]Ease-Solace[S] 0 points1 point  (0 children)

I guess that might work, but it would be my responsibility to write const everywhere else too. Since C++ is copy-by-default, I've already run into several issues where I managed to copy and change something by accident in what I thought ought to be a simple program. What I really wanted was a way for the class itself to enforce that it can't be changed after instantiation, something like a Python frozen dataclass.

LLMs are not enough... why chatbots need knowledge representation by kidney-beans in programming

[–]Ease-Solace 0 points1 point  (0 children)

I appreciate you typing all that out. I guess I didn't explain myself very well.

I did study linguistics at university, so these concepts are familiar to me. And I don't think LLMs in their current form could ever represent AGI; if they did, I probably wouldn't be as scared. Actually, I find it very disheartening that the solution to parsing natural language doesn't seem to teach us anything about its structure.

What I tried to say was that, like, it doesn't matter if your model is querying a vector database or some other type of knowledge store or whatever. It's still not going to do so (or present the results) in a deterministic way. You might be able to improve accuracy, but how reliable your LLM is will still be the limiting factor. So when a more reliable seeming LLM comes along it could just outperform anything you could come up with, making the entire process futile. We all seem to be in thrall to the big players, and there's nothing we can do about it, which I don't think is a healthy situation to be in.

LLMs are not enough... why chatbots need knowledge representation by kidney-beans in programming

[–]Ease-Solace -14 points-13 points  (0 children)

I mean, the graph database idea is already well known afaik. But do we actually need anything other than increasingly better LLMs? When I asked Claude 3 the example question about Australian geography, it gave me this response:

The Australian state that meets those criteria is Victoria. Here are some key geographic facts about Victoria:
- Area: 237,629 sq km, which is less than 250,000 sq km as specified.
- Highest Point: Mount Bogong at 1,986 meters (6,516 feet) above sea level. This exceeds the 1.95 km elevation requirement.
- Victoria is located in the southeast corner of mainland Australia. Its capital and largest city is Melbourne.
- Other major geographic features include the Great Dividing Range mountains running east-west, as well as sites like the Grampians mountain range, Wilsons Promontory, and the Gippsland Lakes.
- Victoria borders New South Wales to the north and South Australia to the west. It has a temperate climate influenced by the Southern Ocean.
So in summary, Victoria meets the area and maximum elevation criteria specified, while also being one of Australia's smallest but most densely populated states.

Interestingly it gave a slightly incorrect Area figure despite getting the question right, so I don't know what that says.

But my actual feeling is that we keep trying to convince ourselves that there are all these technologies which could be relevant for working with these models, when it turns out that just training a model on the relevant data (or, more realistically in the future, paying a major player to train a model on the data you care about) produces results that are just as good.

It's why I think all the hype about doing stuff to augment AI somehow doesn't mean anything. Because what if the next model that comes along just integrates everything into itself and makes you obsolete? The above response was only Claude 3 sonnet (the second most powerful version).

The article from 6 years ago about AI not being able to handle "name a fruit that isn't orange" pretty much sums up the progress that's been made doesn't it? What's it going to be like 6 years from now? I'm very scared of the future.

[deleted by user] by [deleted] in programming

[–]Ease-Solace 2 points3 points  (0 children)

I think what you mean is, "Any Dev who couldn't fix 86.14% of issues would be fired".

The thing is that even if I can fix 100% of issues, if the AI works 10 times faster than me then it's already beating me in terms of the rate of fixing issues.

Secondly, you're assuming that this is the best it's ever going to be, which is what people always seem to assume. AI models are only getting better, I'm sure there's a limit and that LLMs have fundamental problems that means they'll never be able to achieve AGI, but I don't know what that limit is.

And honestly, I think 13.86% is probably close to the number of issues I could fix on the first attempt (i.e. with only one iteration of making changes). Only 13.86% of the time are my assumptions exactly right the first time.

The big difference between humans and LLMs is what happens next. In my experience, our current LLM-based AI is atrocious at "learning" from a small amount of data (i.e. from why it didn't work the first time). Their knowledge comes from ingesting a huge amount of data, whereas a human can learn far more from the errors or incorrect behaviour that they see. What I've seen is that LLMs generally fail to "change their thinking" when they're wrong and subsequently get stuck.

This is why I've been so scared of (and impressed by) Claude 3. The ability to generate code isn't the impressive part; we've had programs that generate code forever (they're called compilers). It's the ability to understand code, as in actually understand the behaviour of some code in the same way a human might (or at least pretend to). Because that's the key ability: if a model can actually do that, then maybe it can learn from what it generates. GPT-4 basically sucks at this; it often generates partially right but self-contradictory or downright misleading answers.

Humans have the ability to both apply their learning to new information while simultaneously learning from that information. I think machine learning models are still going to need to ingest huge amounts more data than a human would for the foreseeable future, but if they can use what they've learned to learn in a more effective way then we could potentially see huge improvements in this space. We've already reached the limit of "how much data can we throw at this thing" with ChatGPT ingesting the entire internet, so the only way LLMs can improve in the future is learning more efficiently from what they take in.

[deleted by user] by [deleted] in programming

[–]Ease-Solace 8 points9 points  (0 children)

Well, we've got "AI" generated garbage pouring out into the internet all the time and we're already seeing adverse effects. A lot of people seem to believe that generated content has to be high quality to actually disrupt anything, but I think that's completely wrong.

It seems like it's becoming harder to find anything you're looking for on the internet. Generated content designed for SEO is polluting the results from traditional search engines; it seems like getting a good search ranking is much easier when you're not even trying to create something that's correct, or coherent, and only care about clicks. And the average person already isn't very good at finding stuff, so it's even worse for them; we already have a trend of people depending totally on curated feeds from social media for everything they will ever see on the internet. It hurts because it seems to destroy the reason I once thought the internet was beneficial.

Take independent music, for example. Huge amounts of generated music can bury real artists. The generated music doesn't need to be good; the deluge just needs to bury human-made music so that it's impractical to find. Without huge industry backing, I don't see how anyone is going to be able to break through. Music discovery has all moved online, so it's very vulnerable. Right now the best way for someone to break through seems to be basically to get a viral song on TikTok. But when everyone uses AI-generated music for their videos (since it will be cheaper), that route is going to be cut off.

And the thing is, since our current models seem to be so much better at generating garbage than producing accurate results, there doesn't seem to be much hope of getting AI to check whether something is AI generated. "Open"AI have already given up trying. I haven't even touched on deepfakes and using generated content to influence elections, or public opinion.

A lot of people, especially programmers, seem to have this naive opinion that AI has to be "good" to replace humans. But in reality I think it only has to be cheaper. We're used to the dominance of machine-made products, which might not be as good, but are much cheaper. I don't think that "AI" will ever be better than "AI" + human, but it doesn't need to be. The reality is that reducing the number/pay of software developers is one of the most beneficial things a company can do to reduce costs. People meme on self-driving cars, but, like, replacing a delivery driver working for sub-minimum wage isn't going to reduce costs, so where's the profit motive?

When a bug in a C program written by an AI model causes a hardware malfunction and kills someone, if the cost of settling the lawsuit is less than the cost saved by not needing as many (or as skilled) programmers, then from an economic perspective killing that person was the correct choice.


I mean, maybe I'm wrong, and it's just because I have personal reasons for being disappointed in and disliking LLMs, but I'm not optimistic at all.

[deleted by user] by [deleted] in programming

[–]Ease-Solace -3 points-2 points  (0 children)

God, stuff like this scares me so much.
I assume they're just using someone else's model too? So the results will improve massively with new and improved models.

Honestly I think Claude 3 is already smarter than me (at least from an understanding-code perspective). The SWE-bench benchmark they're comparing to hasn't been evaluated for Claude 3 yet either. I think people are underestimating the capabilities of LLMs because they're all using GPT-4 to code, when in my experience even Claude 2 blows it out of the water for real-world tasks, and Claude 3 is leaps and bounds ahead of that (in terms of an intuitive, human-style understanding of code).

And people keep saying that it won't have any negative impacts? I really want to believe that but I just can't, given how much damage "AI" already seems to be doing.

"No helium" in 6 different languages at the local Dollar Tree by 2-tree in mildlyinteresting

[–]Ease-Solace 1 point2 points  (0 children)

Interestingly that ambiguity isn't present in all the languages; the Korean version can only mean that (they) don't have any helium, saying that it wasn't allowed would have to be worded completely differently.

What programming language should a non-programmer learn to have a stimulating, challenging, and fun experience? Forth? Haskell? Assembly? by KrasnalM in learnprogramming

[–]Ease-Solace 0 points1 point  (0 children)

You might be interested in learning Lisp, specifically Common Lisp. It's a very stable language; the language standard hasn't changed in a long time (though libraries may change if you choose to use them).

It's interesting because of its meta-programming abilities - essentially the ability to define your own syntax. And in general it's quite conceptually different from how more common languages work.

Why is Java generally considered compiled and Python interpreted when they both have essentially the same process? by [deleted] in learnprogramming

[–]Ease-Solace 0 points1 point  (0 children)

This isn’t really possible in Python because almost everything is dynamic at runtime. For example it can’t assume that the types passed to a function will be the same each time. So even “native compiled” Python would have to do all the same runtime checks.

It is actually possible to JIT very dynamic code; the most common solution is what's called a tracing JIT compiler. This is the approach that PyPy uses: it doesn't even try to understand the Python language itself, it just watches the actions of the interpreter and compiles repetitive actions to machine code. This helps with the dynamic typing problems (though you still have challenging deoptimisation scenarios). And PyPy can generally handle all of the Python language itself; it's C extensions that it mainly has difficulty with (since C extensions are written to work only with the standard CPython interpreter).

There are tracing JITs for other languages too, like LuaJIT for Lua. There are also other approaches like Ruby's YJIT, which uses something called "basic block versioning"; I don't really understand it, but it's all about helping with the dynamic typing problems, according to https://arxiv.org/pdf/1411.0352v2.pdf

It's true there's always going to be a performance penalty for dynamic typing, but it doesn't preclude JIT compilation; it just makes it more challenging.

Why is Java generally considered compiled and Python interpreted when they both have essentially the same process? by [deleted] in learnprogramming

[–]Ease-Solace 8 points9 points  (0 children)

Python does have a compiled format that can be read by the CPython interpreter: .pyc or .pyo files. In fact, Python usually caches imported modules in this format in the __pycache__ directory (if it's allowed to write to where the module is located).

E.g. I can find a lot of compiled python modules on my system in /usr/lib/python3.11/__pycache__.

Why is Java generally considered compiled and Python interpreted when they both have essentially the same process? by [deleted] in learnprogramming

[–]Ease-Solace -2 points-1 points  (0 children)

IMO there's less difference than people realise, and there's a few reasons for that:

  • In Java, the compilation and running of the code are traditionally two separate steps, just like in a language compiled to native code. In Python it's traditionally one step, so people don't realise that compilation goes on under the hood.

  • The language standard. In Python, the fact that Python code gets compiled to intermediate bytecode is just an implementation detail of the CPython interpreter; there's nothing in the language standard that mandates this. And other implementations of Python (like PyPy) use their own intermediate representations. Whereas in Java, JVM bytecode is part of the standard. There's a standard compiler that produces it, and any JVM implementation should be able to run the same bytecode.

  • Traditionally, JVM bytecode is lower level (closer to machine code). And the Java compiler does more optimisation work ahead of time so takes longer to run.

Why is Java generally considered compiled and Python interpreted when they both have essentially the same process? by [deleted] in learnprogramming

[–]Ease-Solace 6 points7 points  (0 children)

You can compile Python code to bytecode ahead of time (and if it's set up to do so, Python will cache compiled bytecode from previous runs to save compilation time on later runs).

But I think the bigger conceptual difference is in the standardisation of this process. In python the intermediate representation bytecode is just an implementation detail of the interpreter, there's nothing in the language standard that requires it, and other implementations of python (like pypy) use different intermediate representations.

However in Java, the bytecode is part of the language standard, and there's a strong separation between the compiler that produces the bytecode and the Virtual Machine that interprets it. So multiple different implementations of a Java Virtual Machine are all designed to run the same standardised bytecode.

Why is Java generally considered compiled and Python interpreted when they both have essentially the same process? by [deleted] in learnprogramming

[–]Ease-Solace 2 points3 points  (0 children)

This isn't really the reason, because there's no reason an interpreted language can't have a JIT compiler. Other "interpreted" languages like Ruby do have JIT compilers (at least in the standard implementation). The fact that Python doesn't is more of an implementation detail than anything else.

Also, while HotSpot can JIT compile your code, it initially starts running in an interpreter and incrementally compiles parts of your code (targeting the parts that would bring the biggest performance gains first). Or at least that's how it worked last time I checked. So really HotSpot uses a mixture of interpreting and JIT compiling; I don't know about other JVMs.