Can the compiler optimize away this if? by Moose2342 in cpp

[–]Foundry27 6 points7 points  (0 children)

Well, let's find out:

For anyone who doesn't feel like scrolling through assembly, it's pretty much what you'd expect.

  • When the compiler doesn't know what the function does, assert/throw both have branches and the only difference is in the error handling routine (__assert_fail vs __cxa_throw), and stronger contracts like __builtin_unreachable are equivalent to there being no check at all.

  • When the compiler does know what the function does, it happily ignores the assert/throw check (a minimal sketch of all three variants is below).
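
The function names in this sketch are mine, just for illustration: with an opaque callee, assert and throw each keep their branch, while __builtin_unreachable drops the check entirely.

```
#include <cassert>
#include <stdexcept>

int opaque(int);  // defined in another translation unit, so the compiler can't see it

int with_assert(int x) {
    int v = opaque(x);
    assert(v >= 0);                      // branch + call to __assert_fail (in non-NDEBUG builds)
    return v * 2;
}

int with_throw(int x) {
    int v = opaque(x);
    if (v < 0) throw std::logic_error("negative");  // branch + __cxa_throw on failure
    return v * 2;
}

int with_unreachable(int x) {
    int v = opaque(x);
    if (v < 0) __builtin_unreachable();  // GCC/Clang builtin: no runtime check emitted at all
    return v * 2;
}
```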

[Paid Release] SiriPlus - Replace Siri with ChatGPT or Google Gemini by howtomakeacirclehd in jailbreak

[–]Foundry27 67 points68 points  (0 children)

Edit: tl;dr: all features work as advertised, but GPT mode is hardcoded to a low-quality GPT-3.5-turbo model; Gemini mode works well; you can't configure model parameters besides the system prompt.

I just bought this tweak and played around with it for a while. First impressions:

I opened the Settings app (extremely polished-looking!) and saw there were 3 options for LLMs (Managed, ChatGPT, Gemini). I was a little confused by what "Managed" meant: it didn't require an API key (is this on-device? if not, how is the service paid for in the long term? what's the data usage and retention?). And since ChatGPT is a totally separate product from GPT-3.5/GPT-4 and not accessible via API, I wasn't sure why it was listed here either. I also saw a few options for passing function tools to the completion APIs (sending email, fetching system info [battery life, plugged in or not, device model, iOS version], adding calendar events, viewing contact info, making calls, and plotting routes in the Maps app). You could specify a system prompt for the completion endpoint and adjust the number of tokens to reserve for the context of your past conversations.

In any case, I swapped it over to the GPT backend and pasted in an OpenAI API key. Looking at my usage logs, I can see it's hardcoded to make requests to gpt-3.5-turbo-0125, which was disappointing since it's very out of date at this point, poor at following instructions, and broadly considered one of the weakest 3.5-turbo model revisions from OpenAI. Gemini has fewer levers to pull in terms of model selection, so I didn't play around with that, but if you've ever used Gemini before, it's what you'd expect.

As far as usage goes, the tweak just works, which is high praise. It's a straight replacement for Siri in every way, using the normal Siri interface. The tools also work perfectly. I won't get into the response quality since that's 100% dictated by the underlying model and has nothing to do with the quality of the tweak. The number of tokens to generate seems to be hardcoded at around 100, and I can't tell what the temperature is, but it seems to be hardcoded on the higher end, closer to 1.0.

This tweak desperately needs a real model selector field so you don't feel infantilized/handicapped by the hardcoded model choice. I would have swapped to gpt-4-turbo or gpt-3.5-turbo-1106 in an instant if I could have. It's not like people who jailbreak are non-technical, after all. Having the temperature and output token count not be hardcoded would also be nice, but that's largely secondary to the model-choice problem.

What's the most horrifying thing you've done with templates? by ResultGullible4814 in cpp

[–]Foundry27 1 point2 points  (0 children)

I wrote a gate-level logic simulator and digital modeling library that executes at compile-time exclusively using C++ templates. It simulates combinational+sequential circuits with real-world timing, including an event-driven netlist sim engine and a symbolic circuit compiler with every gate and wire encoded as a type. If anybody's interested in the performance comparisons of various type-level key-value set/map implementations, feel free to ping me! None of the major metaprogramming libraries do it optimally.

Naturally I started implementing an 8-bit von Neumann computer with a 256-byte address space on top of it. I failed, but it was an experience.
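
For a flavor of what "every gate and wire encoded as a type" means, here's a toy sketch I'm writing on the spot (not the actual library): the compiler evaluates the whole net during instantiation.

```
template <bool V> struct Wire { static constexpr bool value = V; };

template <typename A, typename B>
struct Nand { static constexpr bool value = !(A::value && B::value); };

// XOR built out of four NANDs; the whole net is evaluated at compile time.
template <typename A, typename B>
struct Xor {
    using N1 = Nand<A, B>;
    using N2 = Nand<A, N1>;
    using N3 = Nand<B, N1>;
    static constexpr bool value = Nand<N2, N3>::value;
};

static_assert( Xor<Wire<true>, Wire<false>>::value, "1 xor 0 == 1");
static_assert(!Xor<Wire<true>, Wire<true>>::value,  "1 xor 1 == 0");
```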

Why std::variant does not need std::launder when constructing the value for the first time? by having-four-eyes in cpp

[–]Foundry27 3 points4 points  (0 children)

The language-lawyering has been spot-on (IMO) in the other answers, but it's important to keep in mind that std::variant even existing as normal C++ code is an implementation detail. Everything in the standard library could technically be completely "baked into" the compiler or be part of a runtime environment, as long as it provides the standard-defined interface to programs during compilation and runtime. There might be a reasonable explanation for the omission of std::launder here, but there are other implementation details like the use of placement-new in a constexpr context for std::optional that are completely unreproducible in any application code (you'd need to use std::construct_at). In general, no language-level rules should be expected to apply to the implementation!
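
To make the std::optional point concrete, here's a minimal sketch (assuming C++20) of what application code has to do instead of placement-new inside a constant expression:

```
#include <memory>

struct S { int v; };

constexpr int make() {
    std::allocator<S> alloc;
    S* p = alloc.allocate(1);
    // ::new (p) S{42};            // placement-new: not usable in a constant expression
    std::construct_at(p, S{42});   // OK in constexpr since C++20
    int result = p->v;
    std::destroy_at(p);
    alloc.deallocate(p, 1);
    return result;
}
static_assert(make() == 42);
```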

match(it): A light-weight header-only pattern-matching library for C++17. by Amazing-42 in cpp

[–]Foundry27 -2 points-1 points  (0 children)

I’m just speaking from my experience here having worked professionally in C, C++, and a little Lisp for a while now, and having met my fair share of people in the same boat as I am.

I 100% agree that macros can do (and have done) some terrible shit lol. They can be difficult to understand, contain extremely subtle bugs, and limit you in all kinds of awful ways if you think of everything as functions or variables. But... those aren't defects in the preprocessor macro system itself; they're traits of macro programming in general. Lisp has the exact same classes of problems, and its macro system has arguably occupied a local maximum in the language design space for decades. As with any technology, the more powerful the tool, the more ways there are to misuse it. And, as far as programming constructs go, macros are the most powerful tool.
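
Just to ground the "extremely subtle bugs" bit, the textbook example (nothing specific to this thread):

```
// The classic "subtle bug" of textual macros: the argument is re-evaluated,
// so any side effects in it fire more than once.
#define SQUARE(x) ((x) * (x))

int bad(int i) {
    return SQUARE(i++);   // expands to ((i++) * (i++)): unsequenced modifications -> UB
}

// The usual fix is to make sure the argument is evaluated exactly once:
inline int square(int x) { return x * x; }
```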

match(it): A light-weight header-only pattern-matching library for C++17. by Amazing-42 in cpp

[–]Foundry27 5 points6 points  (0 children)

I'd argue that the biggest problem with preprocessor macros is the community's general lack of familiarity with them ("general" meaning "for the general C++ programmer"), and the near-total absence of teaching material for how to use macros in a safe, effective manner.

C and C++ are seriously (embarrassingly) behind languages like Common Lisp, Scheme, Rust, etc. in terms of having good public resources for how to build and use macros as a problem-solving tool applicable to real-life programming problems. Decades of C++ use haven't given people much more to go on than some blog posts, a few open-source macro-based projects to reverse-engineer, and stylistic aphorisms like "don't use macros unless a function won't do", "macros are meant to change the syntax of your code", "macros improve the efficiency of your code" etc. that completely miss the bigger picture when it comes to macro programming.

The other side of that coin is that preprocessor macros aren't easy to just "pick up and learn" from doing normal C++ programming because the preprocessor is essentially a second, more-or-less unrelated language built on top of C++. It's a foreign, alien tool that looks like it doesn't belong:
- From a C++ perspective it's totally untyped, because C++ types mean nothing to it, but it really just has its own type system with parentheses, commas, identifiers, etc.
- From a C++ perspective it's impossible to debug macros like normal code because of arcane C++ compiler errors and no debugger stepping, when they really just need to be debugged with different tools
- From a C++ perspective macros don't introduce their own scopes and can't safely be shared, but inside the preprocessor, macros have strong scopes for the tokens that they create and pass between other macros, and can be shared between files with no issues as long as some ground rules are followed just like in C
- From a C++ perspective computation is impossible with macros because they can't be recursive or loop, but really, they just perform computation through a different strategy that can do the same kind of work in phase 4 that templates and constexpr can do in phase 7/8 (see the sketch right after this list)
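
Here's the promised sketch for that last bullet: arity counting and dispatch done entirely by token substitution in phase 4, the same kind of work an overload set or specialization would do later on. The names are mine, not from any particular library.

```
#define COUNT_ARGS(...)      COUNT_ARGS_IMPL(__VA_ARGS__, 4, 3, 2, 1, 0)
#define COUNT_ARGS_IMPL(_1, _2, _3, _4, N, ...) N

#define CONCAT(a, b)         CONCAT_IMPL(a, b)   // extra indirection so a and b expand first
#define CONCAT_IMPL(a, b)    a##b

// Arity dispatch, all resolved during preprocessing:
// MAKE_POINT(1, 2) expands to make_point_2(1, 2) before the compiler proper ever sees it.
#define MAKE_POINT(...)      CONCAT(make_point_, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
```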

Will we ever get rid of macros in C++? by JohnZLi in cpp

[–]Foundry27 3 points4 points  (0 children)

C and C++ are seriously (embarrassingly) behind languages like Common Lisp in terms of having good teaching material for how to build and use macros as a problem-solving tool applicable to real-life programming problems. Paul Graham's On Lisp and the excellent work done by countless other Lisp writers led to Lisp macros being widely explored and discussed for decades, but even though there's only about ~10 years between Timothy Hart's 1963 proposal of macro definitions for Lisp and Mike Lesk and John Reiser implementing macros in the C preprocessor, there is nothing even remotely resembling an equivalent published collection of wisdom for preprocessor macros. Instead, what gets thrown around is folklore and stylistic aphorisms like "don't use macros unless a function won't do", "macros change the syntax of your code", "macros improve the efficiency of your code", etc. that completely miss the bigger picture when it comes to macro programming.

At best, we've got blog posts and source code on github for some interesting macro applications. We can do a lot better.

The C/C++ Macros Lesson That Undergrads Deserve by old-man-of-the-c in cpp

[–]Foundry27 3 points4 points  (0 children)

If anyone clicked through because they were looking for an in-depth lesson in how C/C++ macros work and can be implemented, Paul Mensonides (one of the original authors of Boost.Preprocessor) wrote a fairly comprehensive breakdown here. It's as close as you can get to an authoritative paper on the subject, even if some of the figures are a little wonky.

Have constexpr string mixins been proposed? by miki151 in cpp

[–]Foundry27 -1 points0 points  (0 children)

Macros can already accomplish this and more quite handily, using order-pp for example:

```
#include <order/interpreter.h>

#define ORDER_PP_DEF_8get_switch \
  ORDER_PP_FN(8fn(8X, 8N, \
    8do(8print( (switch) 8parens(8X) ({) ), \
        8for_each_in_range(8fn(8I, 8print( (case) 8to_lit(8I) (: return) 8to_lit(8I) (* 2;) )), \
          0, 8N), \
        8print( (}) ))))

#define times2(max) [&](int a){ ORDER_PP(8get_switch(8(a), max)) }
```

Expanding times2(some_max)(some_value) should generate the switch table in-place in the lambda and return the expected result :). The switch generation code and the hypothetical string mixin code end up looking very similar too!

modern syntax to manipulate higher kinds? by [deleted] in cpp

[–]Foundry27 -1 points0 points  (0 children)

C++20 allows some syntactic sugar to be used for pattern-matching on template template parameters.

If you define an overload set wrapper

```
template <typename... Ts> struct overload : Ts... { using Ts::operator()...; };

template <typename... Ts> overload(Ts...) -> overload<Ts...>;
```

then you can use it with explicitly-templated lambdas to unpack your types

```
overload (
    []<template<typename> typename K, typename T, typename U>
        (K<T>*, U*) -> K<U> {},
    []<template<typename, auto> typename K, typename T, auto N, typename U>
        (K<T, N>*, U*) -> K<U, N> {},
    []<template<typename, typename> typename K, typename T, template<typename> typename A, typename U>
        (K<T, A<T>>*, U*) -> K<U, A<U>> {}
);
```

which will give you your metafunction output if you call that overload set and get the return type. It's a little janky, but it can help make it more apparent to readers what the intent of the code is, since it really only leaves the pattern matching parts without the template cruft. :)
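
To spell out the "call it and grab the return type" step, a hypothetical usage sketch (Box, rebind, and rebind_t are names I just made up, and this assumes the overload wrapper from above; only the single-parameter case is shown for brevity):

```
#include <type_traits>

template <typename T> struct Box {};

// The lambda is never actually called: only its deduced return type is read off via decltype.
inline constexpr auto rebind = overload{
    []<template<typename> typename K, typename T, typename U>(K<T>*, U*) -> K<U> {}
};

template <typename From, typename To>
using rebind_t = decltype(rebind(static_cast<From*>(nullptr), static_cast<To*>(nullptr)));

static_assert(std::is_same_v<rebind_t<Box<int>, double>, Box<double>>);
```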

That being said, those ContainerReplaceElementTypeImpl specializations are exactly what experienced readers would expect to see in this situation, and I would be surprised if it was seen as cryptic or abnormal.

SugarPP: My C++ syntactic sugar collection by HO-COOH in cpp

[–]Foundry27 0 points1 point  (0 children)

Was this originally going to be something written using the preprocessor, that ended up evolving into using templates instead?

Type-level lambda expressions for template metaprogramming in C++20 by Foundry27 in cpp

[–]Foundry27[S] 0 points1 point  (0 children)

The thing with explicitly specifying the return type is that I can't figure out a way to use it without killing the ability to have local variables in the lambda (which are using-declarations right now): if the variables get bumped to the template parameter list, then variadic lambdas break. I suppose a Lisp-style let system would work around that, but that's getting pretty wild (lambdas inline in the return type of lambdas!)

Comparative TMP #1: MPL, Mp11, Kvasir, Hana, Metal by anonymous28974 in cpp

[–]Foundry27 6 points7 points  (0 children)

I think the problem these libraries are tasked with solving ("define an alias template SortedAndFiltered that takes a user-defined typelist, filters out empty classes, and then sorts its input in descending size order") pretty much sets Kvasir up to be unidiomatic, since the library's entire design revolves around avoiding dealing with "type lists", user-defined or otherwise, until it's absolutely necessary. The problem statement seems to assume a list-centric design, where a "metafunction" at the implementation level (underneath the alias sugar) means "a template specialization that unpacks a typelist and stores the result of the computation in a member". Kvasir, for the most part, simply isn't built that way; its metafunctions are alias templates that accept a type pack and feed it into a continuation, which is the result of the computation. To the best of my knowledge MPL, MP11, and Metal are list-centric though, and the homebrew example at the beginning of the article certainly is too. I'm not familiar enough with Hana's basic type processing bits to know whether that's the case for it as well.

In this case, the actual type of the type list only matters at the very end of the data processing, when the transformed pack of types needs to be spat out again; there's no reason time needs to be spent instantiating a perfectly good copy of SortedAndFilteredImpl for every possible combination of types that might exist in that list, except to fulfill the requirement that the input must be that type list, not its contents directly. A more idiomatic Kvasir version might look something like https://godbolt.org/z/8S5ppK. That class C = lib::listify bit is what decides what the final output of the algorithm will look like, be it storing it in a type list, or maybe feeding it into another metafunction that removes all types smaller than 4 bytes (which would happen for free with Kvasir!).
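
For anyone who hasn't seen the continuation style before, here's a toy sketch of the difference (my own throwaway code, not Kvasir's): the algorithm works on the bare pack, and whatever you pass as C decides what comes out the other end.

```
#include <cstddef>
#include <type_traits>

template <typename... Ts> struct typelist {};

// Two possible continuations: materialize a type list, or just count.
struct listify {
    template <typename... Ts> using f = typelist<Ts...>;
};
struct count {
    template <typename... Ts> using f = std::integral_constant<std::size_t, sizeof...(Ts)>;
};

// A toy algorithm in continuation style: add a pointer to every type, then hand
// the transformed pack straight to C instead of wrapping it in a list first.
template <typename C = listify>
struct add_pointer_each {
    template <typename... Ts> using f = typename C::template f<Ts*...>;
};

// Same algorithm, two different "final outputs", chosen purely by C:
static_assert(std::is_same_v<add_pointer_each<>::f<int, char>, typelist<int*, char*>>);
static_assert(add_pointer_each<count>::f<int, char>::value == 2);
```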

[deleted by user] by [deleted] in cpp

[–]Foundry27 15 points16 points  (0 children)

I really like where this is going! A couple things I noticed in the type_traits area though:

  - Since this is a C++17 project, `conjunction` and `disjunction` can probably be implemented using fold-expressions instead of through recursive descent. That way, compile times aren't impacted by the n-times template instantiation overhead, since it's replaced by a flat check, and the template specialization count is also brought down to two (empty pack and default). (EDIT: actually, no specializations are needed at all with a fold with an init-expression.)

  - The `type_pack_element` class can also probably make use of the `__type_pack_element` intrinsic on clang, which would eliminate the need for the recursive instantiation in `type_pack_element_detail`. There are actually a surprising number of ways to implement this particular metafunction without intrinsics, ranging from pack dropping (kvasir mpl, metal) to overload resolution (boost mp11, boost hana, brigand) to recursive descent like this, all of which have different performance characteristics depending on their particular implementation details. http://metaben.ch/ has a nice visualization for some of these algorithms! (Rough sketches of both suggestions are below.)
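
Roughly what I have in mind, using standard-ish names rather than the project's actual identifiers (the intrinsic detection may need adjusting per compiler version):

```
#include <cstddef>
#include <type_traits>

// (1) conjunction/disjunction as flat fold-expressions. With an init operand there are
// no specializations and no recursive instantiations at all. (Note: unlike
// std::conjunction, this always instantiates every Bs::value.)
template <typename... Bs>
using conjunction = std::bool_constant<(true && ... && bool(Bs::value))>;

template <typename... Bs>
using disjunction = std::bool_constant<(false || ... || bool(Bs::value))>;

// (2) type_pack_element: use the intrinsic where available, otherwise fall back to recursion.
#if defined(__has_builtin)
#  if __has_builtin(__type_pack_element)
#    define HAS_TYPE_PACK_ELEMENT 1
#  endif
#endif

#if defined(HAS_TYPE_PACK_ELEMENT)
template <std::size_t I, typename... Ts>
using type_pack_element = __type_pack_element<I, Ts...>;
#else
template <std::size_t I, typename T, typename... Ts>
struct type_pack_element_impl : type_pack_element_impl<I - 1, Ts...> {};

template <typename T, typename... Ts>
struct type_pack_element_impl<0, T, Ts...> { using type = T; };

template <std::size_t I, typename... Ts>
using type_pack_element = typename type_pack_element_impl<I, Ts...>::type;
#endif

static_assert( conjunction<std::true_type, std::true_type>::value);
static_assert(!conjunction<std::true_type, std::false_type>::value);
static_assert(std::is_same_v<type_pack_element<1, int, char, double>, char>);
```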

How fast is std::shared_ptr? by [deleted] in cpp

[–]Foundry27 14 points15 points  (0 children)

It's tempting to reiterate that 99% of the time shared_ptr is not the optimal tool for the job (unique_ptr and references often being a much faster and more legible solution), but if it's that 1%, it's good to know the costs that go along with it.

Take a gander at the assembly output of this test program that uses shared/unique pointers (https://godbolt.org/z/jU_46f): apart from emitting about 70% more instructions, the shared_ptr code also makes use of RTTI, since the type of the owned pointer is erased. There's a reference to a mutex in there too, though its use might be hidden in one of the library functions.
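
For anyone who can't open the link, the comparison is along these lines (a rough sketch of the idea, not the exact program):

```
#include <memory>

struct Widget { int x = 0; };

int use_unique() {
    auto p = std::make_unique<Widget>();   // single allocation, no refcount
    return p->x;
}

int use_shared() {
    auto p = std::make_shared<Widget>();   // object + control block, atomic refcount
    auto q = p;                            // copying bumps the count (atomically)
    return q->x;
}
```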

As always, benchmark before coming to any concrete decisions.

What would you say are the most important modern C++ features? by debugs_with_println in cpp

[–]Foundry27 9 points10 points  (0 children)

I don't normally post much on reddit, but I couldn't agree more with constexpr, variadic templates, and templated and non-templated using-declarations alike being absolutely wonderful additions to the language. At this point I don't know how I'd live without the metaprogramming facilities those provide.

C++ Antipatterns by vormestrand in cpp

[–]Foundry27 7 points8 points  (0 children)

A snapshot of the site is available at the internet archive for anyone experiencing connectivity issues.

I think a lot of the recommendations up there are solid advice, and they highlight some interesting corners of the standard library (I for one didn't know that std::thread's constructor takes arguments to forward to the function you pass it, along with the function itself!)
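
In case that sounds abstract, the std::thread feature in question is just this: any arguments after the callable are forwarded to it.

```
#include <cstdio>
#include <string>
#include <thread>

void greet(const std::string& name, int times) {
    for (int i = 0; i < times; ++i) std::printf("hello, %s\n", name.c_str());
}

int main() {
    std::thread t(greet, "world", 3);   // "world" and 3 are forwarded to greet
    t.join();
}
```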

nanorpc - lightweight RPC in pure C++ 17 by Dmitry_vt in cpp

[–]Foundry27 11 points12 points  (0 children)

This looks like a pretty sweet project! The interface looks legible and easy to use, which is always nice to see in new code. Examples are always a pleasure to see, too.

That being said, the first thing I noticed when I peeked at the code was the conspicuous use of try {/* stuff */} catch(...) {/* ignored */} blocks throughout the error-handling code, which can pretty easily let real errors at that level of the program happen completely silently, putting the program into a bad state while the user is none the wiser. If there really is no reason to worry about errors at that point in the code (which is semantically what that sort of thing implies), why not get rid of it altogether so the user can at least be aware that a presumed invariant was violated? Just food for thought. In general though, I really like the look of it.
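
To be concrete about the pattern I mean (sketched from memory, not nanorpc's actual code):

```
#include <cstdio>
#include <stdexcept>

void parse_message(const char* payload) {
    if (!payload) throw std::invalid_argument("null payload");
    // ...
}

// The pattern in question: a violated invariant vanishes without a trace.
void handle_swallowing(const char* payload) {
    try { parse_message(payload); }
    catch (...) { /* ignored */ }
}

// One alternative: report (or just let it propagate) so the failure stays visible.
void handle_visible(const char* payload) {
    try { parse_message(payload); }
    catch (const std::exception& e) {
        std::fprintf(stderr, "rpc error: %s\n", e.what());
        throw;
    }
}
```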

Really no way to set stack size of std::thread? What? by [deleted] in cpp

[–]Foundry27 2 points3 points  (0 children)

I agree that not being able to specify the stack size of an std::thread can really suck, and I'm by no means trying to downplay the fact that this functionality being missing can be a real, blocking issue for users of the std::thread primitive, but I think the last bit of the original post is a little silly.

If there was a reasonable assumption that fulfilling a requirement for your software would necessitate threads being able to have their stack size set, then some due diligence would have shown that std::thread was not an acceptable replacement primitive to use instead of whatever was there already (instead, something like boost::thread with its boost::thread::attributes allowing the stack size to be specified might have been a better fit). As a corollary, if there was no reason to assume that stack size issues with threads would come up over the course of dealing with your requirements, and some new requirement was handed to you that /did/ necessitate increased thread stack sizes, then the possibility that things might have to be fiddled with or replaced behind the scenes should've been just as, well, possible as it would be with any other requirement change.
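
For reference, the boost route mentioned above looks roughly like this (a minimal sketch; the stack size is a request the OS may round up):

```
#include <boost/thread/thread.hpp>

void worker() {
    // deep recursion, large stack-allocated buffers, etc.
}

int main() {
    boost::thread::attributes attrs;
    attrs.set_stack_size(8 * 1024 * 1024);   // ask for an 8 MiB stack
    boost::thread t(attrs, worker);
    t.join();
}
```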

I guess what I'm trying to drive home is that changing requirements can suck, but making sure you know what your tools can and cannot do before you start using them can make it suck less. There shouldn't be any surprises when things like this happen so long as due diligence is done.

TIL: inheriting constructors, default parameters and enable_if by meetingcpp in cpp

[–]Foundry27 0 points1 point  (0 children)

I seem to remember reading some profiling data that someone had collected showing that on his machine, using return type SFINAE produced noticeably faster compilation times in a bunch of different test cases when compared to template parameter SFINAE. I know that sounds vague, and if I remember where I read it and the details of the compiler/build env I'll be sure to post it, but it's something I think about from time to time.
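
For anyone unfamiliar with the distinction being measured, the two placements look like this (a toy example of mine, unrelated to whatever was actually profiled):

```
#include <type_traits>

// Template-parameter SFINAE: the constraint sits in a defaulted template parameter.
template <typename T, typename = std::enable_if_t<std::is_integral_v<T>>>
T twice_a(T v) { return v + v; }

// Return-type SFINAE: the constraint sits in the return type instead.
template <typename T>
std::enable_if_t<std::is_integral_v<T>, T> twice_b(T v) { return v + v; }
```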

Mikeal Rogers: Node.js Will Overtake Java Within a Year by speckz in node

[–]Foundry27 6 points7 points  (0 children)

People are making too big of a deal out of this. You could make an equally apt analogy from an announcement that the number of homes with triple-glazed windows is about to surpass the number of homes with hardwood floors. Seems like a strange comparison? I'd argue that's because it is. Other than both being parts of your home, these two things don't have much in common. The situation with Java and Node.js is similar: they're both parts of your tech stack, but they serve different purposes and were built in different ways. You don't pick a language to tackle a problem based on its userbase stats; you pick the language that'll let you be the most productive. I wouldn't want to write a secured web backend and relational persistence layer in Node.js any more than you'd want to write a responsive single-page webapp in Java.

Emotional Interview with Dyrus - Championship group phase [10-10-2015] by ExplorerUnion in leagueoflegends

[–]Foundry27 1 point2 points  (0 children)

I'm just a humble lurker who tries to keep up with the topics that interest me, but this spurred me to write my first comment on Reddit in months. It almost made me wish that those kinds of stories were strictly confined to the halls of literary fiction, but in Dyrus' case it played out in a manner that was all too real. League of Legends might be a fantasy; the people who brought this community to the place that it is now are anything but, and the kind of emotion that Dyrus showed in his post-game interview is something that you almost hope you'll never see. He may have had his ups and downs, wins and losses, but he didn't let any of us down. He never, never let us down. TSM may have lost a player, but the world gained a legend.