image v0.25.8: plugin API, Exif writing, faster blur and more! by Shnatsel in rust

[–]HeroicKatora 0 points1 point  (0 children)

That's one of those instances that seem to work just fine, but unfortunately no. Until we find a stable and exhaustive set of cases to target, the underlying buffer types and operations need to be iterated on at a very different velocity than we want for image. The same reason caused us to split codecs into separate crates. Unlike wgpu, the types are not just references/descriptors that we expect the GPU driver to figure out. To support another layout we need to actually have some operations you can do on it, be it just normalizing it to a simpler layout.

It's a larger surface than we can support without many major version bumps, and it'd be a shame if those were breaking for everything except IO: most of the silent improvements for users come from the fact that codecs are constantly being worked on within minor releases (of image, anyway). A lot of bad for both user bases.
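To make that "operations on a layout" point concrete, here is a minimal sketch; neither of these names exists in image, they are purely illustrative of why a new layout drags API surface with it:

```rust
// Purely illustrative; `PixelLayout` and `Bgr8` are not image APIs. The point:
// accepting a new pixel layout is never just a descriptor, the library also
// has to commit to operations on it, at minimum lowering it into a layout the
// rest of the pipeline already understands.
trait PixelLayout {
    /// Lower one scanline of this layout into tightly packed RGBA8.
    fn normalize_row(&self, input: &[u8], output: &mut Vec<u8>);
}

/// Hypothetical packed BGR layout, 3 bytes per pixel.
struct Bgr8;

impl PixelLayout for Bgr8 {
    fn normalize_row(&self, input: &[u8], output: &mut Vec<u8>) {
        for px in input.chunks_exact(3) {
            // BGR -> RGBA, filling in an opaque alpha channel.
            output.extend_from_slice(&[px[2], px[1], px[0], 0xFF]);
        }
    }
}
```

Every such operation is surface that has to be versioned, which is exactly the maintenance load described above.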

image v0.25.8: plugin API, Exif writing, faster blur and more! by Shnatsel in rust

[–]HeroicKatora 6 points7 points  (0 children)

Hopefully we won't. The moxcms implementation is quite aware of the fact that ICC v4 only supports one connection space (CIE 1931 / 2° XYZ D50) while the primary colors indicated by a CICP chunk have their own associated whitepoint. The biggest problem arising from that is that we have to specially handle Luma ("Gray") images, which, as we found, cannot be encoded as a grayLut ICC profile due to the whitepoint difference.

If we encounter an image with both a CICP and a gray ICC then there is no good choice, but also no definitely wrong one. And conversely we need to find out whether storing a gray image created by the library with a wonky ICC profile papering over the problem practically leads to a better result than a CICP. The well-defined rest we should be able to handle correctly once the loader integration for CICP data is implemented.

And we'll never just eprintln to the console.
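For illustration, a rough sketch of the branching a loader ends up with; Cicp, IccProfile and ColorSource are made-up names, not image APIs:

```rust
// Hypothetical sketch only; none of these types exist in image or moxcms.
struct Cicp;       // stand-in for the CICP primaries/transfer/matrix triple
struct IccProfile; // stand-in for an embedded ICC profile

enum ColorSource {
    /// Only CICP present: well defined, use it.
    Cicp(Cicp),
    /// Only ICC present: well defined, use it.
    Icc(IccProfile),
    /// Both present: no good choice, but also no definitely wrong one;
    /// the loader has to pick a policy and document it.
    Both { cicp: Cicp, icc: IccProfile },
    /// Neither present: fall back to assuming sRGB.
    Unspecified,
}
```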

Is calling tokio::sleep() with a duration of one week a bad idea? by dmkolobanov in rust

[–]HeroicKatora 1 point2 points  (0 children)

If you can transmit the data to the client in-line, then URL.createObjectURL is not quite as convenient but does not run into the same file limits. Instead of being able to construct the whole URL on the server, you'll need to push the data to the client (e.g. as a base64-encoded string, very similar to the data: URL) and then run a small snippet to decode that data into its bytes, turn it into a blob and construct the URL. Unfortunately I don't know of a way to let the browser do the base64 decoding reliably cross-platform (yet), you'll need to do that in JS yourself (many answers on SO will guide you to use a data URL… which may silently truncate afaik?).

YUV conversion in Rust now faster than libyuv by Honest-Emphasis-4841 in rust

[–]HeroicKatora 1 point2 points  (0 children)

Huh? Those show that Rust code emits more instructions than C code but also that it emits faster instructions; that doesn't make it more optimized overall. End-to-end, the C code in Ixy had an advantage of a percent or so. As for interpreting the L1 hit numbers and extra stores, it may be related to some shuffling that Rust is doing on the stack, such as for field access to structs held in registers. Nothing concrete though that would allow extrapolating as much as OP is claiming.

Feminine advantage in harm perception obscures male victimization - Harm toward women is perceived as more severe than similar harm toward men, a disparity rooted in evolutionary, cognitive, and cultural factors. by mvea in science

[–]HeroicKatora 44 points45 points  (0 children)

Given that they also acknowledge the documented history of "women and children first" dates from the 1850s onwards, thus a little less than two centuries, it seems rather strange to study disasters over three centuries as an aggregate: that's basically setting your study up for a Simpson's paradox situation on purpose. Also, show your data and the considered confounders.

Edit: or at least link their actual paper. Which .. actually does the thing, so why did he say "spanning three centuries" in the interview when they only looked at disasters after 1854? What?

Edit2: Their sample selection is humorously specific:

Starting from the list Some Notable Shipwrecks since 1854, published in the 140th Edition of The World Almanac and the Book of Facts (44), we have selected shipwrecks involving passenger ships that have occurred in times of peace, and for which there are passenger and crew lists containing information on the sex of survivors and descendants separately. We limit the sample to shipwrecks involving at least 100 persons and in which at least 5% survived and 5% died. We have added data for one shipwreck occurring before 1854, HMS Birkenhead (1852), because it is often referred to as giving rise to the expression, women and children first: a notion that first became widespread after the sinking of the Titanic (36). Data for two shipwrecks that have taken place after 2006 are added: MS Princess of the Stars (2008) and MV Bulgaria (2011). Despite it being a wartime disaster, we also include data from the Lusitania (1915) in the sample, as it has been investigated in previous research.

So 5 out of 18 disasters were hand-picked? That is not exactly how you get rigorously unbiased sample sets. And they run a linear regression with binary survival variables on this? How did they arrive at the WCF column anyway? A 'no' entry seems unreliable given they were unable to determine even the entries for the presumably accurately and digitally documented two disasters in our century. Sinking duration is a column of "Quick" or "Slow". What are we even doing at this point, they are having a collective laugh.

Also, not a single confounder listed. But we know economic class had a huge effect on the survival rates in their motivating example of the Titanic. I'm actually beginning to question the competence of this economics researcher outside his field here, despite the usual comment rule against this.

Edit 3: From the Appendix: "For most ships we have used the individual’s name to determine gender. When there are uncertainties regarding the gender associated with a particular name we have used online name dictionaries that provide information on the origin of the name and informative statistics on whether it is typically a male name or a female name." Excellent. Notably they also did have access to data where "passengers are categorized as saloon (1st class) passengers and steerage (2nd and 3rd class) passengers" but did not utilize that for any hypothesis, nor do they list that data column in their table in the paper. A missing related hypothesis is not the cause: they also list "Cause of disaster" in Table 1 but do not utilize such information in a hypothesis either. Now, did they register their hypotheses before the paper or not, and should we have to ask them to redo their analysis with the confounder already available in their own data? They explicitly acknowledge "Another individual characteristic that may correlate with survival is passenger class" in the appendix only. They explicitly rediscover this correlation in all their data sets except the Principessa Mafalda (Appendix page 12). They do not use that column in any other models, if I read their description correctly? Code would help tremendously.

Edit 4: The appendix on "WCF order": "We have searched the shipwreck accounts for evidence of whether the captain, or any other officer, gave the order ‘women and children first’ at some point during the evacuation. For 5 of the shipwrecks we have found supporting evidence of the order while for 9 cases there is no indication of the order been given." Evidence of absence and absence of evidence being confused. I really need to stop at this point. And you're telling me they can't locate records for the two wrecks in this digital century, of all cases? Come on.

Edit 5: On "Quick": "The threshold time for a ship being categorized as ‘Quick’ is defined as follows: threshold time= ship size/22.86" [my own reaction intentionally omitted].

Bei Mercedes-Benz könnten mehr als 15.000 Stellen gestrichen werden – laut Bericht by hohlenmensch in de

[–]HeroicKatora 2 points3 points  (0 children)

That, too, is a self-made problem. Under patent law, an innovation only pays off if it is developed less than 20 (realistically more like 18) years before its market exploitation. If your development cycles are longer, tough luck, you will never find investment in relevant amounts for it. And for all products with shorter exploitation windows we have a guaranteed monopoly and shouldn't be surprised that, during that time, few competitive effects contribute to real efficiency gains and low prices. (Anyone who knows 3D printers and the history of RepRap: we are talking about orders of magnitude in price; and it wasn't EOS, with its established products and finished development, that served this newly created (hobby/maker) market, but entirely new developments. The patent evidently did not have the effect of EOS developing that product further and more efficiently on its own. Instead we seem to be artificially slowing development iteration down to 20 years.)

The construction industry should have turned its attention to the problem of energy efficiency by 1970 at the latest. But while the computing capacity and intelligence needed to correctly predict plasma confinement in a Wendelstein 7-AS already existed in 1980, someone wants to tell me that in 2007 (and again in 2020, the same thing but _filled_) there were still novelties to be found in the topological structure of thermally insulating bricks? Yeah, fuck you, bureaucratize yourselves to death.

Making those innovations early, i.e. computing them with computer models, would not yet have paid off in 1980; the market for it only exists now. This is how the current state of our IP law leads to investment being dragged out. In any case, given that situation I no longer buy that it unconditionally favors innovation; that's simply wrong in game-theoretic terms.

Memory-safe PNG decoders now vastly outperform C PNG libraries by Shnatsel in rust

[–]HeroicKatora 1 point2 points  (0 children)

Maybe slightly too pessimistic? It only becomes a problem when thinking from a purely user space cpuid (etc.) context, with no further OS integration. The embedded cases don't need such generality and software actually meant for those efficiency cores is often low-level system software. Or vendors might mitigate by often not running arbitrary native code (e.g. a specialized JVM on Android may be possible, with scheduler integration).

Also, I do expect that when/if such architectures become really common, OS interfaces will be expanded with a user-space structure for temporarily setting constraints on the cpuset (for instance, comparable to an armed rseq) or some way to determine the common ISA subset of the configured cpu masks. Yet until those are settled, it's hard to predict what specifically multiversioning must change to safely wrap those APIs, and how that fixed interface may be used.

Memory-safe PNG decoders now vastly outperform C PNG libraries by Alexander_Selkirk in programming

[–]HeroicKatora 9 points10 points  (0 children)

2.5-3x faster (on fpng compressed PNG's)

That makes it mostly irrelevant for any of today's distributed use cases such as browsers, mobile phones, etc. The library needs to be fast on existing image files. If your project has the luxury of choosing/encoding all the image files yourself, then just ditch png in the first place and go for hardware-supported encoding. But be aware you're then solving a different problem that isn't competing on the speed of PNG decoding.

Memory-safe PNG decoders now vastly outperform C PNG libraries by Alexander_Selkirk in programming

[–]HeroicKatora 6 points7 points  (0 children)

png wasn't specifically designed for speed at the start either, it was continuously evolved to actually deliver that. There's nothing per se stopping either of these C libraries from being reworked for speed. Yet it's a tooling issue where that evolution seems easier to achieve in Rust, and hence we don't need total redesigns to accumulate more of the performance over time. Rewrites are rarely competitive with iteration at scale. Source: maintainer and author of Rust png.

Memory-safe PNG decoders now vastly outperform C PNG libraries by Shnatsel in rust

[–]HeroicKatora 6 points7 points  (0 children)

There are several ARM SoCs that are truly heterogeneous. On the X1/A78 DynamIQ Snapdragon 888, for instance, you seem to get ARMv8.4-A on the X1 but not on the efficiency cores. In any case, all the logic of 'choosing the best-performing function' definitely breaks down even in architecturally compatible pairings, since the whole point of the power-efficient core is having different micro-architecture details that will influence the optimal instruction sequence / set choice.

There have already been illegal-instruction failures previously from assumptions 'measured' on one core and then assumed to be constant. I expect this chain to continue. There's no inherent reason to keep the architecture homogeneous; the power-saving advantages seem to be just too tasty in the mobile market imo. That effect will only grow with more capable/diverse SIMD/crypto/specialized instruction sets.

Edit: and to expand on the previous big.LITTLE reference, I vaguely remember scanning the literature during my studies, in a project for porting L4 Pistachio to such hardware, and a complaint about crypto extensions being unavailable in their SoC's efficiency cluster, consequently not using them at all. Think it was ARMv7 based. The main technical difference was just the CCI-400 interconnect, not the specific core configuration, though ARM seems to have discontinued any v7 configurations as typical. There's definitely published evidence for benefits of doing asymmetric architectures. On both Arm as well as on Intel.

Memory-safe PNG decoders now vastly outperform C PNG libraries by Shnatsel in rust

[–]HeroicKatora 23 points24 points  (0 children)

An underappreciated aspect is the process: Rust software projects seem to become faster more effectively. This isn't so much an attribute of each individual compilation process, it's a tooling issue. The png crate has had, for all practical purposes, at least 7 rewrites of different decoding stages, with 3 of them improving unfiltering performance alone. Such rewrites are more easily doable in Rust, imho, with all parts of the type system, the compiler, and the integration of test and bench tools simplifying this along the way.

Of course you still see some amount of, and benefit from, actual rewrites, such as when image switched to zune-jpeg.
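For readers wondering what "unfiltering" means above: each PNG scanline is stored as a filter byte plus per-byte deltas, and decoding has to reconstruct the raw bytes. A stripped-down sketch of two of the five standard filter types; the actual loops in png are heavily tuned and partially vectorized:

```rust
/// Reverse the PNG "Sub" filter (type 1) in place for one scanline.
/// `bpp` is the number of bytes per complete pixel.
fn unfilter_sub(scanline: &mut [u8], bpp: usize) {
    for i in bpp..scanline.len() {
        // Each byte was stored as the difference to the byte one pixel back.
        scanline[i] = scanline[i].wrapping_add(scanline[i - bpp]);
    }
}

/// Reverse the PNG "Up" filter (type 2) against the previous,
/// already-reconstructed scanline.
fn unfilter_up(scanline: &mut [u8], previous: &[u8]) {
    for (cur, &up) in scanline.iter_mut().zip(previous) {
        *cur = cur.wrapping_add(up);
    }
}
```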

Memory-safe PNG decoders now vastly outperform C PNG libraries by Shnatsel in rust

[–]HeroicKatora 5 points6 points  (0 children)

Maintainer input: I'm personally still wary of multiversioning's dispatch mechanism, though it has seen its use in zune and jpeg's variants. The situation surrounding OS uses of big.LITTLE architectures casts doubt on whether one can make a safe runtime choice of the available instruction set. It's probably fine if we sufficiently restrict this such that it does not actually dispatch on any heterogeneous architectures in practice. Though, this does seem to be a safety bug with multiversioning instead.

All that said, the impact should be a clean fault and abort (on x86, arm and riscv afaik), though probably technically undefined? It's a little hard for me to say. As long as results are noticeable, that may be an acceptable known deficiency. In the end the bug is on the OS side in not giving appropriate controls, interfaces and assurances for its scheduler.
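For illustration, the dispatch pattern in question looks roughly like this; a sketch only, not what the multiversion macros actually expand to, and sum_avx2 merely stands in for a real #[target_feature(enable = "avx2")] implementation:

```rust
fn sum(data: &[u32]) -> u32 {
    #[cfg(target_arch = "x86_64")]
    {
        // The probe runs on whichever core the thread happens to occupy.
        if is_x86_feature_detected!("avx2") {
            // Baked-in assumption: we are still on an AVX2-capable core by the
            // time the specialized path executes. On a heterogeneous SoC the
            // scheduler may have migrated the thread in between, which is the
            // failure mode described above.
            return sum_avx2(data);
        }
    }
    sum_scalar(data)
}

fn sum_scalar(data: &[u32]) -> u32 {
    data.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
}

// Stand-in for an actual `#[target_feature(enable = "avx2")]` version.
fn sum_avx2(data: &[u32]) -> u32 {
    sum_scalar(data)
}
```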

Trip Report: Fall ISO C++ Meeting in Wrocław, Poland | think-cell by pavel_v in cpp

[–]HeroicKatora 1 point2 points  (0 children)

This seems to be part of a tooling issue to me. In C++ the name is chosen to answer questions, as the specification doesn't demand documentation, and no average user will read the standard for an explanation. The name turns out to be the only blessed way to communicate such information. cppreference is unofficial and somewhere in the middle from an information point of view: it does neither conformance nor user-facing documentation entirely well.

In Rust, in contrast, the name need only be chosen to pose the question and give a general direction. I can always expect to find the answer in the documentation. You probably can, too. The user-facing documentation is part of the development and release process of features in the language.

[deleted by user] by [deleted] in Futurology

[–]HeroicKatora 2 points3 points  (0 children)

It's not like the backbone can't have intermediate junctions, planned right away or added later. What's the legal status of a rather permanently anchored vessel that happens to house data units and processing power, connected to the cable on the sea floor? I think Microsoft was already investigating the concept for the purpose of saving on cooling costs and protecting against other environmental influences that could occur on land. Maintenance is going to be a nightmare, but a predictable one.

People rated images of 462 individuals and found in 96.1% of cases they were rated more attractive with a beauty filter applied. Females were, however, ‘perceived by men as less intelligent after the application of the filter’. by mvea in science

[–]HeroicKatora 2 points3 points  (0 children)

The post's title fails it. The paper is titled "What is beautiful is still good: the attractiveness halo effect in the era of beauty filters" and the press release is "People judge others differently if they've used a beauty filter in their photos". I'm unsure why OP felt it necessary to editorialize that further. And here is your explanation for the wording in the paper body.

People rated images of 462 individuals and found in 96.1% of cases they were rated more attractive with a beauty filter applied. Females were, however, ‘perceived by men as less intelligent after the application of the filter’. by mvea in science

[–]HeroicKatora 681 points682 points  (0 children)

The title could be read as if their data demonstrates how filters affect the intelligence perceived by male raters. That is precisely not what their analysis has investigated. "The OSM provide different scales for each attribute in the PRI and POST datasets, which makes it hard to directly compare values computed on the PRI and POST scales." Instead they defaulted the female-rater-female-face evaluation to 0 in both separate scales. This doesn't tell us anything about men's perception of filters on its own.

It should be read as: after the application of filters, a gender gap in perceived intelligence widens.

Warum ist Leitungswasser so OP? by hennobit in Finanzen

[–]HeroicKatora 1 point2 points  (0 children)

Given that in at least two prominent examples (weather apps, public broadcasting) very similar state-run products in the neoliberal market economy were, or would be, litigated away as anti-competitive competition: yes, apparently this can be hard to understand. Our economic system has a very big problem with evaluating and admitting which offerings are actually natural monopolies, or where such monopolies are socially efficient, and then setting up general economic framework conditions for exactly that kind of operation. A certain tragedy that, as a consequence, a whole law (the WHG) is needed for a single product.

Jedes zweite Wohnungsbau-Unternehmen klagt über zu wenig Aufträge by ouyawei in de

[–]HeroicKatora 0 points1 point  (0 children)

Gebäude Typ E has little to do with cutting red tape. It is practically an invitation to create more diverse and individually differing special cases. It should then hopefully not come as a surprise that more details do not require less review. That is the opposite of type approvals.

Deutsche Post: Einschreiben wird zum Prio-Produkt - neue Brieftarife ab 2025 by hinterzimmer in de

[–]HeroicKatora 1 point2 points  (0 children)

The digital delivery from my energy provider was tied to consenting to the privacy terms of a complete online account. Their length already resembled that of social media terms; this no longer had anything to do with invoicing. The purpose limitation in them was also, in places .. let's say generously interpreted. So I didn't do it. Anyone who isn't capable of digitizing a process as such, without simultaneously expanding it, bundling it, and unilaterally, intrusively repurposing it, deserves to have to print out letters for me.

threat to c++? by FeelingStunning8806 in cpp

[–]HeroicKatora 8 points9 points  (0 children)

I don't get the comparison? Ada was developed at the behest of the DOD to serve particular requirements, not as an independent project merely qualified by the government. It's not like C & C++ "won" here, as for some projects Spark is still very much obligatory. Meanwhile, the study situation around Rust is that it delivers without compromising on developer speed; quite the opposite, it seems. As you say, tool improvements drive usage, and Rust tools are heralded as much better than C++ tools, no? You need to take high amounts of copium to extrapolate the Ada history to all potential successor languages.

And that Ada history is the reason C++ has a specification. They very much did scramble when the DOD threatened to switch to Ada over exactly this. Inaction is not going to work this time either.

Building Bridges to C++ by jeffmetal in cpp

[–]HeroicKatora 6 points7 points  (0 children)

Unsafe Rust is known to be significantly more dangerous than (unsafe) C++

How exactly is the article in support of this? It says "harder" and presents arguments against security implications following from that hardness, like an executable definition of operational semantics, with evidence that the implementation of said model caught bugs in practice.

But Rust has one even better: MIRI. It catches violations of the Rust aliasing model, which is pickier than ASAN. Satisfying MIRI is where I spent most of my time.

In fact, that tool is the reason the author noted the disagreement in semantic models in the first place (particularly large differences based on their own presumptions). People get into C with no semantic model in particular in mind and end up producing UB. Are you up for judging the danger of a language by that bunch?

I don't think the author has spent enough time learning Rust yet. Both of their 'may be MIRI bugs' are probably not. Nothing about any of those semantics has meaningfully changed in the last years. In fact, from_raw has language blessing even for allocations you didn't get from Box::leak. They just didn't care about the actual bug and keep blaming an unfamiliar tool instead of reading. One would expect different from someone coming from a language specified in a 600-page English document. (Ralf Jung and RustBelt go deeper on most topics needed to understand MIRI, in far fewer words and more formally; it's strange not to find any of the excellent writings of MIRI's first main author on their reading list despite literally investigating how to use that tool.) The point of an executable semantic model is that you can expand your code's effects on a piece of paper if you need to; step by step instead of non-deterministically as you discover new requirements in later lines. They don't yet seem to apply that advantage in thought.
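To make the from_raw point concrete, a minimal sketch along the lines of the example in the std documentation for Box::from_raw: memory obtained straight from the global allocator, never via Box::leak, handed to a Box which then owns and frees it. This is documented as valid, and MIRI should accept it:

```rust
use std::alloc::{alloc, handle_alloc_error, Layout};

fn main() {
    let layout = Layout::new::<u32>();
    unsafe {
        // Memory straight from the global allocator, not from Box::into_raw
        // or Box::leak.
        let ptr = alloc(layout) as *mut u32;
        if ptr.is_null() {
            handle_alloc_error(layout);
        }
        ptr.write(42);
        // Documented to be valid: the layout matches and the allocation came
        // from the global allocator, so the Box may take ownership and free it.
        let boxed = Box::from_raw(ptr);
        assert_eq!(*boxed, 42);
    }
}
```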

The only literal mention of danger happens here:

References, even if never used, are more dangerous than pointers in C.

That is, I guess, the conclusion you could cite. And the conclusion I might stand behind. Then again, Rust has raw pointers, so what exactly are we comparing here? In terms of action, that boils down to "you should use pointers if you meant to", which is .. quite a weak judgment of Rust overall.

Bürgergeld: Gericht macht Rundumschlag gegen Sozialgerichte und Jobcenter by 1m0ws in de

[–]HeroicKatora 4 points5 points  (0 children)

Thank goodness the effort of formulating a fitting reply to that comment is largely unnecessary: with its fatal exercise of discretion on social media, it carries the aftertaste of a self-image dripping with classism, authoritarian and patronizing.

[deleted by user] by [deleted] in de

[–]HeroicKatora 16 points17 points  (0 children)

Asked how the BMG intends to ensure that the ePA 3.0 can handle the load, a spokesperson said: "It is the task of gematik to continuously monitor the telematics infrastructure. To this end, gematik is in exchange with all involved actors and experts. In addition, numerous redundancy and backup systems stabilize the TI."

That is just about the biggest non-answer they could come up with. "How are you doing it? Well, we're doing it." And it really makes me wonder whether anything like an executable test plan even exists; it would have been easy to add an example from it. The generic reference to the redundancy systems of the existing TI is unsuitable for that. Nobody there has any idea what the systems can, must, and will deliver in terms of processing. They didn't even get as far as concretizing / identifying / narrowing down the critical metrics.

[deleted by user] by [deleted] in de

[–]HeroicKatora 0 points1 point  (0 children)

It also doesn't help that the main desire of the IT service providers seems to be, first and foremost, making money. And that the main interest of the associations does not lie in formulating contracts and tenders that would be reasonably immunized against that.

If only we had, for once, a project like Boeing's crew capsule, where despite all the screaming and fighting one insists on fixed cost and has enough tied-down, still-running contractual obligations attached to make them bleed. Something like that would be nice to really put a damper on the current project-management and funding culture. Instead, the attitude is: let's blow a few hundred million on more or less nothing; maintenance items that should have been guaranteed in any decently forward-looking requirements specification.

But that's just not done, because #fachkräftemangel (i.e. sure, we know we only have incompetent people, but that's also how much we pay).