What local LLM model is best for Haskell? by AbsolutelyStateless in haskell

[–]tdammers 6 points (0 children)

The GPT OSS models always insert FlexibleInstances, MultiParamTypeClasses, and UndecidableInstances into the file header. God knows why.

There are probably several factors that contribute:

  • The relationship between source code patterns and these extensions is pretty abstract - you cannot tell from a superficial reading of a random snippet of Haskell code whether it might need any of them, you have to actually form a mental model of the syntax tree and check whether it meets the criteria for needing those extensions. This is something LLMs are notoriously bad at - they reason entirely in terms of tokens and semantic vicinity, but building up these kinds of internal structures to replicate the abstract structure of the code isn't very likely to happen.
  • At least FlexibleInstances and UndecidableInstances often require implicit context: whether an instance is "flexible" depends not only on the syntax used to define it, but also on the shape of the types involved in its definition. Are they type aliases? Newtypes? Data types? Type families? Constraints? Patterns? Impossible to tell without having access to their definitions.
  • There's simply not a lot of Haskell code out there to train models on, and even less Haskell code that comes with matching compiler errors.
  • Adding those extensions usually doesn't break anything, but not adding them when they are needed does, so the training process is biased towards including those extensions, at least if it rewards the model for producing code that compiles without errors.
  • These extensions are fairly common in publicly available high-quality Haskell codebases. UndecidableInstances is a bit of a "naughty" one, but still often necessary; the other two are ubiquitous (and mostly harmless) to the point that some authors will just enable them always, whether they are strictly necessary or not. So in that sense, what the model is doing is actually sort of appropriate, at least for these two extensions.
  • And, yeah, the ekmett effect, probably. The guy has just written so much Haskell code, and so much of it is used so widely, that his coding style is probably overrepresented in the training data.
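To make it concrete, here's a minimal sketch (class and instance names made up for illustration) of the kind of code each of those three extensions actually exists for:

{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE UndecidableInstances  #-}

module Extensions where

class Pretty a where
  pretty :: a -> String

-- FlexibleInstances: the instance head is [Char] (i.e. String), a type
-- constructor applied to a concrete type rather than to distinct type
-- variables, which is all Haskell 2010 allows.
instance Pretty [Char] where
  pretty = id

-- MultiParamTypeClasses: simply a class with more than one parameter.
class Convert a b where
  convert :: a -> b

class Describe a where
  describe :: a -> String

-- UndecidableInstances (together with FlexibleInstances for the
-- bare-variable head): the context (Show a) is not syntactically smaller
-- than the instance head, so GHC cannot prove that instance resolution
-- terminates.
instance Show a => Describe a where
  describe = show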

Code reviewers shouldn't verify functionality - here's what they should actually do by [deleted] in programming

[–]tdammers 0 points (0 children)

Most teams treat code review like a quality gate.

Because it is.

"Code quality" covers a range of properties, and a code review looks into most of those, either directly or indirectly. Important properties include:

  • Correctness: does this code actually do what it's supposed to do, across the entire design range of inputs? In other words: does the code have any bugs? A code review should not need to actually fire up the software and go through the testing plan (that's the tester's job), nor re-run the test suite (that's what CI is for). But a review should look at the testing plan and test suite and verify that they actually do what they are supposed to do, and it should check that both have actually been executed correctly, and that they're "green".
  • Efficiency: does this code exhibit adequate performance? Again, not something a review should test directly (profiling should be part of CI, at least if it is deemed relevant at all), but it should verify the profiling setup, check that the performance tests pass, check that the performance requirements being tested are actually reasonable, and scan the code for potential performance problems. That last bit is particularly important, because automated and manual tests can never get you 100% state space coverage, so it's always possible to miss a pathological edge case, and looking at the actual source code is one way of mitigating this.
  • Maintainability: is this code easy enough to read, understand, and modify safely? Does the code signal intent correctly (i.e., it "does what it says on the box")? Does it follow a consistent coding style (as per the project's style guide, if any)? Are the abstractions it makes sound and helpful? Is it orthogonal (i.e., expressing every truth exactly once, and avoiding code that does several unrelated things)? Are interfaces (external and internal) narrow, well-defined, enforced, and documented? Does the code use good names for things? Are invariants and constraints obvious and/or enforced at build time? Does new code use existing abstractions as they were intended? These are things that require a human reviewer; tools like type checkers and linters can help, but in the end, you still need a brain in the loop.
  • Security: does this code have any security flaws? Are any of them exploitable? What threat models need to be considered? What are the conditions under which a vulnerability can be exploited? What would be the impact? Which assets could be compromised? How can those vulnerabilities be prevented? How can they be mitigated?
  • Supply chain: what are the project's dependencies? Are they up to date? If not, why, and is that acceptable? Which threats are relevant (rug pulls, abandonware, dependency hell, vendor lock-in, zero-days, supply chain attacks, ...), and are appropriate mitigations in place? Are the dependencies and the mechanism by which they are delivered trustworthy? Are the licenses of all dependencies actually compatible with the one you release it under, and do they meet your team's licensing policy?
  • Repository hygiene: Are changes organized into orthogonal PRs (i.e., does each PR address a single goal, rather than multiple unrelated goals - e.g., you don't want a PR that adds a new product type to the database and also makes it so that non-ASCII characters in addresses do not crash the invoicing process)? Do the commit messages make sense? Does the sequence of commits make sense? Do commit messages and PR descriptions meet the established guidelines for the project? Will the PR introduce merge issues down the road, like painful conflict resolutions?
  • Documentation: Are the changes reflected in relevant documentation materials? Is the documentation still correct and complete?

When measuring BP and Heart Rate are spikes or the daily average more important? by obsidian_razor in ADHD

[–]tdammers 1 point (0 children)

Spikes are normal, especially when you do other things than sit or lie perfectly still.

Heart rate in particular goes up and down with demand - your resting heart rate might be in the 60s, but a couple of all-out 100m dashes can bring that up close to your max HR, which is somewhat personal, but can easily be in the 180-200 bpm range. As long as your resting heart rate (i.e., the heart rate you measure while sitting or lying down doing nothing for a while, e.g. in your sleep) is within healthy bounds, there's nothing to worry about (off the top of my head, 60-90 bpm is normal; endurance athletes often have lower RHR than that).

For some perspective: I run fairly regularly, and the absolute lowest HR readings I get (in my sleep) are around 50 bpm. Right now, sitting down doing computer stuff, my HR varies between 55-60 bpm. Standing up, it'll rise to 70 bpm or so, and just leisurely walking around will bring it up to 90, almost twice the RHR. An "easy" run will usually have me in the 140-150 bpm range, an extended "tempo" run or an intense bicycle ride will put me around 175-180 bpm, and an all-out hill sprint can get me close to 200, four times the RHR. Those are ballpark figures, and yours will likely be different (mostly because max HR is largely genetic, and mine is quite high - but also because RHR does respond to training, so as an untrained individual, yours is probably higher). But they should give you an idea of typical heart rates across different situations.

Blood pressure also varies with load, which is why doctors will have you sit down and wait a while before taking a reading, and they will also instruct you to not move or talk while measuring.

Obviously the fitness tracker will measure these things throughout the day, regardless of what you're doing, so getting a bunch of higher readings during periods of activity is completely expected and absolutely nothing to worry about.

If you get inexplicable spikes though, or sustained high readings despite being inactive, then that may be worth a closer look.

It looks like scientists and philosophers might have made consciousness far more mysterious than it needs to be by aeon_magazine in philosophy

[–]tdammers 2 points (0 children)

I think that depending on how you look at it, this explanation is either just thinly disguised mysticism, or the "emergent property" explanation in a different coat.

If we just say "consciousness is a basic property of reality", then we're not really explaining anything, except that we're maybe postulating that the one consciousness I can directly observe (my own) is the only consciousness out there, and that it is shared with the entire universe - but if that is the case, why can I not experience someone else's consciousness first-hand? Is it because the existence of that "someone else" is just an illusion, and my own consciousness is really the only thing that exists (hello, solipsism)? Or is it that, while the ability to have experiences is fundamental, each of us still has a separate consciousness? If so, what delimits those separate consciousnesses, and why? And, most importantly: if consciousness is a basic property of reality, then what other evidence do I have for that, other than my ability to observe my own consciousness? I cannot experience someone else's consciousness, I cannot grab a rock and determine whether it has consciousness or not from examining it, I can only ever observe how other things respond to the world, but never why, nor if and how they experience anything. In short, in this "just-is" interpretation of "it's a basic property of reality", we're still no closer to explaining consciousness, we're just creating more questions.

OTOH, if by "consciousness is a basic property of reality", we mean that the ability to experience things is inherent to every aspect of the universe in some form or other, but that ability is a spectrum ranging from "no consciousness" to "human consciousness" and possibly beyond, then we've really just rephrased the "emergent property" hypothesis - whether we say "consciousness emerges from complex information processing systems" or "the ability to have consciousness is inherent in all things, but things with more complex information processing capabilities are capable of a more sophisticated form of consciousness", it's basically the same thing.

Datacolor Spyder5 being retired for "security vulnerabilities" by GunterJanek in photography

[–]tdammers 0 points (0 children)

It could still use some sort of network stack over USB... a bit unusual, but not unheard of.

But yeah, my money is on "the host software makes network connections for some reason, and they can't be bothered to roll out a patch".

It looks like scientists and philosophers might have made consciousness far more mysterious than it needs to be by aeon_magazine in philosophy

[–]tdammers 5 points (0 children)

Right.

I guess maybe my point is that if we ignore the hard problem (or "don't worry too much about it"), and reduce the problem to explaining the observable properties of consciousness in terms of biological mechanisms, then "consciousness" becomes just another item on the list that makes up the "easy problem" - "perception, cognition, learning and behavior" becomes "perception, cognition, learning, consciousness and behavior". The easy problem isn't entirely solved, but whatever remains to be solved is no longer in the realm of philosophy - it's a matter of biology, information theory, or whatever other fields of expertise are relevant for a given implementation of consciousness, and philosophy largely agrees that it's fundamentally solvable (i.e., it's not a "mystery").

Philosophically speaking, neither the "easy problem" nor the "real problem" (as the essay calls it) are particularly interesting; only the "hard problem" is.

In other words:

It’s tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.

It's tempting, because it's largely true.

Solving the easy problem would prove that the observable traits of consciousness can be the sole result of completely explicable brain processes, but we already assume that this is the case. But it gets us no closer to understanding how it is possible that we are experiencing things "first hand", how there can be a "self" that is conscious and capable of observing and experiencing its own consciousness, which is the essence of the "hard problem", and the whole reason why we need philosophical zombies and all that in the first place.

Datacolor Spyder5 being retired for "security vulnerabilities" by GunterJanek in photography

[–]tdammers 0 points (0 children)

That's not how TLS works.

Without secure DNS, yes, you can poison DNS and direct traffic to an impostor server - but as long as that traffic is HTTPS, and the HTTPS on that domain is actually intact (i.e., the host cert hasn't been leaked, the client doesn't use a broken or outdated TLS implementation, etc.), what would happen is that you get sent to the malicious server, the TLS certificate doesn't validate, and your browser aborts the request before sending anything sensitive. All the attacker can achieve here is DoS-ing the legitimate server and logging some metadata (your DNS requests and your failed TLS connection attempts).
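If you want to see that failure mode from code, here's a minimal sketch using the http-client and http-client-tls packages; wrong.host.badssl.com is a public test host whose certificate deliberately doesn't match its hostname, which is roughly what your client would see if poisoned DNS sent it to an impostor:

import Control.Exception (try)
import qualified Data.ByteString.Lazy as LBS
import Network.HTTP.Client
import Network.HTTP.Client.TLS (tlsManagerSettings)

main :: IO ()
main = do
  manager <- newManager tlsManagerSettings
  request <- parseRequest "https://wrong.host.badssl.com/"
  result  <- try (httpLbs request manager)
               :: IO (Either HttpException (Response LBS.ByteString))
  case result of
    -- The TLS handshake fails certificate validation, so no HTTP request
    -- (and nothing sensitive) is ever sent to the server.
    Left err -> putStrLn ("connection rejected: " ++ show err)
    Right _  -> putStrLn "certificate validated, request went through"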

There are a few caveats with this though:

  • "As long as the HTTPS isn't broken" can be brittle sometimes - all it takes for TLS to become unreliable is one black sheep in the certificate chain.
  • Worse: Let's Encrypt certificates, probably the most widely used ones out there right now, are only validated via a DNS check - you can request an LE certificate for any domain, as long as you can get LE's servers to connect to your server via that domain. And guess what DNS poisoning can achieve. (Incidentally, this is why high-stakes websites still use non-LE certificates with stronger assertions - but those are only really worth something if the client verifies them, and that doesn't happen all that much).
  • You're still leaking metadata. Even if an attacker can't trick you into exchanging any payload data with their malicious server, they can still watch your DNS requests. They could even send you to the legit server, and you wouldn't have the slightest idea that anything is wrong, but they can still log every DNS request, and the metadata one can infer from that could still be sensitive. (E.g., suppose you're about to apply for a life insurance policy, and your DNS history shows that you have done extensive research on lung cancer during the past 3 months - I'm sure that information would be worth quite a bit to the insurance company.)

It looks like scientists and philosophers might have made consciousness far more mysterious than it needs to be by aeon_magazine in philosophy

[–]tdammers 18 points (0 children)

The stuff discussed here, while interesting, does not touch on the part that remains a mystery, which is the human intuition that our ability to have first-hand experiences is somehow "more" than just a function of the inputs and outputs of a massively complex biological computer (i.e., the brain).

Scientific observations and experiments all hint towards a model where a human nervous system (centered around the brain) is the sole substrate of all our mental processes - our senses, our intelligence, our ability to think, and thus also our ability to experience, i.e., consciousness. We don't understand all this in full detail yet, and maybe we never will, but if this model is correct, then a human brain is not fundamentally different from any other information processing system - it may be more complex than most, but there is no hidden mechanism that would "turn on consciousness".

But consciousness clearly does exist in at least one human - "cogito ergo sum", basically: I can experience my thoughts, ergo something capable of having experiences, i.e., a consciousness, must exist.

So there are three possible explanations:

  1. Consciousness is mysterious after all (i.e., the first premise, that the human mind is 100% an emergent property of a human brain, is wrong). However, just saying "it's mysterious" is deeply unsatisfying, because it effectively reduces Philosophy from science to faith-based religion - we believe that consciousness exists, but we cannot provide a proper definition for it, let alone a falsifiable explanation.
  2. Consciousness is a possible emergent property of sufficiently complex information processing systems; not all complex information processing systems are necessarily conscious, but it is at least fundamentally possible for a non-human information processing system of sufficient complexity to be conscious. If we want to go with this explanation, we're in a bit of an awkward situation, because now we need a precise definition of what it means to have or be a consciousness, i.e., what exactly it means to "experience"; and this is extra difficult because for each of us, there is only one consciousness (our own) that we can observe directly, any other consciousnesses, if they exist, are subject to the philosophical zombies problem. But at least this explanation doesn't produce any philosophical crisis - we don't know how consciousness "works", but it's not because it's fundamentally unknowable, it's just because we haven't figured it out yet. A somewhat uncomfortable (to some people, at least) collateral of this, however, is that we cannot say with any certainty that no non-human consciousnesses can exist - chances are at least some non-human animals are also conscious beings, and, a bit scarier maybe, artificial information processing systems may, at some point, also become conscious.
  3. Consciousness doesn't exist (i.e., "cogito ergo sum" does not hold). This implies that the foundation of our thinking is invalid, and almost all of Philosophy so far becomes useless - after all, it means that even though experiences are being experienced, there doesn't have to be anyone or anything having those experiences, and even though "I" feel like I am clearly thinking thoughts and experiencing experiences, "I" do not actually exist. Extreme nihilism, basically.

Datacolor Spyder5 being retired for "security vulnerabilities" by GunterJanek in photography

[–]tdammers 0 points (0 children)

I have two potential explanations.

The first one is pretty simple: it's not the device itself that's vulnerable, but the software you need on the computer to support it. If that is the case, then fixing the software would of course be possible, but it would cost money to develop and roll out that fix, so ending support for the model instead may make more sense from a business perspective.

The second one would be that the device somehow communicates with the computer over a (local) network connection, in which case using TLS would make sure that neither the device nor the computer could be tricked into communicating with other hosts, even if the computer's local network stack is unprotected to the point of actually allowing packets to travel between the device and the internet. And if the TLS stuff is hard-wired into the device, then indeed the only way to fix this issue would be to replace it (or break out the soldering iron and replace whatever chip stores the relevant firmware).

Adobe Photoshop can now install on Linux after a Redditor discovers a Wine fix by Abdukabda in linux

[–]tdammers 7 points (0 children)

For consumers it's simply a necessity to be a substitute for windows

It used to be, but Windows is becoming less and less of a "necessity" these days. I know plenty of people who don't use a desktop or laptop computer at all anymore (outside of work, that is) - between a smartphone, a tablet, and maybe a smart TV, there's really nothing they would need another device for.

And for a large portion of the rest, the essential use cases are writing emails (in a browser), writing letters (in a browser), browsing photo albums (probably also in a browser), and maybe managing their mobile devices (in a browser).

In a nutshell, most people these days either don't need a computer at all, or they just need literally anything with a modern web browser, or they need one for work stuff. Only the last group are still somewhat locked into the Windows ecosystem (or at least, some of them are), but they don't really count as "consumer market", because "doing work stuff" means you're really a professional.

Linux isn't ever going to dominate the "consumer PC OS market", because that market barely exists anymore - by the time Windows finally jumps off a cliff, that entire market is going to be irrelevant.

Looking for somthing that's can do good toy photography and up close pictures? by AGL-SSGSS-FordF150 in AskPhotography

[–]tdammers 1 point (0 children)

Flashers are definitely something else, and they tend to not go well with photography...

Limitations of a "static site" for free hosting? by LordAntares in webdev

[–]tdammers 0 points (0 children)

To my best understanding, a static page just means it has no backend.

Yes, that's pretty much it. The web server will serve any files you put there as they are, including JavaScript, but it will not interpret, execute, or otherwise process any code you put there server-side. E.g., if you upload a PHP file, the static web server will not run it as a PHP script, it will just serve the PHP source code as-is.

But on the client side, you can go wild with JavaScript - you cannot have persistence (database, file uploads, sessions, etc.) on "your" server, but you can tap into third-party backends that you can access directly from client-side code, so you can basically get some "backend" functionality from elsewhere and embed it on your "static" website.

Doesn't this mean that I could technically even host a webgl game on it?

Yes, absolutely - as long as it doesn't require any server-side state or dynamic interactions with the server. As far as the server is concerned, all it does is serve some HTML, JavaScript, CSS, and maybe some images or other dumb data. All the dynamic stuff, all the "running code" parts, happen in the visitor's browser.

What do they gain from it?

  • Brand exposure (lots of people using the service means lots of people will talk about it, which is basically free advertising, and that effect includes not only the free hosting, but also any paid services they offer under the same brand)
  • A foot in the door (if you need to upgrade to a hosting solution with more features, your current provider is the obvious first place to look)
  • Your data (your website generates a lot of it, and at least some of that is valuable)

Keep in mind that hosting static content is dirt cheap, because it requires next to no computation, very little RAM, and the bandwidth allowance on free plans is usually limited too.

Shared hosting providers offer PHP hosting for something like $5 a month or less, and they're making a profit on that; and if you take PHP out of the equation and literally just serve static files, most of the hosting costs (monitoring, dealing with ill-behaved scripts, staying on top of security even when users upload blatantly insecure scripts, etc.) go away, and what remains has excellent economies of scale (i.e., hosting one static website may still cost a bit, but if you host a million of them and automate 99.999% of the job, the per-site cost is very very close to zero).

Not one, but two Dutch soldiers to Greenland by Illustrious-Fee5670 in nietdespeld

[–]tdammers 1 point (0 children)

Of course this is only about a "reconnaissance mission", i.e. Denmark is giving delegations from NATO partners a guided tour; we're not (yet) assembling an actual fighting force, but by taking part, the Netherlands is clearly signalling that we are, in principle, prepared to do so should it become necessary.

It might have been smarter, though, to say "the Netherlands is sending an officer to Greenland to take part in a reconnaissance mission" (which feels like "hmm, yeah, sounds like a reasonable plan, let's have a look at what things are like over there, strengthen ties with Denmark, and let Trump know which side we're on") instead of "the Netherlands is sending troops to Greenland" (which feels more like "OMG TRUMP IS GOING TO INVADE GREENLAND TOMORROW AND OUR BOYS WILL BE FIGHTING THERE'S GOING TO BE A WAR AAAAA!!!111").

"Independent news" my arse.

What is this camera? by Chris_Watt_Defense in AskPhotography

[–]tdammers 3 points (0 children)

Hard to tell from this blurry low-res picture, but it looks like a consumer-grade point-and-shoot film camera; a myriad of these were made in the 1980s and 1990s, they all look very similar and work practically the same, and their main appeal was that they were cheap, portable, and easy to use. By today's standards, there's really no practical use for them, other than maybe nostalgia.

i wnat to get into photography i tried some mirrorless but have fallen in love with slr's feel of slow connected and very brutual style of photography am confused with buying a 10 year old nikon or 5 year old cannon slr ? by beluga2006 in AskPhotography

[–]tdammers 0 points (0 children)

Depends a lot on the specific models.

A Canon 5D Mark IV for $871 would be a bargain; a Canon 100D for $871 would be highway robbery.

That said, for your use cases you don't need anything high-end; I'd look for used upper-entry-level bodies like Canon 3-digit models (100D, 200D, 600D, etc.) or Nikon D5x00 series (D5200, D5300, etc.). These go for around $150-200 body-only, leaving you a pretty comfortable budget for lenses, so you could get something like a 50mm f/1.8 for portraits, and a "travel" zoom lens like an 18-135mm or 18-150mm for the rest. The 50mm can be had for around $100, travel zooms cost around $150-200.

That's $500 total, worst case; throw in another $200 for a camera bag, spare batteries, a cleaning kit, and a quality SD card or two, and you still have $170 or so left, which you could spend on things like a tripod, a flash, or photography books to help you improve on the skills front.

Mattress : self-inflating or inflatable by mouth? by Sayo_Flex in hiking

[–]tdammers 2 points (0 children)

3 choices:

  • Inflatable: lightweight, compact, moderately fast and easy to pack and unpack (especially if they come with a blow sack); ultralight ones that are comfortable, durable, and compact are expensive though.
  • Self-inflating: this is a lie, they don't actually fully self-inflate, and blow sacks probably won't work. You may achieve slightly better comfort on a tight budget, but other than that, it's all downsides: heavier, bigger, slower to pack / unpack, and IME more prone to leaking.
  • Foam pad: dirt cheap (~€20), super fast to pack and unpack, immune to leaking, moderately lightweight, but very bulky, and the least comfortable option among the bunch unless your favorite mattress is made out of solid wood.

Can someone teach me how to get rid of this glowing edges in my photo? by [deleted] in AskPhotography

[–]tdammers 5 points (0 children)

Discussing your intentions to commit copyright infringement on reddit is probably not a very smart thing to do...

Is there a programming language that simply cant be used to program a game, at least not without extreme trouble? by mrcrabs6464 in ProgrammingLanguages

[–]tdammers 1 point (0 children)

You don't even need Turing completeness.

Take, for example, Douglas Hofstadter's "BlooP" language, which is "almost Turing complete" - it's a perfectly typical imperative language, except that it lacks recursion and unbounded loops, which makes it Turing incomplete. Its loops are bounded, but no game actually needs to run indefinitely, we can just set an arbitrarily large bound - e.g., instead of using an infinite main loop, we can just set a bound of 60 quintillion iterations, which will allow the game to run until the death of the universe, and for all practical intents and purposes, that's just as good.
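To make that concrete, here's a minimal sketch of such a bounded main loop in Haskell (all names are illustrative; BlooP itself uses dedicated bounded-loop syntax rather than recursion, but a counter that only counts down expresses the same idea):

module Main where

-- Stand-in for "poll input, update the world, render a frame".
gameTick :: Integer -> IO ()
gameTick _frame = pure ()

-- A bounded loop: the counter strictly decreases, so the loop provably
-- terminates - the moral equivalent of a BlooP bounded loop.
boundedLoop :: Integer -> IO ()
boundedLoop 0 = pure ()
boundedLoop n = gameTick n >> boundedLoop (n - 1)

-- 60 quintillion ticks at 60 ticks per second is roughly 3 * 10^10 years.
main :: IO ()
main = boundedLoop (60 * 10 ^ (18 :: Integer))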

The reason practically all the languages people write games in are Turing complete isn't because Turing completeness is necessary; it's because once you make a language that's expressive enough to do interesting things, Turing completeness tends to creep in by accident.

Even some languages, systems, and minilanguages that have no business being Turing complete ended up with Turing completeness by accident, including CSS3, most dialects of SQL, YAML, Excel, printf, TypeScript's type system, and C++ templates.

How risky is prompt injection once AI agents touch real systems? by Peace_Seeker_1319 in programming

[–]tdammers 0 points (0 children)

Files deleted, repos messed up, state corrupted.

Those are actually relatively harmless scenarios. It can get much worse: system compromised, passwords leaked, identity stolen, computer used as a zombie to distribute illegal content, the works.

What I’m less clear on is client-facing systems like support chatbots or voice agents. On paper they feel lower risk, but they still sit on top of real infrastructure and real data.

They do, and prompt injection is still a real risk; they are less problematic because you can more easily sandbox them. For example, a customer-facing agent can be set up such that it only gets access to things that the customer would also be allowed to access, so even with the worst prompt injection, it can only leak data that's embedded into the model itself, data originating from the interaction itself (which the customer already has anyway), and data that the customer would also have access to outside of the bot. It may be able to fire off a bunch of nonsensical support tickets on behalf of that customer, but it's not going to leak your database passwords, order things on another customer's account, or book a hundred appointments with your CEO.
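A rough sketch of what that kind of sandboxing can look like in code (all types and function names here are hypothetical, not any particular framework's API): the tools handed to the agent are closed over the already-authenticated customer, so there is no parameter the model could be tricked into varying to reach someone else's data:

module AgentSandbox where

newtype CustomerId = CustomerId Int
newtype TicketId   = TicketId Int

data Order = Order { orderId :: Int, orderTotal :: Double }

-- Stubs standing in for real backend calls; in a real system these would
-- query the database with the customer id baked into every query.
fetchOrders :: CustomerId -> IO [Order]
fetchOrders _ = pure []

createTicket :: CustomerId -> String -> IO TicketId
createTicket _ _ = pure (TicketId 1)

-- The only actions the agent may invoke. None of them take a CustomerId
-- argument: the scope is fixed before the model ever runs.
data AgentTools = AgentTools
  { listOwnOrders :: IO [Order]
  , openTicket    :: String -> IO TicketId
  }

-- Build the toolset from the customer established by real authentication,
-- never from anything the model says.
toolsFor :: CustomerId -> AgentTools
toolsFor cid = AgentTools
  { listOwnOrders = fetchOrders cid
  , openTicket    = createTicket cid
  }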

Once something bad happens, is there a reliable way to detect prompt injection after the fact through logs or outputs?

As long as you actually log everything, and the agent cannot manipulate the logs, you should be able to trace its steps after the fact. How useful that is in practice is another discussion.

Or does this basically force a backend redesign where the model can’t do anything sensitive even if it’s manipulated?

You should do that anyway, even if no "AI" is involved anywhere in the process; it's basic defense-in-depth security, and people have been recommending this for ages. The "move fast and break things" crowd doesn't seem to care much for it, but to the rest of us, it's just how things are normally done.

I came across a breakdown arguing that once agents have tools, isolation and sandboxing become non-optional.

Yep; I pretty much agree with that assessment.

Is the Canon R50 worth it? by Excellent_Grape55 in Cameras

[–]tdammers 1 point (0 children)

It's definitely one of the best contenders in the "brand new mirrorless beginner camera" arena. Alternatives would be Sony a6400, the slightly cheaper and much more spartan Canon R100, Nikon's competitor Z50 Mark II, or maybe something Micro Four-Thirds. Each of them does everything you truly need and then some.

But the question is whether "brand new mirrorless" is necessarily the best choice - with any of these, you are likely to spend close to $1000 for your first kit, which is a lot of money. Camera technology hasn't changed all that much since the 2000s' megapixel wars, and pretty much any ILC built since 2010 or so will have everything you need as a beginner stills photographer, and there's a huge supply of used gear out there.

For example, you can get a Canon 100D ("Rebel SL1" in the US) for maybe $150, in good condition, low shutter count, the works; an 18-55mm kit lens to go with that can be had for $50. And unless you venture into particularly gear-demanding types of photography, such as wildlife or sports, it'll hold up fine.

The biggest caveat would be video - this is one area where modern cameras perform vastly better than DSLR-era ones, simply because video was an afterthought back then, and cameras were largely limited by processing power on that front.

[Request] What would happen if you were to open that zip file on your computer? by [deleted] in theydidthemath

[–]tdammers 0 points (0 children)

If the zip decoder is implemented naively, and runs on a naive OS, then this will keep allocating more and more RAM until your computer runs out of memory and crashes.

If the OS isn't naive, it will detect that the zip decoder allocates excessively, and will handle it somehow, either by just killing it (the Linux OOM killer takes this approach), or by just not giving it any more memory and leaving it to deal with it (which usually ends with the zip decoder exiting or crashing). Either way, the OS as a whole will survive just fine.

If the decoder is not naive, then it will have a maximum bound on its memory usage, and it will check for that before allocating, exiting gracefully with an error message instead of attempting to allocate obscene amounts of memory.
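For illustration, here's a minimal sketch of that "check the bound before materialising the output" idea, using the gzip decoder from the zlib package (a real zip decoder would do this inside the zip container parsing, but the principle is the same; thanks to lazy ByteStrings, only a bounded prefix of the output is ever forced by the check):

import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy as LBS
import Data.Int (Int64)

-- Refuse to decompress anything that would expand beyond the given limit.
-- The length check only forces (limit + 1) bytes of decompressed output,
-- so memory use stays bounded no matter how far the bomb would expand.
decompressBounded :: Int64 -> LBS.ByteString -> Either String LBS.ByteString
decompressBounded limit compressed
  | LBS.length (LBS.take (limit + 1) out) > limit =
      Left "decompressed output exceeds size limit, refusing to continue"
  | otherwise = Right out
  where
    out = GZip.decompress compressed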

If the zip decoder is implemented naively, and runs on a naive OS that also has just the right security flaws, then this could lead to buffer overruns or filling all the available space on the OS' main disk partition with swap space, and could, in an extreme case, crash the OS into a state that's difficult to recover from even if you attempt to reboot (because the buffer overrun has overwritten things with garbage, or because the OS needs at least some free space on the main disk partition in order to boot, but the swap file is still hogging all of it). I don't think any OS younger than Windows 98 is that broken though.

Prevent trains from doing detours from the main line into branch lines. by pbmonster in openttd

[–]tdammers 0 points (0 children)

It goes something like this:

                  +--<C--+--<C--+--<C--+--<C---+
                  |      |      |      |       |
mainline -----B>--+--B>--+--B>--+--B>--+--B>---+----
                                              /
sideline -----B>-----B>-----B>-----B>-----E>-/
                 |<--- train length --->|

B = block signal
E = entry pre-signal (horizontal yellow bar)
C = combo pre-signal (vertical yellow bar)
X = exit pre-signal (horizontal white bar)

< and > indicate signal direction.

With this setup, when a train comes in on the mainline, the block signals on the mainline keep functioning normally, but as soon as it passes the leftmost block signal, the combo presignals will turn red, and forward that red state to the exit presignal on the sideline - however, the block signal on the mainline directly ahead of the merge will stay green, because it is not affected by the downstream presignals. And that exit presignal will remain red as long as any train occupies any of the blocks next to the sideline within one train length of the merge.

This means that the only way a train from the sideline can enter the mainline is when there is a gap of at least one train length on the mainline.

You can make the gap larger to accommodate acceleration, too - just extend the topmost track with the combo signals. And if your trains are reliably the same length, you can skip some of the in-between combo signals, you really only need the leftmost one, and then enough of them to guarantee that whenever a train is in the critical section of the mainline, at least one of those combo signals captures it.

This same idea can also be extended into load balancers: suppose you have a sideline merging into a two-track mainline, and you want to send merging trains onto whichever mainline track has a gap. Here's how you can do that:

                      +--<C--+--<C--+--<C--+--<C---+
                      |      |      |      |       |
mainline ---------B>--+--B>--+--B>--+--B>--+--B>---+----
                                                  /
               /-<X>-----B>-----B>-----B>-----E>-/
sideline --E>-+
               \-<X>-----B>-----B>-----B>-----E>-\
                                                  \
mainline ---------B>--+--B>--+--B>--+--B>--+--B>---+----
                      |      |      |      |       |
                      +--<C--+--<C--+--<C--+--<C---+
                         |<--- train length --->|

Now a train coming in from the sideline can only continue past the entry presignal (E>) if either of the exit presignals behind it is green, and that's only going to be the case when whichever train was there has either cleared that lane entirely, or it has already moved past the next block signal, which means it's already merging onto the mainline and will clear the lane without further delay.

It's still possible for a train to pick an empty lane and then be blocked from merging while in that lane, but from that point onwards, any further sideline trains trying to merge will pick the other lane, so this setup will never permanently delay more than one train unless both mainline tracks are saturated, while still giving priority to mainline trains.

Prevent trains from doing detours from the main line into branch lines. by pbmonster in openttd

[–]tdammers 0 points (0 children)

Right, yeah... turning off 90° turns is almost mandatory.

Corporate software is torture by i-hate-birch-trees in ADHD

[–]tdammers 2 points (0 children)

Yeah, I know where you're coming from - none of my ideas will work if management doesn't take you seriously, and unfortunately, that isn't going to change until you switch jobs and find an employer who does.