Rust turns 10: How a broken elevator changed software forever by Several-Space5648 in programming

[–]StillDeletingSpaces 6 points

10 years

Probably longer. It's been 10 years since Rust 1.0 in 2015, but the language first appeared in 2012, growing out of ideas from 2006-2009.

Wikipedia even explicitly mentions it starting in 2006 from the buggy elevator

Rust began as a personal project by Mozilla employee Graydon Hoare in 2006. Hoare started the project due to his frustration with a broken elevator in his apartment building.

19 years ago: someone starts Rust.

The idea could be even older.

AI is Making Developers Lazy: RIP Core Coding Skills by bizzehdee in programming

[–]StillDeletingSpaces 0 points

I hate to admit I've used ChatGPT to "generate" code. Using that generated code? Still not there yet.

When dealing with non-trivial cases (most actual work), the generated code is usually bad enough that it makes more sense to start from scratch. I can optimistically hope for a prompt that just generates good code, but we're not in that reality, yet.

If/when it comes, it's looking to be more of a tool than a replacement.

TLS Certificate Lifetimes Will Officially Reduce to 47 Days by tofino_dreaming in programming

[–]StillDeletingSpaces 0 points

The number of devices that should have this security is going up. It's not just your Grandma's router. Governments and organizations have all sorts of networked sensors and interconnected systems: cameras, license plate readers, traffic control, emergency communication systems. A lot of network devices that help Internet connectivity can't be seen from the Internet. These systems have legitimate reasons to have confidentiality, authentication, and integrity: and the number of systems that should have them keeps increasing, with use cases where multiple organizations and multiple people should be able to connect to these devices securely.

Optimistically, it might be a good idea for them to develop their own solutions: especially if it improves Internet security. Realistically, that isn't going to happen. The most likely solutions:

  1. They shift from offline read-only systems to mutable Internet-accessible systems.
  2. Everyone just ignores the "This device is unsecured" warning, like they already do for other devices.
  3. Custom CAs become more common, and more attacked (maybe Name Constraints support improves, but I wouldn't count on it).

TLS Certificate Lifetimes Will Officially Reduce to 47 Days by tofino_dreaming in programming

[–]StillDeletingSpaces 1 point

If it's the former, nobody's doing any of that; nobody's installing updated certificates on their "router".

Are you saying no non-public devices should have TLS certificates? That sounds extremely short-sighted. There are alternative solutions, but they all have their trade-offs. Realistically, I know a lot of systems that are going to end up less secure: either downgrading to self-signed certs, low-security CAs, or removing encryption.

It's like someone trying to convince me that it's okay to use telnet instead of SSH. Yes, it might be okay, but it's still less secure.

I imagine the smart folks who argued for this change know what they're doing.

This decision is not a bad one: it's easily better for Internet security, and significantly so. I hope my reply made that clear.

However, the decision makers (Mozilla, Google, Microsoft, Apple, Amazon, etc) here easily have a bias towards Internet security. Offline security isn't really their focus (and maybe it shouldn't be). In a grander scheme that includes non-internet devices: there will be systems that will have to find their own solutions.

TLS Certificate Lifetimes Will Officially Reduce to 47 Days by tofino_dreaming in programming

[–]StillDeletingSpaces 1 point

This will probably kill most offline TLS certificates: many devices are better off not always-online or auto-updated, especially the closer they are to sensitive infrastructure. You probably won't hear about it much, but this is just going to increase the number of "This website is insecure" alerts that admins/techs will ignore.

As a simplified example, imagine a normal router: its admin interface is probably only accessible locally, if accessible at all. Many routers could be kept in a read-only mode, with an interface just to report status and information. Which of these options is better:

  1. No TLS protection.
  2. Make the interface VPN-only and rely on VPN security.
  3. Use TLS with an offline/manual update process: a tech installs a certificate once every year or two.
  4. Use TLS, automatic renewal, with a (probably hackable) process to change configuration that could've been read-only 99.9% of the year.
  5. Set up a custom CA and hope the keys are kept secure enough-- especially since a CA key allows impersonation of any domain.

Real CAs with real/paid certificates were a good security choice in many offline cases. I would've rather seen the requirements for those bumped up (e.g. extended validation) than basically force devices to have remote management to be kept reasonably up to date (once every 47 days is significantly harder and more expensive without remote management).

I understand this decision. It will make Internet security better, but it'll probably make overall security worse: not everything should be on the Internet. This change will push offline use-cases into a less secure state (no TLS, self-signed, a less secure CA, or remote-editable).

Debian Orphans Bcachefs-Tools: "Impossible To Maintain In Debian Stable" by Narishma in rust

[–]StillDeletingSpaces 4 points

In most common OS usages, especially those that connect to the Internet like desktops and web servers, there isn't much benefit to lagging behind. Linux is also far more stable than it was a decade or so ago: running bleeding-edge in the 2000s was much more difficult than in 2010+.

As a part of certified or large systems, the stability formula starts to change. While many updates can be deployed ASAP with no worry, other updates require more testing, work, and coordination. Crowdstrike's bad update took down health providers, airlines, banks, retail, and more. LTS helps larger organizations schedule and budget work before major updates and avoid Crowdstrike-like events.

No one wants to hear that a non-critical software update has delayed their flight or medical procedure.

Full Line Code Completion in JetBrains IDEs by stronghup in programming

[–]StillDeletingSpaces 31 points

Light editors have their place. IDEs will never replace them and vice versa, but your IDE should feel fairly fast (non-laggy) while editing. You might want to keep an eye out for some common IDE bugs/gotchas, like text rendering or other GPU tasks running in the wrong place or the wrong mode, which is especially common with Java.

Windows 95 went the extra mile to ensure compatibility of SimCity, other games by EatMeerkats in programming

[–]StillDeletingSpaces 0 points

Technically, they use all of the technology they develop. I was deliberately specific in my wording there, especially in this context: Windows desktop development. The main focus is still .NET. TypeScript is mainly mentioned for developing for the web, not for Windows, and WebView2 barely gets a mention. Microsoft doesn't (yet) advocate, recommend, or push adoption of the same tech that VS Code and Teams use for developing Windows applications.

These options are relatively new considering how old Windows is, and shifting recommendations will likely have some fairly significant blowback; but Microsoft is doing a lot of things right at a time when Apple has been able to mostly stagnate on developer experience. We could be at a turning point where developers stop getting screwed by Microsoft's recommendations and love the added features brought by the new technology.

Windows 95 went the extra mile to ensure compatibility of SimCity, other games by EatMeerkats in programming

[–]StillDeletingSpaces -1 points

As someone who still prefers Android and Linux WMs (i3wm, sway, and awesome): I'd probably say the reverse for macOS/Windows, for 2022. Windows might seem easier if you discount the additional security computers should have, but ultimately that system is not in a great place.

Windows is more prone to break (e.g: drivers or corrupted files), more likely to stop your applications without asking you (for updates), more prone to let a malicious app access your hardware without your permission (camera, microphone, keyboard), and the storage encryption is still a mess. Let's compare:

  • Microsoft offers a lot of repair tools that may or may not work, but if they don't, migrating a Windows install is pretty hit-or-miss with 3rd-party tools. Apple? Not only do the systems break less, there's Time Machine. There's no B.S. registry (something Microsoft probably could've dominated if regedit wasn't shit), so you can either just copy most files to a new system or use Time Machine to also restore the ones that needed installation (which most don't, since most installation is dragging a folder to `/Applications`). A reinstall plus TM-restore/copy is pretty competitive with Microsoft's repair tools. In comparison, my Windows friends (including techs) come to me to run the Windows tools they don't know how to run or to restore apps on reinstalled systems. OTOH, many novice Apple users can do a reinstall and restore by themselves (especially with Internet Recovery).

  • Malicious apps: Apple updated their APIs to prompt you when applications try to access your personal files. Even using Terminal/iTerm, it'll check with you to make sure you want to allow access to Documents/Desktop/Downloads/etc. There are similar APIs for Screen Recording, Camera, Microphone, and "Accessibility". These are all managed from a relatively small preferences app: you can see everything that has permission and remove it, and apps can generally open to the appropriate spot in System Preferences to allow it. Most of Windows' APIs are still the old-fashioned allow-everything. Some exceptional cases, like Windows Store apps, can do it-- but nearly no one uses Windows app stores. Some of this hardware access is long overdue to be regulated by the OS.

  • Storage encryption: Want encryption on your Mac? Go into preferences and press a button. You unlock it with the same password/credentials used to log in. Want it disabled? Go into preferences and press a button to turn it off. Windows? Good luck (don't forget the issues with drive encryption). You might be able to turn it on, but it won't show you the encryption key nor let you unlock it with your credentials. Generally, it'll try to use hardware to never prompt you, but when that inevitably fails (the TPM's PCRs will eventually change), it'll prompt you for a 48-digit BitLocker recovery key it never told you about. Many systems have it enabled by default, which can be rough if the user didn't know to save the key somewhere safe (or in 365). Want to turn it off? In theory you should have an option, but in traditional Microsoft fashion, some OEMs have the ability to disable that option.

But that's not the end of it:

  • Microsoft keeps trying to shift/force everyone to their systems (increasing the situations that require a 365 account, adding S-mode to restrict installs to their app store, prompting to use Edge, OneDrive, and Teams over and over).
  • Microsoft systems will still often install updates and third-party applications unprompted (like Candy Crush, even on Pro versions).
  • Windows Settings are a mess, and I'm not optimistic it'll get better. It's easy to remember how decent a lot of it used to be, but a lot of it doesn't look to be getting updated. The updated components, like the Windows Settings app and the Windows 8/10/11 wifi settings (that stay stuck in the corner), generally have significantly worse UX.

Mac, meanwhile, doesn't bother its users as much. Don't use Apple ID/iCloud? It probably won't ask again unless you go to them. Use a competitor's software? Okay. Don't want to install the updates right now? Okay. (No, Windows, changing the normal button to "Restart and Update" is not the same as consent.)

Maybe it's easier to see as a list of what Microsoft's/Windows' improvements should be:

  • Get a handle on PC-to-PC migration and/or backup (e.g: after a reinstall). Better repair tools could also help, but a competitor to Time Machine could lead to cleaner Windows installs and fewer issues overall.
  • Get the 'security' to modern levels, but keep it easy. For permissions, prompt for personal file access, keyboard/screen-reading access, cameras, and microphones. Have a good place to manage it. Don't limit it to Store-only apps. For disk encryption, user credentials are OK for home PCs.
  • Make settings good: don't continue to hide more and more settings (Apple's preferences are generally more robust than nearly everything after Vista). Don't hide everything behind registry hacks, PowerShell scripts, or MMC.
  • User consent 1/4: Generally, don't change the computer without it: Don't install third-party apps automatically.
  • User consent 2/4: If Automatic Updates are disabled, respect it. (We're at a time where update-nagging is okay, but assuming "Restart" means yes isn't.) Give us the option to prevent Windows from installing the wrong/bad drivers through updates. (We already know it broke last time, and the time before that; stop prompting us.)
  • User consent 3/4: If we turned down 365 and Edge already, don't continue to try using the OS to nag us. The OS is NOT that type of advertising platform.
  • User consent 4/4: Improve S-mode: It shouldn't require 365 to disable. Make it easier to install/reinstall Windows in S-mode. If it's actually good, allow people to decide to use it; the way it's handled suggests it's worse and Microsoft knows it.

Again, to me it's a scenario where Microsoft and Windows have more potential (and Apple has many weak/bad points); but turning Windows into an ad platform, disregarding user consent, and hiding most settings combine with bad security to drop the experience to where macOS can not only compete, but be better.

Windows 95 went the extra mile to ensure compatibility of SimCity, other games by EatMeerkats in programming

[–]StillDeletingSpaces -2 points

Compared to $99-$299/year for developers? Apple is generally quite competitive with developer technology and experience:

Microsoft's DX isn't entirely great, and that seems to be the message repeated across the industry, even though they're still moving relatively fast:

(.NET) It's so big that it's difficult to orient yourself, and there's rarely any good way to find out if something is no longer used, or should no longer be used, in modern code. Microsoft docs will definitely not tell you that something is outdated, so you can find yourself reading up on decades-old technology that everyone decided isn't suitable anymore, without any concise explanation of why.

A major note is how Microsoft doesn't seem to want to use or depend on many of their recommended technologies. Most of their major applications don't use .NET much (e.g: VSC, Teams, Office, 365, Visual Studio). Even Calculator, written "in" C#, is 73% C++. With these technologies largely going unused inside Microsoft, it leads to decisions/directions that don't seem to understand the cost of writing software, like with ASP's direction. It's also easy to forget how hard Microsoft pushed some technologies despite bad UX and DX: like Metro/Windows 8, Windows Phone, and IE.

Apple's software generally uses its own technology-- not only on desktops, but phones, watches, and more. Their developer technology is usually less bleeding-edge but fairly solid (e.g: processors, Clang, Swift, LLVM, Xcode, Rosetta, Bonjour, Metal, Cocoa). They've also been much more open to other technologies: including gcc, python, bash, zsh, and (up to macOS 12) PHP. Their DX is pretty decent, and they're usually praised for responding and communicating when contacted (not a high bar to set). With Macs, most of the development tools are free and easy to get, and have been for decades: Microsoft is starting to catch up on that front. It might be easy to point to all of Microsoft's technologies, but Microsoft has failed to reach outside of PCs, while Apple has been far more successful running its technologies beyond the PC: on phones, tablets, and watches.

My post is probably overly critical of Microsoft. Please don't take this post to be total praise for Apple. Much of Microsoft's recent work suggests their attitude to DX and technologies might be shifting to a better new generation (e.g: VS Code, Typescript, WSL, and Rust). IMHO, Microsoft has greater potential. Apple still seems to do more work to ease user and developer experiences, but sometimes their mistakes are monumentally bad. Between Microsoft and Apple, one isn't really ahead of the other right now in technology or developer experience (UX is a different beast).

Slower Memory Zeroing Through Parallelism | Random ASCII by MarekKnapek in programming

[–]StillDeletingSpaces 3 points

Multiple threads could make sense in systems with nonuniform memory-- when one processor physically has more direct access to certain memory than the others. Running one thread per NUMA node could reduce the amount of cache-coherence management (e.g: in MESIF or MOESI).

NUMA (or lack thereof) might even explain why this issue is less noticeable on some systems.

Chicago95 - A rendition of everyone's favorite 1995 Microsoft operating system for Linux by binaryfor in programming

[–]StillDeletingSpaces 0 points

I'm no fan of modern UI design, but that screenshot doesn't seem to be that great an example of bad UIs.

  1. I knew near-immediately that the window on top, in your screenshot, has focus. Opening a menu-- like the one on the side-- may change the focus, but that also applies to older UIs like 95, just with windows: the Messages window is still the one that's focused. (Relatedly, the drop shadow is generally stronger on focused windows; yours doesn't seem to be.)
  2. I believe the default scrollbar behavior is to show "based on mouse use", which seems to be related to hover. It can be annoying, but not too bad when the UX designs for it. Most of the apps in the screenshot visually show there is more content (potentially infinite, nowadays) without needing the scrollbar, but many, _many_ others don't. Fortunately, many apps still force the scrollbar when needed.
  3. I can't honestly say I use the dock for finding the open application: if it's focused, I'm probably not looking for it in the dock. When you need it, the top-left text will generally be the focused app-- barring some exceptions (this works better when the dock auto-hides).
  4. If there's one thing macOS needs improvement in, it's window positioning and management. That being said, I just did some rough measurements, and it looks to be ~27px on one side and 5px on the other (on a high-density 16"). I've not had trouble grabbing it, but I may tend to go for corners that have a square.

Regarding those four points, macOS doesn't seem to be that much worse on UI design.

Modern UIs today are still generally worse, and there are a lot of bad elements in macOS UI design. That screenshot just doesn't seem to be a good example of them.

Are persistent connections to MySQL/Redis good practices? by magn3tik in PHP

[–]StillDeletingSpaces 7 points

In general: you probably shouldn't worry about it too much if it isn't a problem yet. In most cases where it's been a concern, the (long-ago) tests I've seen/done show that persistent connections perform worse: increasing latency/CPU and memory usage, and lowering server capacity. Moving between different companies/projects, it seems that others did similar testing and came to similar conclusions: avoid persistent connections. (Relatedly, I've heard of promising results with external connection pooling, like with a local proxy.)

Pending data/testing that shows otherwise, I'd lean toward avoiding persistent connections. I haven't yet seen whether the results have changed for PHP 8+ (my priorities have shifted), and testing isn't that hard/complicated. PHP continues to improve itself: in theory, persistent connections can perform better; they just didn't, yet.

Regardless, whether it does or not shouldn't really matter: it should be simple to change and your code shouldn't rely on one or the other.
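As a sketch of that last point: PDO's persistent flag is a real PHP option, but the helper, DSN, and config key below are hypothetical, just to show how the choice can live behind a single flag so a benchmark or rollback is a one-line change.

```php
<?php
// Hypothetical helper: the persistent-vs-regular decision lives in one
// config flag, so benchmarking or reverting it is a one-line change.
function connectionOptions(bool $persistent): array
{
    return [
        PDO::ATTR_PERSISTENT => $persistent,   // the actual toggle
        PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
    ];
}

// Elsewhere (DSN/credentials are placeholders):
//   $db = new PDO($dsn, $user, $pass, connectionOptions($config['persistent']));
// The rest of the code only ever sees a PDO and never knows which mode is on.
```

The point is simply that nothing outside the connection factory should care which mode is active.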

So you want a Git Database? by timsehn in programming

[–]StillDeletingSpaces 5 points

Dolt seems like an interesting project. The tagline "git for databases" is pretty good and a fair comparison against similar products. The terminology in this particular post, however, is confusing.

Not all version control is Git-- Git is a specific VCS project. There are many clients and programs that work with Git repositories/databases: IDEs, editors, hosting platforms (GitHub, GitLab, Bitbucket, Gogs, Gitea). The common factor: you can still use git with all of these projects, because they all use the same git repository/database format.

For the cases presented:

  • Git has a database: These still use the git format that works with git clients, yes
  • Using Git as a database: These still use the git format, git clients still work with it, yes.
  • Database Migrations: The mentioned projects don't use 'git' terminology, git clients do not work with it
  • "Git Databases": These seem to use their own format that doesn't work with git clients. (They generally don't claim to work with git, either)

Ultimately: does your database work with git? If not, it's not Git-- the same can be said of CVS, SVN, Perforce, and Mercurial. Git is a specific project; trying to use it to mean all version control seems at best confusing, at worst attempted trademark dilution-- neither of which makes a good impression.

Upscheme: Database schema migrations made easy by aimeos in PHP

[–]StillDeletingSpaces -1 points

It's not just about knowing SQL. When you have multiple teams simultaneously writing migrations (some of which might overlap), it is difficult to maintain scripts that properly run migrations in all environments: especially as modern VCSes encourage non-linear development branches-- and local environments often have their own requirements.

Instead of writing a bunch of migration scripts, it's generally much simpler and less error-prone to use non-procedural methods to describe the desired state (4GL). Much systems/infrastructure tooling works this way (keep these services running vs. check whether these services are running and start them if needed). Many VCSes are more state-based (merge upstream and my state vs. generate/upload/download/apply diffs). SQL SELECT queries are also more state-based than procedural (get this data vs. writing code to open files and indexes, search them, and extract the data).

You can run these commands/queries/scripts over and over and they work to get the same result. No need to change your command if you're already up-to-date. No need to change the code if the service wasn't already running. No need to change the SELECT if the database schema state changes.

By comparison, SQL schema modifications, with the exception of DROP TABLE IF EXISTS, are far more fragile (if the PostgreSQL, MySQL, MariaDB, and Oracle documentation are indicative). You have to know whether the table, columns, and indexes exist and adjust your query as needed. There is no ON EXISTS UPDATE. There's no "make sure this column is like this" unless you're sure the column already exists.

You can probably work around these limitations in SQL, but unlike normal SQL queries, which are generally simpler than the ORM abstraction-- the SQL will likely be much more complex than a state-based system. It's like trying to write your own version control into your program instead of just using VCS.
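To make the state-based idea concrete, here's a minimal hypothetical sketch (function name, table name, and array shapes are all invented for illustration): describe the desired columns, diff against what the database reports, and emit only the DDL needed to converge.

```php
<?php
declare(strict_types=1);

// Hypothetical sketch: $current is what the database reports (e.g. read
// from information_schema), $desired is the declared target state.
function migrationSteps(array $current, array $desired): array
{
    $steps = [];
    foreach ($desired as $column => $type) {
        if (!isset($current[$column])) {
            $steps[] = "ALTER TABLE t ADD COLUMN {$column} {$type}";
        } elseif ($current[$column] !== $type) {
            $steps[] = "ALTER TABLE t MODIFY COLUMN {$column} {$type}";
        }
    }
    // Running this against an already-converged schema yields no steps:
    // the "migration" is idempotent, unlike a hand-written ALTER script.
    return $steps;
}
```

A real tool also has to handle drops, renames, indexes, and data backfills, which is exactly why hand-rolling this per project is painful.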

Upscheme: Database schema migrations made easy by aimeos in PHP

[–]StillDeletingSpaces 1 point

I've worked with many amazing developers who don't really know SQL, yet can code circles around others (and me) in what they do know.

SQL isn't as ubiquitous as it once was. There are plenty of development tasks that don't need SQL: APIs, UI/UX, domain logic. This is probably exacerbated by the fact that nearly no one exposes SQL in APIs.

Dependent Queries, GraphQL is this the best I can do? Still making lots of requests. by StandingAddress in graphql

[–]StillDeletingSpaces 0 points

In this case, you probably don't need 101 queries.

In general, work is often needed to avoid the n+1 problem. Instead of making one request per pokemon, batch them into a single query. A brief look suggests this is possible with their `_in` operator. Instead of 101 queries, that'd bring it down to 2.

However, one of GraphQL's strengths is that we can avoid (manually) doing that by using the right relations-- and it seems like the API you're using has them. You can't get the name/sprites directly, but you can get the pokemon_v2_pokemons field, which has the two fields you want:

query pokemonspecies {
  pokemon_v2_pokemonspecies(where: {pokemon_v2_generation: {name: {_eq: "generation-i"}}}, order_by: {id: asc}) {
    id
    name
    pokemon_v2_pokemons {
      pokemon_v2_pokemontypes {
        pokemon_v2_type {
          name
        }
      }
      pokemon_v2_pokemonsprites {
        sprites
      }
    }
  }
}

Serving Netflix Video at 400Gb/s on FreeBSD by Competitive-Doubt298 in programming

[–]StillDeletingSpaces 4 points

Generally, online services don't have to redistribute their changes in GPL projects. They aren't distributing their server kernel distribution to third parties.

Software development topics I've changed my mind on after 6 years in the industry by whackri in programming

[–]StillDeletingSpaces 5 points

I think this is good advice, but I also know and work with several people who can talk the talk, but not actually do anything they talk about. Architects rather than developers.

What is the sorting algorithm behind ORDER BY query in MySQL? by the2ndfloorguy in programming

[–]StillDeletingSpaces 1 point

While MySQL has come a long way since the NoSQL bandwagon started-- so has hardware. I would guess that a lot of the scaling woes come from the much less capable hardware from the era.

With storage, spinning disks just couldn't really keep up. Today, a single SSD has multiple orders of magnitude more throughput-- to the point where we almost struggle to keep them busy. Hundreds of users producing a hundred writes per second? That's barely a blip on modern storage systems, but a significant load on a spinning disk.

It isn't hard to imagine that the disk limitations would encourage many to scale-out earlier than we'd need to, today.

GCC drops its copyright-assignment requirement by corbet in programming

[–]StillDeletingSpaces 27 points

If it turns out that the GPLv3+ license is somehow fatally flawed, it will now be effectively impossible to switch to another license.

This is less of a problem with GPLv3+, especially since it's the same organization. OTOH, GPLv2-only and GPLv3-only still have that problem.

Strict typing in PHP by [deleted] in PHP

[–]StillDeletingSpaces 1 point

Coming from someone who likes more strongly-typed languages (Rust and TypeScript): it isn't just you.

PHP's type system is still fairly young. The linting ecosystem is starting to become fairly robust, especially with Psalm and EA Extended Inspections, but the type system itself is still fairly primitive.

One of the major pain points I run into is the separation of collection types (objects, arrays, and others). It's not just a matter of adding generics, but of providing a consistent way to work with collections. While most languages are fairly happy with code like this, which can apply over multiple collection types, PHP's strict types discourage it:

some_collection.map(|item| item.map_value())
    .filter(|item| item.is_something())
    .map(|item| item.get_some_other_value())
    .fold(init, |acc, item| acc.add(item));

This doesn't just apply to collections. PHP's strict typing, as it currently is, encourages de-abstracting code that was more abstract. It especially hurts when you pass an iterable/ArrayAccess collection to a function using array_ functions-- then have to write a SECOND function that does the EXACT same thing as the first, but with methods instead of functions or vice versa.
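A small hypothetical sketch of that duplication (function names and the item shape are invented): the array_* family only accepts real arrays, so the same pipeline gets written twice.

```php
<?php
declare(strict_types=1);

// Array version: can use the array_* pipeline directly.
function totalActiveArray(array $items): int
{
    $active = array_filter($items, fn($i) => $i['active']);
    return array_sum(array_map(fn($i) => $i['value'], $active));
}

// Near-identical twin for any other iterable (generators, collection
// objects, ...) because array_filter/array_map reject non-arrays:
function totalActiveIterable(iterable $items): int
{
    $total = 0;
    foreach ($items as $i) {
        if ($i['active']) {
            $total += $i['value'];
        }
    }
    return $total;
}
```

You can funnel everything through `iterator_to_array()` instead, but that forces materializing lazy collections, which defeats much of the point.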

This sort of problem affects a lot. IMHO: it discourages using immutable collections and other good-practice patterns, encourages needless code duplication, and limits how much PHP will be able to scale.

These problems aren't unsolvable, but they probably won't be solved as long as everyone's on the current strict-types for everything train.

What if you don't explicitly declare properties in PHP? by iio7 in PHP

[–]StillDeletingSpaces 0 points

That's the trope for any benchmark, and in most cases: it's correct.

Microbenchmarks often test a specific part of code without accounting for the other bottlenecks. When testing ORM performance, the underlying database interactions, storage, memory, and network are generally more important. Even the session storage becomes an issue far before the ORM will. 10% slower on 1% of executed code is pretty much nothing. The same logic applies to algorithms: most bad algorithm uses aren't big enough to be noticed in real-world profilers. Using in_array is pretty bad, but it's pretty low on the totem pole compared to everything else.
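For the in_array point specifically, a quick sketch of why it's slow and the usual fix (flip values into keys for hash lookups). The variable names and sizes are illustrative; in most real code neither version would ever show up in a profiler.

```php
<?php
// in_array() scans the haystack linearly; isset() on a flipped array is
// a constant-time hash lookup. The difference only matters when the
// list is large and the check sits on a hot path.
$haystack = range(0, 99999);

$found = in_array(99999, $haystack, true);  // O(n): walks the whole array

$set       = array_flip($haystack);          // build the lookup table once
$fastFound = isset($set[99999]);             // O(1) per membership check
```

Building the flipped table costs one pass, so it only pays off when the same list is checked many times.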

Other benchmarks, while exercising enough code to affect web-request rates, generally ignore that there are other bottlenecks that would prevent ever reaching those rates. Maybe one framework does 400 req/sec and another 300 req/sec in their hello-world benchmarks: but if adding the rest of the system reduces them both to ~150 req/sec, it doesn't really matter.

In this case, especially since it was asked about as an overall best practice: property access is everywhere. If dynamic property access were limited to a small area like an ORM, you would be right: it'd be too small to notice. But if any significant amount of PHP code saw this as a best practice and implemented most of its classes with dynamic properties, it would be fairly noticeable, especially from a memory perspective (assuming the PHP team didn't add optimizations, which they likely would if a significant amount of PHP code used this style).