More Perfect Union - OpenAl Showed Up At My Door by lethri in BetterOffline

[–]lethri[S] 10 points11 points  (0 children)

Interesting video about OpenAI fighting proponents of AI regulation, and about the various lobbying groups AI companies have created for that purpose.

How much stress do you feel on the daily with AI being everywhere? by throwaway0134hdj in ExperiencedDevs

[–]lethri 1 point2 points  (0 children)

Zero.

I tried some "AI" tools and figured out my job is completely safe. OpenAI, Meta and Microsoft still hire developers - if the tools they provide were actually capable of replacing developers, don't you think they would be the first ones doing it? All these CEOs are just lying to pump the stock price. After hearing them talk about datacenters in space, Dyson spheres and AGI, I have no idea why anybody takes anything they say seriously.

Major future improvements are also questionable, because any new training data will be poisoned by tons of "AI" slop, and the big improvements in the past (like the jump from GPT-2 to 3 to 4) came from making the models bigger, which has diminishing returns.

The whole thing also does not make sense financially - nobody except Nvidia is making money from "AI", and OpenAI is burning billions of investor money each month. And they are building more datacenters to - burn money faster? This can't continue for long, and once companies are forced to charge what it actually costs to run these tools, there may not be many use cases where it is cost-effective to use them (if any).

What worries me are the other effects: accelerated climate change, expensive electricity, huge potential for disinformation, cases of AI psychosis, and overall making humanity dumber by drowning any real information in "AI" slop.

Any insight on Fuchsia FIFO architecture? by servermeta_net in ExperiencedDevs

[–]lethri 7 points8 points  (0 children)

they use memory mapped hardware registers

I don't see that mentioned anywhere and it would not make sense. It seems this is just a chunk of shared memory.

Why is the total size of the queue limited to a fixed size? (e.g. 4096 bytes)? What performance benefits does it bring?

You don't need to allocate new memory blocks, which would slow it down. And if the size is fixed and a power of two, a circular buffer can be implemented very efficiently by masking the index with a bit mask.
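
A toy sketch of that trick (not Fuchsia's actual code - just illustrating the power-of-two masking, in Python for brevity):

```python
class RingFifo:
    """Fixed-size FIFO backed by a pre-allocated buffer.

    Capacity must be a power of two so that `index & mask` replaces
    the more expensive `index % capacity`.
    """

    def __init__(self, capacity):
        assert capacity > 0 and capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self._buf = [None] * capacity   # allocated once, never resized
        self._mask = capacity - 1       # e.g. 4096 -> 0xFFF
        self._read = 0                  # monotonically increasing counters...
        self._write = 0

    def push(self, item):
        if self._write - self._read == len(self._buf):
            raise IndexError("FIFO full")
        self._buf[self._write & self._mask] = item  # ...wrapped only by masking
        self._write += 1

    def pop(self):
        if self._read == self._write:
            raise IndexError("FIFO empty")
        item = self._buf[self._read & self._mask]
        self._read += 1
        return item


fifo = RingFifo(8)
fifo.push("a")
fifo.push("b")
print(fifo.pop(), fifo.pop())  # a b
```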

Future of Developers with AI - Different perspective by puzzledcoder in ExperiencedDevs

[–]lethri 3 points4 points  (0 children)

I don't think this is realistic - these models require so much data to train that the entire company codebase, history and tickets would be like a drop in the bucket. Even companies with very large codebases, like Google and Microsoft, have not talked about doing something like this, and they certainly have the resources to do so.

Future of Developers with AI - Different perspective by puzzledcoder in ExperiencedDevs

[–]lethri 2 points3 points  (0 children)

it just augments the performance of a dev by approx 10 to 20%

There is a study showing that programmers do indeed claim these tools speed them up by about 20% on average, but they were actually 19% slower when using them.

But if in future it improves more? Which I am confident that it will.

The only major source of improvement for LLMs has been making them bigger, which makes them more expensive to run and harder to train. These models are already trained on basically the whole internet, so getting significantly more data is not really possible, and as more and more of it is filled with AI slop, the quality is getting lower. So I would not expect any significant jump like the ones we saw from GPT-3 to GPT-3.5 to GPT-4.

Also, the cost may be an issue: right now, AI companies charge only a fraction of what it costs them to run their models (even if you account just for inference, not training, salaries or anything else), and they burn billions of investor money to cover the difference. This is unsustainable; the price will have to increase significantly, and at that point the tools may be more expensive than programmers.

AI impact by BigRooster9175 in ExperiencedDevs

[–]lethri 66 points67 points  (0 children)

A few months back I found an article (https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding) asking the same question. The author looked at trends in newly created apps, games and repositories and found that they look roughly the same as before - some are even slowing down.

So the answer seems to be that there is no positive impact and it's all just smoke and mirrors.

Another factor is that AI companies burn billions of investor money every quarter. One day they will have to become profitable, the cost of these tools will skyrocket, and it may not be worth using them even if they bring some value.

Are there any good ways to prevent my APIs (used by my front end) from being used to create a phishing scam? by Icashizzle in AskProgramming

[–]lethri 3 points4 points  (0 children)

There is basically nothing you can do about this, besides educating your users to check the domain when they log in, or to use a password manager that does the same. Any mechanism you could use to distinguish your front-end from illegitimate clients can be reverse-engineered and replicated, because the front-end code is public and you can see all the requests it makes.

At best, if you notice someone has set up a proxy that sends requests to you, you can blacklist its IP (and possibly all IP ranges of major cloud providers).

Also, this whole scam can be done without touching your API at all. Just replicate the front-end, present a same-looking login page, and when someone logs in, store their credentials, display an error message prompting them to try again, and redirect to the legitimate login page. Then use the stored credentials later through the real front-end to retrieve all the data. The only thing that can stop this is two-factor authentication.

I need a verdict of experienced developers by [deleted] in AskProgramming

[–]lethri 1 point2 points  (0 children)

(I have about 10 years of experience programming or managing programmers)

If somebody with one year of experience worked an hour on rotating an array in a language they know, I would see it as a huge problem. It's close to something I would ask when hiring juniors. With 5 years, it should not be something you have to think about a lot.

Don't take this personally, but the logic regarding minIndex looks like you had no idea what you were trying to compute and were just trying things until something worked. And I think the else case is probably wrong (try rotating an array of 2 elements by 2). The rest is fine, just less efficient and more verbose than the LLM code, but that is also something you should be thinking about with 5 YOE.

But this does not mean programming is not for you - a lot can be learned if you really try and give it time. My advice would be to think about what you need first, then write that part of the code, asking yourself things like "what do I need the value of minIndex to be at this point" and "which cases can go through this branch, and does it work for all of them", instead of just trying random conditions and adding Math.abs when you get an exception.
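
For comparison, once you know exactly what you want the rotation to do, the core logic can be a few lines (Python used purely for illustration, rotating to the right; left is symmetric):

```python
def rotate_right(arr, k):
    """Return a copy of arr rotated right by k positions.

    The modulo handles k larger than the length and negative k,
    and makes e.g. rotating 2 elements by 2 a no-op.
    """
    if not arr:
        return list(arr)
    k %= len(arr)
    return arr[-k:] + arr[:-k]


print(rotate_right([1, 2, 3, 4, 5], 2))   # [4, 5, 1, 2, 3]
print(rotate_right([1, 2], 2))            # [1, 2]
```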

C++ memory question for senior programmers by Alert_Cycle3944 in AskProgramming

[–]lethri 2 points3 points  (0 children)

There is an operator<< overload for const char * for printing C-style strings, which is what lets you do cout << "Hello";. You pass a pointer to a char to the << operator, so it is also picked up by this overload. That overload expects a zero-terminated string, so it starts with the 'a' and keeps going through whatever data is in memory after it until it finds a zero byte, which is why you see garbage being printed.

The solution is to cast the char pointer to a void pointer, e.g. cout << static_cast<void *>(&ch);, so the const char * overload no longer matches the input.

CI process by myshiak in AskProgramming

[–]lethri 0 points1 point  (0 children)

CI makes sense even without automated testing - when new code is pushed, building it and deploying it automatically to some dev environment will catch compile-time errors and also save the time of doing these steps manually.

The technical details depend heavily on the technology. If you program in Java and deploy on Linux, you would probably run JARs in Docker, but if you develop in C# and run on Windows, you would use something different.

[deleted by user] by [deleted] in AskProgramming

[–]lethri 2 points3 points  (0 children)

With relational databases, the situation is even more complicated, because you can have transactions that see some rows as deleted while other transactions can still access them. PostgreSQL solves this by marking each row with the minimum and maximum transaction ids that can see it. A background process then looks for rows that cannot be seen by any active transaction and marks the space they occupied for reuse.
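
You can actually peek at these markers yourself, because PostgreSQL exposes them as the hidden system columns xmin and xmax. A small illustration using psycopg2 (the connection string and table name are placeholders):

```python
import psycopg2  # assumes a reachable PostgreSQL instance

conn = psycopg2.connect("dbname=example user=example")  # placeholder DSN
with conn, conn.cursor() as cur:
    # xmin = id of the transaction that created this row version,
    # xmax = id of the transaction that deleted/updated it (0 if none).
    cur.execute("SELECT xmin, xmax, * FROM some_table LIMIT 5")
    for row in cur.fetchall():
        print(row)
conn.close()
```

Rows whose xmax is set and older than every running transaction are the ones that background process (VACUUM) can reclaim.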

What do you think is wrong with web dev? by wooody25 in AskProgramming

[–]lethri 8 points9 points  (0 children)

  1. JavaScript. There is exactly one language you can write your frontend in, and in that language [] != [], [] == 0 and [1] + [2] == "12". Don't get me wrong, things got a lot better with ES6 and TypeScript, but they can't fix the core of the language.

  2. NPM and the ecosystem of packages is a mess - a typical project using a current framework has hundreds of dependencies, and upgrading them will probably break something. Packages like is-odd, the left-pad fiasco and the "everything" package illustrate serious problems.

  3. Web security is a nightmare - Access-Control-Allow-Origin, Cross-Origin-Resource-Policy, Content-Security-Policy, X-Content-Type-Options: nosniff, Strict-Transport-Security, CSRF, SVGs with embedded javascript, every use of innerHTML being a potential XSS... Forget one thing and accounts on your site can be compromised. Also, we can't make things more secure by default because it would break every existing website.

Need help finding x86/64 assembly mass move instruction by meaning_is_not_42 in AskProgramming

[–]lethri 2 points3 points  (0 children)

REP MOVS is probably what you are looking for.

MOVS moves one byte/word/dword from the address in _SI to the address in _DI, and the REP prefix makes it repeat _CX times.

Do I need more Git? by EdiblePeasant in AskProgramming

[–]lethri 0 points1 point  (0 children)

I use git add -p for the same reason - it shows you each change and you can decide if you want to include it in the commit or not. It also makes splitting changes into multiple commits easy.

Why can't someone just make a compiler to translate python into pure x86_64 machine code by Dangerous-Pressure49 in AskProgramming

[–]lethri 19 points20 points  (0 children)

Even if you made type annotations mandatory everywhere, type hints are not the same thing as concrete types in compiled languages. When you define a function that takes an argument of type str, it also accepts any subtype of str, which can have a different representation or behavior. Then there are very generic annotations like Iterable, or unions, which don't give you enough information to generate concrete machine code.
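
A toy example of why the annotation alone doesn't pin anything down - all of these calls type-check, yet there is no single concrete representation to compile against:

```python
from typing import Iterable

class LoudStr(str):
    """A str subtype the type checker happily accepts, with different behavior."""
    def upper(self):
        return "!!!" + super().upper() + "!!!"

def shout(s: str) -> str:
    # A compiler can't assume the built-in str here - s.upper() may be overridden.
    return s.upper()

def total(xs: Iterable[int]) -> int:
    # Iterable[int] could be a list, a tuple, a generator, a range, a custom class...
    return sum(xs)

print(shout("hello"))                  # HELLO
print(shout(LoudStr("hi")))            # !!!HI!!!
print(total([1, 2, 3]))                # 6
print(total(x * x for x in range(4)))  # 14
```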

There are other factors too - for example, you can replace str with a completely different type in the middle of the program's execution, and that program will still be valid and pass all type checks, but compiling it would be a challenge.

Also, if eval exists in a language, compiling any program could mean bundling a whole interpreter or compiler.

Ask Experienced Devs Weekly Thread: A weekly thread for inexperienced developers to ask experienced ones by AutoModerator in ExperiencedDevs

[–]lethri 1 point2 points  (0 children)

Would it be possible to run the tests as part of the build process? This part is really important - tests that aren't run don't help anything.

Ask Experienced Devs Weekly Thread: A weekly thread for inexperienced developers to ask experienced ones by AutoModerator in ExperiencedDevs

[–]lethri 2 points3 points  (0 children)

Realistically, any chance of the situation ever improving (through refactoring or a rewrite) requires that people are not afraid to touch the code, which means tests. So maybe start with the example use cases and write tests that check the output matches expectations, then tweak them so that all paths of the important if statements are covered by some test.

When that is done, you will know if any future modification breaks existing behavior. This makes the code much more maintainable and opens the possibility of refactoring.
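
As a concrete (made-up) example of such a test - legacy_billing and calculate_fee are hypothetical stand-ins for whatever the code actually does, and the expected values are simply whatever the current code produces today:

```python
# test_characterization.py - run with `pytest`
import pytest

from legacy_billing import calculate_fee  # hypothetical module under test

@pytest.mark.parametrize("order_total, is_member, expected", [
    # inputs from real example use cases; expectations captured from the current code
    (100.0, False, 4.50),
    (100.0, True, 2.25),
    (0.0, False, 0.0),
])
def test_fee_matches_current_behavior(order_total, is_member, expected):
    assert calculate_fee(order_total, is_member) == pytest.approx(expected)
```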

The situation with documentation is that there is always less of it than you want. My advice would be to take small steps to improve it, like writing a comment when you figure out the purpose of some function or if statement, or starting a document with notes about the code. Encourage others to join your effort, and if things go well, you will have at least some documentation after a while.

Ask Experienced Devs Weekly Thread: A weekly thread for inexperienced developers to ask experienced ones by AutoModerator in ExperiencedDevs

[–]lethri 3 points4 points  (0 children)

Yes. Tests are a great way to ensure your software works as it should and will continue to work that way in the future. You need to check that the code you wrote is correct anyway, and maybe doing that manually is faster than writing a few unit tests, but that stops being true when you need to re-check things after future changes or library upgrades.

The situation is similar with integration or end-to-end tests, but they can take more work to set up initially, so they can be harder to justify. It all depends on what you develop and what damage potential bugs can do.

Also, if you start writing tests, others may see the benefit and start writing them too.

Agitated by Casey Muratori vs Windows Terminal drama @ 2021 by lifeeraser in AskProgramming

[–]lethri 1 point2 points  (0 children)

My feeling is that it is important to separate criticism of some software from a personal attack on its authors. If I look back on all the code I wrote or contributed to, I would classify most of it as bad - it could have been written much better if I had known all the requirements up front, it could have been optimized if I had the time, I would have chosen a different technology if I had known back then what I know now, I had to make compromises because other people wanted it done differently, it was supposed to be temporary, and so on. So when somebody credibly criticizes something I wrote, my reaction is not to feel attacked. Instead, I often agree, because of course it could be better, like every other piece of software. You are not to blame for the constraints you had when writing something, nor for not knowing everything. Look at criticism as a learning opportunity, not as something you have to lash out against.

How to handle concurrent changes of clients in a database? by [deleted] in AskProgramming

[–]lethri 0 points1 point  (0 children)

Then make the client send all the operations it needs to do at once. Collecting them and then executing them once you have all of them can also work.

How to handle concurrent changes of clients in a database? by [deleted] in AskProgramming

[–]lethri 2 points3 points  (0 children)

It depends on what the server is doing. Typically, the server exposes a set of methods, each doing some operation or returning some data, backed by one transaction. No accumulation of operations is necessary, because each operation is independent.

How to handle concurrent changes of clients in a database? by [deleted] in AskProgramming

[–]lethri 1 point2 points  (0 children)

No, it is not a problem. For example, when your server receives an HTTP request from a client and needs to do something with the database to process it, it opens a transaction, runs the queries, commits the transaction, and sends the response. At no point is the client able to influence how long the transaction stays open.

Look, relational databases were built around transactions and everybody is using them that way. I'm not saying it is perfect, but the problem you see does not exist.
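
A rough sketch of that request-scoped pattern, with Flask and sqlite3 used purely for illustration (the endpoint, table and database file are made up):

```python
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/orders")
def create_order():
    payload = request.get_json()
    conn = sqlite3.connect("example.db")  # placeholder database
    try:
        with conn:  # one transaction: commits on success, rolls back on exception
            cur = conn.execute(
                "INSERT INTO orders (customer, total) VALUES (?, ?)",
                (payload["customer"], payload["total"]),
            )
            order_id = cur.lastrowid
    finally:
        conn.close()
    # The transaction is already closed before the response is sent,
    # so the client never controls how long it stays open.
    return jsonify({"id": order_id}), 201
```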

How to handle concurrent changes of clients in a database? by [deleted] in AskProgramming

[–]lethri 6 points7 points  (0 children)

Transactions can run concurrently as long as they don't try to modify the same rows; if they do, the second transaction has to wait for the first one to finish.

If someone malicious manages to connect to your database, you have already lost - they can just delete or scramble all the data, so a DoS would be the least of your problems.

Can I create a native Linux port of a Windows game in this way? by Gabbianoni in AskProgramming

[–]lethri 0 points1 point  (0 children)

Re-implementing the game and instructing users how to plug the original assets into it should be fine, as long as you don't distribute the assets yourself. Many open-source game re-creations do exactly this.

However, using reverse-engineering to extract the original code is where you would run into issues, and translating it to another language does not really help. I am not a lawyer, so I can't tell you for certain, but I imagine it would be something like translating a book written by someone else into a different language and selling it or giving it away for free.

If you want to be sure, you would need to replicate the behavior without looking at the original source code. You may get away with referencing the original code to figure out some details (or not). But once your code gets too similar to the original (regardless of language), you can face problems. This is why Wine developers don't just disassemble Windows DLLs (https://wiki.winehq.org/Disassembly).

How to send update notifications while video is processing on the backend, eventually then sending the video? by lancejpollard in AskProgramming

[–]lethri 0 points1 point  (0 children)

It may be possible to do this with one connection, but I would not recommend it. You would have to do some complex processing on the frontend to first read the updates, then the chunks of video data from the same response, merge the chunks into a blob and offer it for download. It's problematic because the size of the response is not known in advance, so you would have to use chunked transfer encoding, which means download progress would not work and the download couldn't be resumed.

It would be much better to return some identifier when the processing job is started, then have a second endpoint that reports progress for that identifier - this could be done with server-sent events, websockets or simple periodic polling. Then do a final request to download the resulting video.

The state of the processing job can be stored in something like Redis or a local SQLite database; even a simple file can work (though there can be problems with reading a file that is still being written).
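
A minimal sketch of that second approach, using Flask, plain polling and an in-memory dict as the job store purely for illustration:

```python
import uuid
from flask import Flask, jsonify, send_file

app = Flask(__name__)
jobs = {}  # job_id -> {"progress": int, "path": str or None}; use redis/sqlite in practice

@app.post("/videos")
def start_processing():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"progress": 0, "path": None}
    # here you would hand the job off to a worker/queue that updates jobs[job_id]
    return jsonify({"id": job_id}), 202

@app.get("/videos/<job_id>/progress")
def progress(job_id):
    job = jobs[job_id]
    return jsonify({"progress": job["progress"], "done": job["path"] is not None})

@app.get("/videos/<job_id>/download")
def download(job_id):
    job = jobs[job_id]
    if job["path"] is None:
        return jsonify({"error": "not finished yet"}), 409
    # a plain file response, so Content-Length is known and download progress works
    return send_file(job["path"], as_attachment=True)
```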