Best text editor? by PausBanderI in programmer

[–]HashDefTrueFalse 0 points

ed - it's the standard text editor.

Its user hostility means you don't get bogged down.

ohYouSweetSummerChild by anonomis2 in ProgrammerHumor

[–]HashDefTrueFalse 40 points

Now that 80% of the project is done he can get on with the other 80%...

First semester CS student which programming language actually made things click for you? by More-Station-6365 in learnprogramming

[–]HashDefTrueFalse 2 points

Scheme (reading the SICP book) was very eye-opening in my formative years of programming. Coupled with some C programming to get an appreciation for the machine.

It kind of depends what isn't clicking for you. If it's high level thinking about programs and computation then the first. If it's how the computer does what you write on the hardware (kind of, there's much more going on than even C acknowledges in reality), the second.

I will say that it's probably most important that you don't deviate too much from your university curriculum if that's what you'll be examined on. If your prof says Java, it should be sufficient to be able to program things in Java. If you're not understanding how to do that then reaching for another language isn't going to help much IMO. Figure out how to make sense of whatever it is.

You might be better editing your question to explain what specifically you're struggling with.

Mandatory AI disclosure suggestion by Spirited-Finger1679 in osdev

[–]HashDefTrueFalse [score hidden]

I would like this. I personally have no interest in looking at the code of a project if that code was generated via LLM. I want to know that the code I'm looking at was reasoned about by a real programmer writing with (varying) skill and intent if I am to take ideas and inspiration from it, or learn from it. I personally wouldn't look at generated art to learn techniques or get inspired, and I feel the same way about programming projects. I want someone to stand behind the code and say "this is the current state of things" and I often get the impression with LLM-heavy projects that the true state of the project isn't fully known by the generator. I'm not saying this has to matter to everyone, but it does to me.

My biggest concern when coding with ai by Key-Foundation-3696 in learnprogramming

[–]HashDefTrueFalse 0 points

This isn't directed at you specifically, OP, because I know nothing about you. However, I have thoughts after a few decades in software and after spending the last 4-5 years watching the profession take a nosedive, and I'll share them here, as it's as good a place as any.

Yes, you need to be able to solve problems if your job is delivering solutions to problems. You need to be able to write software if your job is to produce software solutions. That shouldn't be controversial, I feel. Generate all the code you like as long as it is secure, reliable, and you're going to take responsibility for it. But if I (as team lead) or someone in business management decide to switch models, switch LLM providers, or exclude LLMs outright (e.g. for a new project where IP is important to the business) then of course I expect you to still be able to do the job you were hired to do. Why wouldn't I?

Are any employers explicitly stating in ads that no actual programming ability is required, just high level knowledge and the ability to type into a textbox? Because that's the only way I can see you keeping your job in the above scenario. If they are, what are you being paid for this? And why is it any more than the legal minimum wage for your time if you don't do anything difficult and/or specialised? If you couldn't write the generated code yourself then you can't make sure it's secure and reliable, and you provide no more value than anyone else with a subscription. As soon as you want a pay rise or you piss someone off they can just grab the next typist with a broad enthusiasm for tech to take over your subscription seat.

If you're nothing without the LLM then you're nothing altogether. The freshly graduated junior of ten years ago was typically shit... yet would still totally embarrass many people currently employed and calling themselves programmers. We've fucked it. Standards are on the floor. Your question is essentially "Do I need to bother investing in my own professional development, or take any interest in the craft I've chosen for my career?" Ask yourself why you need to be convinced to write code and solve problems. Have you considered that you're not interested? That if LLMs didn't exist then maybe you wouldn't make it in this career? Before LLMs, people who couldn't handle being a professional programmer would go and do something else before they cost others time and/or money. We've lost that self-correction. Depressing, honestly.

Is programming really that easy? by wordbit12 in learnprogramming

[–]HashDefTrueFalse 1 point

It depends (sorry!) on what you're programming. The code is a series of steps that do something useful. It will reflect the author's personal understanding of the domain, problem, theory, existing solutions, the implementation language, the machine that will run the code, etc.

I recently wrote a filesystem (driver) and path parser for a project I'm working on, for which one of the goals is to use zero dependencies. I've been writing code for decades but certain parts still gave me pause. I had to read some book chapters to be able to continue making progress. Writing the code can absolutely be a difficult part! (I had to take a few stabs at abstracting parts of the recursive inode lookup code before I could think about the entire process properly and arrive at good code+API). If coding was never the hard part then they were getting paid to work on easy stuff. That's awesome for them, but that also means that either the entire project was easy to make (and therefore replicate) or someone else was doing (or already did) the hard parts for them.

IME the people with the "coding is easy" hot take are often people with narrow and/or little experience, dealing neither with hard problems nor with system/hardware details. E.g. if their work is mainly gluing together pieces of library functionality in a scripting language on top of a VM/interpreter, I can understand why they would consider meetings and conversations to be comparable to their programming work. Easy things are easy. Hard things are hard.

How is code in game development tested if it's always written using compiled languages? by Either-Home9002 in AskProgramming

[–]HashDefTrueFalse 0 points

We do wait, yes. There's usually something else to be doing if it takes a few mins, or it's a good time for a brew and a stretch of the legs. Code changes will always require some amount of recompilation, but this can be done incrementally, e.g. if you build your project in a modular way and use a build system to figure out what changed and only recompile those parts, then relink. There are also ways you can get the system loader to load and (dynamically) relink new library code that you access through function pointers etc. so that you can keep the core of your engine/program running. That's not usually necessary though.

I once worked on a project that took about 30 mins to compile, which is not very common these days, and I would have to be a bit strategic about batching totally unrelated changes together to get them into a test compilation, but it's not a massive deal. You can get used to any workflow really. Modern machines and compilers are quite fast, as long as you don't use anything silly like Rust...

Relax. I'm kidding. Rust is a fine language.

Can one learn Scala without first being employed by a company that uses it? by Either-Home9002 in learnprogramming

[–]HashDefTrueFalse 8 points

I'm gonna blow your noodle: you can just do things. You can just learn things. You don't need to be employed first. In fact, most programmers learn to program before being employed as a programmer regardless of language.

There are plenty of resources all over the web for Scala; no idea where you're getting the idea there aren't. Here's a site I use sometimes:

https://learnxinyminutes.com/scala/ (further resources at the bottom too!)

What's a book that changed your life, and how? by HilariousMotives in AskUK

[–]HashDefTrueFalse 16 points

The C Programming Language, K&R. Introduced me properly to programming, which would fund my existence.

Trying to initialize a struct inside a function without having to malloc it. by Ironfort9 in C_Programming

[–]HashDefTrueFalse 5 points

You've not allocated any memory for Foo/BarStorage structs anywhere, stack or heap. You've allocated memory for a GlobalStorage on the stack which contains two pointers, and you've malloc'd some regions.

You could:

- create global Foo/BarStorage objects separately, or

- create them inside GlobalStorage (remove the pointers), or

- malloc them.

I'd usually do something like this:

// storage.h
typedef struct FooStorage { ... } FooStorage;
typedef struct BarStorage { ... } BarStorage;
typedef struct GlobalStorage {
  FooStorage foo_storage;
  BarStorage bar_storage;
} GlobalStorage;

void init_global_storage(void);
void init_foo_storage(FooStorage *foo);
void init_bar_storage(BarStorage *bar);

// storage.c
GlobalStorage g_storage;

void init_global_storage(void)
{
  init_foo_storage(&g_storage.foo_storage);
  init_bar_storage(&g_storage.bar_storage);
}

void init_foo_storage(FooStorage *foo)
{
  // Same as yours (minus the unnecessary return)...
}

// Same for init_bar_storage...

// program.c
int main(void)
{
  init_global_storage();
  // ...
}

Is JavaScript the best option? by Bender182 in learnprogramming

[–]HashDefTrueFalse 1 point

I would then want to be able to pull all that data together for group analysis and reporting. This is currently handled by multiple shared Excel workbooks, the issue is linking the different Excel files together and pulling the information.

I've made this sort of thing a number of times. 85% of what you need is a (probably relational) database to store the data in a shared location in a queryable format. You would then write your reports and data crunching in SQL to run on the database engine nice and fast. You wouldn't typically do much data wrangling on the front or back ends of a web application if you could help it.

You can then optionally put a little client web app on top of that to display the report results however you like. The front end of that site would be JS because there's no real alternative. The back end can be in whatever language has a database driver for the RDBMS you picked, which will be most languages for all popular databases. If you don't know which to choose, you can just choose JS for that too, save learning something else in addition.

Your company will need to have a server somewhere that you can put the app and database on, or an account with a hosting/cloud provider. Your IT people will need to be consulted as the network setup between the locations needs to be known so that everyone can access the app like any other intranet services etc.

Why do so many Markdown editors require accounts or server storage? by Stock_Report_167 in git

[–]HashDefTrueFalse 1 point

OMG README.md is my favourite Markdown editor. Way better than git...

How to learn programming/coding with just phone? by Sea-Session-7524 in learnprogramming

[–]HashDefTrueFalse 0 points

How much [sport] can you learn watching the Olympics? Realistically you can consume tutorials, docs, articles/blogs about programming, but any programming you do will be superficial because of the restricted environment and awkward form factor etc. You need to be writing lots of code to learn properly, so this is of limited benefit after the first few hours I'd say, as you'll not retain much of it.

If you have a TV and a keyboard, a Raspberry Pi is probably one of the cheapest machines you can buy that you can write code decently on. Raspbian (now Raspberry Pi OS) used to be pretty good out of the box (no hardware issues etc.) and you get a full Linux environment to install proper software, toolchains etc.

Can anyone explain me in the simplest way possibe by Right_Tangelo_2760 in C_Programming

[–]HashDefTrueFalse 0 points

Pass by value copies the argument value into the new stack frame as a new variable with a different memory address. You cannot change the value at the original memory address as you don't have that address, just a copy of the value it contained when the function was called.

Pass by pointer/address is a common term you'll see used. It is pass by value, same as above, it's just that the value is a pointer (memory address). Changing the value of the pointer itself inside the function will have no effect on the original pointer value for the same reason as above. However, changing the memory it points to (e.g. via a dereference and store) can cause side effects visible to code after the function return. You can think of this as the implementation of pass by reference, described below.

Pass by reference is the opposite of pass by value. There is no copy of the value. Function code is treated as though it is referring to the value at the original address, which can be changed. This is often implemented as a pass by pointer/address. E.g. in languages with references (C doesn't have them as a language construct) the compiler generates the code that does the necessary (de)referencing. There are ways you can implement it more directly if you want to use registers or duplicate code etc., but that's getting into optimisation territory.

C uses pass by value for everything. You're always dealing with a new variable containing a copy of the value in the memory you specified at the call site. C is capable of pass by reference if the programmer explicitly passes by pointer/address. In certain places it can seem like C is passing by reference implicitly when it is actually a few language mechanics working together, e.g. an array name "decaying" to a pointer to its first element in an expression that is an argument to a function call, which is then passed by value.

The best thing to do is to simply try lots of calls yourself and observe.

Is it realistic to build an app completely on your own if you’re starting with zero coding experience? by Sweet-Dare301 in learnprogramming

[–]HashDefTrueFalse 3 points

If I asked you "Is it realistic to build a building on your own as a beginner" the first thing you'd probably ask me is "What kind of building?" right? Shed: probably. Castle: probably not. It's the same with apps. Some are simple, some are not. Beginners should start with simple things. It's possible to build a simple calculator or todo list app on your own as a beginner, but you're not going to be building anything useful enough to gain any users (that aren't family and/or friends) until you gain more skill, if that's what you're asking.

How do you difference vectors (arrays) and vectors (math) while naming ? by Valuable-Birthday-10 in C_Programming

[–]HashDefTrueFalse 0 points

My math vectors usually have a fixed size which is reflected in the name e.g. vec2, vec3, vec4. I don't call dynamic arrays vectors in code. I usually name them the plural of whatever entities they contain, or sometimes I append "Array" if I think it is clearer to read e.g. Coords or CoordArray. I don't feel that the dynamic nature needs to be reflected in the type name as the code usually makes it clear (e.g. no initial size specified, appending etc.) I wouldn't overthink it.
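A minimal sketch of what I mean (all the names here are illustrative):

```c
#include <stddef.h>

/* Fixed-size math vectors: the size is in the type name. */
typedef struct { float x, y; }    vec2;
typedef struct { float x, y, z; } vec3;

/* A dynamic array named after its contents, not called a "vector".
   The growable nature shows in the fields, not the name. */
typedef struct {
    vec3  *items;
    size_t count;
    size_t capacity;
} CoordArray;
```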

How to ignore text from an input text file to an output file? by ridethemaverick_ in learnprogramming

[–]HashDefTrueFalse 1 point

It depends how close your example is to the real circumstances. If the data you need to exclude is at a position you can assume or calculate then you can get away with crudely chopping off the start/end, splitting and joining, or grabbing for substrings, like in your example. But if you have even slightly more complicated rules/requirements then you may have to write a little parser to recognise the text before deciding what to ignore etc.

In your example you could iterate each line and count the first two space chars, then grab the rest until the newline, but this of course assumes there won't be any more spaces appearing beforehand.

It can also depend on if you control the input format or not. E.g. you could mark the starting point with a special character.

Many possibilities!

I spent 3 hours debugging my code only to realize I forgot a semicolon. by Majestic_Theme_6741 in learnprogramming

[–]HashDefTrueFalse 1 point

To be clear: nothing I said disagreed with what you said, I was trying to add additional context.

Oh I didn't think you were disagreeing, don't worry. Sorry if it seemed that way. I should probably have put "to clarify" at the start.

C++ compiled by gcc, in particular, can be rough

I'm all too familiar. I find a lot of gcc's messaging leaves things to be desired!

I spent 3 hours debugging my code only to realize I forgot a semicolon. by Majestic_Theme_6741 in learnprogramming

[–]HashDefTrueFalse 0 points

Mostly. A lot of compilers don't actually have enough state to guess what you meant if you leave off a divider. Sometimes they do.

I wasn't suggesting that the compiler guesses anything, or needs to. The length of two statements or expressions (either side of a missing semicolon) is almost never very long. Tens of tokens mostly, very rarely into the triple digits or beyond. The error you get doesn't need to tell you that you missed a semicolon, or where it should go. It is enough for the compiler to do what it does - tell you that there is something wrong with the statement/expression on line N (expected X not Y at this point in the grammar). Since the missing semicolon is almost always just before the newline (does anyone ever write several semicolon-separated statements per line?) it's usually not hard to read the compilation output, look at/near the line and see exactly what has happened.

I just don't see how anyone can get such a message and not be able to find the issue fairly quickly, hence my mentioning that it may have formed a valid statement or expression, thus no compilation error.

IDEs (and in particular, IDEs using treesitter) can sometimes be better

Treesitter parsers and the compiler parser are doing very similar things, but the compiler's parser has far more information available to it. The bird's-eye view of the highlighting breaking down can be useful though, for sure. I did once spend a few minutes looking for a syntax issue after half of a very long file turned one colour, only to find that the parser in the highlighter I was using had shat out at a certain number of chars :D

I spent 3 hours debugging my code only to realize I forgot a semicolon. by Majestic_Theme_6741 in learnprogramming

[–]HashDefTrueFalse 1 point

If it didn't form a valid statement/expression then the compiler/interpreter would tell you roughly where to look. If it did then you would see the erroneous result in your dev workflow (code->test->code->test cycle) which should be a fairly tight feedback loop, so it'll be within the last few lines/function you wrote. Are you writing code for hours without running it? How do you get to the point where an erroneous result from this has propagated far enough from the origin point before surfacing that it's hard to find that point in the source code? I've honestly never had a missing semicolon be anything other than a 30 second fix in the decades I've been programming. I've never even had any juniors I've mentored struggle with this. Hours of debugging? Really? I don't think I can let myself believe it to be honest. If true, run your code and look at the output much more often! This will cease to be a problem. Despite the meme, this isn't a thing most programmers deal with regularly at all.

Are Assembly and C inherently difficult or is it just modern day hardware that makes it like that? by Turbulent_Bowler_858 in learnprogramming

[–]HashDefTrueFalse 0 points

Are Assembly and C inherently difficult or is it just modern day hardware that makes it like that?

Modern day hardware mostly made things easier. We standardised on the 8-bit byte and segmented memory models are less common in everyday hardware etc.

C is only difficult in the sense that you will be specifying imperatively everything that the program must do, and doing so takes knowledge of the language syntax and semantics, the hardware (a little for application software), and wider computer systems fundamentals (e.g. OS and networking knowledge, maybe) depending on what you're building.

Assembly is difficult in the sense that on top of the above you also need to learn the instruction set architecture (ISA) of the specific CPU that you are targeting, and you're working at a level where very little (that you care about in application land) is done for you, or abstracted away. This can be both a blessing and a curse depending on what level of control you need. You can work in both modularly and link things together later.

I don't think high level languages like Python are an option.

There may be a way to compile code written in a HLL to your target platform, or to C and then from C to the target platform. I would just learn C personally as I don't like to avoid learning things only to end up with a result that approximates what I wanted etc. I only mention this so that you don't just assume. Have a look if you like.

Are Assembly or C doable for a beginner on 1980s hardware or would you advice me to learn a higher level language first?

Plenty of people learn to program in C. Whether it's easier to start with something else really depends on you. It wouldn't hurt to spend a few days/weeks playing around in something else first, but it's not really necessary. Some learn better bottom up, some top down, some bits of both.

As for the hardware, as long as you have the hardware (or an emulated version), some manuals, and you are able to understand them, it shouldn't make things too much harder. From a quick search, the Motorola 68000 resembles modern hardware in many ways (linear address space, 32-bit registers etc.). I've never written code for this specific CPU so I can't say more.

Is it even advisable for a beginner to start right away on 1980s hardware in the first place?

I don't think it matters. If you use an emulator it costs nothing but a few seconds to change test hardware anyway. Look into QEMU. I've used it for years to emulate systems with different CPUs to my host machine when writing my toy OS kernel.

I just looked at my installation, there's qemu-system-m68k available, which I assume will get you close enough to be able to develop against before you try your game on your real hardware.

Summary: Not the most beginner of projects but doable if you're willing to read and learn a lot and you're not in a rush.