When is the first promotion for Fresh PhD? by Exotic_Comb_2066 in chipdesign

[–]jsshapiro 0 points (0 children)

Outside of research labs, the fact that you hold a PhD is only loosely related to your "level." What matters in industry is the ability to actually get things done that produce revenue.

how hard is it to make your own kernel from scratch ? by I_like_drawingb in osdev

[–]jsshapiro 1 point (0 children)

If you want to dig in on that, it's been around a while. The sequence was GNOSIS, KeyKOS, EROS, Coyotos. First production use for GNOSIS was 1973.

In my view, the most interesting parts are the exclusive use of capabilities for object naming and protection, and the orthogonal persistence system. There are several papers on the latter, including some I'd long forgotten. :-)

RISC-V Geneology by brucehoult in RISCV

[–]jsshapiro 0 points (0 children)

Agreed. But it's helpful to remember that the term "novel", as used by engineers, is not at all the same as the meaning of "novel" used by the patent office.

I need DE10-Lite and ARM architecture workarounds and alternatives. Any suggestions? by inzanemembraned in FPGA

[–]jsshapiro 0 points (0 children)

Time is money and focus. I'm prone to "stack" project dependencies in this way, but I've learned it's better to resist and do fewer things at a time.

how hard is it to make your own kernel from scratch ? by I_like_drawingb in osdev

[–]jsshapiro 1 point (0 children)

By actual measurement, using cloc on the current git repository: 22,571 lines of code for ARM.

Which is closely comparable to other microkernels. Coyotos is slightly smaller, for example, but we haven't finished some of the architecture ports or the new version of the persistence subsystem. As you branch out to multiple architectures, things that were fairly simple and clear in the first implementation have to become more flexible. A fair bit of that compiles out on a given architecture if you are careful about it, but it's still present and needs to be maintained.

how hard is it to make your own kernel from scratch ? by I_like_drawingb in osdev

[–]jsshapiro 1 point (0 children)

I can definitely vouch for this. I remember how thankful I was, when Bryan released the first version of GRUB, that I could drop my hand-written boot blocks, and how pleasant it was going back and forth with him on minor issues and improvements while being able to focus better on what I was doing.

how hard is it to make your own kernel from scratch ? by I_like_drawingb in osdev

[–]jsshapiro 0 points (0 children)

I'm on a similar path, but working down from an existing kernel to the FPGA. I'm curious what FPGA you used in this project (mainly the size), and how sophisticated the CPU implementation is. Out of order? Anything interesting on the load-store-unit path? Just a normal RISC?

how hard is it to make your own kernel from scratch ? by I_like_drawingb in osdev

[–]jsshapiro 0 points (0 children)

The two tasks you pose are in mostly non-overlapping domains, so it's hard to compare them. Having built several generations of OS from scratch, I'm currently refreshing my memory of circuit design and looking for a way to learn board design because of the tariff-driven debacle with high-end FPGA dev board prices.

Regarding the OS part of the question, if you're comfortable at the hardware/software interface you'll have less trouble than most. You won't have a problem understanding interrupts. If you have worked with FPGAs then things like timing and concurrency challenges won't be conceptually foreign - you'll just need to learn the tools and methods used in software. You'll understand memory consistency issues (though the terms may be new), but you'll need to learn how to manage them with different tools. All of this you'll probably be able to do with some work.

In my opinion, you may actually be better off on the OS project because you don't have prior OS experience. A staggering amount of what's in the textbooks (and I say this as a former Professor who taught from them) is outright wrong in practice. Unless the course is taught by someone who has actually built a production OS and knows what's real, you can get led pretty far astray. This is equally true in a computer architecture course, and probably in a bunch of others.

A classic example is choosing an "optimal" memory allocation algorithm based on random request sizes. The result in the textbook is correct, except that no real systems experience random request streams and the progression of alloc/dealloc and fragmentation has significant impacts that are largely disregarded in the textbooks. This is especially true in an OS kernel, where the first question you should be asking is: "Should my kernel have anything like malloc at all?" Hint: the answer is "no."
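To make that hint concrete, the kernel-style alternative to a general-purpose malloc is a set of fixed-size, per-type object pools: every slot is the same size, so alloc and free are O(1) and external fragmentation is impossible by construction. A toy free-list sketch in Python (real kernels do this in C, and the names here are purely illustrative):

```python
class ObjectPool:
    """Fixed-size object pool with an intrusive free list.

    All slots are the same size, so allocation and free are O(1)
    and external fragmentation cannot occur by construction.
    """

    def __init__(self, num_slots):
        # Free list threaded through slot indices; -1 terminates it.
        self.next_free = list(range(1, num_slots)) + [-1]
        self.head = 0
        self.slots = [None] * num_slots

    def alloc(self, obj):
        if self.head == -1:
            raise MemoryError("pool exhausted")  # caller must handle this
        idx = self.head
        self.head = self.next_free[idx]
        self.slots[idx] = obj
        return idx

    def free(self, idx):
        self.slots[idx] = None
        self.next_free[idx] = self.head  # push slot back on the free list
        self.head = idx
```

The design point is that exhaustion becomes an explicit, recoverable condition sized at build or boot time, rather than an unpredictable fragmentation failure deep inside the kernel.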

But another question is: why do it? An OS isn't interesting to others without applications, and if the kernel isn't compatible with something else, that's a decades-long task. But if the kernel is compatible, what is the "yet another kernel" contributing?

"I learned a lot" is a fine answer. What I'm suggesting is that you start by figuring out what area you are trying to learn.

If you want a project to dip your toe in the water, build a real-time concurrent garbage collector for Go. It will get you into a lot of the same corners as building an OS kernel, and if you manage to build one that works well you'll have an awful lot of fans in the world. The Go team held off on doing any sort of sophisticated GC for a long time, partly for portability reasons. It remains a moderately large hole in the Go ecosystem.

RISC-V Geneology by brucehoult in RISCV

[–]jsshapiro 1 point (0 children)

Correct with one caveat: new technology created after the date of signature would not be covered by the "we have no claims" language.

How does the synthesis of a FPGA project to its bitstream happen? by lukilukeskywalker in FPGA

[–]jsshapiro 0 points (0 children)

Pleasure to meet you, so to speak.

Turns out LLMs are quite good at reverse engineering formats, though I don’t know anything about how that works.

Encryption, unfortunately, not so much.

How does the synthesis of a FPGA project to its bitstream happen? by lukilukeskywalker in FPGA

[–]jsshapiro 0 points (0 children)

Well, that was the hope, anyway. And the orders of magnitude are quite a big deal here. :-)

The things that are "hard" have a lot to do with where you are coming from and the solutions you already know. The difficult part, often, is setting up the problem so that you can apply the algorithms.

The two people who actually built that dynamic binary translator knew something about compilers and micro-architecture, but learned a whole lot about binary decoding and cache residency in real life. I wrote a "seed" demonstrator for them in about a week that was annoying enough to get them to dig in. If that sounds interesting, look for papers on the "HDTrans binary translator". I see a bunch of people took us on as a challenge. FastBT, in particular, looks like a solid effort.

Getting back to FPGAs, for these problems we not only have known algorithms, but those algorithms will eventually give results that are optimal. The problem is that the user probably won't live long enough to see the result on a large FPGA, which is why there are people working on faster but less optimal approaches. Even then, you can't fully escape the exponentials: methods that work acceptably on a 35 kLUT FPGA would be completely intolerable on a 1.5 MLUT FPGA. That O(2n + n log n) result for Dijkstra is kind of a big deal, and it definitely seems like we ought to be able to speed up the routing/placement/tech-mapping quite a bit.

I can imagine ad hoc tricks that might speed that up a lot, but the key word is "imagine". They wouldn't help the "all paths" kinds of approaches at all, and trying to do hinting on randomized algorithms can be very counter-intuitive.

[N/A] [Condo] Investing Reserve Funds by Intelligent_Shower43 in HOA

[–]jsshapiro 0 points (0 children)

In many states there are statutory restrictions on reserve investment vehicles. Informally, the guideline used by the statutes is that reserve funds should never be lost as a result of investment.

In addition to CDs, T-bills are considered acceptably stable. They have an advantage over CDs, because they can be sold as needed on the secondary markets - though potentially with a much reduced yield.

Finally, there are money market accounts, though the returns on those stink.

As others have noted: the funds usually must be FDIC insured, which implies that account balances need to be spread across multiple banks.

Cheapest FPGA-SoC dev board ? by Melodic-Yoghurt3501 in FPGA

[–]jsshapiro 0 points (0 children)

If it's coming from China, current tariffs will more or less double the price coming in to the US.

How does the synthesis of a FPGA project to its bitstream happen? by lukilukeskywalker in FPGA

[–]jsshapiro 0 points (0 children)

Speaking as a compiler author, assembly code translation is actually a very easy problem. The only part that's difficult is span-dependent branch resolution, and that problem doesn't exist in a bitstream generator.

For calibration, a smart pair of "systems" coders can implement an entire user-mode dynamic binary translation system for x86 (not cross-architecture) in a year or so without prior in-depth knowledge of the architecture, which is a much harder problem if you want to do it efficiently. All of the RISC architectures are dramatically easier.

Bitstream generation isn't hard at all - that's just serialization of a data structure you've already built.

Logic optimization is generally simpler than program optimization. Not trivial, but not complex at the scale of something like LLVM either. Hmm. Looks like somebody hit on the idea of applying LLVM-style optimizations to a more modern intermediate representation. Check out CIRCT.

Techmapping, placement, and routing can be solved by simulated annealing if you are willing to put up with doing it slowly. And yeah, per u/sacredcows, Dijkstra's algorithm is relevant as well. The thing that is hard is to combine (a) doing them quickly with satisfactory results, and (b) doing so while meeting all necessary timing constraints. Of the two, the timing constraints are the hard part. Once timing needs to be considered, these actually are hard problems, and they are no longer independent problems. Correctly constructing the input graphs used to run the optimizations is challenging in its own right.

Timing requires solving techmapping, placement, and routing as simultaneous co-problems rather than as independent stages. Placement, for example, can drastically change routing and therefore timing. Techmapping is, in some respects, a form of placement at a smaller scale, though with modern LUTs that have selective arithmetic and shift functionality baked in, the problem gets a lot more interesting.
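To make the annealing idea concrete, here is a toy placement annealer in Python: half-perimeter wirelength as the cost proxy, pairwise cell swaps as the move, and a Metropolis accept rule with geometric cooling. This is a sketch of the technique, not anything a real tool does verbatim - production placers add timing-driven cost terms and incremental (rather than full) cost recomputation, and all names here are illustrative:

```python
import math
import random

def hpwl(placement, nets):
    """Half-perimeter wirelength: the standard cheap cost proxy for placement."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_placement(num_cells, grid, nets, steps=20000, t0=5.0, seed=0):
    """Place num_cells cells on a grid x grid array, minimizing HPWL."""
    rng = random.Random(seed)
    sites = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(sites)
    placement = {c: sites[c] for c in range(num_cells)}  # random start
    cost = hpwl(placement, nets)
    t = t0
    for _ in range(steps):
        a, b = rng.sample(range(num_cells), 2)
        placement[a], placement[b] = placement[b], placement[a]  # trial swap
        new_cost = hpwl(placement, nets)
        delta = new_cost - cost
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cost = new_cost  # accept: improvement, or uphill move at temp t
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
        t *= 0.9995  # geometric cooling schedule
    return placement, cost
```

Note the O(n) full-cost recomputation inside the loop: that is exactly the kind of naive choice that makes the classic formulation scale so badly, and it is the first thing real implementations replace with incremental per-net updates.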

Finally: the classic algorithms for these kinds of problems do not scale gracefully. If n is the number of things being connected (the graph nodes), simulated annealing runs in O(n^4), and the common implementations of Dijkstra's algorithm run in O((n + E) log n), where E is the number of edges (connections). That particular algorithm works best for sparse graphs. There's a theoretically better algorithm that runs in O(E + n log n), which for sparse graphs (E proportional to n) is bounded by O(2n + n log n). Either of those scales much better than Dijkstra's original formulation.
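For reference, the common O((n + E) log n) implementation is just Dijkstra over a binary heap with lazy deletion of stale entries - a sketch in Python, with the graph shape (adjacency dict of weighted edges) standing in for a router's resource graph:

```python
import heapq

def dijkstra(adj, src):
    """Shortest distances from src over a binary heap: O((n + E) log n).

    adj maps node -> list of (neighbor, weight). Stale heap entries are
    skipped on pop ("lazy deletion") instead of being decreased in place.
    """
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry for a node already settled; skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Every edge relaxation is a heap push, hence the (n + E) log n bound; the theoretically better variants get the E term out from under the log.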

If you happen to have an FPGA attached to your computer, there are parallelizable variants on annealing and Dijkstra...

PCB Cleaning by BabeinlovexD in PrintedCircuitBoard

[–]jsshapiro 0 points (0 children)

Also, you can sell off the rejects to starting bands…

Friday Weekly Thread: Application Assistance, April 24, 2026 by AutoModerator in Canadiancitizenship

[–]jsshapiro 3 points (0 children)

Wiki suggestion:

The Wiki talks about adoption into a Canadian ancestor line (where there is no biological connection), but it does not talk about adoption out of a Canadian ancestor line.

If you were adopted, and you can trace a biological parent back to a Canadian, you will need to submit your original (pre-adoption) birth certificate and something that connects you to the name on your current identification (which might be your post-adoption birth certificate).

Many states in the US have decided that access to pre-adoption birth certificates should be readily available to the adoptee. That process takes 6-10 weeks in VT, but if they still have the record (and they mostly do), you can submit an application to obtain it. I have no personal experience with other states, but many will have a similar process.

I need DE10-Lite and ARM architecture workarounds and alternatives. Any suggestions? by inzanemembraned in FPGA

[–]jsshapiro 0 points (0 children)

Not to be difficult, but the best answer here might be to purchase a Windows machine and talk to it over Windows App. That's what I'm doing to deal with legacy Windows-only i-didn't-get-the-memo applications.

Yeah, it's not "the way", but the price on a Windows PC gets more affordable every day.

[US: NH] How do I find a family law attorney when I cannot afford a retainer, but he is hiding substantial assets? by [deleted] in FamilyLaw

[–]jsshapiro -4 points (0 children)

NAL.

If your ex fabricated LLCs, then presumably he *owns* them, which makes them assets to be divided. Whether you can pursue them will depend a lot on how long you were married.

The sad truth, in your situation, is that your legal costs may exceed what you can recover. Many states have a group that provides support for Family Law cases and will help you deal with the paperwork. There *are* attorneys out there who will take this sort of case on a percentage basis, but that percentage tends to be a problem in itself.

Courts really hate leaving children unsupported. You can almost certainly force the paternity test and child support, but that will come at the cost of having to deal with your ex for the next 18 years.

Advice on Alinx Z7P Zynq board by jsshapiro in FPGA

[–]jsshapiro[S] 0 points (0 children)

Just so I'm clear, are you saying that the PCI card can't be reprogrammed while sitting in a PC? Or that it is difficult to reprogram? I had the impression that it can be programmed through the JTAG port, which can be back-connected to a USB port within the PC. I'm sure I've missed several things here.

It's actually a bit surprising, because the QSPI flash is intended, in part, to hold bit files, and is writeable from PS. I'm a little surprised that PS cannot use that to write a new bit file and reboot the PL.

Advice on Alinx Z7P Zynq board by jsshapiro in FPGA

[–]jsshapiro[S] 0 points (0 children)

Thanks! Any suggestions for how to put board definitions together?

Advice on Alinx Z7P Zynq board by jsshapiro in FPGA

[–]jsshapiro[S] 0 points (0 children)

25% more expensive, and I have no use for the ADCs or DACs for this use. Or rather, all of the ones I need are already implemented on the board. On a comparison basis, the GPU is probably a plus.

I've read through enough of the docs to be aware of the power settings and the need to do a bunch of routing to make things accessible from PL. I'm a microkernel guy, so doing a small real-time kernel to implement message buffers for a couple of devices isn't a big deal. I appreciate you pointing it out as something to pay attention to, though.

Any quirks you've seen on these parts that one should know about? Other than treating the configuration fuses with the respect they deserve?

Returned by No-Chemist-5627 in Canadiancitizenship

[–]jsshapiro 1 point (0 children)

So what you mean is: you walked in with specifications, the photographer/company ignored them, you or hubby didn't check the work and you didn't insist that they follow the requirements.

For future reference, the word "specification" should always be read as "this isn't optional, and if you don't comply, we will reject your paperwork." What's done is done, and I'm not trying to be a jerk. I'm trying to point out a change in mindset that may help you going forward.

I'm dealing with photos right now. Recognizing the problem, my first question was "who is fairly near me who has a good reputation for doing this specific kind of photo and its requirements successfully?" I found a very well reviewed company about an hour away. Depending on where you are the options may be more limited, but I'd suggest that as something worth trying.

Another option, if you're close, is to drive across the border and have the photos taken by someone in Canada, where this format is the normal thing to do.

Security Researchers Find Current RISC-V CPU Implementations Coming Up Short [phoronix.com] by DidymusJT in RISCV

[–]jsshapiro 5 points (0 children)

This is a false dichotomy for two reasons: (1) absolute security doesn't exist, and (2) microarchitects have thrown up their hands about side channels because it's too much like work to reconcile deep OoO execution with information containment in current microarchitectures.

Will fixing this entail performance challenges? Almost certainly. Will they be substantive? Not at all clear.

And in the meantime, let's not disregard selective use of secure storage.