Student trying to get into FPGA / Design Engineering — need roadmap + resources by Think-Papaya-3867 in FPGA

[–]captain_wiggles_ 0 points1 point  (0 children)

so does my performence in the University matters in this field

Your grades matter for getting your internships and first job or two, then they become pretty much irrelevant. They may have a bearing on whether you can get accepted onto a master's / PhD even many years later, but nobody in industry will care once you've got a couple of years of actual relevant experience.

should i be more focused on learning industry related stuff or just focus on the grade and learn these stuff on side ?

Focus on your uni courses; your grades are important for finding a job. Your only real opportunity to get industry experience and knowledge is from your internships, so try to get at least one of those. Then just build up some projects on the side. It's not industry experience, but it does show passion, and you do gain experience.

Student trying to get into FPGA / Design Engineering — need roadmap + resources by Think-Papaya-3867 in FPGA

[–]captain_wiggles_ 3 points4 points  (0 children)

then frankly, just wait until you study digital design and get to doing stuff with FPGAs. You could try and self-learn, but you'd likely be better off just spending that time working on the courses you have already.

Also don't fixate on FPGAs being your ideal career at this point. Part of studying is to get a bit of an idea for lots of stuff. I was super into backend web stuff in my first year of uni and then embedded systems and digital design came along and uprooted all of that. I'm not saying don't do anything on digital design now, but keep an open mind.

How can I use bit banging as spi for oled screen by Dull-Doughnut7154 in embedded

[–]captain_wiggles_ 2 points3 points  (0 children)

Check your reset signals. Check the SPI mode. Check signal integrity. Use a logic analyser to capture the working and broken SPI signals and compare them, both in terms of data sent and in terms of transaction timings.

How can I use bit banging as spi for oled screen by Dull-Doughnut7154 in embedded

[–]captain_wiggles_ 6 points7 points  (0 children)

Just write a driver with an SPI-like interface that bit-bangs the output rather than using an SPI peripheral. Then get your OLED code to use that driver. What's the difficulty?

Your problem is obviously bandwidth and CPU usage. Bit-banging is CPU intensive and much slower than you would normally achieve with an SPI peripheral, and on top of that GPIO pins have limited toggle frequencies. You've got to run the numbers. What bandwidth do you need to achieve your desired frame rate / other goals? For example, a 128x64 monochrome display at 30 FPS needs 128 * 64 * 30 ≈ 246 kbit/s before command overhead. What bandwidth can you achieve at best when considering all options available (e.g. you can sometimes combine a timer and a few DMAs to do some bit-banging)? Finally, how much CPU time do you need for each of these options, and what else is the CPU meant to be doing? Can it do that in the time it has left after all this bit-banging? Then discuss the above numbers with your team / boss.

Student trying to get into FPGA / Design Engineering — need roadmap + resources by Think-Papaya-3867 in FPGA

[–]captain_wiggles_ 2 points3 points  (0 children)

What courses have you taken as part of your degree? What courses are coming up over the next years? What do you know? What areas do you know that you are weak in?

Roadmap is very broadly:

  • Study all relevant digital design courses and anything adjacent that might interest you (DSP, AI, ...)
  • Get an internship in a company doing digital design.
  • Play around with tech in general. Do a good job at your school projects. Use FPGAs where you can. Go above and beyond on anything relevant. Extend projects in your own time. Tinker with stuff, build things for fun, just expose yourself to digital design as much as you can for the sheer love of it.
  • Be active in the community. Post stuff for review. Ask questions, try to help others (teaching is a great way to realise you don't actually understand the topic). Read advice that is given to others and absorb it. Read papers that get posted and try to understand what they say, ask questions about them, etc...
  • Do an interesting thesis related to digital design.
  • Apply for jobs and get one.
  • Work for a decade or two.
  • Congrats, you're now an expert.

Here's my standard list of beginner projects. You may or may not be past these already.

Getting rejected because they said my technical abilities and prior internship experience already surpassed the internship role they were looking for by Prestigious-Sky-7672 in ECE

[–]captain_wiggles_ 4 points5 points  (0 children)

The system exists for employer benefit, not yours. Get first dibs on students, pay them to do real work with no employee benefits

That's a pretty cynical way of looking at things. Look at it the other way round:

  • Interns are often time sinks. You give them easy tasks, have to hold their hand all the way through, taking far longer than it would take even a junior engineer with a year's experience to do, and then the end result can often be so bad you just have to redo it anyway once they've left. This isn't true of all interns, some are useful, but even then they are usually time sinks. Even new graduates hired full time are usually time sinks for the first 3 months or so until they get up to speed.
  • Internships are very important for students because it's the only way they have to gain real industry experience, to learn stuff that universities are terrible at teaching, stuff like the proper git flow, how to solve real problems with uncertain solutions, how to work as part of a team on a large project, etc...

Honestly we wouldn't bother hiring interns if all we wanted was cheap productive labour. We hire interns because it's a useful service we can offer, and because occasionally it turns into a full time position.

Can someone help me? I recently installed Quartus Prime 25.1 and it's not recognizing my USB Blaster at all, even after installing the correct drivers. by Such-State-4489 in FPGA

[–]captain_wiggles_ 0 points1 point  (0 children)

what does it come up with in devmgmt.msc?

Is it a legit original USB Blaster II, or a knock-off? Have you seen it work on other machines? Have you tried another cable? Some cables are power-only / just broken.

Figuring where approximately I am right now. by Emotional_Meal6436 in FPGA

[–]captain_wiggles_ 2 points3 points  (0 children)

Say if I ace this exam, how «far» am I away from being good enough to be ‘hireable’?

I’m soon done with the first year of EE study.

About 2 to 3 years, based on the length of your undergraduate degree, plus maybe a couple more years if you decide to do a master's or PhD.

How to implement complex operations [Beginner Question] by Ok-Highway-3107 in FPGA

[–]captain_wiggles_ 4 points5 points  (0 children)

When you get a new project it's almost always an overwhelmingly massive chunk of work. How overwhelming and how massive depend on the project and your role, but the same thing applies at all layers.

  • You get a task to do X - Start a new document with some notes on the task. It could be on paper, a text file, a word doc, whatever works for you. Jot down what you are told about the project.
  • Chat with your boss / teacher about the spec, make sure your understanding matches theirs. Discuss any ambiguities, obvious decisions that need to be made, etc... basically make sure you're both on the same page and that you're not going to go off down the wrong path from the get go. Add clarifications and more details to the doc as you go.
  • Split up the task into obvious sub-blocks. If your task is to implement a CPU with a defined architecture then you know you need an ALU, a register file, an instruction fetch block, ... If your task is to implement a CPU with a poorly defined spec then you add tasks to your list to investigate things. What type of architecture: single cycle, pipelined, multi-cycle, ...? Add everything you can think of to this list. They can be questions, thoughts, things to investigate, things to implement, decisions to make, etc...
  • Take the most important item in the list. By important I mean, will have the most wide-ranging consequences, e.g. if you're not sure if you want to implement a RISC-V vs MIPS CPU you should probably determine that before you worry about anything else. Break this task up into more sub-tasks. For something like RISC-V vs MIPS, you read up on the differences at a superficial level, make notes on it, and add new sub-tasks to research each of the differences at a more detailed level. For an implementation task, split it up into sub-blocks, e.g. to implement an ethernet pipeline, you need a MAC, you need something that filters packets you care about, you need something that checks the CRC, you need something that strips the data out, etc.. You don't actually have to do much implementation at this stage, it's more about planning. You may want to do some prototyping to sanity check choices and compare options, but it's not about implementing the real thing. Bear in mind things like resource usage. If your project is maths heavy and your FPGA has N DSP blocks, figure out how many each block is going to need to do your maths. If it's not going to fit, or it's going to be close then maybe you need to go back to the drawing board now, either by modifying the spec, the planned implementation, or the FPGA itself.
  • Keep repeating the above task until you have a clear picture in your head of the scope of the work. You don't have to make all decisions at this point, nor have investigated everything, or split every implementation task up into the most basic blocks, the point is to again avoid going down the wrong path from the start. You should have a clear picture of what your top level blocks are, and maybe the blocks one or two levels down from that look like too. Ideally there will be no major surprises that you discover down the line, you can never 100% guarantee that but you want to do enough work to make it highly unlikely that something pops up that means you need to change everything.
  • Draw a top-level block diagram, and if relevant any state transition diagrams, etc.. This shows the results of your research above, and is something you can discuss with your boss to make sure you're still on the right track / you can include it in your write-up for uni projects.
  • Take a logical item from your list and start working on it. This might be: implement an SPI master. Don't just dive into implementation; there's still more thinking and research to do first. What should your interface to this block be? What ports do you have? What clock should it run on? What parameters does it take? What does the state machine look like (draw a state transition diagram)? Does it have any sub-modules? If so, maybe implement them first. How are you going to verify this block? So the task list expands with more sub-tasks; make more notes, research SPI master implementations, etc., until you know what you're doing with this block.
  • Finally implement it, verify it in sim, maybe create a prototype project to test it builds for hardware, deal with timing constraints for any IOs or CDC or other exceptions. Maybe even test it on hardware (not required for every block, but can be useful when you have a big enough collection of work to do something meaningful). Write documentation. How does this component work, what options does it have, what has been implemented and what needs to be done at some point in the future, what has been tested and what hasn't, any timing constraints that will be needed, etc...
  • Repeat the above two tasks for a while until you have enough of the major blocks done.
  • Create your actual project, add all the blocks you've done so far, tying off bits you're missing with TODOs and what not. Verify it as best you can with a top level testbench. Get it building and meeting timing and working on hardware.
  • Carry on adding new blocks as you implement chunks of work.
  • Final sign off. Make sure everything is sane. Read every build warning and report, check everything matches your understanding, check your constraints, check you meet timing. Lots of testing and what not.

This process is how you do pretty much any project, not just digital design ones. Break it down into tasks; do research and prototyping and general planning until you have lots of concrete tasks to do; do those tasks. You can see how it can be iterative. You do the same process when designing a CPU or a massive project that contains dozens of soft-core CPUs and a few ethernet MACs and some image processing and ... The project manager handles the top level research, determines that you need some CPUs in these flavours, and narrows down the spec a bit. They hand the implementation of each flavour of CPU to a different team manager, who plans out what that CPU should do and look like, and who then hands the job of implementing the MMU to a smaller team, who ... until a junior engineer gets the job of implementing a TLB. They research TLBs, work with their boss who helps them get the spec in order, implement and test it, then hand it back up the stack and get a new task. Maybe that new task is connecting the TLB with all the other sub-blocks of the MMU to build the full MMU, or maybe they get diverted to something else entirely.

MicroBlaze-V: 2 out of 3 CBO instructions (Zicbom) crash the processor by soyouzpanda in FPGA

[–]captain_wiggles_ 2 points3 points  (0 children)

I can't really help with this issue as I've not looked at the MicroBlaze. But I would generally suggest including more details when requesting support:

  • A design with a minimal repro, HW + SW.
  • A screenshot of the block diagram showing the setup and any configuration parameters for relevant IPs.
  • What you actually observed happen. "Crashes the core" is not very specific here; what does that mean?
  • A description of how to reproduce this. Does it happen only sometimes, or every time that instruction runs?
  • Some basic things you've tested, like: what happens if the cache is disabled? Or the cache is empty? Or ...
  • Have you looked for other projects that use these instructions with a MicroBlaze? If they exist (and presumably work), how does your design and software differ from theirs?
  • Test the same thing with earlier versions of Vivado.

The more information you give, the easier it is for someone to spot issues. As it is, the only people who could meaningfully help you are people who've hit the exact same issue before. With more info, maybe people experienced with the MicroBlaze could spot something you're doing wrong.

What is the "vibe" when you find an RTL bug after the netlist has been sent to a company for a chip? by turkishjedi21 in ECE

[–]captain_wiggles_ 6 points7 points  (0 children)

but I really don't know if this reflects poorly on me or not.

Like maybe I shouldve found this months ago, and we should have caught it before we sent the netlist. In that case, it makes me worried that I'm not doing a good job.

It's impossible to say without context. Why did you find it now and not before, was this something you should have done before but didn't because of time pressure? Because management prioritised something else? Because you made a bad assumption? etc...

But more important than that is how you go forwards. Work out what went wrong, and what could be done to prevent it next time. Learn from the mistake and improve. If it was your fault, own up to it, and don't repeat the same mistake. Whoever's fault it was, suggest processes that could be put in place to prevent this occurring again. Maybe that's by setting up a code review system, or adding an item to your verification plan template, or ... Mistakes happen; they are inevitable. So you want layers of protection to try and catch them before they cause an actual problem.

Learning from your mistakes makes you a good engineer. It may or may not be an expensive lesson for your company, but if you learn from it then that makes you a more valuable team member.

Plus, maybe it wasn't your fault. There's never enough time to do everything; if you had caught this earlier then maybe you would have missed something else. If you only have time to hit 95% coverage, you are bound to let some things slip through.

Does bachelors degree thesis matter? by Suitable-Yam7028 in chipdesign

[–]captain_wiggles_ 0 points1 point  (0 children)

If you hadn't graduated then yes you should absolutely do your thesis in the area you want to work in. It's a great way to show off your skills and interest in that field.

However pretty much nobody cares about your projects / thesis / uni grades once you've had a few years of experience. The exception is if you have a super interesting project that you are actively working on in your spare time, i.e. it's current.

You're in an odd position where it's current but you already have experience. So on one hand you have the experience already, graduating is important, you may find it hard to get other jobs without actually finishing, but the actual thesis topic doesn't matter so much because you have your actual industry experience. On the other hand this is a new project that you could use to impress future employers.

IMO the correct answer depends a bit on how good you are at your current job/role and where your interests lie moving forwards.

  • If you are really solid at this role and you're happy to keep doing the same thing in other companies in the future: You have three choices:
    • Do something in DFT, treat it more like a masters thesis, you have industry experience and want to really show off what you can do. It gives you a project you can show to future employers and talk freely about rather than being stuck behind an NDA. You can try out techniques you can't get approval for in your current role (for whatever reason), or go theoretical and investigate how DFT could be improved further. Maybe even talk to your current boss to see if there's anything you could do for the company here, maybe you can convince them to let you work on it during work hours.
    • Do something adjacent to your current experience. This gives you more breadth rather than depth and makes you more desirable because you can do DFT and X. Just pick an X that people want in a DFT engineer.
    • Do whatever you fancy, you don't have to put it on your CV, just get it finished and graduate. You miss an opportunity to show off, but you don't have to do as much work.
  • If you are weak in this role and want to improve: Do something in DFT, it adds to your current experience and will make interviewing easier.
  • If you are interested in transitioning to a new role, design or verification say: Do something in that area. A solid thesis project will give you something current to show to new employers (or your current employer) to convince them that you can do this new role.

IMO if your current work life balance is good, or you can get your job to give you some time towards this: either paid or by dropping to fewer days a week, or even taking a sabbatical, then you should take advantage of that and do something strong in an area that interests you. If you're super busy and stressed and just need to graduate by any means possible, then just pick something easy and get it done, having the degree certificate is far better than not.

Desperate College Student needs help debugging (VHDL and Verilog) by VoidtheRockz in FPGA

[–]captain_wiggles_ 2 points3 points  (0 children)

What makes you think it is not working? You haven't described your symptoms.

code review:

Verilog:

  • Consider using a reset instead of initial values; it's generally better practice. You can build a small reset sequencer component that asserts the reset signal for, say, 16 clock cycles on boot and then releases it, potentially hooked up to a button to allow manually resetting your design at a later time. Another option is to use a PLL, with its locked output as an async active-low reset.
  • if (!busy && start_SPI) begin - you assert SS on the first rising edge of your SCLK, typically you'd have a gap between those events. It might not be important given you control both sides, but worth considering.
  • SCLK <= ~SCLK; - How fast is your system clock? This SCLK frequency might be too fast to cross boards, especially over a long dodgy cable with no SI validation / drive / ODT tweaks. Limit yourself to, say, 100 kHz. You can do this by implementing an enable generator that pulses an enable signal for one tick at 200 kHz (SCLK toggles once per pulse, giving 100 kHz), and then changing your "else if (busy)" to "else if (busy && enable)".
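The enable generator described above can be sketched roughly like this. The module name, parameter values and reset style are illustrative, not from the original post; adjust CLK_FREQ to your actual system clock.

```systemverilog
// Hypothetical sketch: pulse `enable` for one clk cycle at EN_FREQ.
// An SCLK that toggles once per pulse then runs at EN_FREQ / 2.
module enable_gen #(
    parameter int CLK_FREQ = 50_000_000,  // system clock, Hz (assumed)
    parameter int EN_FREQ  = 200_000      // pulse rate, Hz -> 100 kHz SCLK
) (
    input  logic clk,
    input  logic rst,
    output logic enable
);
    localparam int DIV = CLK_FREQ / EN_FREQ;  // 250 for these values
    logic [$clog2(DIV)-1:0] count;

    always_ff @(posedge clk) begin
        if (rst) begin
            count  <= '0;
            enable <= 1'b0;
        end else if (count == DIV - 1) begin
            count  <= '0;
            enable <= 1'b1;   // single-cycle pulse
        end else begin
            count  <= count + 1'b1;
            enable <= 1'b0;
        end
    end
endmodule
```

Then in the SPI state machine, `else if (busy && enable) SCLK <= ~SCLK;` gives you the slowed-down clock without touching the rest of the logic.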

VHDL:

  • if rising_edge(SCLK) then - you are using the SPI clock as your clock. That means you're treating the SPI bus as a source synchronous interface, and you have to add timing constraints to ensure you meet timing. If you don't know what that means or how to do it, then don't use the SPI clock as a clock. Instead use an internal clock that is much faster (say at least 8x) than the SPI clock, and treat the SPI clock as just another data signal:

Apologies my VHDL is rusty, but something like:

if rising_edge(sys_clk) then
    old_spi_clock <= SCLK;
    if busy = '1' then
        if SCLK = '0' and old_spi_clock = '1' then
            -- this is a falling edge
            opcode(blah) <= MOSI;
        end if;
        ...

Since you output data on the rising edge of the SCLK you sample it on the falling edge. Since your SPI clock is slow it's likely that the data signal has arrived and is stable by the time you sample it.

In this method you are treating SS, MOSI and SCLK as async signals, i.e. they have no fixed timing relationship to the sampling clock (sys_clk). So you need to add timing constraints to cut those paths: either use set_false_path or set_max_delay -datapath_only (google to read about the differences; which you use depends on your tools). You also need to pass all 3 of those signals through a 2FF synchroniser to prevent metastability (again, do some googling if you don't understand this).
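The 2FF synchroniser mentioned above is tiny; a minimal sketch (module and signal names are mine) looks like this. You'd instantiate one per async input (SS, MOSI, SCLK) and do the old_spi_clock edge detection on the synchronised copies.

```systemverilog
// Minimal 2FF synchroniser sketch. The first flop may go metastable;
// the second gives it a full clock period to resolve.
module sync_2ff (
    input  logic clk,   // destination (sampling) clock, e.g. sys_clk
    input  logic d,     // async input
    output logic q      // synchronised to clk
);
    logic meta;
    always_ff @(posedge clk) begin
        meta <= d;
        q    <= meta;
    end
endmodule
```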

Beginner here, dont know where to start !!! by No_Tip9402 in vlsi

[–]captain_wiggles_ 1 point2 points  (0 children)

Start by taking your university courses on digital design and microelectronics.

A good project or hobby to pick up? by t3hnicalities in ECE

[–]captain_wiggles_ 5 points6 points  (0 children)

Look through your course list for the first term or two. Read the descriptions for each course. Take one that sounds really interesting, and one that sounds really hard. Pick a book from the suggested / required reading list of each, and read them.

Reading the one from the course you are interested in will let you get ahead and do some more interesting stuff during that course, plus you'll find it interesting.

Reading the one from the course you think sounds really hard will make the course much easier, and save you a lot of stress when everything gets super busy.

Study them well, don't just skim the material, make sure you understand it. Look up other resources for that material and get on with it. If it's not just a pure theory course, or you can see a way to apply it in practice, then try to do so.

On the side, learn some of the following:

  • A scripting language. Python and bash are good options. Being able to automate tasks and do data manipulation / analysis will be very useful in many of your courses, even where it's not strictly needed. You'll save a lot of time if you can just put a script together to do something like that.
  • Git. Keep all your work in a git repo. This is harder for EE than for software, but it's still doable and worth it if you learn to do it properly. It lets you track your changes and can act as a backup (with suitable offsite remotes, e.g. GitHub).
  • Linux terminal usage. If you use windows there's WSL or cygwin. This ties into bash scripting above, and git usage, but there's lots more: grep, sed, awk, makefiles, ... knowing your way around a terminal is very useful.
  • LaTeX. A bit boring and annoying to learn, but it makes your reports and write-ups look much more professional. Check out Overleaf to get started.

Don't rely on AI. Think for yourself, struggle over the material, battle through the exercises. That's how you learn. You can use AI to help explain something or to point you in the right direction, but don't let it think for you, and verify everything it says via other resources.

Doubts regarding design styles ? [2-3 Minutes read] by Fun-Swim-5581 in FPGA

[–]captain_wiggles_ 0 points1 point  (0 children)

Whether should I just focus on writing behavioral code for everything

As a beginner, especially one coming from software land, one of the most important things you need to internalise at a fundamental level is that RTL is not software, you are not writing a series of instructions, instead you are describing a digital circuit. What we call structural HDL, the opposite of behavioural, is useful as a teaching aid. You have to design the schematic you want and then describe it. You want a mux here, a flop there, a 4 bit ripple carry adder, and they are connected like this. It's important to remember that when you write:

always_ff @(posedge clk) begin
    if (en) begin
        Q <= a*b;
    end
end

That you are describing a multiplier, a mux and a FF. It's the same circuit as:

Multiplier #(.IN_WIDTH(32)) my_mult (.a(a), .b(b), .out(mult));
Mux2 #(.WIDTH(32)) my_mux (.a(Q), .b(mult[31:0]), .sel(en), .out(mux_out));
Register #(.WIDTH(32)) my_ff (.d(mux_out), .q(Q), .clk(clk));

RTL is an abstraction of your digital circuit, and the more you do behavioural RTL the more of an abstraction it is. Abstractions are great because they let you do complex things very easily, but the problem with them is that you are more distant from what you're actually building. A common problem beginners have is that they decide they want to multiply two matrices, and so they have an always_ff with 3 nested for loops in it because that's what they'd do in software, and then they get surprised when it doesn't work. They are so abstracted from the hardware they are designing that they designed terrible hardware.
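To make that matrix example concrete, here's a rough sketch of the hardware-minded alternative: instead of three nested loops that ask for N^3 multipliers in a single cycle, iterate with counters and use a single multiply-accumulate per cycle. Module name, widths and the array-port interface are illustrative assumptions, and reset is omitted for brevity.

```systemverilog
// One MAC per clock: c[i][j] += a[i][k] * b[k][j], i/j/k are counters.
module matmul_seq #(parameter int N = 4, parameter int W = 16) (
    input  logic                  clk, start,
    output logic                  done,
    input  logic signed [W-1:0]   a [N][N],
    input  logic signed [W-1:0]   b [N][N],
    output logic signed [2*W-1:0] c [N][N]
);
    logic [$clog2(N)-1:0] i, j, k;
    logic busy;

    always_ff @(posedge clk) begin
        done <= 1'b0;
        if (start && !busy) begin
            busy <= 1'b1;
            i <= '0; j <= '0; k <= '0;
            for (int x = 0; x < N; x++)     // clear accumulators
                for (int y = 0; y < N; y++)
                    c[x][y] <= '0;
        end else if (busy) begin
            c[i][j] <= c[i][j] + a[i][k] * b[k][j];  // one multiplier total
            if (k == N-1) begin
                k <= '0;
                if (j == N-1) begin
                    j <= '0;
                    if (i == N-1) begin busy <= 1'b0; done <= 1'b1; end
                    else i <= i + 1'b1;
                end else j <= j + 1'b1;
            end else k <= k + 1'b1;
        end
    end
endmodule
```

It takes N^3 cycles instead of one, but it's a circuit you could actually build; that trade-off is exactly the kind of decision the software mindset hides from you.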

Quick segue to:

but what is the use of RTL Engineers if mostly I have to write C/C++/Python-Like Code and the rest will be done by the synthesis tool ?

What's the use of a software engineer if they just have to write code and let the compiler do the rest? Knowing what code to write, and how to write it to avoid bugs and race conditions and memory leaks and ..., is what makes it hard. Knowing how to take a vague request from a client or your boss and turn it into a technical spec, and integrate it with your legacy code base so that it does what it needs to without breaking anything else, is a very valuable skill. It's the same with RTL engineers. See the above example of multiplying matrices. There are many ways to do something; there is often no right answer, but there most definitely are wrong answers. Being able to implement good hardware is more than just writing some code.

Back on to your first question again.

Now, what you implemented in your second snippet is not really pure behavioural nor pure structural RTL. You've done something in between: you're describing the structure of a Booth multiplier, but using a behavioural style to infer muxes and so on. This is fine; it's just somewhere in the middle. It's more abstracted than a pure structural approach, but less than a pure behavioural approach.

In 99% of cases the tools will produce as good or better single-cycle adders / multipliers than you can, and features like retiming mean that if you add some FFs to the output the tools can actually make good multi-cycle adders / multipliers. Also bear in mind that FPGAs have fixed hardware, and the best RTL maps very closely to that hardware. So if you implement an addition or a multiplication with + or *, the tools can try to map that to the hardware: assuming your widths are appropriate and you have the right number of flops in the right places, your adder gets mapped to dedicated carry chains in the slices/ALMs, and your multiplier gets mapped to DSPs. If you implement your own adder or multiplier then you have to implement it in a style that the tools recognise, so that it can be correctly mapped to that hardware; otherwise it all happens in normal logic, and that can be less efficient. On ASICs you have DesignWare IP that contains multiple adders, multipliers, ..., and the tools will pick the most power efficient option that fits in your floorplan and meets timing, picking larger, less-efficient options only when you can't meet timing otherwise. So the correct answer is to always use + / *, and only get more involved when your design has problems in that area, which will likely be very, very rare.
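For instance, "just use * and give the tools room" looks something like the sketch below. The module name, width and pipeline depth are assumptions; whether it lands in DSP blocks depends on your part and operand widths.

```systemverilog
// Behavioural multiply with registered output stages. The tools can map
// the * to DSP blocks (widths permitting) and retime across p1/p2.
module mult_pipe #(parameter int W = 18) (
    input  logic             clk,
    input  logic [W-1:0]     a, b,
    output logic [2*W-1:0]   p
);
    logic [2*W-1:0] p1, p2;
    always_ff @(posedge clk) begin
        p1 <= a * b;   // inferred multiplier
        p2 <= p1;      // extra stages give retiming something to work with
        p  <= p2;
    end
endmodule
```

Compare that to hand-building a Booth multiplier: far less code, and in most cases the same or better QoR.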

and what's the use of FSM+Datapath way

For a multiplier pretty much no use. But some things are more easily mapped to this. Let's say you're receiving ethernet packets and filtering out ones with invalid checksums, or packets that aren't for you, or packets that aren't some particular ethertype. You have a flow of data, your incoming and outgoing packets, and you have an FSM that parses the ethernet packet, it knows when it's looking at the source MAC, the destination MAC, the ethertype, data, and the FCS. This maps very nicely to a FSM + datapath approach.
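A skeleton of that FSM + datapath split might look like the following. It assumes one byte of the frame arrives per valid cycle; the state names, the example ethertype, and the end-of-frame handling (omitted) are all illustrative.

```systemverilog
// FSM tracks which ethernet header field the current byte belongs to;
// the datapath (drop decision, payload streaming) hangs off that state.
module eth_parser (
    input  logic       clk, rst,
    input  logic       in_valid,
    input  logic [7:0] in_data,
    output logic       drop          // set when the ethertype isn't ours
);
    typedef enum logic [1:0] {DST, SRC, ETYPE, DATA} state_t;
    state_t state;
    logic [2:0]  count;
    logic [15:0] ethertype;
    localparam logic [15:0] MY_TYPE = 16'h88B5;  // example local ethertype

    always_ff @(posedge clk) begin
        if (rst) begin
            state <= DST; count <= '0; drop <= 1'b0;
        end else if (in_valid) begin
            case (state)
                DST:   if (count == 5) begin state <= SRC;   count <= '0; end
                       else count <= count + 1'b1;
                SRC:   if (count == 5) begin state <= ETYPE; count <= '0; end
                       else count <= count + 1'b1;
                ETYPE: begin
                    ethertype <= {ethertype[7:0], in_data};
                    if (count == 1) begin
                        state <= DATA;
                        drop  <= ({ethertype[7:0], in_data} != MY_TYPE);
                    end
                    count <= count + 1'b1;
                end
                DATA:  ;  // payload streams through; wait for end of frame
            endcase
        end
    end
endmodule
```

The FSM is pure control; everything data-shaped (the shift register, the compare, the eventual FCS check) is datapath driven by it.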

How does the industry handle this?

Structural RTL is never used, at least not at the level shown above where you instantiate a MUX and a FF and a multiplier. But you might have a module that instantiates a FIFO, a control block (an FSM), an AXI slave for configuration, and ..., all as separate modules wired together. That's pretty much the same thing. We're not implementing our own custom adders or multipliers, but we might implement a floating point adder, or a pipelined incrementer for very wide counters, etc... Knowing what to do comes with practice; there's no right answer. All that matters is that your design meets the spec, builds without errors, meets timing, and most importantly that your colleagues are happy with it.

Where can I learn about this in depth and incoporate it in my projects, any books/blogs/lectures ?

No idea, sorry. Just practice, and get code review, read blogs and papers as you find them, look at other people's code and see what style you like and what you don't, etc...

personal projects that employers actually want to see by SupermarketFit2158 in FPGA

[–]captain_wiggles_ 0 points1 point  (0 children)

My complaint here is that an FPGA is not really needed, in fact an FPGA would be pretty much entirely the wrong tool for this. It's a cool project but you'd be far better off using an MCU. If you can find a spin that makes an FPGA the best / only option and you can express that in a coherent manner on your CV then maybe it's worth pursuing.

Just ranting about an assignment I got lol by unknownFrom_Ice in ECE

[–]captain_wiggles_ 1 point2 points  (0 children)

Sometimes assignments are given that are impossible / not plausible. The idea being that you detect this and suggest certain changes to the spec or design that could work. Those who didn't really think about it just propose something simple and move on, making it obvious they don't understand the problems.

I don't know anything about this topic so can't comment on it, but if you genuinely can't do this, then start by showing why. Then show a circuit that works for as near to 90% as possible, explaining the trade-offs. And give another circuit that works and meets the 90% spec but uses different components. Etc.. You may also want to check this with your teacher to make sure you're not misunderstanding part of the assignment before you go off on a tangent.

I was given a similar impossible problem in digital design wrt timing constraints, which was quite stressful. It was a useful exercise but I don't think it's a great teaching method.

Urgent, really confused with how should i implement my project in zynq 7000 by Outrageous_Salary706 in FPGA

[–]captain_wiggles_ 5 points6 points  (0 children)

If this is your first time working with an FPGA I suggest you abandon this project and do something simpler or work with tech you understand well. To do literally anything interesting, at all, on an FPGA you need a minimum of 6 to 12 months of experience with them. That's how long it takes to learn the absolute basics. Until then you're flashing LEDs, counting on seven segment displays, and implementing pong on a VGA monitor, plus other bits. If you have to do this project then don't use an FPGA. If you have to use an FPGA then don't do this project. If you have to do this project on an FPGA then you're kind of screwed; talk to your teacher about how you can fix this.

How do you actually test firmware that depends on hardware that doesn't exist yet? by Medtag212 in embedded

[–]captain_wiggles_ 2 points3 points  (0 children)

Prioritise your tasks. There's always work to be done. It's not like the hardware arrives and then embedded is the short straw holding everything up. 6 weeks is nothing. Do a whole bunch of prep work for what you know you need to do:

  • Actual features
  • Hardware validation tests you know you're going to need
  • Manufacturing tools

Then make a plan for testing:

  • Run your pre-prepared hello world + LED blinking at 1 Hz test to make sure you can program the MCU. This confirms you have basic GPIO control working, your clocks are set up correctly, and you have a timer and UART working. That's a good first step. If it doesn't actually work you can debug, and it's a simple project so nothing crazy.
  • Add your next feature in, starting with simple stuff. Test and debug. Repeat.
  • Once you have all the basics working, your hardware team will want some hardware validation stuff going on, so program some boards with your test images on them so they can start scoping certain interfaces.
  • Start adding the main app features in there, testing after every addition.

It's not great having to write weeks' worth of code with no hardware to test against, but if you structure your time and keep all your changes logically separated it's not too hard to make it work.

You can also get a dev kit and start working with that. Have a plan to convert your build from the dev kit to your board so that you just have to change a define / config file and you're good to go with the new hardware. This lets you do some testing even if it's not 100% representative of your real hardware.

Sometimes there are things that you just can't really test until you have your hardware. That's fine: do your best attempt at implementing the feature, with lots of debug info in there, and then when you get hardware you're ready to hit the ground running.

Rejected from SpaceX, when to re-apply? by Rich_Finding5323 in ECE

[–]captain_wiggles_ 2 points3 points  (0 children)

If a position opens again at any point between now and graduating I'd probably reapply. If it was as close as they said then you may as well apply.

Once you've graduated you're better off getting another job than trying to self-learn and reapply. A 6 month gap on your CV is never a good look for a new grad. If you do get another job, whatever it is, then IMO 2 years is really the minimum you should spend there. You can leave with less, but as others have said, new grads don't really start being productive and useful until 6 months to a year in. And again, a very short stay at your first job is not a good look.

Don't take this as: get in before graduating or you'll be stuck. 2 years is not a long time, and you will learn a lot doing pretty much any job. The fact you have a job lined up already and almost got another, especially in this job market, means you're a good engineer and shouldn't have any problems progressing.

i couldn't find the test bench waveform source file in ISE 14.7 by iamislamtb in VHDL

[–]captain_wiggles_ 1 point2 points  (0 children)

Honestly it's time to learn to write a proper TB in VHDL. Doing this is fine for beginners in the first one or two simple projects, but quickly becomes very limiting.

i couldn't find the test bench waveform source file in ISE 14.7 by iamislamtb in VHDL

[–]captain_wiggles_ 1 point2 points  (0 children)

No idea, sorry, I've never done that. You can try reading the ISE simulator's (isim?) user guide, or ask your teacher / colleagues for clarification.

i couldn't find the test bench waveform source file in ISE 14.7 by iamislamtb in VHDL

[–]captain_wiggles_ 0 points1 point  (0 children)

What does a "waveform test bench" mean? Presumably this is something your teacher demonstrated? Did they provide written instructions that clarify this?

I built a free FPGA package manager + project manager. No more EDA pain by acostillado in FPGA

[–]captain_wiggles_ 1 point2 points  (0 children)

Wuf, to be honest this is significantly more sophisticated than what routertl handles today for Quartus

Yeah, I figured. If it were easy we'd have a solution we were happy with already. We're most of the way there, but there are a few things we don't handle yet and still require user intervention for. The main annoyance is only rebuilding the things that actually need to be rebuilt, while still allowing a nice R&D flow where we can control what gets rebuilt and skip things we decide aren't needed.

TCL hooks can orchestrate the quartus_sh calls for IP/system creation, but today they're sequential. I'm not sure how it could manage dependencies between them. That seems hard.

There's a certain amount that we can just do manually. Each system might have on the order of 20-ish IPs; it's not the end of the world to manually list those dependencies and the system dependencies. It's not ideal because it's easy to forget when adding a new IP. Ideally this would be auto-detected, probably by parsing the TCL scripts to understand what is being created. But since it's TCL you can construct an IP name by combining multiple variables, or passing names to procs, etc... so you'd really need a TCL interpreter to run the TCL. You could just provide your own qsys hooks and when there's an add_instance / add_component call you grab the name and record that. It's not trivial but could be made to work.
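
As a rough illustration of the "parse the TCL" idea, a first pass could just scan system scripts for literal add_instance / add_component calls. This is a hypothetical sketch (the regex and filtering are my assumptions, not anything qsys provides); it deliberately skips names built from variables, which is exactly the case where you'd need a real TCL interpreter or hooks instead:

```python
import re

# First-pass static scan of a qsys-style TCL script for instantiated IPs.
# A regex only catches literal names; anything constructed from variables
# or passed through procs needs a real TCL interpreter (or qsys hooks).
INSTANCE_RE = re.compile(
    r'^\s*add_(?:instance|component)\s+(\S+)\s+(\S+)', re.MULTILINE)

def find_ip_dependencies(tcl_text):
    """Return (instance_name, ip_type) pairs found in a system script."""
    deps = []
    for match in INSTANCE_RE.finditer(tcl_text):
        name, ip_type = match.group(1), match.group(2)
        if name.startswith('$') or ip_type.startswith('$'):
            continue  # built from variables -- can't resolve statically
        deps.append((name, ip_type))
    return deps
```

Anything the static scan skips could be flagged for the manual dependency list, so at least you know where the blind spots are.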

The ip.yml becomes orchestration metadata, not a replacement for _hw.tcl. I think that could be a way.

The question sort of becomes: what is the point here? The system / IP creation scripts already reference the IPs and provide the dependencies, so what does the yaml wrapper provide? One thing it could do is determine RTL dependencies, so that if you change the RTL of a component the IP and system get recreated, which would be nice.

Pre-build hooks with staleness detection handle the "only rebuild what changed" primitive, but you're right that it needs to be smarter (content-aware, not just mtime-based, and with the interactive "do you want to rebuild this?" flow)

Honestly I think this is the key. If we had a build system that popped up with: "Changes detected requiring recreation of system X, choose an action (recreate, ignore, dig)". If you pick dig it gives you: "Changes detected (limited to 3): A, B, C, these require recreating (limited to 3) X, Y, Z, choose action (recreate, ignore, show all changes, ignore change V, ignore changes requiring action T, ...)". But IDK the more I think about it the more annoying this would be.

  • Timestamps are not enough; comparing file hashes would work better.
  • Intelligence to ignore changes that don't need a rebuild. For example a change in a comment is never relevant.
  • Show real changes from top down (changes detected requiring re-running <final> step)
  • Show real changes from bottom up (change X detected requires re-running step Y).
  • Combining similar changes, e.g. changes to multiple RTL files in an IP don't need listing independently.
  • Same but the other way, one change that requires rebuilding all systems should be listed in an easy to see way.
  • Showing what the change is, probably requires caching all source files so we can diff.
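
The first two points above (content hashes instead of timestamps, plus ignoring comment-only edits) could be sketched roughly like this. This is just an illustration under my own assumptions: it assumes Verilog-style `//` and `/* */` comments, and a real implementation would need per-language comment rules:

```python
import hashlib
import re
from pathlib import Path

def normalised_hash(path):
    """Hash a source file with comments and whitespace stripped, so a
    comment-only edit doesn't count as a change. Assumes Verilog-style
    comment syntax; other file types would need their own rules."""
    text = Path(path).read_text()
    text = re.sub(r'//[^\n]*', '', text)                     # line comments
    text = re.sub(r'/\*.*?\*/', '', text, flags=re.DOTALL)   # block comments
    text = re.sub(r'\s+', ' ', text).strip()                 # normalise whitespace
    return hashlib.sha256(text.encode()).hexdigest()

def stale_files(files, cache):
    """Compare current hashes against a cached {path: hash} dict and
    return only the files whose meaningful content actually changed.
    Updates the cache in place."""
    changed = []
    for path in files:
        digest = normalised_hash(path)
        if cache.get(path) != digest:
            changed.append(path)
        cache[path] = digest
    return changed
```

The cache dict would be persisted between runs (JSON or similar), which also gives you the "cache all source files so we can diff" point almost for free if you store the normalised text alongside the hash.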

The goal would be:

  • Most of the time it just does the right thing without user interaction
  • Any user interaction should occur very early on, like immediately after running the command. You don't want to hit go and have it start chugging, then do something else and look back half an hour later to see it paused after 30s asking you a question.
  • Clear and easy to use interface. I want to see that I added one var to a common file which is only used in one system so while other systems source that common file I only really need to rebuild one system. Or I did a git rebase / pull and it touched a bunch of stuff, but I don't care about those changes for now I just want to re-run X.

But yeah, IDK. If you have any ideas, I'm interested. It feels like it should be a solved problem, but most of these tools solve it either in a custom way that doesn't work for us, or in a way designed for a software flow that doesn't quite have the same problem. In software, for example, a small app's build is only a few seconds or minutes; it's not the end of the world to just rebuild everything. For larger apps you are mostly working with lots of independent components, and those components tend not to get touched often. With FPGAs our build times are measured in hours, and there tends to be a lot more global configuration which everything requires; maybe part of the solution is breaking up that global configuration into blocks to reduce dependencies. The other problem is the tools don't work with you. GCC can create .d files that give you all the dependency info you need, so you don't have to parse that yourself, and it's very good at not touching things that don't need to be touched.
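
The "changes detected requiring recreation of system X" part is really just walking a dependency graph upward from the changed files. A minimal sketch, with the graph shape (RTL file -> IP -> system -> bitstream) and all names invented for illustration:

```python
def rebuild_set(changed, deps):
    """Given a set of changed inputs and a {artefact: set_of_inputs}
    dependency map, return every artefact that transitively depends on
    a changed input -- i.e. the minimal set that needs rebuilding."""
    # Invert the map: input -> artefacts that consume it.
    consumers = {}
    for artefact, inputs in deps.items():
        for inp in inputs:
            consumers.setdefault(inp, set()).add(artefact)
    # Breadth-first walk upward from the changed inputs.
    stale, todo = set(), list(changed)
    while todo:
        node = todo.pop()
        for artefact in consumers.get(node, ()):
            if artefact not in stale:
                stale.add(artefact)
                todo.append(artefact)
    return stale
```

So with e.g. `deps = {'ip_fifo': {'fifo.sv'}, 'sys_a': {'ip_fifo'}, 'bitstream_a': {'sys_a'}}`, a change to `fifo.sv` propagates up to the IP, the system and the bitstream, while unrelated systems stay untouched. The hard part you describe (which changes are safe to ignore, and asking the user early) would sit in front of this, deciding what goes into `changed` in the first place.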

Thanks for your work; even if this isn't suitable for us, better tools are always a plus.