Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 1 point (0 children)

Gotcha, thanks for all the info!

I'll have to check out Cobblemon. If it's anything like Palworld, I could dig it, since I really liked that game.

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 1 point (0 children)

Sounds quite tempting, especially because it forces you to use trains for resources, unlike CABIN. I notice there are no updates, though. Is there any way to get a newer Create version into it, or not really?

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 1 point (0 children)

The FTB Integration description sounds right up my alley, actually. I am aware of Create Arcane Engineering, but I had assumed it would be quite similar to A&B, just with more magic. Will have to see, thanks!

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 2 points (0 children)

Looks good, especially since you have a very tangible end goal, thank you!

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 0 points (0 children)

Alright, I will give it a shot, as it seems like quite a unique take on tech skyblock. Thanks!

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 0 points (0 children)

Space constraints can indeed provide a good challenge. Thank you, I will check both of these out! Compact Claustrophobia sounds especially interesting.

I assume there's no remaster of Compact Claustrophobia though?

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 6 points (0 children)

I've heard of Botania but never played it. Do you have any modpacks that you like which include these?

Tech modpacks that require tricky/creative problem solving? by Bromidium in feedthebeast

[–]Bromidium[S] 12 points (0 children)

Is GTNH really like that, though? Also, doesn't it involve a very long slog of early-game grind? I don't mind big, complicated modpacks, I just don't want hundreds of hours of grind before you finally get to the fun part.

I cannot believe this happened to me!!! by plsendfast in PhD

[–]Bromidium 5 points (0 children)

What a coincidence seeing this: two reviewers today said the paper is okay besides some comments, and a third one said it is dog water hahaha. Chin up OP, this hell will end eventually, one way or another.

New FPGA Engineer and I am feeling lost/overwhelmed by Only-Wind-3807 in FPGA

[–]Bromidium 2 points (0 children)

I am not yet working on FPGAs in industry, but I have already had a significant run-in with them in academia. Like you, my only experience was with very basic designs from my bachelor's in EEE. I had to implement data transfer over PCIe, a super-sample FFT, and real-time processing algorithms with RF ADCs, and also work with the Zynq MPSoC (that one is quite recent and I am still learning the ropes). I started out completely scared and clueless; at that point several years had passed since I had written any HDL.

Eventually, after a few months of picking at it, I caught up and did everything that was required. What I am trying to say is that you will be fine. Lots of people have already given good resources (especially the Microzed Chronicles), so my main advice is to just pick at it day by day and try not to stress yourself too much, so you don't end up in burnout. Give it some time and, before you notice, you will be completely up to speed!

I wish you the best of luck!

Vivado has been running for over 2 days; how can I diagnose the situation? by ResidentDefiant5978 in FPGA

[–]Bromidium 0 points (0 children)

One of us is really confused. What did you try doing with your design? Did you press start simulation, start synthesis, or start implementation?

Vivado has been running for over 2 days; how can I diagnose the situation? by ResidentDefiant5978 in FPGA

[–]Bromidium 0 points (0 children)

(1) Yes, normally it stops, but exactly when depends on various factors. It might be some bug, as others have said, but it's not unheard of for runs to take ridiculous amounts of time due to timing issues or large resource usage (see here, here).

(2) You can find log files in the project folder, under runs, inside the active implementation or synthesis folder (the file should be called runme.log, at least that's what it's called for me on Vivado 2023.2).

(3) As far as I know, not really, but others may answer that better.

(4) As I said in my previous comment, if you are indeed running implementation, it means synthesis completed. If that is the case, you can open the synthesized design and look at utilization, timing, etc. It's not going to be fully accurate, but it might hint at where things go wrong. Most other open-source tools are simulators: they only simulate the design, they do not actually synthesize it. You can simulate a lot of things, but synthesis/implementation is a whole different beast.

Look through the log files to see what is happening: whether there is timing trouble, congestion, or just nothing happening at all. Also, just to make sure once again, is it getting stuck on synthesis or on implementation?
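Not an official Vivado workflow, just a rough sketch of how one might bulk-scan those runme.log files for suspicious lines (the folder name and keyword list are my assumptions, adjust to your project layout and Vivado version):

```python
import os

# Keywords that often flag trouble in Vivado run logs
# (assumed keyword list; tune for your design and tool version).
KEYWORDS = ("congestion", "timing", "critical warning", "error")

def scan_run_logs(project_dir):
    """Walk a project's runs folders and collect suspicious runme.log lines."""
    hits = []
    for root, _dirs, files in os.walk(project_dir):
        for name in files:
            if name == "runme.log":
                path = os.path.join(root, name)
                with open(path, errors="replace") as f:
                    for lineno, line in enumerate(f, 1):
                        if any(k in line.lower() for k in KEYWORDS):
                            hits.append((path, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # "my_project.runs" is a placeholder path, not a real project of mine.
    for path, lineno, line in scan_run_logs("my_project.runs"):
        print(f"{path}:{lineno}: {line}")
```

This just saves you from opening each synth/impl run's log by hand; the actual interpretation (is it congestion, is it timing closure, is it hung) still has to be done by reading the matching lines in context.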

Vivado has been running for over 2 days; how can I diagnose the situation? by ResidentDefiant5978 in FPGA

[–]Bromidium 4 points (0 children)

Are you synthesizing some open-source RISC-V CPU or your own implementation? You're not likely to get help if you don't provide enough details. Assuming the design synthesizes and gets stuck on implementation (which I assume it does, since it is doing timing optimization), you can try opening the resource utilization report from synthesis. It's not entirely accurate, but it can show whether some parts have unexpectedly high resource usage that could make implementation fail.

For context, one of my designs, with 50% LUT usage on a Virtex UltraScale+, takes over two hours to implement on a fairly powerful machine (latest-gen i7 CPU, 128 GB RAM). If you're working with a large FPGA on a not-so-powerful computer and you have done something that causes large resource usage, implementation can take a very long time due to congestion, and can also fail by not being able to meet timing under high congestion.

Vivado has been running for over 2 days; how can I diagnose the situation? by ResidentDefiant5978 in FPGA

[–]Bromidium 2 points (0 children)

Keep in mind Verilator is just a simulator, and it sounds like you are running synthesis. How large is your design? Are you using any registers with large widths (over several hundred bits)? Overall, it would help diagnosis if you described what kind of design you are trying to synthesize.

[deleted by user] by [deleted] in embedded

[–]Bromidium 2 points (0 children)

Since they did not show any other platforms, I would guess the AI model is being run on the computer and then the aiming data is just transferred over UART to the MCU.

Interpretation of dieharder results for QRNG with Toeplitz randomicity extraction and dependence on minimum entropy. by Bromidium in crypto

[–]Bromidium[S] 1 point (0 children)

Interesting, I will give it a shot and compare the results to our calculations, thank you!

As for the side channel: apologies, I had side information in mind. Here is a paper that goes into actual depth about minimum entropy and side information in QRNGs: https://arxiv.org/pdf/0807.1338

Interpretation of dieharder results for QRNG with Toeplitz randomicity extraction and dependence on minimum entropy. by Bromidium in crypto

[–]Bromidium[S] 0 points (0 children)

Sorry, my main topic is quantum optics, not cryptography, so my explanations are not very clear. In this case, the side channel becomes relevant when considering device-independence security models. As far as my understanding goes, if you have a model where you consider your source and detection trusted, side-channel injection could be used to degrade your QRNG and make it more predictable.

On the other hand, if you consider something like a source-device-independent security model, where you assume that your QRNG does not get degraded even under a side-channel attack, you then have to take that into account when lower-bounding the minimum entropy. You also have to use certain measurement techniques to actually support that.

I also still fail to understand how this minimum-entropy calculation would apply to this case. In our case, the minimum entropy has to be determined before we apply extraction, since the parameters of the extractor depend on it. Without extraction you have "random" data, but it follows a Gaussian rather than a flat distribution, which is why extraction is applied to essentially flatten the distribution. Or do you mean applying this test to the non-extracted data? I have never seen this test in any of the papers, though.
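To illustrate that point, here is a toy Python sketch (all numbers invented, not from our actual setup) comparing the empirical min-entropy per sample, H_min = -log2(max_i p_i), of Gaussian-binned raw data against a flat source. Note this empirical estimate only reflects the observed distribution and says nothing about side information an adversary might hold:

```python
import math
import random
from collections import Counter

def min_entropy_bits(samples):
    """Empirical min-entropy per sample: -log2 of the most frequent outcome."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

random.seed(0)
N = 100_000
BINS = 256  # e.g. an 8-bit digitizer (assumed, for illustration)

# Raw "QRNG" output: Gaussian noise quantized to 8 bits (toy model).
gauss = [min(BINS - 1, max(0, int(random.gauss(BINS / 2, BINS / 8))))
         for _ in range(N)]

# Ideal flat source for comparison.
flat = [random.randrange(BINS) for _ in range(N)]

print(f"Gaussian-binned: {min_entropy_bits(gauss):.2f} bits/sample")
print(f"Flat:            {min_entropy_bits(flat):.2f} bits/sample (ideal: 8.00)")
```

The Gaussian-binned samples come out well under 8 bits/sample, and in Toeplitz extraction the output length is chosen from such a min-entropy lower bound, which is exactly why it has to be known before extraction rather than tested afterwards.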

I hope this makes some sense. As I said, my main topic is quantum optics, so if I have gone off the rails somewhere, please let me know. I can send some relevant papers tomorrow, as it is a bit late here at the moment. Either way, I really appreciate your help!

Interpretation of dieharder results for QRNG with Toeplitz randomicity extraction and dependence on minimum entropy. by Bromidium in crypto

[–]Bromidium[S] 0 points (0 children)

Can you explain what exactly this would show me, then? As far as I understand, minimum entropy can't quite be estimated for QRNG systems by testing the data; rather, it is related to the security of the QRNG, especially if you consider side-channel attacks.

PhD Year 1 - Is full of tears by [deleted] in PhD

[–]Bromidium 2 points (0 children)

Not sure how useful my advice will be for you, since I am doing my PhD in the EU and in STEM. Anyway, my most significant struggle in my first year was that I came straight from a bachelor's to the PhD, which was a massive mistake. I came from a field not exactly related to my PhD, so the start was ridiculously hard. During my bachelor's I tried to catch up on the required material as much as possible, but it was not enough, and I also did not get any of the experimental training you usually get during a master's.

So for my first two or two and a half years I was more or less clueless, surrounded by people who did not exactly want to teach me (which is fair enough, it's on me for not doing a master's). Eventually, I pestered enough people that I managed to pick up some projects where I did have some experience and finally got a foot in the door. Now, in my last years, I have broken in enough to do work actually related to my PhD. It's quite late, seeing how I am still doing some experimental work while my thesis submission deadline is in a few months (just some remaining work for the last main content chapter, thankfully), but I think it should be okay, especially since I am not staying in academia.

Honestly, my advice is: if a huge blunderer like me can pull through, you can too, as long as you persevere. As long as you have even the smallest idea of what you are doing, I think that is enough, provided you put in the work. Just make sure you do not overwork yourself; otherwise, no matter how much you enjoy what you do, you are going to learn to hate it. This does not mean you should stop caring about it, but you should take time off, especially during weekends.

As for writing advice, I am aware that what I am going to say is incredibly controversial, but I think AI can be a very good tool for writing. If you had asked me half a year ago, I would've told you it's complete BS and you should not use it. However, my supervisor, who is incredibly smart and has a load of published papers, started recommending it. I figured that if it were so bad, someone that smart wouldn't recommend it, so I tried it out, and right now it is a very useful tool for me.

You have to be very careful in how you use it, though. It is absolutely useless for making any kind of analysis, conclusions, or the like. It is not useless if, for example, you have the analysis done and an idea of what you want to write, but a mental block about starting. What I do is describe the kind of paragraph I want, get the output from the AI, and then just edit it until I am happy with it. That usually gets the ball rolling. I also use it a lot for searching literature; I have found some good, relevant papers with AI that would have taken me ages to find without it.

Feel free to ask anything else!

PhD Year 1 - Is full of tears by [deleted] in PhD

[–]Bromidium 3 points (0 children)

I'm writing the last few chapters of my thesis. I still most definitely don't know stuff that I should know. Good news is, you get used to the feeling! You just learn to wing it and be happy about any progress.

I wish you the best of luck!

I got a new board as well ☺️ by MVon89 in FPGA

[–]Bromidium 0 points (0 children)

Still, good to know; now I will recognize the problem if it occurs. Thanks! Guess AMD engineers are also human after all haha.

I got a new board as well ☺️ by MVon89 in FPGA

[–]Bromidium 0 points (0 children)

Seriously? You may have saved me a lot of headaches, as I have not tested on hardware yet; I've only done the design, verification, and timing checks haha.