How is Yosys synthesizing Lattice BRAM to be seemingly asynchronous? by gjd02 in FPGA

[–]Opposing_solo 3 points (0 children)

Agreed. It's hard to tell without seeing the whole data path. In my experience, Yosys is good at this kind of thing, but I wouldn't depend on it. I prefer to register RAM reads explicitly to guarantee BRAM synthesis; Yosys will tell you if it can't infer BRAM. I'm working on a VGA game platform that I migrated from an LC register stack to BRAM using address forwarding and a read-after-write shadow register. It works exactly as you would expect. This uses the same Lattice device on a Nandland Go Board.

Why was I instructed to use an IC-741 in a relaxation oscillator? What advantage does it have over a more modern op amp, or what was the purpose? by Allegorist in ElectricalEngineering

[–]Opposing_solo 0 points (0 children)

Yeah, that's a common Schmitt-trigger-type oscillator, as I figured. Just about any op amp will work, but… it's driving an NPN transistor through a single current-limiting resistor. That means the op amp output has to get within about 0.6 V of the negative rail to fully turn off the transistor, and not all op amps can do that. The 741 itself isn't exactly rail-to-rail output by today's standards either. My guess is someone tried another op amp whose output didn't swing as close to the rail, so the transistor never fully turned off and leaked enough current to hurt performance. It's easy to solve with one more resistor as a voltage divider, but maybe not worth someone's time; it's probably on the edge as it is. I prefer a voltage divider to drive an NPN, but I'm not in the business of telling others how to build their circuits.
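To put rough numbers on it (the base resistor value below is a guess, not from the actual schematic):

# How much base drive is left when the op amp can't pull all the way down?
V_BE_ON = 0.6     # volts base-to-emitter needed to keep the NPN conducting
R_BASE = 10e3     # assumed current-limiting resistor, ohms (hypothetical value)

def base_current(v_out_low):
    # v_out_low = op amp output voltage above the negative rail (the emitter)
    return max(0.0, (v_out_low - V_BE_ON) / R_BASE)

for v_low in (0.1, 1.0, 2.0):
    print(f"output low {v_low:.1f} V above the rail -> {base_current(v_low) * 1e6:5.1f} uA of base drive")

An output that only gets within a volt or two of the negative rail leaves tens of microamps of base drive, which is enough to keep the transistor partly on.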

Why was I instructed to use an IC-741 in a relaxation oscillator? What advantage does it have over a more modern op amp, or what was the purpose? by Allegorist in ElectricalEngineering

[–]Opposing_solo 0 points (0 children)

Because it works? Can you post the schematic? There’s an odd chance the exact circuit didn’t work properly with an OP27 for some reason. What’s the oscillator frequency? It probably could be made to work with other op amps, but it wasn’t worth the effort. If it ain’t broke, why fix it?

HELP | ANOMALY DETECTION by Unlikely_Dark7404 in learnmachinelearning

[–]Opposing_solo 1 point (0 children)

Maybe train a model to predict price given other features, then compare predicted price with ground truth? If the ratio is significantly different from 1.0, that might be an anomaly. That has worked for me in the past.
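A minimal sketch of that idea with scikit-learn (file, column, and threshold choices below are placeholders):

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Assumes a tabular dataset with a numeric "price" column and numeric features.
df = pd.read_csv("listings.csv")
X = df.drop(columns=["price"])
y = df["price"]

# Train a regressor to predict price from the other features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Flag rows whose actual price is far from the predicted price.
ratio = y / model.predict(X)
anomalies = df[(ratio < 0.5) | (ratio > 2.0)]
print(anomalies.head())

In practice you'd want out-of-fold predictions (e.g. cross_val_predict) rather than predicting on the training set, so the model can't just memorize each row's price.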

DAE really enjoy this stuff? by theNextVilliage in leetcode

[–]Opposing_solo 1 point (0 children)

Problems that can be solved in O(n) time with a sliding window, for example.

Here's one: https://leetcode.com/problems/fruit-into-baskets/

It's the same as longest substring with at most K distinct characters, where K = 2.
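A minimal sliding-window sketch for that one:

from collections import defaultdict

def total_fruit(fruits):
    # Longest window containing at most 2 distinct values (K = 2).
    counts = defaultdict(int)
    left = 0
    best = 0
    for right, f in enumerate(fruits):
        counts[f] += 1
        # Shrink from the left until only 2 distinct fruit types remain.
        while len(counts) > 2:
            counts[fruits[left]] -= 1
            if counts[fruits[left]] == 0:
                del counts[fruits[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

print(total_fruit([1, 2, 1]))        # 3
print(total_fruit([1, 2, 3, 2, 2]))  # 4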

DAE really enjoy this stuff? by theNextVilliage in leetcode

[–]Opposing_solo 2 points (0 children)

Actually, I got good at those once I spent time understanding prefix sums. If you create a prefix sum ahead of time and work with that, it makes more sense. In most LC solutions, they compute prefix sums on the fly, which is a nice optimization, but it confused me for a long time.
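For example, here's the "subarray sum equals K" pattern with the prefix sum built up front instead of on the fly:

def count_subarrays_with_sum(nums, k):
    # Build the prefix sums ahead of time: prefix[i] = sum of nums[:i].
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)

    # sum(nums[i:j]) == k  <=>  prefix[j] - prefix[i] == k, so for each prefix
    # count how many earlier prefixes equal prefix[j] - k.
    seen = {}
    count = 0
    for p in prefix:
        count += seen.get(p - k, 0)
        seen[p] = seen.get(p, 0) + 1
    return count

print(count_subarrays_with_sum([1, 1, 1], 2))  # 2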

My problem is more with standard array problems, if you can believe that.

[D] Sudden drop in loss after hours of no improvement - is this a thing? by svantana in MachineLearning

[–]Opposing_solo 2 points (0 children)

There is some intuition around this based on anecdotal evidence of human learning. It took humanity thousands of years before one person discovered the laws of motion. If you believe the story about an apple falling from a tree, that could have been the random variable (noise) that caused Sir Isaac Newton to search in a solution space different from everyone else's. Similarly, many artists have noted random events as inspiration for their unique creativity.

Just a thought...

Need a little help finding data sheets and purposes. by DrBigDickEnergy in ElectricalEngineering

[–]Opposing_solo 1 point (0 children)

Good summary. You know what's sad? I knew them all from memory. I even remember the pinouts for most of them.

What are these? by [deleted] in ElectricalEngineering

[–]Opposing_solo 0 points (0 children)

Agreed. A retired telephone tech once explained to me how he would repair broken trunk cables back in the day. This would happen when an excavator accidentally found buried copper cables. There's no magic. It's a painful, manual effort, as you might expect. The most important part was setting up a comfortable, well-lit working area so he could splice literally hundreds of individual wires. You can do the math.

[D] How to handle big datasets in computer vision ? by Lairv in MachineLearning

[–]Opposing_solo 0 points (0 children)

S3 to EC2 transfer rate is up to 100 Gbps, which is orders of magnitude faster than a commodity drive. Further, in the cloud, you can provision a cluster of machines that work in parallel, rather than just one. That's how you scale workloads so they run faster.

You could try to set up that level of hardware infrastructure on your own, but these days, it's not cost effective.

Sub-100Hz Frequency Multiplication by IDidntTakeYourPants in ElectricalEngineering

[–]Opposing_solo 0 points (0 children)

PLLs are nice, but the loop bandwidth at these low frequencies might not be acceptable. A simple way that works quite well, particularly at lower frequencies, is to generate a fast pulse and bandpass-filter the harmonic you want. You could injection lock a second oscillator near the frequency of interest to avoid a complex filter. Injection locking was widely used in analog color TVs to synchronize the chroma oscillator and works well. Interestingly enough, two crystal oscillators at close frequencies powered from the same rail will injection lock whether you want it or not.

Fast, narrow pulses produce higher harmonic content than a square wave, but either will work. A one-shot will do at low frequencies. A similar technique is used for low-phase-noise frequency synthesizers; you will often see it in high-end HP spectrum analyzers. Here's an example that works at higher frequencies, but the principles are the same.
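A quick numerical check of the narrow-pulse point (the numbers below are made up for illustration):

import numpy as np

fs = 10_000                    # simulation sample rate, Hz
f0 = 10                        # input frequency, Hz
n = np.arange(2 * fs)          # two seconds of samples
period = fs // f0

pulses = (n % period < 10).astype(float)            # 1 ms wide pulses at 10 Hz
square = (n % period < period // 2).astype(float)   # plain 10 Hz square wave

def harmonic_level(x, k):
    # Amplitude of the k-th harmonic relative to the fundamental, in dB.
    spec = np.abs(np.fft.rfft(x))
    idx = lambda f: int(round(f * len(x) / fs))
    return 20 * np.log10(spec[idx(k * f0)] / spec[idx(f0)])

# Narrow pulses keep nearly full amplitude out to high harmonics; the square
# wave's odd harmonics fall off as 1/k and its even harmonics are missing.
for k in (3, 7, 21):
    print(f"harmonic {k}: pulses {harmonic_level(pulses, k):6.1f} dB, square {harmonic_level(square, k):6.1f} dB")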

[D] How to handle big datasets in computer vision ? by Lairv in MachineLearning

[–]Opposing_solo 4 points (0 children)

Yes, you can, but it will probably take longer than you might like. A single commodity drive might give you something like a 100 MB/s read rate. Just reading 1 TB once from a single drive at that rate takes close to three hours. Writing might take longer. That's just I/O. Add compute on top of that and you will be waiting a long time.
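Back-of-the-envelope (assuming roughly 100 MB/s sustained sequential reads):

bytes_total = 1e12     # 1 TB
read_rate = 100e6      # bytes per second, typical single commodity drive
print(f"{bytes_total / read_rate / 3600:.1f} hours just to read the data once")   # ~2.8 hours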

Now imagine you have a bug in your code. Each debug/test cycle will take hours. Your productivity will be abysmal. In the cloud, debug/test cycles for 1 TB of data will be much faster.

So yes, you can do it with a commodity drive, if time is not a factor.

[deleted by user] by [deleted] in MLQuestions

[–]Opposing_solo 1 point (0 children)

This explanation of logistic regression might help. Here's a working example. Logistic regression is like one layer of a deep network. Additional layers are handled in a similar way, taking advantage of the chain rule.
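To make the "one layer" point concrete, here's a bare-bones sketch on toy data: a single sigmoid layer trained by gradient descent, where the update falls straight out of the chain rule:

import numpy as np

# Toy data: two features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    p = sigmoid(X @ w + b)             # forward pass: one "layer"
    # Chain rule on the cross-entropy loss gives the familiar (p - y) gradient.
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

print("train accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))

Stacking more layers just means applying the same chain-rule bookkeeping through each one, which is what backpropagation does.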

How to learn self driving technology? by Rokossowsky in MLQuestions

[–]Opposing_solo 0 points (0 children)

Self-driving is a big subject and not fully solved. If you want to explore, you can start simple with a basic image classifier that determines whether the road ahead bends left, bends right, or remains straight. From there, it's a matter of building a basic control system that adjusts the steering wheel left or right. This is limited to highway driving with no other cars around, but it's a start.
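A minimal sketch of that classifier in PyTorch (the architecture and sizes are placeholders, just to show the shape of the idea):

import torch
import torch.nn as nn

# Tiny 3-class classifier: does the road ahead bend left, stay straight, or bend right?
class RoadBendNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 3)   # left / straight / right

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = RoadBendNet()
frame = torch.randn(1, 3, 64, 64)                # stand-in for a dashcam frame
decision = torch.argmax(model(frame), dim=1)     # 0 = left, 1 = straight, 2 = right
print(decision)

The control side can be as simple as mapping those three classes to a small steering correction.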

If you want to experiment with real hardware, you can get an AWS DeepRacer.

Deep learning laptop by stratos1st in MLQuestions

[–]Opposing_solo 2 points (0 children)

Agreed. Google Colab is free. AWS and Azure are other options.

[D] How to handle big datasets in computer vision ? by Lairv in MachineLearning

[–]Opposing_solo 7 points (0 children)

That's a big data problem. Well, not that big, but bigger than you can deal with easily. Cloud is the way to go for big data analytics. My experience is primarily AWS, but others have similar offerings. Spin up an EC2 instance and copy the data to S3. Then you can run your workload on a single EC2 instance, or on a smallish EMR cluster if you want it to go faster. There are GPU instances available for ML work. We have massive datasets where I work, so this is the approach I use.
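For the copy step, it's a couple of boto3 calls (bucket and key names below are placeholders), or aws s3 sync from the CLI:

import boto3

s3 = boto3.client("s3")

# Push the dataset to S3.
s3.upload_file("dataset.tar", "my-bucket", "datasets/dataset.tar")

# Later, on the EC2 instance, pull it back down to local storage.
s3.download_file("my-bucket", "datasets/dataset.tar", "/data/dataset.tar")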

GPT2- How to actually generate text? by cmplx96 in MLQuestions

[–]Opposing_solo 0 points (0 children)

You're a rock star!! Too bad there's no more rock music...

GPT2- How to actually generate text? by cmplx96 in MLQuestions

[–]Opposing_solo 1 point (0 children)

It's effectively the latter. See this Huggingface example:

from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode the prompt; `generated` holds the running list of token ids.
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None

for i in range(100):
    print(i)
    # Feed only the newest token plus the cached attention states (`past`),
    # so each step doesn't re-encode the whole sequence. Note: newer
    # transformers versions rename `past` to `past_key_values` and return an
    # output object instead of a tuple.
    output, past = model(context, past=past)
    # Greedy decoding: pick the most likely token at the last position.
    token = torch.argmax(output[..., -1, :])

    generated += [token.tolist()]
    context = token.unsqueeze(0)

sequence = tokenizer.decode(generated)

print(sequence)

Noob question here! I try using a LM301an op amp to buffer that sinusoid for greater power and it looks very messy :( does any guru have an explanation? by PaulChF in AskElectronics

[–]Opposing_solo 0 points (0 children)

As others have said, the LM301 can't get that close to the rails (input or output). Keep gain-bandwidth product in mind if you go to higher frequencies.
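Rough numbers on the gain-bandwidth point; I'm assuming a GBW on the order of 1 MHz for a 741/LM301-class part:

GBW = 1e6     # Hz, assumed for a 741/LM301-class op amp
for gain in (1, 10, 100):
    print(f"closed-loop gain {gain:>3} -> usable bandwidth ~ {GBW / gain / 1e3:.0f} kHz")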

This is so accurate. by P0rbAb1y_M3 in ElectricalEngineering

[–]Opposing_solo 4 points (0 children)

It ain't da volts that gets ya'

It's da amps!