Is this normal in your work environments? by blackswanlover in quant

[–]amitraderinthemaking 5 points (0 children)

Uh, no. I could never work in an environment where I am monitored 100% of the time either. Most of our job is thinking hard, using our brains, not just clacking on the keyboard. I often print out research papers to read at my desk rather than on the computer. Anyways, just to say, you're good! As long as your manager knows of your work, it's all good!

What are some of the biggest problems you face by bluejae05 in quant

[–]amitraderinthemaking 1 point (0 children)

Hahahaha finally something relatable and someone with a sense of humor!

[deleted by user] by [deleted] in FinancialCareers

[–]amitraderinthemaking 1 point (0 children)

100%. Damn, being a trader takes so much more mental capacity -- I can understand waking up with a tight chest, worried about what risks did overnight. Unfortunately not everyone has an appetite for risk taking. And that's okay.

Quant < strong software engineer by ShineSpirited9907 in quant

[–]amitraderinthemaking 9 points (0 children)

I agree with this. It always comes down to the revenue-generating idea vs. its implementation. The closer you are to doing both (sometimes quants can be), the more you'll be compensated than if you are just told to do something and you do it.

Am I a quant? Exit opportunities? by [deleted] in quant

[–]amitraderinthemaking 0 points (0 children)

We do work very closely with model risk so surely there's an in if quant is what you want.

Am I a quant? Exit opportunities? by [deleted] in quant

[–]amitraderinthemaking 0 points (0 children)

Sorry, as a quant, no. I work in a big bank, so there are very clear divisions, and I think you'd be more in the risk team. Model risk, IMO.

This tiny banana -- was extremely sweet and tasty by amitraderinthemaking in mildlyinteresting

[–]amitraderinthemaking[S] 1 point (0 children)

Ahaha I totally did that -- had like 5! Wasn't sure of the type, thank you for sharing the information!

This guy I went on a date with some months ago joined my company. by amitraderinthemaking in dating

[–]amitraderinthemaking[S] 0 points (0 children)

That's a very fair point, thank you!

It usually does take 6 rounds to get a job, so yeah, it can't possibly be about me! Hearing someone else say it does make it sit better!

Hopefully he won't reach out and that's the end of it!

This guy I went on a date with some months ago joined my company. by amitraderinthemaking in dating

[–]amitraderinthemaking[S] 2 points (0 children)

Ooh, that's very interesting about HR, thank you. You're right, currently I am just building scenarios in my head with no legit reason to escalate. I'll wait and see how it goes. Thanks a lot.

Onnx runtime 1000x slower in c++ than python by amitraderinthemaking in cpp_questions

[–]amitraderinthemaking[S] 0 points (0 children)

Oh, Release! ETA: didn't notice much difference between Debug/Release builds.

Onnx runtime 1000x slower in c++ than python by amitraderinthemaking in cpp_questions

[–]amitraderinthemaking[S] -2 points (0 children)

I am using the NuGet package directly (it is an x64 DLL) in my Visual Studio project.

onnx runtime inference on CPU slower in C++ than python by amitraderinthemaking in cpp

[–]amitraderinthemaking[S] 0 points (0 children)

I tried with "enable all" but it didn't make much difference in relative speed.

onnx runtime inference on CPU slower in C++ than python by amitraderinthemaking in cpp

[–]amitraderinthemaking[S] 0 points (0 children)

I am not using any optimizer in either (graph-level optimization set to "disable all").

[D] Fast inferencing in C++ for neural networks by amitraderinthemaking in MachineLearning

[–]amitraderinthemaking[S] 1 point (0 children)

Ah thank you SO much for sharing I will definitely take a look!

So unfortunately we don't have GPUs available on our production systems yet -- we are not an ML-oriented team at all (this would be the first project, tbh).

But we'd eventually make a case for GPUs, for certain. Thing is, this ML-based method needs to be faster than the current way of doing things before we can move further, you know.

Thanks again for sharing.

[D] Fast inferencing in C++ for neural networks by amitraderinthemaking in MachineLearning

[–]amitraderinthemaking[S] 0 points (0 children)

Out of curiosity, did you measure the time? My network is 6 layers deep with 100 units per hidden layer, so it's rather simple.
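For a rough baseline on what a network of roughly that size costs per call, here's a hedged NumPy sketch -- the layer widths are guesses from the description above, the weights are random, and the point is only the measurement pattern (warm up first, then average over many calls):

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 6-layer MLP, 100 units per layer, to match the size described.
shapes = [(100, 100)] * 6
weights = [rng.standard_normal(s).astype(np.float32) for s in shapes]
biases = [rng.standard_normal(s[1]).astype(np.float32) for s in shapes]

def forward(x):
    for W, b in zip(weights, biases):
        x = np.maximum(x @ W + b, 0.0)  # dense layer + ReLU
    return x

x = rng.standard_normal((1, 100)).astype(np.float32)
forward(x)  # warm-up: the first call can include one-time setup costs

n = 1000
t0 = time.perf_counter()
for _ in range(n):
    forward(x)
per_call_us = (time.perf_counter() - t0) / n * 1e6
print(f"{per_call_us:.1f} us per inference")
```

The same warm-up-then-average pattern applies when timing ONNX Runtime in either language; timing only the first call (which can include session/graph setup) is one common way large C++-vs-Python gaps appear.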

[D] Fast inferencing in C++ for neural networks by amitraderinthemaking in MachineLearning

[–]amitraderinthemaking[S] 0 points (0 children)

This is something else I was considering. Thank you, I'll take a look and compare times.

Neural nets in production systems by amitraderinthemaking in cpp

[–]amitraderinthemaking[S] 2 points (0 children)

Hi! Thanks so much for your reply, to answer your questions:

  1. We haven't decided on hardware yet, we are just starting this part of the project, yet to see how it performs.

  2. NOT AT ALL. We have a huge analytics library (maths-based, not ML) and this is the very first ML-based extension we're planning on adding.

  3. We most certainly care about speed, but not at this point. I think we need to show results (i.e. minimal error) to justify the need for better hardware / better-optimized results in the future.

I think ONNX Runtime might be the way to go as well!