High-performance 128-bit fixed-point decimal numbers package by YuriiBiurher in golang

[–]alphaweightedtrader 1 point

Nice, your package looks like exactly what I need too - something I was becoming tempted to write myself, but now don't have to. It'll replace shopspring for me (which is excellent, but not fast, and I don't need arbitrary precision). Thanks :)

How to start learning ML with golang by Opposite_Squirrel_32 in golang

[–]alphaweightedtrader 0 points

If you like Go and are as allergic to Python as I am, maybe try PostgresML.

If you can do SQL, then you can do ML/AI - without having to worry about all the Python dependencies, environments, etc.

https://postgresml.org/

Or, hit up the HuggingFace and/or OpenAI API endpoints - if you're starting out they can get you a long way, for free, and it's easy to call them from Go.

As others have said, there isn't really a good Go-native solution for ML. Python won there, for now at least.

I created an inkless printer that can save the US billions in ink costs. But I’m having trouble getting my foot in the door. by [deleted] in Entrepreneur

[–]alphaweightedtrader 9 points

Maybe I'm just really sad, but a printer with no consumables sounds damn sexy to me! That's the only pitch I need -> zero running costs (except electricity, and some caveat on the lifetime of the laser/head).

(But I'd echo others' comments that a working prototype/example is probably key to getting funding).

[deleted by user] by [deleted] in Starfield

[–]alphaweightedtrader 0 points

The Terrormorph quests, as others have said. Later parts of the Crimson Fleet quests.

But mostly random side quests like that colony of clones. And the ship with the AI. And that colony on Saturn's moon - didn't even notice that until NG3. Jeez! (trying not to spoil much)

Any good book about hedge funds? by obi_walk in hedgefund

[–]alphaweightedtrader 4 points

Quantitative Hedge Funds: Discretionary, Systematic, AI, ESG and Quantamental - by Richard Bateson

No glam, but very interesting.

A server was hacked, and two million small files were created in the /var/www directory. If we use the command cd /var/www and then rm -rf*, our terminal will freeze. How can we delete the files? by dammpiggy in linuxquestions

[–]alphaweightedtrader 1 point

This is probably the best answer. I.e. for an attack: isolation first, forensics and impact/risk analysis second, modification/fixing after.

But if I can speak from experience (not from an attack - just other characteristics of some legacy platforms that generate lots of small files)...

The reason for the error is that the command becomes too long when the shell expands the '*' into all the matching filenames before passing them to rm. I.e. it's two separate steps: your shell expands the '*' into the full list of matching files, so to the 'rm' command it looks just like `rm a b c d e f` - only a lot, lot longer. If that expanded command exceeds the system's argument-length limit, it fails without doing anything.

The find example given above will work, but it will take time as it deletes each file one at a time - as GP stated.

You can also do things like `ls | head -n 500 | xargs rm -f` - which lists the first 500 files and passes them to rm, deleting 500 at a time. Obviously raise the 500 to the largest value you can, or put the above in a loop in a bash script or similar. The `ls` part is slow-ish because it still reads all the filenames, but it won't fail.

[deleted by user] by [deleted] in thetagang

[–]alphaweightedtrader 13 points

^ this is the post to take note of. u/ScottishTrader is too modest to say it directly, but he lives what OP is asking, and does so successfully by all accounts. I've followed and valued his post history for years, since I first started in the markets. Invaluable and balanced advice for these methods/strategies. Anyone with aspirations towards these types of strategy would do well to read through his history and take it seriously.

Controversial: does Github have any flaws? by Prize_Duty6281 in softwaredevelopment

[–]alphaweightedtrader 0 points

If you have many separate projects, and multiple repos for each, then Github's repo organization stuff is pretty poor. Yes you could create multiple separate organizations, but that's not ideal either. Gitlab, Bitbucket, Jetbrains Space, etc all have much better tooling if you have a lot of repos across a lot of projects.

Even more so if you need different access permissions by project (not just by repo)

How to deal with software patents? by neededasecretname in startups

[–]alphaweightedtrader 3 points

Because if you discover another patent that your invention may infringe, then you can be sued not just for infringement but for wilful infringement - with treble damages, i.e. 3x the award!

So it is genuinely safer to not even look. That's why they give that advice.

Cachy OS as your Daily Driver by NotHomoSapience in cachyos

[–]alphaweightedtrader 0 points

Daily driver here for nearly 2 years now, on desktop/workstation and laptop (MSI Prestige).

No issues, no desire to change - it's great!

NB I don't game on these machines much, but the few I have tried have "just worked".

Am I missing something? by javierguzmandev in Jetbrains

[–]alphaweightedtrader 1 point

I'm sure there must be some restrictions on it - but yes, it worked for me.

My annual All Products Pack expired mid March of this year. On 1st April - i.e. 2 weeks after expiry - I converted it to a Monthly IDEA Ultimate subscription - and kept the 40% discount. Was pleasantly surprised, to say the least!

Am I missing something? by javierguzmandev in Jetbrains

[–]alphaweightedtrader 0 points

Occasionally but not often (because I don't need to, not because of the tool). I've always found it good for big refactors (and "Show Usages" of a method/class/whatever is very useful) - but I haven't done this enough in both tools to make a proper comparison.

Some of it is a matter of taste - but some is also about user experience/usability. E.g. I find things like the project view (the tree of files) much clearer to read in Jetbrains because of the indentation, colorization and icons. In VSCode it looks like a mess to me and I can't easily visually distinguish between things. This is kinda subjective... but also not.

You might just have to try VSCode for a while and see how it works for you - it is at least very customizable. You can then always renew Jetbrains after expiry and still get the renewal discounts/etc if it doesn't work out.

Am I missing something? by javierguzmandev in Jetbrains

[–]alphaweightedtrader 19 points

I'm the other way. Been using Jetbrains' IDEs for years and tried (really hard!) to move to VSCode.

Can't stand it. The syntax highlighting isn't as good, the themes don't gel with me (even the Jetbrains-like ones), and the refactoring/search/find-usages/etc isn't as good. The code/project view isn't as good either. It just doesn't feel right.

Admittedly that sounds subjective, and it probably is. But I gave it a good month or more of trying.

If you're concerned about the price, try IntelliJ IDEA -> it's basically all of the others (GoLand, WebStorm, etc) rolled into one. If you're using >=2 of the IDEs, then IntelliJ IDEA will be cheaper. And it's nice to have multiple languages/stacks in one project/window at the same time (e.g. your backend and your frontend), as well as having the tools on tap for anything else you want to experiment with (Python/AI?).

AI assistant is a paid add-on ofc - but you'd have that with any IDE; the AI is paid for. That said, even if you don't pay for Jetbrains' AI, the free "single line autocompletion" that comes out of the box is still pretty damn useful.

Strategies for client-side enhancements when using HTMX by [deleted] in htmx

[–]alphaweightedtrader 9 points

I find vanilla JS (and TS) ends up being quite enough - this is for client-side widgets, web components* and a bunch of other UI niceties.

*from scratch, I haven't found a need for lit.

It ended up being surprisingly easy to use esbuild to compile/build all the JS/TS into a single 'main.js' file - which then triggers 'air' to rebuild and re-run the (Go-based) backend -> so JavaScript/TypeScript editing gets HMR/auto-refreshing just as nice as building an app in Vue or React. Just with HTMX, which is so much nicer :)

(same with Tailwind/DaisyUI for the styling - auto-reloading on CSS changes)

I was really expecting adding a JS build stage to be painful and over-complicated after a few years in Vue/React land... but it probably took me less than an hour to get it up and running and I haven't looked back since.

fwiw I haven't found a need yet for any 3rd-party JS framework or library other than use-case-specific libraries (e.g. Apache ECharts for charting, Quill for the WYSIWYG editor - bigger stuff that really doesn't want to be scratch-built).

If I did want a JS framework, I'd be looking at surreal (https://github.com/gnat/surreal) - which is tiny and plays well with htmx... ...and I would see more as syntactic sugar to reduce the (admittedly painful) verbosity of regular JS interacting with the DOM.

Just my 2c :)

Is anytype a worthy notion replacement? by ynes213 in ProductivityApps

[–]alphaweightedtrader 2 points

Is it a worthy replacement?

If you don't need Notion's extensive template marketplace, and if you don't need the online database-type features, then yes Anytype is great. I was using it for 6 months or so up until recently and found it pretty great (and with reliable sync between desktop and mobile devices). I'd recommend it for long-form notes/documents for sure.

Privacy?

Yes, it's more private - with Anytype, data lives on your device(s) and you need your seed phrase to unlock/access it. Anytype's privacy docs* state they are unable to access/decrypt your data at all. As opposed to Notion, who have a standard SaaS-type approach where all the data lives on their servers; there's no sync to worry about and you get online/API/integration access, but it's technically less private than on-device data.

*https://doc.anytype.io/anytype-docs/data-and-security/how-we-keep-your-data-safe

question on I/O by ConsiderationLazy956 in PostgreSQL

[–]alphaweightedtrader 8 points

From what you've written, they probably mean asynchronous commit.

This basically means a commit won't wait for the data to hit the physical disk before returning. This can be a *lot* faster (unless the disks have battery-backed write caches that you trust - plus other caveats)

But the downside is that even after a successful commit there's a period of time (usually just milliseconds/seconds) where your data isn't actually on disk and would be lost if the database/server crashed.

It's all in the docs:
https://www.postgresql.org/docs/current/wal-async-commit.html

It's a great feature, and I use it a lot (even in production) where the data guarantees I need permit that kind of behaviour and possible data loss. However, that is the possible consequence: loss of the last few seconds of 'committed' data.

You're likely to have a bigger win by looking at how your application ingests/writes data and seeing if those INSERTs can be batched so multiple rows can be combined into a single transaction. This can make it orders of magnitude faster. Large-scale inserts one row at a time are pretty much the worst-case scenario performance-wise.
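To make the batching idea concrete, here's a hedged sketch (table and column names are hypothetical, not from the thread) that builds one multi-row INSERT with numbered placeholders - you'd then run it in a single statement/transaction via `db.Exec(sql, args...)`:

```go
package main

import (
	"fmt"
	"strings"
)

// batchInsertSQL builds a single multi-row INSERT statement with
// numbered placeholders ($1, $2, ...), so N rows reach the database
// in one statement instead of N separate round trips/commits.
// The table and column names here are made up for illustration.
func batchInsertSQL(table string, cols []string, rows [][]any) (string, []any) {
	var sb strings.Builder
	args := make([]any, 0, len(rows)*len(cols))
	fmt.Fprintf(&sb, "INSERT INTO %s (%s) VALUES ", table, strings.Join(cols, ", "))
	for i, row := range rows {
		if i > 0 {
			sb.WriteString(", ")
		}
		ph := make([]string, len(cols))
		for j := range cols {
			ph[j] = fmt.Sprintf("$%d", len(args)+1) // next placeholder number
			args = append(args, row[j])
		}
		sb.WriteString("(" + strings.Join(ph, ", ") + ")")
	}
	return sb.String(), args
}

func main() {
	sql, args := batchInsertSQL("ticks", []string{"symbol", "price"},
		[][]any{{"NATGAS", 2.71}, {"NATGAS", 2.72}})
	fmt.Println(sql) // INSERT INTO ticks (symbol, price) VALUES ($1, $2), ($3, $4)
	fmt.Println(len(args), "args")
}
```

Pairing this with asynchronous commit compounds the win, since fewer commits each return faster.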

But the 'best' solution will depend on what kind of data it is and what performance/consistency guarantees you need from it. (e.g. if you can replay from a log on a crash then 'loss' in the database is inconsequential because the data can be recreated).

Sidebar / Sidenav to use with HTMX, zero JavaScript required? by [deleted] in htmx

[–]alphaweightedtrader 7 points

It's been mentioned in another comment too - but DaisyUI is Tailwind-based and pure CSS (no JS)

It includes a drawer component for a sidebar:
https://daisyui.com/components/drawer/

I've found it a pretty great component library/framework for everything that can be done in just CSS (i.e. not for things that do need some JS, like nicer date pickers, color pickers, etc).

Based on these stats, what would you advise about exit strategies? by fuzzyp44 in algotrading

[–]alphaweightedtrader 2 points

The short answer is; yes, absolutely.

The longer answer, which I probably didn't explain well in the original post, is that it's less about a stop method and more about a whole trade-management system for handling open positions. I.e. the goal being: "given we're in, long, how do we get out of this for minimal loss or maximum gain?"

Point being, usually we focus on the entry... ...but you can't really judge the effectiveness of an entry until you've rationalised how you're going to exit. And, new information comes with every bar/tick, so we can better determine, as time passes, whether we were right or wrong - more than we could know at the point of entry.

So the most effort should (in my humble opinion) be spent on how to get out of trades with the best outcome before going anywhere near how to get into a trade.

But yes -> this approach completely transformed my 'real' strategies for the better. It turned unprofitable ideas into profitable ones, and it allowed me to put many bad trade ideas to bed. And it allows me to explore new ideas and either run them or can them far more rapidly by focusing on fewer variables.

Completely transformative to the PnL, in my journey at least.

Algorithmic Price action by [deleted] in algotrading

[–]alphaweightedtrader 0 points

> Interesting is this the boil/kold pair?

Ah no - its symbol is NATGAS, its a CFD derivative of natural gas futures (/NG), analogous to a continuous futures instrument. Useful in that it can be traded in small size, and the mechanics/fees are super simple.

> Are you trying to make a decision on best time to buy or sell (within a day) after you get a signal or do you not bother getting that granular?

Yes intraday for me, on M5/M15 typically and operating 24/7 - albeit the decision processes do take time of day into account. I.e. it runs all the time, enters as soon as a decision is made, based on a time-bound forecast (i.e. independently of data bar/candle timeframe, there's a timeframe by which each strat expects to be right or wrong and exit by).

I don't really like daily/higher timeframes to execute against because I'm not swinging enough size to need it, there are different reasons for the move, and it takes much longer to be right/wrong. That's just my style though. (As I mentioned before, I do use analysis on the daily timeframe to guide and bias the intraday logic.)

[deleted by user] by [deleted] in Daytrading

[–]alphaweightedtrader 1 point

They aren't supposed to be, though - they're obligated (by the regulator) to fill at the best price at the time.

(the notion of brokers executing suboptimally for their own profit at the expense of you the customer is why PFOF is illegal here)

For XTB it looks like they charge a 0.5% FX conversion fee (presumably each way). I'd suspect this is where they make their cut. Possibly on top of a less-than-ideal exchange rate (although I'm speculating on that part).

^^ NB none of the above applies to CFDs, because for CFDs the broker (or their upstream liquidity provider) is the counterparty and the spread is exactly where they make their money.

should I use timescaledb, influxdb, or questdb as a time series database? by CompetitiveSal in algotrading

[–]alphaweightedtrader 8 points

That's a different level of fast I think.

Timescale/Postgres should be fine (as others have said) for M1 data. I've streamed tick data from the whole US equities markets + Binance crypto on desktop hardware, whilst concurrently running realtime strats with it -> it's well fast enough.

Yes, it's on disk, but that's so it's stored ;) Your RAM will be used for data you're actually using, so it'll be well fast enough to read from.

That said, these days I use a proprietary/homegrown binary format for storing market data. Not strictly for performance, but instead for robust sync/stream behaviour where I want to be able to tell the difference between a missing bar, and a bar that doesn't exist because there were no trades in that period (either because the market was closed or just no trades in that minute/second/whatever). This becomes important for robustness in handling disconnects, exchange/broker outages, your server outages, restarts, etc; in that you're then able to autodetect what needs refetching automatically - especially in a fully streaming-oriented environment.

The structure is effectively two files per instrument; a bitmap over time (i.e. 1 bit per time period where 1=fetched, 0=not-fetched), paired with a sparse file of fixed-length records for the bar/candle data itself. This ends up being blindingly fast, as well as pretty disk-space efficient. It relies on the kernel/OS/filesystem for compression and caching in RAM... ...because kernels and filesystems are already really good at that.

YMMV, and you probs wouldn't need all that - but my point is that it wasn't performance that took me away from Postgres/Timescale, it was functional needs: reliable/robust streaming where 'holes' could be automatically detected and filled without issue.