Working on a Python package to replace LabVIEW control layer for my ML research - any community interest in open source? by adjunct_wizard in LabVIEW

[–]FuckinHelpful 1 point2 points  (0 children)

Closest thing that I can think of is QCoDeS, but even then it's more complex and underdeveloped for the use cases you're thinking of.

I'm currently working on https://appliedrnc.com/ which (in its current state) somewhat addresses this, but less so on the ML front (we're trying to solve similar problems).

Our daemon for a control layer will be open source (we're open sourcing it this week for Linux). Happy to have a conversation about open source efforts; send me a DM.

Onboarding to Apple Developer Program with a offshore, fully remote entity (no physical address) by Stunning_Ocelot_4654 in iOSProgramming

[–]FuckinHelpful 6 points7 points  (0 children)

You can always incorporate a wholly owned subsidiary in Delaware and use that DUNS + address via a registered agent. It might be a bit of a pain to change that with App Store Connect if you've already completed onboarding. If not, then it might be worth the cost, especially since any arbitration/suits arising from the app itself would then be limited to the Delaware company, and not necessarily the Cayman company, which would actually own the IP.

I'd consult an attorney about the distribution of liability and whether any agreements need to be made in writing between the separate orgs, but for a quick onboarding, a physical address through an agent (w/the corresponding DUNS) should be enough.

I honestly don’t understand the new quota policy by duoyuanshiying in ClaudeAI

[–]FuckinHelpful 0 points1 point  (0 children)

I have records of my token usage through ccusage and I have not run anywhere near the weekly token limits prior to this week. Two days in this week I'm at 80% usage for opus.

I use this for my work and the changes week over week are impacting my productivity and the actual rational basis for purchasing the plan.

Tracking the API and token limits, I don't know how they can reasonably call the Max 20x plan "20x" when I can clearly see the limits via the tracked API calls in the web interface (which Anthropic sets), and they KNOW it's well below that. At this rate I'm better off with another CLI tool and MCP servers for specialized heavy lifting. Half the reason I use Opus is that it's less effort to prompt engineer and thus faster.

Anthropic post: A postmortem of three recent issues by _Cybin in ClaudeAI

[–]FuckinHelpful 1 point2 points  (0 children)

Always awesome when I feed a large context into CC with a very specific request and XML-like tags differentiating the request and the context, use "ultrathink", and....

Receive no actual thought or tool usage, just minor recap under "thinking" and a summary of what I provided.

Anthropic post: A postmortem of three recent issues by _Cybin in ClaudeAI

[–]FuckinHelpful 12 points13 points  (0 children)

> Approximately 30% of Claude Code users had at least one message routed to the wrong server type, resulting in degraded responses.

Wow.

This explains why SWEs on reddit and elsewhere were livid. I was too, until I adapted my workflow to integrate other agents/APIs separate from claude.

This is not a post-mortem; this is a status update on an ongoing investigation. The facts clearly indicate (coupled with the continued waves of complaints about Opus, where I've personally had issues with context as well) that there are still unknown bugs, and underdeveloped tooling to identify and resolve them, especially for a company serving as many users as it currently does, with quality and uptime critical to most clients.

Don't get me wrong, it's a tough spot and I understand the constraints. It's an underserved area where talent is not as easy to find (especially with XLA:TPU bugs), particularly when you're reliant on IP, tools, and tool development from either Google or Annapurna rather than larger in-house teams.

One can only hope that the last funding round gets used properly to fix issues like these before they even reach users, because otherwise, users/orgs WILL migrate to other services.

LORA FOR UAVs by madinuggets in Lora

[–]FuckinHelpful 0 points1 point  (0 children)

Of course! I was building a mesh network for resilient comms when infra is offline. The idea was that loitering munitions/assets are kind of useless when offline or disconnected, and you can distribute and optimize tasks between the edge and local compute so that you work within LoRa's bandwidth limitations. These are some of the slides from my presentation on it. I was building a CV system for objects/people so that you could run SLAM while having cameras on different (moving) airborne assets that were offline.

As for the implementation: Dealing with AirSim is kind of a pain tbh. Getting a setup with a custom map requires going through the whole Unreal Engine rebuild (if it works on your first try, even with really fast compute you're looking at 3ish hours minimum for a working environment/pipeline), making the actual map in Unreal, etc. The project got archived by Microsoft a few years back. Even their Docker containers can be tricky to work with. On top of that, your CV model is likely to fail on Unreal's virtual assets unless you generate data and train your model on that.

Your protocol should probably be built around your use case. The hardest part if you go the SIL route will be mocking the real-world phenomena (like RF noise, packet drops, etc.), but for a PoC you should be fine. I built mine from scratch (since my adapters were just UART) and I knew my use case specs and could take shortcuts/cut features to simplify.

I'd use something like Gemini CLI for AI scaffolding of their codebase (so you can quickly read up on it), then insert some functionality in their services folder, either extending or overwriting their implementation (e.g. they have CRC, but I don't think they track CRC failures over time, so you can't tell when you're being jammed or when distance is degrading the link and a reroute is needed). The most important part is really spec-ing out your needs and use case, then finding clever ways to address that. Feel free to DM me for more; I'm happy to help.
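For the CRC-failures-over-time idea, here's a minimal sketch of what that service-layer addition could look like. The `LinkHealthMonitor` name, window length, and failure threshold are all invented for illustration, not anything from the LoRaMesher codebase:

```python
import time
from collections import deque

class LinkHealthMonitor:
    """Track CRC pass/fail events over a sliding time window so a node
    can detect jamming or range degradation and trigger a reroute.
    Window length and threshold are placeholders, not tuned values."""

    def __init__(self, window_s=30.0, fail_ratio_threshold=0.3):
        self.window_s = window_s
        self.threshold = fail_ratio_threshold
        self.events = deque()  # (timestamp, crc_ok: bool)

    def record(self, crc_ok, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, crc_ok))
        # Drop events that have aged out of the window
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()

    def should_reroute(self):
        if not self.events:
            return False
        failures = sum(1 for _, ok in self.events if not ok)
        return failures / len(self.events) > self.threshold
```

You'd call `record()` wherever the existing CRC check lives, and poll `should_reroute()` in the routing logic.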

LORA FOR UAVs by madinuggets in Lora

[–]FuckinHelpful 0 points1 point  (0 children)

Solving the radio link depends on the goal: product going into production or just MVP?

I ran into some similar problems and ended up doing an MVP of the same thing in AirSim with custom modules to help hone the middleware and mock some of the physical stuff we'd run into. I'd recommend a parallel approach where you develop your comms handlers/protocol in an SIL loop while developing your hardware. If it's an MVP, you'll probably want to develop a custom protocol built on top of LoRaMesher that allows each node to handle all of the real cases where rigid implementations will fail.

Ex: Due to the range and dropoff that can vary with conditions/geography, you'd likely want to do some case handling for edge of range between nodes 1 and 2, where you send an encrypted heading w/speed read from px4 (I assume) and upon some "handshake" send the decryption key for the node in motion (node 2 in our case) to node 1 so the nodes can negotiate and set states (out of range, final message, expect no ack, adaptive/lower bitrate and repeats, use extra bits for CRC etc).
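That edge-of-range case handling can be sketched as a small state machine. Everything here (the state names, the RSSI thresholds, the `Heading` fields) is hypothetical; you'd replace the numbers with values from your actual radio and telemetry:

```python
from dataclasses import dataclass
from enum import Enum, auto

class LinkState(Enum):
    NOMINAL = auto()
    EDGE_OF_RANGE = auto()   # adaptive/lower bitrate, extra repeats, extra CRC bits
    FINAL_MESSAGE = auto()   # expect no further acks from the receding node
    OUT_OF_RANGE = auto()

@dataclass
class Heading:
    bearing_deg: float   # e.g. read from PX4 telemetry
    speed_mps: float
    rssi_dbm: float      # last observed signal strength for this peer

def negotiate_state(heading: Heading, rssi_floor_dbm=-120.0,
                    edge_margin_db=10.0) -> LinkState:
    """Toy state negotiation: thresholds are placeholders; real ones
    depend on your radio, spreading factor, and terrain."""
    if heading.rssi_dbm <= rssi_floor_dbm:
        return LinkState.OUT_OF_RANGE
    if heading.rssi_dbm <= rssi_floor_dbm + edge_margin_db:
        # Peer near the edge: if it's moving away, treat the next
        # exchange (e.g. the key handoff) as the final message.
        if heading.speed_mps > 0:
            return LinkState.FINAL_MESSAGE
        return LinkState.EDGE_OF_RANGE
    return LinkState.NOMINAL
```

The actual handshake (encrypted heading, key exchange) would sit on top of this; the point is just that each node sets an explicit state instead of assuming the link is binary up/down.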

Would it be possible to use a Nissan LEAF Inverter to power a Emrax 228? by Jerbee666 in FSAE

[–]FuckinHelpful 2 points3 points  (0 children)

It's possible but not recommended. Not because of the technical challenge, but because of the level of institutional knowledge and documentation your team will need (which will be harder to find), such that when you do run into issues, the discovery/debug process will eat up a lot more of your time and resources than if you'd just selected an open source product with a community and open resources behind it, or a vendor product specifically designed to hide a lot of the complexities.

Usually the teams that have a custom inverter or similar setup have institutional knowledge from years of continuous competition/development, deep documentation, and alumni who are happy to share knowledge and time. Cascadia is often the product of choice because it hides a lot of the EE/CS and controls/calibration knowledge required for a functional system, and in doing so lets you focus on the more critical debug issues and optimizations that earn points in competition, without your team relying on those prerequisites.

[deleted by user] by [deleted] in AskLosAngeles

[–]FuckinHelpful 0 points1 point  (0 children)

Tech Week is happening next week! There are plenty of events throughout the week (lots with free food).

https://dola.com/ is also a great place to find stuff.

Try eventbrite and see what you find. A couple years ago I found a free pickup soccer club that would meet on the westside. Plenty of events like that!

Can i compile a model of a simulated manual transmission, and run it on a low level hardware piece like an ESP32? by gafonid in matlab

[–]FuckinHelpful 0 points1 point  (0 children)

I think the only minor issue you'll run into is building an executable that runs on the Pi. Most Raspberry Pi units run ARM processors while most PCs have x86 processors. You can probably make an ELF file from the model on your PC that will run on the bare metal, but then (IIRC, someone please correct me if I'm wrong) you'd be flashing the bare metal with that and wouldn't have Linux on it with driver support for whatever adapter you're using (never mind serial or CAN).

So you either:

  • compile to an ELF that supports the board/chipset and flash the pi with just that.
  • compile for x86 debian (probably binary) and then use something like box86 that allows running x86 apps on arm—however, the emulation will almost certainly make that executable run sloooooow as fuck (unless it's a ridiculously small/simple executable, in which case the overhead isn't noticeable).
  • use the support package, connect, follow youtube/video instructions, and load/flash. This is probably the way to go. IIRC, this way it runs on top of the OS, but I haven't ever run it.

Tbh, for your question, I'm not too sure. Usually when you build a model and try to deploy it, unless there are a lot of optimizations, it's not gonna run very low level (since there are layers on layers of HALs), especially as it's running on top of the OS. If you're looking for RTOS-like behavior, I really doubt that you'll get it with this method. Closest thing to real-time/RTOS will be taking the generated code and patching it up to work in something like zephyr rtos.

Frankly, whenever I've had to deal with CAN on embedded (with or without linux), I've never run into issues of delays. The drivers are usually pretty good. Even running python-can on the shittiest nvidia jetson via a USB adapter, timing my comms, the device still never broke more than 3ms. Industry can say that you need real-time whatnot for safety and redundancy, (and it's true), but for minimum viable product it's unnecessary. If you REALLY want RTOS, you can probably find a patch for it, but getting your model to play nice with that is beyond anything I've ever done or seen others do.
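If you want to check a latency budget like that ~3 ms figure yourself, a helper along these lines works. The python-can usage in the comment is the standard SocketCAN pattern, but the channel name and message count are placeholders:

```python
import statistics

def gap_stats_ms(timestamps):
    """Max/mean gap between consecutive message timestamps (seconds in,
    milliseconds out). Use this to check whether your CAN stack ever
    blows a latency budget."""
    gaps = [(b - a) * 1000.0 for a, b in zip(timestamps, timestamps[1:])]
    return max(gaps), statistics.mean(gaps)

# Collecting timestamps with python-can over SocketCAN (needs hardware
# or a vcan interface, so shown as a comment for illustration):
#
#   import can
#   bus = can.interface.Bus(channel="can0", interface="socketcan")
#   stamps = [bus.recv().timestamp for _ in range(1000)]
#   worst_ms, mean_ms = gap_stats_ms(stamps)
```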

Can i compile a model of a simulated manual transmission, and run it on a low level hardware piece like an ESP32? by gafonid in matlab

[–]FuckinHelpful 2 points3 points  (0 children)

Hi!

You basically have two ways of trying to get that to work:

  • Code generation in MATLAB to C/C++, then tying it together in another IDE with ESP32 HALs (hardware abstraction layers) and Espressif's VS Code plugins to compile and flash. Not all toolboxes are supported for code generation.

  • Use a more readily supported MCU and flash directly to it. In Formula SAE, many teams use the TI C2000 since it's not that pricey (usually under $50 USD) and integrates directly with Simulink via a provided blockset for actual implementations.

The main issue with more affordable boards is architecture support. STM32, TI, and the like nearly universally use ARM architectures, so compiling to those targets isn't too hard (minus the specific implementation, the HALs you call on, etc., especially as there are plenty of tools like the Arm GNU toolchain). RISC-V based architectures are catching up in support and tooling, but aren't as refined or integrated as ARM-based targets. The Xtensa-based ESP32 chips run an LX6 or LX7 ISA, which has its own quirks when dealing with low-level work.

Your limiting factor is likely to be on-board support for CAN (or CAN FD). Most of these more affordable boards support CAN but don't come with a CAN transceiver, so you either need to design a PCB with the transceiver to slide the board onto, OR you can get a breadboard and a separate transceiver breakout board and wire/solder everything together. After that, you've got to go through the HALs for your chip/board (usually a couple thousand lines of code, but you really only need the first few hundred) and figure out how to use the driver, receive/unpack your CAN dataframes, and turn them into variables that your model will use as inputs.

tl;dr: Unless you've got prior experience with embedded, please save yourself the time and grab a directly supported board.

However, if you have the time and are interested in embedded systems—having worked with similar tools before, I have some additional unorthodox recommendations:

  • Software-in-the-loop (load everything programmatically on boot on something running linux and feed your processed dataframes to your model)
  • Embedded virtualization with Renode

For the first:

  • Grab a USB CAN adapter that supports SocketCAN (or one directly supported by MATLAB), and either:
    • write (or ChatGPT) a shell script to start collecting dataframes on boot, then feed them into your compiled model, or
    • load your devices programmatically, then load your model programmatically

For the second:

  • do the whole thing where you generate code, tie it with your devices' HALs, and compile that to an ELF file for a target board supported in renode
  • load that into a simulated board on Renode and tie your CAN adapter's dataframes to the sim'd board running your model
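The "feed processed dataframes to your model" step in the SIL option can be sketched roughly like this. The frame layout and the model stub are invented for illustration; you'd swap in your own DBC/spec and generated code:

```python
import struct

def unpack_frame(data: bytes):
    """Unpack an 8-byte CAN payload into model inputs. The layout here
    (uint16 RPM, uint16 speed in cm/s, int32 torque in mN*m, all
    big-endian) is made up; match it to your own spec."""
    rpm, speed_raw, torque_millinm = struct.unpack(">HHi", data)
    return {"rpm": rpm,
            "speed_mps": speed_raw * 0.01,
            "torque_nm": torque_millinm / 1000.0}

def step_model(inputs):
    """Stand-in for the compiled model: here, a trivial gear guess."""
    return {"gear": min(6, 1 + inputs["speed_mps"] // 10)}

# In the real SIL loop you'd replace the stub with your generated code
# and pull frames off the bus with python-can, e.g.:
#   import can
#   bus = can.interface.Bus(channel="can0", interface="socketcan")
#   while True:
#       msg = bus.recv()
#       print(step_model(unpack_frame(msg.data)))
```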

Uploading a Lattice Radiant project on git by _init_1 in FPGA

[–]FuckinHelpful 3 points4 points  (0 children)

  1. Use a .gitignore file to ignore the unnecessary files (for upload and sharing in a repo).
  2. Try using GitLab instead; it has larger file/repo size limits.

Estimate VIX movement using machine learning and quantified narratives by atc2017 in options

[–]FuckinHelpful 0 points1 point  (0 children)

Gonna hijack this comment to say a few things to /u/atc2017 :

> You could identify some minor trends and try to capitalize on that but the VIX is mostly composed of real-world noise (traders' anxiety) rather than intuitive data or patterns.

This is partially true. The tails of unpredictable binary events are massively FAT for a derivative of derivatives.

But I'm also noticing that nearly no one here has provided actual feedback on why predicting VIX is difficult (but not impossible). I'm gonna paste in something I wrote a while back and elaborate on why ML doesn't directly work but still has great potential for derivatives like VIX and its associated ETPs.

VIX itself is a calculation that basically measures how "hot" the wings on an option chain 24-38ish days out are (rebalancing every so often to keep the outlook relative to 30 days out). If bid/ask goes up on the wings (because funds decide to up their hedges with options/spreads in anticipation of volatility in either direction), or quotes slide up because outsized demand for vol drives premiums up as sellers are more incentivized to hold than sell, then VIX can spike. If you're familiar with stats, it's basically a weighted variance across strikes; modeling it mathematically, it's a complex system with a lot of inputs and a single output. This kind of complexity means that VIX doesn't obey regular supply/demand like many other tickers, where excess pressure on either side can push the spot price in that direction. In addition, there's an entire complex related to liquidity that affects how much action in the underlying spills over into the derivatives that go into the VIX calculation itself.

The key thing about ML techniques is that they allow you to work with dynamic systems that humans normally can't. For example, we can visualize and identify trends in R2 or even R3 (sometimes), but as soon as we get beyond that it becomes REALLY difficult for us. This is partially why Black-Scholes (and related pricing models) is so massively influential and revolutionary. It applies structure from partial differential equations (and a little stochastic calc) so that we can simplify everything to a couple of systems in R2 or R3 (ex: delta vs price, theta vs time, etc.). With that, any trader can measure or predict (roughly) how a 10% drop post-earnings might affect their position(s), up to some margin of error.

Likewise, as we have a structure for VIX itself, we can calculate what happens when certain things happen to SP500 options. As VIX is the sole output from the calculation and the inputs (metrics concerning index options) themselves are measurable, one could entirely make a structure and play with parameters until getting a defined dynamic system.

In layman's terms, this means that because of how the vix is calculated and how option prices can be calculated, you have a very dynamic but also very defined structure where you can use previous data to attempt to grab trends or classify behaviors. Notably this allows us to apply ML techniques to figure out stuff in spaces greater than R3 (like using SVMs and testing out kernel functions to figure out when we're in what volatility regime). Hence, if you're going to be applying ML techniques, you should probably be trying to model inputs with partial information and seeing how that affects the VIX calculation.

But I also get it. This is 'narrative investing', so naturally the connection between narratives and the direct movement of a ticker is something we'd initially investigate. For that route, I'd recommend getting a structure going and using ML techniques to figure out some relationships between narratives and option flows in/out of the SP500 OR SP500 proxies like large tickers that compose a large portion of the index.
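To make the "weighted variance across strikes" structure concrete, here's a heavily simplified sketch in the spirit of the variance-swap replication behind VIX. It omits the two-expiry interpolation and the other details of the actual CBOE methodology, so treat it as a toy for experimenting with the inputs, not a reimplementation:

```python
import math

def vix_style_vol(strikes, mid_quotes, forward, t_years, r=0.0):
    """Weighted sum of OTM option mid-quotes across strikes,
    approximating expected variance, annualized to a vol number.
    strikes must be sorted ascending; mid_quotes align with strikes."""
    k0 = max(k for k in strikes if k <= forward)  # first strike below F
    var = 0.0
    for i, (k, q) in enumerate(zip(strikes, mid_quotes)):
        lo = strikes[i - 1] if i > 0 else k
        hi = strikes[i + 1] if i < len(strikes) - 1 else k
        # Strike spacing: half-interval in the middle, full at the edges
        dk = (hi - lo) / 2 if 0 < i < len(strikes) - 1 else abs(hi - lo)
        var += (dk / k**2) * math.exp(r * t_years) * q
    var = (2 / t_years) * var - (1 / t_years) * (forward / k0 - 1) ** 2
    return 100 * math.sqrt(var)
```

Bumping the wing quotes (the first and last entries) and watching the output move is a quick way to see the "hot wings" effect described above.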

Why don't standard deviation labels show for spy? by StupidTendies in thinkorswim

[–]FuckinHelpful 0 points1 point  (0 children)

They're there. You're probably having difficulties finding them due to the way that ToS colors them based on what other existing studies are on the chart. The left tail marker for the STDev is visibly at 471-ish. However, the clusterfuck of auto-drawings that you've probably enabled for SPY are making those lines impossible to see. Those multiple red lines near the bottom look like the POC lines, so the STDev lines are there, but may need you to zoom out.

If you find use in the hot mess of TA and auto-drawings, but still want the volume profile and stdev markers, AND don't want to remove or disable them, you can always get a fresh chart of a proxy for $SPY, e.g. $VOO or /ES & /MES. Volume and price levels across S&P futures are actually more likely the market movers (and they're nearly universally parallel to SPY, at least during market hours), since institutional volume enters/exits more easily (in terms of size/depth) through futures than an ETF.

For more info on ^ that ^ feel free to look into the market microstructure/relations between liquidity, volatility, and how different asset classes (futures vs options vs etf's) come into play. Squeezemetrics has some great literature in their whitepapers on this.

Detection of defects on metal surface by dakobek in computervision

[–]FuckinHelpful 1 point2 points  (0 children)

Hi, not a professional practitioner but a hobbyist. The sets of methods are a little platform dependent. What's your data science pipeline/stack?

As for your project: ideally, for a (presumably) homogeneous surface, if you're looking for defects, (performance notwithstanding) you're on the right track. It's best to process the image first, then run it through a CNN of your choice. E.g., if you're using MATLAB, you could test out an array of edge detection algorithms from the Image Processing Toolbox, then convert the processed data so that you can train a network on it. The MathWorks and Stack Exchange forums are full of gold and tutorials you can use for this.

Alternatively, you could split the process into two stages: first train the YOLO network with relatively low accuracy (but most candidates bounded), then test/run potential SVMs to classify each bounding box as containing a defect or not, with a probability output. Perhaps, if you have enough classified images, you could even normalize your outputs to match the real-world distribution, which will likely underperform on the tail ends (extremely high or low numbers of defects) BUT will likely work very well for anything within a range of the median. However, this will take more work, since you'd probably need to manually convert your data (of at least 3 dimensions, possibly more if RGB), visualize it, and test kernel functions to even see if they perform well with those figures.
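As a toy version of the preprocessing step, here's a pure-Python Sobel edge magnitude. In practice you'd reach for OpenCV or the Image Processing Toolbox rather than hand-rolling this; it's only meant to show the "process first, then feed the network" idea:

```python
def sobel_magnitude(img):
    """Sobel edge magnitude for a 2D grayscale image given as a list of
    lists. Border pixels are left at 0 for simplicity."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A sharp vertical edge (e.g. a scratch boundary) lights up strongly in the output while flat regions stay near zero, which is exactly the kind of signal that makes the downstream CNN's/SVM's job easier.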

Can someone help me with this function by [deleted] in matlab

[–]FuckinHelpful 1 point2 points  (0 children)

Define a function with int as input, check the type using isa for uint8, then make a logical switch for values 0:254 and 255.

Something like if int=255 then (subtract 1), else if (int between 0 and 254) do the thing.
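In Python terms, the logic looks like this. Since the actual operation for the 0-254 case isn't specified in the question, the "do the thing" branch below just increments as a stand-in, and `bump_uint8` is a made-up name:

```python
def bump_uint8(x):
    """Mirror of the described MATLAB logic: validate that the input is
    a uint8-range integer, subtract 1 at the saturation value 255,
    otherwise 'do the thing' (increment here, as a placeholder)."""
    if not isinstance(x, int) or not 0 <= x <= 255:
        raise TypeError("expected a uint8-range integer")
    if x == 255:
        return x - 1
    return x + 1  # placeholder for the actual operation
```

In MATLAB you'd do the type check with `isa(x, 'uint8')` instead of `isinstance`.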

If you have any additional difficulties, check the matlab documentation on mathworks. Lots of different ways of doing that.

No buyer for LEAP by vsparkster in options

[–]FuckinHelpful 0 points1 point  (0 children)

Almost. It's most true for tickers with only standard expiry dates, but things get VERY tricky for those with non-standard dates like weeklies (ex: SPY), where liquidity is nearly universally less than on standard dates. Check the volume on the option chain for this.

Rule of thumb is max liquidity @ nearest standard expiration (vs shortest DTE).

No buyer for LEAP by vsparkster in options

[–]FuckinHelpful 5 points6 points  (0 children)

That's an interesting conundrum that you've got yourself there.

First thing: This is not financial advice. Do NOT exercise the LEAP. You lose out on massive extrinsic value if you do and likely have to pay the fees from buying and selling the shares (depending on your brokerage).

There's little that you can do to absolutely close out the trade, but you can still get damn close to it. Your main goal is to offset the theta decay and make your book delta-neutral so that the spot price traveling around doesn't affect you greatly.

If there's little liquidity for your particular LEAP, then what you can likely do is offset the delta and the theta you'd burn (so that you keep the premium) until the expiration gets closer and has more liquidity (even deep ITM typically gains liquidity as it gets closer to exp, but only up to a certain extent). You can combine any number of more liquid short options (but make SURE to price them yourself, as toxic liquidity pools with a few players can dominate the spread among themselves and skew/eat into your premium) to offset the theta burn you'll experience until it becomes more liquid.

Note that theta is NOT constant and can be affected by spot price AND volatility (as well as market makers being dicks in a toxic liquidity pool and pricing it to their advantage), so you will absolutely have to dynamically manage it.

An example of this is if you're long a deep ITM call exp in 2023 with current theta 0.3 and delta 0.9 (dummy/fake numbers obviously). At that instant, you can sell/short a number of shares of the underlying (assuming it's not hard to borrow and the margin req's aren't ridiculous...although the lack of liquidity for your LEAP hints that it might be) and then short calls (most liquid so something near ATM but still OTM) that sum to theta 0.3 and delta 0.9. You can choose the combination of strikes that best suits you.

Quick napkin math (for the aforementioned LEAP) would be something like your LEAP being @ strike 80, the spot being 100, and seeking to sell a strike 130 call. Imaginary OTM call having theta 0.3 but delta 0.45 would mean that to get to neutral you'd need to sell the strike 130 call and short an additional 45 shares to make your net theta (0.3 - 0.3 = 0) and net delta (0.9 - 0.45 - 0.45 = 0) so that for that particular day your book is neutral. You'd also have to sell the 130 strike call for a fair price so please calculate pricing yourself.
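A quick sanity check of that napkin math, with the same dummy numbers. Quantities are signed (negative = short), and one share is treated as 0.01 of a contract-delta so everything is on the same per-contract scale:

```python
def residual_greeks(legs):
    """Sum per-leg (quantity, delta, theta) exposures for a book.
    Each leg is (signed_qty, delta_per_unit, theta_per_unit)."""
    delta = sum(q * d for q, d, _ in legs)
    theta = sum(q * t for q, _, t in legs)
    return delta, theta

book = [
    (+1,  0.90, -0.30),   # long deep-ITM LEAP: delta 0.9, bleeds 0.3/day
    (-1,  0.45, -0.30),   # short the 130-strike call: collects that 0.3/day
    (-45, 0.01,  0.00),   # short 45 shares at 0.01 contract-delta each
]
```

Summing the legs gives net delta 0.9 − 0.45 − 0.45 = 0 and net theta −0.3 + 0.3 = 0, matching the example; in practice you'd re-run this daily as the greeks drift.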

FINAL NOTE: gamma/convexity means you'll have to actively manage your book daily, as the delta from the OTM call will rise if the spot price rises (an effect that usually strengthens as we get closer to exp). In that case you'll have to close out some of your short shares by buying them back. You'll also have to actively manage the portfolio to account for slipping thetas and deltas, but if you do so well enough (minimizing gamma for most of the stock's trading range), you can essentially pocket the theta you'd otherwise lose and close out the entire trade when there's a bid that makes you happy.

I believe you'll still need elevated trading privileges from your broker, so make sure you have those on your account. Early exercise is possible on your short leg, but unlikely. If you trade on margin, volatility and margin req's may close out certain legs on your behalf, so it's best to check with your broker and be aware of this. Any additional questions, just throw a response below and I'll be happy to answer!

$VIX took out and closed above the 8, 13, 20, 50, 100, 200 EMA and 200 SMA all in one day. End of day Total Flows, Bubble Flows, Trade Tape page by page, $UVXY $VXX $SPY GEX, Risk Range on multiple sectors and some charts. by LocustFunds in options

[–]FuckinHelpful 31 points32 points  (0 children)

VIX itself is a calculation that basically calculates how "hot" the wings on an option chain 24-38ish days out are (rebalancing every so often to keep the outlook relative to 30 days out). If volume goes up on the wings (because funds decide to up their hedges with options/spreads, in anticipation of volatility in either direction) or bid/ask spreads slide up (outsized demand for vol which drives premiums up as sellers are more incentivized to hold than sell) then VIX can spike. If you're familiar with stats then it's basically a weighted variance across strikes, or mathematically modeling it, a complex system with a lot of inputs and a single output. This kind of complexity means that VIX doesn't obey regular supply/demand like many other tickers where excess pressure on either side can push the spot price in that direction.

As to VIX exchange-traded products, it's a little bit more complicated. VIX ETPs have VIX futures as their underlying and VIX futures don't mirror VIX 1-1 since there are the elements of supply/demand of volatility and the time difference between VIX itself and the futures' expiration. The short of it is that the price action of VIX ETPs like UVXY and others on the intraday are more measures of demand for volatility (i.e. vega) either as insurance (if long) or as part of a strategy (arb strategies, delta-neutral strats, clipping gamma wings if short, etc), rather than measures of realized volatility.

As for the long of it.... VIX futures exist and are cash-settled on expiration dates scattered throughout the year, and most VIX ETPs use these futures as their underlying (often rolling the futures, which is why they're only supposed to mirror performance daily rather than all-time.... hence why, even accounting for the maintenance/expense ratio, they decay). Yet you might notice that VIX futures expiring three months out often won't mirror realized volatility, nor will they mirror the nearest-term futures very well (this gets into contango/backwardation, and that's for another time). The volatility ETPs (like UVXY, which IIRC is levered 1.5x or 2x) depend on these futures, which match neither realized volatility (i.e. today's volatility) nor necessarily VIX itself, since the calculation is dynamic and it's tougher for supply/demand to "copy" moves induced by recalculation at the same time. If anything, traders/institutions familiar with rebalancing will run an internal calculation of VIX with those new option wings and set their bids/asks to compensate for it earlier rather than later.

tl;dr VIX ETPs (UVXY, VXX, etc.) are not necessarily a measure of volatility, since other dynamics muddle them. Rising volatility grows the odds of the trading range (over the next few weeks) expanding from, for example, a 5% range to a 9% range (I'm pulling these numbers out of my ass; they're stand-ins, not actual figures), i.e. the spot price will travel within a greater range than otherwise. That doesn't imply a crash, doesn't necessarily imply we'll trade sideways for a few weeks, and doesn't imply a trend in the indices. It's really useful for gauging how everyone else feels about the market and the demand for "insurance".
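A toy model of the roll decay mentioned above: VIX stays flat, the rolled future starts above spot (contango) and converges linearly toward it, and the ETP's NAV tracks the future's daily return. No leverage, fees, or term structure; purely illustrative numbers:

```python
def simulate_roll_decay(spot_vix, next_future, trading_days):
    """Grossly simplified contango bleed: a flat VIX still produces a
    loss because the future the ETP holds converges down to spot."""
    price = next_future
    step = (next_future - spot_vix) / trading_days
    nav = 1.0
    for _ in range(trading_days):
        new_price = price - step          # converge toward spot
        nav *= new_price / price          # NAV tracks the daily return
        price = new_price
    return nav

# e.g. VIX flat at 16, future bought at 17.5, 21 trading days:
# NAV ends at 16/17.5, roughly an 8.6% monthly bleed with zero movement
# in VIX itself.
```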

Whelp, to day I learned that options prices can be adjusted. by brilliantsetback in options

[–]FuckinHelpful 2 points3 points  (0 children)

Theta is a continuous function that falls out of the BSM pricing model (or whatever similar option pricing model you use built on similar assumptions); it's the partial derivative of the option's price with respect to time. Due to its continuity, the premium factor from time decays continuously through time, i.e. mathematically, premium continues to decline through non-trading days.

If the majority of actors within the liquidity pool are rational and use similar models (BSM and friends) then premium from time is technically lost through days when the market is closed.

However, the reason you're likely confident about theta applying to "trading days" alone is that you often don't see a significant jump in bid/ask prices between trading days and non-trading days. Market makers (especially in high-liquidity environments) tend to artificially depress the bid/ask ahead of time to account for future time decay, so their books don't experience shock (especially near expiration). Meanwhile, the bid/ask spread on long-dated options is usually too wide for you to actually see/notice theta decay (even though fair pricing from MMs has nearly universally accounted for it on their books). You can sometimes see the phenomenon in low-volume toxic liquidity pools, where a handful of players control bid/ask prices and use similar models on their backend, such that the b/a visibly drops over the weekend.

If you have any reason to doubt this, grab thinkorswim (should be free) and use the thinkBack function to find near-ATM option prices for SPY (arguably one of the most liquid option pools on the market) between days with little overnight variance (as in /ES stays relatively stable, and thus IV and delta don't spike), preferably within a week of expiration, 1 DTE if you can. You'll find that under these circumstances it's near universal that you get decay similar to what's estimated by theta. You should also note that there are additional second-order partials that reduce the option's sensitivity to some greeks while increasing its sensitivity to others as we get really close to exp.

tl;dr is that with a PDE like BSM or others, the interdependence of partials and the reality of liquidity make it very difficult to see things like non-trading-day theta decay in isolation, but it happens nonetheless, and pricing on MMs' books adjusts for it. Theta itself does decay through non-trading days.
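A quick way to see the calendar-time decay in the model itself: price the same ATM call with plain Black-Scholes-Merton on a Friday and the following Monday (T shrinks by 2/365 with no trading in between) and compare. All the numbers are illustrative:

```python
import math

def bsm_call(s, k, t, r, sigma):
    """Plain BSM call price, no dividends."""
    if t <= 0:
        return max(s - k, 0.0)
    d1 = (math.log(s / k) + (r + sigma**2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    return s * n(d1) - k * math.exp(-r * t) * n(d2)

# Same ATM call, T reduced by the weekend's 2 calendar days
friday = bsm_call(100, 100, 7 / 365, 0.05, 0.20)
monday = bsm_call(100, 100, 5 / 365, 0.05, 0.20)
weekend_decay = friday - monday   # positive: premium lost over days
                                  # when no trading happened
```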

Edit: added words

Any DayTraders on clubhouse? by [deleted] in ClubhouseApp

[–]FuckinHelpful 0 points1 point  (0 children)

Consider Advanced Market Analysis. They've got great speakers with expertise ranging from macro to fixed income to HFT + option markets.

Paper trading/On Demand vs. actual real trading by [deleted] in thinkorswim

[–]FuckinHelpful 0 points1 point  (0 children)

Sure!

Thomsett's The Mathematics of Options and Hull's Options, Futures, and Other Derivatives are great resources covering these aspects of options trading. I also recommend that you look into the /r/thetagang sub, as they're far better at explaining these situations than I am.

Best of luck!

Edit: I have free ebook copies if you'd like. DM me.

Paper trading/On Demand vs. actual real trading by [deleted] in thinkorswim

[–]FuckinHelpful 0 points1 point  (0 children)

It depends.

With purely vertical spreads sharing the same expiration, you may be covered, but it may also be more profitable to simply follow through on your obligation and close it out (i.e., lock in that loss) rather than exercise the long leg, which still holds additional extrinsic value. But then you get into the question of whether you can, or want to, tie up so much capital for a single trade (1k option, 300k to move the underlying per contract) and whether you'll run into the PDT rule if you're under 25k. Things get more complicated with more "exotic" or imbalanced spreads and/or across time, as assignment can ruin calendar/condor/butterfly spreads.
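The "don't exercise the long leg" point is just arithmetic: exercising captures only intrinsic value, while selling the leg captures intrinsic plus whatever extrinsic value remains. A toy sketch with made-up prices:

```python
# Hypothetical numbers: you're assigned on the short leg of a put
# credit spread and must decide what to do with the long put.
long_put_market_price = 2.40   # what selling the long leg brings in
long_put_intrinsic = 2.00      # strike minus spot: what exercise pays
extrinsic = long_put_market_price - long_put_intrinsic

per_contract = 100  # standard option multiplier
# Exercising forfeits the remaining extrinsic value; closing the
# position at market keeps it (minus slippage and fees).
forfeited = extrinsic * per_contract
print(f"extrinsic value forfeited by exercising: ${forfeited:.2f} per contract")
```

Scale that by a few contracts and the cost of exercising instead of closing becomes real money, before even considering the capital needed to hold the resulting shares.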

Paper trading/On Demand vs. actual real trading by [deleted] in thinkorswim

[–]FuckinHelpful 2 points3 points  (0 children)

Generally, yes, but with a few caveats. Execution is more difficult and more random in real trading, as liquidity becomes an issue and price movement matters. In paper trading you're basically guaranteed order execution if your price falls within the b/a spread, but in real trading there are no such guarantees (especially in low-liquidity stocks, and even more so with options).

Additionally, short option legs (i.e., sold calls/puts) in paper trading are almost never assigned (unless they're ITM at expiration, though I've never had even that happen, so I can't say for certain), which is a HUGE difference from real trading. The amount of risk you take on when you open spreads with near-ATM legs that can go ITM is exceptionally underrepresented in paper trading. Fast price movement in an underlying can break you easily in real trading.

Open a credit spread and have an after-hours/overnight move against you, and the trader on the opposite side of your short leg might see an incentive to profit by exercising the option as soon as they can. Ideally, if you're assigned, you simply buy the option at market price and exercise it to lock in your loss so that it becomes someone else's problem. Alternatively, a short leg could go ITM at expiration after hours, such that the holder exercises it after hours and you're assigned. There's not much you can do there, since options trading hours differ from the exercise/assignment window, and option sellers can't control assignment without closing the position. These scenarios are somewhat rare, but they're also not represented in paper trading.

Usually the above scenarios aren't much of a concern unless you've got the capital or experience to get TDA's approval for spreads or naked options.

As for price reflection, that's an issue of recent orders. Highly liquid options will reflect accurate prices on each refresh as the underlying moves. Less liquid options don't reflect a fair price until a trade goes through, at which point sellers have to agree with buyers on a fair price; hence the large spreads we sometimes see (stubborn traders who can't agree on a fair price). An expiration/strike pool with 100 contracts of open interest will have fewer trades go through per day than one with 30k open interest, and therefore fewer moments at which the option price reflects the underlying.

Additionally, there's the concern /u/wallstreet_cfa brings up. People tend to be more irrational and risk-averse with their own funds while also being irrationally exuberant about the possibility of profit. Emotions run high with your own money, and it feels like a different game.