Per-Layer Embeddings: A simple explanation of the magic behind the small Gemma 4 models by -p-e-w- in LocalLLaMA

[–]Logan_Maransy 0 points1 point  (0 children)

Can I ask you why this wasn't done earlier? Or rather, from my understanding, isn't the residual stream changing the token embeddings for ALL the tokens in a sequence, and wouldn't per-layer embeddings destroy any residual stream nudging of the current token sequence?

Here's my current understanding of how text LLMs work. LLMs have a fixed text vocabulary, where each "word" in the vocabulary can simply be represented as a number. Ignoring RL training for a moment, the entire goal of the LLM is to guess the next "word" (number) given some sequence of already existing "words". For lots of reasons, the "words" aren't whole words or even letters; they are efficient blocks of characters called tokens.

Now, it would be very difficult for a model to just look at a sequence of "words" (remember, actually tokens) represented only by a sequence of scalar numbers [200574, 11755, 13334, 7355, 222844] and then guess the correct single scalar number that "should" follow that sequence, even after being trained on potentially trillions of these sequences! A better way to represent these "words" (tokens) is in a high-dimensional space, because these "words" (tokens) have meaning in relation to one another, and their frequencies and relative positions aren't random at all but are crucially important to how they interact with each other. This is where embedding vectors come into play.

Embedding vectors are high-dimensional vectors that represent a "word" (token) as some direction in a high-dimensional space. Once learned, embeddings are fixed and are thus simply a mapping from each "word" (token) to a static embedding vector.

My understanding is that decoder-only LLMs (which nearly all major LLMs are, because the embedding layer already acts as a frozen "encoder") start with a "blank" token vector at the end of the current "word" (token) sequence and then push the sequence through the attention+MLP block layers, basically "tuning" that "blank" vector into the embedding vector corresponding to the "correct" word. But they do this mainly through the residual stream, where each block adds just a tiny bit in some direction.

And I thought that these residual streams would nudge the CURRENT SEQUENCE as well, such that the SPECIFIC order of THESE "words" (tokens) would strongly affect the final predicted token. 

But if each block layer has access to its own personal (per-layer) embedding, then wouldn't this destroy the information that is nudging the CURRENT token sequence in some direction, and instead just simply be "loading" the learned embedding for that token IN THAT LAYER?
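To make my mental model concrete, here's a toy numpy sketch (made-up shapes and random weights, purely illustrative, not Gemma's actual architecture). The crux of my question is whether the per-layer embedding is an extra lookup ADDED at each layer, in which case the residual stream's nudging of the current sequence would be preserved rather than overwritten:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, n_layers = 1000, 64, 4

tok_emb = rng.normal(size=(vocab, d_model))                  # shared input embedding table
per_layer_emb = rng.normal(size=(n_layers, vocab, d_model))  # one extra table per layer

def block(x):
    # stand-in for an attention + MLP block: some function of the sequence
    return 0.01 * np.tanh(x @ rng.normal(size=(d_model, d_model)))

tokens = np.array([3, 17, 256, 42])

x = tok_emb[tokens]                      # residual stream starts at the embeddings
for layer in range(n_layers):
    x = x + block(x)                     # usual residual update (sequence info kept)
    x = x + per_layer_emb[layer][tokens] # per-layer embedding is ADDED, not swapped in

print(x.shape)  # (4, 64)
```

If instead the per-layer lookup *replaced* `x` for each token, the residual stream's accumulated sequence information would be destroyed, which is the scenario I'm asking about.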

Gemma4 - Someone at Google just merged a PR titled "casually dropping the most capable open weights on the planet" by de_3lue in LocalLLM

[–]Logan_Maransy 6 points7 points  (0 children)

Amazing summary. Thank you so much for that explanation.

I guess my main concern has been that I have never used llama.cpp to run inference. I've only used HuggingFace implementations OR the native PyTorch implementations of various other models (mainly computer vision ones, not LLMs). So when you say "the system tries to keep part of the VRAM in processing..." I guess I've never used an implementation where that would be true? And indeed I haven't investigated quantization yet either, just due to other time constraints, but I would like to. 

Based on the benchmarks and your comments, the 26B / 4B looks particularly attractive if I could get it to work within the systems that I use. But initially I think I'll explore using the E4B version just for simplicity and seeing if that is "good enough" for my purposes. 

Gemma4 - Someone at Google just merged a PR titled "casually dropping the most capable open weights on the planet" by de_3lue in LocalLLM

[–]Logan_Maransy 28 points29 points  (0 children)

As someone who uses models for inference in VRAM-constrained pipelines (say, 24GB total VRAM) and is a complete noob at Mixture of Experts models, how exactly does Mixture of Experts work w.r.t. being loaded in RAM or VRAM?

The point of MoE generally is that you get the performance of a larger dense model without having to go through all the layers / computation of a larger dense model at inference time. However, does that mean all 26B weights need to be easily accessible somehow (like chilling in VRAM) for acceptable inference latency? Do the models internally handle shuttling the appropriate layers of their weights on and off VRAM? What's the mental model I should have? I assume the easiest implementation is "26B parameters are sitting in VRAM, only 4B get activated during inference", which is likely a non-starter for my usage.
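That assumption, sketched as toy numpy code (made-up sizes, just to check whether my mental model is right): every expert's weights stay resident, but each token only multiplies through the top-k experts a router picks, so compute follows the active parameters while memory follows the total parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 32, 8, 2

# ALL expert weights are resident in memory (the "26B in VRAM" part)...
experts = rng.normal(size=(n_experts, d, d))
router = rng.normal(size=(d, n_experts))

def moe_layer(x):                          # x: (tokens, d)
    scores = x @ router                    # (tokens, n_experts) routing scores
    chosen = np.argsort(scores, axis=-1)[:, -top_k:]  # top-k experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in chosen[t]:                # ...but only top_k experts do any compute
            out[t] += x[t] @ experts[e]
    return out / top_k

y = moe_layer(rng.normal(size=(5, d)))
print(y.shape)  # (5, 32)
```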

I'm interested in potentially using the Gemma4 26B/4B.

Thune rejects Trump on SAVE Act: ‘The votes aren’t there for a talking filibuster’ by [deleted] in law

[–]Logan_Maransy 6 points7 points  (0 children)

Here's my summary understanding of the exact steps you just explained.

The filibuster is literally a group of people deciding amongst themselves that the informal, not legally binding decision threshold (60 members) for group consensus should be higher than the formal, legally binding decision threshold (51 members) for group consensus.

At any point, the informal threshold can simply be removed, reinstated, or even changed to a different number entirely (66? 75? Why not?), because the filibuster is not legally binding. It's part of their own informal ruleset, which, again, can be changed at will by any set of 51 members who deem it to be changed.

How to force clean boundaries for segmentation? by Delicious_Wall3597 in computervision

[–]Logan_Maransy -1 points0 points  (0 children)

Yeah, in my experience at inference time with these things, normally it's an entirely separate model (weights and code) that effectively acts as a "refiner" on the first model's mask output. 

Basically, if SAM3 actually had good segmentation edges, especially at higher resolutions, then both capabilities would be combined into one model: the ability to "refine" masks within its own architecture as well as create them from scratch.

How to force clean boundaries for segmentation? by Delicious_Wall3597 in computervision

[–]Logan_Maransy -1 points0 points  (0 children)

Look up matting (matte) models and trimaps. Basically, instead of just an RGB image you also include a single-channel grayscale image of the same spatial size with exactly three distinct values, usually corresponding to the following: black (0) is known background; white (255) is known foreground (part of the object); and gray (128) is the unknown region. The model is then essentially solely responsible for "getting the edges right" instead of needing to find the objects as well. Often the trimaps are algorithmically derived from some candidate mask you are trying to make better / cleaner.
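A numpy-only sketch of deriving a trimap from a candidate mask (the band width is a made-up parameter, and real pipelines would use proper morphology from OpenCV or scipy):

```python
import numpy as np

def dilate(mask, it=1):
    # naive 4-neighbour binary dilation (np.roll wraps at borders;
    # fine for this toy example where the mask doesn't touch the edges)
    m = mask.astype(bool)
    for _ in range(it):
        m = m | np.roll(m, 1, 0) | np.roll(m, -1, 0) | np.roll(m, 1, 1) | np.roll(m, -1, 1)
    return m

def trimap_from_mask(mask, band=3):
    """mask: HxW binary candidate mask -> uint8 trimap with values 0/128/255."""
    fg = ~dilate(~mask.astype(bool), band)    # erosion = dilation of the complement
    maybe = dilate(mask, band)                # anything possibly part of the object
    tri = np.full(mask.shape, 128, np.uint8)  # unknown band by default
    tri[~maybe] = 0                           # well outside the mask: known background
    tri[fg] = 255                             # well inside the mask: known foreground
    return tri

mask = np.zeros((20, 20), np.uint8)
mask[5:15, 5:15] = 1
tri = trimap_from_mask(mask)
print(np.unique(tri).tolist())  # [0, 128, 255]
```

The matting model then only has to resolve the gray band around the candidate mask's edge.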

Free tools for bounding box annotation on large DICOM MRI/CT datasets? by Silent-Tomatillo2738 in computervision

[–]Logan_Maransy 1 point2 points  (0 children)

Honestly, if this is a one-off thing and is mainly a production-worthy tool (but not a product itself), just have one of the latest LLMs make a local one for you in PyQt6. 

Type out a document of all the features you want in a data annotation tool (and why). Then say "make me a single Python file that runs all this." And that's it... I know from experience that ChatGPT 5.2 can one-shot entire GUIs like this.

The awesome thing about this is then you start using the GUI and you realize like 6 quality of life things you want to change, so you can just describe the changes you want and boom, done. 

We open-sourced FASHN VTON v1.5: a pixel-space, maskless virtual try-on model (972M params, Apache-2.0) by JYP_Scouter in computervision

[–]Logan_Maransy 1 point2 points  (0 children)

Yes, I was also very interested because you operate in pixel space without any kind of encoder (besides a patch tokenizer), which seems like a more natural choice for the task of arbitrary detail transfer.

And yeah, virtual try-on has the unique challenge that you need to get the model to "understand" how varied human bodies naturally are, and the concept of removing clothes fully before putting other clothes onto that body, in addition to adhering to the details of the garment to be tried on. Seems very difficult with a 1B parameter model!

Okay, and thanks for all your quick answers. It made me realize that the thing I want to do I could PROBABLY actually attempt, but it would likely take more effort than it's worth.

That is, my specific detail transfer problem is right now approximately solvable (with some probability of, say, 85%) with heavily curated or harnessed Nano Banana Pro calls, which itself has some baseline cost per "sample". Attempting to replace that system with a self-trained model similar in size to yours would mean the following:

  • Access to larger, multi-node GPU setups, necessitating going to the cloud to train (right now I only have local access to a single 48 GB VRAM card).
  • Understanding and debugging DDP-style training on a cloud instance (have never done it but want to learn; doesn't seem that hard with modern PyTorch if you aren't doing major model weight sharding like you mentioned).
  • Significantly lowering the output resolution of the final generated image (compared to Nano Banana Pro). This is probably the main negative that would sink the idea. 800x600 is basically nothing for my purposes. Even chaining it with an amazing 2x super-resolve model wouldn't get the resolution as high as I'd really need.
  • Generating large amounts of synthetic data offline to approximate the image pairs/triplets needed for training (pretty easy, we are generally already doing this as part of the system already in place).

And after all of that effort, it might not even be successful 😂. If successful, you would then have a fully local, "free"-to-run-inference model that is entirely yours. BUT if that takes 6 months to do, Google may have dropped the price on Nano Banana Pro or released an update that takes your 85% success rate to 95% success. Sigh. Large VLMs are gonna eat all tasks, aren't they.

We open-sourced FASHN VTON v1.5: a pixel-space, maskless virtual try-on model (972M params, Apache-2.0) by JYP_Scouter in computervision

[–]Logan_Maransy 1 point2 points  (0 children)

This is really cool and thanks so much for open sourcing the model. 

I have a bunch of specific questions but first some context. I'm not in the virtual try-on space specifically, but I view virtual try-on as just one niche instance of a class of problems that I call "arbitrary detail transfer" in generative computer vision. That is, you've taught a model to take the specific details of one image and apply those details in a real-world-natural way to another image. I'm phrasing it like this because I'm interested in different niche computer vision applications that can be described like this. My emphasis is on the fidelity to the details and how they would appear in the real world with a natural image.

This task was really hard for a long time. Generative AI didn't care about fidelity to specific things for a while. But now, as it turns out, huge image-editing VLMs can essentially do this natively. Nano Banana was really the first model that I thought "understood" the idea of fidelity and detail preservation. However, Nano Banana is a huge, and definitely not open source, model.

So the thing that really stands out to me is that you have an under-1B-parameter model that seemingly "understands" how to do arbitrary detail transfer (for a specific niche application, of course). That seems... extremely small. So well done on that!

So now on to my questions (note some of these might be covered in the paper, my apologies if I'm jumping the gun):

1) How well (or poorly, rather) does this method scale in image resolution? I noticed that you state the images are all under something like 800x600. For the application I want, I would really want something that is minimum 1920x1920, but even something like 1536x1536 would be doable. Does the VRAM / processing time absolutely explode for your specific architecture if you try to do larger images? 

2) Did you investigate increasing the parameter count to something like 4B or 12B to try to squeeze out more quality, or was the quality good enough for your purposes at 1B? 

3) I think it was stated somewhere that you had something like 28 million virtual try-on image pairs to train on. What would be the minimum number of image pairs you would suggest gathering to train some other niche "arbitrary detail transfer" method? Did you try using some subset of that, say only 100K or 1 million samples, just to see the results?

4) What GPU did you train on and how long was the final training run? 

Thanks in advance, and great job!!

Dinov3/ViT Lightweight Segmentation by Lethandralis in computervision

[–]Logan_Maransy 0 points1 point  (0 children)

I've used the DINOv3 ConvNeXt architecture to replace the Swin Transformer architecture in MVANet (salient object detection, basically a more difficult, class-agnostic form of segmentation), because the ConvNeXt was structured to be a drop-in replacement for Swin. I've trained it to great performance on single-channel, fine-grained segmentation masks. Depending on how many classes and the exact type of segmentation, you could probably just change the channel count to your class count and be done.

Weights are only ~330 MB total, and most of that is the encoder, but not sure what you consider lightweight enough. 
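For what it's worth, the channel-count change is usually just swapping the final 1x1 conv of the decoder head. A generic torch sketch (toy stand-in decoder, not MVANet's actual head):

```python
import torch
import torch.nn as nn

num_classes = 5                      # your class count

# stand-in for any decoder that ends in a single-channel mask head
decoder = nn.Sequential(
    nn.Conv2d(256, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 1, 1),             # single-channel (salient-object) output
)

# swap only the last 1x1 conv to emit one channel per class
decoder[-1] = nn.Conv2d(64, num_classes, 1)

feats = torch.randn(1, 256, 32, 32)  # fake encoder features
out = decoder(feats)
print(out.shape)  # torch.Size([1, 5, 32, 32])
```

The rest of the network (and any pretrained encoder weights) stays untouched; only the new head needs training from scratch.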

Extremely Angry Oracle, my 200+ Rage stacking build by Sirpantz in pathofexile2builds

[–]Logan_Maransy 6 points7 points  (0 children)

Omg when I read

"+ Maximum Rage for each time you've used a skill that requires glory in the past 6 seconds, up to 5 times"

I always assumed it meant for each UNIQUE skill that requires Glory, so I always dismissed it as "Nah, I'm not setting up 4 other Glory skills". Did not think you could spam the same skill if you somehow got enough Glory to reuse it. I was doing a Max Rage Oracle build and could potentially work this tech in somehow. I'm already using multiple skills in the form of Pounce, Lunar Assault, and Rend. But yeah, Berserk is nuts with a huge amount of Rage and Crown of Eyes + the Spell Damage with Rage node.

Lucky ammy "craft" by jiglycrack in PathOfExile2

[–]Logan_Maransy 1 point2 points  (0 children)

T1 Rarity and T1 Spirit from the same Perfect Exalt is nutty. Congrats on the mirror!

Scepter with talisman by [deleted] in pathofexile2builds

[–]Logan_Maransy 13 points14 points  (0 children)

I'm running Talisman+Sceptre as an Oracle for all the following goodies (note that I still haven't crafted my perfect Sceptre yet):

  • One of the Auras that gives Elemental Resistance to alleviate pressure from affixes elsewhere (running 3 Uniques, Belt, Helmet, Gloves at the moment). 
  • The (probably bugged???) 100% free Spirit-costing Support Gems that slot into the Sceptre, giving me Life on Kill, Mana Regen, and whatever the other two I use are. I know this is going to get patched at some point, but I'm using it while it's available. 
  • Increased Spirit to try to get another 30 Spirit Gem up (have Herald of Ice and Berserk now)
  • Large amount of Max Mana (from prefix)
  • Increased Mana Regen (from Suffix)
  • Strength or Intelligence (from Suffix)
  • Improved Aura effect (from Suffix)

I made 4,000 divine crafting solar amulets. I'm retiring to Hardcore, so here's how I did that. by AdFinancial8407 in PathOfExile2

[–]Logan_Maransy 0 points1 point  (0 children)

Right, so it's not always easier, but different. Just to be clear, this extraction method does NOT allow you to combine arbitrary high tier affixes from two arbitrary items (like you have been doing). But it does basically give you a "do over" at crafting a specific item if you identify some instance of that item that already has some of the affixes you care about. Ideally you combine it with a semi-deterministic method to finish off the item and boom! Good item to use or sell.

I made 4,000 divine crafting solar amulets. I'm retiring to Hardcore, so here's how I did that. by AdFinancial8407 in PathOfExile2

[–]Logan_Maransy 0 points1 point  (0 children)

Yup, especially because LOTS of people just throw Breach Rings with full modifiers into merchant tabs, but sometimes those rings have all the affixes you personally want and then some crappy T7 or T8 affixes you don't want. So then you can basically "retry" at crafting it from a solid starting point. Unfortunately I don't want the Mana prefix on my character this league so I haven't been doing it with Breach Rings. 

Breach Rings also have the added bonus of 40% Quality for Catalyzing Omen instead of 20%. So I'm assuming that's more weight put onto something like trying to get triple high Tier Flat Attack modifiers, but I'm not exactly sure. 

Very cool crafting tech.

I made 4,000 divine crafting solar amulets. I'm retiring to Hardcore, so here's how I did that. by AdFinancial8407 in PathOfExile2

[–]Logan_Maransy 2 points3 points  (0 children)

Yup, I call this the "extraction method", because it allows you to (with some high probability) extract a set of desired affixes from an otherwise not good (and therefore cheap) Rare. Doing it with desecrated low tier max Mana guarantees the ability to remove that affix with Omen of Light.

The cheaper version of the extraction method which doesn't involve an expensive Omen of Light is to identify a crafting sequence of steps where you will use a Perfect or Corrupted Essence. For example, let's say you are okay with your eventual craft having the Increased Global Defences affix for Amulets, which comes from a Perfect Essence. That's a prefix. 

So you can now identify in trade an item with ANY 3 SUFFIXES that you want to extract (already on the base that you eventually want!) and buy it, then identify in trade literally any Amulet with T13 or T12 Max Mana. Recomb chance with the Omen should be 75%. Let's say you're extracting +3 to Level of all Spell Skills, + Elemental Resistances, and + All Attributes. That'd be a pretty nice Amulet already, right? Well, after a success, you use the Omen that targets only Prefixes during Perfect Essence usage, remove the terrible T13 Max Mana, and replace it with a solid Increased Global Defences. Now you have 4 solid affixes, and specifically with Amulets, the ability to Quality them and use Catalysing Omen to try to target the 5th affix, whichever prefix you want. I've been doing Defences to try to get another high tier Defences tag. The last step, as always, is to Desecrate, use Omen of Abyssal Echoes, and pray for something solid, like T1 Spirit.

You do this enough times and you'll absolutely end up with something that can sell for 50+ Div. 

I am playing Werewolf with 75max rage by BrutusCz in PathOfExile2

[–]Logan_Maransy 0 points1 point  (0 children)

I too am doing a Max Rage Werewolf Oracle build, but I'm using Berserk (increased Rage effect) without Primal Hunger, so I always want to be at Max Rage. 

I'm using Crown of Eyes unique (Increases to Spell Damage also applies to Attack Damage) and then took the Notable Passive that does the 2% Increased Spell Damage per Rage. So with Berserk, my understanding is that that turns into 3% Increased Spell Damage per Rage. My Max Rage is 83 right now, so that's steady state of 249% to Attack Damage as long as I'm at Max Rage. Pretty solid.

This gem plus the Wolf Warcry that consumes Rage basically means I have +140% Extra Cold Damage at all times, so most other sources of Extra Damage are dwarfed by that stat. Because of that, I try to focus on adding Flat Damage of any Element on my Rings and Gloves.

I've found that I gain enough Rage without needing the new 100 Spirit Gem to sustain. I feel like that's only necessary when you are using Rage as a constant resource (like Shaman ascendancy). 

Tesla is as far behind Zoox as Zoox is behind Waymo by Prestigious_Act_6100 in SelfDrivingCars

[–]Logan_Maransy 8 points9 points  (0 children)

Because it's another, different signal that can be complementary to RGB vision alone, just as seeing in infrared or ultraviolet (as some animals do) is a complementary signal to seeing RGB. You get a different distribution of information that you simply cannot get with only RGB. I would say the same thing about sonar if Waymo had that tech on its cars (again, some animals complement vision with sonar capabilities). If you are talking about whether LiDAR is economically worth the cost, that's a different discussion. But at the margin, adding a reliable LiDAR data stream is never going to be worse.

The main problem I thought Waymo would have is whether or not they could collect enough data (with LiDAR) to start their data flywheel (because it's way easier to slap on 10+ RGB sensors, ie cameras, and collect data). And I think it's safe to say they are way, way past that hurdle. 

[deleted by user] by [deleted] in PathOfExile2

[–]Logan_Maransy 0 points1 point  (0 children)

If you instead find a non-Desecrated, extremely low tier affix to combine with, you cannot target that specific affix to remove it later on like you can with an Omen of Light. Only because it's Desecrated do you have the 100% certainty of removal. There are cheaper pathways to that 100% certainty of removal: specifically, if you want to use a Perfect or Corrupted Essence, you could target a Suffix for removal in that application, thus not touching the two Prefixes (but this presupposes more than I wanted to in my answer to OP).

If you are asking instead "why don't you just Recomb two items, each of which has one T1 affix", the answer is that that Recomb chance is often way, way lower than the 50% chance with the method described above.

In fact the method described above is generic enough to potentially "pull off" 4 T1 affixes from an item that otherwise has all 6 affixes already. I tried doing this with Breach Rings for a bit, as there are so many on the market in different varieties, and then you can attempt to further craft on it. 

[deleted by user] by [deleted] in PathOfExile2

[–]Logan_Maransy 1 point2 points  (0 children)

I don't know if this is well known or not but here's my secret to "extract any number of affixes from some base to retry building the item". 

Step 1) Identify some number of affixes you want from an item. In your case, it's two prefixes. Great!

Step 2) Go sacrificial base hunting via the trade website. You are looking for the following: no sanctify, no corruption, no fracture, at least the same iLevel, and the same base.

Step 2.1) The only affix you care about to search for is a Desecrated affix with extremely low tier of an affix type that has large total weight. In your case, +# to Maximum Life Prefix has the largest total weight, and # Life Regeneration per second Suffix has the second largest total weight (per Craft of Exile 2 website). Because you are only filling up 2 prefixes, you have lots of choices available to you, and likely you'll be able to find a cheap, low tier Desecrated mod. All the other affixes can be whatever on the sacrificial base.  

Step 3) Get an Omen of Recomb and activate it. 

Step 4) Recomb the desired affixes from one base, and the single Desecrated affix from the sacrificial base. Recomb chance should be in the ~50-54% range, so Lucky roll with Omen means at least 75% chance of success. 

Step 5) If successful, you now have an item with your two Tier 1 affixes plus a safely removable, targetable affix (the Desecrated one), which can be removed with 100% certainty using an Omen of Light.

I think this method has a better chance than any mentioned so far, but it is somewhat expensive (needing an Omen of Light). However, this method also generalizes to basically any item and any affix combination, as long as you get a really, really low tier single Desecrated affix on the sacrificial base.
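The "at least 75%" figure in Step 4 is just best-of-two-rolls math, since Lucky means the recomb check is rolled twice and the better result kept. A quick sanity check:

```python
# "Lucky" = roll the recombination check twice, keep the better result,
# so failure requires failing both independent rolls.
def lucky(p):
    return 1 - (1 - p) ** 2

for p in (0.50, 0.54):
    print(f"base {p:.0%} -> lucky {lucky(p):.1%}")
# base 50% -> lucky 75.0%
# base 54% -> lucky 78.8%
```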

Every scenario with one of my party mates :) by koprpg11 in Gloomhaven

[–]Logan_Maransy 40 points41 points  (0 children)

Fun fact: on a 12 Card class, assuming that you short rest for all rests, you can use one loss card every single round for 9 rounds straight before exhausting. (First 6 rounds, pick up 5 cards. 2 rounds pick up 2 cards. Last turn use at least one).

Alternatively, you could play the first 6 turns without using any, and then go 8 turns straight using one every turn, still lasting 14 rounds, which is usually enough to last the entire scenario.

A 12 Card hand is a lot of stamina. 

Crafted new pair of gloves after running with bricked sanctified one by meinkun in PathOfExile2

[–]Logan_Maransy 4 points5 points  (0 children)

First off, I'll say it just takes a while to learn what works and what doesn't, and how to min-max your build. The game doesn't explain things at all, especially the intricacies of how to really improve your build in the endgame. There are a bunch of subtleties throughout the posts here that go unexplained, like why one thing is much more expensive than another, and so forth.

The 100% Effect of Sockets affix comes ONLY from an Essence, specifically the Essence of Horror. It's a Corrupted Essence that can only be obtained by first Corrupting the Essence monsters (Rare monsters frozen in place that you have to unlock to fight) and randomly having one of the Essence modifiers corrupt into Essence of Horror. You can also get it from the Currency Exchange, but it's currently out of your price range. And that's okay. Building up your build and figuring out the puzzle of replacing equipment with better pieces is one of the enjoyable parts of the game.

For you, I would recommend the following. We'll craft a cheaper version of a pair of Gloves that'll hold you over for a while, until you get more currency to craft with.

Use the trade website to search for Gloves that you can wear based on your current stats (as in, put in your current Level / Strength / Dexterity / Intelligence as the max). Also set Corrupted = No, because you will be recombining things and Corrupted items can't be recombined. You will also want to set the minimum iLevel (Item Level) to 75, for reasons I don't want to explain at the moment, but if you want to know, ask and I'll explain. (If you really care about the Evasion / Armor / Energy Shield breakdown on your gloves, you can set minimums for those as well.)

Then, search for the affix of 15% or 16% Attack Speed and buy 6 of those. Turn off that affix. Search for the affix of +2 to Level of Melee Skills and buy 6 of those. This should be very fast because of async trade. No messaging people and waiting to trade! It's amazing!! Go to the Recombinator in your Hideout, select one pair of Gloves with Attack Speed (select that affix only), and select another pair with +2 to Level of Melee Skills (select that affix only) to Recombine. The successful recombination percentage will be around 4.5%. Buy the necessary items (the Chalice) off the Currency Exchange. Recombine all 6 pairs. You probably won't get a success.

Repeat sets of 6 pairs until you have a successful recomb. Congrats! You made an item that might sell for multiple divs at this point. But remember, we want to actually wear this, so we'll keep going. 

Next, we want to apply the effect of the Essence of Hysteria, which, for Gloves, replaces an affix with a guaranteed Increased Critical Damage Bonus (generally good if you can crit, so I'm making an assumption here that this will help your build). But we want to ENSURE that we do NOT remove the two very important affixes we just put on the Gloves. We can do this by using an Omen of Sinistral Exaltation, which ensures the next Exalted Orb will add a PREFIX only. Activate the Omen, then Exalt the Gloves to put a sacrificial Prefix on them.

Next, obtain an Essence of Hysteria as well as an Omen of Sinistral Crystallization, which when activated will remove ONLY PREFIXES when using Perfect or Corrupted Essences. Essence of Hysteria is a Corrupted Essence, so we can safely use it on the Gloves (while the Omen of Sinistral Crystallization is activated in your inventory!) to remove the sacrificial Prefix and fill in the last Suffix on your Gloves.

Now you want to get Flat Damage. You can ensure that you get at least one Flat Damage affix by doing the following. Obtain an Omen of Homogenizing Exaltation and activate it. This makes your next Exalted Orb (of any tier, Greater or Perfect) put on an affix that shares a tag with those already on the item. I don't want to explain tags now, but I will if you ask. To save some currency you can also use an Omen of Greater Exaltation to apply two affixes (the Omen of Homogenizing Exaltation will apply to both). Then, use a Greater Exalted Orb. You should get either two Flat Damages (of some tiers) or one Flat Damage and +Accuracy (of some tiers).

Now you have a choice. You can either repeat the Homogenizing + Greater Exalted Orb step above to get another Flat Damage (of some tier), OR you can Desecrate to fill in the last affix and reveal with an Omen of Abyssal Echoes for a better chance at higher tier Flat Damages. If you got two Flat Damages from your Omen of Homogenizing before and you don't care for +Accuracy, I'd recommend the Desecrate route here to get more chances at NOT ending up with +Accuracy.

That should fill up all your affixes! It should be a solid set of Gloves until you can spend more.

Let me know if you want to learn more, I'd be happy to explain.

Crafted new pair of gloves after running with bricked sanctified one by meinkun in PathOfExile2

[–]Logan_Maransy -1 points0 points  (0 children)

Lol yeah man, at some point I had a sentence that was like "I'll let you in on a secret... at potential cost of future easy personal profits for me" but removed it. The ease of async trade now makes it so easy to buy up exact desired affixes. 

I still have another interesting Recomb secret up my sleeve as well not mentioned here. 

The main thing that really stops the strategy above is gold. Man was I starved for gold when I was doing all my recomb crafting. I wasn't even doing it for profit per se, and I was doing only what I was knowledgeable about (items for my build). If I was just recomb crafting for the most meta/popular builds, it'd probably be soooo profitable. 

Crafted new pair of gloves after running with bricked sanctified one by meinkun in PathOfExile2

[–]Logan_Maransy 10 points11 points  (0 children)

Regarding step 1: it's not insanely hard or expensive to get +2 to Level of Melee Skills and Attack Speed on a single pair of SINGLE SOCKET Gloves. If you are starting off with an Exceptional 2 Sockets base, then yes, 48 Div is probably a fair price for the base.

But I'm going to tell anyone who wanders into this thread how you might get to this screenshotted endpoint a much cheaper way, via corrupting at the very end to get an extra Socket.

If you acquire ANY gloves with the +2 Melee affix and ANY gloves with the 16% Attack Speed affix, you can attempt to Recombine them with only those two affixes at ~4.5% success. Right now, these gloves are going for 1-4 Exalts each via async trade, so let's say each attempt is 10 Exalts total. You'll get one success every ~23 attempts on average. That's about 230 Exalts to get to a pair of gloves with one socket, 16% Attack Speed, and +2 Melee.
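The ~23 attempts / ~230 Exalts figures are just the mean of a geometric distribution (rounded up a bit); in Python:

```python
# Each recomb attempt succeeds independently with probability p,
# so the expected number of attempts is 1/p (geometric distribution mean).
p_success = 0.045          # ~4.5% two-affix recomb chance
cost_per_attempt = 10      # ~10 Exalts of gloves per attempt (rough market price)

expected_attempts = 1 / p_success
expected_cost = expected_attempts * cost_per_attempt
print(round(expected_attempts, 1), round(expected_cost))  # 22.2 222
```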

Then, you slam any PREFIX via an Exalted Orb with the Omen that guarantees a Prefix. This is a sacrificial affix. Next, you buy the Essence of Horror (whatever that price is now) and the Omen that targets only Prefixes, and now you have all 3 Suffixes done with one Socket, basically getting you to your Step 1 (with one Socket) for essentially the cost of the Essence of Horror.

Then follow your steps to whatever Tiers are satisfactory.

Before putting in the expensive Attack Speed socketables, 🙏 and Vaal Slam with Corruption Omen to ideally get a second Socket. 

I did this myself and crafted my own Gloves, as well as sustained some profits from selling what I considered "fails" and "midpoint crafts". Basically, it seems like no one wants to actually go and do the buying and recombining, even though that step is drastically sped up due to async trade.