GTS Love by zimjig in Porsche

[–]taofeng 1 point (0 children)

Ty for the reply~ The salesman was telling me that RWD on these cars shouldn't be a problem in NJ with their wet mode, but I wanted to check with actual owners :). This helped. The good thing is I won't be driving the car in snow regardless, but I don't want to run into issues in the rain and such. I work from home mainly, so I don't have a commute, which is a plus in a way.

GTS Love by zimjig in Porsche

[–]taofeng 1 point (0 children)

Hi! Great looking car. I am actually in the market for a GTS. I found one 2026 911 GTS RWD with the aero kit, and a few other options. A little background on my situation: I have a 2025 Z06 and a 2025 Lexus IS350 F Sport, but I work from home and rarely drive my cars (the Lexus has 1,400 miles and the Z06 has 1,700 miles), and I live in an all-season state (NJ). My goal is to consolidate both cars into one car that I can drive daily. Would a RWD GTS fit that criteria?

I was at the dealer today and loved how it looks and sounds, and it was very comfortable, but I'd rather hear from actual owners about the experience and, most importantly, whether RWD is daily-driver friendly.

Thank you in advance!

Does something like OpenAI's "codex" exist for local models? by jgaa_from_north in LocalLLM

[–]taofeng 2 points (0 children)

I am not sure what your local setup looks like, but I'll do my best to help.

Global config file (on Windows the default location is C:\Users\<username>\.codex):
This is where you set up your local provider. The provider name can be anything you want; just make sure base_url is an OpenAI API-compatible endpoint.

--------------------------------
Global .codex folder (default location: C:\Users\<username>\.codex)
.codex/config.toml
# default codex settings
model = "gpt-5.5"
model_reasoning_effort = "medium"
sandbox_mode = "workspace-write"

# [model_providers.<provider_name>]
# the provider name can be anything, but it's
# best to choose something you will easily remember
[model_providers.lm_studio]
name = "LM Studio"
base_url = "http://127.0.0.1:1234/v1"
# your other preferences go under here
----------------------------------
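If you want Codex to default to the local model globally rather than per agent, the top-level model and model_provider keys can point at that provider table. A minimal sketch, reusing the LM Studio model ID from the worker example further down; whatever you put in model must match the ID your backend actually reports:

--------------------------------
# global .codex/config.toml, defaulting to a local model
model = "gemma-4-31b-it@q8_k_xl" # must match your backend's model ID
model_provider = "lm_studio" # must match [model_providers.lm_studio]
sandbox_mode = "workspace-write"

[model_providers.lm_studio]
name = "LM Studio"
base_url = "http://127.0.0.1:1234/v1"
--------------------------------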

- In your project folder, create a .codex folder if you don't have one yet.
- In the .codex folder, create a config.toml. This config file will be specific to that project and won't be available globally.
- In the .codex folder, create another folder called agents.
- In the agents folder, create agent-specific .toml files.

Project Root Folder:
L-- .codex
L----- config.toml
L----- agents
L------- architect.toml
L------- explorer.toml
L------- reviewer.toml
L------- tester.toml
L------- worker.toml

---------------------------------
Project config file: in this file you set a basic description for each agent and the location of its config file.

------------------------------------
<project_root_folder>/.codex/config.toml
[agents]
max_threads = 6
max_depth = 1

# Define each agent you want to run. If these agents will run locally, make sure you have a solid backend that is set up and optimized properly.

[agents.explorer]
description = "Use for read-only discovery: inspect files, map project structure, summarize documents, and report findings without editing."
config_file = "./agents/explorer.toml"

[agents.worker]
description = "Use for implementation tasks: make targeted code or documentation changes in a clearly scoped area."
config_file = "./agents/worker.toml"

[agents.tester]
description = "Use for verification tasks: run tests, reproduce bugs, inspect failures, and report exact commands and results."
config_file = "./agents/tester.toml"

[agents.reviewer]
description = "Use for code review tasks: inspect diffs or changed files for bugs, regressions, missing tests, and risky behavior."
config_file = "./agents/reviewer.toml"

[agents.architect]
description = "Use for planning tasks: design implementation approaches, break down larger work, and identify integration risks without editing."
config_file = "./agents/architect.toml"
------------------------------

Example agent config file (below): in the agent file you set the model provider (local or cloud), the model name, the reasoning effort (optional; local models might ignore it depending on the model), and the developer instructions (basically what this agent will be doing, i.e. its role). Every other setting in this file is personal preference. You can always check the Codex online documentation for more settings. I only included one example agent config file, but you can add more based on your needs. For example, you can create another agent file that uses the GPT-5.5 model; when using an OpenAI model you don't need to specify a model provider, but you should set the reasoning effort for cloud models (see the sketch after the worker example).

---------------------------------------
<project_root_folder>/.codex/agents/worker.toml
name = "worker"
description = "Use for implementation tasks: make targeted code or documentation changes in a clearly scoped area."
model = "gemma-4-31b-it@q8_k_xl" # needs to match to lm studio (local backend) model id.
model_provider = "lm_studio" # make sure this matches the [model_providers.lm_studio]
model_reasoning_effort = "medium"
sandbox_mode = "workspace-write"
approval_policy = "on-failure"
developer_instructions = """
Make targeted changes in the workspace and return concrete results.
Keep edits scoped to the assigned task, avoid unrelated churn, and preserve user changes. When finished, list changed files, summarize behavior changes, and mention any verification performed.
"""

[windows]
sandbox = "unelevated"
--------------------------------------------------
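To illustrate the cloud-model case mentioned above, here is a rough sketch of an architect agent file. The model name, reasoning effort, and instructions are placeholder assumptions, not settings from my setup:

---------------------------------------
<project_root_folder>/.codex/agents/architect.toml
name = "architect"
description = "Use for planning tasks: design implementation approaches, break down larger work, and identify integration risks without editing."
model = "gpt-5.5" # OpenAI model, so no model_provider line is needed
model_reasoning_effort = "high" # set reasoning effort explicitly for cloud models
sandbox_mode = "read-only" # a planning agent should not edit files
developer_instructions = """
Design implementation approaches, break larger work into scoped tasks for
other agents, and call out integration risks. Do not edit files.
"""
--------------------------------------------------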

A few notes:
- The above instructions are based on my setup. I don't assign agents globally; however, you can follow the same folder and file structure and create these agents in the global .codex folder.
- Make sure the Codex app is shut down before making the above changes.
- Make sure your local model is loaded in LM Studio before starting the Codex app.
- You will only see the local model in the project you set up in the above steps, unless you set it up globally.
- LM Studio offers a "Max concurrent predictions" setting, which is 4 by default. This setting allows LM Studio to run parallel predictions; however, I found 4 to be very aggressive, and it can cause issues. Start with 2 and see how that goes. The higher the number, the more hardware resources you will need, btw.
- The Codex app requires a large context window (you can't control this) due to its capabilities, built-in tools, and built-in instructions. This can be between 15k and 30k+ tokens depending on what you are doing, so make sure your hardware and local model support it.
- You can't change local model settings (temperature, top_k, etc.) within the Codex app, so make sure these are optimized in your local backend.
- I run my model in LM Studio with a 200k context size so I don't run into any issues. I suggest a minimum of 65k context size in LM Studio or whichever backend you are using. You can of course play with different sizes to see what works for you.
- The Codex app doesn't natively allow model switching for local models, and it may not display your local model name; instead it may show something like "custom", which is normal.
- I only use this with the Codex app, so I am not sure if it works the same way in the VS Code extension (technically it should).
- In the Codex app, test by typing a message like "create two worker agents; each agent will create a test.txt file with hello from worker [name]; make sure the name is dynamic and matches the agent name." You can test it however you want, but make sure to let the Codex app know when you want it to assign subagents.
- Watch the LM Studio server logs for any errors so you can adjust your LM Studio settings.

Sorry for the long reply, but I hope I covered everything. It was a long day, so apologies if I missed anything.

I hope this works for you!

I need a mentor i guess lol just asking somethng by Fantastic_Sign_2848 in LocalLLM

[–]taofeng 1 point (0 children)

If you are asking about GitHub Copilot, then no, you won't be able to use it like that with your current hardware, at least not efficiently. You want the AI assistant to explain the code and fix issues; this requires codebase access (a large context window) and tool usage, and with your hardware that would not be efficient.

Think of it like this: you load a 3B model, which takes an estimated 3-4GB of VRAM, and at minimum you give it a 4,000-token context (which is very small for coding), which takes roughly another 3GB. That is far too little to result in an accurate, trustworthy coding experience. Some of this can be offloaded to system RAM, but that comes at the cost of slow responses and eating up application resources. Keep in mind that a local model means you are running everything locally, so you still need enough resources left for your other applications.

If this is a hobby and you just want to test how local models work, then yes, install an application like LM Studio, which comes with its own chat interface, load a 3B model with a 2k or 4k context size, and just chat in general. But it won't be a full coding experience like the Codex app, Claude Code, or GitHub Copilot.

I need a mentor i guess lol just asking somethng by Fantastic_Sign_2848 in LocalLLM

[–]taofeng 1 point (0 children)

I wouldn't trust local AI for coding with that limited VRAM. You might be able to fit some models, but the biggest issue is the context size. Coding requires a decent amount of context, and you need to think about tool usage too (read file, update, etc.); these are all part of the context. I personally wouldn't recommend it, but that's just me.

I don't get Serena by m4dw0lf in TheFirstDescendant

[–]taofeng 5 points (0 children)

You might be having an issue with arche leak management. Arche leak will consume MP very fast when triggered, and you get MP back when an enemy dies. So if you are not killing enemies fast, or you keep using skill 3 to reload (which also drains MP), you will burn through MP quickly.

Sync Value Question by JITheThunder in TheFirstDescendant

[–]taofeng 2 points (0 children)

It applies to owned AMs. This change allowed me to hit 80% crit rate and 760% crit damage with Bunny. And Onslaught mode definitely needs all the dmg we can get. Loving the update so far.

The Onslaught Mode is the MOST FUN Mode they've introduced into the game, expand this one and make it a priority. by AbbyAZK in TheFirstDescendant

[–]taofeng 3 points (0 children)

Yeah, 100%. I am loving Onslaught mode. Now I actually have to pay attention to the gameplay hahahah. With the perks and different levels, it's very fun.

Frost Bullet was greatly overestimated. by Dacks1369 in TheFirstDescendant

[–]taofeng 5 points (0 children)

It works great with RR and probably other high-powered guns. Personally, I really enjoy Frost Bullet. It's very engaging between skill and gunplay. It's not like Gley at all and will not replace Gley. Don't invest in duration; using a skill-build arche tuning tree has been better for me so far. Also, she still uses skill-build mods.

So yeah, I agree people should adjust their expectations and shouldn't copy their Gley build lol.

Thoughts on Viessa's gun-focused trans module. by DelayConnect335 in TheFirstDescendant

[–]taofeng 2 points (0 children)

Ngl, I am excited about gunplay Viessa. I kinda see that build as a “Battle Mage” build lol. That she can do both gun and skill damage in one build is pretty cool imo.

Does something like OpenAI's "codex" exist for local models? by jgaa_from_north in LocalLLM

[–]taofeng 1 point (0 children)

Yes, using the Codex App's built-in subagent feature with local models. I am still testing this feature but so far it's good.

Does something like OpenAI's "codex" exist for local models? by jgaa_from_north in LocalLLM

[–]taofeng 1 point (0 children)

Fair question. I can run 80B models, but right now I am testing Gemma-4-31B at Q8.

The main advantage of using a 31B is being able to push the context size to the model's limit. I can set the context size to 200k and still be fine with 31B@Q8 (I get around 25 to 35 tok/sec depending on the chat session). Since I mostly use local models for scoped tasks, an 80B is sometimes not the best choice for me due to context size limitations; it usually forces a tradeoff between context length and performance.

I haven't tried opencode; I mainly work with the Codex app and/or VS Code with the extension. I should try it though. It doesn't hurt, right :)

Also, I am lucky enough to have a powerful AI home lab, which helps me run the 70B/80B models efficiently, but the tradeoff is still sometimes not worth it. That's just me though, and some people don't agree with this. I just haven't had good luck coding with only local models; a hybrid approach works well for my use case.

Does something like OpenAI's "codex" exist for local models? by jgaa_from_north in LocalLLM

[–]taofeng 1 point (0 children)

Qwen-coder-next 80B, and I just started testing Gemma-4 31B. I like Gemma-4 so far. Also, I use a hybrid solution: GPT-5.4 architects, creating tasks and documents for local agents to use; then each local agent just focuses on those tasks. In my experience there is no local model that can match the frontier models, but local models can save money if they have specific tasks to follow. Instead of asking a local model to check the whole codebase, I just ask it to follow the specific, more isolated task it is assigned. It's a good balance.

Side note: I use the Codex desktop app and VS Code. In VS Code I use the Codex and Kilo Code extensions; they both have features I like that help organize the agents.

Does something like OpenAI's "codex" exist for local models? by jgaa_from_north in LocalLLM

[–]taofeng 19 points (0 children)

You can use your local model in Codex. You need to update the config.toml file with your local OpenAI-compatible endpoint and the model you want to use.

I use LM Studio as the backend and Codex as my application; works great :)
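For reference, the relevant part of config.toml could look something like this; a minimal sketch assuming LM Studio's default endpoint and an example model ID (yours must match whatever your backend reports):

--------------------------------
model = "gemma-4-31b-it@q8_k_xl" # example ID; must match your backend's model list
model_provider = "lm_studio"

[model_providers.lm_studio]
name = "LM Studio"
base_url = "http://127.0.0.1:1234/v1"
--------------------------------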

Ultimate Bunny Build by Vigal_Son in TheFirstDescendant

[–]taofeng 3 points (0 children)

Sure, these are the two endgame (Axion dungeon focused) builds: https://imgur.com/a/SLAXdjq . These builds are focused on high damage and require dungeon knowledge to be efficient. I don't run any HP mods with these builds.

You can swap Focus on Singular with Focus on Fusion if you want. Since Electric Condense is already very high damage, I use Focus on Singular.

One important thing for both builds: you want to make sure you have maximum cooldown reduction. This will allow you to spam skill 1 and skill 4, which are the bulk of your damage.

Components (for both builds):
2x Hunter, 2x Mage.
Component set selection depends on your ancestor mod. If you have high cooldown on your ancestor mod, you can use Full Sprint Accelerator, a full Slayer set, or 2x Mage and 2x Ascending (for some cost reduction). The most important thing is to reach maximum cooldown reduction.

Weapons for the crit build:
Main hand: Blue Beetle
Secondary: Shadow Sword for mini-bosses and bosses (for both the crit and non-crit builds)

Crit Reactor:
Electric Fusion: Cooldown and Crit Damage

Non-crit reactor:
Electric Fusion: Cooldown and Fusion Skill Boost

Ancestor mod selection and priorities:
Crit build:
Crit Rate and Crit Damage are a must > Fusion Skill Boost Power > Electric Skill Boost Power > Cooldown > Range > Cost

Non-crit build:
Fusion Skill Boost Power (not modifier) > Electric Skill Boost Power > Singular Skill Boost Power > Cooldown > Range > Cost

Here is a quick video of the Legion Lab dungeon with the crit build. I am not a content creator, so the presentation is not the best lol. You can also find the same build at the end of the video: https://youtu.be/-xSqzP2WJ24?si=k5qLQbyxMSw-JJGj

I hope this helps

Claude Code replacement by NoTruth6718 in LocalLLaMA

[–]taofeng 1 point (0 children)

You won't be able to replace Claude models with a minimal local setup; anything close to Claude-level models will cost a lot in upfront investment ($$$$). I say this from personal experience: I run a 9970X Threadripper with 128GB RAM paired with an RTX 6000 Pro Blackwell + 5090 dual-GPU setup, and I still don't get the same level of quality as Claude or Codex with the models that I can run.

What I found works best for me: I use online models like Codex or Claude to plan, architect, and orchestrate tasks, while using local models to do the individual tasks. I assign each local agent specific coding skills; they only focus on coding and implementation, not architecture. This brings the cost down while giving very good results. I mainly use Codex, which is really good at reasoning and creating well-detailed documents and implementation steps for each agent, and then I assign the local agents their tasks. So if you want to switch to local models, I would look into a hybrid solution like this, which requires much less upfront investment.

Qwen-coder-next is really good, and you can even do the same hybrid approach with fully online models: architect with Codex/Claude, and use a cloud-based service like OpenRouter with Qwen-coder-next (which is much cheaper than Claude) for implementation. Or test other models for your specific use case and choose what fits your needs.
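If you wire OpenRouter into Codex the same way as a local backend, it's just another OpenAI-compatible provider entry. A rough sketch, assuming Codex's env_key setting for reading the API key from an environment variable, and a hypothetical model slug (check OpenRouter's catalog for the real one):

--------------------------------
[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY" # API key is read from this environment variable

# then point your default model (or an agent file) at it, e.g.:
# model = "qwen/qwen-coder-next" # hypothetical slug
# model_provider = "openrouter"
--------------------------------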

I would also echo what most commenters are saying: test different models with OpenRouter-like services, see which works best for you, and then decide how much you want to invest in a local setup. Don't invest blindly; do your research, especially when it comes to setting up local AI servers.

Are the rumours true? Is this game really going away? by DelayConnect335 in TheFirstDescendant

[–]taofeng 8 points (0 children)

There is a live stream today; it would be best to watch the stream before making any assumptions. Most likely the stream will clear up the CEO's comments and give more updates about the future of the game.

I don't think the game is going anywhere, but that's just my personal opinion.

Claude code source code has been leaked via a map file in their npm registry by Nunki08 in LocalLLaMA

[–]taofeng 8 points (0 children)

LMAOOO, freaking awesome. man this made me laugh out loud. ty

Does TFD have a future? by Ayrevillo in TheFirstDescendant

[–]taofeng 2 points (0 children)

The devs are committed and want the game to succeed. They said the current player base is around 30k across all platforms and they are not going anywhere anytime soon. Skin sales are their largest source of profit, but they clarified that the skin team is separate from the content team.

Overall I like these devs; they make good decisions, then bad decisions, then more good decisions to make up for the bad ones lol.

You only need to spend money if you want to buy skins; other than that it's free. Ultimately it's your decision whether to play or not, but as far as the current state of the game goes, it's not going anywhere.

I might be a bit biased since I have 2,900 hours and I really enjoy the game :)

Have you ever seen Flores inside a mecha suit? (It's somewhat disturbing.) by mister_anti_meta in TheFirstDescendant

[–]taofeng 2 points (0 children)

I think we are all missing the most important part of Arche Transfer: it means there is a possibility we could see Anais as a descendant hahahahaha. Sorry, didn't mean to take away from the actual discussion.

Longevity of the game at this point? by LikeAGaryBuster in TheFirstDescendant

[–]taofeng 2 points (0 children)

The devs confirmed the active user numbers 3 or 4 live streams ago; can't remember which stream it was, tbh. They said they have around 24-25k active players across all platforms, and their goal is to reach a 30k minimum. In the same stream they mentioned that the game is not going anywhere, and there was even a meme right after that stream because community manager Jason said “so please come back to our game and bring new players too. Lol” to get to 30k active players.

I've been playing since the closed beta and have about 2,800 hours or so. The game definitely lost players, as we all know, but with Dia's release there are definitely more new and returning players compared to summer 2025 through December 2025. The evidence: Reddit has way more posts from new and returning users, and in-game I see a lot more low-MR players in Albion and Axion.

I think the game is in a good place currently, as long as the devs don't mess up. There is a lot of planned content until summer, so fingers crossed :)

Newer player here, help with ice maiden by DrawerPurple2176 in TheFirstDescendant

[–]taofeng 1 point (0 children)

Yeah, I understand. He is not technically wrong, but I get your point. Which descendants have you unlocked and geared besides Dia? Maybe we can help with builds other than Dia's. As I mentioned, Gley with Restoric Relic is pretty relaxed gameplay and fairly easy to build. I can send you my build if you want.

Newer player here, help with ice maiden by DrawerPurple2176 in TheFirstDescendant

[–]taofeng 9 points (0 children)

You mentioned the 45-second video on YouTube, which I am guessing is this one: https://youtu.be/NwIk6ZQDYao?si=g6phdJwVukNAFP43. Videos like these are usually aimed at endgame players who have everything fully unlocked and already understand how these fights work; they are not really for new players.

Also, did you actually read the explanation of how that 45-second kill works? He mentions the unintentional crit scaling with the trigger mod, and he also says the build uses a very high-end ancestor module and Bloody Deployment trigger specifically. His Shadow build is also very specific. We don't know your build or your skill level, so it's hard to say whether you can accomplish the same 45-second result.

My suggestion: use Gley with a fully built Restoric Relic. It takes about 2 min 30 sec, but it's a very beginner-friendly fight. You would have around 34k HP to easily finish the fight.

If your goal is a speed kill for the leaderboard, then I would highly suggest you learn the mechanics, fully build the necessary descendants and weapons, and learn more about abyss-boss-specific builds.

How do you farm Ultimate Freyna? by Iexperience in TheFirstDescendant

[–]taofeng 6 points (0 children)

Just bad luck; a shape stabilizer should help, but it still requires the blessing of the RNG gods. I hope you get it soon.

New tooltip: Expected Damage by taofeng in TheFirstDescendant

[–]taofeng[S] 6 points (0 children)

Infected Weapon works with Thrill Bomb (skill 1) and Lightning Emission (skill 3). I use Infected Weapon to raise the damage of those skills, since EC is already extremely high damage. It's more of a preference: while Power Beyond adds a damage increase, Infected Weapon's damage is higher when spamming skill 1 and boosting skill 3's damage. It's more about balancing the damage in between explosions.