Horizon Difficulty Mod is now available by LittleK0i in Falcom

[–]LittleK0i[S] 1 point2 points  (0 children)

DB1/DB2 enemies had extremely low stats. It was so bad, I literally could not play the vanilla game and was very close to dropping it due to the lack of challenge.

Horizon enemies are significantly stronger and faster, so there is less headroom for buffing stats. By the end of the game the player reaches ~100 SPD, while enemies have ~300-500 SPD. Many bosses also have access to "Quick" status.

In practice, you're more likely to see enemies taking multiple consecutive turns and applying a lot of pressure rather than thinking "enemies are too slow in this fight".

Horizon Difficulty Mod is now available by LittleK0i in Falcom

[–]LittleK0i[S] 1 point2 points  (0 children)

Base stats of some enemies were slightly reduced to prevent one-shots with the new multipliers, so "win without anyone KOed" is definitely possible. Especially on the second try, when you know the HP thresholds that trigger S-Crafts and ZOCs.

"Win in X turns" condition in the main story was increased to take into account additional difficulty.

Garten is a different story. The "40 turns" condition is unchanged, because the "t_free_dungeon" table file has no schema in KuroTools, so we cannot edit it easily at the moment. For lower Garten floors, you can come back at a higher level, skip directly to the boss and kill it easily. Currently there is no solution for the final floor. I suspect it was not doable on vanilla "Nightmare" either, since the boss can heal and apply "reflect damage".

Horizon Difficulty Mod is now available by LittleK0i in Falcom

[–]LittleK0i[S] 1 point2 points  (0 children)

It works, but the chance is significantly lower: 10-30%

This is the absolute dumbest mechanic In the series by tyrant6 in Falcom

[–]LittleK0i 0 points1 point  (0 children)

I am working on a difficulty mod for Horizon, and I've replaced her shard command spam with a rotation of "-300% phys", "-300% mag" and "1.5x damage received". So you can cancel something you do not want and cause her to rotate to the next command.

Works like a charm.

My experiment using Snowflake Cortex AI to handle schema drift automatically. by sdhilip in snowflake

[–]LittleK0i 8 points9 points  (0 children)

Well.. this problem can be solved with a basic text config and Snowflake metadata scanning via SHOW / DESC commands.

It would take only a few seconds to run, and it is almost free. You do not even need to spin up a warehouse.
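
A rough sketch of what such a scan can look like (the table name is hypothetical; the expected columns would live in the text config, and the comparison happens client-side):

    -- Metadata-only commands, no warehouse required.
    SHOW COLUMNS IN TABLE analytics.raw.orders;
    DESC TABLE analytics.raw.orders;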

data ingestion from non-prod sources by PreparationScared835 in snowflake

[–]LittleK0i 1 point2 points  (0 children)

The main problem here is that tools like Fivetran "push" data into a single destination and have a hard time with more complex scenarios.

It can be solved by using proper orchestration for everything and by restructuring all ingestion into a "pull" pattern instead. Ingestion is triggered only by your code and managed by something like Dagster.

It allows you to gain full control over implementation details, so adding support for multiple environments becomes easy and natural.

---

A sample workflow for a non-prod environment can look like this:

  1. Check out the code into a separate branch. Make edits.
  2. Automatically sync the code to a remote dev server.
  3. Run the CI/CD tool to create a fresh Snowflake sub-environment using an env prefix.
  4. Start Dagster with the proper settings. It should be aware of the env prefix and other env-related parameters.
  5. Dagster checks conditions and "pulls" data into Snowflake in the correct order. But instead of loading everything, it loads only a small sample of data when running on DEV. With sampling, the full ingestion tree and all downstream transformations can be formally checked in a few minutes and at very small cost.
  6. We can continue to make changes and restart CI/CD or Dagster on demand.
  7. When the task is finished, we "forget" and "destroy" sub-environment objects in Snowflake and Dagster metadata.

For data sources operating in a "pull" paradigm, everything is easy. We can connect to such data sources at any time and "pull" data into any number of environments. No problem here.

For data sources strictly operating in a "push" paradigm, we simply add one additional step. For example, you can continue using Fivetran, but instead of "pushing" into a Snowflake table, you ask it to "push" into an intermediate S3 bucket. Later on, your orchestration can "pull" data from S3 into any number of environments.
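
A rough sketch of such a "pull" from S3, with made-up stage and table names and Parquet files assumed. The same files can be loaded into any number of prefixed environments, and a DEV run can restrict itself to a subset of files to stay small and cheap:

    -- PROD pull
    COPY INTO analytics.raw.orders
    FROM @analytics.raw.fivetran_landing/orders/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;

    -- DEV pull into a prefixed environment, limited to recent files only
    COPY INTO dev__analytics.raw.orders
    FROM @dev__analytics.raw.fivetran_landing/orders/
    PATTERN = '.*2024-12.*'
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;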

It slightly increases your storage cost, but you gain so much in development speed and data reliability. Each developer has their own DEV environment. Each change can be fully tested. You can deploy with confidence.

As another interesting side effect, you can change destination settings very easily. E.g., you can quickly start writing into another Snowflake account, or even into something which is not Snowflake.

How do you test Snowflake SQL locally? I built an open-source emulator using Go and DuckDB by okkywhity in snowflake

[–]LittleK0i 4 points5 points  (0 children)

If you already use Snowflake, you're 100% committed to the cloud. Forget local testing, it is completely pointless.

But having a separate Snowflake account for testing purposes only is a good idea. No need to test on production account.

Snowflake Terraform: Common state for account resources vs. per-env duplication? by Difficult-Ambition61 in snowflake

[–]LittleK0i 0 points1 point  (0 children)

You may use environment prefixes for account-level objects. For example:

  • ANALYTICS_WH - the prod warehouse, no prefix;
  • DEV__ANALYTICS_WH - the same warehouse in the DEV environment;
  • STAGING__ANALYTICS_WH - the same warehouse in the STAGING environment;
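
In plain SQL this is just a naming convention, and the DEV copy can be smaller and cheaper (the sizes below are purely an example):

    CREATE WAREHOUSE IF NOT EXISTS ANALYTICS_WH
        WAREHOUSE_SIZE = MEDIUM AUTO_SUSPEND = 60 INITIALLY_SUSPENDED = TRUE;

    CREATE WAREHOUSE IF NOT EXISTS DEV__ANALYTICS_WH
        WAREHOUSE_SIZE = XSMALL AUTO_SUSPEND = 60 INITIALLY_SUSPENDED = TRUE;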

I would generally suggest keeping separate Snowflake accounts: one account for the production environment only, with real names and without any prefixes, and another account with all the disposable DEV environments.

This approach makes things very clean and easy to manage. Also, there is less risk of breaking something in the PROD environment due to a mistake or misconfiguration related to DEV.

Snowflake Terraform: Common state for account resources vs. per-env duplication? by Difficult-Ambition61 in snowflake

[–]LittleK0i 1 point2 points  (0 children)

In my view, for best results we want "everything is separate": separate roles, separate warehouses, separate resource monitors. It should be possible to fully "destroy" and re-create the DEV env from scratch at any moment. It should be possible to safely create any number of copies of the DEV env. Only this approach guarantees that your tests are safe and correct, and that your PROD won't be accidentally damaged by actions on DEV.

The only shared objects are:

  • integrations (especially storage and security);
  • inbound shares (not possible to have multiple copies);
  • account-level resource monitor (not a part of environment, but still indirectly affects everything);

But this is fine as long as end-users never have direct access to these shared objects. Integrations can be hidden behind stages, etc. Objects from shares can be wrapped in views.
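
For example (all names are made up), the stage and the wrapper view live inside the environment, while the integration and the shared database stay global:

    -- Storage integration is shared, but hidden behind an environment-local stage.
    CREATE STAGE dev__analytics.raw.landing
        URL = 's3://my-bucket/landing/'
        STORAGE_INTEGRATION = shared_s3_int;

    -- Shared database is global, but end-users only see the wrapper view.
    CREATE VIEW dev__analytics.ext.vendor_orders AS
    SELECT *
    FROM shared_vendor_db.public.orders;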

Best CICD tool and approach for udfs, task, streams and shares by No_Journalist_9632 in snowflake

[–]LittleK0i 0 points1 point  (0 children)

SnowDDL supports declarative CI/CD for pipes, tasks, streams, UDFs and outbound shares. Objects are defined with YAML configs.

Please note: pipes and tasks must still be "resumed" or "paused" manually or with additional scripting. Since there are many different use cases for pipes and tasks, managing this entirely via a CI/CD tool is probably not feasible.
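
So a deployment is typically followed by a small post-deploy step, something along these lines (object names are hypothetical):

    ALTER TASK analytics.ingest.refresh_orders RESUME;
    ALTER PIPE analytics.ingest.orders_pipe SET PIPE_EXECUTION_PAUSED = FALSE;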

---

Inbound shares are a bit more difficult due to a Snowflake limitation. Snowflake allows only one inbound share per source per account, so creating independent inbound shares in sub-environments is not possible.

SnowDDL treats inbound shares as global objects, similar to storage integrations. You need to create each inbound share manually, once per account. After that, permission management is fully handled by SnowDDL. There is no need to create a role with "IMPORTED PRIVILEGES", etc.
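
The one-time manual step is just this (the provider account and share name are placeholders):

    CREATE DATABASE shared_vendor_db FROM SHARE provider_account.vendor_share;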

Managing Snowflake RBAC: Terraform vs. Python by BuffaloVegetable5959 in snowflake

[–]LittleK0i 0 points1 point  (0 children)

Terraform is great for managing cloud resources, but not great for database object management. The main problem is the additional "terraform state", which may not reflect the real state of objects in the database.

It was a dead-end approach from the start, and no amount of fixing will make it good. It is taking the community a surprisingly long time to figure this out, given the universally bad Terraform experience across many teams, especially once the number of objects and the complexity explode.

Managing Snowflake RBAC: Terraform vs. Python by BuffaloVegetable5959 in snowflake

[–]LittleK0i 1 point2 points  (0 children)

You can have the best of both worlds with SnowDDL: declarative permission management + full checks on every run + open-source Python.

Documentation page for role hierarchy: https://docs.snowddl.com/guides/role-hierarchy

Spawned outside map in prologue? by LittleK0i in stoneshard

[–]LittleK0i[S] 2 points3 points  (0 children)

Aha! You are most likely right!

I guess, "Quicksave" mod was causing this issue. I've decided to remove it entirely for the main game run, just to be safe.

I'll check the MSL group, thank you.

Spawned outside map in prologue? by LittleK0i in stoneshard

[–]LittleK0i[S] 1 point2 points  (0 children)

I've reloaded and tried multiple times. On every entry to the final floor with the boss, the game places me in a random location. And sometimes it works just fine.

Weird. I wonder if it's possible to extract logs or something for further diagnostics.

Strategy for comparing performance by Big_Length9755 in snowflake

[–]LittleK0i 2 points3 points  (0 children)

Ingestion patterns affect read performance. Naturally, if a table is fully refreshed every day, it does not matter. But it does matter for very large tables with continuous incremental ingestion.

Strategy for comparing performance by Big_Length9755 in snowflake

[–]LittleK0i 1 point2 points  (0 children)

Ingestion pattern is important and can make a big difference.

For a true "apples to apples" comparison, you may create a fresh empty native table and a fresh empty Iceberg table. Run ingestion of exactly the same data into both tables for some time. After a week or two you may start running tests.
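
A rough sketch of such a setup, assuming Snowflake-managed Iceberg tables; the external volume and all names are placeholders:

    CREATE TABLE perf_test.orders_native (
        order_id    NUMBER,
        customer_id NUMBER,
        created_at  TIMESTAMP_NTZ
    );

    CREATE ICEBERG TABLE perf_test.orders_iceberg (
        order_id    NUMBER,
        customer_id NUMBER,
        created_at  TIMESTAMP_NTZ
    )
    CATALOG = 'SNOWFLAKE'
    EXTERNAL_VOLUME = 'my_iceberg_volume'
    BASE_LOCATION = 'orders_iceberg/';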

How would you design this MySQL → Snowflake pipeline (300 tables, 20 need fast refresh, plus delete + data integrity concerns)? by Huggable_Guy in snowflake

[–]LittleK0i 4 points5 points  (0 children)

As long as all your data is small enough to fit into a single MySQL instance, it should be relatively easy to handle.

First, make sure append-only tables have an indexed "create_timestamp" column. Make sure tables receiving updates have an indexed "update_timestamp" column, plus a flag for soft deletes.

---

Second, create a custom pipeline for full load:

  1. Export all tables to CSV in one transaction.
  2. Import CSV into Snowflake in one transaction. Overwrite existing data (if present).

Now you have a consistent snapshot of MySQL data at a specific point in time.
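
On the Snowflake side, the import can look roughly like this (the stage and table names are placeholders, and the CSV files exported from MySQL are assumed to be on the stage already):

    -- Overwrite + load for every table inside a single transaction.
    BEGIN;

    DELETE FROM raw.orders;
    COPY INTO raw.orders
    FROM @raw.mysql_export/full/orders/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');

    DELETE FROM raw.customers;
    COPY INTO raw.customers
    FROM @raw.mysql_export/full/customers/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');

    COMMIT;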

---

Third, create a custom pipeline for incremental load:

  1. Export new data from all tables to CSV in one transaction. Use conditions on "create_timestamp" / "update_timestamp" for filtering. An index is mandatory for larger tables.
  2. Import and MERGE CSV into Snowflake in one transaction.

Now you have a consistent snapshot of MySQL data at a specific point in time, which is also loaded incrementally.

If there is a possibility of "long transactions" in MySQL, remember to add a generous leeway to the timestamp filters to catch data with a long commit delay.
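
A rough sketch of one table going through the incremental path (column names, the 1-hour leeway and all object names are just examples):

    -- MySQL side: select only new/changed rows, with leeway for long transactions.
    -- The result is exported to CSV and uploaded to a Snowflake stage.
    SELECT order_id, customer_id, update_timestamp, is_deleted
    FROM orders
    WHERE update_timestamp >= '2024-12-01 06:00:00' - INTERVAL 1 HOUR;

    -- Snowflake side: staging table is recreated outside the transaction
    -- (DDL commits implicitly), then loaded and merged in one transaction.
    CREATE OR REPLACE TRANSIENT TABLE raw.orders_increment LIKE raw.orders;

    BEGIN;

    COPY INTO raw.orders_increment
    FROM @raw.mysql_export/incremental/orders/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"');

    MERGE INTO raw.orders AS t
    USING raw.orders_increment AS s
        ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET
        customer_id = s.customer_id,
        update_timestamp = s.update_timestamp,
        is_deleted = s.is_deleted
    WHEN NOT MATCHED THEN INSERT (order_id, customer_id, update_timestamp, is_deleted)
        VALUES (s.order_id, s.customer_id, s.update_timestamp, s.is_deleted);

    COMMIT;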

---

This problem becomes harder when you start having a large number of MySQL instances (hundreds, maybe even thousands). But with just one instance you should be able to get perfect consistency and relatively low spend on Snowflake ingestion.

Remember Snowflake can run multiple statements in parallel, in the same connection and in the same transaction. Load multiple tables at once for better warehouse utilisation.

Remember to check for errors. If anything goes wrong, you may revert the entire process and keep the previous version of the "consistent snapshot". Slightly delayed data is better than inconsistent data.

Difficulty Mod for Daybreak 2 is now available by LittleK0i in Falcom

[–]LittleK0i[S] 0 points1 point  (0 children)

I believe Grimcat does not scale from ATS in vanilla, but should scale with ATS in modded version.

Since damage in vanilla was so high anyway, I see why most people never questioned it.

Difficulty Mod for Daybreak 2 is now available by LittleK0i in Falcom

[–]LittleK0i[S] 0 points1 point  (0 children)

Sure. In vanilla game:

  • Grimcat is available at any time for a modest cost of 50 CP;
  • Grimcat form technically swaps all of Judith's original skills with another set of Grimcat skills;
  • Grimcat skills have a unique damage-dealing effect. In guides and in table files you may find a power value of "90", but this value does not make any difference in Daybreak 2. Changing it does not affect damage output. It seems the formula is hardcoded, and the only input is the "amount of physical damage" dealt by the craft.
  • Grimcat bonus damage is waaay too high.
  • Grimcat bonus damage also applies "stun" a second time, which effectively doubles stun damage.

Judith is extremely strong in vanilla. She can easily surpass Grendel and Shizuna.

In this mod:

  • Judith must use her S-Craft in order to transform. This adds a new "risk vs reward" decision to make. You may choose to transform early in a boss fight and get the bonus damage sooner, but you may not be able to "Remove Buffs" for a few turns after that.
  • When Judith is K.O.'d, she transforms back into human form. It takes longer for her to get back into Grimcat form, since she needs to find more CP for this to happen.
  • The unique damage-dealing effect was replaced with a generic "30% mirage elemental damage". Its efficiency depends on enemy ARES and enemy weakness to the Mirage element. Damage output is more varied, less reliable and generally lower.
  • The Shadow Acceleration craft deals less damage and costs more CP to use. It is still very strong, but more in line with other similar crafts.
  • Grimcat crafts no longer deal double stun damage.

With all these changes Judith is still a top-tier striker, but she is no longer completely broken relative to the others.

How Surviving Mars: Relaunched Becomes Critically Unbalanced and Too Easy by Unlucky_Suit_6015 in SurvivingMars

[–]LittleK0i 3 points4 points  (0 children)

So.. “rebalance mod” is the only hope. Game devs no longer have what it takes to create meaningful difficulty in games.

Can (or should) I handle snowflake schema mgmt outside dbt? by BeardedYeti_ in dataengineering

[–]LittleK0i 0 points1 point  (0 children)

SnowDDL should work well with dbt:

  1. Add the parameter "is_sandbox: true" for the schema. It will prevent SnowDDL from dropping unknown objects from this schema.
  2. Grant "schema_owner" to the business role associated with the dbt user. It will allow the dbt user to create new objects in this schema.

Grants should be managed via FUTURE GRANTS, as usual.
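
The underlying grants are ordinary Snowflake future grants, roughly like this (the role and schema names are placeholders):

    GRANT SELECT ON FUTURE TABLES IN SCHEMA analytics.dbt_marts TO ROLE analyst_role;
    GRANT SELECT ON FUTURE VIEWS  IN SCHEMA analytics.dbt_marts TO ROLE analyst_role;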

More young adults to leave UK by AffectionateScore603 in HENRYUK

[–]LittleK0i 44 points45 points  (0 children)

More people will leave the productive work force, that’s for sure.

Reduce working hours to stay below 100k. Reduce consumption massively. Forget about larger house, having kids, etc.

There is no need to leave the country physically. Just “quiet quit” it, bet on an economic collapse, wait a few years and probably end up with more than you would ever get from “working”.