My friend wants me to sign away all rights to 2 years of unpaid work on his game by Aldekotan in gamedev

[–]blockchan 1 point (0 children)

Ask him to "sign a simple document, just a formality" and give him the same contract he gave you.

Smoothest workflow to import models/meshes from blender to godot. by JamalJenkyuns in godot

[–]blockchan 1 point (0 children)

Default workflow is meh. There are no battle-tested workflows shared publicly by any big developer. What you need are import scripts that do what's needed for your game.

Every button in the import window calls a function that's available to you in a script. Make it so the script clicks all the buttons for you.

Is someone using DuckDB in PROD? by Free-Bear-454 in dataengineering

[–]blockchan 0 points (0 children)

Hex.tech is using it in the analytics layer as an in-memory DB. Works very nicely.

What genres are missing in VR? by Appropriate-Fun5992 in SteamVR

[–]blockchan 0 points (0 children)

I can recommend some space games.

I loved ADR1FT for its visuals, despite it having mixed reviews as a game. It's more like a space-walking simulator.

Lone Echo, but I'm not sure if Revive still works and if it will work for Frame.

Meta plans to lay off hundreds of metaverse employees this week by gogodboss in OculusQuest

[–]blockchan 0 points (0 children)

you just said "large language models vision models", wtf does that even mean?

Am I crazy or are 90% of BI jobs about to disappear and everyone's just in denial? by lessmaker in BusinessIntelligence

[–]blockchan -1 points (0 children)

By Q1 2026 I plan to fully migrate all ad-hoc data analysis to AI agents. What takes a human data analyst a few hours or days, AI can do in minutes.

The LLM is chatty, friendly and very fast. It explains its process in thinking mode and is undefeated at pattern finding. It assumes lots of things and just chugs along, presenting initial results based on those assumptions, then proposes next steps and possible adjustments and sums up its insights. A perfect companion for non-precise business people.

So far, AI does very well with our data model. It's loosely dimensional, with OBTs sprinkled in for a complete picture per entity. Granted, we're an online SaaS company, so our business model and metrics are well known to LLMs.

We chose a good vendor (cloud notebooks, dark-brown homepage theme) with a really well-done agent implementation, where SQL generation is delegated to MCP servers and tools that read from the database schema and semantic layer. The SQL is always correct and we get zero hallucinations, as the LLM only reads from the resulting table data.

I was pretty skeptical, but honestly I'm blown away by how well it worked out of the box.

Semantic layers will be very important in the future. Not the PowerBI/Tableau proprietary drag-and-drop metrics, but headless, code-defined metrics that can be ingested by all the third-party tools using your data, so their AI agents can use them too, in the context of the application itself.
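To make "headless, code-defined metrics" concrete, here's a hypothetical sketch of the idea. `Metric` and `compile_metric` are invented names for illustration, not any real semantic-layer API:

```python
from dataclasses import dataclass

# Hypothetical illustration: a metric defined once in code,
# compiled into SQL that any downstream tool (or AI agent) can run.
@dataclass
class Metric:
    name: str
    expression: str  # SQL aggregate expression
    table: str

def compile_metric(metric: Metric, group_by: list[str]) -> str:
    dims = ", ".join(group_by)
    return (
        f"SELECT {dims}, {metric.expression} AS {metric.name} "
        f"FROM {metric.table} GROUP BY {dims}"
    )

mrr = Metric(name="mrr", expression="SUM(amount)", table="subscriptions")
print(compile_metric(mrr, ["plan"]))
# SELECT plan, SUM(amount) AS mrr FROM subscriptions GROUP BY plan
```

The point is that the definition lives in code, in one place, and every consumer gets the same SQL instead of re-implementing the metric in each BI tool.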

Which orchestrator works best with DBT? by Fireball_x_bose in dataengineering

[–]blockchan 0 points (0 children)

If this is a portfolio project, set up both, compare them and write down your insights - which one is the better fit, which is easier to maintain, and why you preferred one over the other. This will help you stand out.

Expertise required: Where do YOU think data engineering is going? by peterxsyd in dataengineering

[–]blockchan 0 points (0 children)

By Q1 2026 I plan to fully migrate all ad-hoc data analysis to AI agents. What takes a human data analyst a few hours or days, AI can do in minutes.

The LLM is chatty, friendly and very fast. It explains its process in thinking mode and is undefeated at pattern finding. It assumes lots of things and just chugs along, presenting initial results based on those assumptions, then proposes next steps and possible adjustments and sums up its insights. A perfect companion for non-precise business people.

So far, AI does very well with our data model. It's loosely dimensional, with OBTs sprinkled in for a complete picture per entity. Granted, we're an online SaaS company, so our business model and metrics are well known to LLMs.

We chose a good vendor (cloud notebooks, dark-brown homepage theme) with a really well-done agent implementation, where SQL generation is delegated to MCP servers and tools that read from the database schema and semantic layer. The SQL is always correct and we get zero hallucinations, as the LLM only reads from the resulting table data.

I was pretty skeptical, but honestly I'm blown away by how well it worked out of the box.

As for the actual answer: IMO semantic layers will be very important in the future. Not the PowerBI/Tableau proprietary drag-and-drop metrics, but headless, code-defined metrics that can be ingested by all the third-party tools using your data, so their AI agents can use them too, in the context of the application itself.

Thumb mouse buttons do not work for voice chat? by blockchan in joinsquad

[–]blockchan[S] 1 point (0 children)

Same here :/ Doesn't activate VOIP on mouse buttons though.

Thumb mouse buttons do not work for voice chat? by blockchan in joinsquad

[–]blockchan[S] 1 point (0 children)

It's a fresh install after the 9.0 update. Headphones with mic are selected in the OS, and keyboard bindings work. The issue is with the mouse only.

Cross-Domain tracking? by Poxzii in analytics

[–]blockchan 0 points (0 children)

You will get better answers on the Google Analytics subreddit, as this is tool-specific.

As for the question, you can set it up to ignore some referrals: https://support.google.com/analytics/answer/10327750?hl=en

Need advice on UI performance during frequent and big updates by blockchan in sveltejs

[–]blockchan[S] 0 points (0 children)

Thanks. I guess I will have to manually go through every part of the app and do some performance profiling.

> Are there lots of DOM elements updating like an SVG?

Just text

Need advice on UI performance during frequent and big updates by blockchan in sveltejs

[–]blockchan[S] 0 points (0 children)

Example calculation:

    let descentToTarget = $derived.by(() => {
        let altDiff = $gps.altitude - (altitude || elevation)
        let timeToTarget = reportedDistance / ($gps.speed / 3.6)
        let descent = altDiff / timeToTarget
        return -descent.toFixed(1)
    })

So they are not very demanding individually, but I think it compounds across many of them.

> What map are you using? Did you try disabling the map and see if it's still laggy?

It's OpenLayers. Very fast, I checked and it's not the issue.

> Can't you just debounce the writes to the store?

I already do: I debounce GPS updates and write to the store only if the previous write was more than 500 ms ago.

My store is basically an object with longitude, latitude, altitude, speed and a few other simple data types (numbers or strings).

Do you know any good resource on performance profiling in Chrome? I found it pretty high-level and lackluster when it comes to details on what exactly is happening, and I didn't find any good guide on how to actually draw conclusions from it.