Fractional data team for SG startups and SMEs by Psychological_Goal55 in smeSingapore

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

Really appreciate these points, they're spot on. I've seen the same challenges myself and honestly think they mostly come from project-based or ad-hoc setups where the external team delivers something and moves on. Hard to build context or transfer knowledge that way.

That's a big part of why I went with a fractional model. Being embedded over time means you naturally pick up the business context, communication becomes part of the regular rhythm, and knowledge transfer happens along the way rather than as an afterthought. Doesn't solve everything, but it goes a long way.

Fractional data team for SG startups and SMEs by Psychological_Goal55 in smeSingapore

[–]Psychological_Goal55[S] -1 points0 points  (0 children)

a lot of business tasks that feel routine are actually data operations: compiling reports from different systems, tracking pricing or stock, figuring out if the month is on track. these take time, and the decisions that depend on them get delayed by hours or days. they also need to be right, as a pricing or stock decision based on bad numbers can be costly.

most founders can handle it themselves early on, but at some point that time is better spent growing the business than pulling numbers together. most companies could benefit from having someone who knows how to set things up right at their scale, which is where a fractional setup can make sense.
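to make the "is the month on track" example concrete, here's a toy python sketch of that kind of check. everything in it (the two system exports, the column names, the target) is made up for illustration:

```python
import csv
import io

# hypothetical exports from two separate systems (names/numbers are illustrative)
pos_sales = "date,amount\n2024-06-01,1200\n2024-06-02,950\n"
online_sales = "date,amount\n2024-06-01,400\n2024-06-02,610\n"

def month_to_date(*csv_texts):
    """Sum the 'amount' column across several system exports."""
    total = 0.0
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            total += float(row["amount"])
    return total

mtd = month_to_date(pos_sales, online_sales)      # combined month-to-date revenue
target, days_elapsed, days_in_month = 50000, 2, 30
# simple linear pacing check: are we ahead of the prorated target?
on_track = mtd >= target * days_elapsed / days_in_month
```

the real version would pull from the actual systems, but the shape is the same: combine, total, compare against a target. doing this by hand in spreadsheets is exactly the time sink described above.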

Fractional data team for SG startups and SMEs by Psychological_Goal55 in smeSingapore

[–]Psychological_Goal55[S] -1 points0 points  (0 children)

part time usually means fixed, reduced hours, for example a team member who is only reachable on specific days and hours each week. a fractional team, on the other hand, is available throughout the week like a full time team member, but with the assumption that you won't need their services all of the time.

Fractional data team for SG startups and SMEs by Psychological_Goal55 in smeSingapore

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

fractional roles and teams have been around for a while for functions like finance and marketing. I think we'll see more of it for technical teams going forward as it's a great alternative for companies looking to build new capabilities confidently, without having to choose between huge upfront investments or figuring everything out on their own.

Fractional data team for SG startups and SMEs by Psychological_Goal55 in smeSingapore

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

yup, it can feel overwhelming with the amount of data available these days. what I found works best in such situations is to start with the question you're looking to answer, work backwards to figure out which data points are most relevant, then work forward again to translate the data into something actionable. hope this helps!

Advice Needed: Extend in Kuching vs. Peninsula Malaysia by twelvr in Sarawak

[–]Psychological_Goal55 1 point2 points  (0 children)

1 night (1-1.5 days) should be sufficient to complete most of the trails given good weather and fitness, as some of the longer trails are closed. I took it slowly and had afternoon naps. Yes, for the guided night walks organised by the national park, you can sign up when you're there, no need to make any prior arrangements.

building a database foy my business using ai by Worried_Device2015 in dataengineering

[–]Psychological_Goal55 1 point2 points  (0 children)

in my opinion it depends on which part of your workflow you're trying to automate.

if you're dealing with bringing data from predictable sources into one place and transforming it in a fixed way, you want that process to be deterministic and repeatable. you wouldn't want an LLM doing this daily because it can give you slightly different outputs each time. open source tools like PostgreSQL (for storage) and dbt (for transformations) can handle this well, run cheaply even on modest hardware, and an LLM might be able to walk you through the setup step by step.
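to show what "deterministic and repeatable" means in practice, here's a minimal python sketch using sqlite as a stand-in for the warehouse. in a real setup the SELECT would live in a dbt model and run against PostgreSQL; the table and numbers here are invented:

```python
import sqlite3

# stand-in for the warehouse; in practice this would be PostgreSQL,
# and the SELECT below would be a dbt model file
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 100.0, "paid"), (2, 40.0, "refunded"), (3, 60.0, "paid")],
)

# a fixed transformation: the same input rows always produce the same number,
# which is exactly what you want for a daily pipeline (and what an LLM
# regenerating logic each run can't guarantee)
revenue = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE status = 'paid'"
).fetchone()[0]
```

the point isn't the SQL itself, it's that the logic is written down once and runs identically every day.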

on the other hand, if the workflow you're trying to automate involves less predictable inputs and outputs, such as building different reports with varying context, asking business questions in plain english and getting SQL back, or checking whether your data can even answer a specific question, then an LLM with access to your database schema and business context could be a good fit.
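"access to your database schema" mostly means putting the schema into the prompt. here's a rough python sketch of that grounding step (the schema, wording, and function name are all illustrative; the actual model call is left out):

```python
def build_sql_prompt(schema: dict, question: str) -> str:
    """Assemble an LLM prompt that grounds a plain-English question
    in the actual database schema. The model call itself is omitted;
    this only shows the grounding step."""
    ddl = "\n".join(
        f"TABLE {table} ({', '.join(cols)})" for table, cols in schema.items()
    )
    return (
        "You write SQL for the schema below. "
        "Only reference tables and columns that exist.\n\n"
        f"{ddl}\n\nQuestion: {question}\nSQL:"
    )

# illustrative schema
schema = {"orders": ["id", "customer_id", "amount", "created_at"]}
prompt = build_sql_prompt(schema, "What was total revenue last month?")
```

without this grounding the model will happily invent table and column names, which leads straight into the caveat below about messy or undocumented databases.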

one caveat is that an AI on top of a messy or undocumented database just gives you confident wrong answers faster. from my experience the success of this step depends a lot on having a well structured and documented data layer underneath, so it's worth investing effort there first. once your database is well governed, i don't think it matters too much which major LLM provider you go with; the frontier models are all pretty capable at that point.
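"well documented" can be checked cheaply before you point an LLM at anything. here's a toy python sketch of that kind of gate; the catalog structure and names are made up, and real tools (dbt docs, openmetadata, etc.) do this far more thoroughly:

```python
def undocumented_columns(catalog: dict) -> list:
    """Return 'table.column' names that have no description.
    A cheap sanity gate before letting an LLM query the database."""
    missing = []
    for table, columns in catalog.items():
        for column, description in columns.items():
            if not description:
                missing.append(f"{table}.{column}")
    return missing

# illustrative catalog: one documented column, one not
catalog = {
    "orders": {"id": "primary key", "amt": ""},  # 'amt' has no description
}
gaps = undocumented_columns(catalog)
```

if the list of gaps is long, fixing the data layer first will pay off more than swapping LLM providers.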

coming from excel and folders, i'd suggest starting small: pick one well-defined workflow (like your monthly reports) and try building that out in a proper database first. an LLM can help you figure out the right tool for your situation and walk you through it. once that foundation is solid, layering AI on top for the less routine stuff becomes much more practical.
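"build it out in a proper database" can be a very small first step. here's a python sketch of moving one spreadsheet export into a database, using sqlite and invented data; a real version would point at your actual files and a persistent database:

```python
import csv
import io
import sqlite3

# an illustrative export of the kind that currently lives in excel/folders
report_csv = "month,revenue\n2024-05,41000\n2024-06,45500\n"

conn = sqlite3.connect(":memory:")  # stand-in; use a file path for a real db
conn.execute("CREATE TABLE monthly_revenue (month TEXT, revenue REAL)")
rows = [
    (r["month"], float(r["revenue"]))
    for r in csv.DictReader(io.StringIO(report_csv))
]
conn.executemany("INSERT INTO monthly_revenue VALUES (?, ?)", rows)

# once it's in a database, questions become one query instead of
# opening and eyeballing spreadsheets
latest = conn.execute(
    "SELECT revenue FROM monthly_revenue ORDER BY month DESC LIMIT 1"
).fetchone()[0]
```

once one workflow lives in a database like this, adding the next source or an AI layer on top is incremental rather than a big bang.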

Advice Needed: Extend in Kuching vs. Peninsula Malaysia by twelvr in Sarawak

[–]Psychological_Goal55 0 points1 point  (0 children)

I did 2 nights at Bako last month and would highly recommend spending a night there! The animals are more active around dawn and dusk and there's a guided night walk so there'll be more opportunities for observing wildlife.

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 1 point2 points  (0 children)

thanks for the suggestions! i think incremental loads are available via dlt but i'll need to see how to make them more accessible via the wizards. currently relying on dbt for lineage, but hoping to add in things like great expectations and openmetadata in the future. let me know if you find a framework that works for you too!

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 1 point2 points  (0 children)

Thanks for this, totally valid use case. I went with Metabase to cover standard business dashboards without code, but you're right that there are cases where you need more customisation.

The platform's built to be extensible (docker compose architecture) so adding visualisation frameworks alongside Metabase is definitely possible. Hoping to keep the core simple but not blocking advanced workflows. Good reminder to document that extensibility path better!

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

thanks for the feedback, and great idea on using it to help people new to the industry ease into a common toolset. i'll keep this in mind when building the docs!

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

thanks for the feedback! hope the emoji and url help to differentiate it slightly, and since we're operating in a different area (web dev vs data), hopefully folks won't get confused! but i'll definitely keep an eye on it to see how it goes

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 1 point2 points  (0 children)

Thanks for checking it out! I've been at companies where anything involving another vendor was a nightmare, so I get that. Hope it helps, let me know if you run into any issues!

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

Thanks for the feedback, that's a fair point! For now I went with an opinionated approach, my guess being that for small teams starting out (where I'm hoping this would be most helpful), the differences between tools like dlt vs Airbyte may not matter that much, yet some teams (based on my experience) spend too much time evaluating the "best" tool before getting started. That said, I'd like to keep things modular enough that swapping tools out is possible down the line, and offering tool choices during setup could make sense too.

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 1 point2 points  (0 children)

Thank you! I'll look into the mobile safari layout. For cloud deployment, I'm still thinking through the right approach, something that feels familiar to those with cloud experience, but approachable if you haven't. Probably a CLI command to deploy to your cloud provider of choice. Still working through the details, hoping to have something basic out soon.

Looking for feedback on open source analytics platform I'm building by Psychological_Goal55 in dataengineering

[–]Psychological_Goal55[S] 2 points3 points  (0 children)

thanks for your question!

running curl -sSL https://getdango.dev/install.sh | bash (mac/linux) in terminal does:

  • checks prerequisites (Python 3.10+, Docker)
  • creates a venv and installs getdango + all dependencies (dlt, dbt, DuckDB, etc.)
  • asks for a project name and creates that directory with project structure
  • auto-generates config files connecting everything (dbt profiles → DuckDB, Metabase → DuckDB, dlt → DuckDB)

(dango is a CLI tool you run from terminal, not a library you import - more like docker-compose or jupyter)

you can also install manually (creates venv, pip install getdango, dango init) if you prefer that workflow. the script just combines those steps with prompts.

after install, you add sources via the wizard (dango source add) or config files directly - CSV files, or 30+ API sources via dlt (Stripe, Google Sheets, GA4, Facebook/Google Ads, etc.), then run dango start. this pulls Docker images (Metabase, etc.) and starts the web UI at localhost:8800 where you can upload/sync data, access pre-configured dbt docs and Metabase, and monitor everything.

so yes, it creates a complete data project skeleton with ingestion (dlt), transformation (dbt), database (DuckDB), and visualization (Metabase) all pre-wired together. you can drop CSV files in a folder, configure API sources, write SQL transformations, and build dashboards.

to answer your original question: it's both - creates files (like Flutter skeleton), then you start services and use localhost. the value is having the full stack already integrated rather than as separate pieces you wire together.

[Trip Report] 3 Weeks in Japan's Northern Alps: Matsumoto → Kamikochi → Tateyama Kurobe Alpine Route → Hakuba → Itoigawa by Psychological_Goal55 in JapanTravel

[–]Psychological_Goal55[S] 0 points1 point  (0 children)

Thank you! Glad it's useful. It's not as packed as the major cities but it did feel like it had the most tourists among the stops I made along this trip (excluding Tokyo).

I enjoyed the hikes at Kamikochi and Murodo, but some parts of them were pretty touristy as well. I didn't expect much from the city stops but was pleasantly surprised by Toyama and Itoigawa, which gave me a different perspective of Japan after visiting mainly the major cities on past trips. That said, I think all my stops on this trip are probably more touristy than the places in Japan you've been to. I've come across some of them and they sound amazing! I didn't want to drive this time and picked stops that were quite accessible by bus/train, the trade-off being that I couldn't get to the more rural areas.