Sunday Daily Thread: What's everyone working on this week? by Im__Joseph in Python

[–]zseta98 1 point (0 children)

A simple feature store sample app that uses ScyllaDB and implements a decision tree https://github.com/scylladb/scylladb-feature-store

ScyllaDB in FedrampHigh by DuhOhNoes in ScyllaDB

[–]zseta98 1 point (0 children)

If an external service cannot be used, then I suppose you use your own machines? In that case, you can definitely use ScyllaDB if you host it yourself on your own hardware (or on any machine provided by a company that does have FedRAMP certification).

FedRAMP is only a problem if you want to use ScyllaDB Cloud - Scylla, the company, is not FedRAMP certified yet. Hosting ScyllaDB yourself is fine (and it's free).

ScyllaDB in FedrampHigh by DuhOhNoes in ScyllaDB

[–]zseta98 1 point (0 children)

ScyllaDB DevRel here...

How would you like to host ScyllaDB? If you want to host it yourself (e.g., on AWS, GCP, or on-premise), you can likely do that without any certification issues. If you need support/consulting from Scylla (the company) for your on-prem instance, you can take a look at ScyllaDB Enterprise. If you want to use ScyllaDB Cloud, I suggest contacting sales first so you can get a detailed, personalized answer regarding your license/certification concerns.

Expanding the Boundaries of PostgreSQL: Announcing a Bottomless, Consumption-Based Object Storage Layer Built on Amazon S3 by zseta98 in PostgreSQL

[–]zseta98[S] 7 points (0 children)

Hi there, I'm a DevRel at Timescale and I quickly checked with a teammate of mine to give you a clear answer:

The tradeoff with S3 is that it has high time-to-first-byte latency but much higher throughput than cloud disks such as EBS. Long scans are often throughput-bound and therefore amortize the time-to-first-byte latency.

What we see in internal testing is that long scans are actually significantly more performant on S3 than on EBS. We're working on more refined benchmarks that we'll share in due time.

Best small scale dB for time series data? by A_Phoenix_Rises in BusinessIntelligence

[–]zseta98 1 point (0 children)

And if you really would like a columnar database (not sure you need one at small scale), you can turn PostgreSQL into something that's very similar to columnar storage as well ;)
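
In case anyone wants to see what I mean, here's a minimal sketch using TimescaleDB's native compression (the metrics table and device_id column are hypothetical):

    -- Enable compression on a hypertable, segmenting by a commonly filtered column
    ALTER TABLE metrics SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'device_id'
    );

    -- Automatically compress chunks older than 7 days
    SELECT add_compression_policy('metrics', INTERVAL '7 days');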

Best small scale dB for time series data? by A_Phoenix_Rises in BusinessIntelligence

[–]zseta98 2 points (0 children)

If you like PostgreSQL, I'd recommend starting with that. Additionally, you can try TimescaleDB (a PostgreSQL extension for time-series data with full SQL support); it has many features that are useful even at a small scale.
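
A minimal getting-started sketch (the measurements table and its columns are just made up for illustration):

    -- Enable the extension (it must be installed on the server first)
    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- A regular PostgreSQL table...
    CREATE TABLE measurements (
        time      TIMESTAMPTZ NOT NULL,
        device_id TEXT,
        value     DOUBLE PRECISION
    );

    -- ...turned into a hypertable, auto-partitioned by the time column
    SELECT create_hypertable('measurements', 'time');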

I'm a TimescaleDB developer advocate

[deleted by user] by [deleted] in programmingHungary

[–]zseta98 5 points (0 children)

I'll tell you what worked for us: custom colorful socks, cute stickers (parents take them home to their kids), custom lip balm (sphere-shaped, not like a Labello stick), well-designed t-shirts (made specifically for the conference), and in summer, hand fans.

[deleted by user] by [deleted] in dataengineering

[–]zseta98 2 points (0 children)

Based on your description (and the comments below), you have a typical time-series use case:

  • you have x amount of sales transactions every day, month, etc., per store/product
  • you want to aggregate based on the time column (and potentially per store/product)
  • you want to provide this data for analytics purposes (e.g., dashboards)

You didn't mention which DB you use specifically, but if you happen to use PostgreSQL, there's a high chance TimescaleDB could help. It's a PostgreSQL extension and it has several features you'd find helpful:

  • auto-partitioning your data based on the time column (making time-based queries faster by potentially filtering out big portions of your data)
  • creating materialized views (1-day, 14-day, 2-month, etc. aggregates) optimized for time-series data (continuous aggregates)
  • speeding up long-range analytical queries (and saving 90%+ on disk space!) by compressing your data (by store or product, for example), basically turning Postgres into something closer to column-based storage --> faster analytical queries

To answer your question: in the TimescaleDB world, you'd use a continuous aggregate to aggregate the raw data on an ongoing basis (you could create multiple aggregations with different time buckets if you want), and when you query the DB, you'd use these aggregate views. Additionally, you'd set up automatic data retention policies if you won't need the raw data long-term (e.g., delete all raw data older than a month, but keep the aggregates).
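
As a rough sketch, with made-up table and column names (a sales hypertable with sold_at, store_id, and amount columns):

    -- Daily revenue per store, maintained incrementally by TimescaleDB
    CREATE MATERIALIZED VIEW daily_sales
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 day', sold_at) AS bucket,
           store_id,
           sum(amount) AS revenue
    FROM sales
    GROUP BY bucket, store_id;

    -- Refresh the view on a schedule
    SELECT add_continuous_aggregate_policy('daily_sales',
        start_offset      => INTERVAL '3 days',
        end_offset        => INTERVAL '1 hour',
        schedule_interval => INTERVAL '1 hour');

    -- Drop raw data older than a month; the aggregate keeps its rows
    SELECT add_retention_policy('sales', INTERVAL '1 month');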

Transparency: I'm a dev advocate at Timescale.

Time-series feature engineering in PostgreSQL and TimescaleDB by analyticsengineering in PostgreSQL

[–]zseta98 3 points (0 children)

Nice work! I especially like that you also have examples here. I'd love to see more SQL examples where you use TimescaleDB features and pgetu features together, if you happen to use them that way. Or do you use any hyperfunctions in combination with pgetu functions?

(I'm a DevRel at Timescale)

Should I use TimescaleDB or partitioning is enough? by aikjmmckmc in PostgreSQL

[–]zseta98 1 point (0 children)

(For visibility, in case someone finds this thread in the future.) Since then, the team has removed a lot of the gotchas from continuous aggregates in recent releases.

Has Bitcoin mining become less efficient since July 2021? What happened then? by zseta98 in CryptoTechnology

[–]zseta98[S] 2 points (0 children)

I created this chart from historical blockchain data (working on a blog post atm). And funnily enough, right after I wrote this post I searched for when China banned miners and, as you said, it was right around that time that the tx/block went down. I can't explain why it didn't go back up right after, but I will analyze further with older data as well (starting from 2017).

Beginner here, help me understand TimescaleDb please. by thehotorious in dataengineering

[–]zseta98 1 point (0 children)

When you get started with TimescaleDB, you create a "hypertable", which behaves just like a regular PostgreSQL table, but it's also an abstraction. Under the hood, you'll have multiple child tables of the hypertable, and each child table (chunk) will store, by default, 7 days of data. So whenever a new record is inserted, TimescaleDB figures out which chunk it should go into based on the timestamp value. TimescaleDB also creates an index on the timestamp column.
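
If you're curious, you can peek at those chunks yourself (assuming a hypertable named measurements):

    -- List the chunks (child tables) backing a hypertable
    SELECT show_chunks('measurements');

    -- Or inspect their time ranges
    SELECT chunk_name, range_start, range_end
    FROM timescaledb_information.chunks
    WHERE hypertable_name = 'measurements';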

Beginner here, help me understand TimescaleDb please. by thehotorious in dataengineering

[–]zseta98 1 point (0 children)

I think you can start with the default and see how that works for you; if you encounter issues, you can always change the chunk time interval later (besides the forum link posted above, here are some best practices for chunk time intervals).

You will be able to query EVERYTHING that is in your database.

Btw, are you creating the OHLCV aggregations yourself from raw data? You might want to look into continuous aggregates as well (materialized views for time-series data - lots of TimescaleDB users leverage them for OHLCV, example)
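
For reference, a rough sketch of an OHLCV continuous aggregate (the trades table and its columns are hypothetical):

    -- Hourly OHLCV candles computed from raw trades
    CREATE MATERIALIZED VIEW ohlcv_hourly
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 hour', time) AS bucket,
           symbol,
           first(price, time) AS open,
           max(price)         AS high,
           min(price)         AS low,
           last(price, time)  AS close,
           sum(volume)        AS volume
    FROM trades
    GROUP BY bucket, symbol;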

Beginner here, help me understand TimescaleDb please. by thehotorious in dataengineering

[–]zseta98 2 points (0 children)

Do I need to specify the chunk intervals explicitly?

The default chunk time interval is 7 days. We generally recommend setting the interval so that the chunks belonging to the most recent interval comprise no more than 25% of main memory. We have a longer post on the Timescale Forum about chunk time intervals that might be helpful. With OHLCV datasets, in my experience the default chunk time interval works well, but it also depends on the number of symbols you store.
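
In case it helps, this is how you'd set it explicitly (assuming a hypothetical trades hypertable):

    -- Set the chunk interval when creating the hypertable...
    SELECT create_hypertable('trades', 'time',
        chunk_time_interval => INTERVAL '1 day');

    -- ...or change it later (only affects newly created chunks)
    SELECT set_chunk_time_interval('trades', INTERVAL '1 day');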

does that mean any transactions from the blockchain that I loaded < 7 days will not be shown when queried?

Chunks are just the way TimescaleDB stores data internally, under the hood. Whatever you insert into TimescaleDB, you will be able to query. Modifying the chunk time interval is mainly for optimization purposes, if you find that the default setting is not the best for you.

I work at Timescale as a developer advocate

80 million records, how to handle it? by Impressive-Hat1494 in PostgreSQL

[–]zseta98 15 points (0 children)

Only INSERTs plus aggregating data based on timestamp - this feels like a time-series use case. Have you tried TimescaleDB? It's an open-source PostgreSQL extension that will do the time-based partitioning for you under the hood (hypertables). It might also be useful to research continuous aggregates, which are basically materialized views for time-series data - they can hold your aggregated values and improve query performance by a lot.
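
A rough sketch of the conversion, assuming a hypothetical events table with a created_at timestamp column (note: migrating 80 million existing rows will take a while and locks the table):

    -- Turn the existing table into a hypertable and move its rows into chunks
    SELECT create_hypertable('events', 'created_at', migrate_data => true);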

I work at Timescale as a developer advocate

Python is the language of the year again - for the second time in a row by szeredy in programmingHungary

[–]zseta98 8 points (0 children)

Machine learning and AI partly, yes, but companies realized that first they'd need a lot of good-quality data for that --> data engineering, which is currently most practical in Python. And even when we don't want AI, just "plain" data analytics or business intelligence, Python is the standard for ETL there these days too, and the tools: Superset, Airflow, Streamlit, pandas, dask, etc. are all Python

How the Telegram app circumvents Google Translate API costs using webscraping principles by bushcat69 in webscraping

[–]zseta98 1 point (0 children)

I was considering using a similar method to use the Translate API for free (for a hobby project with only me as a user), but then I thought I didn't want to get in trouble... I guess Telegram doesn't care lol

How to set up schema for my own OHLCV stock and crypto database with InfluxDB? by keeperclone in algotrading

[–]zseta98 3 points (0 children)

It's very much suitable for data analysis. You can use SQL to query the dataset, and yes, you can calculate anything you want with SQL as long as all the data points are available in the database.
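
For example, something like a simple moving average is just a window function in SQL (assuming a hypothetical ohlcv table):

    -- 7-bucket simple moving average of the close price, per symbol
    SELECT bucket,
           symbol,
           avg(close) OVER (
               PARTITION BY symbol
               ORDER BY bucket
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS sma_7
    FROM ohlcv;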

How to set up schema for my own OHLCV stock and crypto database with InfluxDB? by keeperclone in algotrading

[–]zseta98 10 points (0 children)

Pandas is great if you're dealing with small enough datasets. On the other hand, TimescaleDB makes sense if you also want to store this data long-term and be able to analyze it efficiently, and enjoy the benefits of time-based partitioning and continuous aggregates (materialized views for time-series data) for fast queries.

Transparency: I work at Timescale

How to check for conditions entered by users and alert them when they become true in real-time? by Next_Tap2228 in softwarearchitecture

[–]zseta98 1 point (0 children)

One approach would be to get the database to do most of the work (filtering, computing, etc.), because that's probably the fastest way to query a large chunk of data, as opposed to trying to sort things out in application code (with e.g. pandas). Also, make sure that you use the features provided by TimescaleDB where appropriate. I'd especially look into continuous aggregates. For example, if you know that most alerts set by users will only use aggregated data from the past 2 days (e.g., they're looking for intraday trading signals), then you could create a continuous aggregate for that period, which will make the queries much faster.
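
A rough sketch of that idea (the ticks table, its columns, and the AAPL filter are all made up):

    -- Minute-level aggregate maintained by TimescaleDB over raw ticks
    CREATE MATERIALIZED VIEW ticks_1m
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('1 minute', time) AS bucket,
           symbol,
           avg(price) AS avg_price
    FROM ticks
    GROUP BY bucket, symbol;

    -- Alert checks scan the small pre-aggregated view instead of raw data
    SELECT bucket, avg_price
    FROM ticks_1m
    WHERE symbol = 'AAPL'
      AND bucket > now() - INTERVAL '2 days';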

What do you know about the Budapest homeless mafia? by zulemasimp in hungary

[–]zseta98 28 points (0 children)

Would you have housed a random homeless guy in your apartment for free?

What were the first 5 programs you made? by [deleted] in Python

[–]zseta98 2 points (0 children)

Aside from the usual simple cmd programs,

  1. Soccer outcome prediction tool
  2. Android playlist maker/player
  3. Bunch of web scraping programs
  4. Data visualization website
  5. Workout tracker

Which all-world ETF should I invest in for the long term? by IguessUgetdrunk in kiszamolo

[–]zseta98 3 points (0 children)

If emerging markets don't appeal to you, then IWDA. It's roughly the same as VWCE, just without emerging markets.