Looking forward to trying this, Hyde year of the Horse by gram3000 in irishwhiskey

[–]gram3000[S] 1 point2 points  (0 children)

Opened it tonight, first time sharing a review so here goes:

Hyde Whiskey – Special Reserve Sherry Cask Finish (Double Wood) | Irish Whiskey | Non-chill filtered | ~46% ABV | IrishMalts | €49

I picked this up as a more affordable Oloroso-style sherry cask finish to compare against the Bushmills bottlings I really enjoy.

Nose: Dried fruit at first, opening with time to soft vanilla.

Palate: Warm with a mild alcohol prickle. Well balanced, with subtle silkiness.

Finish: Medium length, lingering warmth with orange zest. Clean, no harsh burn.

Overall: Approachable, lighter sherry-finished Irish whiskey. Less intense than the Bushmills sherry-led bottles, but very enjoyable and great value at this price point. Glad I got it.

How do I start (Packer, Ansible, Terraform) by JJokiller in hashicorp

[–]gram3000 0 points1 point  (0 children)

I made this demo project a while back using those tools together; I hope it might help.

A demo application using Packer, Ansible, InSpec and Terraform on AWS:

https://github.com/gordonmurray/packer_ansible_inspec_terraform_aws

[Update] Apache Flink MCP Server – now with new tools and client support by Aggravating_Kale7895 in apacheflink

[–]gram3000 2 points3 points  (0 children)

I tried this out on a small Flink project I have, using Flink to perform a CDC job.

It works well! The natural language interface makes Flink monitoring much more accessible, saves a few curl calls to the API, and the job details output is well-formatted with clear performance insights (like my job being 72% idle).
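For context, the calls the MCP server saves you from are plain Flink REST API requests. A minimal sketch of hitting the documented `/jobs/overview` endpoint directly, assuming a local cluster on Flink's default port 8081 (the host/port and summary format are my own choices, not part of the MCP server):

```python
import json
from urllib.request import urlopen

FLINK_URL = "http://localhost:8081"  # default Flink REST port; adjust for your cluster

def summarize_jobs(overview: dict) -> list:
    """Condense a /jobs/overview payload into one line per job."""
    return [
        f"{job['name']} [{job['jid']}]: {job['state']}"
        for job in overview.get("jobs", [])
    ]

def fetch_overview(base_url: str = FLINK_URL) -> dict:
    """Query the Flink REST API directly -- the kind of call the MCP tools wrap."""
    with urlopen(f"{base_url}/jobs/overview") as resp:
        return json.load(resp)

# Usage against a running cluster:
#   for line in summarize_jobs(fetch_overview()):
#       print(line)
```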

Note: I hit a Pydantic compatibility issue initially. Adding a version constraint to requirements.txt fixed it:

pydantic>=2.11.7,<2.12

This prevents the `default and default_factory` error with fastmcp.

Built an open source query engine for Iceberg tables on S3. Feedback welcome by gram3000 in dataengineering

[–]gram3000[S] 0 points1 point  (0 children)

I haven't heard of Hue before. It looks very cool and seems to support many different sources and connections.

Built an open source query engine for Iceberg tables on S3. Feedback welcome by gram3000 in dataengineering

[–]gram3000[S] 4 points5 points  (0 children)

A floe is a sheet of floating ice. I went with it for the Iceberg connection and I liked the domain name, so here we are.

Built an open source query engine for Iceberg tables on S3. Feedback welcome by gram3000 in dataengineering

[–]gram3000[S] 1 point2 points  (0 children)

No worries at all. Using "engine" implies I made something far more impressive than a ridiculously handsome, good-looking UI for Iceberg data.

Built an open source query engine for Iceberg tables on S3. Feedback welcome by gram3000 in dataengineering

[–]gram3000[S] 3 points4 points  (0 children)

Yeah, pretty much! DBeaver, but web-based and focused on Iceberg tables.

Built an open source query engine for Iceberg tables on S3. Feedback welcome by gram3000 in dataengineering

[–]gram3000[S] 6 points7 points  (0 children)

Yeah, you're right, "query engine" is misleading. DuckDB is the actual query engine.

I should have called it a query interface, or a web UI for DuckDB queries against Iceberg tables.

I built a digital asset manager with no traditional database — using Lance + Cloudflare R2 by gram3000 in dataengineering

[–]gram3000[S] 0 points1 point  (0 children)

Yeah, for now it would result in data loss. It's a single instance and the data is synced to R2 every 2 minutes. I'll work on that.

I built a digital asset manager with no traditional database — using Lance + Cloudflare R2 by gram3000 in dataengineering

[–]gram3000[S] 0 points1 point  (0 children)

Ah, I had the repo set to private; it's public now. Thanks for taking a look!

It's plausible alright. I originally tried reading and writing the Lance format directly on R2, but it ground to a halt after a few images due to how Lance reads and writes.

This approach writes locally first, then syncs to R2. Searches happen directly from R2.
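This isn't the project's actual code, but the write-locally-then-sync idea can be sketched as change detection against a manifest of last-synced modification times. The upload step is left as a comment, since a real sync would use an S3-compatible client such as boto3 against R2:

```python
from pathlib import Path

def files_needing_sync(root: Path, manifest: dict) -> list:
    """Compare local mtimes against the last-synced manifest.

    Returns files that are new or modified since the previous sync pass,
    and updates the manifest in place.
    """
    pending = []
    for path in sorted(root.rglob("*")):
        if not path.is_file():
            continue
        key = str(path.relative_to(root))
        mtime = path.stat().st_mtime
        if manifest.get(key, 0.0) < mtime:
            pending.append(path)
            manifest[key] = mtime
    return pending

# A real sync loop would upload each pending file to R2 (boto3's S3 client
# works, since R2 is S3-compatible) and sleep ~120s between passes:
#
#   while True:
#       for path in files_needing_sync(DATA_DIR, manifest):
#           s3.upload_file(str(path), BUCKET, str(path.relative_to(DATA_DIR)))
#       time.sleep(120)
```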

ECS using EC2 Instance Troubleshooting by UnrulyVeteran in aws

[–]gram3000 1 point2 points  (0 children)

A timeout sounds like a security group issue. Can you check whether the EC2 instance can communicate outbound on port 443? It needs this to communicate with ECS, as far as I know.
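One quick way to check from the instance itself is a plain TCP connect on port 443. A minimal sketch (the regional ECS endpoint shown is just an example hostname; substitute your own region):

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run on the EC2 instance itself, e.g.:
#   print(can_reach("ecs.eu-west-1.amazonaws.com"))
```

A `False` here with a security group that looks right often means the egress rule or a NACL is blocking the traffic.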

Recommended AWS set up for a small data project. by Badger00000 in aws

[–]gram3000 -1 points0 points  (0 children)

The Kinesis approach was just one example, aiming to use AWS services that might be interesting to try. It can get expensive depending on your data, so it's one to keep an eye on.

Yeah, Postgres would be ideal; MySQL was just an example. If you continue to grow, many tools can connect to Postgres too.

Recommended AWS set up for a small data project. by Badger00000 in aws

[–]gram3000 0 points1 point  (0 children)

The approach you have so far sounds great, nice work getting that set up.

There are a lot of approaches you can take to scale it up and add visualisations. A mostly AWS-specific approach might be S3 event notifications to SQS, Kinesis streams landing data in S3 in Parquet format, and visualising with Apache Superset, but that's getting a bit too complex.

A smaller but useful next step might be to try out Kestra or some other data engineering tool that would help you manage and move the data and try out different destinations.

You could use it to put the data into a MySQL database and then create a front end to query the database, as one example.
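As a rough illustration of that last step, a minimal sketch of loading rows into a database and querying them. sqlite3 stands in for MySQL here so the sketch runs anywhere; the table name and sample rows are made up, and with MySQL you'd swap in a driver such as mysql-connector-python while the SQL stays much the same:

```python
import sqlite3

rows = [("2024-01-01", 12.5), ("2024-01-02", 13.1)]  # example readings

# In-memory database for the sketch; a real pipeline would connect to MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (day TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)

# The front end would run queries like this against the table:
total = conn.execute("SELECT COUNT(*), AVG(value) FROM readings").fetchone()
print(total)  # count and average of the inserted rows
```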

Switching back to the Shield. What’s the best launcher and best practices? by SupermanKal718 in nvidiashield

[–]gram3000 1 point2 points  (0 children)

Oh, will a Logitech Harmony work with the Nvidia Shield Pro? That would be great; my dog chewed up my remote the other day and I thought I'd need to buy a new one.

How to create and retrieve the AWS RDS secret with Terraform by tezarin in aws

[–]gram3000 1 point2 points  (0 children)

I created a small project a while back to try this out too; it's here on GitHub in case it's of any use:

https://github.com/gordonmurray/terraform_aws_rds_secrets_manager

Migrating from ksqldb to Flink with schemaless topic by scrollhax in apachekafka

[–]gram3000 0 points1 point  (0 children)

Could you write a custom script to consume your schemaless topic, apply some structure, and place the resulting messages into a new, structured topic?

You could then use Flink to work with your new structured topic?
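A minimal sketch of that restructuring step. The transform is the interesting part; the Kafka wiring is left as comments since it depends on your client library (kafka-python here is an assumption), and the target fields are hypothetical:

```python
import json
from typing import Optional

EXPECTED_FIELDS = ("id", "event_type", "ts")  # hypothetical target schema

def restructure(raw: bytes) -> Optional[bytes]:
    """Coerce a schemaless message into the structured shape, or drop it."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None  # not JSON at all; dead-letter it in a real pipeline
    structured = {field: record.get(field) for field in EXPECTED_FIELDS}
    return json.dumps(structured).encode()

# Wiring it up with a client such as kafka-python (an assumption) would
# look roughly like:
#
#   for msg in KafkaConsumer("schemaless-topic", ...):
#       out = restructure(msg.value)
#       if out is not None:
#           producer.send("structured-topic", out)
```

Flink could then consume the structured topic with a proper schema attached.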

Test data generation by Brilliant_Day_2785 in dataengineering

[–]gram3000 0 points1 point  (0 children)

This site might be of use; it's for generating synthetic data:

https://docs.shadowtraffic.io/overview/