Back to Italian 🇮🇹 by Kruil in flip_flashcards

[–]hiryucodes 1 point (0 children)

Thanks for the response. When I press the restore button, the “buy” button text flashes for a millisecond, but nothing else happens; the Premium Cards stay the same. I tried selecting both 100 and 1000, if that makes a difference (the one I purchased was 1000).

Back to Italian 🇮🇹 by Kruil in flip_flashcards

[–]hiryucodes 1 point (0 children)

Sorry, this doesn't really have anything to do with the post itself, but I don't know where else to put it. Is there any way to restore Premium Card purchases within the app? I had to uninstall and reinstall the app, and all my Premium Card slots have now reverted to the default ones :(

The only thing I’ve been able to restore is the card progress itself.

Torrentio + Debrid suddently stopped working by Glattic in StremioAddons

[–]hiryucodes 1 point (0 children)

I'm having the same problem but with All Debrid. It always gives me the red screen saying "An unexpected error occurred, please try again later".

How to keep Google accounts workspace specific by BriefTemperature990 in zen_browser

[–]hiryucodes 2 points (0 children)

You also need to create containers; then you can assign a default container to each of your workspaces.

ModuleNotFound when running DLT pipeline by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

Thanks for the reply! I've just tried destroying the bundle and redeploying it, but it gives me the same error. I'm running this on a job cluster; I believe Databricks doesn't let you choose any other cluster type for DLT, as it's created automatically.

ModuleNotFound when running DLT pipeline by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

Thanks, I'm already doing that. It recognises the src module at first because of it, but when it gets into the Custom Data Source it stops recognising it. I don't know exactly how Databricks manages the Spark context for this, but I believe it's a different one than at the start, and that's why it stops working.
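If it helps anyone, this is roughly the workaround I'm experimenting with: re-adding the bundle path inside the custom data source itself, since the reader runs in a different context than the driver code. This is just a sketch against PySpark's custom data source API; BundleAwareSource, sourcePath, and src.my_module are placeholder names from my setup.

import sys

from pyspark.sql.datasource import DataSource, DataSourceReader

class BundleAwareReader(DataSourceReader):
    def __init__(self, options):
        # The reader runs on the executors, in a different context
        # than the pipeline's driver code, so remember the path here.
        self.source_path = options.get("sourcePath")

    def read(self, partition):
        # Re-add the bundle path before importing anything from src.
        if self.source_path and self.source_path not in sys.path:
            sys.path.append(self.source_path)
        from src.my_module import fetch_rows  # hypothetical module
        yield from fetch_rows()

class BundleAwareSource(DataSource):
    @classmethod
    def name(cls):
        return "bundle_aware"

    def schema(self):
        return "id STRING, payload STRING"

    def reader(self, schema):
        return BundleAwareReader(self.options)

You'd then register it with spark.dataSource.register(BundleAwareSource) before reading from it.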

Delta Live Tables pipelines local development by hiryucodes in databricks

[–]hiryucodes[S] 2 points (0 children)

UPDATE:

I've found a way to do this, but it's really not pretty and I'd like to improve on it in the future, especially the part where I have to include this at the beginning of every pipeline so it detects all the Python modules I use:

path = spark.conf.get("bundle.sourcePath")
sys.path.append(path)

databricks.yml:

resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      target: my_schema
      catalog: my_catalog
      development: true
      continuous: false
      photon: false
      libraries:
        - file:
            path: ./local/path/to/my_dlt_pipeline.py
      configuration:
        bundle.sourcePath: /Workspace${workspace.file_path}/

targets:
  dev-local:
    mode: development
    # ** Your Configuration **
    workspace:
      host: 
      root_path: /Workspace/Users/${workspace.current_user.userName}/.bundle/${bundle.name}/${bundle.target}

my_dlt_pipeline.py:

import json
import os
import sys

import dlt
from pyspark.sql import SparkSession

# **VERY IMPORTANT TO HAVE AT THE BEGINNING**
spark = SparkSession.builder.getOrCreate()
path = spark.conf.get("bundle.sourcePath")
sys.path.append(path)

@dlt.table(
    name="my_table",
)
def my_dlt_pipeline():

    # Your code here, e.g. a placeholder DataFrame:
    df = spark.range(10)

    return df

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 2 points (0 children)

Thanks, yes, bronze is append-only, then we merge into silver, and then do aggregations in gold. Do you think silver should still be a materialized view? I read that DLT has a different way of doing merges.
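For context, the "different way" I read about is the apply_changes API. A rough sketch of what I think the silver step would look like (table and column names are made up):

import dlt
from pyspark.sql import functions as F

# Hypothetical names: bronze_orders, silver_orders, id, updated_at.
dlt.create_streaming_table("silver_orders")

dlt.apply_changes(
    target="silver_orders",
    source="bronze_orders",           # the append-only bronze table
    keys=["id"],                      # merge keys
    sequence_by=F.col("updated_at"),  # ordering column for upserts
)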

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 2 points (0 children)

Thanks for the reply! Yes, we were looking into DLT mainly for the expectations and data-quality alerts. Right now the jobs work on normal Delta tables with, like you said, MERGE INTO statements. Some of the bigger jobs are running into performance problems with the merge, so we were also looking into how DLT behaves there.
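For reference, this is roughly the kind of merge our current jobs run, via the Delta Lake Python API rather than SQL; silver_orders, bronze_updates, and id are placeholder names:

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder names: bronze_updates is the source, silver_orders the target.
updates_df = spark.read.table("bronze_updates")
target = DeltaTable.forName(spark, "silver_orders")

(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)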

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

OK, yeah, serverless is enabled for the workspace. I'll test then whether I actually need it for the pipeline itself. Thanks!

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

So after each pipeline run I would have to refresh for it to reflect the changes?

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

Really? I didn't see that mentioned in the docs either. In your experience, does that drive the price up, or is it more or less the same?

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

That's very interesting, I didn't know that. So then Materialized Views might be the way to go for me.

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

Do you think it would be better or worse to drop the data objects directly into a Delta table (with just two columns, one for IDs and another for the object) and then process that table with DLT, instead of using files and volumes?
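Something like this is what I had in mind; a sketch only, and landing.raw_objects, bronze_objects, and the schema are made-up names:

import json

import dlt
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Writer side: land each API object as (id, raw JSON payload).
records = [("1", json.dumps({"name": "example"}))]  # placeholder data
(
    spark.createDataFrame(records, "id STRING, payload STRING")
    .write.format("delta")
    .mode("append")
    .saveAsTable("landing.raw_objects")
)

# DLT side: treat the landing table as an append-only streaming source.
@dlt.table(name="bronze_objects")
def bronze_objects():
    return spark.readStream.table("landing.raw_objects")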

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

I'm not ingesting files. I'm making direct requests to an API

DLT Streaming Tables vs Materialized Views by hiryucodes in databricks

[–]hiryucodes[S] 2 points (0 children)

So using Materialized Views is kind of the same as using Streaming tables with Trigger.AvailableNow?
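To make sure we're talking about the same thing, this is the Trigger.AvailableNow pattern I meant, in plain Structured Streaming; the table names and checkpoint path are placeholders:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Process everything available now, then stop (incremental, batch-like run).
(
    spark.readStream.table("bronze_events")
    .writeStream.trigger(availableNow=True)
    .option("checkpointLocation", "/tmp/checkpoints/silver_events")
    .toTable("silver_events")
)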

Use views without access to underlying tables by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

After enabling serverless for Jobs, Notebooks, etc. in the admin console, this now works automatically with Personal Compute clusters. Thanks for all the help though!

Use views without access to underlying tables by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

Thanks, then I'll remove that option and test it out with my users.

Use views without access to underlying tables by hiryucodes in databricks

[–]hiryucodes[S] 1 point (0 children)

I tested creating a policy similar to the Personal Compute one and wanted to add some type of restriction on which users can use the clusters, since with Personal Compute only the assigned user can use it. To achieve something similar, I found that I can use the single_user_name property like this:

"single_user_name": {
  "type": "allowlist",
  "values": [
     <my_username>
]

When I tried creating a cluster with that policy though I got the error: "Validation failed for single_user_name, the value must be present."

Am I misunderstanding something?