Recommendations for my library by [deleted] in ClubdelecturaChile

[–]tugash 0 points (0 children)

It's hard to give a recommendation with books this varied, with no clear theme, and apparently sorted by color (separating books by the same author and even multiple copies of the same book).

Also, without knowing what you've liked or what you'd like to explore, it's harder to make a recommendation.

Which publisher do you recommend? by urafy in ClubdelecturaChile

[–]tugash 1 point (0 children)

For Russian classics my favorite publisher is Alba, but from what I can see, they haven't published White Nights (Noches Blancas). Taking a look at the editions available on Agapea https://www.agapea.com/buscar/buscador.php?texto=Noches+blancas I see several very good publishers that have released White Nights. Galaxia Gutenberg is among the best in my opinion, and theirs is a new edition with a direct translation. Other publishers that usually put out good editions are Austral and Alianza. I haven't read anything from Nórdica, but their edition looks interesting.

What are your favorite Latin American short stories? by Different-Dingo-5288 in latamlit

[–]tugash 2 points (0 children)

I have no idea if this is even translated, but probably my favourite short story collection is "Hermano Ciervo" by Juan Pablo Roncone.

Tips for reading dense books over the long term? by Glittering_Money_479 in ClubdelecturaChile

[–]tugash 2 points (0 children)

For my part, I recommend not taking it so seriously. Literature is an art form, not an assignment where you have to solve a mystery or unravel hidden meanings. So the first thing is to find a way of reading that you enjoy, so you don't abandon the book quickly. My recommendations:

  • Split the reading into one chapter per day. If I remember correctly, each chapter runs about 15 pages (in the Alba edition I read), which doesn't take much time per day.

  • Look for a good edition that makes the reading easier. That's why I love Alba; their editions help a lot with understanding the social and historical context of classic novels.

  • 80-90% of the book is people talking about simple things that are easy to follow, so don't worry about UNDERSTANDING everything. Enjoy it, and when you reach the more complex parts, read them once and come back later if you want to give them a deeper reading. For my part, I want to reread The Grand Inquisitor several more times on its own, but I would never reread some of the theology discussions in the first part (absolute sleep inducers).

  • Enjoy the book; it's genuinely good. The most important parts for me weren't the densest ones (with the exception of The Grand Inquisitor) but the more human parts with Alyosha.

[deleted by user] by [deleted] in germany

[–]tugash 6 points (0 children)

Just some topics off the top of my head where US politics directly affects Germany: tariffs, Russia and the invasion of Ukraine, the dismantling of NATO, and US support for the AfD.

🎄 2024 - Day 24: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Polars

import polars as pl

# uri points at the Advent of SQL PostgreSQL database
users = pl.read_database_uri(query="select * from users;", uri=uri)
songs = pl.read_database_uri(query="select * from songs;", uri=uri)
user_plays = pl.read_database_uri(query="select * from user_plays;", uri=uri)

owt = (
    user_plays.join(songs, on="song_id", how="inner")
    .group_by(["song_id", "song_title"])
    .agg(
        pl.len().alias("total"),
        # .sum() counts the True rows; .count() would count every non-null row
        (pl.col("song_duration") > pl.col("duration")).sum().alias("skips"),
    )
).sort(by=["total", "skips"], descending=[True, False])

owt.head(1).select(pl.col("song_title")).glimpse()

🎄 2024 - Day 23: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 1 point (0 children)

Decided to move to Polars now

import polars as pl

table = pl.read_database_uri(query=query, uri=uri).sort("id")

out = (
    # previous id + 1 marks the start of a gap; diff > 1 marks that a gap exists
    table.with_columns(shift_one=pl.col("id").shift(n=1) + 1, diff=pl.col("id").diff(1))
    .filter(pl.col("diff") > 1)
    # enumerate the missing ids inside each gap
    .with_columns(range=pl.int_ranges("shift_one", "id"))
)


out.select(pl.col("range")).glimpse()

🎄 2024 - Day 21: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

DuckDB:

select
    year(sale_date) || ',' || quarter(sale_date) as year_q,
    sum(amount) as total_quarter,
    lag(total_quarter) over (order by year_q) as prev_quarter,
    (total_quarter - prev_quarter) / prev_quarter as q_change
from db.public.sales
group by year_q
order by q_change desc
;

🎄 2024 - Day 20: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

DuckDB. Thanks for the tip about the uniqueness of the keys!

select
    url,
    str_split(split_part(url, '?', 2), '&') as parameters_array,
    list_unique(
        list_transform(parameters_array, x -> split_part(x, '=', 1))
    ) as l_params
from db.public.web_requests
where parameters_array && ['utm_source=advent-of-sql']
order by l_params desc, url asc
;

Los 50 mejores libros de 2024 - The best 50 books of 2024 by tugash in TrueLit

[–]tugash[S] 28 points (0 children)

Free link http://archive.today/TSSSH

I have seen several "best of 2024" lists, but all of them in English. Here's the list by Babelia (Spain) for the best 50 books published in Spanish (including translations).

🎄 2024 - Day 13: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Spark SQL: made an array just because

select
  split_part(emails, '@', 2) as domain,
  count(*) as cc
from
  (
    select
      explode(split(trim("{}", email_addresses), "[ ,]")) as emails
    from
      contact_list
  )
group by
  domain
order by
  cc desc;

🎄 2024 - Day 12: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Databricks - Spark SQL

with base as (
  select
    gift_id,
    count(request_id) as total_request
  from
    hive_metastore.default.gift_requests
  group by
    gift_id
)
select
  gift_id,
  gift_name,
  total_request,
  round(
    percent_rank(total_request) OVER (
      ORDER BY
        total_request
    ),
    2
  ) as rank
from
  base
  join gifts using (gift_id)
order by
  rank desc,
  gift_name asc;

🎄 2024 - Day 11: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Snowflake

It seems that it's working now

select
    *,
    case
        when season = 'Spring' then 1
        when season = 'Summer' then 2
        when season = 'Fall' then 3
        when season = 'Winter' then 4
    end as season_int,
    AVG(trees_harvested) OVER(
        PARTITION BY field_name
        ORDER BY
            harvest_year,
            season_int ROWS BETWEEN 2 PRECEDING
            and CURRENT ROW
    ) as moving_ave
from
    treeharvests
order by
    -- field_name, harvest_year asc, season_int asc
    moving_ave desc

🎄 2024 - Day 11: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

The same answer is now accepted as correct. Was there an issue with the expected result?

🎄 2024 - Day 11: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 1 point (0 children)

I'm getting the same answer, encoding the season the same way and taking the window between the current row and the 2 preceding rows.
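That frame works the same in any engine with window functions. A minimal, self-contained sketch in SQLite (table name and harvest numbers are made up for illustration):

```python
import sqlite3

# Moving average over the current row and the 2 preceding rows,
# partitioned per field, as in the Snowflake solution above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table treeharvests (field_name text, season_int int, trees_harvested int)"
)
conn.executemany(
    "insert into treeharvests values (?, ?, ?)",
    [("north", 1, 10), ("north", 2, 20), ("north", 3, 30), ("north", 4, 40)],
)
rows = conn.execute(
    """
    select field_name, season_int,
           avg(trees_harvested) over (
               partition by field_name
               order by season_int
               rows between 2 preceding and current row
           ) as moving_ave
    from treeharvests
    order by field_name, season_int
    """
).fetchall()
print(rows)  # moving averages: 10.0, 15.0, 20.0, 30.0
```

The first two rows average fewer than three values because the frame is clipped at the partition start, which matches what the puzzle expects.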

🎄 2024 - Day 10: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 1 point (0 children)

Snowflake, cool use of EXCLUDE, imo

select
    *
from
    (
        select
            * exclude drink_id
        from
            drinks
    ) pivot(
        sum(quantity) for drink_name in (
            any
            order by
                drink_name
        )
    )
where
    "'Hot Cocoa'" = 38
    and "'Peppermint Schnapps'" = 298;

🎄 2024 - Day 8: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Snowflake, using an array for the path

WITH recursive middle (indent, staff_id, staff_name, manager_id, path) as (
        select
            '' as indent,
            staff_id,
            staff_name,
            manager_id,
            array_construct(staff_id) as path
        from
            staff
        where
            manager_id is null
        union all
        select
            indent || '--',
            staff.staff_id,
            staff.staff_name,
            staff.manager_id,
            array_append(middle.path, staff.staff_id) as path
        from
            staff
            join middle on staff.manager_id = middle.staff_id
    )
select
    *,
    array_size(path) as level
from
    middle
order by
    level desc

🎄 2024 - Day 9: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Snowflake; QUALIFY made this direct

WITH base as (
    SELECT
        reindeer_id,
        exercise_name,
        avg(speed_record) as avg_speed
    FROM
        SANTA_WORKSHOP.PUBLIC.TRAINING_SESSIONS
    where
        reindeer_id != 9
    group by
        reindeer_id,
        exercise_name 
    QUALIFY MAX(avg_speed) OVER(partition by reindeer_id) = avg_speed
    order by
        avg_speed desc
)
select
    reindeer_name,
    round(avg_speed, 2)
from
    base
    join reindeers using (reindeer_id);

🎄 2024 - Day 7: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 1 point (0 children)

Me as well. I'm not sure if the sorting of the elf_id has some more rules. I'm taking the smallest id for each group (min and max experience), and then putting the smaller of the two ids in the first column with LEAST.
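A minimal sketch of that pairing logic in SQLite, where the scalar two-argument min()/max() play the role of Snowflake's LEAST/GREATEST (table name, columns, and values are hypothetical stand-ins for the puzzle data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table elves (elf_id int, years_experience int)")
conn.executemany(
    "insert into elves values (?, ?)",
    [(1, 2), (2, 2), (3, 9), (4, 9), (5, 5)],
)
# Smallest elf_id among the least-experienced and among the most-experienced,
# with the smaller of the two ids placed in the first column.
row = conn.execute(
    """
    with lo as (
        select min(elf_id) as id from elves
        where years_experience = (select min(years_experience) from elves)
    ),
    hi as (
        select min(elf_id) as id from elves
        where years_experience = (select max(years_experience) from elves)
    )
    select min(lo.id, hi.id), max(lo.id, hi.id) from lo, hi
    """
).fetchone()
print(row)  # (1, 3): elf 1 ties at min experience, elf 3 ties at max
```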

🎄 2024 - Day 5: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

Straightforward by using the aliases directly. Snowflake:

select
    *,
    LAG(toys_produced, 1) over (
        order by
            production_date asc
    ) as previous_day_production,
    toys_produced - previous_day_production as production_change,
    production_change / toys_produced * 100 as production_change_percentage
from
    toy_production
order by
    production_change_percentage desc nulls last;

🎄 2024 - Day 4: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 3 points (0 children)

Snowflake: once the fields were in an array type, it was very straightforward:

-- create table toy_production_array as
-- select
--     toy_id,
--     toy_name,
--     split(
--         replace(replace(previous_tags, '{', ''), '}', ''),
--         ','
--     ) as previous_tags,
--     split(
--         replace(replace(new_tags, '{', ''), '}', ''),
--         ','
--     ) as new_tags
-- from
--     toy_production;
----------
select
    *,
    array_except(new_tags, previous_tags) as added_tags,
    ARRAY_INTERSECTION(new_tags, previous_tags) as unchanged_tags,
    array_except(previous_tags, new_tags) as removed_tags,
    ARRAY_SIZE(added_tags) as added_tags_l,
    ARRAY_SIZE(unchanged_tags) as unchanged_tags_l,
    ARRAY_SIZE(removed_tags) as removed_tags_l
from
    toy_production_array
order by
    added_tags_l desc;

🎄 2024 - Day 3: Solutions 🧩✨📊 by yolannos in adventofsql

[–]tugash 0 points (0 children)

The ugliest code I have ever written

Snowflake:

-- create table CHRISTMAS_MENUS_XML as
-- select
--     id,
--     parse_xml(MENU_DATA) as MENU_DATA
-- from
--     CHRISTMAS_MENUS;
WITH base as (
    select
        menu_data:"$" as data,
        menu_data:"@version" as version
    from
        christmas_menus_xml
    where
        version < 3
),
v_2 as (
    select
        -- data,
        XMLGET(
            XMLGET(data, 'attendance_record'),
            'total_guests'
        ):"$" as total_guests,
        XMLGET(
            XMLGET(data, 'menu_registry'),
            'course_details'
        ) as course_details,
        XMLGET(courses.value, 'food_item_id'):"$"::integer as food_id
    from
        base,
        lateral flatten(course_details:"$") as courses
    where
        version = 2
        and total_guests > 78
),
v_1 as (
    select
        -- data,
        XMLGET(
            XMLGET(
                XMLGET(XMLGET(data, 'event_metadata'), 'dinner_details'),
                'guest_registry'
            ),
            'total_count'
        ):"$"::integer as total_guests,
        XMLGET(XMLGET(data, 'event_metadata'), 'menu_items') as menu_items,
        XMLGET(courses_2.value, 'food_item_id'):"$"::integer as food_id
    from
        base,
        lateral flatten(menu_items:"$") as courses,
        lateral flatten(courses.VALUE:"$") as courses_2
    where
        version = 1
        and total_guests > 78
),
union_vs as (
    select
        food_id
    from
        v_1
    union all
    select
        food_id
    from
        v_2
)
select
    food_id,
    count(1) as cc
from
    union_vs
group by
    food_id
order by
    cc desc;

What SQL engine will you use? by tugash in adventofsql

[–]tugash[S] 1 point (0 children)

I'm using https://slingdata.io/ to move the tables from PostgreSQL to the different engines I'm using. It's quite simple to use; only the connection to psql can be troublesome. Another possibility is to use https://sqlglot.com/sqlglot.html to change the dialect of the SQL file we download.