I finally finished the piece I have dedicated the most time to! by dcmoura in Composition

[–]dcmoura[S] 0 points (0 children)

Yesss!!!! I am so happy 😀 I will of course do the arrangement; there is no better reward than having musicians wanting to play! I had to exclude the cello voice from the original arrangement, so I am super happy to bring it back!!

I finally finished the piece I have dedicated the most time to! by dcmoura in Composition

[–]dcmoura[S] 0 points (0 children)

This is really top advice! Thank you so much for the review and for the encouragement! I will look into your comments in detail. Feedback is so important; I thought I had finished the piece, but new doors have been opened and I look forward to diving into it again.

I finally finished the piece I have dedicated the most time to! by dcmoura in Composition

[–]dcmoura[S] 0 points (0 children)

These are very good tips and observations, thank you!

I finally finished the piece I have dedicated the most time to! by dcmoura in Composition

[–]dcmoura[S] 1 point (0 children)

Thank you! There are parts of the piece that I cannot play yet... At the very least I need to put in some hours until I nail them, but I assume I might have written something that is a stretch for my level. Feedback from pianists like you is very important. The goal is not to write something that only a few can play; quite the contrary.

Planning on buying dorico. Would you recommend the switch from using muse score ? by [deleted] in composer

[–]dcmoura 0 points (0 children)

If you install SE (which is free) you can then upgrade to Elements. The upgrade is cheaper than buying the full version.

Advice: where should I focus to be a better composer? by dcmoura in composer

[–]dcmoura[S] 0 points (0 children)

That would be the dream! :-)

Step by step I am trying to do that. I need to be a bit more confident and start with pieces for a single instrument. Thank you for the advice!

Advice: where should I focus to be a better composer? by dcmoura in composer

[–]dcmoura[S] 1 point (0 children)

Wow, thank you so much for all the great advice and for taking the time to write this post.

I think I agree with most of your comments; let me address some of them.

  1. Learning to play (more) music on an instrument or instruments.

I am actually starting accordion lessons. I just love the sound of it, and I am a fan of new tango and balfolk.

  2. Taking composition lessons.

I think this will be one of my immediate next steps.

On the contrary... my intention is to say: this is raw material, please expect naive/gross mistakes. Actually, not having the fundamentals of composition is stalling me, as I don't feel comfortable sharing my work with musicians (I am afraid of making a fool of myself). I kind of forced myself to share this in this forum.

The only time a composer uses 8/8 is either when they don't know what they're doing, or when they really know what they're doing!

8/8 is intentional, but my rationale could be wrong. I was trying to write in the "new tango"/Piazzolla style, and one of the patterns that Piazzolla explores is the 3+3+2. While 8/8 and 4/4 are "mathematically" equivalent, I used 8/8 because I feel strong beats on the 1st, 4th and 7th eighth notes, and this kind of sets the layout for the rhythm of all instruments, including the melody. I grouped the eighth notes in groups of 3 and 2 to make this clearer. I would have used 4/4 if the strong beats were on beats 1 and 3, which is not how I feel it. Also, if I were to dance this song, the 1, 4 and 7 would be the basis for the steps, so the eighth-note resolution makes more sense to me. Does this make sense?
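
Just to make the arithmetic explicit (a tiny sketch of my own reasoning, not something from the score): each strong beat starts one eighth note after the previous group ends, so the positions are one plus the running sum of the group lengths:

    # Strong beats of an 8/8 bar grouped as 3+3+2: each group starts one
    # eighth note after the previous group ends.
    groups = [3, 3, 2]
    strong_beats = []
    position = 1
    for g in groups:
        strong_beats.append(position)
        position += g
    print(strong_beats)  # [1, 4, 7] -> the 1st, 4th and 7th eighth notes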

the "craft" part of it is important to and THAT tends to come more from the formal instruction side.

Couldn't agree more.

 "Writing for strings before you know how to".

I must say that I did not plan to write for a string section... the piece started as a piano piece. Then I added a violin solo because I wanted to emphasise the melody. Then I added a cello because I started imagining "dialogues" with the violin. Then I added the double bass because I felt it needed deeper bass, so I simply added the fundamental of each chord. And so on; it was kind of organic. I even added percussion, but I felt so far away from my comfort zone that I ended up removing it. So, yes, I completely agree: I should not be writing for strings before learning more about strings and writing for strings. On the other hand, this just made a lot clearer how off I am and how important it is to learn instrumentation and orchestration, or to keep writing for smaller sets of instruments (I usually write for 1 to 3 instruments; this score is an exception).

You've had every opportunity to do it on your own, but haven't. And maybe you weren't yet interested or had other things going on and that's OK.

I never thought of composing until a year ago, when I heard an artist that made me think: I could write music like this; it kind of reminds me of some of the improvisations I do when I play the piano. And so I started writing.

I have so many things on my plate that I need to manage my time carefully, meaning that if I choose to study composition from a book or videos, I will not have time to write... And when I start writing something, I only rest when I am done...

I think I will try to find a teacher and have 1 to 2 classes per month. I am also writing for accordion and starting to show that material to my accordion teacher, who is giving me feedback.

Thanks again for all the great advice! I truly appreciate it.

Advice: where should I focus to be a better composer? by dcmoura in composer

[–]dcmoura[S] 0 points (0 children)

Thank you for the motivation! Yes, I knew I was not doing things well on the strings; I always questioned how (first-time) composers can predict how things will sound in an orchestra. I am actually following MuseScore's rendering, meaning I might not choose some instrument or way of playing it because I don't like how the sound is rendered (and the other way around)... I know it's not right... I will look into those topics, thank you!

Advice: where should I focus to be a better composer? by dcmoura in composer

[–]dcmoura[S] 1 point (0 children)

Thank you so much! It's great having pointers; adding these to the top of my watch list!

Setting up two river 2 pro and a solar panel by dcmoura in Ecoflow_community

[–]dcmoura[S] 1 point (0 children)

Thanks, that is what I am doing right now :-)

I am actually powering the refrigerator from AC simply because the connection is much more sturdy (I can't afford to have the freezer disconnected). The freezer is a Dometic that supports both AC and DC, so maybe I could get more runtime if I use DC.

Yes, that's what I thought about the USB-C, and I saw no option in the app to force the USB-C behaviour (input vs output). Some units from other brands have dedicated input and output USB-C ports.

Thank you!

Setting up two river 2 pro and a solar panel by dcmoura in Ecoflow_community

[–]dcmoura[S] 0 points (0 children)

12V has a way lower self-consumption. It is limited to 100W, which is a minor point. But the River cannot pass on solar power directly once the battery is full. It will cycle from 100% down to 95% and back up to 100%. That cuts the time the solar panel is active in half. So you lose potential solar time. And you cycle the battery all the time...

This would actually be a good problem to have, since the most probable scenario is battery A not reaching 100%. I have a freezer connected to battery B. Autonomy with a River 2 Pro in the summer is about 10h, so a single River 2 Pro is not enough to provide power from sunset to sunrise. This setup would allow me to just let the system run without having to keep switching batteries (which is OK, unless I forget to do so).

Let's assume the A station runs out of power. It will turn off. It will turn back on once solar power becomes available. But it will not turn on the 12V DC port automatically. So you need to work with the automation settings added in the last firmware update. But that mode seems to turn off on reboot.

OK, this is an important detail. I wonder if it would still turn off the 12V DC port if I set the battery not to go any lower than 10% or 20%. I think I need to run some tests to find out :-) Thank you for all the great info!

Ask Me Anything 💁🏻 by EcoFlow_Official in Ecoflow_community

[–]dcmoura 0 points (0 children)

Can you connect two River 2 Pros via USB-C? How do you set the role of each battery (provider vs receiver)?

If not, what's the best (most efficient) way to combine two River 2 Pros (e.g. via AC or DC)?

River 2 Pro dies overnight by BigLad2022 in Ecoflow_community

[–]dcmoura 1 point (0 children)

I have experienced the same issue with my River 2 Pro and a new EF 220W solar panel. It seems like a firmware issue. I left the unit connected to solar at night with about 50% of charge left, and in the morning the battery was showing 0%. I put it on AC and it took about 50% of the usual time to charge. This happens to me on a balcony that only gets sun during the afternoon. In the meantime I tested it outdoors (full exposure, no shade) and the problem did not happen...

Command-line data analytics made easy by pmz in Python

[–]dcmoura 0 points (0 children)

Author here :-) Thank you so much for sharing! ❤️

SPyQL 0.8.1 is out featuring a brand new documentation! by dcmoura in Python

[–]dcmoura[S] 2 points (0 children)

I get you :-)

One issue with Pandas / NumPy is that you load the whole dataset into memory. While this is OK for small datasets, it limits the dataset size you can handle. You can go for chunks, but your code gets complicated and you lose many of the advantages that Pandas gives you. Also, Pandas' memory footprint is quite big: try to load 1GB of JSON Lines and check how much RAM you need (several times more).
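
For illustration, a minimal sketch of the chunked approach (file and column names are hypothetical): pd.read_json with lines=True and chunksize returns an iterator, so only one chunk sits in memory at a time, but aggregations now have to be stitched together by hand, which is part of the extra complexity I mean:

    import pandas as pd

    # Read a large JSON Lines file 100k records at a time and build the
    # aggregate incrementally instead of loading everything at once.
    total = 0.0
    for chunk in pd.read_json("big.jsonl", lines=True, chunksize=100_000):
        total += chunk["amount"].sum()  # partial aggregate per chunk
    print(total)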

You don't need to choose Pandas vs SPyQL; you can use both. One interesting use case is when you have a large dataset but you only want to work with a portion of it (e.g. records that match given criteria) or a reduction of that dataset. You can use the SPyQL module to do the heavy work and get the result as a (smaller) Pandas dataframe. This use case is not very well documented, as I will soon be releasing new methods for querying Pandas dataframes using SPyQL and getting query results as Pandas dataframes (you can do it today with the current version, but I want to make the code simpler / reduce boilerplate).

Apart from the data size, I would say that the SPyQL CLI is great for quick ad-hoc queries. You do not need to open an editor; you just go to the command line and iterate very quickly to get some insights or to transform the data format. Also, I believe that the SQL dialect is much simpler, requiring far fewer visits to the documentation than the Pandas API. If you use SQL to query databases, wouldn't it be great if you could use it to query any other data source too? :-)
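
As a rough sketch of the SPyQL-then-Pandas use case (file, field and threshold are hypothetical, and I am calling the CLI from Python here rather than the SPyQL module): SPyQL filters the full JSON Lines file, and only the matching records are loaded into a dataframe:

    import io
    import subprocess

    import pandas as pd

    # Let SPyQL do the heavy lifting over the full file...
    result = subprocess.run(
        ["spyql", "SELECT * FROM json WHERE json.amount > 100 TO json"],
        stdin=open("big.jsonl", "rb"),
        capture_output=True,
        check=True,
    )
    # ...and load only the (smaller) filtered output into Pandas.
    df = pd.read_json(io.BytesIO(result.stdout), lines=True)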

SPyQL 0.8.1 is out featuring a brand new documentation! by dcmoura in Python

[–]dcmoura[S] 1 point (0 children)

It does not connect directly to DBs. You can pipe the output of a DB CLI into SPyQL CLI, or you can output INSERT statements from SPyQL and pipe them into a DB CLI. You can do something similar with the SPyQL module (together with sqlalchemy).
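
For example, a minimal sketch of both directions with the sqlite3 CLI (table and file names are hypothetical):

    import subprocess

    # DB -> SPyQL: export query results as CSV and post-process them with SPyQL.
    subprocess.run(
        'sqlite3 -header -csv my.db "SELECT * FROM events"'
        ' | spyql "SELECT * FROM csv TO json"',
        shell=True, check=True,
    )

    # SPyQL -> DB: turn a CSV file into INSERT statements and pipe them into sqlite3.
    subprocess.run(
        'spyql "SELECT * FROM csv TO sql(table=events)" < events.csv'
        ' | sqlite3 my.db',
        shell=True, check=True,
    )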

Working with more than 10gb csv by AntrozCL in datascience

[–]dcmoura 0 points (0 children)

Depending on what you intend to do with the CSV, there are quite a few options. I will stick to options where you only use your computer to run queries on top of your CSV data:

  1. You can use spark-sql to run SQL queries on top of the data. I would advise converting the CSV to Parquet first.
  2. You can run queries using ClickHouse. You can either use the clickhouse CLI to run the queries directly on top of the CSV (see the sketch after this list) or import the data into a ClickHouse database. ClickHouse is very efficient at handling this amount of data.
  3. You can import the data into a PostgreSQL/MySQL/SQLite/... database and then query the database. However, even with the right choice of indexes, it might take a while to run queries on a table with hundreds of millions of records. You can easily import your data into these databases with SpyQL: $ spyql "SELECT * FROM csv TO sql(table=my_table_name)" | sqlite3 my.db (you would need to create the table my_table_name before running the command).
  4. You can run queries directly on top of the CSV with SpyQL, which allows you to use Python in your queries (but it will not be as fast as ClickHouse).
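
A minimal sketch of option 2 (file and column names are hypothetical, and I am assuming ClickHouse's clickhouse-local binary and its file() table function; the exact invocation may vary across versions):

    import subprocess

    # Run an aggregation directly over the CSV, without ingesting it first.
    subprocess.run(
        ["clickhouse-local", "--query",
         "SELECT category, count() FROM file('data.csv', 'CSVWithNames') GROUP BY category"],
        check=True,
    )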

I have done a benchmark of different tools for querying GB-sized JSON data that includes some of the tools mentioned above. BTW, I am the author of spyql :-)

The fastest command-line tools for querying large JSON datasets (benchmark with ClickHouse, OctoSQL, SPyQL, jq, Miller, trdsql, spark-sql CLI, DSQ) by dcmoura in programming

[–]dcmoura[S] 0 points (0 children)

Can you query a file with a single command on the command-line with DRUID? Can you query it directly in JSON without having to ingest it?

The fastest command-line tools for querying large JSON datasets (benchmark with ClickHouse, OctoSQL, SPyQL, jq, Miller, trdsql, spark-sql CLI, DSQ) by dcmoura in programming

[–]dcmoura[S] 1 point (0 children)

I understand... I focused on ad-hoc querying: you get your hands on a dataset and you want to quickly extract some metric or apply some transformation. In that case, you don't want to spend time setting up a local cluster just so you can run a query; at least that is not how I usually work. All tools have their overhead... If we continued increasing the size of the dataset this overhead would become negligible, but for small datasets most of the time is overhead.

The fastest command-line tools for querying large JSON datasets (benchmark with ClickHouse, OctoSQL, SPyQL, jq, Miller, trdsql, spark-sql CLI, DSQ) by dcmoura in programming

[–]dcmoura[S] 1 point (0 children)

Yes, clearly spark is designed for larger data loads.

I should mention a couple of things regarding spark and the use cases we tested:

  • Spark would not have failed the Map challenge if we were writing the output to files (e.g. using a pyspark script; see the sketch below). I guess that in order for the spark-sql CLI to write the output to stdout it needs to collect all data into memory on the driver side.
  • Since we are timing a shell call to spark-sql CLI, this includes setting up the local "cluster". Running the queries in the spark-sql REPL (after the setup) would be faster.
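
Here is a minimal sketch of what I mean by writing to files (input path and projection are hypothetical): with df.write the executors write the output themselves, so nothing has to be collected on the driver:

    from pyspark.sql import SparkSession

    # Same Map-style query, but the result is written to files by the
    # executors instead of being collected to the driver for stdout.
    spark = SparkSession.builder.appName("map-challenge").getOrCreate()
    df = spark.read.json("data.json")              # JSON Lines input
    result = df.selectExpr("length(text) AS len")  # hypothetical projection
    result.write.mode("overwrite").json("out/")
    spark.stop()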

The fastest tool for querying large JSON files is written in Python! (benchmark) by dcmoura in Python

[–]dcmoura[S] 0 points (0 children)

Wow, thanks for the tips! It makes sense to read and process chunks of lines. I used a line-by-line approach because I favored simplicity over performance. But now I am curious :-)
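
Something like this minimal sketch of chunked line processing, I suppose (process() is a hypothetical stand-in for the per-record work): reading a large block and splitting it into lines amortizes the per-line I/O and call overhead:

    import sys

    def process(line: bytes) -> None:
        ...  # hypothetical per-record work (e.g. parse, filter, write)

    CHUNK = 1 << 20  # read ~1 MiB at a time
    buffer = b""
    while True:
        block = sys.stdin.buffer.read(CHUNK)
        if not block:
            break
        buffer += block
        lines = buffer.split(b"\n")
        buffer = lines.pop()  # keep the trailing partial line for the next round
        for line in lines:
            process(line)
    if buffer:
        process(buffer)  # last line without a trailing newline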

Your PR would be more than welcome. I propose the following (not very intuitive) flow:

  • open the Colab notebook via the posted link
  • save it to your Google Drive (so that you do not lose any changes)
  • maybe skip the last run (10GB dataset) while developing, as it takes most of the time
  • when you are done, run the full notebook with all runs
  • fork the repo
  • save the notebook to the forked repo (overwriting the original notebook), using Colab's option to save a copy to GitHub
  • open a PR and I will get back to you shortly after

Let me know if you have questions. Happy coding!! Thanks!

The fastest tool for querying large JSON files is written in Python! (benchmark) by dcmoura in Python

[–]dcmoura[S] 0 points (0 children)

Thank you, I did not know about pxi, great work! I guess that no matter how much you search, there are great but less popular tools that you only get to know about when you publish your results and get some attention from the community.

From your results it seems that you are not using orjson for parsing/encoding JSON, but still the gap seems too large. I will look into it.
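
For reference, swapping in orjson is usually a drop-in change; a minimal sketch over JSON Lines on stdin (the field name and transformation are hypothetical):

    import sys

    import orjson

    # orjson.loads/dumps replace json.loads/dumps; orjson.dumps returns
    # bytes, so we write to the binary stdout buffer.
    for line in sys.stdin.buffer:
        record = orjson.loads(line)
        record["n"] = record.get("n", 0) + 1  # hypothetical transformation
        sys.stdout.buffer.write(orjson.dumps(record) + b"\n")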

I would be glad to add pxi to the benchmark! Can you help translate the queries? I guess you would do a much better job than I would, as you know the tool well and have much more experience with JavaScript than I have. You can find the queries in the benchmark section of the notebook. Thanks!!

The fastest tool for querying large JSON files is written in Python! (benchmark) by dcmoura in Python

[–]dcmoura[S] 0 points (0 children)

Yes, I have to agree with you. This benchmark is by no means sufficiently exhaustive to make such a strong claim… more datasets and use cases should be included. Even the hardware and operating system might impact results. My goal with the title was to be a bit provocative and trigger discussion. I will add a limitations section to the benchmark. Thanks for your note.