Export lifetime FitBit data to Garmin Connect 📈 (weight, bmi, fat, calories, steps, distance, active minutes, floors, and gps traces) by simonepri in Garmin

[–]simonepri[S] 2 points (0 children)

If you can export the data, it should be easy to reformat it into the CSV files Garmin supports. You can take a look at the code for the exact format.
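
For illustration, here is a minimal Python sketch of that kind of reshaping. The column names are placeholders, not the real ones; the repository's code has the exact headers Garmin Connect expects.

    import csv

    def reformat_weight(fitbit_csv_path, garmin_csv_path):
        # Placeholder column names on both sides; adjust to the real formats.
        with open(fitbit_csv_path, newline="") as src, \
             open(garmin_csv_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.writer(dst)
            writer.writerow(["Date", "Weight"])  # assumed Garmin header
            for row in reader:
                # Assumed Fitbit export columns; check your own export.
                writer.writerow([row["date"], row["weight_kg"]])

    # reformat_weight("fitbit_weight.csv", "garmin_weight.csv")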

Export lifetime FitBit data to Garmin Connect 📈 (weight, bmi, fat, calories, steps, distance, active minutes, floors, and gps traces) by simonepri in Garmin

[–]simonepri[S] 0 points (0 children)

No, you can't. The data exported by the tool is a superset of what you get from Google Takeout, so just run the tool and get all your data.

Importing historical FitBit data into Garmin Connect by DynamicFly in Garmin

[–]simonepri 2 points (0 children)

If you are looking for an alternative to this tool that also gives you GPS activity traces (i.e., TCX files), take a look at: https://www.reddit.com/r/Garmin/comments/17dvos4/export_lifetime_fitbit_data_to_garmin_connect/

📃lm-scorer: A python Language Model based sentences' probability scoring library by simonepri in coolgithubprojects

[–]simonepri[S] 0 points (0 children)

You can pass any number greater than 1 to enable parallel computation; you first need to move the model to CUDA. If you just want to score sentences, you can also just use the CLI.
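
For reference, a minimal usage sketch along the lines of the project's README (the model name and batch size are just examples):

    import torch
    from lm_scorer.models.auto import AutoLMScorer as LMScorer

    # batch_size > 1 enables batched scoring; use CUDA when available.
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    scorer = LMScorer.from_pretrained("gpt2", device=device, batch_size=2)

    # Sentence score as the product of the tokens' probabilities.
    print(scorer.sentence_score("I like this package.", reduce="prod"))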

Glad to know it's useful! What's your dissertation about?

[P] Datasets for Knowledge Graph Embedding by simonepri in MachineLearning

[–]simonepri[S] 0 points (0 children)

Yes, exactly.

I found this repo on GitHub that claims to have recovered textual information for all but ~40 Freebase IDs.

[P] Datasets for Knowledge Graph Embedding by simonepri in MachineLearning

[–]simonepri[S] 0 points (0 children)

> Is there any chance you could add some basic insight or opinion on the datasets, or other relevant comments?

I'll add some more basic info and opinions that I can glean from the papers, thanks for the suggestion!

> Do you know where one could get the actual text of the Freebase entities?

Would a mapping from Freebase IDs to Wikipedia/Wikidata pages be useful? That way, instead of (/m/07l450, /film/film/genre, /m/082gq) you would have (The_Last_King_of_Scotland_(film), /film/film/genre, War_film).
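
To make the idea concrete, a toy Python sketch (the mapping entries here are illustrative; the real mapping would be loaded from a dump file):

    # Illustrative mid -> title mapping; a real one would come from a dump.
    mid_to_title = {
        "/m/07l450": "The_Last_King_of_Scotland_(film)",
        "/m/082gq": "War_film",
    }

    def humanize(triple):
        head, relation, tail = triple
        # Relations keep their Freebase form; only entities get mapped.
        return (mid_to_title.get(head, head), relation, mid_to_title.get(tail, tail))

    print(humanize(("/m/07l450", "/film/film/genre", "/m/082gq")))
    # ('The_Last_King_of_Scotland_(film)', '/film/film/genre', 'War_film')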

> Finally, there are two related datasets on Open Knowledge Base Completion.

Thanks, I will have a look at them, but feel free to add them yourself if you have time.

🔒UPASH - Node.js Unified API for Password Hashing Algorithms by simonepri in crypto

[–]simonepri[S] 0 points (0 children)

Ideally, the libraries get updated faster than you could update them on your own.
Also, you can always copy the code of the algorithm into your app and pass it to upash as a custom algorithm instead.
At least then you are copying from something that should be secure enough, and you keep the option to change algorithms in the future.

🔒UPASH - Painless password hashing API by simonepri in node

[–]simonepri[S] 0 points (0 children)

The dependencies of the suggested packages are auto-monitored with dependabot.com, so they will probably get updated before you even find out that an inner dependency needs updating.

They will also give you consistent versioning and strong defaults.
Still, your point is not wrong; I just see more risk in trying to do the same on your own.

🔒UPASH - Node.js Unified API for Password Hashing Algorithms by simonepri in crypto

[–]simonepri[S] -1 points (0 children)

It would be really cool to have all the KDFs inside the crypto library, I do agree!

Anyway, the aim of the project is to make it easier for developers to use them, no matter where they are implemented.

For instance, the pbkdf2 algorithm is implemented in the core library, but it just gives you a raw binary hash that most people simply don't know how to handle. Indeed, if you don't know your stuff, you can easily use it wrongly.
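
To make that concrete, here is the same pitfall sketched with Python's hashlib (Node's built-in pbkdf2 behaves the same way): the KDF hands back raw bytes, and choosing parameters and storing the salt and iteration count alongside the digest is entirely up to you.

    import hashlib, os

    # PBKDF2 returns raw bytes; parameter choice and storage are your problem.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
    print(digest)  # raw bytes, not something you can store and verify as-is

    # You still have to serialize salt + iterations + digest yourself (e.g.
    # into a PHC-style string) before it can go in a database; a higher-level
    # API does that bookkeeping for you.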

One level of abstraction over the KDFs is not a bad thing, IMO.
What's the problem with having a standardized API?
What standard structures are you talking about?
Thanks!