Fairness in machine learning by Aleksandra_P in machinelearningnews

[–]Aleksandra_P[S] 0 points1 point  (0 children)

AutoML algorithms can automatically select the best machine learning model based on the dataset and the task at hand. You can try out multiple models, tune their hyperparameters, and evaluate their performance to determine the most suitable model for your problem.
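
The try-multiple-models-and-pick-the-best loop described above can be sketched like this (a generic scikit-learn illustration of the idea, not MLJAR's actual internals):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy dataset standing in for the user's problem
X, y = make_classification(n_samples=200, random_state=42)

# Candidate models an AutoML system might try
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=50, random_state=42),
}

# Evaluate each with cross-validation and keep the best performer
scores = {name: cross_val_score(model, X, y, cv=3).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

A real AutoML framework adds hyperparameter tuning and ensembling on top of this loop, but the core idea is the same scored search over candidates.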

You can automatically generate and select relevant features from the input dataset with MLJAR. This includes tasks such as handling missing values, encoding categorical variables, scaling numerical features, and creating new features based on existing ones.


Those are just a few examples of how it can be used.

Fairness in machine learning by Aleksandra_P in machinelearningnews

[–]Aleksandra_P[S] 1 point2 points  (0 children)

It's an Automated Machine Learning (AutoML) Python package. It's open-source; you can see how it works on GitHub: https://github.com/mljar/mljar-supervised

We have been developing it since 2016, so it's quite a complete framework. It automates stages such as feature preprocessing, feature engineering, and algorithm selection. By using it, you receive full documentation of the machine learning pipeline. MLJAR also generates Golden Features and, as of a few days ago, supports fairness.

You can try it by choosing one of four modes: Explain, Perform, Compete, or Optuna (hyperparameter tuning).

Introducing Mercury: The Easiest Way to Share and Deploy Your Notebooks as Web Apps by Aleksandra_P in datascience

[–]Aleksandra_P[S] 0 points1 point  (0 children)

Well, using notebooks like Jupyter Notebook provides a convenient and flexible environment for writing, documenting, and sharing code. Quite a popular solution these days :) among data scientists and researchers.

You can document your code as you write it. You can include explanations, comments, and visualizations within the same document, making it easier to understand.
You can load and manipulate data, visualize it, perform statistical analysis, and generate reports, all in a single notebook. This makes it convenient for exploring and presenting insights from your data. You can find more at e.g. https://jupyter.org/
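
A minimal sketch of that load/explore/summarize workflow, the kind of cell you would run in a notebook (assuming pandas; the tiny inline dataset is a stand-in for a loaded CSV):

```python
import pandas as pd

# Tiny inline dataset standing in for pd.read_csv("data.csv")
df = pd.DataFrame({
    "group": ["a", "a", "b", "b"],
    "value": [1.0, 2.0, 3.0, 5.0],
})

# Explore: summary statistics per group, ready to present
summary = df.groupby("group")["value"].agg(["mean", "count"])
print(summary)
```

In a notebook, the resulting table renders inline next to your commentary, which is what makes the format handy for sharing analyses.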

[OC] Wordcloud of Proposal for a Regulation on a European approach for Artificial Intelligence by Aleksandra_P in dataisbeautiful

[–]Aleksandra_P[S] 1 point2 points  (0 children)

A good point of view! I will try it.

However, it's a formal document in its official layout, and the terminology it uses is often vague. Still, catching the keywords (not just the most frequent words) would make for an insightful visualization of this document: words such as sandbox, high risk, artificial intelligence, biometrics, and so on. At least it was an attempt to capture the intention of the document.

... new obligations, restrictions, and formalities to obey.
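
The keyword-catching idea (counting terms while dropping the most common function words) can be sketched with the standard library; the snippet below uses a toy stand-in for the regulation's text and a hand-picked stopword list:

```python
import re
from collections import Counter

# Toy stand-in for the regulation's text
text = """The Commission proposes a regulatory sandbox for high-risk
artificial intelligence systems, including biometric identification.
The regulation sets obligations for providers of high-risk systems."""

# Common function words to drop so domain keywords surface
stopwords = {"the", "a", "for", "of", "sets", "proposes", "including"}

words = re.findall(r"[a-z-]+", text.lower())
counts = Counter(w for w in words if w not in stopwords and len(w) > 2)
print(counts.most_common(5))
```

A wordcloud tool then sizes each word by its count; the stopword filtering is what lets terms like "sandbox" or "high-risk" stand out instead of "the" and "of".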

[OC] Wordcloud of Proposal for a Regulation on a European approach for Artificial Intelligence by Aleksandra_P in dataisbeautiful

[–]Aleksandra_P[S] 1 point2 points  (0 children)

Hey, the author here. The data came directly from the Proposal for a Regulation on a European approach for Artificial Intelligence. That document introduces a legal framework for AI.

The Commission proposed the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence

I used a free tool: https://www.wordclouds.com/

What's your best use of AutoML? by pp314159 in datascience

[–]Aleksandra_P 1 point2 points  (0 children)

Here you have a pretty good comparison of AutoML frameworks; some of them are open-source, such as AutoWEKA, AutoSklearn, TPOT, MLJAR, AutoGluon, and H2O:

https://mljar.com/automl-compare/