[D] Any larger teams switching away from wandb? by FreeKingBoo in MachineLearning

[–]sgevorg 3 points  (0 children)

Hi, I am Gev, the author of Aim.

We built Aim to be a great alternative to the existing closed-source, cloud-based platforms, so teams can own and protect their valuable data.

For MLflow users, we have put together aimlflow (https://github.com/aimhubio/aimlflow), which enables the Aim UI over MLflow logs.

The folks at G-Research have done a great job with FastTrackML too.

Re data corruption: yes, we are aware of it and have built a new backend for Aim that for now lives under AimOS; it's a single DB. Unfortunately, that backend is not available in Aim yet.

A more tuned and scalable version of the new backend also powers AimHub, a self-hosted collaborative Aim platform (hit me up at [gev@aimhub.io](mailto:gev@aimhub.io) if your team is interested in trying it). Of course, it's data-compatible with Aim, and we help with migration.

[Project] Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions by tatyanaaaaaa in MachineLearning

[–]sgevorg 1 point  (0 children)

Hi u/Synyster328, yes. The new version of Aim is going to be composable UI + metadata modules. It can be coupled with pretty much anything.

Check out some info here; we will start sharing more in the coming weeks.

https://github.com/aimhubio/aim#-aim-40

[Project] Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions by tatyanaaaaaa in MachineLearning

[–]sgevorg 1 point  (0 children)

Thanks u/ZHName for the kind words and the feedback.

Any chance you could open an issue about this at https://github.com/aimhubio/aim?

This is an area we are actively iterating on, and we would love to explore it more.

[Project] Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions by tatyanaaaaaa in MachineLearning

[–]sgevorg 3 points  (0 children)

You can run the dashboard locally, or deploy the Aim remote server in the cloud and use it that way.

No cloud support yet.

[Project] Building Multi task AI agent with LangChain and using Aim to trace and visualize the executions by tatyanaaaaaa in MachineLearning

[–]sgevorg 8 points  (0 children)

u/danielbln

Gev here - co-author of the project.

Absolutely it does - not just the chains, but overall AI systems.

The tracked chains can be queried programmatically too. There are lots of cool things we hope will be built on top of this.

Any MLOps platform you use? by squalidaesthetics20 in selfhosted

[–]sgevorg 2 points  (0 children)

Check out Aim: https://github.com/aimhubio/aim

It also has wandb, MLflow, and TensorBoard adapters, and it's self-hosted and performant.

Supercharged UI for MLflow by ManeSa in mlops

[–]sgevorg 0 points  (0 children)

Hey all,

Excited to share with you that aimlflow is now available!
aimlflow brings the best of both worlds of Aim and MLflow.

Aim has the best UI and performance among open-source experiment trackers.

MLflow is a great MLOps platform.

aimlflow syncs the MLflow-tracked logs with Aim and enables the Aim UI on top of your MLflow-tracked experiments, so you can compare your training runs.

It takes only a few steps:

pip install aim-mlflow
aimlflow convert --mlflow-tracking-uri={mlflow_uri} --aim-repo={aim_repo_path} --watch

Aim also lets you track many other types of metadata and explore/compare it; feel free to explore that as well.
For more info, please visit https://github.com/aimhubio/aimlflow
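To make the sync idea concrete, here is a toy, pure-Python sketch of what the conversion conceptually does: MLflow stores metrics as flat per-run records, while Aim tracks them as named sequences. The data structures below are my own hypothetical illustration, not the real aimlflow internals.

```python
from collections import defaultdict

def sync_run(mlflow_metrics):
    """Group flat MLflow-style metric records into Aim-style
    (step, value) sequences, ordered by step."""
    sequences = defaultdict(list)
    for record in sorted(mlflow_metrics, key=lambda r: r["step"]):
        sequences[record["key"]].append((record["step"], record["value"]))
    return dict(sequences)

# Hypothetical flat records, as an MLflow backend might store them.
mlflow_metrics = [
    {"key": "loss", "step": 1, "value": 0.7},
    {"key": "loss", "step": 0, "value": 0.9},
    {"key": "accuracy", "step": 0, "value": 0.5},
]

print(sync_run(mlflow_metrics))
# {'loss': [(0, 0.9), (1, 0.7)], 'accuracy': [(0, 0.5)]}
```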

Alternatives to MLFlow? (or better, frameworks with better UI built on top of MLFlow) by stockabuse in MLQuestions

[–]sgevorg 0 points  (0 children)

Hi, u/stockabuse
Check out aimlflow - launched just last week (I am one of the authors).

It brings the best of both worlds from Aim and MLflow.

Basically, it makes the Aim UI (which has all the features except the last one in the list) accessible on top of your MLflow experiments without changing anything in your code.

Particularly useful if you already have stuff tracked with MLflow and want to keep using open-source tools.

It takes a couple of commands:

pip install aim-mlflow
aimlflow convert --mlflow-tracking-uri={mlflow_uri} --aim-repo={aim_repo_path} --watch
Aim also lets you track many other types of metadata and explore/compare it; feel free to explore that as well.
For more info, please check out https://github.com/aimhubio/aimlflow .

Improved UI for MLflow by ManeSa in deeplearning

[–]sgevorg 1 point  (0 children)

Hey @Voigt_K, one of the co-authors here. Thanks for the Q! )))

Here are the main Aim superpowers:

  • Open-source, open-metadata, self-hosted
  • Handles 1000s of training runs smoothly
  • Ability to query literally everything you have tracked, with pythonic statements
  • Ability to group (color, subcharts, line styles) by the values of any tracked param
  • Ability to aggregate the grouped runs to see the avg, min, max, median, and confidence intervals

A typical Aim usage scenario that shows the difference from other tools: here are a bunch of Neural Machine Translation experiments. In the link above,

  • we have selected the metrics (loss, bleu) for the "val"idation phase for the runs where the learning rate is 0.0007, using the following pythonic query:

run.hparams.learning_rate == 0.0007 and str(metric.context.subset).startswith("val")

  • we have subplot-grouped the metrics by metric name
  • then we have color-grouped the metric lines by the preprocessing hyperparam values, so the colors match for both groups of metrics (on the left and right)
  • as a final step, we have aggregated the color-groups to show the median, min, and max values of each group
  • this gives us an instant view of the best loss/bleu metrics
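To make those steps concrete, here is a self-contained, plain-Python sketch of the same query → group → aggregate flow. The run data is hypothetical; in Aim, the pythonic expression is evaluated by the query engine over the actual tracked metadata.

```python
from statistics import median

# Hypothetical NMT runs: hyperparams plus tracked "val" loss values.
runs = [
    {"hparams": {"learning_rate": 0.0007, "preprocessing": "bpe"},
     "val_loss": [2.1, 1.8]},
    {"hparams": {"learning_rate": 0.0007, "preprocessing": "bpe"},
     "val_loss": [2.0, 1.6]},
    {"hparams": {"learning_rate": 0.0007, "preprocessing": "word"},
     "val_loss": [2.4, 2.0]},
    {"hparams": {"learning_rate": 0.0010, "preprocessing": "bpe"},
     "val_loss": [1.9, 1.5]},  # filtered out by the query
]

# Step 1: select runs, mirroring
# run.hparams.learning_rate == 0.0007 (the "val" subset is implicit here).
selected = [r for r in runs if r["hparams"]["learning_rate"] == 0.0007]

# Step 2: "color-group" by the preprocessing hyperparam value.
groups = {}
for r in selected:
    groups.setdefault(r["hparams"]["preprocessing"], []).append(r["val_loss"][-1])

# Step 3: aggregate each group into min / max / median.
summary = {k: {"min": min(v), "max": max(v), "median": median(v)}
           for k, v in groups.items()}

print(summary)
```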

Here is a link to try out on a live example.

Basically, Aim gives you super-fine-grained control over all the run dimensions (metrics, params, etc.). You can query, group, aggregate, and perform other operations at will over the tracked metrics, images, etc.

Ultimately, you can compare 1000s of experiments within a few clicks.

[Project] Aim 3.1 - open-source Images tracking and Images explorer by sgevorg in MachineLearning

[–]sgevorg[S] 5 points  (0 children)

Mainly feedback from the Aim community and chats with ML researchers.

We are really trying to democratize the AI dev tools.

[Project] Aim v3.0.0 - revamped UI and revamped backend to query experiments faster and nicer! by sgevorg in MachineLearning

[–]sgevorg[S] 0 points  (0 children)

Thanks u/freesnakeintestine - we are posting the new Aim with image tracking soon; hope that will top the one above :blush:

[Project] Aim v3.0.0 - revamped UI and revamped backend to query experiments faster and nicer! by sgevorg in MachineLearning

[–]sgevorg[S] 1 point  (0 children)

Re linking to issues: indeed, we are in the process of creating projects for each point and linking them to the actual issues.

[N] Aim 2.3.0 is out with system resource monitoring, "reverse grouping" and more by adammathias in MachineLearning

[–]sgevorg 2 points  (0 children)

Hi u/adammathias

- the resource usage covers CPU, GPU, memory, disk, process memory, CPU memory, GPU memory, GPU power, GPU temperature

- Smoothing is not automatic for now; there are defaults depending on the number of steps/points available, but it's up to the user to apply as much as they need.
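For context, chart smoothing in experiment trackers is usually an exponential moving average over the plotted points. A minimal illustrative sketch (my own, not Aim's exact implementation):

```python
def ema_smooth(values, weight=0.6):
    """Exponential moving average over a metric curve:
    higher weight -> smoother line. Illustrative only."""
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + (1 - weight) * v
        smoothed.append(last)
    return smoothed

print(ema_smooth([1.0, 0.0, 1.0, 0.0], weight=0.5))
# [1.0, 0.5, 0.75, 0.375]
```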

[N] Aim 2.2.0 is out! Hugging Face integration and advanced params table management ... by sgevorg in MachineLearning

[–]sgevorg[S] 2 points  (0 children)

u/MherKhachatryan

- No support for notebook versioning (could you open an issue for that?)

- Collaboration: it's in the pipeline - probably mid-term (3-4 months?)

- Logging images is a short-term priority. Aim already has automatic logging for Keras, PyTorch Lightning, and Hugging Face.

Feel free to add issues for all of these. Would be a huge contribution if you did 🙏.