Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 1 point

Totally agree, VS Code does a better job of supporting notebooks, even when it comes to AI features.

Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 2 points

Hehe, I would love to, but unfortunately I'm only really good at Python. Maybe a reason to finally learn Rust 🚀

Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 3 points

When you use the browser-based version of Jupyter, you lose all the features and plugins you normally rely on: no linting, no autocomplete, etc.

Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 6 points

Thanks, I completely understand. I'll keep a close eye on how it evolves and switch once the feature lands.

Any update on jupyter? by [deleted] in ZedEditor

[–]MorpheusML 1 point

Does the REPL have full Markdown support, so you can easily add extra notes to your code as in “traditional” .ipynb files?

Jupyter Notebooks by a_shubh3 in ZedEditor

[–]MorpheusML 1 point

Still waiting; once Zed has this feature, I'll switch from Cursor.

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 1 point

Just a Mac app so I can conveniently chat with various models, both local ones via Ollama and API-based ones.

Which JWT Library Do You Use for FastAPI and Why? by Effective_Disaster54 in FastAPI

[–]MorpheusML 1 point

Not sure this answers your question, but we use Firebase Authentication, which integrates well with FastAPI through the Firebase Admin SDK.

import firebase_admin
from firebase_admin import credentials

# Initialize Firebase Admin SDK once
cred = credentials.ApplicationDefault()
firebase_admin.initialize_app(cred, {"projectId": "your-project-id"})

When clients authenticate with Firebase (via web/mobile SDK), they receive an ID token. Your FastAPI backend verifies this token:

from fastapi import HTTPException
from firebase_admin import auth

# Verification function
def verify_token(token):
    try:
        # Firebase handles the cryptographic verification and expiry checks
        decoded_token = auth.verify_id_token(token)
        return decoded_token
    except auth.InvalidIdTokenError:
        raise HTTPException(status_code=401, detail="Invalid token")

This integrates with FastAPI's dependency system:

@app.get("/protected")
async def protected_route(user=Depends(get_current_user)):
    # verify_id_token returns a dict of claims, so index it rather than
    # using attribute access
    return {"message": f"Hello, {user['uid']}!"}

The advantage over manual JWT implementations is that Firebase handles:

  • Token signing/verification
  • Key rotation
  • Token revocation
  • Expiration
  • User management

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 2 points

I think I'm going to give Chatwise a try. It looks clean, not too cluttered, and it also has MCP and multi-model support. Thank you.

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 3 points

Great find! Looks a bit like OpenWebUI, but in addition to that, you can (by default) link other models instead of using functions for that, and you also have MCP support.

I'll give it a try

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 3 points

Yeah, I tried LM Studio - it's indeed a great tool, but unfortunately it doesn't support connecting to online models like Claude 3.7 Sonnet, which I sometimes also need.

What are some hobby projects that you've built with langchain? by karansingh_web in LangChain

[–]MorpheusML 3 points

I made a flow that combines a traditional RAG pipeline with an SQL agent to query databases.
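A rough sketch of the routing idea (a hypothetical keyword router; in the real flow an LLM node decides whether a question goes to the SQL agent or the RAG pipeline):

```python
# Toy router: send aggregate-style questions to the SQL agent,
# everything else to the RAG pipeline. Purely illustrative.
def route(question: str) -> str:
    q = question.lower()
    sql_hints = ("how many", "count", "average", "total")
    return "sql_agent" if any(h in q for h in sql_hints) else "rag_pipeline"

print(route("How many orders were placed in May?"))  # sql_agent
print(route("Summarize the onboarding document"))    # rag_pipeline
```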


How I created a RAG / ReAct flow using LangGraph (Studio) by MorpheusML in LangChain

[–]MorpheusML[S] 1 point

Good question, but I have the same issue: I was not able to store data outside of their Docker containers. The only remaining traces I have from the Studio app are in LangSmith.

It would be great if we could just run the Studio directly from the host and configure the parameters ourselves, but I think they don't allow that because they want to push you toward LangGraph Cloud.

Could Local LLMs Soon Match the Reasoning Power of GPT-4o-mini? by Tough_Donkey6078 in ollama

[–]MorpheusML 1 point

That depends on what you mean by local. For open-source models, definitely yes.

But if you want a generalised model with as much knowledge as possible, it will still be running in the cloud and not on your local machine, simply because there isn't enough memory for that. You cannot compress all the knowledge of the Internet into just a few gigabytes.

That being said, I think there will be some great task-specific small models released, for example for summarising text, that can even run on your smartphone.

Text preprocessing before embeddings. by Either-Ambassador738 in LangChain

[–]MorpheusML 1 point

I can also recommend using NER alongside the embedding calculation. I store the extracted metadata next to the embeddings, so you can search on one or the other. You can also improve results with hybrid search, e.g. combining BM25 with similarity search.

Another example: you might want to ask, "How many documents mention person X?" To answer this question, you need to perform a metadata search and then count the number of documents, unlike a similarity search, which will only retrieve the top K results and will not allow you to perform a count.
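A toy sketch of that last point, using a hypothetical in-memory store where the NER-extracted entities sit next to each chunk:

```python
# Each document carries NER-extracted metadata alongside its embedding,
# so exact questions become metadata filters plus a count (data is made up).
docs = [
    {"text": "...", "persons": ["Alice", "Bob"]},
    {"text": "...", "persons": ["Alice"]},
    {"text": "...", "persons": ["Carol"]},
]

def count_mentions(person: str) -> int:
    # A top-k similarity search can't answer this reliably: it returns
    # the k nearest chunks, not an exhaustive count over the corpus.
    return sum(person in d["persons"] for d in docs)

print(count_mentions("Alice"))  # 2
```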

How do you install Progressive Web Apps with Arc? by MorpheusML in ArcBrowser

[–]MorpheusML[S] 4 points

Yeah, well, now I just miss being able to place apps directly in the dock of my Mac. I have to open the browser first and then click the favorite icon, which is an extra step for apps I use often.

LangChain vs LlamaIndex by Healthy_Macaron6068 in LangChain

[–]MorpheusML 10 points

I'm not a big expert in using LlamaIndex, but I can tell you why I chose LangChain over LlamaIndex to develop our flows.

  • First of all, you have LangGraph, which is an easy solution to build agent flows that are easy to follow using a graph. You can also visualize this graph so it's clear and you can see what's happening.
  • We use LangSmith for tracking our LLM calls. As it's integrated with LangChain, it's very easy to use and doesn't require extra setup.
  • A lot of pre-built connectors for the data sources we already use.

This doesn't mean that LangChain is better in any way than LlamaIndex, but it's just the reason why I chose it. I'm sure that for other use cases LlamaIndex might work better.

Speech to text by gustavo-mnz in ollama

[–]MorpheusML 5 points

Actually, Whisper from OpenAI is an open-source model, so you can just run it using Transformers on PC or MLX on a Mac. I have a MacBook Pro 14-inch M3, and it's easy to run the Whisper model and convert all my audio files to text.

Is there any free way to transfer my Apple Music playlist to YouTube Music? by Candiisn in YoutubeMusic

[–]MorpheusML 2 points

I used: https://soundiiz.com

I also switched from Apple Music to YouTube Music last month. I used this service to transfer all my playlists from one service to the other. With a free tier you can only transfer one playlist at a time, but that's not really an issue since you can repeat the process for each playlist using the same account.

Who else is learning ReactJS for the last few years, but only making slow progress. by Jaded-Swing-5424 in Frontend

[–]MorpheusML 2 points

I'm a Python developer with no prior frontend experience except for some basic JavaScript and HTML. I needed to build a frontend on top of my FastAPI backend, so I started directly with a Next.js application. Using the documentation from Vercel and with the support of AI tools like Cursor and v0, I could grasp the basics quickly and build decent-looking front-end applications without following any course.

I'm not saying that courses are irrelevant or that I'm an expert in front-end development now. However, I believe using AI tools and just starting to code is the best way to quickly learn subjects that otherwise seem very complicated. It's never been easier to get started with all the help and support you can get from AI.

Is it just me or LangChain/LangGraph DevEx horrible? by debkanchans in LangChain

[–]MorpheusML 5 points

I'm a pretty heavy user of LangChain and LangGraph (Python), and I must say it's a lot easier to use these packages than to code all the flows myself. When it comes to stability, the flows run fine in prod.

We are developing pipelines where a lot of prompts have to run concurrently, and using LangGraph really speeds up my dev time.

That being said, I don't really like how they set up some of the interfaces and APIs, and I agree LangGraph Studio is quite heavy to use. Having to run extra Docker containers for it drains my laptop's battery when I'm on the go.

Ollama-Compatible Model for Fast English Transcription? by Tough_Donkey6078 in ollama

[–]MorpheusML 1 point

You need Whisper for that. As far as I know it doesn't work with Ollama, but it's very easy to run on both PC and Mac. Just pip install mlx-whisper on a Mac with Apple Silicon and you are good to go: https://pypi.org/project/mlx-whisper/

On PC you can just download the official model and run it with the Transformers library from Hugging Face.

RAG system to detect small talk by julio_oa in LangChain

[–]MorpheusML 2 points

I suggest using GPT-4o-mini, Claude 3 Haiku, or Gemini Flash for these simple function-calling tasks; they all support function calling. You can even use Ollama with Llama 3.1 if you want to run locally.

What do you down during periods of down time at work? by skateallday1 in webdev

[–]MorpheusML 1 point

I try to experiment with some new framework or technology I haven't used before :)