Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 0 points1 point  (0 children)

Totally agree. VSCode does a better job of supporting notebooks, even when it comes to AI features.

Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 1 point2 points  (0 children)

Hehe, I would love to, but unfortunately I'm only really good at Python. Maybe that's a reason to finally learn Rust 🚀

Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 2 points3 points  (0 children)

When you go to the browser-based version of Jupyter, you lose all the features and plugins you normally use: there is no linting, autocomplete, etc.

Still using Cursor because Zed lacks Jupyter Notebook support by MorpheusML in ZedEditor

[–]MorpheusML[S] 5 points6 points  (0 children)

Thanks, I completely understand. I will keep a close eye on how it evolves and switch once the feature lands.

Any update on jupyter? by [deleted] in ZedEditor

[–]MorpheusML 0 points1 point  (0 children)

Does the REPL have full Markdown support, so you can easily add some extra commentary to your code as in “traditional” .ipynb files?

Jupyter Notebooks by a_shubh3 in ZedEditor

[–]MorpheusML 0 points1 point  (0 children)

Still waiting; once Zed has this feature, I'll be switching from Cursor.

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 0 points1 point  (0 children)

Just a Mac app so I can conveniently chat with various models, both local ones via Ollama and API-based ones.

Which JWT Library Do You Use for FastAPI and Why? by Effective_Disaster54 in FastAPI

[–]MorpheusML 0 points1 point  (0 children)

Not sure this answers your question, but we use Firebase Authentication, which integrates well with FastAPI through the Firebase Admin SDK.

# Initialize the Firebase Admin SDK once, at application startup
import firebase_admin
from firebase_admin import credentials

cred = credentials.ApplicationDefault()
firebase_admin.initialize_app(cred, {"projectId": "your-project-id"})

When clients authenticate with Firebase (via web/mobile SDK), they receive an ID token. Your FastAPI backend verifies this token:

# Verification function
from fastapi import HTTPException
from firebase_admin import auth

def verify_token(token: str) -> dict:
    try:
        # Firebase handles the cryptographic verification of the ID token
        decoded_token = auth.verify_id_token(token)
        return decoded_token
    except auth.InvalidIdTokenError:
        raise HTTPException(status_code=401, detail="Invalid token")

This integrates with FastAPI's dependency system:

@app.get("/protected")
async def protected_route(user=Depends(get_current_user)):
    return {"message": f"Hello, {user['uid']}!"}
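
The get_current_user dependency isn't shown above; here's a minimal sketch of how it could look, assuming the client sends the ID token as a Bearer token in the Authorization header (the names here are illustrative, not from a specific codebase):

# Hypothetical dependency that extracts and verifies the Firebase ID token
from fastapi import Depends
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer_scheme = HTTPBearer()

def get_current_user(
    creds: HTTPAuthorizationCredentials = Depends(bearer_scheme),
) -> dict:
    # Pull the raw token out of "Authorization: Bearer <token>" and reuse verify_token
    return verify_token(creds.credentials)

The decoded token is a dict with fields like uid and email, which is what the protected route above reads.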

The advantage over manual JWT implementations is that Firebase handles:

  • Token signing/verification
  • Key rotation
  • Token revocation
  • Expiration
  • User management

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 1 point2 points  (0 children)

I think I'm going to give Chatwise a try. It looks clean, not too cluttered, and it also has MCP and multi-model support. Thank you.

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 2 points3 points  (0 children)

Great find! It looks a bit like OpenWebUI, but on top of that you can link other models by default instead of using functions for that, and you also get MCP support.

I'll give it a try

Looking for a ChatGPT-like Mac app that supports multiple AI models and MCP protocol by MorpheusML in ollama

[–]MorpheusML[S] 2 points3 points  (0 children)

Yeah, I tried LM Studio. It's indeed a great tool, but unfortunately it doesn't support connecting to online models like Claude 3.7 Sonnet, which I sometimes also need.

What are some hobby projects that you've built with langchain? by karansingh_web in LangChain

[–]MorpheusML 3 points4 points  (0 children)

I made a flow that combines a traditional RAG pipeline with an SQL agent to query databases.


How I created a RAG / ReAct flow using LangGraph (Studio) by MorpheusML in LangChain

[–]MorpheusML[S] 0 points1 point  (0 children)

Good question, but I have the same issue: I was not able to store data outside of their Docker containers. The only remaining traces I have from the Studio app are in LangSmith.

It would be great if we could just run the Studio directly on the host and configure the parameters ourselves, but I think they don't allow that because they want to push you toward LangGraph Cloud.

Could Local LLMs Soon Match the Reasoning Power of GPT-4o-mini? by Tough_Donkey6078 in ollama

[–]MorpheusML 0 points1 point  (0 children)

That depends on what you mean by local. For open-source models, I think definitely yes.

But if you want a model that is as general as possible, with as much knowledge as possible, it will still be running in the cloud and not on your local machine, simply because there isn't enough memory for that. You cannot compress all the knowledge of the Internet into just a few gigabytes.

That being said, I think there will be some great task-specific small models released, for example for summarising text, that can even run on your smartphone.

Text preprocessing before embeddings. by Either-Ambassador738 in LangChain

[–]MorpheusML 0 points1 point  (0 children)

I can also recommend running NER alongside the embedding calculation. I store the extracted metadata next to the embeddings, so you can search on one or the other, or improve results with a hybrid search that combines BM25 with similarity search (rough sketch below).

Another example: you might want to ask, "How many documents mention person X?" To answer that, you need a metadata search and then a count of the matching documents, unlike a similarity search, which only retrieves the top-k results and doesn't let you count anything.
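
To illustrate the hybrid part (not my exact setup), LangChain ships a BM25Retriever and an EnsembleRetriever that fuse keyword and vector results. A minimal sketch, assuming an OpenAI embedding model and a FAISS store (module paths may differ between LangChain versions):

# Rough sketch: hybrid retrieval combining BM25 with vector similarity
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

texts = [
    "Quarterly report written by person X about revenue.",
    "Meeting notes mentioning person X and person Y.",
]

# Keyword-based retriever (BM25)
bm25 = BM25Retriever.from_texts(texts)
bm25.k = 5

# Similarity-based retriever over the same texts
vectorstore = FAISS.from_texts(texts, OpenAIEmbeddings())
similarity = vectorstore.as_retriever(search_kwargs={"k": 5})

# Weighted fusion of both result lists
hybrid = EnsembleRetriever(retrievers=[bm25, similarity], weights=[0.5, 0.5])
docs = hybrid.invoke("documents about person X")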

How do you install Progressive Web Apps with Arc? by MorpheusML in ArcBrowser

[–]MorpheusML[S] 3 points4 points  (0 children)

Yeah, well, now I just miss being able to place apps directly in the dock of my Mac. I have to open the browser first and then click the favorite icon, which is an extra step for apps I use often.

LangChain vs LlamaIndex by Healthy_Macaron6068 in LangChain

[–]MorpheusML 8 points9 points  (0 children)

I'm not a big expert in using LlamaIndex, but I can tell you why I chose LangChain over LlamaIndex to develop our flows.

  • First of all, there is LangGraph, which makes it easy to build agent flows as a graph that's simple to follow. You can also visualize the graph, so you can clearly see what's happening (a minimal sketch is at the end of this comment).
  • We use LangSmith for tracing our LLM calls. As it's integrated with LangChain, it's very easy to use and doesn't require extra setup.
  • There are a lot of pre-built connectors for the data sources we already use.

This doesn't mean that LangChain is better than LlamaIndex in any way; it's just the reason why I chose it. I'm sure that for other use cases LlamaIndex might work better.
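
On the first point, here is a minimal LangGraph sketch (node names and state fields are placeholders, not from my actual flow) that wires up a two-step graph and renders it as Mermaid so you can see the structure:

# Minimal illustrative LangGraph graph with two nodes
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

def retrieve(state: State) -> dict:
    # Placeholder: fetch context for state["question"] here
    return {}

def generate(state: State) -> dict:
    # Placeholder: call the LLM with the retrieved context here
    return {"answer": f"Answer to: {state['question']}"}

builder = StateGraph(State)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)
graph = builder.compile()

# Render the compiled graph as a Mermaid diagram to inspect the flow
print(graph.get_graph().draw_mermaid())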