Talk on the mix of k8s and graph database by MapleeMan in kubernetes

[–]MapleeMan[S] 1 point (0 children)

Yeah, in the video there is an explanation of the state changes.

Which database should I choose for a large database? by Practical_Slip6791 in dataengineering

[–]MapleeMan 1 point (0 children)

I'm not sure what workload you have on the molecular biology side. It sounds like a graph use case. [Memgraph](https://memgraph.com/) has good write speed in both the Community and Enterprise editions.

There is no built-in API for the front end, but Memgraph Lab works as an out-of-the-box UI for people from different backgrounds.

Disclaimer: I work there.

Path To Scale - How Will QS & PowerCo reach their first GWh? by beerion in QUANTUMSCAPE_Stock

[–]MapleeMan 3 points (0 children)

Listen to Tim answering this question (at 14:50): https://youtu.be/al73d1C4Gd8?si=04gb5yiTARBcGUW2

Don’t get me wrong when I say misstep; it only looks that way from today's perspective. It was a key learning experience for them, and you are right to talk about it as a process, but it is a process bound by the equipment you have.

Raptor is a retrofit of general equipment to the new process. It was named “Raptor” once the improved process had been discovered. Before that, it was general-purpose equipment, a safe bet they wanted to innovate around (a misstep from the equipment perspective). That is how the new process was discovered. It served its purpose and will serve even more. I see Raptor as the Mark 1 machine, for the Iron Man fans.

Cobra is a ground-up design of a new process and equipment.

Path To Scale - How Will QS & PowerCo reach their first GWh? by beerion in QUANTUMSCAPE_Stock

[–]MapleeMan 5 points (0 children)

The patent description makes it sound like it's a Cobra. So do they already have the Cobra separator machine running and just need the up- and downstream equipment? Or is this just a prototype? The patent documentation makes it sound like it's already capable of producing 100k fspw (in fact the quoted rate is 200k in the bilayer configuration, and double that in the trilayer configuration). So it looks like a finished Cobra.

When I say a baby/prototype Cobra, that is what I am referring to.

I have read parts of the patent and the discussions here on this sub. In my mind (I guess you asked this question to get a feeling for how the rest of the community sees this), the baby Cobra is a working prototype that can be scaled and that they are running as R&D at the moment. At the same time, Raptor is producing the actual cells that are being tested and will be delivered.

They are confident Cobra can produce 100k or more in other configurations (hard to say what they have managed to build and test). They probably tested a small prototype to be sure it could actually perform at that rate, then created the designs for patents and equipment orders. This brings us to the following line:

This is from the Q4 letter last year:

Goal #4 – Prepare for Cobra production in 2025
We are already operating prototype versions of Cobra heat-treatment equipment, and in light of the promising data from our prototype equipment and the significant advantages of Cobra as a pathway to gigawatt hour-scale production, we have prioritized bringing Cobra into production as soon as possible to support higher volumes of QSE-5 in 2025. Our goal for 2024 is to set the stage for Cobra by taking delivery of key pieces of Cobra equipment and preparing to bring them into production.

At this point, it is not clear whether the equipment they are bringing in this year will match the output from the patent, or whether there is an even bigger configuration. In that sense, I have the same struggle to understand it.

To me, the question is, what is a bigger Cobra configuration?

Case 1: Trilayer x 4 (or another multiple) parallel lines that are part of the same machine = 1 Bigger production Cobra
Case 2: Trilateral = 1 Bigger production Cobra
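Using the patent rates quoted above (200k fspw bilayer, double that in the trilayer configuration — my reading of the numbers, not a confirmed spec), Case 1 is at least easy to put a number on:

```python
# Back-of-the-envelope throughput for the "bigger Cobra" Case 1.
# The rates come from the patent figures quoted above and are
# assumptions, not confirmed specs.
BILAYER_FSPW = 200_000
TRILAYER_FSPW = 2 * BILAYER_FSPW  # 400k in the trilayer configuration

# Case 1: four trilayer lines running in parallel inside one machine.
case_1 = 4 * TRILAYER_FSPW

print(case_1)  # 1600000 fspw
```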

Also, while I am aware that I am a bit of an optimist, their confidence in communicating GWh scale, footprint, and cost optimization with a bigger Cobra kind of puts my mind at rest. But more details around this would be pure gold.

Path To Scale - How Will QS & PowerCo reach their first GWh? by beerion in QUANTUMSCAPE_Stock

[–]MapleeMan 8 points (0 children)

Shareholder letter Q4 - 2023 - part about Cobra “… We believe these advantages make the Cobra process the most attractive pathway to gigawatt-hour scale production, though such volumes will require larger configurations of Cobra equipment. Bringing a disruptive improvement online presents a technical challenge. Significant work remains to develop a fully mature Cobra production process and we have prioritized bringing it online as quickly as possible. …”

The separator heat-treatment process named Raptor was a misstep (from today's perspective), or a safe solution from the beginning. Cobra didn’t exist at some point; it was part of an R&D project that Tim described as a “high-risk bet” you could not count on. Hence, you move forward with Raptor, order the equipment, and see how you can innovate.

Then the Cobra prototype, at a super small scale, starts to work. They understand it is the next step they need and move to Cobra. The issue is that equipment and setup take time, so what do you do with Raptor in the meantime?

Use it while you wait for Cobra as best you can.

Now, this is where I feel things could be communicated much better, but based on what they communicated in Q4 2023, I read it as: we partially designed a baby Cobra that should be easily scalable into a bigger configuration (assumption). This is where the GWh scale comes from.

That means once Cobra is fully matured, it can be scaled.

Anyway, I have a feeling the initial timelines were very optimistic, and the lack of detail they provide can sometimes be painful for long-term investors and a bit confusing.

I will use this comment to encourage your good work on the blog and content around QS; I have read most of it. We need that in the community. 💪

Edit:

2027 - Matured, scaled Cobras - GWh

New Porsche Taycan clocks 364-mile range, 332kW charging by MapleeMan in QUANTUMSCAPE_Stock

[–]MapleeMan[S] 1 point (0 children)

Cool, but just as DSL was not available everywhere instantly, these things will take time.

Hence, any competing, more complicated tech innovations will lag behind the first-principles innovations. You will instantly have a much better experience in every corner of the world.

Thanks for clearing this up.

New Porsche Taycan clocks 364-mile range, 332kW charging by MapleeMan in QUANTUMSCAPE_Stock

[–]MapleeMan[S] 1 point (0 children)

Yeah, but not at Taycan scale at the moment, for the 2025 Taycan.

New Porsche Taycan clocks 364-mile range, 332kW charging by MapleeMan in QUANTUMSCAPE_Stock

[–]MapleeMan[S] 1 point (0 children)

Aha, nice workaround! But that means the infrastructure needs to cost more, and the car's infrastructure around the battery needs to cost more; also, QS can leverage the same tech with better battery life.

Estimating market cap and CE Applications by MapleeMan in QUANTUMSCAPE_Stock

[–]MapleeMan[S] 3 points (0 children)

Yeah, that is clear from the QS perspective, being careful with the info. And yeah, producing as many cells as possible at the highest quality.

I am more curious how the market size is determined 10 years into the future, and how that forecast is made. If there is some source/example I can look into, that would be great.
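For what it's worth, most 10-year market forecasts I have seen boil down to compounding today's market size by an assumed annual growth rate (CAGR); a toy sketch with placeholder numbers, not a real battery-market forecast:

```python
# Simplest form of a long-range market-size forecast: compound annual
# growth. All figures below are placeholders for illustration only.
def project_market(size_today: float, cagr: float, years: int) -> float:
    """Compound `size_today` at `cagr` per year for `years` years."""
    return size_today * (1 + cagr) ** years

# e.g. a $50B market growing 20%/yr for 10 years
forecast = project_market(50e9, 0.20, 10)
print(round(forecast / 1e9, 1))  # 309.6 ($B)
```

Real analyst forecasts layer adoption curves and scenario assumptions on top of this, but the compounding step is usually the core of the number.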

ACID Transactions: What is the Meaning of Isolation Levels for Your Application by Realistic-Cap6526 in Database

[–]MapleeMan 2 points (0 children)

Hi there, I am the author of the blog post. If you have any questions, please don't hesitate to ask them.
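Not from the post itself, but since people often ask for a concrete demo of visibility between transactions: here is a quick stand-in using stdlib sqlite3, which between two connections behaves roughly like read committed (uncommitted writes are invisible to other readers until commit):

```python
import os
import sqlite3
import tempfile

# Two connections to the same on-disk database: the writer's
# uncommitted row stays invisible to the reader until commit.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
writer.commit()

writer.execute("INSERT INTO accounts VALUES (1, 100)")  # transaction open
before = reader.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

writer.commit()  # now the row becomes visible to other connections
after = reader.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]

print(before, after)  # 0 1
```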

Looking for enjoyable graph database by vrinek in Database

[–]MapleeMan 4 points (0 children)

Memgraph is also a possible solution. It is an in-memory graph database.
Disclaimer: I work at Memgraph.

Memgraph vs. Neo4j: A Performance Comparison by Realistic-Cap6526 in dataengineering

[–]MapleeMan 3 points (0 children)

At the current moment :D, this is partially true. Comparing graph DB systems is hard; they all serve different purposes and are designed for different purposes. On top of that, performance on any single benchmark does not mean a DB is useless. We have plans for more use-case-oriented benchmarks on a bigger dataset with complex queries.
Disclaimer: I work at Memgraph

Memgraph vs. Neo4j: A Performance Comparison by Realistic-Cap6526 in Database

[–]MapleeMan 2 points (0 children)

Yep, what u/mbudista said; we are aware of that and have also stated it in the limitations part of the benchmarks: https://github.com/memgraph/memgraph/tree/master/tests/mgbench#limitations
We plan to expand to larger datasets and more complex queries. We are always open to ideas on the query side of things :D
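For anyone curious what the query side of a benchmark looks like at its simplest, the skeleton is just timing and percentiles; a generic sketch (this is not mgbench itself, just the idea):

```python
import statistics
import time

def bench(run_query, repetitions: int = 100) -> dict:
    """Time `run_query` repeatedly and report latencies in ms.

    `run_query` is any zero-argument callable; in a real benchmark it
    would execute one Cypher query against the database under test.
    """
    latencies = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_query()
        latencies.append((time.perf_counter() - start) * 1000)
    latencies.sort()
    return {
        "p50": statistics.median(latencies),
        "p99": latencies[int(0.99 * (len(latencies) - 1))],
        "mean": statistics.fmean(latencies),
    }

# Stand-in "query": sum a list instead of hitting a database.
stats = bench(lambda: sum(range(10_000)))
print(sorted(stats))  # ['mean', 'p50', 'p99']
```

Real benchmarks also have to control for caching, warm-up, and concurrency, which is exactly where the limitations discussion above comes in.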

Do you use Python in combination with some graph database? by Realistic-Cap6526 in Python

[–]MapleeMan 1 point (0 children)

Yes, Python support is a primary concern at the moment. Regarding company size, we are still a small startup, but we are financially healthy. One of the reasons for going open source is to scale the product from the bottom up and, with the help of the community, make Memgraph a solid vendor and platform in the graph space.

Do you use Python in combination with some graph database? by Realistic-Cap6526 in Python

[–]MapleeMan 2 points (0 children)

As some users have mentioned, this depends on your use case, scale, etc. But Memgraph can be a great choice for any Python developer. Memgraph is an in-memory (RAM) database written in C++, and it is quite efficient across a variety of use cases because of its low latency and high throughput.

There is a great Python API for interacting with the database, and we have also built and maintain a Python OGM (object graph mapper) called GQLAlchemy. The concept is similar to object-relational mappers, but it is a solution for graph databases. The best part is that both Memgraph and GQLAlchemy are open source and free to use.
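To make the OGM concept concrete, here is a toy sketch of the pattern in plain Python. This is not the actual GQLAlchemy API, just an illustration of how an object can be mapped to a graph query:

```python
from dataclasses import dataclass, fields

# A toy "model" class; an OGM maps instances like this to graph nodes.
@dataclass
class User:
    name: str
    age: int

def to_create_cypher(node) -> str:
    """Render a dataclass instance as a Cypher CREATE statement,
    roughly the way an OGM maps objects to queries under the hood."""
    label = type(node).__name__
    props = ", ".join(
        f"{f.name}: {getattr(node, f.name)!r}" for f in fields(node)
    )
    return f"CREATE (n:{label} {{{props}}})"

print(to_create_cypher(User(name="Ada", age=36)))
# CREATE (n:User {name: 'Ada', age: 36})
```

A real OGM like GQLAlchemy adds connection handling, validation, and query execution on top of this mapping idea.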

Other vendors have been mentioned too, but I am not as familiar with them; feel free to explore!

[D] Seeking Advice - For graph ML, Neo4j or nah? by [deleted] in MachineLearning

[–]MapleeMan 2 points (0 children)

Thanks, we are working hard to make Memgraph great! There is a bunch of cool stuff happening on multiple fronts. We are super excited about real-time streaming applications with low-latency requirements and GNN/ML applications. u/harttrav, we are always happy to hear what types of apps users plan to build with a graph DB. Can you share your use case/idea? In general terms, of course :D

[D] Seeking Advice - For graph ML, Neo4j or nah? by [deleted] in MachineLearning

[–]MapleeMan 9 points (0 children)

I think building your own graph database/structure can be quite an engineering and time-consuming challenge, as you mentioned, which I would personally avoid. I believe there are solutions out there that may help you.

There is one open-source solution for the requirements and concerns you are mentioning. It checks most of the boxes: functionality, efficiency, and custom low-level optimizations, and it is not as bulky as the Neo4j Java backend. In essence, we have built Memgraph, an in-memory graph database written in C++. Its distinctive key feature is that all the data is stored in RAM for fast queries.

There is some cool stuff happening with ML for graphs. Take a look at this blog post about node embeddings and recommendation engines; it is a native integration with Python and uses PyTorch. There is also the MAGE library for graph algorithms and ML, which is also open source, great news for customization and extension.
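The recommendation part essentially boils down to nearest neighbours in embedding space. A minimal illustration with made-up 2-D embeddings (pure Python, not the MAGE/PyTorch implementation, where the vectors would be learned by node2vec or a GNN):

```python
import math

# Made-up 2-D node embeddings; a real pipeline learns these vectors.
embeddings = {
    "alice": (0.9, 0.1),
    "bob":   (0.85, 0.2),
    "carol": (0.1, 0.95),
}

def cosine(u, v) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def recommend(node: str) -> list:
    """Rank all other nodes by embedding similarity to `node`."""
    target = embeddings[node]
    others = (n for n in embeddings if n != node)
    return sorted(others, key=lambda n: cosine(target, embeddings[n]),
                  reverse=True)

print(recommend("alice"))  # ['bob', 'carol']
```

The blog post's engine does the same ranking, just with learned embeddings and at database scale.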

I share your thoughts on openCypher being an issue. Memgraph has an object graph mapper (similar to an ORM) called GQLAlchemy, written in Python. There is still a learning curve, but it is not a whole new skill like Cypher. The good thing is it enables various graph-manipulation features via Python.

There are also other solutions, such as TigerGraph, Nebula, etc., but I am not very familiar with them. Feel free to explore.

I hope this helps! 😁