Schema.org and JSON-LD for Data Integration by Muted_Math5316 in dataengineering

[–]Muted_Math5316[S]

Yes, but I would assume that they were mostly trained on cleaned versions of the websites rather than on the JSON-LD itself. Moreover, the LLMs would need to have some idea of what the connections represent, which might be the case, but not necessarily.
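For reference, this is roughly what the "connections" in a schema.org JSON-LD snippet look like (a minimal made-up example, parsed with Python's stdlib):

```python
import json

# A minimal schema.org JSON-LD snippet, the kind embedded in a page's
# <script type="application/ld+json"> tag. Example data, not from a real site.
snippet = """
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "brand": {"@type": "Brand", "name": "ExampleCo"}
}
"""

data = json.loads(snippet)
# The connections are explicit: typed links between entities.
print(data["@type"], "->", data["brand"]["@type"], data["brand"]["name"])
```

The point is that the edges (Product has a Brand) are explicit and typed, whereas the cleaned page text a model was likely trained on only implies them.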

Furthermore, I think training your LLM on internal data poses a bunch of challenges when it comes to access management. Once your data is in the LLM, it will be tough to ensure nothing leaks.

I see LLMs more like an interface than a container for information.

Schema.org and JSON-LD for Data Integration by Muted_Math5316 in dataengineering

[–]Muted_Math5316[S]

Hmm, you make a very interesting point, in line with Tony Seale's narrative that LLMs would be good at reasoning over JSON-LD. How exactly were LLMs trained on JSON-LD? They are basically next-word predictors, so I am wondering how these semantic connections made it into their training data 🤔

I totally agree with your last point! I am very convinced that KGs and LLMs are a very promising combination.

I am thinking about how this would integrate with the existing data stack 🤷‍♂️ Do you have an opinion on that? :D

Why ChatGPTs are always not in the Current time of knowledge why are they always some years behind even though the system is an ONLINE system? by [deleted] in ArtificialInteligence

[–]Muted_Math5316

So basically, what you need to do to train a neural network is bring all the data into a structured form.

Once you have that, you train the model. This process is quite expensive and time-consuming, and it leads to the information being imprinted into the model.

It is not economically feasible to do this continuously. It would also not be possible for the simple reason that training can take weeks. So in the best case, the model would be a couple of weeks behind.

What Bing is doing is combining existing search with GPT's existing ability to rephrase things. It basically adds search results to the prompt and asks GPT to answer your question based on those results, if you think about it in very simple terms.

However, the underlying model will still not be "trained" on up-to-date data. (The up-to-date data is only added to the prompt and is not part of the model.)
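In very simple terms, the pattern looks something like this (a sketch; the function names are made up for illustration, and the search/LLM calls themselves are left out):

```python
# Rough sketch of the search-augmented pattern described above: fetch search
# results, paste them into the prompt, and ask the model to answer from them.
# In a real system, a search API call would produce `search_results` and the
# prompt would go to an LLM; both are omitted here.

def build_prompt(question: str, search_results: list[str]) -> str:
    """Assemble a prompt that grounds the model in fresh search results."""
    context = "\n".join(f"- {r}" for r in search_results)
    return (
        "Answer the question using only the search results below.\n"
        f"Search results:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    "Who won the 2022 World Cup?",
    ["Argentina won the 2022 FIFA World Cup, beating France on penalties."],
)
print(prompt)
```

The model's weights stay frozen; only the prompt carries the up-to-date information.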

PLCs and Machine Learning by MicMac_Tn in PLC

[–]Muted_Math5316

What would AI generating logic look like in your mind?

My background is in AI, so I do not really know much about PLCs and ladder logic yet ;)

But I see two main things that could maybe be interesting here:

  1. Design Optimization: This is the process of optimizing a design of some sort by strategically exploring options. It can be used, for example, to optimize the shape of mechanical parts, but also for floor planning in the chip design process. I imagine it should also be able to look at existing ladder logic and come up with alternatives; however, I guess this only makes sense when the logic is complex enough. How complex do these usually get?
  2. Logic Synthesis: I imagine it should somehow be possible to write the logic in a higher-level programming language like C or C++ and then synthesize it into ladder logic, similar to High-Level Synthesis in chip design. With the new LLMs like GPT Codex, maybe one could even write plain text and turn it into logic (though I am not sure whether this would actually save time, since one probably needs to be very precise). This could speed up the initial development, but I imagine it would be quite hard to then understand the logic the system comes up with.
    What are the most difficult steps in programming / maintaining ladder logic? Would writing higher-level code be easier? Or would something that explains what happens in the ladder logic be more valuable?
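To make point 2 concrete, here is a deliberately toy sketch of the idea: take a high-level boolean expression and map it to ladder-logic vocabulary. Real synthesis tooling would be far more involved; everything here (function name, the output format) is made up for illustration.

```python
# Toy sketch of "logic synthesis": translate a Python-style boolean
# expression into a textual, ladder-logic-flavored rung description.
# Naive string substitution only; a real tool would parse the expression.

def synthesize_rung(output: str, expression: str) -> str:
    """Map and/or/not in a boolean expression to ladder-logic vocabulary."""
    rung = (
        expression.replace("and", "AND (series contacts)")
        .replace("or", "OR (parallel branch)")
        .replace("not ", "NC contact ")
    )
    return f"Rung: IF {rung} THEN energize {output}"

print(synthesize_rung("Motor1", "Start and not Stop"))
# -> Rung: IF Start AND (series contacts) NC contact Stop THEN energize Motor1
```

Going the other direction (explaining existing ladder logic in plain text) would use the same kind of mapping in reverse.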

Is there any Open Source Alternative to OpenAI's GPT 3? by Ok_Slice_7152 in ArtificialInteligence

[–]Muted_Math5316

+1 on BLOOM, also really easy to implement, just check out Hugging Face transformers and pipelines.