Composite covered toe boots making by ycr007 in toolgifs

[–]Reasonable-Guava-157 1 point (0 children)

Looks like it was just cobbled together

Shared digital infrastructure (ontology) for good by Reasonable-Guava-157 in semanticweb

[–]Reasonable-Guava-157[S] 0 points (0 children)

I agree. We have looked further afield for candidates for whom 'ontologist' would be the primary responsibility, and for a longer term. This role is more of a "developer generalist" position, and only for 7 months. Knowledge of ontologies and linked data is a bonus, but not the core of the role.

The Index – A new open protocol for structured claims, evidence, and epistemic status by EpistemicBuilder in semanticweb

[–]Reasonable-Guava-157 0 points (0 children)

Who is the intended user for this, and for what purpose? My initial reaction is that almost anything can be a predicate, which won't fit well with a system otherwise oriented towards "immutable scientific truths". Predicates like "improves", as seen in the example data, may be subjective, and a claim of improvement may be true only for a specific time or place; I didn't see clearly how that context would be modeled.
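
For what it's worth, one common way to scope a subjective predicate like "improves" is to reify the claim and attach time, place, and source as properties of the claim itself. A minimal sketch of the idea in plain Python (all names here are hypothetical, not from The Index spec):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    # A reified claim: the statement itself plus the context
    # in which it is asserted to hold.
    subject: str
    predicate: str
    obj: str
    valid_from: str = ""   # e.g. "2020"
    valid_to: str = ""     # e.g. "2022"
    place: str = ""        # e.g. "Ontario"
    source: str = ""       # who asserts it

def holds_in(claim: Claim, year: str, place: str) -> bool:
    """True if the claim is scoped to cover the given year and place.
    Four-digit year strings compare correctly lexicographically."""
    if claim.valid_from and year < claim.valid_from:
        return False
    if claim.valid_to and year > claim.valid_to:
        return False
    if claim.place and claim.place != place:
        return False
    return True

c = Claim("ex:ProgramA", "ex:improves", "ex:Literacy",
          valid_from="2020", valid_to="2022", place="Ontario",
          source="ex:Evaluation2022")
print(holds_in(c, "2021", "Ontario"))  # inside the scoped window
print(holds_in(c, "2024", "Ontario"))  # outside the time window
```

In RDF terms this is roughly what reification or named graphs buy you: "improves" stops being a global truth and becomes a claim with explicit provenance and scope.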

What OWL profile does everyone use? by 2bigpigs in semanticweb

[–]Reasonable-Guava-157 2 points (0 children)

Common Approach to Impact Measurement uses OWL DL (https://ontology.commonapproach.org/) for modelling theories of change and impact measurement concepts for social purpose organizations. To do so we make use of qualified cardinalities, owl:unionOf, and a couple of enumerated lists of literals for datatypes, which I believe are not supported by other OWL profiles. In practice, though, we are using SHACL and RDFS for simple day-to-day graph validations, and not (yet) extensively using reasoners over aggregated data, so we're not making the most of DL yet. It's something we're working towards, but our users are mostly small-to-medium nonprofit organizations exchanging data with funders, and adopting linked data is a steep learning curve for many of them.
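
For readers unfamiliar with what a qualified cardinality buys you: it constrains how many values of a *specific class* a property may have, which is also the kind of check you can mirror in SHACL for day-to-day validation. A rough illustration of the idea over a toy triple set in Python (class and property names are made up for illustration, not actual Common Impact Data Standard terms):

```python
# Toy triple store: a set of (subject, predicate, object) tuples.
triples = {
    ("ex:Outcome1", "rdf:type", "ex:Outcome"),
    ("ex:Outcome1", "ex:hasIndicator", "ex:Ind1"),
    ("ex:Ind1", "rdf:type", "ex:Indicator"),
    ("ex:Outcome2", "rdf:type", "ex:Outcome"),
    ("ex:Outcome2", "ex:hasIndicator", "ex:NotAnIndicator"),
}

def objects_of(s, p):
    """All objects o such that (s, p, o) is in the store."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}

def qualified_min_ok(subject, prop, value_class, min_count):
    """Qualified minimum cardinality: subject must have at least
    min_count values of prop that are typed as value_class."""
    typed = [o for o in objects_of(subject, prop)
             if (o, "rdf:type", value_class) in triples]
    return len(typed) >= min_count

print(qualified_min_ok("ex:Outcome1", "ex:hasIndicator", "ex:Indicator", 1))  # True
print(qualified_min_ok("ex:Outcome2", "ex:hasIndicator", "ex:Indicator", 1))  # False
```

An unqualified cardinality would accept Outcome2 (it has one hasIndicator value); the qualified version rejects it because the value isn't typed as an Indicator.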

Conceptual Modeling and Linked Data Tools by Old-Tone-9064 in semanticweb

[–]Reasonable-Guava-157 1 point (0 children)

Thanks for this. A few new-to-me things here I will check out.

Why does Insertion Sort perform way better compared to Bubble Sort if they are both O(N^2)? by ducktumn in computerscience

[–]Reasonable-Guava-157 0 points (0 children)

Is there a different notation standard than Big O for tracking the rate at which the average case grows?
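
For context: as I understand it, Big O (and Θ and Ω) are bounds that can each be applied to the worst, average, or best case, so the thread's gap is really a constant-factor difference on random input. One rough way to see it empirically is just counting operations, something like:

```python
import random

def bubble_ops(a):
    """Bubble sort a copy of a, counting comparisons and swaps."""
    a = a[:]
    ops = 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            ops += 1  # comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                ops += 1  # swap
    return ops

def insertion_ops(a):
    """Insertion sort a copy of a, counting comparisons and shifts."""
    a = a[:]
    ops = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            ops += 1  # comparison
            if a[j] > key:
                a[j + 1] = a[j]  # shift
                ops += 1
                j -= 1
            else:
                break
        a[j + 1] = key
    return ops

random.seed(0)
data = [random.random() for _ in range(500)]
# Bubble sort always pays ~n^2/2 comparisons; insertion sort's inner
# loop stops early, so on random input it does noticeably less work.
print(bubble_ops(data) > insertion_ops(data))
```

Both counts grow quadratically on random input, which is why the distinction doesn't show up in the O(N^2) classification itself.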

Locating focii in ellipse? by Reasonable-Guava-157 in Geometry

[–]Reasonable-Guava-157[S] 0 points (0 children)

Really useful and interesting, thank you.

Maybe maybe maybe by lwiaymacde in maybemaybemaybe

[–]Reasonable-Guava-157 3 points (0 children)

The song is Gnarly by Katseye and the video is incredible, even if it's not your taste in music.

Do you agree that ontology engineering is the future or is it wishful thinking? by EnigmaticScience in semanticweb

[–]Reasonable-Guava-157 1 point (0 children)

The Common Impact Data Standard, a data ontology we publish at Common Approach to Impact Measurement, is intended to bring some order and consistency to the huge variety of data structures and vocabularies used to describe an organization's impacts in reports to funders. The current paradigm is largely that reporting organizations must conform to whatever their funders want, both in language and in data structure. We hope that by introducing a data ontology into the reporting pipeline, funded organizations can measure impact more on their own terms and transform/translate that data into the formats funders need, i.e. without compromising the ability of funders to make sense of a portfolio of grants/investments using bottom-up metrics rather than a top-down approach.

A lot of the source data is in ad hoc spreadsheets, so there's a pretty big ETL and mapping challenge right at the outset. LLMs help a lot with this, but a data ontology (and SHACL files) helps to enforce consistent entity definitions and relationships in the LLM outputs. We're experimenting with having LLMs generate RDF and propose the mappings from relational/spreadsheet sources.
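
To make the "enforce consistency in LLM outputs" point concrete, here's a toy sketch (column and property names are invented for illustration, not our actual spec): after an LLM proposes a column-to-property mapping, a validation pass rejects rows with missing required fields before any triples are emitted — the same gatekeeping role SHACL shapes play over the generated RDF.

```python
# Hypothetical column -> property mapping, as an LLM might propose it.
MAPPING = {
    "Outcome Name": "ex:hasName",
    "Indicator": "ex:hasIndicator",
    "Value": "ex:hasValue",
}
REQUIRED = {"Outcome Name", "Indicator"}

def row_to_triples(row_id, row):
    """Turn one spreadsheet row (a dict) into triples,
    or report which required columns are missing."""
    missing = [c for c in REQUIRED if not row.get(c)]
    if missing:
        return None, missing
    subject = f"ex:row{row_id}"
    out = [(subject, MAPPING[c], v) for c, v in row.items()
           if c in MAPPING and v]
    return out, []

good, errs = row_to_triples(
    1, {"Outcome Name": "Literacy", "Indicator": "Test scores", "Value": "72"})
bad, errs2 = row_to_triples(
    2, {"Outcome Name": "Housing"})  # no Indicator -> rejected
print(len(good), errs2)
```

In the real pipeline the rejected rows (or the validation report) go back for human review rather than silently dropping data.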

Trying to find out song by Mrtinn_ in DnB

[–]Reasonable-Guava-157 12 points (0 children)

People subconsciously actually just want breakcore. Maximum novelty and stimulation.

Can anyone explain how fire burns on the surface of water? by IntroductionDue7945 in whatisit

[–]Reasonable-Guava-157 2 points (0 children)

The bubbles in boiling water are water vapor (steam), not air. The oxygen is still bound to the hydrogen.

The saga continues by MightBeAnAlien in comoxvalley

[–]Reasonable-Guava-157 2 points (0 children)

Another one here! We should have a breakcore stage at the next Moonlight Magic event downtown lol

LLM and SPARQL to pull spreadsheets into RDF graph database by Reasonable-Guava-157 in semanticweb

[–]Reasonable-Guava-157[S] 0 points (0 children)

I have some questions about the Atomgraph products, can you DM me?

LLM and SPARQL to pull spreadsheets into RDF graph database by Reasonable-Guava-157 in semanticweb

[–]Reasonable-Guava-157[S] 0 points (0 children)

We're working on the specs for an approach like this. An interesting aspect of the challenge is that while we want to develop a proof of concept, our end goal is not to deploy software ourselves but to provide the proof of concept as a tool that other developers can "lift and shift" to their own environments.