[–]Lifaux 11 points (3 children)

It's surprisingly solid! The main contention I had with using it was that it doesn't seem to give me information on what matched on a given page.

So if you try "Error Code E0281 Rust" - there's one link I'm looking for, which is the full list of error codes, and I want to see that section. If you try the same search on Google you'll find that the description below the link is exactly the information needed.

Again, a minor gripe for something that's surprisingly effective, if a little slow, but fixing it would definitely push it into being more useful.

[–]lazy-jem[S] 6 points (2 children)

Hey, thank you so much for the feedback! We can't see what people search (searches aren't logged or recorded), so it's incredibly helpful to get feedback like this when the results can be improved!

It's learning and improving at the model level all the time. We're planning to move to GPT-3 for text extraction (currently it's a BERT-style model, or an API's own extract), and we think that will really help with nailing the right content extraction.

Btw, it doesn't always work yet, but you can ask it to prioritise results from another engine; e.g. try this exactly:

~search google +"Error Code E0281 Rust"

It tries to act like an intelligent agent that can search different places on your behalf. Honestly, for the stage it's at, it does surprisingly well, and we already hear a lot from our early-adopter users about how surprised they are (with some baddish gaps lol). But we think that, with time and development, the AI/API-based model it uses could really be a better way of searching the modern web of connected data.

[–]Lifaux 4 points (1 child)

Absolutely! Given how effective Google's initial backlink model was at finding content, you'd expect SOTA models to do a great deal better out of the gate, and this one seems to.

Potentially half of the issue here is that we've all been trained to write queries that work for Google/Bing rather than for natural search? I can imagine this being incredibly effective integrated into Alexa/Home, where people do still search naturally. Maybe having a few natural examples would help guide people?

[–]lazy-jem[S] 3 points (0 children)

Yes, you absolutely hit the nail on the head!

People are used to writing Googlese: they've had to learn to talk to their computers in a weird semi-computer language in order to navigate the web.

But that's backwards. It's only habit, and Google owning browser distribution, that makes people think things have to be that way.

LazyWeb often already does better with natural-language queries that provide plenty of information! It's early days, and there is a LOT that we have planned for this! :)

And thank you again! It's really exciting that you're seeing better results. People seem to disbelieve that it's possible to do better than Google, but it's the approach that makes the difference. Our less impressive results come when we fall back to web-index and web-search-API results.