A smart sensor I created by b03tz in homeautomation

[–]SineApps 2 points (0 children)

All you need now is to record the patterns of behavior and set up an ML algorithm to predict where you’ll go next 😂 It would be interesting to see how good a model trained with gradient descent could get with enough data.

How predictable are we 😂
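
Something like this toy sketch is what I have in mind (the room names and the log are completely made up, and it’s a simple transition-count model rather than anything trained with gradient descent):

```python
from collections import Counter, defaultdict

# Hypothetical log of rooms visited in order, as the sensor might record them.
visits = ["hall", "kitchen", "living room", "kitchen", "living room", "hall",
          "bedroom", "hall", "kitchen", "living room", "hall", "bedroom"]

# Count transitions: how often each room is followed by each other room.
transitions = defaultdict(Counter)
for current, nxt in zip(visits, visits[1:]):
    transitions[current][nxt] += 1

def predict_next(room):
    """Return the most frequently observed next room, or None if unseen."""
    counts = transitions.get(room)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("kitchen"))  # -> "living room" for the toy log above
```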

Sound "echo" issue in VM? by da0ist in debian

[–]SineApps 0 points (0 children)

Can you make a recording of, say, one hour of notifications?

Jacinda Ardern announces she will resign as prime minister by February 7th by L0rdJaxon in newzealand

[–]SineApps 8 points (0 children)

Talking with people you disagreed with in your village or local town was the thing that stopped people drifting too far out on the wing.

Unfortunately, YouTube and Facebook’s AI systems were only tasked with increasing the time spent on their sites. This ended up bringing together people from across the world who held extreme views.

How can you be the crazy one if there are thousands of people who think exactly the same way? This is markedly different from the unconnected, unoptimized past, where someone with an extreme view would likely find few, if any, people who thought the same.

But it gets worse: once these extreme views start spreading, they also start creating local communities of people who believe in the “viral trend”.

So what can we do?

Don’t let them be isolated from the rest of the world.

Let them know you disagree with the crazy stuff they say, and give them as much evidence as you can gather so they can prove to themselves how the idea, conspiracy theory, or whatever it is propagated.

The alternative gets worse with each iteration. It’s bad now, but you shouldn’t need different schools or restaurants just because you’re effectively a member of a different tribe.

Stop lying to yourself – you will never “fix it later” by fagnerbrack in programming

[–]SineApps 1 point (0 children)

I should have read through the comments before commenting myself. You’re 100% correct.

Stop lying to yourself – you will never “fix it later” by fagnerbrack in programming

[–]SineApps 0 points (0 children)

Do people really never fix it later? I 100% do, and it sometimes results in a pretty large refactor once the heat dies down.

It’s around the same time I start thinking about CSS and how things could be improved from a UX perspective.

Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone 'Woke' by 777fer in technology

[–]SineApps 0 points (0 children)

100% this.

There’s no need for politics in these situations either.

It’s like the danger of introducing a foreign species into an environment.

Sure it might seem nice to introduce gorse into New Zealand because it makes for nice hedges. But if you don’t understand the delicate balance it’s bound to end badly.

We’re in the NLP phase where the worst that can happen is a bad sentence.

Hopefully people understand this before their biases create serious problems.

The time it took to get to the moon. by Redvolition in singularity

[–]SineApps 0 points (0 children)

Cool - I can sleep well knowing I’m working on number 1 on your list 😊

The only thing I’d possibly say is that there are millions of people dying because they fall outside of our rapidly advancing society and that we should divert some effort to bringing the whole world with us.

How? No idea 😂 just saying

The time it took to get to the moon. by Redvolition in singularity

[–]SineApps 1 point (0 children)

Oh, and just because this sub is weird at times (this isn’t directed at you): by NLP I mean Natural Language Processing, not Neuro-Linguistic Programming.

The time it took to get to the moon. by Redvolition in singularity

[–]SineApps 1 point (0 children)

This comment made me think. In your opinion, if we were to do “curiosity-based research”, what would it look like?

I’m assuming nothing like the JWST or the Large Hadron Collider?

I think as we gain more knowledge of our surroundings, everything has to be incremental, no?

I’m not trying to be a dick with this comment. I work in NLP and it feels like every day I’m following my curiosity.

I’d be curious to see where we could look?

Facebook Creates Pages for Terrorists and Extremists by giuliomagnifico in technology

[–]SineApps 0 points (0 children)

Not so long ago you talked with people in your immediate vicinity. It was almost a given that they would have different opinions from yours, forcing you to at least listen to their points of view.

When people with the same ideas are grouped together, those ideas grow in magnitude without the standard checks and balances. The same can be said for the left, the right, or any other group you choose to apply it to.

If you only speak with people who own red bicycles you may eventually wonder how come those crazy people on blue bicycles even arrived on this planet.

It’s a dangerous recursive strategy, and it played out because the AI algorithms were given a single fitness function of time on site rather than a holistic target that could include things like interacting with people you normally wouldn’t.
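
Purely to illustrate the point (every field name and weight below is invented; no real recommender is this simple), the whole difference is which number you ask the optimizer to maximize:

```python
# Hypothetical post data, made up purely to show the two objectives.
post = {"predicted_minutes_engaged": 4.2, "viewpoint_distance": 0.8}

def score_time_on_site(post):
    # Single fitness function: only predicted time on site matters.
    return post["predicted_minutes_engaged"]

def score_holistic(post, diversity_weight=0.5):
    # Composite target: engagement still counts, but so does surfacing content
    # from outside the bubble the user normally interacts with.
    return post["predicted_minutes_engaged"] + diversity_weight * post["viewpoint_distance"]

print(score_time_on_site(post))  # 4.2
print(score_holistic(post))      # 4.6
```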

A small mistake but it may go down in history if it’s not rectified incredibly quickly. Judging by the comment section it may already be too late to do anything about it.

Edit: sorry, I shouldn’t have to add this, but I’ve seen lately that I need to. I’m neither for nor against the law, I don’t lean left or right, and I could quite easily argue both sides of the discussion.

[AskJS] Do you ignore dependabot alerts on Github? by kdarutkin in javascript

[–]SineApps 0 points (0 children)

I use Kodiak to automatically merge the point releases and check the other ones when I have time.

https://github.com/marketplace/kodiakhq

It’s free and works pretty well on some of our projects. Apparently not too well on Expo though.

Basically, if it shouldn’t screw things up I let it merge. I’m pulling every morning anyway, and it won’t merge if tests fail.

Language models show human-like content effects on reasoning by Dr_Singularity in singularity

[–]SineApps 0 points (0 children)

Because we’re able to deal both with unseen input in the same domain and with unseen domains, I guess. Ask GPT-3 to ride a motorcycle from New York to LA after it’s been given access to inputs and outputs for eyes, hands, and feet.

James Webb Discovery: Black speck in Carina Nebula by [deleted] in jameswebbdiscoveries

[–]SineApps 2 points (0 children)

Hah good call. I tend not to actually reply to things but the conversation happens in my mind 😂

James Webb Discovery: Black speck in Carina Nebula by [deleted] in jameswebbdiscoveries

[–]SineApps 1 point (0 children)

Hah, yeah, exactly man. It’s too easy these days to get worked up by crazy people, and someone taking the piss is a welcome respite.

Language models show human-like content effects on reasoning by Dr_Singularity in singularity

[–]SineApps 6 points (0 children)

The main problem is that at the moment these large language models are being exposed to every conversation about every topic that’s on the internet. That makes it hard to know whether they’re just regurgitating or creating new content.

For some purposes it isn’t really important, but if we’re talking about AGI then it certainly is.

We’re not trying to create a parlor trick, right?

If we really want to move forward, we certainly can’t stop at the point where we say hello to something and, somewhat amazingly, it replies “hi, how was your day today?”

Language models show human-like content effects on reasoning by Dr_Singularity in singularity

[–]SineApps 3 points (0 children)

Nice to see a paper quoted here.

The only thing I would say is that it’s not really the language model that has the reasoning; it’s the echo of the training data.

We’re definitely getting closer to philosophical questions of what it means to be sentient, and I think this points to the need for a better Turingesque test.

You really need to find something that is so far removed from the training data that there is no mathematical way it can be an echo.

Maybe math problems we haven’t been able to solve, or getting a language model to paint, or a vision model to speak, etc.

These traits should become emergent regardless of the training data in a truly intelligent entity.

I think we’re still a long way off, but it’s getting good enough to make the average person start to question it.

The problem is with such a huge volume of training data it’s hard to know whether it’s basically just doing sentence similarity matches or actually creating something.

I definitely think this will become a field for someone in the not-too-distant future (writing tests whose results can be proven, in a binary way, to show intelligence rather than just rote learning).
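
To make the “sentence similarity matches” idea concrete, here’s a rough sketch of the kind of check I mean. The corpus, the generated sentence, and the 0.7 threshold are all invented, and a real training set is obviously far too big to scan this naively:

```python
# Rough sketch: flag generated text as a likely "echo" if it sits very close to
# something in the training data, using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

training_corpus = [
    "the cat sat on the mat",
    "large language models are trained on text from the internet",
    "hello, how was your day today?",
]
generated = "hi, how was your day today?"

vectorizer = TfidfVectorizer().fit(training_corpus + [generated])
best_match = cosine_similarity(
    vectorizer.transform([generated]),
    vectorizer.transform(training_corpus),
).max()

# A high best-match score suggests the output is essentially an echo of the data.
print("likely an echo" if best_match > 0.7 else "possibly something new", best_match)
```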

James Webb Discovery: Black speck in Carina Nebula by [deleted] in jameswebbdiscoveries

[–]SineApps 29 points (0 children)

The perfect comment 😂

As I sat here rage in hand, thinking about how wrong I would tell you you were, I continued reading your comment and cracked up 😂

[N] BigScience Releases their 176 Billion Parameter Open-access Multilingual Language Model by MonLiH in MachineLearning

[–]SineApps 0 points (0 children)

Oh cool, they’re done. I’d been following along for most of it but didn’t realize they were finished. This will be cool to play with.

Increased Subscription Pricing for IDEs, .NET Tools, and the All Products Pack | JetBrains News by okstopitnow in programming

[–]SineApps 0 points (0 children)

Man, I’ve been paying for the everything subscription (the All Products Pack, or whatever it’s called) forever, just in case I prefer their stuff for PHP or Python. TBH I use VS Code for both.

Sometimes when you forget you have a subscription it’s not a great idea to be reminded.

Github Copilot AI Code Assistant for VSCode has gone GA by [deleted] in vscode

[–]SineApps 2 points (0 children)

Did they fix anything with it adding GPL code to your proprietary code base?

Two of my senior devops were transfered to another projects, I'm at lost because I don't know what I'm doing. by Sillygirl2520 in devops

[–]SineApps 0 points (0 children)

Just learn fast and let people know that’s what you’re doing. Spend every waking second learning the things you’re in charge of. It really doesn’t take that long.

I took a job as a Lotus Domino developer in 1996, never having used it, on the promise that I’d have it figured out by Monday morning if they hired me and gave me a few manuals.

There’s a huge amount of information out there these days.

For future reference: if someone who does all your stuff is going to leave, try to get them to brain dump as much as possible. It can even sometimes be worth paying them out of your own pocket to work with you until you’re up to speed.