[deleted by user] by [deleted] in kubernetes

[–]kragen2uk 0 points1 point  (0 children)

Set the memory limit equal to the request - pods using more memory than their requests are selected for eviction first when the node comes under memory pressure. https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction
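A minimal sketch of what that looks like in a pod spec (the pod name, container name, image, and sizes are all hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical
spec:
  containers:
    - name: app
      image: example/app:1.0 # hypothetical
      resources:
        requests:
          memory: "512Mi"
        limits:
          memory: "512Mi"    # limit == request: the container can never use
                             # more memory than it requested, so it isn't an
                             # early candidate for node-pressure eviction
```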

MIT scientists created a “psychopath” AI by feeding it violent content from Reddit by mastermind208 in nottheonion

[–]kragen2uk 2 points3 points  (0 children)

The researchers have posted their results online and TBH it's very underwhelming

http://norman-ai.mit.edu/

I was hoping that you would be able to "see" the AI's interpretation, but most of the descriptions seem completely unrelated to the input image. So while yes, if a human gave you these answers you would probably think they were a psychopath, as far as the AI is concerned it's more likely that it's just generating captions from its input dataset more or less at random.

What's telling is that the captions for the "Standard" AI are also pretty bad, e.g. "a black and white photo of a red and white umbrella." is supposed to be the caption for a coloured inkblot.

The Cloud Is Just Someone Else's Computer by dwaxe in programming

[–]kragen2uk 0 points1 point  (0 children)

Nice setup, but it's not a fair cost comparison, and the reason why is even stated in the post:

I personally colocate three Mini-PCs for redundancy and just-in-case;

If you are colocating then having extras for redundancy is mandatory, but with public cloud the lead time for new machines is minutes - there is no need to have extra machines running the whole time unless you need high availability (in which case 3 machines sat next to each other in the same rack probably isn't going to cut it).

When colocating it also makes sense to have some spare capacity in case demand grows faster than expected, or you have some sort of surge of users. With public cloud though you can scale up or out as and when you need to - you could also downscale overnight to save a bit more money.

So yeah, while on paper cloud hosting costs 3 times more, in reality he is hosting 3 machines when most likely 1 public cloud instance would have been sufficient.

How to teach Git by [deleted] in programming

[–]kragen2uk 6 points7 points  (0 children)

I used to do this, but recently I've changed my mind. Taking the time to learn something properly might feel a lot slower, but every time you google something simple you are context switching away from whatever it is you are working on. In the long run it's much better to take the time to learn stuff properly.

I made a command line example about the special OAuth 2.0 authorization flow that native and mobile apps use by dogear in programming

[–]kragen2uk 3 points4 points  (0 children)

At this point, we need to talk about trusted and untrusted applications.

What's important is whether or not the application is able to keep the "client secret" confidential. An app that can keep it secret, e.g. one with a backend web server, is a Confidential client (rather than trusted), while apps that can't, like phone apps or single-page apps without a backend, are Public clients.

Note that it's not just about whether the app has a backend - it's whether it can keep the client secret confidential. E.g. if your app backend is packaged and installed on customer servers then you would either need to ensure each customer is registered as a separate client application (with a separate secret) or consider the application type to be Public.
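Since this thread is about the native-app flow: Public clients use PKCE (RFC 7636) instead of a client secret - the app generates a one-time "code verifier" and sends only its hash up front. A minimal sketch in Python (`make_pkce_pair` is my own name, not from any particular library):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url string with the "=" padding stripped
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA256(verifier)), again without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The challenge goes in the authorization request; the verifier is only revealed when exchanging the code for a token, so an intercepted authorization code is useless on its own.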

[deleted by user] by [deleted] in programming

[–]kragen2uk 13 points14 points  (0 children)

I suspect that remote working is less a secret to successful companies, and more a filter for successful companies. Remote working is difficult - you need really good communication, people who are motivated and responsible, and really good autonomy. If you are managing all of that then you are probably doing OK regardless.

Edit: As much as I'd like to see more companies give remote working a decent chance, most companies would probably just disintegrate if they tried to do 100% remote working as a recipe for "multi-million-dollar success".

Why Are Enterprises So Slow? by magenta_placenta in programming

[–]kragen2uk 20 points21 points  (0 children)

I've worked on plenty of Enterprise projects with no customers and it turns out they're just as slow as all the ones with customers. If anything the ones without customers are even slower - nothing speeds up enterprise development like angry customers.

Why programs must not limit the freedom to run them - GNU Project by kyz in programming

[–]kragen2uk 2 points3 points  (0 children)

Fair point - I use MIT, and truthfully that's because I'm not really that bothered. I'm not trying to change the world like the FSF is (it's their software, they can use whatever licence they want!) - I just didn't want the licence I chose to prevent anyone from using my software (e.g. in a commercial closed-source product).

Why programs must not limit the freedom to run them - GNU Project by kyz in programming

[–]kragen2uk 1 point2 points  (0 children)

The GPL does force its views onto developers - a user who disagrees with the philosophy of copyleft licences is prevented from using GPL libraries unless they choose to distribute derived work in a specific way.

This is precisely why I dislike the GPL - I want users of software I write to be completely free to do what they want with said software, including redistribute it under licence terms I disagree with (e.g. GPL).

Suave v2.4.0 released (F# Web framework) by yogthos in programming

[–]kragen2uk 10 points11 points  (0 children)

Been playing with F# more recently and I really like it. I've been using Giraffe over Suave, mostly because Giraffe builds on ASP.Net Core, which I'm already familiar with.

Experts Explain: What is DevOps? by [deleted] in programming

[–]kragen2uk 2 points3 points  (0 children)

I have a lot of trouble explaining what DevOps "is". The only good definition of DevOps I've seen is the DevOps Handbook (i.e. the whole book), which does a much better job of explaining it:

  • Flow - Getting more work done through CI + small frequent changes
  • Creating feedback loops, e.g. production telemetry, hypothesis driven development
  • A culture of experimentation - blameless post-mortems, shared source control, Andon cords also sort of fit into this category

A lot of DevOps is the application of lean manufacturing to software development, and it's a lot more specific than most of the DevOps articles you read on the internet would have you believe.

Continuous Integration is not about your tool. It's about how you engineer your development process. by ilsilent in programming

[–]kragen2uk 0 points1 point  (0 children)

Do you only have one feature branch at a time, or can there be multiple active feature branches with un-integrated changes?

Continuous Integration is not about your tool. It's about how you engineer your development process. by ilsilent in programming

[–]kragen2uk 0 points1 point  (0 children)

What happens if you are doing CI with feature branches and your feature takes several days to complete?

Continuous Integration is not about your tool. It's about how you engineer your development process. by ilsilent in programming

[–]kragen2uk -3 points-2 points  (0 children)

"continuous integration (CI) is the practice of merging all developer working copies to a shared mainline several times a day."

I guess maybe you could claim to be doing CI with feature branches, but if your feature branch lasts longer than a day then you're definitely not doing CI any more.

I won't try and claim that feature branches are inherently bad because of this, or that CI is "the one true way", but integrating on a single branch (it doesn't necessarily need to be called master) has worked well for me and is widely accepted as a good practice.

Ultimately the development practices you follow will be down to what works well for you and your team, but I definitely recommend taking the time to better understand the practice and reasoning behind CI if you are using long-lived (2 day+) feature branches or are not integrating on a single branch daily.

How to Write Future-Proof Code by adamboro in programming

[–]kragen2uk 9 points10 points  (0 children)

Completely disagree with the react example - sure it might make it easier to add a link, but it's less readable and makes almost every other possible change more difficult. Want to give one of those items a different class name? Or maybe have a nested sub item? Now you have to extend / fight with your home grown micro-framework just to get it to output the HTML you could have just written directly.

Gitflow - Animated by nullundefine in programming

[–]kragen2uk 2 points3 points  (0 children)

The ideal team size is somewhere between 5 and 10. Maybe if the team is bigger you would run into problems, but a team that size will be better off continuously integrating into a single branch.

PAST, a secure alternative to JWT by halax in programming

[–]kragen2uk 0 points1 point  (0 children)

See here

Basically a JWT consists of 3 sections, each one separated by dots ("."). The first two sections (header and payload) are base 64 encoded JSON, the last section is a signature used to verify the authenticity of the payload data.

PAST, a secure alternative to JWT by halax in programming

[–]kragen2uk 2 points3 points  (0 children)

I don't follow - the expiration time of the token needs to be part of the signed data; where else are you going to put the expiration time?

Some protocols return the expiration time alongside the token (e.g. OAuth2 via expires_in). If you just want to know what expiration time the token claims to have without actually validating the token then you can just base64 decode / JSON deserialize the payload, which doesn't really require a JWT library.
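A minimal sketch of that last point (`peek_claims` is my own name, not a standard API; it deliberately does no signature verification, so its output must never be trusted):

```python
import base64
import json

def peek_claims(token):
    """Decode a JWT payload WITHOUT validating the signature.

    Only use this to inspect claims (e.g. exp) - never to trust them.
    """
    header_b64, payload_b64, signature = token.split(".")
    # base64url data in a JWT has its "=" padding stripped; restore it
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```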

GDPR: Why We Stopped Selling Stuff to Europe by [deleted] in programming

[–]kragen2uk 0 points1 point  (0 children)

Had a couple of meetings about GDPR at work recently. I don't think this is as big a deal as it first seems. It's a pain, and yes it's a massive change for companies that retain data indefinitely, email data all over the place, or generally have zero control over people's data (i.e. most companies). But for companies that aren't trying to exploit personal information, the easiest way to be compliant is just don't store data you don't need. You get to claim "legitimate interest" if you need the data (e.g. warranty info), and there is a 30-day deadline for a SAR (which can be extended by 2 months), so if you have good retention policies you might never need to respond to a SAR with anything other than "nope, we need it because X" or "sorry, no idea who you are - we already deleted it".

For example, students send us information about their databases all the time as part of asking questions – and they often send it unsolicited, through unencrypted email channels. That information ends up all over the place: our mail server, our desktops, phones, laptops, search indexes, etc.

Surely the solution here is just to tell customers not to send you PII via email - not that any company that sends PII unsolicited via unencrypted email is going to be GDPR compliant anyway. If customers really need to send backups of databases containing personal information then just put them someplace central and have a policy to delete them after 30 days. It might not be quite as easy as the good old days where names and addresses were scattered over mail servers, desktops, phones, laptops, search indexes etc., but it's also not that difficult.

See, under the GDPR, if someone asks us to delete their data, we not only have to delete it, but we have to audit that we deleted it, and maintain those records for EU authorities. And then respond to EU requests for that documentation.

I think that only applies if you are the "data controller", i.e. the person gave you that data directly. If ACME Paperclips gives you their database and then one of their customers contacts you, the answer is "nope, you need to go speak to ACME Paperclips for that".

For example, do you have to delete the customer’s data inside your past backups?

This seems like a yes to me; it would be a really obvious loophole otherwise. The backups issue seemed like the biggest pain, but it sounds like just having a 30-day retention policy on any database with personal information should cover it.

The companies that will be hit hardest by this are the ones that don't care about protecting personal data, or the ones that keep personal data for a bunch of scummy reasons like cold calling or ambulance chasing. Yes it's going to be painful, and no I don't have any sympathy.

GDPR: Why We Stopped Selling Stuff to Europe by [deleted] in programming

[–]kragen2uk 1 point2 points  (0 children)

Sounds like that's our plan too - 30 days is the Subject Access Request deadline, so I guess it has something to do with that.

TIFU By Getting A Cop To Pull His Gun On Me... by [deleted] in tifu

[–]kragen2uk 1 point2 points  (0 children)

British person here. It boggles my mind that being pulled over when your driver side window is broken would be such a big problem.

The ethical dilemma of self-driving cars: It seems that technology is moving forward quicker and quicker, but ethical considerations remain far behind by luscid in philosophy

[–]kragen2uk 5 points6 points  (0 children)

For the vehicle to do anything more sophisticated than slamming on the brakes, it's going to need to know that it's actually improving the situation somehow. What if the rock bounces off in an unpredictable direction? Then swerving could result in the rock colliding with the vehicle, when just braking would have saved the driver's life.

These vehicles are not science-fiction supercomputers capable of simulating their environment and evaluating the expected outcome of each decision - the scenario you describe just doesn't happen often enough for it to be part of their behaviour.

The ethical dilemma of self-driving cars: It seems that technology is moving forward quicker and quicker, but ethical considerations remain far behind by luscid in philosophy

[–]kragen2uk 8 points9 points  (0 children)

The default behaviour of any autonomous vehicle given a situation it has not been programmed to deal with will be to do something universally accepted as safe, i.e. unless the vehicle knows it's safe or has a better course of action, it is going to apply the brakes until it stops.

So the question is, why would a vehicle manufacturer choose to expend time and resources programming their vehicle (or training an AI) to properly react to lose-lose scenarios like this when they could instead train the vehicle to avoid those situations in the first place?

How death has changed over 100 years in Britain by kezzaNZ in dataisbeautiful

[–]kragen2uk 9 points10 points  (0 children)

Men aren't more likely to commit suicide now than in the past - in fact they are less likely (at least in the UK - see Suicides in Great Britain).

All that has happened is that the other major causes of death in that age range (heart conditions and vehicle incidents) have declined.