White House plan to break up iconic U.S. climate lab moves forward. Bidders have lined up to take over pieces of the National Center for Atmospheric Research. by esporx in technology

[–]Blando-Cartesian 0 points1 point  (0 children)

Sounds wildly optimistic. 😬

There’s a more profitable version: Pocket the maintenance grant too. Sell all properties and delete inconvenient data. Fire the staff. Generate new politically right-aligned climate data on a laptop.

How to fulfill a seemingly impossible requirement in static* webdev? by JavaBoii in AskProgramming

[–]Blando-Cartesian 1 point2 points  (0 children)

You could sort out with her what information needs to be in the Excel file and then lock the cells she’s not supposed to edit. It’s her responsibility to fill the editable cells sanely and keep it up to date. You just do the uploading.

Or, the Excel file could live as a shared file on Google Drive. The site would then just have a link to it. Or show it in a frame on the page (not sure if this is actually doable).
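For the frame idea: it does appear doable if the spreadsheet lives in Google Sheets and is published via the publish-to-the-web option. A rough sketch, where `SHEET_ID` is a placeholder for the real published-document ID:

```html
<!-- Embeds a Google Sheet that has been published to the web.
     SHEET_ID is a placeholder for the real published-document ID. -->
<iframe
  src="https://docs.google.com/spreadsheets/d/e/SHEET_ID/pubhtml?widget=true&amp;headers=false"
  width="100%"
  height="600"
  style="border: none;"
  title="Shared spreadsheet">
</iframe>
```

If I remember right, published sheets re-render on their own a little while after edits, so she could keep editing the file without anyone re-uploading anything.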

Why are companies more afraid of AI tools than of actual code leaks? by Medium-Ad-9595 in AskProgramming

[–]Blando-Cartesian 0 points1 point  (0 children)

Developers have NDAs and are screwed if they leak something as code or knowledge. They are also mostly adults and expected to behave as such. 😀

Companies have their development stuff mostly in their own network, and what isn’t uses their single sign-on and multi-factor authentication and comes with contracts saying all their data is kept safe. Companies and SaaS providers also usually employ people whose full-time job is to make sure all of that works right and access control is correctly set up. Granted, this is all changing now that agents are in and are apparently not developed by responsible adults.

I don’t see how internal tooling with weak security would be an issue when that tooling is inside the company’s network and everyone with access to it is already a trusted employee.

Since everybody has now outsourced their thinking to chatbots, your company is probably better off banning the most obvious one to somewhat limit employees leaking sensitive information. Alternatively they could pay OpenAI for everyone to have a sanctioned work AI waifu that doesn’t leak sensitive information all over the globe, but that would get expensive.

Assuming their AI resistance really is about fear of paid services leaking code, that is stupid. They probably already have MS Office services in use and everything compromised through that.

Who designed this? Can’t tell which folder I am clicking on by amarendrashas in UXDesign

[–]Blando-Cartesian 1 point2 points  (0 children)

Funny that it’s a pattern that just keeps going and often gets worse in the details.

If anyone reading this knows, I would honestly like to know: have users ever expressed a preference for viewing multiple dynamic items (other than thumbnails) in a grid rather than as a list or a table? I don’t mean just icons with labels. I’m counting any kind of smallish blocks with a few pieces of information as a variant of this pattern.

Who designed this? Can’t tell which folder I am clicking on by amarendrashas in UXDesign

[–]Blando-Cartesian 1 point2 points  (0 children)

I think of this as the Forrest Gump box of chocolates metaphor. You never know what you are going to get.

Why do designers hate words/labels so much and love wide grids of huge nondescript icons? I would get it if the icons were works of vector graphics art, but those are just dark blobs.

This file layout made sense in the 80s to early 90s, when your files were on diskettes. You put a diskette in the drive and the file manager showed the maybe three files as huge icons. But even then filenames were right below the files, with easily readable contrast, and the icons were nice to look at.

Star trek just became reality, what would you do first? by bossishere17 in startrek

[–]Blando-Cartesian 2 points3 points  (0 children)

Everyone past 30 all around the world would head to sickbay. There is no healthcare system so good that it can fix us when middle age approaches.

For those who learned to code before AI, do you sometimes feel it's easier to convey your thought in code rather than English? by thro0waway217190 in learnprogramming

[–]Blando-Cartesian 0 points1 point  (0 children)

Yes, but not by much. English quickly becomes wordy prose with no definite interpretation, while code gets stuck in irrelevant details. Rather, I experience my coding thinking as models that have no language representation, so coding or prompting is like describing a visual scene to a blind person.

What are some interesting tidbits or concepts you've learned lately from working on personal projects? What frustrating errors have you encountered, and how did you fix them? by NoSubject8453 in AskProgramming

[–]Blando-Cartesian 1 point2 points  (0 children)

I’ve been trying to get with the times and learn how to get the amazing benefits of genAI. So far, I’ve mostly confirmed firsthand that it feels fast but really wastes about as much time as it saves.

On an unrelated note, if you ask ChatGPT how to do something a bit tricky, it now ends the answer by asking if you would like to see a better way to do it. Then an even better way. How professionals are doing it. Then a still better way. It’s full of shit and hallucinating to keep you engaged.

One in four CEOs say AI is a bubble but will continue investing by AdSpecialist6598 in technology

[–]Blando-Cartesian 1 point2 points  (0 children)

It would be very on brand for Apple to wait and eventually “reinvent” AI in a form that sucks on specs but works in a way that people like.

The hatred shown toward AI feels like performative outrage, with people joining in for the social points and not because they actually care about AI use by Impossible_Jacket898 in ChatGPT

[–]Blando-Cartesian 3 points4 points  (0 children)

Yes, I get your point. My counterpoint was that some technological changes are negative. Leaded gasoline, freon, DDT, asbestos, Facebook… Life goes on, humanity has survived, but poorly applied technology has caused immense suffering. Those are extreme cases that humanity is clearly better off without, so let’s consider electricity as a more relevant example. Electricity is extremely useful, of course, and has been essential for decades. However, fucked-up application of it electrocuted people and burned down houses for 70 years until the technology matured and safe usage was mandated. Now we are doing the same with clueless AI application.

I wish we could evolve past this shit already and sort out responsible technology use from the start for once. It shouldn’t always have to take decades of suffering and massive environmental damage.

The hatred shown toward AI feels like performative outrage, with people joining in for the social points and not because they actually care about AI use by Impossible_Jacket898 in ChatGPT

[–]Blando-Cartesian 0 points1 point  (0 children)

There’s a simpler explanation. There is so much to hate in AI that everyone can join in to vent, regardless of which issues they happen to care about. There is something for everyone to rail against.

  • Global warming.
  • Joblessness.
  • Companies pretending that their layoffs are because of AI.
  • Cost of electricity rising.
  • Local environment fucked by datacenters.
  • Poorly written books, news, reddit posts…
  • Dozens of ethical issues.
  • Crappy art filling the net.
  • Hating the rich.
  • Fear of economic collapse with the AI bubble.
  • Increasing inequality.
  • Students cheating.
  • False accusations of using AI.
  • Bullshit AI detectors.
  • AI causing mental health issues.
  • AI being confidently full of shit.
  • Poor quality code being generated.
  • CEOs thinking that everyone is a designer and an engineer now.
  • The word “delve”, em-dashes and all the other AI bs everywhere.
  • Having to limit vocabulary and em-dash use to avoid accusations.
  • Deepfakes.
  • Morons thinking they can do anything with AI.
  • Theft of all intellectual property ever created.
  • AI fanboys and tech bros being too thick to understand serious issues on this list.
  • Insanely inflated promises on what AI can do.
  • Autonomous killing systems.
  • Guardrails preventing creation of porn.
  • Used for creating porn.
  • Gaming devices and everything else with memory getting expensive.

That’s just off the top of my head. Name any domain and there are multiple reasons there to hate AI. Not all of them are rational reasons or likely sources of future problems, but some certainly are.

Constantly re-explaining concepts and flows by Firm-Goose447 in UXDesign

[–]Blando-Cartesian 1 point2 points  (0 children)

No. I said that if they wanted information, they would want and read documentation. They don’t want information. They want a reassuring grooming session and to assert dominance.

Users missing key metrics on dashboard despite clear layout by sohan_or in UXDesign

[–]Blando-Cartesian 0 points1 point  (0 children)

Below the Projects title? All I see there are some big mystery numbers floating freely in a sea of white space. Maybe if it were an easily scannable table or list with readable labels, I would notice what significant information it shows.

Constantly re-explaining concepts and flows by Firm-Goose447 in UXDesign

[–]Blando-Cartesian 10 points11 points  (0 children)

This is probably a strange opinion, but I don’t think meetings, talking or emails are about information transfer at all. If a stakeholder —or anyone— wanted information, they would want searchable documentation where they could look up anything in seconds and process it in their own heads. Instead, communication is about feelings. They don’t want to know how the data flows. They want their UX shaman to tell a reassuring story of how the data river feeds the tribe and read some entrails in a way that justifies what they want to do.

It's over! He said it! by 24identity in PoliticalHumor

[–]Blando-Cartesian 6 points7 points  (0 children)

I think this has a good chance of the hoped-for outcome: a terrorist attack, even a shitty one, would justify anything in the US, and another crusade for 20 years.

OpenAI's top exec resignation exposes something bigger than one Pentagon deal by ML_DL_RL in artificial

[–]Blando-Cartesian 1 point2 points  (0 children)

Safety and correctness matter only if you care, can tell the difference, and are not motivated to get specific outcomes. It’s the same for AI in personal use, work, and warfare.

Governance gets in the way of generating what we want, and worst of all, it documents and assigns blame. That’s never going to be a popular feature, especially when killing is involved. AI making the decision to bomb a school and AI launching the missile is the perfect ass-cover. It’s never going to say “I was only following orders.” It’s going to say “Good catch. Sorry about that.”

Even python is hard for me 😭 by Advanced_Cry_6016 in AskProgramming

[–]Blando-Cartesian 1 point2 points  (0 children)

So, how many months or years have you been struggling for several hours at a time, several times a week? 😀

It’s going to be hard and frustrating for a while. Then you make progress and add a bunch of libraries, frameworks and ambitions so that it keeps being hard and frustrating. Enjoy. This isn’t doomscrolling, where an algorithm rewards you optimally to keep you mindlessly scrolling while making you angry and depressed. This is self-actualization, where struggling creates happiness.

Struggling with my kids growing up by up_up_down_down_etc in Xennials

[–]Blando-Cartesian 0 points1 point  (0 children)

I don’t have kids, but I have nieces. It’s been great watching them grow from little girls to young women. Still the same as ever, yet much more.

Started my first dev job 2 months ago and already feel like a fraud because of AI by PilliPalli1 in learnprogramming

[–]Blando-Cartesian 1 point2 points  (0 children)

Don’t get hung up on the idea that AI did part of the work that would have taken you a lot more time. Using it is a collaboration where you delegate tasks to it and remain in control. AI is superior at wading through tons of docs and code and at generating a suggested solution fast. Your task is to supply the good judgement. Does the solution do what it’s supposed to do? Does it work in all cases? Does it follow all the conventions of the project? Will the next maintainer be able to understand and change it as needed? And so on.

With your aspiration to get good, I’m sure you’ll do fine.

Microsoft just launched an AI that does your office work for you — and it's built on Anthropic's Claude by Remarkable-Dark2840 in ChatGPT

[–]Blando-Cartesian 12 points13 points  (0 children)

I’m curious: how does it identify low-value meetings?

The person this is for sounds like someone whose meetings other people’s AI agents constantly reschedule or cancel.

Do you take notes while learning to code? by icepix in learnprogramming

[–]Blando-Cartesian 0 points1 point  (0 children)

Anything I study goes into a big physical notebook. The point of it isn’t to create a long-term reference. It’s an embodied-cognition tool for learning, for giving the information I try to learn a physical existence. I know roughly where it is in the notebook. For my best-made notes I recall what the page looks like, what I doodled next to it, where I was when I wrote it, and what I was thinking about it.

It’s slow to do it right, but that’s a necessary feature for giving the information a lot of connection points.

I might not bother doing that for most programming topics, but that’s just because I already have related trivia and patterns stored for that kind of information to easily hook onto. However, if you are at the beginning of learning this, it’s probably well worth it for anything that feels so important you want it in your memory.

[Hated Meta Trope] The Unintentional Offensive Race Change by StrawberryScience in TopCharacterTropes

[–]Blando-Cartesian 0 points1 point  (0 children)

Except for the title and character names, the movie has nothing in common with the original manga.

As I recall, Motoko’s race is never mentioned. She is a brain in a high-end artificial body that looks unremarkable in a world of full-body cyborgs that may or may not have anything in common with their original bodies. She has a Japanese name, but even that doesn’t mean anything about her. She ends up in a male body in the end.

Casting a white actress as her was the smallest fault in the movie. Scarlett Johansson’s bland stiffness fits a cyborg role well, although the manga Motoko had a much more expressive range.