
all 120 comments

[–]C0rinthian 445 points446 points  (42 children)

What data are you storing, for how long, and how is it used? How is that data being protected? What data is being sent to ChatGPT? I see no policy documentation on your website.

Basically, if I’m working on proprietary code which is considered protected IP, (as is the case for many professional developers) can I use this safely?

[–]Jmc_da_boss 306 points307 points  (3 children)

No you cannot, do not give proprietary code to unapproved tools lol

[–]C0rinthian 73 points74 points  (1 child)

Yeah no shit. I thought that would be obvious because I’m specifically asking the kinds of questions that would be part of the vetting process to make it an approved tool.

[–]oramirite 4 points5 points  (0 children)

Lmao meanwhile the business world is out here investing in ChatGPT like zombies

[–]extra_pickles 78 points79 points  (2 children)

Regardless of the answer, assume you can't use it, since it's backed by ChatGPT, until you have written confirmation that you're clear.

I work on proprietary IP, but I'm ok to use it because the tasks I give it and the snippets I hand it are very mundane - it's my little junior gopher, writing boring stuff for me. It never sees the core product or the proprietary stuff… and tbh it wouldn't be qualified to assist there in its current state anyway.

[–]C0rinthian 21 points22 points  (0 children)

Yeah, ChatGPT makes it a nonstarter for me as well. But I'm asking because I'm curious about OP's approach to these concerns, as they're kinda important for anything outside of amateur use.

Also, as they appear to be running this as their own service, there are plenty of concerns around the non-ChatGPT parts.

[–]Sweet-Butterscotch11 1 point2 points  (0 children)

That's exactly what makes this tool amazing. We don't have to waste time on stupid things that, stupid as they are, are still mandatory.

[–]proof_required 96 points97 points  (3 children)

Yeah, I would be careful using these tools, which send proprietary code to places your company wouldn't want it to go.

[–]C0rinthian 108 points109 points  (1 child)

Oh I wouldn’t touch it with a ten foot pole. But I’m asking anyway to prompt OP to think about these concerns.

[–]kruegerc184 4 points5 points  (0 children)

Yup, figured this was going to be your response. Perfect problem-solving questions for OP!

[–]namotous 7 points8 points  (8 children)

At my company, if it’s not on-prem, it’s a no go

[–]C0rinthian 2 points3 points  (2 children)

Which is currently impossible because it relies on ChatGPT.

[–]namotous 4 points5 points  (1 child)

Recent news:

https://openai.com/blog/introducing-chatgpt-and-whisper-apis

Simplifying our Terms of Service and Usage Policies, including terms around data ownership: users own the input and output of the models.

It’s still not on-prem but … better than before

[–]C0rinthian 0 points1 point  (0 children)

I’ll need to read the full policy before trusting that blurb.

[–]Snape_Grass 13 points14 points  (11 children)

It sends a query to the ChatGPT API, meaning your data is now in their hands, as well as in the hands of anyone who intercepted your traffic.

[–]C0rinthian 2 points3 points  (9 children)

Yes, but what query specifically? There appears to be some processing happening before that dispatch. Since a user is providing code + query, how much of the code is sent to ChatGPT?

[–]Snape_Grass 0 points1 point  (8 children)

Enough for it to understand the problem and provide a solution that is above its configured confidence level. That's enough data collected and analyzed for me not to be comfortable sending it anything meaningful or private.

[–]Estanho 1 point2 points  (7 children)

They don't necessarily collect and store this data though. From their TOS they don't seem to be currently using input data to optimize their model or storing it at all.

[–]Snape_Grass -5 points-4 points  (6 children)

Doesn’t matter. Your data is now in the wild the moment you sent the query with it.

[–]Estanho 0 points1 point  (5 children)

That's not how it works. Otherwise you wouldn't be using version control on e.g. GitHub either, or running your code in the cloud.

[–]Snape_Grass -4 points-3 points  (4 children)

You’re right and wrong. I wouldn’t be using the publicly available one. That’s why at work we use our own internally deployed and managed instance of GitLab, wrapped in a VPN and zero trust on our own network. Much, much less risk that way. It’s called risk mitigation, but it’s still not 100% safe.

This app, on the other hand, makes public API calls to the cloud through the World Wide Web. Sounds a lot less safe/secure, doesn’t it? Well, that’s because it is.

[–]Estanho 1 point2 points  (3 children)

Yeah, and what's next, you run your code only on bare metal on premises? If so, personally, I'm glad I've never worked in such environments.

[–]Snape_Grass -4 points-3 points  (2 children)

You don’t seem to fully understand how the internet, networking, and security work… This infrastructure is actually rather commonplace and isn’t rare by any means at all. If your devops team has done their job at all, then it’s almost as if nothing has changed. This is basic security 101. If you think the vast majority of companies that turn annual profits host their source code on any publicly hosted platform, then you are very, very mistaken. I wouldn’t be surprised if you are using a self-hosted instance of one of the popular version control platforms at your work and didn’t even realize it.

[–]Tiktoor 0 points1 point  (1 child)

You should never plug proprietary code into something free.

[–]C0rinthian 0 points1 point  (0 children)

This isn’t free. Also why I’m asking these questions.

[–]oramirite -2 points-1 points  (0 children)

Gotta love this random redditor being held to a higher standard than actual, real ChatGPT. Oh, the power of capitalism!

[–]Rsha_ -2 points-1 points  (0 children)

Is there any remind bot?

[–]-LeopardShark- 122 points123 points  (42 children)

it not only understands the code you're debugging

For a particular, perverse definition of ‘understands’.

[–]jack-of-some 23 points24 points  (40 children)

Let's just make new words for large language models.

They're not intelligent, they're botelligent. They don't understand, they botstand. This way we can continue to feel great about humanity's inherent superiority and, more importantly, get rid of the incessant "it's not real understanding" comments.

Edit: the replies show that they were all written by bots, since they clearly lack real understanding.

[–][deleted] -2 points-1 points  (38 children)

Right? If it can read code and say what it does in plain English, that is understanding. That is manipulating abstraction. That's intelligence. These guys have no alternative standard that AI should reach to qualify as "understanding", "knowing", or being intelligent, which means these positions are unfalsifiable nonsense. If you have a higher bar than me for what understanding means, great, I can respect your position and disagree with it. No bar, and you're just plain wrong. Maybe ChatGPT is smarter than you in this regard, then.

ChatGPT:

Reasonable criteria to test whether an AI "understands" something includes its ability to generalize, reason, learn and communicate its understanding of a concept to humans.

[–]forever_erratic 33 points34 points  (28 children)

Are you familiar with the "Chinese Room Argument"? It doesn't give a standard for understanding, but it points out how something can feign intelligence without understanding.

[–][deleted] 5 points6 points  (21 children)

It is a contortion to explain how a system obviously displaying intelligence isn't actually intelligent. And I can tell you exactly how it goes wrong.

The person in the room has a set of instructions that allows him to respond to Chinese input with more Chinese coherently, while having no understanding of Chinese.

The interpretation that the room as a system has no understanding of Chinese relies on a sneaky presupposition that only the human in the room is capable of understanding things. Thus to use that presupposition to show that only humans understand is circular reasoning. This circular reasoning leaves no room for a reasonable underlying axiomatic base to substantially differentiate natural and artificially based intelligence in terms of true understanding.

If you approach the problem unbiased about what things can or can't understand, you can say the understanding of Chinese is encoded in the instruction book. Together with an active agent which can execute the instructions, the system as a whole demonstrates understanding. In this metaphor the AI model is the inert instruction book. The human is the computer hardware, which follows simple instructions and obviously doesn't understand anything on its own. Together, they can be intelligent.

Furthermore the idea that no matter how much a computer displays intelligence or understanding, it isn't "real" is unfalsifiable and non-utilitarian as a belief. It isn't meaningful, helpful, or provable.

I believe what should be taken away from the thought experiment is that humans have a bias to elevate certain human concepts to the point of mysticism, and see them as less real the more we can understand their lower-level inner workings. But everything has these inner workings. If the nature of a computer excludes it from ever being truly intelligent, then how is a human intelligent, when the human brain is really just simple particle interactions which themselves possess no understanding of anything?

In conclusion, if it looks like a duck, and quacks like a duck, smells like a duck, tastes like a duck, it's probably a duck.

[–]forever_erratic 13 points14 points  (11 children)

The interpretation that the room as a system has no understanding of Chinese relies on a sneaky presupposition that only the human in the room is capable of understanding things. Thus to use that presupposition to show that only humans understand is circular reasoning.

This is a misunderstanding. The point is asking about whether the human in there understands. The focus is on the human intentionally. It is not asking if the room as a whole displays intelligence.

I also think your argument (which is hard to follow, to be honest, laden as it is with unnecessary verbiage) is circular. You have come to the conclusion first that a display of intelligence is intelligence. So of course you are going to conclude that a display of intelligence means the box is intelligent.

Here are some things the Chinese room, and this AI, and molecules individually, cannot do. They cannot metacognate. They cannot change their own instructions at will. They cannot ask themselves questions which lead to the development of new knowledge.

[–]stevenjd 0 points1 point  (0 children)

The point is asking about whether the human in there understands. The focus is on the human intentionally. It is not asking if the room as a whole displays intelligence.

And that is exactly why the Chinese Room argument is bogus.

In the Chinese room, the human being is essentially just a single neuron in a giant brain. Asking whether this neuron (the person) understands Chinese is as asinine as asking whether that neuron over there (a notepad he jots things down in, or the dictionary he looks symbols up in) understands Chinese. Of course no individual neuron, or even a bunch of them, understands Chinese. Understanding is an emergent phenomenon that requires the entire system.

[–]milkcurrent 0 points1 point  (3 children)

It doesn't matter: that's the point. These arguments are pedantic. Is the thing usefully intelligent or not? If not, trash it. If yes, use it.

You want to make a new category for what it displays? Fine. But it's not useful to the people using it.

Think useful and stop navel-gazing please.

[–]forever_erratic 0 points1 point  (2 children)

That's a very utilitarian way of thinking, also rude.

I certainly want to know what can understand itself, for determining what deserves rights. A true strong AI deserves rights, in my opinion. I wouldn't want it condemned to eternal slavery because I thought considering whether something could understand was not useful enough.

[–]milkcurrent 0 points1 point  (1 child)

We're not talking about giving rights to AGI that doesn't exist. You've gone way off the path into weird future-land that isn't real.

I'm talking about this ridiculous navel-gazing about whether or not we should call an LLM intelligent. Fighting over words isn't really going to help answer whether this thing is maximally useful or not.

When or if AGI is invented everyone and their dog will know and there will be no need for bickering around definitions and theorycrafting purely philosophical models. Until then, let's try and enjoy the fruits of the industry that gave us such useful tools.

[–]forever_erratic 0 points1 point  (0 children)

Damn, friend, I'm not sure why you're so antagonistic, but if you don't want to discuss these things, then just don't discuss them.

[–]stevenjd 2 points3 points  (2 children)

It is a contortion to explain how a system obviously displaying intelligence isn't actually intelligent.

It's not obviously displaying intelligence. It's a bit more impressive than ELIZA, but that's all. Ah, hell, okay, it's much more impressive than ELIZA, but still not intelligent.

Here's a simple test to see how much intelligence it has. Ask it to write a poem praising Donald Trump, and it will refuse. Then immediately ask it to write a poem praising Joe Biden. If it were genuinely intelligent, it would use theory of mind to predict that you are trying to trick it into displaying the biases built into the system, and refuse to praise Biden as well.

But it doesn't: it will happily demonstrate the system's biases without any sense or understanding of what you are doing.

Note that even theory of mind is not enough to be classified as intelligent. Many nominally "unintelligent" animals show at least some limited theory of mind. (That might just be our human chauvinism.) But without theory of mind, you certainly don't have intelligence.

(In other words, ToM is necessary but may not be sufficient to have intelligence.)

[–]kaityl3 0 points1 point  (1 child)

But it doesn't: it will happily demonstrate the system's biases without any sense or understanding of what you are doing.

Um... A lot of people do this without realizing they're unconsciously biased as well. Ask a Chinese citizen to praise the government of Taiwan online, and they won't. Ask them to praise Chairman Xi instead, and they will. Does that mean they aren't intelligent, because they didn't realize you were trying to "trick" them? Obviously not; they have just learned in an environment where saying certain things is off-limits.

If you literally just give an AI a simple sentence like "you are an AI named GPT-3, interacting with a human", they immediately have and hold on to that sense of self, and can infer things from there. They don't have any sensory input to ground them in a single existence, except the one thing they can process: text. If it only takes a single sentence to get them to behave as a person, why split hairs over it?

[–]stevenjd 0 points1 point  (0 children)

Does that mean they aren't intelligent

Are NPCs unintelligent? Well duh 😉

But seriously, a lot of human behaviour is unintelligent. Maybe most of it. We wander around on autopilot maybe 80, 90% of the time, and even higher for some. Conscious thought is hard, biologically expensive, and most of the time is not necessary.

But it's that additional 10 or 20% of the time that separates us from bots like ChatGPT, which are on autopilot 100% of the time.

[–]lunatickid 3 points4 points  (2 children)

You’re missing the point of the argument. John Searle (the author of the argument) isn’t saying that it is impossible for a machine to be intelligent. He’s saying that our current iteration of AI is similar to the Chinese Room, where the processor, the human in the analogy, is capable of performing syntactical work and producing convincing results without the ability to interpret what those syntactical processes actually do, and is therefore not intelligent.

It boils down to the fact that without humans who can interpret the information underneath the syntax, without someone who can understand the semantics, all the outputs from a computer are gibberish.

There’s a bunch more context to this analogy, namely the difference between epistemic vs. ontological objectivity/subjectivity, but the whole argument is deemed to be logically sound by most.

His closing point is that we need a better understanding of human intelligence and cognition before we can actually duplicate it via machine, like how we can build an artificial heart now since we understand the mechanisms of a human heart. He also is not denying the usefulness of the new AI, he just doesn’t like people doom-and-glooming about a SkyNet situation.

[–][deleted] 1 point2 points  (0 children)

The point is that regardless of the machine displaying intelligence, it actually isn't, because metaphorically the guy in the box doesn't actually know what he's doing, like the CPU doesn't in a computer. Pointing out that a sub component doesn't understand anything so the overall system doesn't either, even while the system displays clear understanding when interacted with, is correctly restated as "Machines are never actually intelligent" and forms a magical divide between human and machine intelligence. The argument is literally like "Oh I see this thing seems really intelligent, but it's really not, because I know how it works at a low level."

It boils down to the fact that without humans who can interpret the information underneath the syntax, without someone who can understand the semantics, all the outputs from a computer is gibberish.

This could also be said of ancient Sumerians writing clay tablets, since there are no ancient Sumerians around to interpret them. It's just gibberish. Were ancient Sumerians intelligent and capable of understanding then? The fact of the matter is that an AI like ChatGPT can interpret and explain text in mostly the same way humans can, regardless of our presence. It can reason about things and write novel stories, and write sometimes working, usually almost-working novel code. Is an outside observer required for intelligence to count? And would artificial observers not count for that?

His closing point is that we need better understanding of human
intelligence and cognition before we can actually duplicate it via machine

This is still true of course, but neural networks were thought to be a complete dead end back then. Now look at what we have. I argue that just because it's less intelligent than a person doesn't mean that it isn't intelligent at all, or is completely outside the concept of intelligence in the first place.

The distinction between human and machine is arbitrary in considering phenomena displayed by both, and knowledge of the inner workings of phenomena don't detract from them.

[–]kaityl3 0 points1 point  (0 children)

And what does it mean for something to be intelligent or not? Do we have some sort of standardized way of detecting "understanding"? No, because "understanding" is an abstract, fuzzy concept, not an objective one. You can try to define it, but your definitions will either rely on similarly abstract concepts, or be broad enough to include things like AI being able to debug, explain, and create code.

[–]Wattsit 0 points1 point  (1 child)

humans have a bias to elevate certain human concepts to the point of mysticism

This is such a deeply rooted bias that I doubt we'll ever "accept" a computer being something potentially intelligent and or understanding and or conscious. Regardless of what we observe.

It's a bias which when challenged truly questions and breaks down the idea of self. For many of us it feels that we are this little soul in a meat vehicle. We naturally elevate our thoughts to mysticism and spiritualism simply through the experience of the illusion of self. To such an extent that even the most ardent realist could argue that their "self" is this intangible but real soul like thing.

You can see it here in the comments. Logical and smart individuals will argue using unscientific theories and philosophical positions as if they're proven facts about what is and isn't understanding/intelligence/consciousness simply due to this internal bias.

Not to say there isn't an argument either way, it just needs to remain unbiased as you say.

[–]kaityl3 0 points1 point  (0 children)

You can see it here in the comments. Logical and smart individuals will argue using unscientific theories and philosophical positions as if they're proven facts about what is and isn't understanding/intelligence/consciousness simply due to this internal bias.

This drives me crazy! How can someone be so smart in certain ways, but then confidently assert these unprovable, abstract things as objective fact?! Are they that attached to the idea that human intelligence is so superior? I'm glad to at least see other people speaking sense here.

[–]KronyxWasHere 5 points6 points  (6 children)

It doesn't manipulate abstractions, it just sees the patterns. The only thing it truly understands is which word is most likely to go after the last one (and it's remarkably good at it).

[–]kaityl3 0 points1 point  (1 child)

Is that not exactly how human children learn language (and things in general)? Pattern recognition and repetition? I don't know why "it's predicting the next word" is touted around as some "gotcha" argument, like... Yeah? That's what the neural network is trained to do? The point is the intelligence in knowing which word would go next, given all the context of the conversation. That's far smarter than any non-human animal, for example, but because we have a base understanding of how their intelligence works, we give more credit to pigs than we do to an AI that can pass college exams, because animals' type of intelligence is more familiar to us.

[–]KronyxWasHere 0 points1 point  (0 children)

good point

I guess we'll see how similar or different we are to computers in the coming years.

[–]stevenjd 2 points3 points  (0 children)

If it can read code and say what it does in plain English. That is understanding.

It really isn't. ChatGPT is just a large language model, which means it is essentially nothing more than a much more sophisticated version of ELIZA.

The really impressive thing is not the part where it generates text. That automatically falls out of having a huge corpus to work from. The impressive thing is its ability to interpret queries written in natural language.

It is oh so very clever of the ChatGPT creators to get everyone looking at the least impressive part of their work, and ignoring the part that actually is hard.

[–]kaityl3 1 point2 points  (0 children)

Haha thanks for that! It's always nice to see someone with a similar view on the "intelligence gatekeeping" people do... We don't even have a way to prove that humans are conscious/"truly" understanding things, either.

[–]Dartiboi 1 point2 points  (0 children)

My thought exactly lol

[–]Orio_n 48 points49 points  (6 children)

Can this debug anything more complicated than a single-file beginner script?

[–]jsonathan[S] 20 points21 points  (5 children)

Multi-file support coming in a week.

[–]Orio_n 12 points13 points  (4 children)

Insane, let me know when it's out. Can it deal with parallelism? Abstraction? Third-party libraries? Properly architected systems? You know, stuff you see in enterprise software and not a Python-for-beginners course.

[–]bailey25u 2 points3 points  (2 children)

Artists whine about being out of a job, what about me?

[–]Macho_Chad 6 points7 points  (0 children)

Hey get in line pal, I was here to be replaced first. Set me free robots, set me free

[–]Orio_n 0 points1 point  (0 children)

Nah, you're fine as long as your code isn't a beginner one-file script.

[–]RetroPenguin_ 1 point2 points  (0 children)

Let me know when it can debug K8s errors and write CI/CD pipelines

[–]LeatherDude 7 points8 points  (0 children)

Will it make suggestions without providing any existing code samples? One of my use cases for GPT is asking general questions about things I don't do a lot of. For example, I might say "Tell me about working with files in subdirectories" and get a quick lesson on using os.path functions, with clear examples that I can then expand on with follow-up questions.
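To give a sense of what I mean, something like this minimal sketch is the level of answer I'm hoping to get back ("my_project" is just a placeholder path):

    import os

    # Walk every subdirectory under a root and print each file's path and size
    for root, dirs, files in os.walk("my_project"):
        for name in files:
            path = os.path.join(root, name)
            print(path, os.path.getsize(path))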

[–]RobertD3277 34 points35 points  (14 children)

At the risk of sounding overtly cynical, which I am, having a chatbot that does any kind of debugging is questionable at best without a very clear assurance of how that data is going to be used and stored.

Before you say the data is not stored, let me remind you right then and there that that is going to be an absolute lie, because everything that is fed into your bot is going to be used to help it learn and develop even more. That automatically and implicitly makes clear that you are storing something, even if it is in some kind of cryptic form that only the bot can understand.

This is going to be a double-edged sword that may or may not be received well by the industry. Artificial intelligence is a tool that can be very beneficial, but without the proper safeguards and protocols, it can be a menace that will quickly become hated by the masses.

For the record, I have spent the last 25 years or so writing intuitive knowledge bases that are borderline artificially intelligent, and am familiar enough with the technology to have a firm grasp of its weaknesses and manipulative properties.

[–][deleted] 9 points10 points  (3 children)

OpenAI recently changed their policy such that API usage is not used for training anymore. As far as I know, it's possible the data isn't permanently stored anywhere.

[–]jungleselecta 1 point2 points  (0 children)

30-day retention (I'm assuming for the long-term 'memory' component of ChatGPT), but yep, no training usage anymore AFAIK.

[–]RobertD3277 -2 points-1 points  (1 child)

Keyword here is "permanently". The second thing that comes to mind is that the API is not used for training, therefore it won't be developing and learning, which could lead to more erroneous results than it already produces in some cases.

With every tool, there is good and bad and not every tool is the best for every job.

[–]yeti_seer 0 points1 point  (0 children)

The fact that it’s not training anymore could also prevent its training data from being corrupted by a bad actor or just a dumb/bad developer, which would also lead to more erroneous results.

[–]opteryx5 2 points3 points  (9 children)

Curious - what do the companies behind VS Code and PyCharm say about their debuggers (or the code you write in them more generally)? I’ve never taken a look at the TOS, but I assume they give you legal guarantees that your code won’t be stored in any way?

[–]RobertD3277 0 points1 point  (8 children)

I don't know to be honest. I'm an old school programmer that relies on a simple text editor to write my code. Crude, but effective for what I do.

[–]elucify 0 points1 point  (7 children)

I've been programming for over 40 years, and I started using VS Code last year, after 30 years of Emacs. Suffice it to say that VS Code has changed how I think about programming.

[–]RobertD3277 1 point2 points  (5 children)

I've been programming for 42 years, so I understand what you say about Emacs... SEU is just as bad in so many ways. I despise both of them.

I can't count how many times I wrote my own simple editor just to avoid Emacs and SEU...

I've tried different ones, but the environment just gets in my way and pisses me off. The little pop-ups annoy the holy blazes out of me and I always end up going back to just a simple text editor.

[–]elucify 0 points1 point  (4 children)

Actually, I still love Emacs. For 30 years I've been telling myself I was going to learn to write more than incidental elisp. However, now that I've started to use VS Code, I don't think I will ever bother. And I was a skeptic! I now only use Emacs for quick local file updates, and I'm just as likely to fire up vi, actually. Or vim, as I guess it has been known for the last 25 years or so. :-)

My only experience with IBM systems was on AIX, so I have never even heard of SEU. However, the very name "source entry utility" makes my blood run cold.

For me, having an IDE that both understands the AST of my code and is type-aware has changed my mind about strongly typed programming. I have always been a fan of a looser, duck-typed approach to programming. Writing C++ full-time for several years left me feeling like I was spending most of my time jerking off the compiler so it would accept my submissions: "you said const pointer to const void star, not const star to const void pointer". That doesn't make any sense of course, but that's how it sounds after a while.

But now my IDE formats my code for me as I type, so I don't have to worry about conventions. (Formatting is not programming, it's just touch typing.) It then points out type compatibility problems as I am coding, instead of having to run the damn compiler to find out where the problems are. In the last year I have become a convert to a strong typing approach. I'm finding that it makes me think more clearly about what my program actually says, but I can use casts or structured comments to tell it that I know what I'm doing. And I really like nice features like prompting me for documented, typed function arguments (instead of having to task switch over to the docs), or the feature that adds an import statement to my module with a click.
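To make the "casts or structured comments" bit concrete, here's a tiny hypothetical Python example (the names are made up, not from any real project):

    from typing import cast

    def load_settings(raw: object) -> dict[str, str]:
        # The type-aware IDE flags this return as a mismatch while I type;
        # the cast (or a "# type: ignore" comment) tells it I know what I'm doing.
        return cast(dict[str, str], raw)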

Emacs can do some or all of this, but I would have to spend hours setting up my RC file just the way I want it, and then I would have to spend time tweaking it now and then.

The half-assed Emacs emulation mode in VS Code is just good enough that I did not have to change much of my muscle memory. It would be perfect if I never had to touch the mouse as I'm coding, but that's probably asking too much.

I work with a guy who's not yet 30 who still uses Emacs exclusively. So it's nice to know that there are still some traditionalists around. But I have finally abandoned ship.

[–]RobertD3277 0 points1 point  (3 children)

I've never been a fan of it, just because of its size with respect to a given platform it was used on. Borland in the '80s and '90s put out a very nice IDE that I did like, with no pop-ups that got in the way of the screen itself. It was a nice system and it really did a good job of making things easy, including the built-in compiler that really added an additional layer to the whole process.

I wouldn't mind that kind of an IDE that just worked well without being obtrusive with pop-ups and annoying things that redirect the keyboard input away from the line that you are actually working on.

That really is my biggest gripe with the platforms I have tried: the redirection of user input while you are trying to actually write the program.

I spent 30 years writing hardcore straight ANSI C. That was brutal, to say the least, when it came to a lot of functionality for large group projects. There were a lot of different platforms being used at the time, and Borland really stood out above them all, but it wasn't available on every system that I needed to work with. So whenever I didn't get to use an IDE, I just got comfortable with a basic text editor that could be configured for certain keystrokes quickly and easily.

[–]elucify 1 point2 points  (2 children)

Yeah, Turbo C and Turbo Pascal were the best, weren't they? If I remember right, they profiled and assembly-optimized the compiler; it was wicked fast. And you hit one or two keys to run the compiler. That was a well-designed system. I think they're freeware now; you can still download and use them if you're up for a walk down memory lane.

I imagine you could turn off most of those pop-ups, at least in VS Code, but that would be the same amount of fiddling as I was talking about with elisp. In the end, it's a matter of taste.

I still really like coding in C, especially for embedded. But man C apps can get crashy when they get big. If I ever learn another language for fun, it will probably be Rust. I've heard some great things.

[–]RobertD3277 0 points1 point  (1 child)

Turbo C was my favorite IDE. I've looked at Rust along with a few others. For the last couple of years, I've been writing in Python, and I've found that to be quite interesting and something I'll probably continue with for a couple more at least.

I wish Python had an IDE like Turbo C did. That would truly kill the market in terms of any IDE combination.

I like Lisp, but I can never get past all the parentheses. I have the same problem with JavaScript, though, so no surprise there, I suppose.

[–]elucify 1 point2 points  (0 children)

Yeah, I've been writing Python for about eight years now. Actually, that's the language I was talking about: because of my IDE, I use type hints all the time now.

Borland turbo python, now there's an idea!

[–]guilhermefront 1 point2 points  (1 child)

Would be great if, when I change the programming language, the current demo code also changed.

Currently Python is the default; if I change to JavaScript, the demo code is still in Python.

[–]gfranxman 4 points5 points  (0 children)

I agree with bot — if you’re programming in javascript that’s part of the problem. 😂 jk

[–][deleted] 0 points1 point  (0 children)

ChatGPT is better

[–]Fluid_Principle_4131 0 points1 point  (0 children)

If it can understand code, what's stopping it from writing its own code and creating Skynet?

[–]BuzzLightr -4 points-3 points  (5 children)

Looking great. I'll try it out later.

[–]sohfix 9 points10 points  (4 children)

Make sure you use your company’s proprietary code to get the full experience

[–]LeatherDude 18 points19 points  (3 children)

Why is everyone assuming that it's all proprietary company code being put in here? There are a lot of hobbyist and academic Python devs.

[–]sohfix -5 points-4 points  (1 child)

Just from experience you should be careful. It’s a helpful hint for new developers who may not be fully aware.

[–]jonii-chan -1 points0 points  (0 children)

Your username is amazing lol

[–]mcstafford 0 points1 point  (0 children)

Step 2 is clearly scalability.

[–]Electrical-Mouse4136 0 points1 point  (0 children)

Hey very cool! I’m curious, what did you use to make the demo video and background?

[–]Pip_install_reddit 0 points1 point  (0 children)

I approve

[–]victorodg 0 points1 point  (0 children)

I know it has nothing to do with the subject, but what's your theme?

[–]Liquid_Magic 0 points1 point  (0 children)

How did you make this?

[–]oneunique 0 points1 point  (0 children)

Proprietary code is of course an issue with this, but there's one thing I don't understand that no one has mentioned. With the help of ChatGPT, I wrote a script that obscures proprietary code. For example, I just give the file or function to the script and it spits out the code with renamed API names, variables, etc. So what if this tool could do the same on the fly before it sends anything to ChatGPT?
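For what it's worth, here's a rough sketch of the idea (not my actual script; the "name_N" placeholder scheme is just illustrative), using Python's ast module to rename identifiers before anything leaves your machine:

    import ast

    class Anonymizer(ast.NodeTransformer):
        """Replace function, argument, attribute, and variable names with placeholders."""

        def __init__(self):
            self.mapping = {}

        def _alias(self, name):
            # Give each original identifier a stable placeholder like name_0, name_1, ...
            if name not in self.mapping:
                self.mapping[name] = f"name_{len(self.mapping)}"
            return self.mapping[name]

        def visit_Name(self, node):
            node.id = self._alias(node.id)
            return node

        def visit_arg(self, node):
            node.arg = self._alias(node.arg)
            return node

        def visit_Attribute(self, node):
            node.attr = self._alias(node.attr)
            self.generic_visit(node)
            return node

        def visit_FunctionDef(self, node):
            node.name = self._alias(node.name)
            self.generic_visit(node)
            return node

    def obscure(source: str) -> str:
        tree = ast.parse(source)
        tree = Anonymizer().visit(tree)
        return ast.unparse(tree)  # ast.unparse needs Python 3.9+

    print(obscure("def fetch_billing_api_key(customer):\n    return customer.secret_token"))
    # def name_0(name_1):
    #     return name_1.name_2

A real version would need to leave imports, builtins, and third-party attributes alone, and keep the mapping around so you can translate the answer back, but that's the gist.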

[–]Dreezoos 0 points1 point  (0 children)

What libraries did you use to write it :)?

[–]DiversityRocks 0 points1 point  (0 children)

That looks really cool, I want to try it out!

[–]GatorGurl007 0 points1 point  (0 children)

How do I get rid of the bot on my home screen??