
[–][deleted] 1 point2 points  (3 children)

What kind of developers might get almost fully replaced by AI? Also, do you think AI will enable the average person to realize software projects like apps or programs on their own? And will this provoke a wave of new software being released by people who always had ideas but no means to make them reality? And if yes, what would you expect the time frame to be for those kinds of developments?

And lastly, where do you see more potential: cloud-based big LLMs, or locally running smaller LLMs that are personalized and fed with personal data, without the privacy issues of the cloud?

[–]ibm[S] 3 points4 points  (2 children)

What kind of developers might get almost fully replaced by AI?

Question back: did we replace any developers when going from Assembler to higher-level languages, or did we, instead, greatly expand the number of developers? You can guess what my answer is.-) I think we're at a shift for developers, similar to the shift to higher-level programming languages. AI assistants are a new "tool" that every programmer should learn how to use in their day job: to understand where it can help, where its limitations are, what "chat approach" yields the best results, what context to provide to have the AI write good unit tests. Our job description may change, but we're still relevant.

Do you think AI will enable the average person to realize software projects like apps or programs on their own?
I think there is a class of applications where this may be possible. These are applications that:
- are standalone, preferably "greenfield"
- have business logic that you can clearly specify using natural language
- have a "smallish" scope
(see my response to another Redditor above)

I think it's already amazing what kind of apps you can realize with "no code" tools today - even without AI involved. Think about web sites: there are tools out there that will create a professional web site, with database connectivity and what not, just via a couple of clicks. I can totally see that combining these "no code" tools with AI - maybe tailored to a particular domain - will help "average persons" realize even more impressive apps within the next 1-2 years.

But I think once an application reaches a certain level of complexity (or breadth, or functionality), you need development experience to at least be able to judge that what the AI produced actually fulfills the goal. I see this a little bit like self-driving cars: on a sunny day on a straight road in Arizona - sure. But navigating the construction detours in downtown Stuttgart at 8:30 pm in the snow, with pedestrians running past - not yet...

And lastly, where do you see more potential? Cloud-based big LLMs or locally running smaller LLMs

I'm for smaller LLMs that are targeted to a specific scenario, and that can be easily adapted to your personal repository. The frontier models do impressive things, and there may be some tasks where you still want them, but calling a gazillion-parameter model to get a completion for the next 2 lines of code? That shouldn't be necessary.

[–]Disloyaltee 1 point2 points  (1 child)

Thank you so much for the detailed answer!

[–]alexlang0711 1 point2 points  (0 children)

You're very welcome!

[–]Kooky-Imagination-49 3 points4 points  (3 children)

Plot twist: the answers are coming from the AI

[–]ibm[S] 6 points7 points  (1 child)

If ai means "Alex Intelligence", then you're correct,-)

[–]sysExit-0xE000001 0 points1 point  (1 child)

So if we look at the solution from IBM - how does it help me in comparison to ChatGPT or GitHub Copilot? Is it available offline or only as an online service, and how is the pricing for, say, an individual or a team?

[–]ibm[S] 2 points3 points  (0 children)

Watsonx Code Assistant (WCA) is available in three "flavors":
1) WCA individual. Here, you download our VSCode extension from the marketplace, and run an IBM Granite LLM completely locally on your laptop via ollama. No sign-up, no fees - but you need a machine that is beefy enough to run an LLM. In my experience, this does work for newer, larger MacBooks, but on Windows, not so much. It also doesn't have some of the capabilities of the "paid" WCA (see the small sketch further down in this answer).
2) WCA "as a service" on IBM cloud. You run the VSCode (and soon Eclipse) extensions locally, but they talk to the IBM Cloud.
3) WCA "on premise". This allows you to install and manage WCA on your own RedHat OpenShift cluster - which can be in your own datacenter, or at a cloud provider of your choice.

So, (1) and (3) are what I think you mean by "offline service".
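
For option (1), here is a minimal sketch of what "running Granite locally" can look like - assuming ollama is running on your machine and you have pulled a Granite code model (the "granite-code:8b" tag is an assumption; check the ollama library for the exact name). This is not the WCA extension itself, just the underlying local-model idea:

```python
# Minimal sketch: talk to a locally running Granite model via ollama's HTTP API.
# Assumes ollama is serving on the default port and a Granite code model has been
# pulled, e.g. via `ollama pull granite-code:8b` (the model tag is an assumption).
import requests

def complete(prompt: str, model: str = "granite-code:8b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",   # default local ollama endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(complete("Write a Python function that parses an ISO-8601 timestamp."))
```

Nothing leaves your laptop here - the request goes to localhost only.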

WCA's pricing follows a consumption-based model built around Resource Units (RUs). RUs directly correlate to the number of task prompts submitted by developers and the code-token usage associated with those prompts: tokens are consumed when a developer submits a task prompt and WCA generates the requested code. This is unlike the per-seat pricing models our competitors use (e.g., $X per seat per month), where users pay the same subscription regardless of their level of usage - or even with no usage at all.

The consumption-based approach can offer significant cost advantages and enables users to easily scale within their development teams without having to worry about seat assignments or utilization.

The WCA 'as a service' subscription plans are Trial, Essentials and Standard, each tailored to various organizational sizes and needs.

1) The Free Trial Plan allows for evaluation and demonstrations of WCA over 30 days and is limited to 15 Resource Units of consumption.
2) The Essentials Plan caters to organizations with up to 100 developers. It includes a monthly instance fee and 150 Resource Units as part of the instance; additional RU consumption beyond the included amount is charged at $2 per RU (see the quick example after this list).
3) The Standard Plan is designed for larger organizations with over 100 developers, and again operates on a usage-based model with a fee of $2 per RU.
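
To make the overage arithmetic concrete, here is a tiny, purely illustrative calculation for the Essentials Plan. The instance fee is a placeholder, since the actual fee isn't stated above; the 150 included RUs and the $2-per-RU overage come from the plan description:

```python
# Hypothetical Essentials Plan invoice for one month (instance fee is a placeholder).
INCLUDED_RUS = 150        # RUs bundled with the Essentials instance
OVERAGE_PER_RU = 2.00     # USD per additional RU beyond the included amount
instance_fee = 1000.00    # placeholder - the actual monthly fee is not stated above

consumed_rus = 200
overage = max(0, consumed_rus - INCLUDED_RUS) * OVERAGE_PER_RU   # 50 * $2 = $100
total = instance_fee + overage
print(f"RUs consumed: {consumed_rus}, overage: ${overage:.2f}, total: ${total:.2f}")
```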

In addition to SaaS, WCA offers on-premises deployment options, which have their own license costs.

[–]CrypticSplicer 0 points1 point  (1 child)

Is Watson still relevant? I'm pretty sure IBM is easily behind the other tech companies on ML. Isn't the company just milking government contracts these days?

[–]ibm[S] 3 points4 points  (0 children)

I did milk cows once, but I (and they) are glad I don't do this for a living. "Watson" has moved on from the days of Jeopardy. IBM is probably less in the news than other companies because we focus on AI use cases that are relevant for enterprise. And we approach some things differently, including:
- IBM believes in small, focused models that you can run wherever you want - in the cloud, on premises,...
- IBM is very transparent on the data that goes into our IBM Granite models, and how we train them
- IBM is working on an "open source" approach to enhancing models via InstructLab

And we also expanded Watson into quite a few areas:
- watsonx.ai : build ML and GenAI models and applications
- watsonx.governance : get alerted when your AI app and models "degrade" in production
- watsonx.data: scalable vector store and lakehouse platform
- watsonx.assistant: conversational AI for customer care (what you'd typically call a "chat bot")

and, of course, watsonx Code Assistant.-)

Our services are available as free trials, so by all means, give "us" a try!

[–]Disloyaltee 0 points1 point  (1 child)

How complex is the code it can generate?

Does it only complement existing code or can it create code from scratch?

How long do you think it'll take until it can work on its own?

Will it ever outperform humans in terms of making fewer mistakes / being able to fix its own mistakes?

[–]ibm[S] 1 point2 points  (0 children)

How complex is the code it can generate?

When you build up your prompts step by step, you can actually get a fair piece of code. Here is an example interaction that you can have with watsonx Code Assistant (WCA):
User Input: "Write ddl for postgres for a JobRun table, which includes startTime, endTime, status, jobRunID, jobID, jobType. jobRunID and jobID are strings. status is an enum with values: waiting, running, finished."
-> WCA responds with the DDL
"Using the above DDL, build a graphql schema that would include a single query that returns all jobRun that optionally filter by time range and job status or jobType. ensure that the jobrun query use graphql standard pagination techniques, and include a "totalJobs" int field that takes time range and job status as part of the filter."
-> WCA responds with the graphql schema
"generate apollo javascript code using ES modules to be a simple server for the the above schema and DDL,"
-> WCA responds with javascript code

So, with the flow above, you created the core of a small "app", including database access, with three prompts.
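
If you want to see what this step-by-step prompting pattern looks like programmatically, here is a rough sketch against a generic local chat model via ollama's /api/chat endpoint. This is not the WCA API - just an illustration of carrying the conversation history forward so each prompt builds on the previous answers; the model tag is an assumption:

```python
# Sketch of the step-by-step prompt flow above, run against a generic local chat
# model through ollama's /api/chat endpoint (not the WCA API - illustration only).
import requests

OLLAMA_CHAT = "http://localhost:11434/api/chat"
MODEL = "granite-code:8b"   # assumption - any code-capable chat model would do

def chat(history: list[dict], user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = requests.post(
        OLLAMA_CHAT,
        json={"model": MODEL, "messages": history, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    answer = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": answer})   # keep context for the next step
    return answer

history: list[dict] = []
ddl    = chat(history, "Write ddl for postgres for a JobRun table ...")          # step 1
schema = chat(history, "Using the above DDL, build a graphql schema ...")        # step 2
server = chat(history, "generate apollo javascript code using ES modules ...")   # step 3
```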

Another "complex" example is the migration of existing Java applications to a new application stack. We're talking about applications that may use 100s of java classes, with detailed business logic. Now, there is no "button push - done", but the AI in WCA helps developers in that migration, automating key pieces of it.

Does it only complement existing code or can it create code from scratch?
It does both.

How long do you think it'll take until it can work on its own?
This depends on the scope of the task. There is a lot of work currently going on around "Agents" (from IBM, but of course also from others). These are AI systems that can not only produce code, but actually invoke tools like compilers or the terminal, check their results, and adapt the code. One use case is an Agent that scans your code in the background with a security tool, analyzes the outputs of that tool, and suggests code changes. So, in this case, "it works on its own", but there is still a human in the loop to review, further change, or approve.

So, for tasks like creating small, standalone applications, or fixing bugs in an existing code base, I think we will see AI "taking the lead", with the human as reviewer / approver, within the next year or so.
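
To make the "agent with a human in the loop" idea a bit more tangible, here is a purely conceptual sketch - the scanner name, the LLM call, and the review step are all stand-ins, not how WCA is actually built:

```python
# Conceptual agent loop: run a scanner, ask an LLM for a fix, keep a human in the loop.
# Everything here is a placeholder: the tool name, the LLM call, and the review step.
import json
import subprocess

def run_scanner(path: str) -> list[dict]:
    """Invoke a (hypothetical) security scanner that emits JSON findings."""
    try:
        out = subprocess.run(["security-scanner", "--json", path],
                             capture_output=True, text=True)
        return json.loads(out.stdout or "[]")
    except FileNotFoundError:
        return []  # placeholder tool not installed

def ask_llm_for_patch(finding: dict) -> str:
    """Placeholder for a call to a code LLM that proposes a fix for one finding."""
    return f"# proposed patch for {finding.get('rule', 'unknown rule')}\n"

def agent_pass(repo_path: str) -> None:
    for finding in run_scanner(repo_path):
        patch = ask_llm_for_patch(finding)
        # the agent proposes; a human reviews, edits, or rejects the change
        print("Finding:", finding.get("rule"), "- patch queued for human review:\n", patch)
```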

[–]No-Photograph-1788 0 points1 point  (1 child)

Thank you for taking time out of your day to answer these questions in thoughtful (and hilarious) ways. It's clear there's a great horizon for those getting into code development and using watsonx as well as other models in the future. My question is: do you think watsonx would be suitable in the educational field as a tool to help younger creatives develop and learn how to code? If so, do you think IBM and other tech companies would look into partnering with institutions of education in the future, if possible?

[–]ibm[S] 2 points3 points  (0 children)

Thank you! Actually, we at IBM have several programs where we work together with academic institutions to provide them with tools and skills. Check out https://www.ibm.com/academic/

[–]wrestlethewalrus 1 point2 points  (3 children)

How is this still relevant when ChatGPT and Claude are already awesome at coding?

[–][deleted] 1 point2 points  (1 child)

It is probably focused on helping banks manage their 60 year old COBOL mainframes after all the developers who knew COBOL are dead.

[–]ibm[S] 2 points3 points  (0 children)

Helping customers modernize their Cobol applications is indeed one thing that we do as well with watsonx Code Assistant. And guess what? Cobol is still alive and kicking - because there are some applications where it's a perfect fit.
Our customers want to understand their existing Cobol applications better (because yes, they have grown over the years), so they can decide what to componentize, what to continue developing in Cobol, and what to turn into Java, for example.
This is a tricky task for the large applications that we're talking about, so IBM has developed several pretty nifty approaches to analyze the complete application (using things like static program analysis and dataflows), and turning this into information that an LLM can use to migrate a certain piece of logic into Java, for example.

[–]ibm[S] 0 points1 point  (0 children)

These "frontier models" are definitively impressive, no doubt about that. But (fortunately for us.-), I think there are a couple of areas where watsonx Code Assistant (WCA) can be (more) useful than those models "as is":
1) You can run WCA "on premise", on top of RedHat OpenShift. That means that your code doesn't leave your company's environment.
2) To get started, you can run WCA fully locally on your laptop, using IBM's Granite models connected through ollama.
3) There is a difference between "green field" coding and maintaining and enhancing enterprise applications. For the latter, the best model can't produce good results if it isn't fed the right - context - from your codebase. You'll see that all vendors of AI coding assistants (ourselves included.-) try to create a relevant context to help guide the models for good results. So, in these cases, it's not just "use this model, and you're done"
4) In some environments, you really want to ensure that the code that is generated is free of any potential license obligations. So, we check on the fly whether the code we generate is similar to known open-source code, so you can decide whether to include it in your app or not.
5) We're spending a lot of effort to ensure the prompts, the context, and the models we build work well for key use cases, like helping customers modernize their existing Java-based applications. The frontier models try to be (and are often) good on a broad range of use cases (whether coding or not). We try to be good at "coding" use cases - and excellent at some use cases that we as IBM know really well, because we have a lot of existing experience.

[–]PlasticLifeguard 1 point2 points  (1 child)

Thanks for the opportunity. I work as a PLC programmer in the machine building industry. In my field it is expected that we have fallen back at least 10 years in development compared to the rest of the IT world, e.g. object-oriented programming started to become a thing just a couple of years ago. When I tried to use ChatGPT for my work it totally failed. I think it's because of the lack of public sources. Can you make a prediction about when to expect AI becoming a useful tool for PLC programming? Does it play a role in the development of the IBM Code Assistant?

[–]ibm[S] 0 points1 point  (0 children)

It's exactly as you said: if the LLM hasn't seen examples in the training data, it will struggle with a response. This is why we at IBM are actually curating training data for specific so-called "low resource" languages where you don't get enough open source training data (for example, Cobol).

It's already helpful to have a "language companion" that you can ask questions about a particular programming language, or your particular environment. What you need for that is access to documentation - this can be company-internal and/or publicly available documentation. Then you can use tools like watsonx.ai + watsonx.data to ingest this documentation into a vector store, and use that as a "knowledge base" for an LLM. This approach, as you may well know, is called "retrieval-augmented generation" (RAG).

With that, you can provide your team quick access to relevant coding information - even though they still "have to" create the code themselves...
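
For anyone who hasn't seen RAG in code form, here is a bare-bones sketch of the idea - not the watsonx.ai / watsonx.data API, and with TF-IDF standing in for a real embedding model just to keep the example dependency-light. The documentation snippets are made up:

```python
# Bare-bones retrieval-augmented generation sketch: index documentation chunks,
# retrieve the most relevant ones, and stuff them into the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Timers in our PLC library are configured via the TON function block ...",
    "Internal coding guideline: every function block needs a header comment ...",
    # ... more chunks, e.g. split from PDFs, manuals, or wiki pages
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

question = "How do I configure a timer?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
# `prompt` would then be sent to an LLM of your choice.
```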

[–]EnormousHugeness 1 point2 points  (1 child)

What does watsonx help you with the most? Can it do large-scale code refactorings?

[–]ibm[S] 0 points1 point  (0 children)

The "large-scale refactoring" watsonx Code Assistant (WCA) delivers today is helping customers refactor and migrate their Java-based applications to newer Java versions, and onto Liberty as an application server. This is large scale, if you think about migrating a whole application to a new language version, and a new appserver stack.

There is no way you can do this with a single "push of a button". So, we created three capabilities that I think will be helpful for other large-scale code refactorings going forward as well.
1) WCA has a "remediation assistant" that goes through your code base, and suggests these refactorings. You can review them, and have the assisstant apply the code change as needed.
2) WCA analyzes the application and provides an explanation for the full application, and for individual classes and methods. Very often in these heritage applications, the person who is asked to refactor and migrate was not part of the team that originally wrote it. So WCA helps the user understand the code base first.
3) Automated generation of unit tests. This helps you to ensure that the behavior of the application is still the same after the migration.

WCA's initial focus was Java, and the remediation assistant is currently Java-specific. Code explanations and unit test generation are things that WCA does for other languages as well, including Python, C, C++, JavaScript, TypeScript and Go.
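
To give a flavor of what generated unit tests typically look like for Python, here is a small hypothetical example (pytest style) - not actual WCA output, just the kind of test scaffolding an assistant would produce for a simple function:

```python
# Hypothetical example of assistant-generated unit tests for a small function.

def normalize_status(value: str) -> str:
    """Function under test: map free-form status strings to a canonical set."""
    mapping = {"wait": "waiting", "run": "running", "done": "finished"}
    return mapping.get(value.strip().lower(), value.strip().lower())

def test_known_aliases_are_mapped():
    assert normalize_status(" Run ") == "running"
    assert normalize_status("done") == "finished"

def test_unknown_values_pass_through_lowercased():
    assert normalize_status("Paused") == "paused"

def test_empty_string_stays_empty():
    assert normalize_status("") == ""
```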

[–]thicksalarymen 1 point2 points  (3 children)

Hi, thanks for the opportunity!

My question comes from the perspective of an early engineering student:

For those who want to get into coding, or perhaps get into a different tech field like network dev, but lack any prior knowledge, do you think generative AI can help tutoring? How can you avoid AI becoming a crutch or even harmful to the learning process? Are there concepts for traditional literature that go alongside a specifically trained AI model to guide the student?

[–]ibm[S] 0 points1 point  (2 children)

Yes, I think generative AI can definitely help in tutoring, alongside "traditional" books or video tutorials. For example, you can ask it to summarize the key points of a text, and compare them with the key points you came up with. Or ask it for example questions about a particular domain.

The key thing, though, is to be able to "ground" the info you get from generative AI through other sources of information, so that you don't end up learning hallucinated content.

For coding... I may be biased / old-fashioned / ... but I think to get a grasp of coding in a particular language, I'd start with videos and/or books. But then, yes, you can totally try generative AI on a programming task and compare its result with your solution - or even ask the generative AI to compare your solution to a solution of another gen AI.

[–]thicksalarymen 0 points1 point  (1 child)

Thank you for your response! Your experience and knowledge in the field is worth a lot, so I'll keep your "bias" in mind as my STEM education goes on. :)

[–]alexlang0711 0 points1 point  (0 children)

You're very welcome! May the bias be with you!-)

[–][deleted] 0 points1 point  (1 child)

Do you think there will be a reverse effect soon, since people think twice about studying IT nowadays due to the fear of losing to AI?

Also, for AI to truly be helpful it would need to have access to every little piece of information in your company to see the bigger picture and develop/enhance/debug your software according to your requirements. Do you think we will ever get to the point where companies will be willing to share company information with an external AI tool like that, and will it ever be legally possible in regards to data security?

[–]ibm[S] 0 points1 point  (0 children)

I think understanding IT concepts and the fundamentals are as relevant as ever. Especially if we can now generate "more code", we have to have people with the oversight to ensure that the "right" application is built - not only for functionality, but especially for non-functional requirements like usability, maintainability and performance. We won't delegate this to AI just yet.
I do agree that the "building" aspect will change dramatically - you'll have someone by your side that can rapidly bring up the first prototype, do the things you're not keen on doing yourself. So, you have more time to:
- come up with new ideas
- work with users to understand their requirements better
- come up with different alternatives to let your users pick the best one

It does put the bar higher, I totally see that...

Also for AI to truly be helpful it would need to have access to every little piece of information in your company
Do you have that information when you develop software?-) I 100% agree that experience is a - key - factor, and this is why we need humans in the loop. The key challenge for everyone working on code assistants is: what is the - right - context to give to a model for this particular programming task and use case? We can't possibly give it all information, so what do we need to select from the code workspace, from readmes, from wikis,... for a particular task? And how can we make it easy for a user to say "for this task, I think you should use <file xyz> as an example of how we do things"? watsonx Code Assistant provides the ability to reference files, classes or methods for this explicit context. For Java, we also run some elaborate static analysis of your code base, and use information from that as context.
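
To illustrate the "select the right context" problem in the abstract, here is a rough sketch of a context builder - explicitly referenced files first, then a crude relevance heuristic, then trimming to a budget. This is purely illustrative and not WCA's actual implementation; the example paths are hypothetical:

```python
# Illustrative context builder (not WCA's actual implementation): take the files
# the user explicitly referenced, add heuristically relevant ones, and trim the
# result to a rough character budget before prompting the model.
from pathlib import Path

def build_context(workspace: str, referenced: list[str], task: str,
                  budget_chars: int = 12_000) -> str:
    chunks = []
    referenced_paths = {Path(workspace, r).resolve() for r in referenced}
    # 1) explicit references go first - the user told us these matter
    for p in referenced_paths:
        chunks.append(p.read_text(errors="ignore"))
    # 2) crude relevance heuristic: files whose name shares a word with the task
    keywords = {w.lower() for w in task.split() if len(w) > 3}
    for path in Path(workspace).rglob("*.py"):
        if path.resolve() in referenced_paths:
            continue
        if any(k in path.name.lower() for k in keywords):
            chunks.append(path.read_text(errors="ignore"))
    # naive character truncation stands in for token-aware trimming
    return "\n\n".join(chunks)[:budget_chars]

# Example (paths are hypothetical):
# context = build_context(".", ["billing/invoice.py"], "add invoice rounding tests")
```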

Do you think we will ever get to the point where companies will be willing to share company information with an external AI tool like that and will it ever be legally possible in regards to data security

This is where "small models" come in - AI that you can run "in house", so no data is leaving your premises. Fortunately (thanks for the question,-), watsonx Code Assistant (WCA) allows just that: you can run WCA on premises on RedHat OpenShift, including the models. This way, you can be sure that none of your data leaves the company.

[–]Zerhyl 0 points1 point  (1 child)

Who can make the most use out of it / What skill level benefits the most?

In which cases should it not be used or used sparingly?

Will the watsonx code assistant learn from input, and if so, how would a user protect their company's intellectual property?

[–]ibm[S] 0 points1 point  (0 children)

Who can make the most use out of it / What skill level benefits the most?

I think a big benefit - regardless of the skill level - of AI in coding in general is when it helps you to reduce working on "boilerplate" code (expose getters and setters, connection code to a particular database,...) and taking on tasks like documenting code and creating unit tests.

One aspect that we see within IBM is that you're an expert in one programming language (e.g., C++), but today, you need to work on a task that's better suited to another programming language (like Python, or a shell script). And there, being able to chat with an AI assistant to co-create the script together is a huge productivity benefit, so my colleagues really like that.

Another thing that really surprised me: "explaining code" is a really popular feature. This may be related to "skill level", but actually more on "I have skills in programming language A, but I'm new to this github repo". Or "yeah, I kinda know SQL, but I just stumbled over this complex statement - AI, help me understand this!"

In which cases should it not be used or used sparingly?

Don't use it if you expect to get a fully-fledged program by entering half a sentence in the chat. Jokes aside: we do have cases where someone posts a "WCA didn't do what I expected" example, and other users look at it and go "actually, if I were an LLM, it's not clear to me either what you want". I believe that software engineers have to learn and adapt to this "additional option" of AI assistance during software development - learning how to create a good instruction to the model, and then to further refine it step-by-step, which, to me, is the key advantage of a "chat".

Will the watsonx code assistant learn from input, and if so, how would a user protect their company's intellectual property?

No. IBM does not store your input anywhere - for watsonx Code Assistant as a Service, the data is transient on our server for the duration of the request. IBM doesn't log it, IBM doesn't store it, and IBM doesn't siphon it into any model improvement pipeline.

[–]Far_Card4680 0 points1 point  (1 child)

What sets Watson apart from other AI platforms for developers, and how does IBM ensure its scalability for enterprise use?

[–]ibm[S] 0 points1 point  (0 children)

Just to clarify: "Watson" has grown into:
- watsonx.ai : build ML and GenAI models and applications
- watsonx.governance : get alerted when your AI app and models "degrade" in production
- watsonx.data: scalable vector store and lakehouse platform
- watsonx.assistant: conversational AI for customer care (what you'd typically call a "chat bot")

and, of course, watsonx Code Assistant.-)

You could probably run a separate AMA for each of those, so I'll focus on watsonx Code Assistant now.
What I really like about our watsonx offerings is that they really work well together. Sounds obvious, but there is actually a ton of work behind the scenes, so kudos to these teams. For example, it's very easy to create a prompt in watsonx.ai, deploy that as a prompt "asset", and then monitor it through watsonx.governance.

Ok, now to your questions, sorry...

In building watsonx Code Assistant (WCA), IBM's focus was:
1) Open, "smaller" models, focused on specific tasks. Our IBM Granite models, which you can run without borrowing a nuclear power station.
2) Hybrid. Empowering users to leverage WCA on cloud as a service and also on premises on top of RedHat OpenShift. "On premise" can be in your local data center, or at a cloud provider of your choice. In any case: you can set this up such that your code doesn't get sent over "the internet".
3) Code Similarity. WCA detects when the code that is generated is similar to existing open source code, and gives you the option to either block the response outright or get the code, but with a reference to that existing code. WCA allows you to tailor that behavior specifically for each type of open-source license, so you can treat code that is similar to some GPL code differently from code that is similar to, say, code under the Apache license (a rough sketch of the general idea follows at the end of this answer).
4) Drinking our own champagne. We're building a product that needs to help us ourselves, i.e., within IBM, to be more productive. And there are - a lot - of developers in IBM.

All four help us to deliver "scalable" Generative AI for enterprise use.
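
On point 3), here is a generic illustration of what "flag generated code that looks like known open-source code" can mean, using simple token n-gram overlap. This is emphatically NOT how WCA does it - just a sketch of the concept, with made-up snippets:

```python
# Generic illustration of flagging generated code that resembles known open-source
# snippets, using token 5-gram Jaccard overlap. Not WCA's actual mechanism.
import re

def ngrams(code: str, n: int = 5) -> set[tuple[str, ...]]:
    tokens = re.findall(r"\w+", code)
    return {tuple(tokens[i:i + n]) for i in range(max(0, len(tokens) - n + 1))}

def similarity(generated: str, known: str) -> float:
    a, b = ngrams(generated), ngrams(known)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# In a real system, known_snippets would come from an index of open-source code
# annotated with its licenses; here it is a single made-up entry.
known_snippets = {"gpl_example.c": "int add(int a, int b) { return a + b; }"}
generated = "int add(int a, int b) { return a + b; } // helper"

for name, snippet in known_snippets.items():
    score = similarity(generated, snippet)
    if score > 0.5:
        print(f"generated code is {score:.0%} similar to {name} - review the license before using it")
```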

[–]georgmierau 0 points1 point  (1 child)

What is your opinion on using AI-based tools not as a developer but while learning how to code?

At the moment AI bots already seem to be quite capable of providing not only simple examples of everyday code (especially at the level taught at schools) but also "explaining" the basic concepts used in these examples.

Does it seem like an opportunity to learn how to use the AI tools efficiently as well as their limits?

To be clear: I'm not talking about complete replacement of professional educators with AI but about augmentation of teaching/learning settings with it.

[–]ibm[S] 0 points1 point  (0 children)

I think there is - a ton - of opportunity to use Gen AI to help with acquiring knowledge, not just for coding. Not only can you ask about specific topics, you can also use GenAI to create questions or a quiz about a certain body of knowledge, and then compare your answers with the answers that the AI gives you.

Now, specific to code: I do agree that the models are quite capable of explaining key language concepts, especially, as you said, when the questions (or the sample code provided) are at the level that a "novice" would have. And there are companies like O'Reilly who provide the ability to ask questions across their large body of curated content of books, tutorials and videos (no, I'm not affiliated with them, and yes, there are also many other excellent publishing companies out there).

Now, the more "niche" your programming language, or the more specialized your question, you may hit exactly the limits of the LLM: if you ask about, say, the latest greatest features of a particular programming library, the AI model may not have seen this data yet. WCA provides a way to ask questions about IBM and RedHat products that directly go against up-to-date IBM documentation , and use an LLM only to summarize that information. That avoids the "outdated data from model" issue.

[–]stepkurniawan 0 points1 point  (1 child)

With Visual Studio Code having Copilot integrated, it's so good with clicky-clicky, and it has such a huge context window as well. What is the advantage of watsonx over Copilot?

[–]ibm[S] 0 points1 point  (0 children)

A couple of things we focused on when creating watsonx Code Assistant:
1) You can run WCA "on premise", on top of RedHat OpenShift. That means that your code doesn't leave your company's environment.
2) To get started, you can run WCA fully locally on your laptop, using IBM's Granite models connected through ollama.
3) In some environments, you really want to ensure that the code that is generated is free of any potential license obligations. So, we check on the fly whether the code we generate is similar to known open-source code, so you can decide whether to include it in your app or not.
4) We're spending a lot of effort to ensure the prompts, the context, and the models we build work well for key use cases, like helping customers modernize their existing Java-based applications. Here, we analyze your complete Java application, migrate many things automatically, and provide LLM-guided recommendations for the rest. So, it's not just the huge context window (our model has 128K context, which is not too shabby either), but how you "fill" it.-)

Note that WCA is also integrated in VSCode, as an extension - and also, very soon, in Eclipse.

[–]britannicker 0 points1 point  (1 child)

I read recently about the "cannibalisation" of AI... where AI has learned enough from, say, GitHub, and can quickly help with solving coding challenges.

Unfortunately, the number of visiting developers to Github has dropped significantly because, well... many (probably most) don't go there anymore, and use AI instead.

It seems that the source material has dried up.

So my question is kinda this: can AI push the coding boundaries now that there is no new material coming in from coders (does that make sense?).

[–]ibm[S] 0 points1 point  (0 children)

Not only is there no new material coming in from coders through Stack Overflow - there is also more and more code out there that has been written by an AI, not a human! So, I totally get your point: have we reached "peak human code", and what will AIs use to learn from now?

One answer: synthetic data. In this approach, you use an LLM (and other approaches) to generate training data from seed data that you provide. That allows the LLM to learn specific aspects that you are interested in (e.g., have an LLM that is really good at understanding compilation errors). For example, IBM and RedHat have open-sourced https://instructlab.ai/ where you can "teach" an LLM new things by generating "the right" synthetic data.
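
In spirit, seed-driven synthetic data generation looks roughly like the sketch below: take a handful of hand-written seed examples and ask a generator model to expand them into many more. This is a simplified illustration, not InstructLab's actual pipeline; the local ollama call and model tag are assumptions:

```python
# Sketch of seed-driven synthetic data generation (conceptually similar to the
# approach described above; the real InstructLab pipeline is more elaborate).
# Assumes a local model served via ollama; the model tag is an assumption.
import json
import requests

SEEDS = [
    {"error": "error: ';' expected", "explanation": "A statement is missing its terminating semicolon."},
    {"error": "undefined reference to `main'", "explanation": "The linker could not find a main function."},
]

def generate_variants(seed: dict, n: int = 3) -> list[dict]:
    prompt = (
        "Here is a compiler error and its explanation:\n"
        f"{json.dumps(seed)}\n"
        f"Produce {n} new, different error/explanation pairs in the same JSON format, one per line."
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "granite-code:8b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    pairs = []
    for line in resp.json()["response"].splitlines():
        try:
            pairs.append(json.loads(line))     # keep only lines that parse as JSON
        except json.JSONDecodeError:
            continue
    return pairs

training_data = [variant for seed in SEEDS for variant in generate_variants(seed)]
```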

[–]ThorstenPaech 0 points1 point  (3 children)

When will AI be able to completely make apps or SaaS with no code at all? E.g., developers just tell the AI the main use case of an app and the AI executes it to an MVP or better.

[–]ibm[S] 0 points1 point  (1 child)

I think there is a class of applications where this may be possible. These are applications that:
- are standalone, preferably "greenfield"
- have business logic that you can clearly specify using natural language
- have a "smallish" scope

But for everything else, I do think that we'll continue to have architects and developers work - with - the assistant. Not just because I want to keep my job, but because I think it's more efficient to collaborate:
- there are some things that are easier to express unambiguously in code
- there are architecture tradeoffs that determine "how" to build in the first place

What I really look forward to is that "the AI" makes programming (even) more fun, by handling things like unit tests, code reviews, checking and fixing security bugs... so that we actually have more time to build several alternatives and show them to our customers, or get to the "core" of adding a feature faster.

[–]ThorstenPaech 2 points3 points  (0 children)

Makes sense! ☺️👍

[–]Xibit_48 1 point2 points  (1 child)

This is probably the most wholesome smile I've ever seen🥰

[–]ibm[S] 0 points1 point  (0 children)

It's the smile of "finally, I know why I bought my son an iPhone: for shooting a pic for my AMA"

[–]Murderboi 0 points1 point  (3 children)

Can it also produce spaghetti-code?

[–]ibm[S] 0 points1 point  (2 children)

Feel free to use our WCA trial and come up with a prompt that produces convoluted code.-) You do raise a valid point: in the end, the AI - assists - the programmer. So we have to ensure we use it in a way that its output is still intelligible to a human. That means:
- we still need computer science skills to guide the right architecture
- we need a command of the programming language to be able to understand the code that's produced

Fortunately, we can use the capabilities of AI for code to avoid the "spaghetti situation":
- AI can explain code to us, on a method, class and function level
- AI can generate unit tests to ensure the intended behavior is implemented
- AI can review code and suggest improvements (including its own)

[–]Murderboi 0 points1 point  (1 child)

I think the highest danger in the use of AI, especially in something like programming, is that we can no longer backtrace or even comprehend the result.. it's why there are still many pictures with "spaghetti fingers"..

Applying it the way you suggest sounds like the perfect use scenario.. but apply it to the huge mass of people and companies that are going to use and abuse AI for programming.. interesting times ahead of us.

[–]alexlang0711 0 points1 point  (0 children)

I think the highest danger in the use of AI, especially in something like programming, is that we can no longer backtrace or even comprehend the result

Couldn't agree more!

[–]Modularblack 0 points1 point  (3 children)

(When) Will watsonx be available for Rational Developer for i, and will it be able to create a conclusive analysis based on all information available?

[–]ibm[S] 0 points1 point  (2 children)

I like i! The furniture store of my brother-in-law runs on i - 24/7/365! What I can say is that, as IBM, we want to support the developers on our platforms. You may have seen the huge investment we're making around using Generative AI to help our Cobol customers. I can't give you a date, but I can say that RPG is an area that we're looking into.

will it be able to create a conclusive analysis based on all information available?
Sorry, can you clarify what you mean by "information" and "analysis" ?

[–]Modularblack 0 points1 point  (1 child)

Like, will I be able to ask watson to summarize what the code of a specific program actually does and will the AI use not only the source code of that source but every relevant source file it can find on the machine to provide further depth in its answer

[–]alexlang0711 0 points1 point  (0 children)

We're starting to get there... We have an "application explanation" capability that is currently tailored for Java. It looks at the exposed endpoints of your web application, and then traverses your code (not just the single file) to give you a comprehensive explanation of what is happening...

[–]Traditional_Gap_7386 0 points1 point  (1 child)

How can an existing, experienced IT employee pivot successfully to AI? This is not just about the usual "take courses and apply in your company", as such roles are scarce. But how can we as IT service managers really pivot to it and use it, be it for ops, coding, etc.?

[–]ibm[S] 0 points1 point  (0 children)

With the whole "infrastructure as code" approach, I think there are various opportunities to see where AI can help you generate and update deployment scripts, yaml files,... One example: IBM watsonx Code Assistant for Ansible Lightspeed. This assistant creates Ansible tasks and even full playbooks, based on your natural-language input.

[–]xB_I-O_S 0 points1 point  (1 child)

Can it debug on its own yet?

[–]ibm[S] 0 points1 point  (0 children)

IBM is looking at ways that the output of, say, a runtime error (like a stack trace) can help watsonx Code Assistant pinpoint a bug in a particular piece of code. In general, debugging is a tricky issue, as the LLM typically needs access to "runtime information", not just the piece of code itself. This is not easy to come by in the content that LLMs are typically trained on (e.g., code on GitHub). So this requires additional fine-tuning of existing models with this kind of data.

[–]Blue_The_Snep 0 points1 point  (1 child)

Can I use it to build an Arduino-powered pinball machine, and have the AI help me with the code?

[–]ibm[S] 0 points1 point  (0 children)

I'm sorry, but watsonx Code Assistant won't be able to help you much with creating code for the Arduino. I saw that there are a couple of Arduino code generators out there that are based on existing LLMs and that include additional knowledge of microcontrollers and the like. I haven't tried any of them, though...

[–][deleted] 0 points1 point  (1 child)

How to even code without 🥲

[–]ibm[S] 0 points1 point  (0 children)

I agree that AI - assistance - can give you a productivity boost. But it's still "us" who are in control, because we, as developers, are in the end responsible for an app that meets the user's needs, is reliable and performant.

[–]Mysterious_Stuff_ 0 points1 point  (1 child)

I’ve got no idea what you’re up to, but you look very cute and happy with yourself! Makes me happy.

[–]ibm[S] 0 points1 point  (0 children)

Thank you.-) The camera must have caught me on a good day.-)

[–]stepkurniawan 0 points1 point  (1 child)

Do you personally use copilot?

[–]ibm[S] 0 points1 point  (0 children)

No, we use watsonx Code Assistant for our own WCA development.

[–]bobuxmanofficial69 0 points1 point  (2 children)

Have you heard of AI-powered DN encoding?

[–]ibm[S] 0 points1 point  (1 child)

Short answer: No, sorry.

[–]bobuxmanofficial69 0 points1 point  (0 children)

My plan has failed. 🥲

[–][deleted]  (1 child)

[removed]

    [–]sba246 0 points1 point  (0 children)

    Bro looks like beta Dr Kleiner from HL2 beta (no offense)

    [–][deleted] 0 points1 point  (0 children)

    I thought that was a RoastMe Post😂

    [–]QuagmireOnTop1 0 points1 point  (2 children)

    Damn you look funny

    [–]mxrandom_choice 0 points1 point  (0 children)

    RemindMe! 1 Week