ChatGPT 4 vs Low/No Code.

I tried a few things on ChatGPT 3 and, in these particular cases, I don't think ChatGPT 4 would have fared much better. But, I saw an article in my news feed today discussing Low/No Code and the impact these new AI tools are having. And while I didn't read the article, the title implied that things like ChatGPT are becoming quite the headache for these tools.

And I am not surprised. Not one bit.

No Code may have a little less to fear. But Low Code is exactly what ChatGPT is. And, it provides EXACTLY the sort of thing I have been advocating for, if in a slightly different delivery mechanism. What ChatGPT delivers is a largely platform-agnostic solution. It requires you to understand the domain well enough to phrase the prompts and validate the output. But, it gives you ACTUAL code. It doesn't treat you like an idiot. And there is no vendor lock-in.

ChatGPT is actually better than my own vision for low code. Or rather, the scope of the abilities of such a tool far exceeds what I could hope to code on my own. It relegates my own concept to something of a niche product. It could still find success in the right market. But, it could no longer be a market disrupter.

No Code gets a bit of insulation in that it takes the concept further than ChatGPT does. Tooling based around ChatGPT could in theory produce a full No Code solution and even deliver it. But, I think we are a ways off from trusting even something as good as v4 to simply deliver a 100% functional solution with no tweaking. So, if you are in a position where No Code limitations aren't too limiting, then No Code is probably still the right answer.

Similarly, I think the best audiences for AI like this are pretty much the same as the ideal audiences for other low code solutions. There are a few extra ones, and the ideal audiences I mentioned before don't fit in here (at least not without further training the AI on a specific data set).

As for what I tested and have seen, and why I don't think ChatGPT 4 is as much of a leap as people are thinking:

First, I asked ChatGPT to create a C# Source Generator to automate some testing. The code for the Source Generator itself was "flawless", though a bit dated, as it used the older interface, probably because incremental source generators are newer than the training data. But, I think the reason it did SO DAMN WELL on this particular case is simple: a lack of data. Source Generators are STILL a rather niche area of Dotnet development, and most code and resources out there are either from a) Microsoft directly or b) VERY talented and enthusiastic individuals. I don't think that ChatGPT 4 would improve on that output in any meaningful way.
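
For reference, this is roughly the shape of the older API it reached for. The sketch below is mine, not ChatGPT's actual output: it assumes the older ISourceGenerator interface (rather than the newer IIncrementalGenerator one), and the class name and the placeholder it generates are invented for illustration.

```csharp
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

// Minimal sketch of the older source generator pattern. A real generator for
// automating tests would inspect context.Compilation and emit per-type test
// scaffolding; this one just adds a trivial file to the compilation.
[Generator]
public class TestStubGenerator : ISourceGenerator // older API; IIncrementalGenerator is the newer one
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // Nothing to register in this sketch (no syntax receiver).
    }

    public void Execute(GeneratorExecutionContext context)
    {
        var source = @"// <auto-generated/>
namespace Generated
{
    public static class GeneratedTestsMarker
    {
        public const string Value = ""generated"";
    }
}";
        context.AddSource("GeneratedTests.g.cs", SourceText.From(source, Encoding.UTF8));
    }
}
```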

As for the specifics of what the generator did beyond that... I think it did a good enough job. It made a lot of assumptions. But therein lies the problem: it didn't ask for clarification on ANYTHING.

Second, I asked it to write a Plugin/Module loader. I have a side project I'm working on, and I tried to write one myself. And it failed because it didn't know how to load the native assemblies for the SQL Client. I found this project which CAN handle it properly. But, if I can avoid it, I would rather not introduce any third-party assemblies into my main project. In fact, that is part of the driving force behind the modules in the first place. So, I asked ChatGPT to do it. And I had to correct it MANY times to get to something looking like the outcome I was aiming for.
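
To illustrate the kind of thing I was after, here is a sketch of the standard AssemblyLoadContext approach, assuming the usual AssemblyDependencyResolver pattern. The LoadUnmanagedDll override is the part that deals with native assemblies like the SqlClient one, which is where my own attempt (and ChatGPT's) fell over; whether that, or something near it, is what the other project actually does under the hood is an assumption on my part, and the class name is just for illustration.

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

// Sketch of a custom load context for plugin/module assemblies.
public class ModuleLoadContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;

    public ModuleLoadContext(string modulePath) : base(isCollectible: true)
    {
        // Uses the module's .deps.json to locate both managed and native dependencies.
        _resolver = new AssemblyDependencyResolver(modulePath);
    }

    protected override Assembly? Load(AssemblyName assemblyName)
    {
        // Managed dependencies that ship alongside the module.
        string? path = _resolver.ResolveAssemblyToPath(assemblyName);
        return path != null ? LoadFromAssemblyPath(path) : null;
    }

    protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
    {
        // Native dependencies (e.g. the SqlClient SNI library) resolved the same way.
        string? path = _resolver.ResolveUnmanagedDllToPath(unmanagedDllName);
        return path != null ? LoadUnmanagedDllFromPath(path) : IntPtr.Zero;
    }
}
```

Loading a module then looks something like new ModuleLoadContext(modulePath).LoadFromAssemblyPath(modulePath), after which the host can reflect over the assembly for whatever plugin contract it defines.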

I suspect ChatGPT 4 might have gotten to a similar answer in 1 or 2 prompts as opposed to 5-6. But, I haven't tested it yet, and I doubt it will work. Reason? It looks an awful lot like my original code, which didn't work even with me telling it the exact problem over and over again. Also, despite all of the hand-holding, there are still some things which need to be fixed. There are some SLIGHT variations compared to my code, and I haven't scrutinized the source of the other project to figure out what the magic sauce is.

But the problem that I think the AI is running into is simple: bias. There is a lot of code out there. And a lot of module/plugin loading code. But, not a lot of it concerns itself with this particular problem, which is unfortunate, because this problem is related to many other problems around dynamic module loading. In short, though, the AI has far more examples of code which don't solve my problem. So, it is MUCH better at suggesting code which does not solve my problem than it is at suggesting code which will.

And, we see this in videos of top-level developers talking about the skills of ChatGPT 4. They CAN make it write amazing code. And, it does a lot better on the first pass than the prior version. But, it still tends to prefer, on the first pass, delivering junior- or intermediate-developer-level code. And the reason, I suspect, is the same. Bias. As skill level progresses, the amount of training data decreases. Also, as the scope of the domain narrows, the same problem occurs.

For quality, having less data at the high end is bad because, technically speaking, the mountain of lower-quality data still fits the prompt.

For narrowing scope, it is a double-edged blade. If the right answer exists consistently within the narrow scope of data the AI has, you'll likely get a good answer, just as I was able to get a C# source generator. But, if it doesn't, you get outputs more like my module loader.

But, I'm a human, and one with Software Development experience. So, I can validate the output and choose to take it as I get it, improve on it, or simply walk away. Either way, used right, it will almost always yield a more efficient use of my time. And when I take the code or improve upon it, I get something that is of more value than relying on a black-box Low Code solution, and likely something a No Code solution could never hope to achieve.
