Notwithstanding the drama going on at OpenAI at the moment, I have spoken to many people who seem to think that AI can generate “any” code at present.
That is not necessarily true. It can generate code based on documentation and code samples available online. It can combine documentation with code samples to produce some pretty “new” things that appear completely novel. But I don’t think we have reached a stage yet where it can produce something genuinely new: something that is undocumented anywhere, with no code samples to learn from.
I have seen some examples of this myself.
So, how do we differentiate?
Well, our code generator does not generate code based on any code samples out there, or on any open frameworks and APIs. The generated code refers to our internal APIs and frameworks, which are not openly available.
This also means that we can evolve our internal frameworks independently of the generated code. We can move forward without breaking the generated code, and we have in fact done exactly that over the past few months.
Happily, I have now figured out how to implement generative AI within our product beyond it being just a help or customer-service bot. It took a while, but we now have a good idea of how to do it. We can certainly use it to improve what we have built. And I am not talking about a Copilot for code generation, but something well beyond that.
If we go beyond GPT-4 to an AI system that can independently sift through decades of material in computer science and reinvent everything we have invented from scratch, including all the learnings from extensive testing and such, then, sure, maybe sometime in the future it could do the whole thing.
But that would require the ability to read thousands of pages of documentation and understand it without error, from page 1 to page 1,000, and then to generate millions of lines of code flawlessly.
I think we are pretty far away from that singularity.
Also, note that even if we are on the right path with GPT at present, adding safety to it and putting it into a straitjacket will curb how far and how fast we can go. Safety is good, but it implicitly means holding things back, and that, too, is good.
We don’t know what went on inside OpenAI beyond the material that leaked out. Even if things are going to go slower now, if that means a safer world, I would rather have that than moving fast and breaking things. AI can break a lot, and I think everyone knows at this point that controlling AGI may be beyond us.
Of course I admire Sam and the team who are now going to Microsoft. But I will vote for safety over speed any day.