Two Futures With AI: Thoughts After OpenAI DevDay

As some may already know, I'm not really interested in corporate agendas. The reason I'm writing this is that I'm interested in my own agenda as a developer.

OpenAI DevDay was yesterday. For the last 24 hours, it has felt as if we're entering the next Web 1.0. You talk to an OpenAI text completion model and get "static" outputs. I say static because new chats are contextless: every new chat is essentially a fresh start, and two different people starting a new chat with the same prompt will get similar results. Compared with what we'll have in the future, this is nothing beyond static.
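
To make the "static" point concrete, here's a minimal sketch of why a new chat starts from zero, assuming the openai v1 Python client; the prompt and model choice are just illustrative. The Chat Completions API is stateless, so the caller has to resend the entire history on every turn.

```python
# Minimal sketch of why a new chat is "contextless": the Chat Completions
# API is stateless, so the caller must resend the whole history each turn.
# (Assumes the openai v1 Python client and an OPENAI_API_KEY in the env.)
from openai import OpenAI

client = OpenAI()

history = [{"role": "user", "content": "Explain Web 1.0 in one sentence."}]
first = client.chat.completions.create(model="gpt-4-1106-preview", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# A brand-new chat drops `history` entirely, which is why two people sending
# the same opening prompt tend to get similar, "static" results.
fresh = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Explain Web 1.0 in one sentence."}],
)
```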

1. What the next Web 2.0 may look like

  • Complete backends on OpenAI, where endpoints can optionally connect to LLMs. This would eventually reach a state where one can set up entire software systems by prompt, running on Azure and fully abstracted away (a rough sketch of today's building blocks follows after this list).
  • Fully customized UI and UX for custom GPTs, or GPTs connected to a dedicated domain extension (.oai) that also ties into the GPTs marketplace.
  • An official OpenAI browser (or maybe hardware) that fully interacts with you as your agent. You connect your account, and it knows all the GPTs you've used, created, and interacted with.
  • A real-time collective context of information stored under OpenAI. A new source of information where OpenAI would no longer need to scrape the web to push its knowledge cutoff forward. Whether this will happen is hard to determine, as it may require more research on transformer models and on how GPT-4 can "learn" new data from prompts. It may also require newer feedback models before global vector spaces and/or knowledge graphs can be updated. Still, I'm personally sure that yesterday's talk signaled the intention to become the web's new primary data source.
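
For the first bullet, here's a minimal sketch of what a "backend with endpoints optionally connected to an LLM" already looks like with today's pieces. This assumes Flask and the openai v1 Python client; the /summarize route and its behavior are my own illustration, not anything OpenAI has announced.

```python
# A rough sketch of an ordinary backend endpoint that optionally hands
# work to an LLM -- the pattern described in the first bullet above,
# minus the full abstraction. (Assumes Flask, the openai v1 client, and
# an OPENAI_API_KEY in the environment; the route is hypothetical.)
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

@app.route("/summarize", methods=["POST"])
def summarize():
    text = request.json.get("text", "")
    completion = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": f"Summarize this:\n\n{text}"}],
    )
    return jsonify(summary=completion.choices[0].message.content)
```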

2. How I think the above steps will impact the industry

Such a move would first replace many authentication and email providers, website builders, AI tooling services, vector databases, and similar services. Many other industries would follow, and I see blogging services as one of them.

What I actually find concerning here is that this would centralize much of the academic and industrial research happening outside OpenAI, especially in cryptography and foundation models.

Eventually, this technological direction will eliminate the need for general-purpose search. Context-specific search engines, such as Airbnb or Uber, should be safe. In the long run, I'd still expect to see OpenAI-based alternatives for those. Yet such services cannot be standalone LLM calls - wrappers will be needed, backed by Azure and deployed on OpenAI. Who will build these may depend on the reactions of the creator economy, but that economy hasn't appeared yet. We'll only start seeing it later this month.

This is the possibility of the WWW, arguably the best distributed network in the world, ending up mostly serving as a platform for a single centralized institution.

3. Hardware

On the hardware side, I personally believe we'll see some competition: on one side, hardware using OpenAI, and on the other, hardware using locally embedded LLMs. I expect OpenAI or Microsoft to launch hardware built specifically to act as a personalized AI agent, but until then we'll see some very successful wearable agents built on OpenAI. The philosophical difference here is about who owns your information. With an LLM embedded in your hardware that doesn't send your information to a centralized service, you own your data.

Hardware capability has been hotly debated over the last couple of years, but I'm one of those who think we'll be able to embed models with billions of parameters into mobile devices within maybe three years - I may be wrong here. Time will tell.
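
A quick back-of-the-envelope calculation is part of why I lean optimistic. These are my own rough numbers, not benchmarks: they count weights only and ignore the KV cache and runtime overhead.

```python
# Rough memory footprint of model weights at different quantization levels.
def model_size_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit weights: ~{model_size_gb(7, bits):.1f} GB")
# 16-bit ~14 GB, 8-bit ~7 GB, 4-bit ~3.5 GB -- the last already fits in the
# RAM of today's higher-end phones, weights-only caveats aside.
```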

4. Local AI Agents & Decentralization: the Next Web 3.0

I don't know how popular they will get, but I'm almost certain that decentralized, possibly fully distributed, systems for storing local data will be useful. If hardware capabilities allow, this could have a great impact. I just don't think that wrapping AI agents around any of the current major blockchains is the best idea.

If such a system becomes real, one example is that you could tell your phone (or any compatible hardware) to transfer money to someone, and it would go through without any governmental/federal authorization. The only two things that would be required are:

  • You have the amount of money being transferred.
  • It is you sending the money, not anyone else faking you.

Such a system could easily ensure both requirements are met, and it could be used for basically anything, not just transferring money.
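
As a toy illustration of how both requirements could be checked locally, here's a sketch assuming the Python cryptography package. The transaction format and the in-memory balance table are made up for illustration; a real system would still need some way to agree on balances.

```python
# Toy check of the two requirements: funds exist, and the sender is really
# who they claim to be (an Ed25519 signature made on the sender's device).
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

balances = {"alice": 100, "bob": 10}      # requirement 1: the money exists
alice_key = Ed25519PrivateKey.generate()  # private key never leaves Alice's device
alice_pub = alice_key.public_key()

tx = json.dumps({"from": "alice", "to": "bob", "amount": 25}).encode()
signature = alice_key.sign(tx)            # requirement 2: it is really Alice

def accept(tx_bytes: bytes, sig: bytes) -> bool:
    try:
        alice_pub.verify(sig, tx_bytes)   # raises if anyone tries to fake Alice
    except InvalidSignature:
        return False
    payload = json.loads(tx_bytes)
    return balances[payload["from"]] >= payload["amount"]

print(accept(tx, signature))  # True: signed by Alice and covered by her balance
```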

Many people expect OpenAI to create such an authentication system, using Worldcoin or a different identity protocol. This is a possibility, but such a system won't be fully decentralized. A trust network & authorization system under OpenAI, even if it acts transparently, would still use centralized assistants for personalization. I find the concerns about Worldcoin's approach - using your iris to approve any kind of transaction - valid, especially if it's connected to LLM-based personalization through OpenAI.

5. Summing up

I don't like painting things in black and white, but it looks to me like the developer economy (not only technical developers, but any kind of business or service) will be fragmented by this choice. Some will develop under the horsepower of OpenAI, and others will rely on more decentralized products under a layer of their choice, or an ideal layer that doesn't exist yet but maybe will in the near future.

There are some services that OpenAI cannot touch easily:

  • Context-specific search services, which goes without saying.
  • Image generation models other than diffusion models. I worked on building a Photoshop-like inpainting model at Haven, and I believe that was one such example.
  • Services like Labelbox should be safe for a while as well, since academic and industrial research will continue.

I don't know where the developer economy is going, but at this point I'm confident there will be some significant fragmentation, not for the first time and not for the last. I just don't think the impact of each individual choice will matter that much. Matter for what? I don't really know. Welcome to the ambiguity.