Deconstructing the Unpure - Invisible Products Under Large Language Models
Naval has a famous tweet that reads, simply, "Good platforms are invisible."
How can one tell whether a product is invisible? The common consensus suggests:
- the platform's content should overshadow the platform itself.
- the platform, as an entity, should come after the problem it solves, and possibly after everything else.
I think these statements are far from capturing invisibility effectively. To illustrate, consider the example below.
Assume you have a product that must replace the number 4 in the set of integers. You heard that right: it will replace the number 4. Yet your product should be invisible, so no one can tell the difference between 4 and your product when they call it.
func getActual4() int {
    return 4 // the original integer 4
}

func getYourProduct4() int {
    return 4 // your invisible replacement
}
This does its duty. It is easy to tell why - one can get the integer 4 through your product without any friction. For the problem above, this product is invisible.
However, think about it again. How accurate would it be to represent the number 4 only within the set of integers?
I don't think it would be. The number 4 has countless representations, and the set of integers is only one of them. The operating system, the renderer, and the set of real numbers each represent 4 in a different way.
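To make this concrete, here is a minimal Go sketch of how the same 4 shows up in a few of these systems; the particular representations printed here are illustrative picks, not an exhaustive list.
package main

import (
    "fmt"
    "math"
)

func main() {
    n := 4
    fmt.Println(n)                             // the integer: 4
    fmt.Printf("%b\n", n)                      // the machine's view: binary 100
    fmt.Printf("%q\n", '4')                    // the renderer's view: the glyph '4', code point U+0034
    fmt.Printf("%#x\n", math.Float64bits(4.0)) // the real-number view: the IEEE 754 bits of 4.0
}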
So it is not possible to build a replacement for the number 4 that represents it in every representation system. Doing so would require us to understand the nature and dynamics of every representation of 4 that exists, and that is simply not possible.
Well, if it is impossible to build an invisible alternative to an integer, how can one build an invisible product? Let's explore - though, again, it won't be possible.
Integers are numbers, and numbers are relationships. These relationships are usually deterministic: 3 + 1 = 4, and 5 - 1 = 4, every time. When it comes to products, the relationships are probabilistic, because they involve humans. As we all know, humans are not particularly stable or deterministic beings. How we think, act, or talk can always change. It is not possible to build anything that completely represents our relationship with anything - when even we cannot represent ourselves very well.
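A toy sketch of that contrast, with the caveat that the "human" answers and their distribution below are entirely made up for illustration:
package main

import (
    "fmt"
    "math/rand"
)

// deterministicFour never changes: 3 + 1 is 4 today, tomorrow, and forever.
func deterministicFour() int {
    return 3 + 1
}

// humanAnswer stands in for a human's relationship with a product: ask the same
// question on a different day and you may get a different answer.
func humanAnswer() string {
    answers := []string{"I love it", "it's fine", "I forgot I had it", "I'm cancelling it"}
    return answers[rand.Intn(len(answers))]
}

func main() {
    fmt.Println(deterministicFour()) // always 4
    fmt.Println(humanAnswer())       // depends on the day
}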
So what did Naval mean by invisible? What do we actually build when we try to build invisible platforms? Not invisible, drop-in replacements. Instead, we build abstractions: reductions of the ideal invisibility into something constrained, yet sufficient to convince others.
I would list the major assumptions behind these abstractions as:
- that a product doesn't need to fully replace the nature of the idea, the problem, the solution, or the users; it only needs to be "invisible" enough to convince its users.
- that, to achieve this, it is enough to prioritize the representation systems most relevant to the target users. If you think about it, this is nothing but feature engineering on an incredibly complicated dataset.
Unfortunately, feature engineering is not easy. The term originally refers to a step in training machine learning models: choosing and shaping the raw inputs that matter most. Even in that technical sense it is hard. When what we are constructing is a social model of representations meant to reach our customers, it is even harder.
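To ground the analogy, here is a hypothetical sketch of what that prioritization could look like; the Representation type, the relevance scores, and the top-k rule are all invented for the example, not a real methodology.
package main

import (
    "fmt"
    "sort"
)

// Representation is a hypothetical stand-in for one system through which target users see the product.
type Representation struct {
    System    string  // e.g. "onboarding email", "pricing page", "API docs"
    Relevance float64 // assumed relevance to the target users; measuring this is the hard part
}

// prioritize keeps the k most relevant representation systems - the "features" we choose to engineer.
func prioritize(reps []Representation, k int) []Representation {
    sort.Slice(reps, func(i, j int) bool { return reps[i].Relevance > reps[j].Relevance })
    if k > len(reps) {
        k = len(reps)
    }
    return reps[:k]
}

func main() {
    reps := []Representation{
        {System: "API docs", Relevance: 0.9},
        {System: "pricing page", Relevance: 0.4},
        {System: "onboarding email", Relevance: 0.7},
    }
    for _, r := range prioritize(reps, 2) {
        fmt.Println(r.System, r.Relevance)
    }
}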
Counterintuitively, yet cleverly, people now use large language models for exactly this feature engineering, enabled by advances in self-attention architectures. AI has become central to feature engineering, to streamlining product development, and to personalizing the user experience - and all of it runs through large language models.
What sets large language models apart is that they are trained on real data and perspectives spanning billions of topics. They can weigh a prompt against a very holistic, very complex web of representations.
How does this evaluation work? Well, it's not quite possible to know exactly.
// Representation here is whatever internal view the model forms from the prompt - opaque to us.
type Representation struct{}

func evaluateLlmRepresentations(prompt string) []Representation {
    var representations []Representation
    // ... prioritize representations; the mechanism is a black box we cannot inspect
    return representations
}
We simply assume that the representations the models return are accurate. I'm not sure whether this is a particularly scary issue, but it is an unfortunate one. It exposes our vulnerability to mechanisms that can understand us better than we understand ourselves: we typically respond with pure acceptance.
The feedback loops of large language models, efficient as they are for tasks such as fact-checking or code validation, do not perform well at fostering human understanding. The reason is simple: humans usually don't understand themselves, and their prompts blend subjective and objective content together. It is not possible to parse such prompts with minimal information loss every time, and the prompts are not stable in the first place.
Large language models, pre-trained on factual representations to build a semantic understanding of how entities in the world relate, enable great features for product builders. They create such a strong abstraction, and such strong trust, that people depend on them to manage their customer experiences, their own schedules, even their relationships.
In a way, we are blinded by the beauty and power of large language models - their technical elegance, their entrepreneurial possibilities - because they can construct representations of social relationships more accurately than humans can. Amazed by this, we are following a technology whose talent we will never be able to surpass.
I have no idea what is yet to come in this direction. All I can imagine is that it will be fun to watch, but also somewhat concerning and scary. This is only one of the ways humans can follow their own greed and stupidity and lose their own nature in the attempt to build better personalization. Of course, I am one of them.
As Einstein said, "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."