The Big Picture of AI: Why You Don’t Need to Know Every Detail

In the whirlwind of current AI advancements, trying to keep up with every new change can be dizzying. But unless you’re an active researcher in the field, you really don’t need to. You just need to keep abreast of the aggregate, because that’s where the real story is for the rest of us.

Large Language Models (LLMs) have created a paradigm shift in computing. One of the nice things about paradigm shifts, though, is that we know how they work. In 1962, philosopher of science Thomas Kuhn described the anatomy of a paradigm shift (and also coined the phrase itself) in The Structure of Scientific Revolutions.

In the early phases of a new paradigm, there is an avalanche of “normal science,” in which the new paradigm begets huge amounts of discovery based on its new premises and techniques. But this normal science is incremental; each new advancement pushes the field only marginally. Kuhn called it “puzzle-solving”: applying the new paradigm in specific contexts to fit specific problems. Only in aggregate, from the middle distance, can we see the net effect of these incremental changes, or the abstractions that cut across solutions to multiple puzzles. Only from that distance do the larger patterns of the new paradigm begin to emerge.

For business and engineering leaders focused on improving customers’ or clients’ lives with AI, there is little utility in following every single advancement or tracking in detail every change rippling through the technical landscape. You don’t need to weigh LoRA against QLoRA fine-tuning, or decide whether Kolmogorov-Arnold networks offer a marginal improvement over multilayer perceptrons. It may be valuable to know that pre-training and fine-tuning costs are falling, that it’s getting easier and cheaper to run inference against multiple models at the same time, and that context windows are expanding. But even that may be more fine-grained than you actually need.
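If you want a concrete sense of what “running inference against multiple models at the same time” looks like from an application’s point of view, here is a minimal sketch in Python. The query_model helper is a hypothetical placeholder for whichever provider SDK or HTTP API you actually use; the point is only that fanning one prompt out to several models concurrently is now a few lines of standard-library plumbing, not specialized infrastructure.

    from concurrent.futures import ThreadPoolExecutor

    def query_model(model_name: str, prompt: str) -> str:
        # Placeholder: call your provider's API here and return the text response.
        # (Hypothetical helper for illustration; swap in your real client code.)
        return f"[{model_name}] response to: {prompt}"

    def query_all(models: list[str], prompt: str) -> dict[str, str]:
        # Send the same prompt to every model concurrently and collect the answers.
        with ThreadPoolExecutor(max_workers=len(models)) as pool:
            futures = {name: pool.submit(query_model, name, prompt) for name in models}
            return {name: future.result() for name, future in futures.items()}

    if __name__ == "__main__":
        answers = query_all(["model-a", "model-b"], "Summarize our support tickets.")
        for name, text in answers.items():
            print(name, "->", text)

The detail that matters for a leader is not the code itself but what it implies: comparing or combining several models on the same task has become cheap enough to treat as a routine engineering decision.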

So don’t worry about every detail. Understand the general architecture of the kinds of problems you hope to solve with AI, where the existing bottlenecks are, and keep an eye there. And spend your extra time reading some helpful philosophy of science.

The Importance of the Big Picture

While it’s tempting to get caught up in the day-to-day details of AI research, focusing on the big picture can provide several benefits:

  • Strategic Decision-Making: Understanding the overall direction of AI development can help you make informed decisions about where to invest your resources and efforts.
  • Avoiding FOMO: Focusing on the key trends lets you tune out the constant stream of new announcements without worrying that you’re missing anything truly significant.
  • Identifying Opportunities: Understanding the broader context of AI can help you identify new opportunities and applications for this technology in your business or industry.
  • Building a Strong Foundation: A solid understanding of the fundamental principles of AI will serve as a strong foundation for future learning and growth.

Here are some of the most important trends in AI that you should be aware of:

  • Scaling: The ability to train larger and larger models on more data is driving significant advancements in AI performance.
  • Cost Reduction: The cost of training and running AI models is decreasing, making AI more accessible to businesses of all sizes.
  • Contextual Understanding: AI models can ingest and reason over ever-longer context windows, letting them work with whole documents, codebases, and conversation histories rather than short snippets.
  • Multimodality: AI is starting to integrate multiple modalities, such as text, images, and audio, to create more powerful and versatile applications.
  • Explainability: There is a growing focus on developing AI models that are more explainable and transparent, making it easier to understand how they arrive at their decisions.