Podcast
Alexander Acker - What it really takes to train AI and how to make it accessible to many
A conversation with Alexander Acker about the infrastructure beneath modern AI: model history, GPUs, fine-tuning, distributed compute, and what it takes to make AI development more accessible.

In this conversation
Alexander Acker walks through the history and infrastructure behind modern AI, from early neural networks to the compute systems needed for current large models.
The conversation moves below the application layer. Training and deploying AI takes more than code and data: it also requires hardware, distributed systems, a fine-tuning strategy, and access to compute.
Central question
What has to exist underneath an AI model before it becomes usable infrastructure?
What we cover
- The path from perceptrons and backpropagation to transformers and large language models.
- Why GPUs became central to AI and why training large models is structurally expensive.
- How fine-tuning, LoRA, distillation, and distributed compute change who can build with AI.
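The episode discusses LoRA at the level of ideas rather than code. As a rough illustration of why it lowers the barrier to building with models, here is a minimal sketch using the Hugging Face peft library; the model name and hyperparameters are illustrative assumptions, not values from the conversation.

```python
# Minimal LoRA fine-tuning setup sketch (illustrative; not from the episode).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # assumed small model so the sketch fits on modest hardware
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# which is why fine-tuning becomes feasible without a large GPU cluster.
config = LoraConfig(
    r=8,                        # rank of the adapter matrices (assumed)
    lora_alpha=16,              # scaling factor (assumed)
    target_modules=["c_attn"],  # attention projection in GPT-2
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total params
```

With only the adapter parameters trainable, a smaller team can adapt an existing model on a single GPU instead of retraining everything from scratch.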
Guest background
Alexander Acker works on AI infrastructure and model development, including access to distributed GPU compute through Exalsius. His perspective connects the history of AI with the practical systems required to train and deploy it.
Things to listen for
- Why AI access is partly an infrastructure problem.
- How smaller teams can work with models without training everything from scratch.
- Where the bottlenecks move as AI becomes more widely used.
Building with models
Model capability
What the model can do once it has been trained and exposed through a usable interface.
Compute access
The hardware, pipelines, cost structure, and deployment systems that decide who can actually build on top of the model.
