5 Comments
Dr. Jason Polak:

Note: I originally wrote this post as a comment to the blog post linked in the first paragraph, but it inspired me to revise it and write a newsletter about it.

Martin:

As a researcher who has seen how AI is revolutionizing multiple disciplines, I understand your concerns about the diminishing human touch in mathematics. The beauty of human effort and struggle is indeed irreplaceable. However, I believe AI isn't about replacing the journey—it’s about complementing it. For instance, tools like https://labdeck.com/ are empowering scientists by automating routine computations and letting them focus on more creative, hypothesis-driven work.

In mathematics, perhaps AI can serve as a collaborator, helping us explore vast possibilities while we, as humans, find meaning and elegance in the connections. The Riemann Hypothesis might be solved by AI, but understanding its implications and uncovering the layers of beauty behind the proof will still be our job. Maybe the role of mathematicians is evolving, but it doesn’t have to lose its essence.

Dr. Jason Polak:

A lot of people argue that AI is about complementing the journey, not replacing it. But saying that AI is "about" anything only reveals the intentions of optimists who haven't factored in how technology actually interacts with those who use it. Whatever AI is "about" (according to its promoters), it still creates a new environment in which the primary activity of mathematicians (and scientists in general) will be a race to generate large amounts of new mathematics, shifting the focus inevitably towards the industrial production of mathematics, which is incompatible with fostering a healthy environment in which to enjoy mathematics.

Your statement on LabDeck is again questionable to me: is it really empowering to automate so much? Or is it merely one step on the way to giving scientists too much power in the long run? Because, as we well know from history, the thirst for fame and discovery can quickly overcome any sense of sanity. And I reject the idea that we should "empower" anyone at all: the very word is telling, rooted as it is in "power". Instead, we should be fostering an environment of more wisdom, not more power, the latter of which is certainly not lacking in our advanced technical fields. If anything, we should have a slower process of discovery so we can appreciate what we are doing, rather than becoming an industry that produces knowledge for the sake of production.

Finally, to address your comment about AI as a collaborator: again, I posit that this sort of development is too much power for human beings to handle properly. Yes, in its initial stages, AI may help find a proof of the Riemann hypothesis. And those who have studied the Riemann hypothesis will certainly be intrigued by the beauty of it.

But then what? Is our role as mathematicians to sit down and sift through AI discoveries, or hybrid human/AI discoveries in the interim? Who will be left to "appreciate the beauty" once the population of graduate students dwindles because they realize they are basically computer operators?

The truth is, creative results cannot be separated from the hard work, or from the existence of a level playing field in which new mathematicians have some chance of doing something novel. With AI, mathematics will become something like acting: only a very few individuals will do it, and they will push mathematics so far into the abstruse and obscure that no one will care whether it is being done at all. The fallacy is that eliminating grunt work makes the pursuit of mathematics more interesting. That is like saying the best part of hiking through the woods is finding the most creative route on the map before you start. No: the rote work and the human struggle with tedious computation are absolutely necessary. When I was a graduate student, I worked through a tedious spectral sequence computation, and understanding it was one of the high points of my self-discovery.

You might say people will still do such things because they can. Actually, I disagree: the social factor is greater than technically minded scientists assume. The fact that other graduate students can now say "just input the calculation into the AI" will automatically diminish the satisfaction most people feel when they do something that impresses others.

I'm afraid your utopian vision of the future of mathematics is somewhat like Gene Roddenberry's vision of Star Trek: that incredibly advanced technology will eventually liberate us and allow us to continue exploration for the sake of bettering ourselves, when in reality, technology and intelligence have very little to do with bettering ourselves.

Martin:

Thank you, Dr. Polak, for such a detailed and thought-provoking response. I truly appreciate the depth of your perspective—it brings an important dimension to this conversation.

I can see how the term "empowerment" might seem loaded in the context of mathematics and science, where unchecked advancements often lead to unintended consequences. Perhaps "facilitate" would be a better word, emphasizing AI's role as a tool that supports rather than dominates. Tools like LabDeck aren’t meant to replace the process of discovery but to handle repetitive tasks, allowing researchers to focus on more challenging questions. That said, I agree—there’s always the risk that over-reliance could lead to an unhealthy shift in priorities.

You make an excellent point about the social aspect of mathematics. The shared struggle and collective acknowledgment of effort are critical to the discipline's culture. I hadn’t fully considered how automation could diminish that dynamic. The possibility of graduate students feeling reduced to "AI operators" rather than creators is a real concern, and I wonder if we need to rethink how AI tools are introduced into academic environments to ensure they enhance rather than replace meaningful experiences.

Your metaphor about hiking resonates deeply. The satisfaction of solving a complex problem, even when it’s tedious, is something that automation can’t replicate. But perhaps AI doesn’t need to pave the trail entirely. Could it instead help clear some of the overgrowth while still leaving room for us to stumble upon hidden paths?

I also value your broader argument about slowing down to appreciate the work we do rather than industrializing it. Technology often tempts us to prioritize speed and volume over depth and meaning. Striking the right balance will likely determine whether AI serves as a collaborator or becomes a disruptor.

Thank you again for your insights—they’ve given me a lot to think about. Conversations like these remind me of how important it is to consider the broader implications of the tools we develop and use.

Dr. Jason Polak:

Thank you for your reply, Martin. I think you bring up some good points. I understand that LabDeck is not meant to replace the process of discovery. In fact, many automations in the past have been introduced with the promise that drudgery will be eliminated so that we can focus on creative tasks. However, that promise presupposes two things:

(1) That we know where the boundary between the creative and the drudgery lies.

(2) That replacing rote tasks with computers will, in the long run, always lead to a situation in which the only thing remaining is creative construction.

Regarding (1), I think we've already covered that. But (2) is even more concerning to me, because while short-term replacement by SPECIFIC technologies may indeed let us focus on creative tasks, in the long term the philosophy of replacing rote tasks with computers may eventually result in all the creative beauty of life being taken away from us. It is important to note here that no SINGLE invention may do this, but the entire complex of them together.

For example, computers at first only ever did very mechanical and boring things. Who could argue with a machine that could add a thousand numbers up in less than a second? No one really wants to do that. They also kept documents and had handy features such as editing and spell-check.

But over time, we kept automating more and more, adding more features, until now computers can create aesthetically pleasing graphics with generative image AI. Generative image AI would NOT have been possible without all the previous automation of work that was clearly mechanical, but a qualitative change happened, not just a quantitative one. Now artists no longer have the same opportunity to do work, because many people can use generative AI instead.

In the same way, mathematics with AI may NOW be an assisted journey, but it may not stay that way. At some point, mathematics may be done without much human assistance at all, and then we will truly be the assistants to the AI rather than the other way around. To me that is fundamentally demeaning. The point is that we must take responsibility for that outcome now, with our current inventions, because it is almost sure to happen. Imagine a mathematician 20 years in the future who could finally make the breakthrough of a completely autonomous mathematical system: do you REALLY think that this person will NOT make the breakthrough? Of course not: they will be instantly famous for doing so and set for life. No one will resist that.

You wonder "if we need to rethink how AI tools are introduced into academic environments to ensure they enhance rather than replace meaningful experiences". I say that there is no way to ensure anything. If the technology exists, people will use it. If a graduate student can use AI to get ahead of their peers, they will use it. The problem is that we have SOME level of self-control with which to create a healthy society, but that control will eventually lose out once the power of the machine goes beyond a certain level, because the temptation will be too great.

Could AI "help clear some of the overgrowth while still leaving room for us to stumble upon hidden paths", as you ask? Again, for what purpose? Do you honestly believe, in your heart, that AI will suddenly make mathematics more beautiful because it allows us to discover new truths? Let me put it another way: would mathematics be less beautiful if we never had access to AI? Would a hiking trail be more beautiful if we could use a 4x4 jeep to cut new trails to hidden places? That, too, causes habitat fragmentation; there is always a cost.
