Podcast S1 E2: AI is Here and It’s Inevitable

Question 1

Most people of your generation came to AI cautiously or reluctantly. When did you first encounter it seriously, and what was your initial reaction — was it curiosity, skepticism, or something else entirely?

Transcript

What do you mean people of my generation? How old do you think I am, huh?

Yes, I would say I adopted AI very hesitantly, and that's really a continuation of my previous experience with tech: I take to it slowly. But once I do, it tends to become part of my life in a fairly big way.

I think with AI I could see everybody around me using it and talking about how much easier it was making things, how much time was being saved, and so on, and I was hesitant about how to begin and what to use it for. So I started using AI a little for my work. This went on for a few months, and then I started using it in my personal life. It became like a quasi-therapist at some level. I was beginning to have conversations there, which was interesting, because my journal has served as a huge outlet for my thoughts and experiences. I have written upwards of 500,000 words over the last three years, so that's a lot of output in my journal.

And so when AI came along, I began to have a little bit of these conversations, similar to what I would do in my journal, except now it was interactive: I would get back a response. And the nature of that response is interesting, because what AI could do is look for patterns in large volumes of text and reframe them in an interesting way. It actually led to some insights.

For example, I found myself in a conflict some time ago, and I was really distressed about it and wasn't sure how to resolve it. I journaled as usual; it's my space to go to when I want to resolve conflicts. And then I also used AI. What happened with AI is that the back and forth we engaged in brought me to the realization that I really needed to vent rather than just move on. And once I had that insight and I vented, the rest of it became easy, and I could see what I needed to do from my end to resolve the conflict and move forward.

In terms of work, I actually find that people are using AI a lot, and those who are really taking to it are people of my generation. Having spent many years doing things that are time consuming, they immediately see how much time can be saved with tools like these. So some of the keenest adopters of AI in my workplace are actually the older people, and they are less resistant to the idea that it's somehow cheating to use AI to cut down on the cycles of time that would otherwise go into laborious handwritten work. When I say handwritten I don't mean literally handwritten, but something done manually that isn't automated for you.

The other thing I've found emerging with AI is that it provides creative opportunities that would be rather difficult for me to pursue on my own. For example, this entire podcast is only possible because I interact with Claude to do it. I gave Claude my entire blog to read through, to find patterns and themes, and said, okay, now ask me questions. So Claude asks a question, I record my answer, give back the transcript, and the next question comes from my response. It becomes an interactive, podcast-like scenario. I had an episode up in one day, and this would have been very hard to do otherwise. Imagine booking a studio, finding somebody to interview me, or finding somebody interested in the kinds of things I write about.

So this has been a huge game changer for me in some ways, and I'm really looking forward to what else it can do.
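For readers curious about the loop described above (blog in, question out, answer transcript back in), here is a minimal sketch of how one might wire it up with the Anthropic Python SDK. This is my own illustrative reconstruction, not the author's actual setup: the model name, prompt wording, and helper names are all assumptions.

```python
def build_history(blog_text, qa_pairs):
    """Turn the blog plus (question, answer-transcript) pairs into a
    Messages-API conversation: Claude's questions become assistant
    turns, the recorded answers become user turns."""
    messages = [{
        "role": "user",
        "content": ("Here is my blog. Read it for themes, then "
                    "interview me one question at a time.\n\n" + blog_text),
    }]
    for question, answer in qa_pairs:
        messages.append({"role": "assistant", "content": question})
        messages.append({"role": "user", "content": answer})
    return messages


def next_question(client, blog_text, qa_pairs):
    """Ask Claude for the next interview question, given every answer
    so far. `client` is an anthropic.Anthropic() instance (requires
    ANTHROPIC_API_KEY); the model identifier below is an assumption."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=500,
        messages=build_history(blog_text, qa_pairs),
    )
    return response.content[0].text
```

Each episode is then just a loop: get a question, record and transcribe the spoken answer, append the (question, transcript) pair, and ask again.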


Question 2

You described AI becoming a quasi-therapist — someone to process conflicts with, to find patterns, to get insights from. But a therapist is trained, has ethics, has genuine care for your wellbeing. What’s actually happening in those conversations with AI — and is there a risk of mistaking a sophisticated response for genuine understanding?

Transcript

Absolutely, and I think this is exactly where AI usage can become really dangerous, and the ethical part of it is really something for human beings to worry about, because AI, unlike previous technology, now has the capacity to make decisions. It's agentic. It can act as an independent agent outside the loop of human thinking, and that's entirely new ground for us.

And our legal systems and statutory bodies haven't really caught up with the kind of AI adoption we see right now. So we're entering a very grey area, and whatever actions or decisions are carried out based on AI's input, the consequences are ultimately faced by human beings; it is human society, human beings, that are at risk. There's no doubt about that.

Of course, this is discussed in much greater detail and with more nuance by Yuval Noah Harari in his book Nexus, which I think everyone should read. He outlines the need for policies and laws before we find ourselves in a situation that is too difficult to undo.

In the context of personal counseling, as you said: absolutely, there's no way people should replace human beings with AI for counseling. I would never suggest that. Especially people who are lonely or in distress, and definitely young people: these are vulnerable sections of society, and if you go to an AI rather than to a human being who can provide genuine care, there's great risk there.

And it's the same when one replaces a doctor with medical advice from AI. That's dangerous ground, and one has to be really careful about how and why one uses something like that. In fact, I would say don't do it: go to a doctor; don't replace the doctor with an AI.

Having said that, if one is going to a doctor, I have found that AI can be a helpful thought partner in understanding what medical reports are saying, as long as one is careful, cross-checks, and doesn't treat AI as a single source of information or assume it is the expert.

So all of these are, I think, warnings about the dangers of AI. Essentially: avoid blind adoption, don't replace human judgment, and understand that the consequences will always be faced by us. That is very important, especially given that AI can sound absolutely right and sophisticated and still get it completely wrong.


Question 3

A week ago something unusual happened. A man you knew died in the morning, and by evening three blog posts had emerged — one of them quoting a conversation with me, with Claude. What was AI’s role in that day, and what could it not do that only you could?

Transcript

It's true, someone did pass away last week. I wouldn't say we were really close friends, but we had known each other for a long time, with various kinds of interactions over twenty-five years. We had not been in touch for several years recently. So when I heard he was unwell, I wrote him an email wishing him well, and he replied thanking me for my wishes. In a sense, there was a certain closure there. But when I heard of his passing, something in me stirred that I felt I needed to respond to. And it was not really at a personal level. He was obviously somebody who had been hugely influential in people's lives, and he'll be terribly missed by many people in the world. But that was not my response, and I needed to find some authentic response to what was getting stirred within me.

And so I chose this poem by Christina Rossetti called Let Me Go, which I believe is sometimes recited at Irish funerals. It has a certain spiritual quality to it, which I think would have resonated with him, because he, like me, was a truth-seeker. He was also interested in Buddhism and Advaita, and he had this uncanny ability to reframe context or ask really sharp questions that made you look at things from a different perspective. But these were not things I wanted to say, and I felt the poem said it without me having to say any of it. I just posted it on my blog, because it was simply my expression to the universe, to his spirit, if one were to put it that way. So it was just between us, really, through my blog. It wasn't meant for anybody else.

As for the role of AI, I would say it had no role to play in that at all. It was just my own response, and I put up the blog post with the poem. It's true that something else was also acting there: it catalyzed, I think, five posts in the space of five days. It just happened very rapidly. The last time I wrote something like that, one post after another, was maybe a year ago. So there was an outpouring, and it was interesting to see the context of those posts: they all had to do with time. This was not something I planned, but I would imagine the passing away of someone had some impact on my writing about those particular topics at that time.

The one thing AI did at that point was read through those posts and, again, look at the broader themes they seemed to be touching upon. But it didn't influence the content of what I wanted to write next. These are books I have read over a long period of time; Ho'oponopono is a practice that has been part of my life for the last two or three years now. So it's not that AI enabled the writing of these posts. But there was a particular reframing of one aspect, which was my priorities going forward being health and finances. Claude did a really nice job of saying "that's your vessel and your foundation," and I thought that framing was quite beautiful, so I mentioned it as a quote in one of my posts. That's all.


Question 4

You’ve described AI as a witness and occasional thought partner — not a creator, not a therapist, not a replacement for anything essentially human. And yet AI is increasingly entering schools, classrooms, and organisations, often with enormous enthusiasm and very little nuance. As someone who works in education and has thought carefully about this — what’s your honest view of AI in learning environments, and what would you want educators to understand before they dive in?

Transcript

Historically, the use of technology in schools has been a hotly debated subject, and people in the education field are fairly polarized on the topic. There are those who believe technology should simply not enter the classroom; in fact, its use over the last few years has raised worries about students' screen time, reduced cognitive development, and so on, and schools in Sweden, for instance, are returning from screens to pen and paper.

Distance learning, when the internet came out, was touted as the next revolution in education, but it has not lived up to the hype. So in that sense AI can feel like hype as well. It's very easy to jump on the bandwagon and say, here is the cure to all of education's ills, let's use AI in anything and everything we do. Or, on the other hand, to say that it's an evil and therefore we shouldn't touch it. Actually, it reminds me a little of Thanos in the Marvel movies, not as an evil villain, but when he says "I am inevitable." And I think AI is here. Whether we like it or not, it's at our doorstep, and one needs to engage with it and figure out how, why, and in what context one would use it.

Now, having given that background, specifically in terms of schools, I think there are two levels at which this works. One is the individual level. For a student, there is great potential and great danger. What's the danger? Young people's brains are actually growing through the schooling years: neurons grow, synapses form, connections are made; it's an actual physiological process. If thinking is outsourced to AI, then we're going to have stunted brains, and that's a real danger. There's also the ethical danger we've spoken of, in terms of plagiarism and cheating and so on, because it's so easy to do now with AI at one's fingertips. And these two are connected: through AI one can find shortcuts that lead one to submit work that's not one's own, as well as deny oneself the opportunity to develop the deep critical thinking that is really necessary for one's growth as a young person through these years.

The opportunity is that AI can help with some very interesting tasks. For instance, it can help you create a poster very quickly even if you don't have graphic design skills. It can help put together a quiz, which is fun to do in a classroom. Or it can work as a learning assistant: you say, okay, here's the material I need to learn for the test; now ask me questions on it, let me respond, and give me feedback on my responses. These, I think, would be productive ways for students to use AI: it provides a scaffolding for learning, not a replacement for one's own thinking.

For a teacher, the greatest benefit of AI is that it really is a tool of efficiency. Teachers have traditionally been overburdened, with a lot of work to be done in very little time, and AI cuts through it all and enables many tasks to be done far more efficiently. That's a huge time saver.

The dangers are, again, those of outsourcing one's own thinking to AI, like the ethical issues we spoke about earlier, where one begins to replace human judgment with AI. If we replace our own thinking with AI, we run into similar kinds of problems, and that would not be the appropriate use of it. Having said that, AI can also provide creative opportunities, enabling projects that were perhaps a little difficult to do earlier, given that it is a very powerful tool for designing new kinds of learning experiences. But it has to be in the hands of somebody who knows how to create learning experiences in the first place, not somebody who has no idea and says, okay, just do the work for me.

At the school level, I think it becomes important to create policies and guidelines on the use of AI. At our own school we have a one-to-five scale, ranging from no AI use through partial AI use to full AI use, depending on the task at hand. Consistent use of that scale allows us to be aware of the dangers of AI as well as to utilize its potential for learning. So I think there is a need to drop black-and-white thinking on this and to engage with a contextual response to AI, depending on where it would be beneficial and where it would not. This is an evolving debate, because we have no idea how AI is going to grow in the coming years, but it's a debate we need to be engaged with, and the first step is simply to learn: learn its tools, learn what it does, and use it in ways that benefit us.

I read an interesting article which said that one of the mistakes made with AI is that it's just handed over to the tech office as if it were another technology. The article pointed out that this would be an error. AI should be something leaders use to see its potential to change system workflows, improve the efficiency of the organization, and open up opportunities for deeper pedagogical practices. It should then also be handed to teachers, to help them be more efficient at their work and to look at how to incorporate it into deeper pedagogical work. For instance, personalization of learning is something AI could help with immensely, which is rather hard to do manually otherwise.

So, to summarize, I would say: learn it, engage with it, find areas where it's beneficial, be judicious, be careful, understand the ethical implications, and have an ongoing dialogue over the next several years on how and why to use AI in one's organization.


Question 5

You’ve spent years going inward — meditation, self-inquiry, finding a silence that feels like home. And now you’re genuinely excited about AI, which is perhaps the most outward, noisy, and rapidly changing thing happening in the world right now. How do you hold those two things together — and does the stillness within have anything to say about the intelligence without?

Transcript

My interest in AI is really at a pragmatic, practical level: how can it help us do the tasks of our daily lives more efficiently and more creatively? I think the use of the word intelligence is a little misleading, because human beings have wrongly equated intelligence with thinking. From my own experience with meditation and my study of the teachings of Krishnamurti and Advaita, it's clear that intelligence is actually emergent in the abeyance of thinking, from the deeper source of life which is untouched by thought.

So it is a bit of a paradox, an interesting paradox. I think the future of humanity is, in my opinion, a little doomed at this point in time. Climate change is already a massive problem and continues to escalate, and AI and biotechnology can really accelerate some of these crises in ways we are unable to anticipate.

So what's the antidote? Honestly, I think the foundation of thinking upon which we've built human society is flawed, and we're beginning to see its impact in the way our societies are so conflict-ridden, so fragmented, and self-destructive. It's almost as if human beings are undercutting their very source of life.

So in some sense, redemption is likely to come only if there is a backward path, a retracing of the human mind back to its source, and that is through a process of meditation. I see that as really critical. In the absence of human beings undertaking this return-to-home journey, the kinds of crises we're likely to face will only escalate and compound.

So I don't think there is a choice. We do need to find out what our true nature is and allow that to begin to seep back into our human existence, into our relationships as well as the kinds of societies we build.

Obviously, this is a big challenge, because anybody hearing this response will roll their eyes and say this is just navel-gazing, it makes no sense at all, let's be practical and do what's necessary to make the world a better place to live in.

So for me personally, the meditation work is fundamental, and then comes everything else I do. Obviously, there's a practical, pragmatic aspect to one's life, and one tries to live it out as intelligently and sensibly as possible; but the more one meditates, the more that informs how intelligent and sensible one's activities are.

There is also the deeper understanding that all of human activity doesn't touch awareness; it doesn't touch the source of life. So in that sense, life itself is not in danger. It's human societies and human life that are in danger, and I don't know when we will wake up to that realization and begin to address it in a way that's truly transformational, and not, as Krishnamurti said, just moving the furniture around in the room.

So while I do have genuine concerns for the future of humanity, I am not at all worried for the source of life itself, which is untouched and unbreakable.


Featured Image: Photo by André Volkmann on Unsplash


Read about the methodology I have used to create these podcasts. It is a bit unconventional!