Artificial Intelligence

In this episode, Bill Hendricks and Fuz Rana discuss the rapid growth in artificial intelligence and what that means for the world, humanity, and our faith.

About The Table Podcast

The Table is a weekly podcast on topics related to God, Christianity, and cultural engagement brought to you by The Hendricks Center at Dallas Theological Seminary. The show features a variety of expert guests and is hosted by Dr. Darrell Bock, Bill Hendricks, Kymberli Cook, Kasey Olander, and Milyce Pipkin. 

Timecodes
05:04 - Rana’s Work in Transhumanism
13:01 - Does Artificial Intelligence Think?
22:04 - Potential Bias in Artificial Intelligence
30:50 - Is AI a Crutch or a Tool?
34:00 - What Openings Does AI Create for the Gospel?
42:59 - Technology is a False Gospel
Transcript

Bill Hendricks: 

Hello, I'm Bill Hendricks, Executive Director for Christian Leadership at the Hendricks Center at Dallas Theological Seminary. And it's my privilege to welcome you to The Table Podcast where we discuss issues of God and culture. 

Once upon a time, it's hard to believe, but there really was the world's first computer, and that computer could be programmed by humans to do things that frankly only humans could do. And soon those machines were able to do certain tasks much faster than humans could do them. And then in 1997, a computer defeated a world-class chess champion in a match of several games, at which point machines were said to have become as smart as humans. And now we have artificial intelligence, which some in the science and technology space predict will quickly join with various other technologies and actually outstrip human capabilities and lead us into a whole new era of humanity, or perhaps even something beyond humans. 

This is no longer science fiction; this is a present reality that is developing rapidly, and it raises many questions. And that's why I'm delighted today to welcome Dr. Fazale Rana. He's the president and CEO of Reasons to Believe, which was founded in 1986 by Dr. Hugh Ross, a Christian astrophysicist, to help people see how scientific research and clear thinking consistently affirm the truth of the Bible and of the good news that it reveals. Fuz, welcome back to The Table Podcast. 

Fuz Rana: 

Bill, thanks for having me. It's a delight to be here. 

Bill Hendricks: 

Well, great. And I say welcome back because we were able to feature Dr. Rana on a previous Table Podcast related to transhumanism. Today we're going to do a close cousin of transhumanism, really a subset in a way, and talk about artificial intelligence. Fuz, by training, I understand you're a bioethicist, is that correct? 

Fuz Rana: 

A biochemist, actually. 

Bill Hendricks: 

Biochemist, yeah. 

Fuz Rana: 

Yes, so I've spent my career studying the molecules that make up living systems. So I find molecules fascinating, and I see within these molecular systems very clear signatures for design and very clear pointers to the reality of a creator. 

Bill Hendricks: 

Wow. And I assume then you're a busy man these days because there's so much that's been happening in biology in the last few years. 

Fuz Rana: 

Oh, yeah. I mean, it's mind-blowing, really, the pace at which new advances are taking place. It's far outstripping anybody's ability to really keep on top of what's happening. The pace is really becoming breakneck, and this is exciting, but it's also, I think, deeply concerning, because more and more, biology is really pushing frontiers, pushing against boundaries, because scientists now have the ability to create artificial, non-natural life forms in the lab, primarily by manipulating preexisting life forms. But they're able to create these life forms unlike anything that exists in nature. And these raise all kinds of very important, very interesting, and very complex ethical issues. And so scientists can no longer afford just to be in the lab doing their work, they really have to be thinking through the ethical implications of their work as well. And unfortunately, many scientists aren't doing that, and bioethicists really are struggling, I think, to keep pace with the scientific advances. Nobody has time to even deliberate about whether we should or shouldn't do a certain thing before the next advance takes place. 

Bill Hendricks: 

Yeah. Well, so with these new complex, I mean, I hesitate to call them life forms, it's something like life. Is it life? That really leads to the topic that you wrote your book on, transhumanism. Tell us a little bit more about what transhumanism is about. 

Fuz Rana: 

Yeah. Well, scientists that are looking to create, again, these artificial, non-natural life forms fall under the umbrella of synthetic biology, and transhumanism you might think of as really being an extension of that, in that the goal is to use technology to modify our biological makeup as human beings so that we are able to really overcome our limitations, to make human beings more intelligent, stronger, more emotionally and psychologically well adjusted. And some people even think that this technology could be used to maybe extend our life expectancy to perhaps a practical immortality. And so people are looking at trying to, again, overcome the biggest limitation of our biology, which is the fact that we all are going to ultimately die. And so people see these advances in technology as really being something that could lead to a utopian type of future. And maybe even we would one day modify human beings to such a degree that we would live in a world that's post-human, where the creatures, entities that exist would be derived from us as human beings, but they would be so different that we would not recognize them as human beings. 

And part and parcel of that package of anticipating a post-human world would be maybe modifying animals so that they would have sentience and intelligence and would be granted personhood, or the machines, as you mentioned, these machines with artificial intelligence, would be granted personhood, so that the post-human world would be, again, biological entities that were modifications of human beings, but also it may even include animals that have been modified to be self-aware, to have a high level of cognitive abilities, and machines. So you really are looking at a type of future that feels very much like a science fiction novel, but this is the trajectory that many people see humanity on. 

Bill Hendricks: 

Well, it's interesting we talk about artificial intelligence that, I mean, you mentioned the limitations of biology, and I guess there's one strand of this way of thinking that actually sees the day when biology is sort of no longer needed, it just all gets turned into information. Or we could use the term intelligence of a sort, and we're on that trajectory with artificial intelligence, where you have these algorithms that you ask questions of and you get back fairly robust and detailed and in-depth answers. Sometimes there's a lot of dubious information in there, and sometimes it's remarkably human-like, if you will. 

Fuz Rana: 

Yeah. Well, I think probably many of your listeners and viewers may be familiar with ChatGPT. 

Bill Hendricks: 

Yes. 

Fuz Rana: 

It was just released not that long ago. And when I first played around with it, I wasn't that impressed. And a few months later, I went back and began to play around with it, and I was impressed with how rapidly the quality of the responses improved, just even in a couple of months. And it's remarkable, because the interactions that I've had recently with ChatGPT almost make me wonder if these systems actually would pass what's known as the Turing test. This was a test proposed in 1950 by Alan Turing, the father of computer science, where he argued that if you are interacting with a computer system and you can't tell the difference between that system and an interaction with a human being, then you have to argue that that machine has the ability to think, that that machine is an entity similar to a human being. 

Now, there are people, I'm sure, that have criticized the Turing test as maybe not being the best way to demonstrate sentience or self-awareness in machines. But the point is that, at least by that standard, you might even argue that ChatGPT very well may pass the Turing test, where you can't tell the difference between, again, interacting with that software and interacting with a human being. Or, as a customer service feature with a lot of online commerce, there are these chatbots now that interact with you, that help you navigate the site or address the concerns or the questions you might have. And it's not clear sometimes whether that chatbot is just an AI or whether it's actually a person on the other side of the computer screen responding to your questions. So we're dangerously close, I think, to people making an argument that these AI systems really deserve some kind of status beyond that of simply an algorithm or a machine. 

Bill Hendricks: 

Well, which of course raises the question, what do we now mean by human? The machine is as good as a, quote, "human." Well, then what do we mean by human? And of course embedded in that is, what do we mean by thinking? My understanding of ChatGPT, and in full disclosure, my background is in the humanities, so we are already way out of my expertise, but at least my perception and what I've read about ChatGPT and programs like it is that you ask the question, and because of the internet and all the linkages of all these little sources of information that have been placed online, very quickly these machines are out there collating, if you will, curating, if you will, all this information and boiling it down very quickly to, okay, here's what you need to know. 

And then sort of icing on the cake, often in the case of ChatGPT, is doing it in a way that's very personable and conversational, as if you really were having a conversation with a person. Which means that what you're getting is an amalgamation of whatever got put on the internet, and some knowledge never got put on there in the first place. As well, I'm sure there's an overload of certain stuff, knowledge, opinions, on the internet, and a surplus of other things that may really factor into the equation. As you understand how the brain works, and maybe on top of that, to some extent, how the mind works, is there really thinking going on there? 

Fuz Rana: 

Yeah, I don't think so. And one way to think about AI systems is to think about them being not so much akin to human intelligence, but really akin to what we might call animal intelligence. And what I mean by that is that what ultimately undergirds AI systems and their ability to learn, if you will, quote, unquote, "learn," is machine learning, which is really a type of associative learning. Remember the experiments that Pavlov did? You ring a bell and the dogs begin to salivate, because they associate the ringing of the bell with a reward, with getting food. And associative learning, people have discovered in recent years, is actually a very powerful way to create systems that are able to learn to perform complex tasks, where problem solving is possible through associative learning. A type of planning seems to emerge out of associative learning. But this is essentially what animals are doing: they're learning through associations, where there's a behavior or there's an action they take, and they're either rewarded or they're punished. 

And if they're rewarded, then that behavior is going to continue. If they're punished, they'll try different types of behaviors or different types of actions. And that's essentially what machine learning involves. As you mentioned with ChatGPT, that AI system is going out onto the internet and it is pulling in information, and then it is analyzing that information, looking for patterns, and then producing a response. And then you have a chance to give it a thumbs up or a thumbs down, and that's essentially like giving it a reward or giving it a punishment. And so it's learning through this type of association. But that training set, if you will, that set of data, somebody has decided that this is the set of data that ChatGPT is going to use, and so that's the initial information that it's using. But then as it interacts with the human user, who's, again, giving the thumbs up or the thumbs down, it's refining its responses, it's learning what is a good response and what is a bad response to particular questions. 
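To make that reward-and-punishment loop concrete, here is a minimal sketch in Python. Everything in it, the sample question, the canned answers, the starting weights, and the boost and decay factors, is hypothetical and invented for illustration; it is not how any production chatbot is actually implemented.

```python
# A toy sketch of the thumbs-up/thumbs-down loop described above: candidate
# responses carry weights that user feedback "rewards" or "punishes."
# The question, answers, and numbers are all hypothetical illustrations.
import random

# A hypothetical starting "training set": one question, three canned answers,
# all equally weighted before any feedback arrives.
weights = {
    "greeting": {"Hello!": 1.0, "State your query.": 1.0, "Hi there, friend!": 1.0},
}

def respond(question):
    """Pick an answer with probability proportional to its current weight."""
    candidates = weights[question]
    r = random.uniform(0, sum(candidates.values()))
    for answer, w in candidates.items():
        r -= w
        if r <= 0:
            break
    return answer

def feedback(question, answer, thumbs_up):
    """Reward or punish the rated answer, shifting future behavior."""
    weights[question][answer] *= 1.5 if thumbs_up else 0.5

# Each interaction nudges the weights; over many rounds, well-rated answers
# dominate. No symbolic reasoning anywhere: pure association, like Pavlov's dogs.
answer = respond("greeting")
feedback("greeting", answer, thumbs_up=True)
```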

And this is very different than what humans do when we think, when we solve problems. Because as humans, we have this ability called symbolism that we can represent the world and even ideas with symbols, and that we can combine and recombine those symbols in a near infinite number of ways to create stories, to create scenarios. So what we do when we problem solve is that we do mental time travel, that we think through different scenarios that could result from actions that we would take now and that we would anticipate what those outcomes would be. And then on that basis, we then problem solve or we make decisions based on these imaginary scenarios that we've created in our minds with an understanding of cause and effect relationships. 

Or we might reflect on what happened in the past and imagine, well, what would have happened differently if different decisions were made, if different actions were taken. And so we are not really using a training set, we're not really using associative learning, but we're really using symbolism and the capacity to create scenarios, and then mental time travel to evaluate those scenarios in our minds. And so this is fundamentally very different from what an AI system is doing. And so even though people talk about it being human-like, or making decisions like humans would make decisions, or performing tasks like humans would perform tasks, I actually think a better way to say it is that these systems are learning the way animals learn, through this associative process. 

Bill Hendricks: 

And so when we give something a thumbs up, it's essentially like giving the animal a treat, and it says, oh, well then I'll do that again. 

Fuz Rana: 

Yeah, right. Well, I mean, one example of an AI system that probably many people are familiar with, whether you realize it or recognize it as an AI system or not, is the texting app on your smartphone. It's remarkable, because as you type in a word, it suggests two or three other words that are most likely going to be the word that you're going to type next. And so what's happened is somebody has created this training set where they have all the possible words in the English language, and then they assign probabilities to those words as being the most likely word to follow after you type a specific word. So if I type "Jesus," the probability I'm going to type "Christ" next is probably very high. So initially it's going to suggest- 

Bill Hendricks: 

It offers you that, yeah. 

Fuz Rana: 

Right, those options. But as you begin to use the software, what's happening is that if you select one of those offerings, it boosts its probability the next go-around. But if you don't select any of them, it reduces their probability. And if you type a word that it would not have anticipated, it remembers that, and then it assigns that an increasingly higher probability each time you type that word that it wouldn't have initially expected. But this is essentially associative learning, where it's starting off with a set of options and it's refining those options based on rewards and punishments that you're giving to that system, either by selecting or not selecting a particular word choice. 
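Here is a minimal sketch in Python of the suggestion-and-reward loop just described. The class name, the seed words, and the weights and update factors are all hypothetical, chosen only to illustrate the idea; real keyboard prediction is considerably more sophisticated.

```python
# A minimal sketch of associative next-word suggestion: seed weights act as
# the initial training set, and each choice the user makes boosts or decays
# them. All words and numbers here are hypothetical illustrations.
from collections import defaultdict

class NextWordSuggester:
    def __init__(self):
        # counts[previous_word][candidate] is an unnormalized probability.
        self.counts = defaultdict(lambda: defaultdict(float))

    def seed(self, previous_word, candidates):
        """Load the initial training set of likely next words."""
        for word, weight in candidates.items():
            self.counts[previous_word][word] = weight

    def suggest(self, previous_word, k=3):
        """Offer the k highest-weighted next words."""
        options = self.counts[previous_word]
        return sorted(options, key=options.get, reverse=True)[:k]

    def observe(self, previous_word, typed_word):
        """Reward the word actually typed; gently punish the rest."""
        for word in self.counts[previous_word]:
            if word != typed_word:
                self.counts[previous_word][word] *= 0.9  # decay: mild punishment
        # Boost the chosen word; a word the model never anticipated is
        # remembered and gains weight each time it recurs.
        self.counts[previous_word][typed_word] += 1.0

suggester = NextWordSuggester()
suggester.seed("Jesus", {"Christ": 5.0, "said": 2.0, "wept": 1.0})
print(suggester.suggest("Jesus"))   # ['Christ', 'said', 'wept']
suggester.observe("Jesus", "wept")  # the user picks the unlikely word
print(suggester.suggest("Jesus"))   # 'wept' climbs as it keeps being chosen
```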

So that's what an AI system is doing. And AI experts would call this narrow AI or weak AI or specialized AI, where the AI system is designed for a specific task, but it has to be trained to do that task with a set of data that it is extracting patterns from, and then it makes decisions or produces output based on those patterns. In contradistinction, the Holy Grail of AI research would be something called artificial general intelligence, where the AI system can make decisions without any kind of training ahead of time. And that's in effect what we do as human beings: we can make decisions without having any prior experience, we can reason through things, again, based on our capacity for symbolism and this open-ended ability to manipulate those symbols. 

Bill Hendricks: 

Well, it's interesting to me that in that system, particularly the weak AI, as you put it, there's still the need for a human element in the chain, because of course, this whole AI thing raises massive issues of trust and believability. Because you can get... A blog post that I read gave a college history exam answer that was pristine and beautiful and just had all kinds of facts nailed. The only problem was it identified the wrong person as having come up with this particular way of thinking back in history. And obviously over time it would learn, no, it was this other person. But what the blogger pointed out was that verifying and editing have now become essential skill sets for every individual. So we get the answer from ChatGPT or whatever, and we have to look over it and see, well, what parts of this are spot on? But then what parts need to be edited? And I guess that's okay, but there's still that human element that has to do what the machine can't do. 

Fuz Rana: 

Ultimately, the best uses of AI systems are always going to be those where the AI system becomes a tool and there's always human oversight. And if we ever move away from having human oversight at some level, I think this is where the real danger and the real misuse of AI can take place. You can never remove that human element. But even with ChatGPT, there is somebody that has made decisions as to what constitutes the data it's going to draw from. 

Bill Hendricks: 

Right. 

Fuz Rana: 

And even the way the AI system works is, again, there is a bias. The developer has introduced their own particular bias or their philosophy or their set of values into what data's going to be selected and what data isn't, but also how that system responds. So it's very interesting: when you ask ChatGPT a question that is more philosophical or theological or values-oriented, oftentimes it does a great job of saying, "Here are the two perspectives." But then what it does is it'll say, "It really is ultimately up to the individual to decide which perspective they're most comfortable with." Now, that's very interesting to me, because on one hand you could say, well, the developers were trying to be fair and balanced, weren't trying to push any particular agenda, or at least they're giving that appearance. But what they have ultimately done is essentially introduced this idea of relativism as a philosophical framework for how ChatGPT is going to respond to theological, philosophical, or values-oriented questions. 

It's up to the user to decide what is ultimately true. So even then, there is a philosophy that is permeating how ChatGPT operates. So you have to do your due diligence in terms of understanding where is this AI system getting its data? How is it training? Can that training be manipulated by people that might want to skew its response by flooding the AI system with particular types of pluses or minuses or thumbs up or thumbs down? And is there an undergirding philosophy that is at play in terms of the way the AI system works? 

Bill Hendricks: 

Well, that is a mark of human beings, isn't it, in that whatever we create, whether it's a computer program or a car or a piece of poetry or anything that we develop, we always leave the fingerprints of our own worldview and moral inclinations. We can't really ever get away from that. It becomes embedded in the very design of what we've put together, which frankly, I think, factors into your story. You talked about the design of cells; there's fingerprints there. The intricate design tells you something about the designer. And that's part of being made in the image of God, is that everything we do, we communicate something about ourselves, because that's what God does every time He communicates in any way, shape, or form, anything He creates, anything He reveals. But the designer of a computer algorithm, it sounds so neutral, so technical, so removed. And yet even in the putting together of that code and what it's designed to do, you can't help but inject moral, I'll call them biases or inclinations, philosophical inclinations, et cetera. 

Fuz Rana: 

And in many respects, I think to build off of your point, there's nothing wrong with somebody introducing their particular bias into the content that they would create or an algorithm that they would write or the data set that they would choose to train their AI system. It's human nature, we all are biased. We can't avoid that. But when I was growing up and learning to read and to think critically, it was a time when there weren't AI systems, there wasn't even the internet. And so part of what we learned to do is if we read an article, we would ask the question, well, who is that author? What is their expertise? What is their political leaning? What is their religious leaning? Do they have an agenda? Are they trying to promote a particular perspective? What is the publication that they're writing for? What's its orientation? What is it trying to accomplish? So as you read, you understood that there were biases. 

Bill Hendricks: 

And you wouldn't necessarily even hold those against them. You'd just factor that into your interpretation of what they're giving you. 

Fuz Rana: 

Exactly. And I think we have to be careful now, as we move into this era of AI, that we double down on that kind of critical evaluation. And one of the things that I find very frustrating about ChatGPT is I've asked it several times, could you give me a reference for the information you provided? And it tells me it can't do that. 

Bill Hendricks: 

Yeah, right. 

Fuz Rana: 

Whereas with Google and other search engines, we know that there are algorithms that are used that kind of elevate certain articles to the top and push other articles down, based on some kind of, again, algorithm that the search engine designers have put in place. And a lot of people try to game that by trying to understand the algorithm and how they can manipulate it to get their article visibility. But what I at least like about that is I have a sense for what is being promoted and what isn't. And I know that there are other things out there that Google isn't suggesting, but I can at least go and read the article, and I can ask the question, well, who is this person? And what is this website all about? What is this publication all about? So I'm able to critically evaluate the information as I'm consuming it. But I can't do that with ChatGPT, and that to me is very frustrating. 

Bill Hendricks: 

You talked about, as long as humans, I'll use the phrase, stay in control, we've got a shot at some good uses. I think about human nature, and I think about how lazy humans can be, and at some point they just get tired of being in control. The technology just becomes too easy to flip the switch, and it's giving me good enough stuff. So for instance, having breakfast recently with a friend, he had just gotten on ChatGPT and he was playing around with it. And so he asked ChatGPT to write a letter to a dying parent, and he said it returned a remarkably touching, emotionally compelling piece. And he sort of stopped and thought, but is this morally acceptable? I mean, what if I actually sent that to a dying parent? Is that legitimate? 

And of course, somebody could say, well, how is that really all that different from just going out and buying a Valentine's Day card to give to your wife on Valentine's Day when the card expresses so much better, more eloquently, anything you could say or a sympathy card to somebody who's lost a loved one. Somebody could say, Bill, it's not the words anyway, it's the gesture. But I don't know what will the world be like? What will humans be like when they have a program that creates their communication for them and they never have to go through the struggle of sorting out their feelings and finding ways and words with which to communicate? 

Fuz Rana: 

I mean, you're raising a really profound point, I think, that really deserves a lot of thought, because to me, one of the things that I find frightening about not just ChatGPT, but generative AI as a package, is this: is it going to make humans irrelevant? As you said, if I can have ChatGPT write a sympathy note to somebody, or a love letter to the person that I care about, have I become irrelevant? Am I no longer a creator? If a poem or a piece of music or a piece of art can be produced by an AI system, it seems like you're robbing human beings of one of the things that defines us, which is our creativity, that we are creators, and these AI systems are really making human beings unnecessary to create. 

Or as AI systems become more and more capable of performing more and more sophisticated tasks, you no longer have humans that are even necessary to do a lot of work. And again, our ability and our investment of time and effort in the work that we do is very much part of what it means to be a human being. So if you eliminate the need that we have to work, the need that we have to be creative, and you've granted that to these AI systems, have we fundamentally robbed human beings of the core of who we are? Have we made human beings irrelevant? That to me is profoundly chilling. 

Bill Hendricks: 

I think it's very chilling, and I think it ultimately begins to raise the possibility of robbing humans of just work itself. And at what point do humans actually start working for the machines instead of the other way around? I want to tie this into the work that you do at Reasons to Believe, because at Reasons to Believe you are grappling with scientific developments and the questions that they raise, and finding avenues by which to help people begin to discover Jesus and the gospel. And I guess I'm just curious, as you've thought about all this, what openings, as it were, do you see AI creating for the gospel? 

Fuz Rana: 

Well, I think as we continue to develop AI and other types of technologies like AI, we are rapidly moving into a time where you might say that humanity is practicing a type of religious system known as techno faith, where we are looking to science and the technology that comes from science as a way to mitigate pain and suffering in the world, to perhaps produce a type of utopian future, to solve some of our most pressing problems where people are going to begin to put their hope, their trust in science and technology. And this will become particularly true as we become increasingly secular in our world. And so in a sense, you could see techno faith as really a competitor to the gospel. 

And in fact, these advances in AI begin to bleed into this movement known as transhumanism, where it's an idea that, again, is dressed up in the language of science and technology, but it really is a religious idea at its very core, where our ultimate salvation is going to come from science and technology and our ability to modify our bodies in such a way that we have a type of practical immortality. So it very much is a religious idea. And one of the areas of human enhancement technology that really, again, intersects with AI would be what are called brain-computer interfaces. These are electronic devices that you can implant in the brains of test subjects or patients, and these systems can extract electrical activity from the brain and communicate user intent to computer software or machine hardware. 

Now, this is going to revolutionize how we treat people that are amputees; they can use these BCIs to control robotic prosthetic limbs. Or it's going to revolutionize how we treat patients that are paraplegic or quadriplegic, where they could learn to control exoskeletons with their thoughts. People that are locked in, that can't communicate because of brain injuries, can communicate by thinking and converting their thoughts into text. But what's happening is that as these BCIs are asked to do more and more sophisticated tasks, in order to extract the user intent from the electrical activity in the brain, it requires essentially machine learning algorithms that learn to associate patterns of electrical activity in the brain with, again, user intent. 
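As a rough illustration of that pattern-association step, here is a minimal sketch in Python of a classifier that maps features of recorded brain activity to user intents. The feature vectors, intent labels, and training examples are all made up for the example; real BCI decoders work with far richer multichannel signals and far more elaborate models.

```python
# A rough sketch of the pattern-association step in a BCI: a classifier learns
# to map features extracted from brain activity to user intents. The feature
# vectors, labels, and numbers are invented for illustration only.
import math

# Hypothetical training data: (signal features, the intent the user had).
training = [
    ([0.9, 0.1, 0.2], "move_left"),
    ([0.8, 0.2, 0.1], "move_left"),
    ([0.1, 0.9, 0.3], "move_right"),
    ([0.2, 0.8, 0.2], "move_right"),
]

def centroids(data):
    """Average the feature vectors recorded for each intent."""
    sums, counts = {}, {}
    for features, intent in data:
        acc = sums.setdefault(intent, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[intent] = counts.get(intent, 0) + 1
    return {intent: [v / counts[intent] for v in acc]
            for intent, acc in sums.items()}

def decode(features, model):
    """Map a new pattern of activity to the nearest learned intent."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda intent: distance(features, model[intent]))

model = centroids(training)
print(decode([0.85, 0.15, 0.2], model))  # -> "move_left"
```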

So now you're forming this collaboration between the human, this BCI, and the machine learning algorithms or the AI systems. And so it raises a lot of very interesting questions about autonomy and who's in control, those kinds of questions. But these kinds of advances give a lot of excitement to people who think that maybe one day we might be able to upload our minds into a machine framework and attain a practical digital immortality. And so interestingly enough, people like Elon Musk, with his company Neuralink, are trying to develop the next generation of BCI technologies, because they see this as necessary to be able to essentially interface the human brain with computer systems if we are going to be able to maintain a competitive edge over the AI systems that we are creating. And so he's become a reluctant transhumanist, ironically, because of his concern about what AI systems are going to generate. 

And remarkably, you have to use AI systems in order to compete with AI systems in Elon Musk's framework. But the larger point here is this: these kinds of advances are leading to this idea of techno faith, where people are going to turn to science and technology for salvation, not turn to the gospel. But what I see happening here is that these kinds of advances are really exposing the deepest need that we have as human beings, which is a need for hope, purpose, and destiny; that we desire to connect to the transcendent; that we recognize that death is wrong; that we recognize that there's something tragic in the possible extinction of humanity. And so people are searching for salvation, it's just that the source of their salvation is misguided, is misdirected. But this gives us an opportunity to really talk about the gospel in an exciting new way, in an exciting fresh way, in a world that's becoming, again, increasingly secular and increasingly techno-savvy. 

Bill Hendricks: 

Well, it would certainly seem in one sense, I think, that technology is a savior as long as one actually is getting saved. But I think most people, while in spirit they might say, oh, sure, I want to save the species, promote the survival of our species, not necessarily at a cost to them personally, because they kind of get left behind in that rush. In other words, one of the things that is exposed, as you put it, by some of these technologies that are developing, is the radical inequalities that begin to open up. I mean, if nothing else, you certainly have some very, very, very bright people, aided by some very, very, very wealthy people, who make up this elite group developing these things and operating them and having control over them and creating the algorithms and so forth and so on. And this widening gap between them and, I mean, the 91% of the world that makes less than $10,000 a year. Those people, how would they ever gain access to or have benefit from these systems, certainly in their lifetime? 

And it's interesting you mentioned Elon Musk. My understanding is that part of the reason he wants to select just a handful of people, eight people or whatever, to go colonize Mars is that's the lifeboat from which we will as a species get off the planet. But the admission there is, well, we've kind of messed things up here, so we need to go start over somewhere else. And implicit in that to me is, well, but we haven't figured it out yet, so why start somewhere else and mess that one up too? 

Fuz Rana: 

Yeah, your point about inequality is a very powerful point, and we really are very rapidly moving to a world of haves and have-nots. And the potential for abuse is enormous in that particular framework. And to me, part of the irony of trying to gain salvation through the use of technology is something that philosophers call the Salvation Paradox: ultimately, what we save isn't going to be us, it's going to be something else that we have created. Or if you are looking to save yourself by uploading your brain or your mind, or a copy of your brain or your mind, into a machine framework, what's being saved is a digital copy of you, not you. 

And so I think the larger point is that technology is a false gospel, and it's only going to disappoint us if we turn to it for salvation. And that's something that we need to articulate as Christians to our culture is that, hey, technology can do great things if it is managed well, but it can never ultimately save us. And that the only place that we can find salvation is in the person of Christ, but at least people are exposing the fact that they crave after salvation. They're looking for something to save them. They realize the desperate situation that they're in. And that's a very critical first step towards people hearing and responding to the gospel. 

Bill Hendricks: 

Well, it's a fascinating set of competing narratives, isn't it? On the one side, you've got a technology that's holding out the promise of salvation in a very technological way, and then you have the narrative of a creator who actually entered into humanity, who didn't remove it, but actually came into a messy world with the offer of salvation and kept the humanity. And in fact, by becoming human, dignified that humanity and literally redeemed it. 

Fuz Rana: 

Yes, and the challenge that we have is to show, in the midst of these competing narratives, that the Christian offer, the Christian hope, is where there's true hope, there's true salvation. That's going to be the task that's in front of us as we move into this future. But the allure of techno faith and the gospel of transhumanism is a very powerful allure, particularly for people that are of a secular mindset, who have rejected belief in God, have rejected belief in the supernatural. If you are a materialist, if you're an atheist, there is no ultimate hope, purpose, or destiny. And so transhumanism and these kinds of emerging technologies really offer some kind of hope for people who have rejected the possibility that there is a God. And so it's a type of eschatology that's being offered. And so again, it's going to be a very attractive gospel, to be clear. But I think people that embrace that gospel are extremely naive in terms of what technology can really ultimately deliver. 

Bill Hendricks: 

Well, of course, back in Genesis 11, we have the story of the Tower of Babel and humanity coming together on a giant project that was very technologically driven. Nothing wrong with technology per se, but as you said, it's how it's managed. And the Lord comes down and sees what's going on and realizes how impressive it is as technology so often is. And He says, if as one people all sharing a common language, they've begun to do this, then nothing they plan to do will be beyond them. And of course, at that point, He decides to confuse the languages and stall things out for a while. But here we come back again on a big giant project, and it'll be interesting to see how the Lord responds this time. 

Obviously, we haven't come up with all the answers today, but I think we've come up with at least some insights, Fuz, and I am so grateful to you for your time and being so key in this conversation. Thank you for the work that you're doing. Thank you for being on The Table Podcast. 

Fuz Rana: 

Oh, thanks for having me. It was a lot of fun and enjoyed the chat, Bill. 

Bill Hendricks: 

Well, we will definitely keep this series going. There's a whole lot of stuff developing, and I know there's going to be a lot of questions, and I want to thank all of you who've joined us today for this podcast. If you're on a subscription service, we would invite you to please leave a review about your experience with today's podcast, and of course, we would invite you to subscribe. We come out with The Table Podcast weekly, and that way you'll be able to keep up on the regular rotation of what we do. But for The Table Podcast, I'm Bill Hendricks. We'll see you next time. 

Bill Hendricks
Bill Hendricks is Executive Director for Christian Leadership at the Center and President of The Giftedness Center, where he serves individuals making key life and career decisions. A graduate of Harvard, Boston University, and DTS, Bill has authored or co-authored twenty-two books, including “The Person Called YOU: Why You’re Here, Why You Matter & What You Should Do With Your Life.” He sits on the Steering Committee for The Theology of Work Project.
Fazale Rana
Dr. Fazale Rana, PhD (Ohio University), Vice President of Research and Apologetics at Reasons to Believe, became a Christian as a graduate student studying biochemistry. The cell’s complexity, elegance, and sophistication, coupled with the inadequacy of evolutionary scenarios to account for life’s origin, compelled him to conclude that life must stem from a Creator. Reading through the Sermon on the Mount convinced him that Jesus was who Christians claimed Him to be: Lord and Savior. Still, evangelism wasn’t important to him until his father died. His father’s death helped him to appreciate how vital evangelism is. It was at that point he dedicated himself to Christian apologetics and the use of science as a tool to build bridges with nonbelievers. In 1999, he left his position in R&D at a Fortune 500 company to join Reasons to Believe because he felt that the most important thing he could do as a scientist was to communicate to skeptics and believers alike the powerful scientific evidence, evidence that is being uncovered day after day, for God’s existence and the reliability of Scripture.
Details
July 25, 2023
AI, Artificial Intelligence, technology