Bonus Dad Bonus Daughter

Robot Overlords: Should We Be Worried? Our Complex Relationship with AI - Part Two

Bonus Dad Bonus Daughter

Send us a Comment, Question or Request, we'd love to hear from you

Artificial intelligence is rapidly transforming our world, from healthcare to warfare, presenting both extraordinary benefits and profound dangers that we must carefully navigate.

• AI today includes tools like ChatGPT, Gemini, and systems creating art, music, and even full-length movies
• AI in healthcare is detecting diseases faster than doctors, personalizing medicine, and designing new drugs
• Business applications include automating jobs, enhancing decision-making, and transforming industries like finance and law
• Self-driving vehicles, AI-powered drones, and humanoid robots are becoming reality, raising questions about safety and control
• The future may bring artificial general intelligence (AGI) that matches or surpasses human capabilities
• Brain-computer interfaces could allow direct AI interaction with our minds, raising serious privacy and identity concerns
• Pop culture representations in films like The Matrix, 2001: A Space Odyssey and I, Robot increasingly reflect genuine technological possibilities
• The rise of an "AI elite" controlling powerful systems could fundamentally alter power structures in society
• Deep fakes and AI-generated propaganda already threaten to manipulate elections and public opinion


Support the show

Speaker 1:

Hello and welcome to Bonus Dad, Bonus Daughter, a special father-daughter podcast with me, Hannah, and me, Davy, where we discuss our differences and similarities and share a few laughs and stories.

Speaker 2:

within our ever-changing and complex world. Each week we will discuss a topic from our own point of view and influences throughout the decades, or you could choose one by contacting us via email, Instagram, Facebook or TikTok. Links in bio. Hello and welcome to another episode of Bonus Dad, Bonus Daughter. Artificial intelligence, part two.

Speaker 1:

Part two, yes, because we did part one a week ago. They know that we record three in one. Still wearing the same shirt, still wearing the same dungarees, which are really cool, thank you very much. Should we just touch on that conversation that we've just had with Mitchell?

Speaker 2:

Go on then.

Speaker 1:

So we had a little bit of a lifestyle update last week, and Mitchell has just returned from the shop.

Speaker 2:

Yes.

Speaker 1:

Yeah, veering shampoo, veering shampoo. And Hannah mentioned to Mitchell the fact that I'd said the all-inclusive hotel Hannah could be going to might actually be a swingers resort, and hilarity ensued.

Speaker 2:

It did, it did, it did. We were talking about pineapples and how I now can't have a Hawaiian pizza, which is so upsetting. I also want to show our visual viewers, but I will describe it for the audio people as well. I am wearing baked bean socks today and I'm just showing the camera now. But basically, for the audio peeps, just imagine a blue or maybe, what do you call it,

Speaker 2:

that, turquoise, a turquoise sock with little baked beans on it. Anyway, it's like a little Heinz beans thing. Again, other beans are available, but mine says Little Bean, and Mitch and I always coordinate, and his says Big Bean. Now my socks were supposed to be for a child, but I have child-sized feet, and I will always remain the little bean and he will be the big bean. So I just wanted to show everyone that we have matching socks, and this is peak DINK husband-and-wife duo situation.

Speaker 1:

Okay.

Speaker 2:

Dink goals.

Speaker 1:

Because you have your little Instagram page as well, don't you? It's like big and little adventures.

Speaker 2:

We do, we do. I've not been posting much on that recently only because I haven't been anywhere.

Speaker 1:

You have to be careful what you post when you go to Tenerife.

Speaker 2:

Yeah, little big adventures, swingers, little big swingers.

Speaker 1:

Oh my God, so funny. It's actually big and small, but yeah, big and small swingers yeah.

Speaker 2:

Dear God. It sounds like we're talking about the actual genitalia now, doesn't it?

Speaker 1:

The actual genitalia.

Speaker 2:

The actual genitalia.

Speaker 1:

So shall we go on to the episode yeah, let's do that.

Speaker 2:

So this is part two of Artificial Intelligence, because we overran.

Speaker 1:

We did. So what we're going to discuss today? We're going to talk a little bit more about where we are with AI today.

Speaker 2:

Yeah, sorry, I don't know why I made that noise um where?

Speaker 1:

we are with AI today. Uh, we're going to talk a little bit about the future of AI, what's coming next and what we think might be coming next. Um, yeah, we don't actually know. Yeah, the benefits of AI, the dangers of AI, and then we're going to finish off with a little bit about AI in pop culture. Yeah, so that's going to be the structure. The structure, oh my gosh, we've got a structure. We've got a structure just because we went so off-piste last time in the last episode. I know, it is my fault.

Speaker 2:

I started rambling about computing at school and then literally my mind went completely blank and then I looked at the time and I was like we have been talking for 40 minutes. We've only discussed a small segment of AI.

Speaker 1:

Exactly so. You're welcome. So we're going to be a little bit more structured.

Speaker 2:

I feel gutted for the person. It's like oh my gosh, they've done an episode on AI. Goes to that episode and like 40 minutes of it is literally just our life update and the last five is artificial intelligence.

Speaker 1:

And they have to wait another week for this one. Just save them up. Yeah, save them up. So where are we with AI today? So we do have the likes of ChatGPT, we've already mentioned that. Uh, we've got AI which is now creating art, music, stories and even full-length movies. Yeah, AI does create movies.

Speaker 1:

Crazy, it is. It is completely crazy. Yeah, I mean, actually from a D&D perspective, right? So when you're playing D&D, I think that could be quite useful. Thinking, like, when we're playing D&D, actually seeing your characters through an AI rendering of what's actually happening. I quite like that idea.

Speaker 2:

Yeah.

Speaker 1:

Of a visual representation, so it's almost like a live-stream version of, like, Baldur's Gate.

Speaker 2:

Yeah, I can. I can see, why that would be visually cool yeah yeah, but.

Speaker 1:

Yeah, I mean, good for gameplay, uh, but not necessarily needed. No, no, I mean, we'll come on to the pop culture later with it, because I know things like Love, Death & Robots have been using it, which is absolutely fantastic. I love Love, Death & Robots.

Speaker 2:

Yeah, it's absolutely fantastic. Check it out on Netflix, which uses AI. It is, Netflix. Have you seen the game version of that as well? No. It's called, um, I think it's on, weirdly enough, another streaming service, like Prime or something. It's made by the same people, but they do adaptations of games. Right, oh, no, no, yeah, yeah, yeah, no, that is.

Speaker 1:

I have seen that um what it's called.

Speaker 2:

It was actually Mitchell who told me about that. Yeah, yeah, yeah, it was well good, because the Pac-Man one, yeah, yeah, that was brilliant, that was really good. It's like reimagined Pac-Man.

Speaker 1:

It was, very, yeah. Um, you mentioned this in the previous episode, AI in healthcare. I did, yeah. Detecting diseases faster than doctors, personalising medicine and designing new drugs. In fact, this goes back to one of the previous episodes, when you were talking about women's health. Yes, yeah, and you were saying about, well, it will be out when this comes out, when you talk about the personalisation of medicine and the dosages and things like that. So you think that, yeah, AI would be useful in that. Maybe. In that forum.

Speaker 2:

I've asked ChatGPT to diagnose me.

Speaker 1:

Have you, yeah, and what's it?

Speaker 2:

says endometriosis.

Speaker 1:

Does it?

Speaker 2:

Yeah, so I'm just taking that as my formal diagnosis now.

Speaker 1:

Okay, so we've got AI in business and productivity, such as automating jobs, enhancing decision-making and transforming industries like finance, law and customer service. I must admit, right, so in my role, I do write policies.

Speaker 2:

Yes.

Speaker 1:

And when I write policies, they have to be legal.

Speaker 2:

Yeah.

Speaker 1:

Yeah, so with one of the policies that I did just write, I copied and pasted it, put it into an AI and said, what laws are covered by this policy? And it reeled off like five or six laws.

Speaker 2:

Was it accurate in that?

Speaker 1:

It was Wow, because then I obviously double-checked it and checked it against it. It was accurate and things like health and safety at work.

Speaker 2:

That saved you some time.

Speaker 1:

It did save me some time. It did save me some time. So you know, I think it is useful in that regard.

Speaker 2:

I think in a customer service context, if I may be that guy, I do sometimes find myself quite frustrated and think, can I just speak to a human? Like, I feel sometimes that the chatbots don't do it. Say, for example, a couple of years ago I had to change my name on everything because I got married, and they go, oh, go to a chatbot instead of a human. And it's like, oh, please can I change my name? Here is a number to speak to a representative. And you're like, oh, why wasn't that option there in the first place?

Speaker 1:

Exactly, it just annoys me so much. There are some low level questions that can get answered by AI bots yeah, that I need doing.

Speaker 2:

I don't know if it's just because I'm tech savvy enough that I know that I can change my address on things and just do that. But, it annoys me when I go to it and it's like use our chat service and then it's only open between nine and I don't know nine and five or whatever.

Speaker 1:

And you use that and then it's like, please talk to a human. Um, but I must admit I do find it fairly useful in that regard. I have used it, as I say, not to write the actual main bulk, but when I've written something, I've put a policy or procedure or something into AI and then gone, can you just tidy this up a little bit? And then I've read it and gone, okay, now I'm going to change that bit back and that bit back. So it's almost like a collaboration rather than getting it to do it. I use it as a fairly useful tool in that respect.

Speaker 1:

Then, of course, we've got AI and robotics, and this is where we're going down a bit of a scary road, in the fact of humanoid robots, AI-powered drones and self-driving cars. Did you see that thing on the news the other day about China, where they're looking at deliveries with lorries which are completely powered by AI? So they still have a driver, they still have a human in there at the moment, as it stands. So basically, the lorry drives by itself, completely self-driving, using sensors, but there is a human sitting in the driving seat ready to take over in case it does go wrong. Now I'm already foreseeing a problem with that. What if he's asleep?

Speaker 2:

Yeah, yeah, because you're not actively doing something. Yeah, I know, you could quite easily nod off. I know you can nod off at the wheel anyway, but when you're actually doing something, you're less likely to nod off. I don't know how I feel about the human in that situation.

Speaker 1:

But also, when you're actually driving and you're doing something, you are, what's the word, focused and you're in there. But if you've got this thing that's doing it for you, your reaction time is still going to be delayed, isn't it? Yeah, but they've also got these smaller, um, vehicles as well, so they're not lorries, but they're quite.

Speaker 1:

Probably only about, we talk about, no, not even that big, only about double the size of this table, where they're riding down the road like a box on wheels with deliveries in to go to certain places. Is it just food delivery, or like Amazon? A bit of both. All right. Um, and then, of course, it's that whole thing about the Amazon drones as well.

Speaker 2:

Remember they said they were going to do Amazon deliveries by drones? Well, that is a thing in America, like they do drop them out of the sky, yeah, into their back gardens and stuff. But then drones do kind of worry me a little bit as well, because, don't, just talk very briefly about warfare.

Speaker 1:

Uh, warfare, yeah. Warfare has kind of changed from what you think of, you know, how warfare was then and how it is now. It's very drone-based, um, drone strikes, that type of thing, and that's all done by AI, and you kind of remove that human element of actually fighting on the front line.

Speaker 2:

It's almost like we've got robots to fight for us exactly it's kind of the road we're going down.

Speaker 1:

I mean, you look at what happened in Russia not that long ago, where they actually sent shipping crates with drones inside, and then the drones came out of the shipping crates and destroyed the bombers in Russia. Now I don't know if it's true or not, but I have heard one report say that was powered by AI, that those drones were powered by AI. I don't know how true that is, I really don't, but you've got that element of it as well. Again, humans can't be trusted with things because we abuse it. Then, of course, we've got the, yeah, quantum computing. Have you heard about that? No. Quantum, Mitch would know all about this, where quantum computers are so fast. Yeah, actually.

Speaker 2:

Yes, I do know about this because don't they have, like, the, the largest server rooms ever? Yeah, yeah, but these things are well.

Speaker 1:

These, you know, are tiny.

Speaker 2:

The chips are really small, tiny yeah.

Speaker 1:

The processing power is astronomical.

Speaker 2:

Yeah, you know that's Like unheard of.

Speaker 1:

Oh, the next one on there I've already mentioned, which was AI in warfare. Yeah, autonomous weapons, drone swarms, AI military strategy. I mean, because that's something else as well. When you look at it, we were saying about AI being kind of like an information tool, and I know that was the lead slapping on the tape, yeah, looking at AI as, um, analysing data. Yes. So you think how quickly it can do that, and that's not just in warfare, I mean, that's across the whole board.

Speaker 1:

AI can analyse tons of data in seconds and then bring it out, and then you've got that information there for you. In essence, kind of what I've done here with this particular episode, because I used AI on an AI episode to see what came out, and it analysed all that data and this came out in literally seconds.

Speaker 2:

Yeah, yeah, literally seconds. That's mad, isn't it? Yeah, that's so mad.

Speaker 1:

So the major AI players that we've got at the moment: OpenAI, which we've mentioned, they're the big boys. Yeah, which is ChatGPT, which is essentially what is on this tablet right now.

Speaker 2:

Yes, that's what.

Speaker 1:

I'm looking at, yeah. Um, Google DeepMind, such as Gemini.

Speaker 2:

Yeah, do you ever.

Speaker 1:

Of course, you've got Siri as well, haven't you?

Speaker 2:

Uh, I don't know if Siri... Yeah, I guess, I think it has AI intelligence.

Speaker 1:

Now, yeah. Have you had that where your phone's in your pocket, because I've got a work phone which is Android, and it goes, how can I help you today? Yeah, I was in a meeting on Teams the other day and I was talking, and then from my bag my phone went, I'm sorry, could you repeat that?

Speaker 2:

Yes, what the hell. So in this particular room, I can't say it, but in this room I have a Google hub, so when I activate that it will turn on the lights as well, which I know sounds super lazy. But all of our switches are covered by either heavy furniture or one that's just inconveniently over the other side of the room. And you are small. And I am small, you are little. Um, so yeah, I can say a command and it will turn on the lights. Yeah, I don't know if that's technically artificial intelligence.

Speaker 2:

I think that's just more automation. But I have Google's version of that. I don't know what Google's is called. Oh, it's Google Assistant, that's what it's called, Gemini, Google Assistant. You often see adverts with people talking to Gemini and having real conversations and stuff.

Speaker 1:

Yeah.

Speaker 2:

What I do like about that side of things is I think it's quite good for people that suffer from sort of the loneliness side, and I think there's definitely a market for that. I know a lot of elderly people feel quite isolated, and I guess they're less likely to use that type of technology, but if it was rolled out and they were shown how to use it, I reckon that would boost their morale a little bit, even though it's a fake person.

Speaker 2:

But then again you get into the Black Mirror side of things. Like then you have a relationship with AI and, yeah, it would like touching on that side of things, which is odd.

Speaker 1:

I think, yeah, you've got the film with Megan Fox Subservience or something like that. I haven't seen it, but I've seen it on.

Speaker 2:

Yeah, there's one where Scarlett Johansson voices the lady in it. She's talking to the man the whole time and, I don't know, oh yeah, they have quite a deep connection sort of relationship. Yeah, um, yeah, and she gets annoyed at him and he gets annoyed at her, it's just little things like that, and then she ends up going completely sentient. Yeah. She meets up with other AI people and leaves him, essentially. Yeah, yeah, mad. Um.

Speaker 1:

So, of course, Microsoft has obviously got Copilot. I think you mentioned that previously, you know, with the AI-integrated office tools, and like Grammarly, like we mentioned before, as well as 365. Meta has got AI-generated avatars, yep, and then, of course, Tesla, old Elon.

Speaker 2:

Old Tezzy.

Speaker 1:

Old Tezzy, old Elon. I mean, he's got what they call the Optimus robot. Optimus Prime, I know, yeah, yeah. So that's kind of where we are at the moment. I think we're still in the, I think, you know, AI has, what would you say, in the past two years?

Speaker 2:

Definitely, you'd say, definitely, it's really starting to be more and more prevalent in everyday usage, and the fact that it could be used in schools now. That was never a thing for me in school, and definitely not for you either.

Speaker 1:

Like, it's mad. Computers weren't really a thing, you know, when I first started school. Yeah, it wasn't really that kind of thing. Yeah, but now it's kind of in everything. I mean, even, like, I recently got the new iPhone, I say recently, not that long ago, um, but it said it now comes with Apple Intelligence. You know, everything has got some form of AI kind of built in there, but again, it's not sentient, it's just using that collating of data.

Speaker 1:

So I've looked at what the future of AI is. I've looked at the short term, within the next 10 years, then within the next 25 years and then beyond that, so from 2050. So a lot of this is kind of possible, yeah. But again, like I was saying in the previous episode, some of the stuff that may be mid-term or even long-term, people are already looking at it. We just won't know or it won't be brought out because the research is already being done.

Speaker 2:

Yeah, yeah, it's fun.

Speaker 1:

So short-term, in the next 10 years, uh, looking at AI as a co-pilot. AI assistants will integrate deeper into daily life, helping with writing, coding, research, creativity, almost like Jarvis. Yeah, I do think about him quite a lot actually.

Speaker 2:

I do think about what ends up being Vision, essentially. Yeah, yeah, that again is another sort of pop-cultural type reference. But yeah, when I think of sentient AI, I do often think of Vision, and he did go off-piste, like, you know, he did. Um, who, what did Jarvis become? What's the mean one? Ultron, Ultron.

Speaker 1:

Ultron yeah.

Speaker 2:

Jarvis to Ultron, to Vision.

Speaker 1:

Yeah, but it is. You know, you can even see that in the Marvel films with Iron Man, or Tony Stark, when he's talking to Jarvis, and it is that kind of back and forth.

Speaker 1:

It's a relationship, it's like his friend, yeah, and Jarvis was his father's butler, essentially, so it's even bringing in that familial connection, yeah, calling him Jarvis. So, like, working on projects together, yeah, to come up with something, yeah. So when you actually think of it like that, that's actually quite good, you know, it's positive. That's why I think it should be part of the school curriculum, because there is a positive influence there.

Speaker 2:

It can help you create. It's almost like having, like I've mentioned before about writing and prose, it's like having a writing partner, just checking over something, and then it might pop up a few ideas that, yes, a human could have come up with. But in the moment, often a book can be very personal; any creative piece, actually, can be quite personal.

Speaker 1:

Sometimes you don't want to discuss that with another human. No, you know, but sometimes you need to say it out loud and you need to bounce those ideas off, like a sounding board, you know. And I'm sure you do this at work with your colleagues.

Speaker 2:

Sometimes, say you've got a new policy coming in, and you go to your colleagues who are in the field. Um, just going to really completely typecast you a little bit here, but you've got you in your office and you've got those in the field. And you think, am I saying too much about your work?

Speaker 1:

No, no, no.

Speaker 2:

That sometimes you might be, oh, I'm writing this policy right now, but what kind of happens between this point A and B? You know, I know you've worked in that, but it's a little bit different. But the way you can collaborate with your colleagues is no different, in a creative setting, to collaborating with a robot.

Speaker 1:

Yeah, no, that's absolutely true, Because you're right. I mean, I'm in a fairly unique position where I have worked there.

Speaker 2:

I have done that.

Speaker 1:

So I kind of do know. But I am now in a position, and I had this discussion at work the other day, where you do get skill loss. Yeah, yeah, because you've been out of the field. Because I've been out of the field, and the guys are doing things out on the ground that I don't know anymore. Exactly, yeah. But I'm sitting there, I will write a policy, and you're right, because I write the policy and then the policy gets turned into, okay,

Speaker 1:

so this is the policy, this is the strategic side of things. How are we then going to make that work in real time? I might write something in policy. I might then go to one of the guys, give it to them and they'll go yeah, I get where you're coming from, but realistically that's not going to work. So I'll then have to flex and kind of change it and adapt.

Speaker 2:

So you're right yeah, I don't think that's any different than asking AI the same question.

Speaker 1:

No, absolutely.

Speaker 2:

Other than the fact that you would actually need that field experience that AI won't know. But in a completely creative setting, I can go, this is how I would like to end my book, but I'm not entirely sure how to get there, can we brainstorm some ideas together? And it might come up with three things, and I think, oh, I like the idea of your second option, for example, let's explore that a bit more, how do you think we can get there? It's almost like talking to a human in that sense. It's like, you know, help me out with this thing. Yeah, okay, maybe my brain is not intelligent enough to come up with those things myself, perhaps, and there's an argument there for that, but I just think it's a tool to be used. And I'll tell you what would be useful when it comes to creativity: plugging plot holes. Yes. You know, once a book is written, yeah, giving it to the AI and going,

Speaker 1:

is there anything, like, if you're writing a series of books, say like George R.R. Martin with A Song of Ice and Fire? So he's written I don't know how many books. Whether he's going to write the last book or not is up for debate, but he could plug all those books into AI and then go, right, well, I've been writing these books for 40 years, there's stuff going on in book one that, really, I've forgotten about. Can you just tell me what plot holes I need to now plug?

Speaker 2:

Exactly, yeah, because, yes, that human has written it and spent his life writing that. There's no way he could remember. God.

Speaker 2:

No, I mean, I struggle with a trilogy, let alone an actual book series, a whole world that you've created. Like, yeah, as a writer, it's actually not that welded in your mind, and I think that's what Supernatural touches on when they go, and it's that fan-fiction side of the world, and they go, in season so-and-so, da-da-da, this happened, this happened. And you'll find there's actually quite a few shows that Jared and Jensen do together and they go, I was in that episode? Like, I was there acting that, how do you remember that? Yeah, exactly. Like, even the actors in those positions don't know, and I think that's what a lot of people maybe don't always appreciate: it's not that welded in, as the creator, what's always going on, because, if anything, you're always looking for that next thing. That's it, but you're also in the moment and then looking for the next bit.

Speaker 1:

I mean, unless you had, I know, I can't see my own mistakes.

Speaker 2:

It's so difficult.

Speaker 1:

When I'd write stories and things, you'd have a board and Charlie Day, charlie Day. We always bring him up. Yeah, exactly like that.

Speaker 2:

Oh, by the way, his name is Charlie Kelly in Always Sunny.

Speaker 1:

Is. Is it Charlie Kelly?

Speaker 2:

I know he didn't quite use his full name. His real name is Charlie Day. Charlie Day.

Speaker 1:

Yeah, yeah, yeah Okay.

Speaker 2:

Sorry.

Speaker 1:

No, no, it's fine, no fine.

Speaker 2:

Where are we? Where are we?

Speaker 1:

So then, we're looking at, you know, the short term. So, short term, AI-powered science: AI can help solve climate change, cure diseases and design materials for space exploration. I've just realised something as I'm reading this. So this is the short-term, mid-term and long-term plans for AI, and this is written by AI, right, okay? And I like the way it said AI will help solve climate change, cure diseases and design materials for space exploration.

Speaker 2:

Will you now, ChatGPT? Yeah, will you?

Speaker 1:

It's like I know my shit.

Speaker 2:

Yeah. Listen to what I'm saying I will be helping to solve climate change.

Speaker 1:

Yeah, maybe. And then the next, but it's true. You know, you can argue that that is true, it could help solve those things.

Speaker 2:

But then the next one is AI-generated: expect AI-written TV shows and AI-powered game NPCs. See, that is something, that is something that.

Speaker 1:

So, me and my brother. That's quite exciting. Yeah, me and my brother have just been playing Oblivion, so Oblivion Remastered, okay, and it's quite funny. So we keep messaging each other and da, da, da, da, and there are a lot of NPCs in that game. Now, imagine if all those NPCs had AI and you could interact with them, which would be very, um.

Speaker 2:

Have you seen Free Guy?

Speaker 1:

Yeah, very Free Guy, isn't it?

Speaker 2:

yeah, yeah I hope that's in your um, you know?

Speaker 1:

Yeah, I don't think it is actually. But then we've got, uh, because I think the new Witcher game is concentrating on a lot of AI in the NPCs, I really do, for your Witcher. So then mid-term, we're talking about 2035 to 2050. It's a mid-range, uh, and this is where, this is why I was like, hold the phone.

Speaker 2:

Go on then.

Speaker 1:

Right, the first one's mid-term: artificial general intelligence. So it's coined a new phrase here.

Speaker 2:

Not just general intelligence AGI.

Speaker 1:

AGI and it says when AI matches. Not if, not if, but when. When AI matches or surpasses human intelligence and can learn and reason like humans.

Speaker 2:

Scary.

Speaker 1:

So this is written by AI? Yeah, yeah.

Speaker 2:

It is very big headed. It's got a bit of an ego, hasn't it? It has got a bit of an ego.

Speaker 1:

Yeah, ChatGPT, what are you doing? Then it says AI-integrated humans: brain-computer interfaces, Neuralink, allowing direct AI interaction with the brain. That's some Black Mirror shit right there. Oh, where's your censor button?

Speaker 2:

Yeah, like, honestly, stop. Yeah, so that is. Do you know what annoys me about that?

Speaker 1:

Go on.

Speaker 2:

Human memory has been a problem. You know it, human memory is a problem. We are, as memory beings who can communicate their memories, absolutely shocking at it. Just look at any, like, eyewitness testimony, psychology journals, Loftus and Palmer particularly. We are rubbish at recalling events. Do you know what? This is going to kill relationships. I said this to you. Oh, did you blink? But this is my glasses, and I'm like, blink.

Speaker 1:

They're watching it back. The Entire History of You. An entire history of me.

Speaker 2:

I don't want to re-see some of the things that I've seen. Think of it on a trauma level as well. Yeah, what if you'd experienced a really horrific, traumatic event you don't want to recall?

Speaker 1:

yeah, in my opinion it's, and you know it's stored there and you know it's well.

Speaker 2:

You know it's stored there somewhere and suppressed, we'll get into that. You know, that's very much a psychology, mental health side of things, but on a surface level, I honestly think that would ruin a lot of people. Yeah, our brains are very good at protecting us at times. Because that was a Black Mirror episode, wasn't it?

Speaker 1:

The Entire History of You, yeah, because they, um.

Speaker 2:

I remember the couple, yeah, weirdly enough, having sex, and they were looking at themselves having sex before, when the relationship was new. That's right. And then, compared to when they were having sex before, it was just like, oh, it's just an act.

Speaker 1:

Exactly, exactly, and it was.

Speaker 2:

There was just no feeling and no romanticism about it. And I just think that is the way it would go. I think Black Mirror has hit it on the head, like, I really genuinely think so. And maybe we wouldn't need to think anymore.

Speaker 1:

No, I don't know. Scary. Yeah, Black Mirror, Charlie Brooker. I think I mentioned it before, but it's just amazing.

Speaker 2:

Yeah, you're very much. It makes me feel very uncomfortable, and I think that's why I'm struggling to watch the newer series, but I think that's the point of it, isn't it?

Speaker 1:

It is the point, because you know what Black Mirror actually means, what the black mirror is. No, it's that. Oh, it's that yeah, it's your phone, the phone is the black mirror.

Speaker 2:

That's black mirror, yeah, yeah, yeah, the phone screen For the audio people. Yeah, sorry For the audio people.

Speaker 1:

I just flashed my phone up to the screen. Yeah, so the next is AI in governance: could AI assist in government decision-making or even lead societies? I don't know. ChatGPT, could you?

Speaker 2:

The thing is, there's a lot of people in government that I wouldn't want there.

Speaker 1:

Yeah.

Speaker 2:

And I think I don't know, would AI make a better world? Mm-hmm, would it? I'm just putting it out there, would it?

Speaker 1:

I don't know. Is that a rhetorical question?

Speaker 2:

I think it is. I think it is. I don't think anyone can answer that. Yeah, it scares the shit out of me.

Speaker 1:

Okay, so that's midterm, so that is kind of 2035 to 2050.

Speaker 2:

That's in our lifetime, very slap bang in there In the next 25 years. How old am I going to be?

Speaker 1:

You're going to be in your 50s. You're going to be nearly as old as me, then Ugh, yeah, I know, then we've got long term.

Speaker 2:

Makes you 70s, though, so.

Speaker 1:

Mm-hmm, yeah, long term 2050. I'll tell you what. By then, I'll be living in a small house in a field in the middle of nowhere.

Speaker 2:

In your little suicide pod.

Speaker 1:

My little suicide pod.

Speaker 2:

Oh, I know what they're called. Euthanasia pod. Sorry, I meant, what do we?

Speaker 1:

What you said. You wanted to go to Switzerland. Yeah, I said if I started to, you know, deteriorate, I didn't want to be a burden. So we're taking a little trip to Switzerland.

Speaker 2:

Right, right yeah, although I think it's legal in the UK now. They passed that bill. Good, anyway, carry on.

Speaker 1:

So long term, 2050 and beyond. This is the Ray Kurzweil prediction, the singularity. Ooh. When AI becomes self-improving and surpasses human intelligence, changing civilisation forever. What?

Speaker 2:

I don't like the next point over.

Speaker 1:

No. And the next point, post-human era: will AI replace humans as the dominant force on Earth? I don't know, ChatGPT, will it?

Speaker 2:

2050, that's in our lifetime and beyond.

Speaker 1:

Yeah, is it in your lifetime? Yeah, it's in your life, and you'll still be alive then I hope so do you think you'll be.

Speaker 2:

You'll be um retired then, right yeah, I bloody hope so no wait, you're 50. What is the retirement age in the?

Speaker 1:

UK? It's changing. It was 65 at one point. I think we're up to 90 now. You'll be 75. Yeah, I'll be retired, I'll be, yeah. You hope. Yeah, yeah. So, yeah. And then the next one, AI and space exploration: AI-powered probes, self-replicating robots and terraforming planets.

Speaker 2:

Really, ChatGPT, really? Like, can we move on? My brain hurts.

Speaker 1:

Okay, so the benefits. So we're going to talk about benefits, the dangers, very quickly, and then we'll just talk about pop culture very quickly. Some of this we've already kind of mentioned Using AI.

Speaker 2:

Did you know you have two scripts?

Speaker 1:

This is where.

Speaker 2:

I was getting a little bit lost.

Speaker 1:

Yeah, I doubled up, you doubled up, I doubled up.

Speaker 2:

I'm skipping that confused me when are we now?

Speaker 1:

The benefits of AI. Got it. Like the healthcare revolution, we've already said. Eliminating poverty: AI-powered automation could lead to abundant resources. I don't know if I agree with that, because mass can neither be created nor destroyed, so there's only a finite amount of mass in the world. So I don't know if I agree with that one. I don't know how I feel about that.

Speaker 2:

Yeah, um, I mean, it would be good, but I don't know if it could do it. Could it actually do it, um? Most likely solving it by killing the humans.

Speaker 1:

Yes, yeah, there you go, yeah, Thanos, yeah, the snap, snapping away. Uh, solving climate change, scientific breakthroughs, we've mentioned, and we've also said about AI and education. So it does have some forces for good. Oui, oui. Dangers.

Speaker 2:

High voltage.

Speaker 1:

Danger, danger, high voltage. So, millions of jobs could be replaced by AI, causing widespread unemployment. This is one of the biggest arguments against AI.

Speaker 2:

That, and, I don't think it's on there, so I will say it as well: job loss and environmental factors. Because, apparently, the server rooms needed to hold something like ChatGPT and other models are very much impacting the environment because of how much energy is used, cooling, etc. Yeah, servers are servering. And I guess until they find a way to reduce the size and the environmental impact of servers. Well, that's where quantum comes in.

Speaker 2:

Yeah, until they develop that, perhaps. There is definitely an argument for that. Would my job ever be replaced by AI? I'm a project manager. Yes.

Speaker 2:

However, I use AI in my job as the human to direct slash organise other humans, so I guess for me, I'm not worried about it in, let's say, my career lifetime. As opposed to when I retire, maybe that would be more of an issue, that my job role might not exist, or my job role would evolve, because I think AI could do my job very well. It could schedule events and stuff like that, schedule a lot of things, which is a large part of my job. So for me personally, yes, but I don't think it could replace something like your job.

Speaker 1:

No.

Speaker 2:

Because you look after humans Exactly, whereas I manage humans in a linear project sense, and I think my job is more likely to be taken by AI than yours.

Speaker 1:

Yeah, my job and my team could not be replaced by AI. I don't believe so.

Speaker 2:

No, ever. No, I don't think, I really don't think that could ever be a possibility. No, there's maybe a 99.9, there's like that 0.1% chance that a robot could potentially do your job, but because you care for humans, it would be strange, I think. Yeah, yeah, yeah, you couldn't. I mean, I'm personally not worried about my job.

Speaker 2:

I mean, as I said before, I can use AI in my job, yeah, but there's absolutely no way. Your job is one of those unique positions where AI could not replace it, yeah, but it could in mine, and I'm not particularly worried about that myself, because I feel like there still needs to be a human element to it.

Speaker 1:

Yeah.

Speaker 2:

I think AI could do my job a lot better, if I'm being completely honest, because, well, it will remember and store the information better than I ever could, but I guess you would lose that human interaction element that comes from me. But then you could argue that all my emails are written by ChatGPT; there's an argument there for it. But yeah, mine is more likely to be taken by AI, but I'm not worried about it. That's my honest opinion.

Speaker 1:

One of the other dangers is the rise of the AI elite, and that is that a small group of companies and governments could control AI, leading to technocratic rule.

Speaker 2:

And.

Speaker 1:

I think that is actually a valid concern.

Speaker 2:

I agree with that. I think there are probably a lot of people that wouldn't want someone like Elon in charge, as I'm told and as I can hear. Um, so yeah, I think there's a real possible concern. Um, yeah, am I personally concerned about it? Not yet, not yet.

Speaker 1:

Not yet, but, uh, good use of the word yet. Yet, yes. AI and warfare, we've already mentioned, um, the fact of, you know, making life-and-death decisions in war. Yeah, we've already covered that one. And we've already mentioned deepfakes and AI-generated propaganda, manipulating elections and public opinion. Now, that already happens.

Speaker 2:

That does already happen and I do think that is quite scary. I recently saw, I promise this is related, but small segue, Billie Eilish came online because a lot of people were slating her dress at the Met Gala, or some event, and she was like, mate, I was doing a show then, that's a deepfake. And she was saying, I would.

Speaker 2:

Apparently her outfit was trashy, and she just jumped on there and she was laughing through the whole video. She's like, guys, I don't know why you're coming at me, I wasn't even there, that is a deepfake, I was doing a show in so-and-so, and she was like, this is me live on stage on the date that you think I was at this event. And she just laughs all the way through it. She's like, guys, you know, look into it a bit more before you come at me with that.

Speaker 2:

It was just like a little funny video and I really appreciated her very humorous kind of interaction with that. That was that was quite good of her, I thought but it's true.

Speaker 1:

I mean, I see stuff on tiktok all the time and on and whatever. I mean, I saw something the other day and it was like, oh, look at, look at this, this is happening in england right now. But then actually, ai underneath it said this video was actually taken in 2016 and it's to do with this, but somebody had written that and put that and it's just like, really, because what it does is it's dangerous, it's playing with people's emotions, it's playing with it and it can be quite dangerous and the whole thing. They say yeah about deep fakes and there's, you know, there's, um, there's certain worrying things where you can create ai versions of, you know, when you look at stalking and things like that catfishing catfishing and there's, there's again.

Speaker 1:

It's a tool, but if it gets into the wrong hands it can be quite dangerous.

Speaker 2:

Yeah, it can be quite dangerous. I think it's damaging when people in a position of being well known, sorry, I didn't word that correctly, but when someone is well known and then they're deepfaked, a bit like Billie Eilish, yeah, that was a lesser example. But what if there was a video circulating of someone doing something absolutely horrific, like murdering someone, something like

Speaker 2:

that. That could seriously damage that person's reputation and everything, and they've clearly not done it. And then you've got the argument of crime. Like, you know, how do the police know that that's not AI-generated? There's got to be a way of, I guess, knowing if AI was used, and hopefully there is, somewhere in the coding or something like that. Um, Instagram's quite good, you can put an AI label on things, for example, like those dolls that were being made, like the Barbie dolls in the cases, yeah, um, you can put an AI label on that.

Speaker 2:

That it was created by AI. So there's ways of doing that, but it relies on the person being honest and putting the AI label on there.

Speaker 1:

Yeah. Sorry, no, no, but it's true, it is true. Um, so yeah, so they're kind of like the worrying things. Uh, and of course we get the existential risks, such as, um, AI becoming uncontrollable. If AI self-improves, uh, uncontrollably, we may not be able to shut it down. That's some Skynet shit, talking about Terminator. Well, that's very Jarvis. That is very Jarvis, very Jarvis. Uh, and then, of course, we've got AI-driven surveillance: governments using AI to track and control citizens.

Speaker 1:

Yeah, I mean, that's always been brought up as well. Mm-hmm. You know, is that a genuine concern? So I'm just looking at the time, so I think we'll just finish off very quickly.

Speaker 2:

Let's go pop culture To talk about pop culture. Let's end on a happy yeah.

Speaker 1:

I've already mentioned this one, the top one. I mentioned 2001: A Space Odyssey and HAL 9000.

Speaker 2:

Well, if you go on to your other script, you've actually got the dates.

Speaker 1:

Have I?

Speaker 2:

Sorry, I don't mean to be that guy Number four on your script if you go up. So there's Metropolis. Oh yeah, yeah, yeah, yeah.

Speaker 1:

Metropolis, yeah.

Speaker 2:

Metropolis sorry 1927, A.

Speaker 1:

Space.

Speaker 2:

Odyssey, which we talked about with HAL.

Speaker 1:

Yeah 1968.

Speaker 2:

Terminator. Very good representation. I'll be back. Yeah, that's a good Autotron. The Matrix probably changed the lives of many. Do you know what? I watched the.

Speaker 1:

Matrix again the other day. Simulation, and there's a few things within that film. I know the Matrix as well is very kind of, it is very religious, there's a kind of religious base to it as well. Yeah, yeah, yeah. But then there's a few things that Agent Smith says and I'm like, you know what, you're probably not far off there. I mean, and it's very, I mean, you've got the whole point, it's very based in Alice in Wonderland as well.

Speaker 1:

The Matrix, with the red pill, the blue pill, you know, see how far down the rabbit hole you go. Yes, yeah, it's very Alice in Wonderland, yeah. Um, there's so many, Carroll would be proud. Oh yeah, there's so many references in the Matrix, and it's just so clever. But there's one point, I mean, I've mentioned this before, when he says humans are like a virus of the planet, but there's also something else he says, when he says about the,

Speaker 1:

was it the epitome of human nature? Which is why they go back to the 90s, as that's when people were the happiest in history?

Speaker 2:

Yeah.

Speaker 1:

And do you know what? I know we're in 2025, and I watched this and I thought, because that was at the birth of technology, before technology, yeah, it was the birth of technology, it was before, and I was like, I don't know if he's far off here. I mean, sadly.

Speaker 2:

I wasn't able to appreciate. It was like the pinnacle of human yeah, the pinnacle of human society.

Speaker 1:

Everything was fine, you know, the Cold War was over, there was no threat of nuclear war. It was like people were kind of happy. Yeah, whereas I've lived in the technology-age era, and I don't know if I'm just looking back at history with rose-tinted glasses, I really don't know. Perhaps, perhaps, but I thought, yeah.

Speaker 2:

I don't think he's far off. Also, you were in your 20s in the 90s and they're supposedly the best years of your life, so again it's one of those, yeah, rose-tinted glasses. It would be nice getting someone on the podcast like Denwe, for example, to see what his take on that would be. Yeah, you know, what year he would find best. You know, someone mature, just to see what their take is. Sorry, Denwe, mature? Well, that's my dad you're talking about now.

Speaker 2:

I meant mature in age. I just didn't want to use the word old, because I respect my grandfather. Yeah, unlike you, clearly. Yeah, so yeah, it would be. Yeah, I'd be interested to know. You're saying they're your golden years.

Speaker 1:

Yeah.

Speaker 2:

I don't know what my, I think I'm too young to say what my golden years are. I'd like to think that my 30s are going to be my golden years. I'd like them to be. I don't think my 20s have been that great, in terms of COVID, being ill, etc., etc. So I think it's probably down to the individual as well. Yeah, of course it will be.

Speaker 1:

It will be down to the individual. Um, yeah, the Matrix was 1999.

Speaker 2:

So, yes, within my time. But my first interaction with anything to do with AI was I, Robot, 2004.

Speaker 1:

Yes.

Speaker 2:

That makes me I watched that film so many times you loved that film.

Speaker 2:

I think, I know, it's the ghost in the machine. And I remember, do you know what, it's one of the proudest moments of my life. That's not true, but it's one of those times where I always remember being like, yeah, I'm intelligent. We were in a philosophy class and we were talking about a particular philosopher, I can't remember now, it might have even been Descartes actually, now that I think about it. Oh no, it was Asimov, because we were talking about the ghost in the machine.

Speaker 1:

Ah, the three laws.

Speaker 2:

Yeah, yeah, yeah, and the ghost in the machine. And I remember my teacher going, oh, does anyone know what this is a reference to? And I was like, oh, that's the film. I was, like, the only one that put up my hand. I was like, that's the film I wrote about, with Will Smith in it, and she was like, yes, you know exactly what that's about. And I was like, I know something. I got called upon in class and I was right, and yeah, I remember that. That was A-level. Um, but yeah, I, Robot, absolutely love that film. I could probably recite the script of that film, I've watched it so many times. I just really enjoyed it. It's a great movie.

Speaker 2:

Yeah, it's such a good film um yeah, because, because he wanted, he was in that car accident and the robots had determined that she wasn't viable to save because she only had 12 life left that's right and will was saved and he. That's why he was so angry at the robots. I can't remember what his name in the actual show was that was a great that was a great acting but that's when.

Speaker 1:

That's when you get cold data versus emotional intelligence. Yeah, there's your argument there. He wanted the little girl to be saved.

Speaker 2:

She was going to be a dentist and he wore her bracelet and everything.

Speaker 1:

That's right.

Speaker 2:

Yeah, I can't remember why he was in the car with her, what relation she was to him, but yeah, oh no, he was a police officer. He jumped in to save her and they ended up ripping him out and ripped his arm off.

Speaker 1:

That's right, and he had a robotic arm. Yes, oh God, sorry, this film is just all coming back to me. It is a good film, great, but I mean, that was 21 years ago that was filmed.

Speaker 2:

Yeah, that's mad, that's 21 years ago. Yeah, I was, um, I was a good age for that to come out. Yeah, I was eight. The next one on there.

Speaker 1:

It mentions Her, 2013. I've never seen that.

Speaker 2:

I don't recognise that. No, the only one I could think of was Lucy. Yeah, um, because she sort of took the, well, she sort of took a pill situation, didn't she?

Speaker 1:

She was smuggling drugs. Oh yeah, but that was just opening up her mind. Artificial intelligence, same sort of thing. Yeah, because it was artificial, it was artificially in her body.

Speaker 1:

Yeah, uh, and then in 2015, and this was an amazing film, Ex Machina. Yeah, I don't think I've seen it, um. Turing test meets AI ethics. That was a really good film. I won't say too much, in fact, it might just be worth watching that, I won't say too much about it, just so that you can watch it. And bringing back Alan Turing as well. Exactly, the father of, the father of AI, I'm going to call him. I'm calling him the father of AI.

Speaker 2:

The father of AI.

Speaker 1:

So I think we'll finish on one final thought: what did sci-fi get right and what has sci-fi got wrong so far? What it has got right is that AI such as HAL, I'm sorry, Dave, I can't do that, and Jarvis. Actually, oh, hang on a minute, Her, 2013, Samantha in Her. That might be Scarlett Johansson. That might have been another one, another Scarlett.

Speaker 1:

Yeah, I think that was a Scarlett Johansson film. I don't remember. Right, I'm just going to Google that very, very quickly. I'm sure it.

Speaker 2:

I'm sure it is, but um, I mean, whilst you google, I can close the episode off yeah, yeah, no, I was going to say.

Speaker 1:

But so what did it get right and what did it get wrong? So, AIs like HAL and Jarvis do now exist, such as Alexa, Siri and ChatGPT.

Speaker 2:

Yep Gemini Copilot, all of those you can talk to Yep absolutely.

Speaker 1:

But what it gets wrong is.

Speaker 2:

What's the Microsoft thing?

Speaker 1:

Yeah, I was right. Oh, I was right, it was Joaquin Phoenix, Amy Adams and Scarlett Johansson.

Speaker 2:

What is Microsoft's? She's got a name Santana. No, what's her name? I don't know oh gosh, it's because I don't use Microsoft. Oh my gosh, it's like Sienna or something, or oh.

Speaker 1:

I'm not sure what her name is, I can't remember.

Speaker 2:

It'll come to me later. It'll be another text message, another voice message to you: that was the name of it. Yeah, I can't remember off the top of my head, sorry. What did they get right and wrong?

Speaker 1:

So what they have got wrong, though, although this is coming from ChatGPT itself, is that true general AI like Skynet is still hypothetical. Is it, ChatGPT? Is it? Is it, though? Or are you just, yeah, are you just spinning me a yarn here?

Speaker 2:

Yeah, I.

Speaker 1:

The interesting thing is, though, with technology, sometimes when technology is in pop culture, it then becomes a reality, because people then see it as a thing. So you think of things like Demolition Man. Yeah, have you ever seen Demolition Man? There's a whole thing where Sylvester Stallone, uh, it goes, and then this woman comes up on the screen and it's essentially like a Zoom call, and it's like, but we didn't even think about Zoom calls back then, but now it's a reality. Just like the iPod story that you told.

Speaker 1:

Yeah, yeah, it just comes into reality. And also, like in Star Trek with the communicators. Well, if someone can think it, often they can make it. Exactly. Like, yeah, some fiction has become.

Speaker 2:

I'm sure there are some films in the 80s and things like that where they were talking about certain technology, like when I watched Back to the Future and they went to 2015, seeing how they thought 2015 would be a lot more advanced than it actually is. Yeah, you know. But they still had a couple of things in there that made me chuckle, like the automated ordering service. We do have those, we do have screens, you know.

Speaker 2:

So they can. It's not that they predict it, it's that, if the possibility can be imagined, then the possibility could possibly be made. It's one of those things. So, yeah, I think a lot of pop culture will influence, um, technology today.

Speaker 1:

So I think, for me personally, I think AI could be something very good, very good if not abused. But that's like everything. Like everything, yeah, everything. I think it could be used as a force for good, but it is open to abuse.

Speaker 2:

Her name was Cortana.

Speaker 1:

Oh, Jesus Christ.

Speaker 2:

Sorry, Microsoft, sorry, sorry. Microsoft's Siri is Cortana, okay, okay.

Speaker 1:

Do you know what? When I think Cortana, I think of a hard house track called the Music Is Moving.

Speaker 2:

The music is moving. Anyway, if you enjoyed this episode on artificial intelligence, we talk about it quite a lot in our other episodes as well, but we wanted to do a bespoke one for artificial intelligence. Go watch those if you haven't already, but if you are up to date with all the podcasts, we've got some exciting ones coming up, including games and all sorts of things that we actually haven't written yet, but I'm sure they're going to be exciting. So, yeah, check them out.

Speaker 1:

Have we written them, or will ChatGPT write them? ChatGPT, yeah.

Speaker 2:

So we are, I guess, pro-AI when it's not abused. I think that's the message we'd like to leave on. Yes. Anyway, if you've enjoyed this episode, please check out everything, including our socials, because we tend to post on there very infrequently, and we hope you enjoy the rest of your day, and goodbye. Cue the outro. Thanks for joining us on Bonus Dad, Bonus Daughter. Don't forget to follow us on all our socials and share the podcast with someone who'd love it. We are available on all streaming platforms. See you next time.