How Schools Can Use AI with Sam Bourgeois

Download MP3

Jethro: Welcome to Artificial Intelligence Real Talk.

I'm your host, Jethro Jones, and today we are talking about AI and how schools can use it and like what do we do?

I mean, this is what we talk about all the time on this podcast, but today we're talking with Sam Bourgeois.

And, uh, pretty excited to chat with him because he has been in the IT space for years.

And, uh, so we're just gonna have a conversation, gonna riff off on a couple of topics and, and go from there.

So, uh, you can listen to this podcast wherever you get your podcasts.

Artificial intelligence, real talk.

Thanks so much for listening.

Sam, where do you think we should start?

Sam Bourgeois: Man, uh, it's always a good convo with you.

Um, I think what's been on my mind lately is I've got this, uh, this course in development, and I wanna put this content in as an enrichment opportunity for that course. And that course is centered around, uh, district leaders, uh, instructional leaders, potentially superintendents, even teachers as leaders in the classroom, right?

And the course is all about AI risk management, cybersecurity risk management.

So we're on an AI podcast.

I'm gonna tell you, let me, let me tell you a story and then you tell me where we can go with this.

So I had this really cool idea.

Um, now I know we have similar backgrounds with our, with our, uh, with our children.

My son, for everybody who's listening, is a neurodivergent 10-year-old.

He is super, super excited about Who Would Win? books.

I don't know if you've seen those, the Who Would Win? Tiger Shark vs. Great White Shark, or whatever.

The other day I'm getting ready for a party.

It's the holiday season and I've got phone in hand.

I'm trying to get something done, and he's just peppering me nonstop with questions.

So I say, I've got this idea.

So I pull up my phone, go to my ChatGPT Pro, which is not training on the data for the masses, right?

So I've got it, you know, internalized.

I hit the talk button, I dunno if you ever do this.

And I just put it in front of him and he just starts talking.

He goes, who would win?

Uh, great white shark or tiger shark.

And then the lady comes back to him with a British accent and she says, oh, that's a great question.

Let's, let's unpack that for a moment, blah, blah, blah, blah, blah.

And my kid just goes off and has this amazing conversation.

And the funny thing is, my, um, my ABA person, my ABA clinical specialist, is in this. She's there watching, and she's just like, mind blown, like, whoa, this is great.

And she's telling me after the fact, she's like, he's practicing his conversation.

He's waiting and pausing.

He's treating this like a real person.

So I was like, awesome. Dad high-five moment.

But then I was like, I totally didn't do like a risk assessment to see where this is gonna go, and like the drawbacks, right?

So I don't know.

What are your thoughts on that?

Jethro: Yeah, well, I, I love that example for a few reasons.

One, uh, you're right: you didn't do a risk assessment and, like, say, what could possibly go wrong with this?

But the first thing there is what your, uh, what the specialist said, which is he's treating this like a real person.

And to me that's the biggest issue that we have with AI right now is that people often think that it is a real person when it definitely is not.

And so that to me is the groundwork where you need to start.

This is not a person. This is a tool that might help you understand or do something a little bit better, but it is not a real person.

So don't get confused and think that this is a real person. However, was she right about him practicing talking, speaking appropriately, and being able to be understood?

Let me share a little story with you as well that's along those same lines.

Uh, my oldest daughter has Down syndrome and has a speech impediment and is sometimes hard to understand, but she has a, uh, a HomePod, uh, in her room that plays music that she loves to listen to.

She loves music.

So we were driving in the car and I invoked Siri in my, uh, car and said, play this kind of music, or play this song or something.

I tried five times, I couldn't get it to work, and so my daughter leans over and says, dad, I got it.

And she says, Siri, play this thing and then it does.

And I was like, are you kidding me?

So what has actually happened is that my Siri account has been trained on my daughter's, uh, speech impediment, so it understands her better than it understands me now, which causes all sorts of frustration on my part. But she totally gets it, she does a great job with it, and that's one of those things that these tools can enable: kids with disabilities having a more level playing field.

Let me share another story.

Um, my daughter needed to give a talk in church, and so I had her write down and text me what she thought, uh, she wanted to say.

And so she wrote this long sentence with no transition words, no periods, no commas, no nothing, but they were things that I know she believes and understands.

And so I was like, okay, this is something that she would say.

However, if she gets up there and reads this, nobody's gonna understand and nobody's gonna, like, care what she's saying.

They'll just be like, there's a kid with Down syndrome speaking some gibberish, and like people are respectful and they'll be kind and everything, but she won't be able to communicate what she really believes.

And so I went into ChatGPT, I put that in, and I said, she needs to give a talk.

Here's our religion.

Here's the things we believe.

Here's what she said.

Now make this into a talk that would make sense to a normal person.

It created a talk that actually communicated what my daughter really believes in a way that was incredibly powerful, which I just loved, because she got to put this out there.

Uh, and it was what she believed, even though she didn't construct the sentences herself.

Now, to be perfectly honest, if she needed to give a talk and I was helping her, I would be the one writing the talk for her and trying to make it sound like her as much as possible.

The benefit in this situation is that I gave her the task to write what she wanted to write, I put it into ChatGPT, and then I made a couple of, uh, additions or subtractions so that it would make sense.

Now, going back to your first story and this story, and what they enabled: um, there was a paper released in the summer of '25 called Cognitive Debt, and it talked about how when you use AI, you create cognitive debt, because you don't remember the things that you wrote.

But for people with disabilities, I believe that AI creates cognitive equity, that they are able to do things that they wouldn't be able to do normally.

That the AI gives them the power to do that.

And I wrote a paper on this myself called Cognitive Equity, so you can go look that up, and gave the example of what my daughter went through and how we used this to help her communicate something that was so much better than what she could do on her own, and how valuable and important that was, because it gave her the opportunity to express what she believes and what she is interested in to a room full of people who don't interact with her very often. And then they could see what she really believed.

And that was a huge, like, huge improvement in her life, because she gets to express herself in a way she never has been able to before.

Sam Bourgeois: Well, and you know, we've talked about this in the past.

Um, and this, you know what it reminds me of?

It reminds me of the conversations, do you remember, when we were first handing out iPads? Like, again, my son is ASD, but there were these iPads, and they were like the old storyboards that we used to do with like Velcro to help communicate.

Jethro: Oh yeah.

Yeah.

Sam Bourgeois: It was like a game changer, and people thought, I don't care how much it costs.

Let's just get this thing so the, so the child can actually have a voice.

Or, you know, if you even go back a little further from there.

And you look at the, um, uh, the kiddos who are confined to a wheelchair. Their mobility, just conversation, their mobility comes from some of these technology improvements.

It's, it's really the same thing.

It's like, but we have to ask ourselves, this is the question for the principal side of you, not the father side of you.

We have to kind of reevaluate what is the, what's the learning outcome?

What's the objective?

What's the thing?

So in your example, was the thing to get up on stage?

Okay, mission accomplished.

Was the thing to express yourself?

Mission accomplished.

Was the thing the grammar?

Okay, so we didn't, we didn't do that.

But if that's not the thing that we're trying to measure, that's not the coursework, the course material, it's like, it's like, uh, when we were young men and we were told we couldn't have a calculator in math class, right?

It's the same thing.

It's

Jethro: Yep.

Yep.

Sam Bourgeois: Are you measuring work that I produced and wrote in my little blue book, the, the showing my work?

Or are you measuring my accuracy to do three digit multiplication?

Like

Jethro: Yeah.

Sam Bourgeois: Tell me which one, right?

Jethro: Well, and this is what is so important: the purpose. We need to go back to that every single time.

What are we trying to get the kids to be able to do? Is the purpose for them to learn something?

If the purpose was for my daughter to learn grammar, it was a total failure, but that wasn't the purpose.

If the purpose was for my daughter to be able to give a talk so that people could understand her in a way that they've never been able
to understand her, but before then, not only was that purpose met, but it was exceeded at a grand scale, which was really amazing.

And so again, we go back to your original story that you started out with. The purpose for your son in that situation was for him to have a conversation about who would win with somebody, or something, I should say, that could engage with him at his level and in a way that was appropriate for him.

And again, mission accomplished.

And you and I as dads, sometimes we don't have the time, and this is where it becomes really valuable for teachers, because teachers don't always have the time either.

They've got 20 to 40 students in their classroom, and they don't have time to give individualized attention to each one of those kids.

So where do we want their time to be spent?

Do we want them to be, uh, creating personalized materials for every single kid when an AI could do that so much better?

No, we don't want them to be using their time for that.

I want my teachers to spend as much time as possible interacting with my kids, uh, in a way that is gonna help them.

And I want the teacher to have the freedom and the power to make that decision.

So where can we bring AI tools in to help with that?

Now, let me put a little caveat in here: if the AI tools are there just to make it so that the teacher can do the same dumb lessons she's always done, and not change what she's doing and personalize it for the kids, that's not really beneficial.

You know, when these new AI tools came out, everybody was like, oh, I can have it make worksheets for me.

Well, worksheets are not really that valuable.

Let's not do that.

So let's do something different and let's do something that is valuable for our kids.

And teachers have to ask the same question.

What's the real purpose and value here?

And what I've said a hundred times is: we should not use AI to circumvent learning.

We should use AI to enhance learning.

Think about the actual purpose.

What is it that you want to happen? And then use a tool that's gonna help get you to that point.

That's, that's the key, and that's what we need to be using AI for, a hundred percent.

Sam Bourgeois: I couldn't, I couldn't agree more.

What are your, what are your thoughts on the risk, on the risk profile?

So, my example was a silly one, but here's where I was at.

This is what I was thinking.

I was like, honestly, I think my kids get too much screen time, and it doesn't matter what the number is.

By the way, just for the record, there's always gonna be too much screen time if you ask me.

Like, I would like to have more green time, not more screen time.

Like,

Jethro: Yes.

Yep.

Sam Bourgeois: Even if it's 15 minutes a day, I'd like it to be, you know, less.

Right.

Jethro: Mm-hmm.

Sam Bourgeois: The point is, I didn't put that on a device that he has access to all the time, 'cause I'm afraid that that might take away from human interaction.

Okay.

I thought about, like, am I training the data on this? No, no, I'm not training the data on this.

I actually thought about, does this actually, uh, affect my training of my GPT experience?

Because like I'm always asking like, let's say, let's say higher level, higher order questions, I'm asking things about forensics, data forensics, cybersecurity, or something like that.

And by having a conversation about sharks, is that gonna, you know, sort of...

Jethro: Yeah.

Yep.

Sam Bourgeois: affect the learning, right?

So I asked some of those questions, but what are your thoughts at the school level?

How do we ask the right questions and how do we determine what's appropriate for use in a classroom at the, at the building level and at the district level?

What are your thoughts there?

Jethro: Yeah, so I am most typically in favor of giving teachers as much autonomy and, and permission to experiment as we can.

So that's the first place: let's let them be the ones to make the most decisions, and give them the tools that they can then make those decisions about.

So that's where I would start.

And that's because they're the ones who are closest to the kids and they're the ones who know what the kids need the best.

And so that's, that would be the first place.

The other thing to be thinking about is, is going back to what we said before, the purpose.

What, what are you trying to accomplish?

And make sure that the tools you're using allow you to accomplish that purpose.

And, and if they don't, that's an issue.

You gotta go, you gotta go fix that.

Um.

So those two things first would be what I would think about. The next thing would be, what's gonna happen with the kid who uses this?

What's going to be the long-term effects of them using it?

And so, you know, uh, the calculator is a good example.

If somebody uses a calculator but they don't have an idea of number sense, then the calculator is essentially pointless, 'cause they won't know when it's wrong.

If they do 25 times 25 when they meant to do 25 plus 25 and they get 625, they're gonna know, if they have number sense, that that's not the right answer.

If they don't have number sense, they're just gonna copy the answer down.

And so that's where you really gotta be intentional about talking about what it is that you're using, how you're using it, and why, so that you can make a more informed decision about the output that you're getting.

Because if you don't know what that output is, then it's gonna be really difficult for you to say, this is exactly what it needed to be, and this is how, how I understand it, and this is the right, the right response.

So you have to be thoughtful about that, and teachers have to have continuous conversations with principals and with students about that.

Students, when they're using it, need to be able to articulate why and how they're doing what they're doing, instead of just saying, oh, I just made an image.

Okay, but like, what's the purpose?

I just had to do my homework.

Okay, but what's the purpose?

Why did you do that?

How did you go about it?

Because you can actually understand a lot more from the questions somebody's asking and the things they're asking it to do. You can understand whether they understand what they're trying to accomplish, and other times, like, they clearly don't.

In my daughter's example that I shared, she had no idea what was going on.

She was like, oh, I sent you something and now I read this thing that you sent me back.

Okay, whatever.

And you know, that's fine for what we were trying to accomplish, but if that's the case in a presentation in school, for example, that's probably not the outcome that you're looking for. But if the kid is super nervous about speaking in front of others, and is so nervous about it that they can't handle writing the actual content they are going to speak, you can give them the, the tools to write that faster, so that they're not worried about punctuation or spelling or grammar, because it's already laid out nicely.

That is a benefit to them, if the purpose is to get up and speak.

But if the purpose is other things, then you need to use other tools to get them to that point.

And so that's where like, there's no way to say, here's a hard and fast rule.

Use AI for this purpose and this purpose only.

It has to be case by case, situation by situation, making the best decision that you can.

And nobody likes that answer because it's too flimsy.

Sam Bourgeois: You know, you know where my mind was going as you were talking? I was thinking, what are you testing for?

Like what's, what is the assessment?

What's the evaluation looking for?

Right.

And I was thinking about those nervous times that we've all had in life where we've had to get in front of the class and do a presentation on XY and I was thinking, you know, what I use today, there's a tool.

So anybody who's watching this, if you wanna go check it out, it's called Synthesia.

Uh, I hope I pronounce that right.

But, um, Synthesia online is relatively attainable, budget friendly.

It's not too expensive, but, um, you can use the avatars that exist, but you can also create your own avatar.

And so I was just thinking about me like, what do I do?

I put my shirt on and I do a talk and I record myself, and then I get my avatar, and then, um, I can just kind of dump in text scripts that I've written, and I can have the avatar read it.

My avatar doesn't make mistakes.

There's no noise from the kids in the other room.

There's no like, I don't have to shave, right?

Like, so I'm just thinking about, like, what about the kid who says, well, I'm gonna create an avatar of myself.

I'm nervous in front of people, but you wanted me to learn the material, create a presentation.

I did that, and then I gave it to my avatar.

It's my voice, it's my likeness.

Anyway, I'm not trying to be too wild and challenging, but I'm just kind of thinking through, like, a lot of fun conversation to be had here.

What about the teacher?

Who does what I do? Do we look down on that teacher and say, you did a flipped classroom, but you cheated, you know, you used your avatar to read the lessons?

It was like,

Jethro: Yeah.

Sam Bourgeois: I did what I was trying to accomplish.

Right.

Jethro: Yeah.

Well, we've all sat through classes with a teacher or a professor who rambles on and on and doesn't get to the point and talks about a million different things, when in reality it would be much better if we just had the exact information that we needed to know, and then we could have those rambling conversations when we're not trying to teach a specific topic.

So, you know, Synthesia is a good example of an opportunity where you can produce a perfect lesson, essentially, with no errors, no bird walks, nothing.

And if that's what you're trying to do is teach a very specific thing, it is crazy to try to do that yourself.

With a bunch of kids, with people interrupting when a better way to do that is say, here's exactly what you need to know in a clean, well thought out presentation with slides and bullets and all this kind of stuff that makes it very clear what's going on.

That is, is really powerful.

Now, there are a lot of, uh, scripted-type, um, curriculums where the teacher basically, you know, reads from a book everything that's happening that day.

In those situations, why not have a computer or AI do that?

If that's really what the point is, the AI is not going to mess up.

It's going to read the script exactly.

Why not have it do that?

In those situations, yes, you, you might as well.

What's the benefit of the teacher being the one to do that?

Well, the real benefit is that the teacher will notice when the kid needs some additional help or needs to slow down.

And that's great and they should, should provide that and help with that.

But if we are like, here's the exact thing you need to know, here's how we know exactly how to teach it, and this is what it looks like.

We might as well put that into something like Synthesia or something else to say, here's the information that needs to be had in a clean, clear way to make sure you don't mess up and you really understand it.

Let's, let's go with that, and that is a perfectly acceptable way to use it.

However, that's not what most teaching really is when it comes down to it.

Most teaching is us having a conversation, getting to understand where each other's coming from.

Excuse me.

And then making decisions from there.

And that's the kind of thing that I want more of from my kids' teachers.

And I want less of the stuff that an AI could do better, that we should just hand over to the AI.

Sam Bourgeois: I was in a presentation, uh, in October of this past year. The talk was by, uh, an AI guru, and he was trying to kind of open everyone's eyes.

It was a great talk, by the way.

I'm not, I'm not, not poo-pooing it at all.

It was a great talk.

Very inspirational.

But one of the things that he said is he had a mentor.

Um, this is a business conference and an education conference.

He had a mentor who was just full of all nuggets of wisdom, just great experiences.

And so what this guy did was he took all of his writings, his books, manuscripts, and built a GPT and called it, like, I'm just making this up, "Imagine Jethro," right?

So imagine if we took all of your writings, all of your brilliant musings, all of your perspectives, and built a GPT, and then we had an agentic conversation.

Now, I'm not suggesting we don't have this podcast.

I'm not suggesting you, you know, trademark your likeness and all your writings and then do a bot instead of having real human interaction.

My point is we're talking about individualized instruction.

Imagine if we had that teacher likeness in the future that could answer questions with authority, without rambling, in a way that the kid wants to have the conversation. Say, you know, I wanna learn...

I had that experience in college, by the way.

I had a, he was a Jamaican, I think he was a Jamaican chemistry professor.

Brilliant.

But he talked a lot about alligators and sharks, and I didn't know what the heck he was talking about most of the time.

and he was funny.

He was really funny.

He was a great guy.

But wouldn't it have been great for me to have that silly, rambling 90-minute class Tuesday and Thursday, and then go home with my phone and talk to... I think his name was Dr. Darbo, by the way.

So, Dr. Darbo, if you're watching this: have my Dr. Darbo GPT, and have a conversation about inorganic chemistry when it's meaningful for me, when I'm ready to learn, when I'm ready to ask questions.

How incredibly powerful would that be, right?

I mean, that's the future.

Jethro: Yeah.

And, and like, that's certainly very possible, and you could already do that without even getting all of his stuff in there, because there's enough information in the models already that they could answer that.

But here's the real key: it's not enough to just have the conversation, right?

And just having a conversation with the teacher is typically not enough because each one of us learns and we have our own learning path, and we learn in our own way.

And, and this is not getting into all the multiple intelligences stuff and all, all that kind of thing, but just our own experiences.

You and I are gonna walk away from this conversation having different insights that matter to us from what we said.

And anybody listening is gonna have their own insight.

Now it's possible and very likely that many of the insights will be similar, but the reality is that everybody has their own learning path, and it is unique to them, as unique as their fingerprint.

In fact, even more unique, because even identical twins are gonna have a different learning experience than their twin.

So we all have this unique experience, and as you go through life, you build on what it is that you learn all throughout your life.

And that makes you into the person that you are.

And there's culture, there's genetics, there's uh, what you actually learned, like academically officially.

There's also the things that you learned from, uh, negative personal interactions and things like that.

And what I want is for us to be able to recognize and value whatever learning happens, wherever it comes from.

And our problem in schools is that we focus on the learning that happens within the four walls of the classroom.

And that's all that quote unquote counts.

Because if you already know how to do something, you're still gonna take that test in class.

You're still going to have to do that report, even if you can demonstrate, I definitely know this.

Most teachers are not gonna let you outta that.

However, it's incredibly powerful when you do let kids out of that.

My second year teaching, I had a student who was so smart and so far advanced beyond the rest of my kids, I knew there was nothing I could teach her as a student in my class.

And I said to her, look, I'm not gonna be able to do anything to help you grow.

So you need to come up with your own thing that you wanna spend your time learning, and you don't have to do any assignments in class, and I will exempt you from everything, but you've gotta be learning at your own rate, at your own pace.

And she was like, are you serious?

I said, yeah.

And she's like, all right, I wanna do Victorian literature.

And she read four Victorian literature books, and I don't remember them all, but I know Wuthering Heights and Pride and Prejudice were two of them.

And I was like, what in the world is this girl doing?

But she was totally into it and loved what she was doing, and then she wrote a bunch of stuff about it and, and did a great job.

Definitely better than what I did in my own English degree in college.

It was, it was great.

But she did her own thing.

Now, what would be really valuable is if she had a, an AI chat bot that was there to help her work through that stuff and talk about goals
and how to make plans and what she was trying to do and what she was doing well and where she was struggling, that would've been nice.

But the thing that was really valuable about my time with her is that we talked during class once a week, and she just said, here's where I'm at and here's what I'm doing.

And I, and I said, this looks good.

Let me give you some feedback here.

This could be improved, that looks great, whatever.

And we just did that each, each week for the whole school year.

She produced more in that year than any of my other students, because she was operating at her own level. And what I didn't have was some way to help push her and keep her going, except just my interactions with her.

And so, if we can have some way to personalize everything that we're doing in class, that would be great.

Here's a real simple example.

You're teaching a math class and you have story problems.

You're trying to teach the Pythagorean theorem, and you have kids say, this is what I'm interested in, and then your questions use the same numbers, they just use something that each kid is interested in.

And so, you know, it doesn't matter what it is because it's all about learning the math.

So you have 500 questions that you can generate instantaneously, each one relating to a kid's interests, giving each kid 10 different options for how they want to answer that specific question.

And then like that could be generated in a heartbeat.

And that is the amazing thing because then a kid might actually be interested in it if they're like, when will I ever have to use this?

And you're like, oh, here's a place where you could use it, in something you're interested in. That becomes really valuable.
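For anyone who wants to tinker, the idea Jethro describes here, same math with a different story around it, can be sketched in a few lines of code. This is just an illustrative sketch, not a real product: the interests and problem templates below are made up for the example.

```python
import math

# Hypothetical interest-based templates. The underlying math (a right
# triangle with legs a and b) is identical in every version; only the
# story wrapping changes to match what the student cares about.
TEMPLATES = {
    "soccer": "You cut diagonally across a {a} m by {b} m practice grid. How far do you run?",
    "space": "A rover drives {a} km east, then {b} km north. How far is it from base?",
    "gaming": "Your character moves {a} tiles right and {b} tiles up. What's the straight-line distance?",
}

def make_problem(interest: str, a: int, b: int) -> dict:
    """Wrap one Pythagorean computation in a student-chosen context."""
    question = TEMPLATES[interest].format(a=a, b=b)
    answer = math.hypot(a, b)  # sqrt(a^2 + b^2)
    return {"question": question, "answer": answer}

# One set of numbers, three personalized problems, generated instantly.
for interest in TEMPLATES:
    problem = make_problem(interest, 3, 4)
    print(f"{interest}: {problem['question']} (answer: {problem['answer']})")
```

An AI model does the same thing more flexibly, with free-form contexts instead of fixed templates, but the principle is the same: the numbers, and therefore the math being assessed, never change.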

Sam Bourgeois: Yeah.

Yeah, the diameter of the Death Star compared to the opening of

Jethro: Yeah.

Sam Bourgeois: the tunnel, right?

Yeah.

Yeah.

I,

Jethro: Yes.

Sam Bourgeois: totally agree.

I totally

Jethro: Yeah.

So those are the kinds of things that we want to do, right?

Sam Bourgeois: We did that whenever we were in the classroom, but it was exceedingly difficult and taxing

Jethro: Oh, yeah,

Sam Bourgeois: and we couldn't do it for every kid.

We could just do it

Jethro: yeah,

Sam Bourgeois: a handful whenever it was possible.

Right.

Jethro: yeah.

And so what great teachers do is they default to, uh, things that they think most of their kids will get and will like, and that's great.

I, I so appreciate that.

But they could do so much better if given the proper tools and the proper understanding of how to use those tools, when to use them, and when to make 'em, right? Like, that is incredibly powerful.

Oh, you just muted yourself somehow.

Sam Bourgeois: Apologies.

I guess

Jethro: There you go.

Sam Bourgeois: I was quiet too long.

Um, you know, what I think about, though, is the risk.

I have to, that's just my security brain, right?

And I don't like to stifle innovation.

I don't like to put up too many blocks and obstacles, but I do think about data that we're collecting, and, and I worry about that.

Like, it concerns me. Like, not to change the subject from AI, but, you know, you were talking about Siri.

I mean, so Siri hears everything you say, waiting for the cue, the, the trigger: Hey, Siri.

Um, the question I have is, like, are we just piling on all this data and metadata?

You know, like, is there a risk there?

You know, are we creating profiles that can be used, you know, potentially in a negative way?

Like I, and again, I'm not, not thinking about it like, uh, like barcode tattoos or anything like that.

What I'm thinking about is, like, we create these learning profiles, and is it possible that the profiles don't reflect the learner, and that if given a different opportunity, a different path, they could be on a different trajectory, they could go faster, slower?

Um, are we creating biases in the system based on that, on that profiling of information?

Are we saying, now this kid, you know, I'm not saying we'd write 'em off, but this kid's obviously a low performer.

Let's, let's slow things down for them.

I know that there's always gonna be a response like, no, no, no, the AI could always encourage and uplift and train. But from that general perspective of, say, least privilege, I say data's bad data.

I don't want any data.

I don't want to, I don't wanna possess your data.

There's liability in anything that I have.

Right.

As schools start to compile all this information about learners, what are your thoughts about that?

Or is it, eh, we gotta do what we gotta do, right?

Like, it's part of the, part of the success.

Jethro: Yeah.

You know, I think one of the worst things is that we have all this data about students already and we don't use it to benefit them.

So, so that's an issue too.

And that's a real problem because, uh, we're missing out on opportunities to bless our kids' lives with the information that we do have.

So there are some aspects of that.

On the other hand.

We definitely should not be keeping a profile of these kids, because they will change, and we certainly don't want to do anything that would peg a kid lower in their capability than they actually are.

We'd never want to do something that would limit a child's potential for growth.

I can definitely see that happening.

If your references are always about sports, then the AI only gives you sports-related, uh, content, and it never gives you a chance to see something else.

And so in, in the design of these things, those kinds of considerations I think should be made as well.

That being said, you don't want to just be capturing everything and, uh, having it be there forever.

You know, you want the kids, you want kids to be able to be forgotten from the system.

And to be honest, with the way AI is right now, nobody knows how to do that.

And nobody understands how to make that actually happen because we don't understand how it works well enough.

And I'm not just talking about me, you know, the uneducated principal over here.

I'm talking about actual data scientists who are creating these tools; they don't know what's happening inside the black box of AI.

And that's, that's very real.

They don't know what's being saved completely, and they have some ideas, but they're not totally sure exactly how it all is working.

You can correct me if I'm wrong, but that's how I understand it.

So

Sam Bourgeois: No, you're, you're

Jethro: go ahead.

Sam Bourgeois: no, you're right.

And there's, there's some, there's some ethical thought exercises that I've gone through, and if, if you guys have never heard some of these perspectives, I'll give you one or two that are kind of the juicy ones.

You know, there's a lot of people who would argue that, you know, AI, whatever the heck that is, right?

Like it's, you know,

Jethro: Yeah,

Sam Bourgeois: your, your LLM, whatever it's you're using.

Jethro: the marketing term for what's been going on for decades.

Sam Bourgeois: exactly.


Sam Bourgeois: Um, some people say, you know, that AI art is not art. It's just a very poor copy, a very poor amalgamation of real art that's being plagiarized and copied.

And if that's what you're going for.

Okay.

Got you.

Got it.

Great.

Super duper.

Same thing could be said about all the writing that we might produce from these things.

I mean, there's the story of the attorney, uh, I think it was up north. He had a big case, did a bunch of research, turned in all of his information, cited all of his works, and the whole thing was made up. Like, none of it was even real.

Jethro: Yeah.

Yep.

Sam Bourgeois: So we hear those things and those are like the, I call it the fud, you know, the fear, uncertainty and doubt.

Jethro: Yep.

Sam Bourgeois: those things.

That's not my point.

My point is, like, there are ethical considerations to be made. And I'm, you know, I'm thinking about the EU. Like, the EU's got a lot of regulations
around privacy in particular, uh, their new AI framework that just came out middle of last year, middle of 2025.

I think that's gonna be interesting to see where the United States goes with some of those things about data privacy and protection.

And you used a phrase I was really focused on.

You said, um, I'm paraphrasing, the right to be forgotten.

You know, do we, is that a right?

Is that a, is that something that as a US citizen, do we have a right to that?

Can I say, I wanna be forgotten?

I don't wanna be a part of your system.

I don't wanna train your model.

You know, I wonder, Jethro, like.

All the people selling us the stuff, the AI stuff, they turn on AI from the textbook manufacturer.

They turn on AI from PowerSchool, they turn on AI, whatever.

Is it possible that they're just selling us a platform that we're actually doing the work to train, to

Jethro: Yeah.

Sam Bourgeois: information back, right.

Jethro: Yeah.

I mean, the challenge with this is that the training data is valuable and worthwhile, uh, to the companies and for us, right?

So if it knows that I like sports, then it can bring up things that are related to sports for me.

And, you know, there's, there's all kinds of ways where that knowledge about you is beneficial.

That's why people love the algorithm on social media.

And, I don't know that people personally love it, but they like being on social media and seeing things they like, because the algorithm has been trained
on the things that you look at, the things that you leave up, the things that you're paying attention to and click on and like, and all that kind of stuff.

So those things do matter, and we want to take advantage of that, but we also have to recognize that that might not be the best way to manage things.

And there may be benefit in starting fresh every time.

There may be benefit in starting fresh on some things and going back to other things in other ways.

And you know, the truth is we just don't know the answers to so many of these questions yet.

And a lot of people will say that and say, we don't know, so we just keep using it.

But what we really need to be thinking is, since we don't know the answer, let's be intentional and start asking some of these questions, so that when we see something that's a red flag, we can shut it down, or we can not take it to begin with.

And so, for example, I gave the example of teachers using AI to just create worksheets, right?

If that's what the tool is there for, then there's no point in having the tool, in my mind, because we don't need more worksheet creators.

So that's not how I would suggest we teach anything in education.

Um, and what I mean by that is that everybody does the same exact thing, and it just makes it so easy for the teacher to grade and correct and give feedback on.

And what we really want are opportunities for individualization, and understanding where people are at, and helping them in their own personal learning journey that extends far beyond what happens in the four walls of the classroom. And so that's what we want to be looking at.

If the tool is just to recreate standardized testing, then let's not use it.

If the tool is to make it easier for us to assess what kids really understand, then that sounds a lot better, and we can start with those kinds of questions.

What kind of education system do we want to have, and is it worthwhile to use this tool to help us get there?

Those are the kinds of questions we should be asking.

And when something gives us a red flag, and it's like, oh, we're going to use your data to train our future models, we need to be thoughtful about that.

In some situations, yes, we should be doing that, because if we're in a school and this is training a model on how to teach reading better, I want the data to train the model so that kids learn how to read faster.

That's not a bad thing. But if the data is used to say, uh, you know, we're gonna train this model and we're gonna use your personal information and refer back to you later, that's not a good thing.

And we just don't understand how this stuff works well enough. Nobody understands how this works well enough to really say, this is the true harm in using your data, and this is the true benefit in using your data.

We just don't know yet.

So for that reason, we start with don't use my data to train the model

Sam Bourgeois: Mm-hmm.

Jethro: until we understand what it's for, and then maybe we can.

Sam Bourgeois: Or create enclaves whenever possible.

Create enclaves.

There's a happy medium, right?

Jethro: Yeah.

Sam Bourgeois: can we,

Jethro: Yep.

Sam Bourgeois: Yeah.

Jethro: Yeah.

So we're gonna do all this stuff in this own siloed box over here so that it's not referring back to anything else, not connected to anything else.

And then as we learn better, then maybe we can say, all right, let's take this somewhere else.

So, for example, Claude, I think, is doing this thing where, when the context of your chat gets too long, it'll summarize it all and save it so that you have that context.

That is a good way to say, here's what it's remembering about us in our conversation, and then you can actually make a decision about that.

Most of the time, you can't make a decision about it at all.

Sam Bourgeois: That idea.

I like that idea.

It's interesting.

I hadn't, I hadn't really thought about that.

Um, I also thought about maybe we create systems that are kind of school-safe, if that even makes sense.

Like pseudo-anonymization.

Right.

Like, I mean, we, we already do that for the most part, right?

We already attempt to do that.

It's interesting.

Yeah.

Interesting thought though.

I, I think, uh, I think we've been promised that in many industries by many tools and it hasn't actually existed.

So, I do have concern there.

Jethro: Yeah, for sure.

And a lot of this comes down to trust.

Um, with these companies, you have to determine if you can trust them, and that is a huge question mark for many of these companies that you just don't know about.

Sam Bourgeois: Well, and you might do a risk assessment.

That was where I started this conversation.

That's my security brain, right?

Uh, a risk assessment may actually be a quality assessment, but then you don't know what happens behind the scenes.

Like, didn't Apple own PowerSchool, if you go back far enough?

Those are the kinds of questions we have to ask ourselves.

Jethro: Yeah.

Sam Bourgeois: If the industry has already demonstrated how they will operate: data brokers, uh, the clickbait economy.

Right?

A very inhumane approach to how we interact as humans with our technology.

If that's what the industry has already demonstrated to us, why would we expect anything different if they sold to an education market?

And that's potentially a concern.

Yeah.

Jethro: Yeah.

And then what happens when the company that you have contracted with gets acquired by somebody else?

Uh, and do you, are you forced into that new contract?

Do you want to be part of that new company?

And, you know, let's say you're working with a local mom-and-pop AI shop that is providing you a good service, and then they sell to PowerSchool or HMH or Pearson or any of these other publishers, and they're like, all right, now that's managed by them.

And it's no longer this little safe box on your desk in your school building.

It's part of their server and it's connected to their thing, but now they have insights into all the other stuff that they produce and publish.

And is that a benefit or is that actually a hindrance?

And those are the questions you gotta ask.

You gotta wonder about.

Sam Bourgeois: And they're not gonna give you a question-and-answer back and forth on this one, but, you know, God forbid they get breached, which is

Jethro: Yep.

Sam Bourgeois: what happened just a while ago.

Jethro: Yes.

Yep.

Sam Bourgeois: If they were breached, what happened to all that training data?

You know, like, imagine all those profiles and learners. Not to blow this outta proportion, but we see what happens in the real world.

We know what Cambridge Analytica did in 2016, if memory serves, with our US elections. We know what they would do with the data if they had access to it.

So how would that data be used nefariously, should there be a breach or some kind of leak or exposure?

Jethro: All kinds of things that we just don't know the answer to, and we have to recognize that we don't know the answer, and make the best decision we can with the information we do have, and be ready to intervene when things, uh, are not going well, which is a scary place to be, for sure.

But at the same time, you can't just ignore it and say, we're not talking about this, we're not doing anything with it, because it is the world that we live in.

We're not doing anything with it because it is the world that we live in.

And, and just because you block it at your school doesn't mean the kids aren't using it at home.

And we don't have any control over that.

But the reality is, they have access, and we can at least be partners with families in that, rather than, uh, just saying, one, you're on your own, or two, we're unlocking the floodgates here.

Neither one of those are good solutions.

Sam Bourgeois: Well said man.

Well said.

Jethro: Yeah.

Well, Sam, this has been awesome.

Thank you so much again for doing this and being part of this. And, uh, everybody who's listening, thank you for listening to Artificial Intelligence Real Talk. Any final words from you, Sam? I'll let you have the last word.

Sam Bourgeois: Man, I'm, I'm just, it's always a pleasure chatting with you and, uh, as you mentioned, it's always good to learn.

This is something we will never replace with AI: these kinds of real, authentic conversations.

So thank you for your time.

Jethro: That's right.

Even though I said I was gonna let you have the last word, now I'm not. But even when we do this kind of a thing with AI, it still is not the same as doing it with another person.

So you, you are absolutely right.

Thanks man.

Sam Bourgeois: Take care.

Bye.
