AI, Privacy, and Governance with Peter Gregory
All right, well, what is interesting to you in AI these days?
Oh, man.
Um.
Well, my AI governance book is gonna be out in about three weeks.
Oh, it is.
Excellent.
Uh, so I'm really excited about that.
Yeah, I just finished the proofing a couple of weeks ago, and then the publisher says, congrats, it's now at the printer.
And then he confirmed that the availability date showing on Amazon, where it's available for pre-order, is pretty close.
Uh, I'll have my own personal copies maybe two weeks ahead of that.
So maybe first or second week of January I'll have them in my hand.
Man, that's pretty exciting.
So tell me about your book.
What is the premise of it?
Yeah.
Let's see.
I gotta, I gotta move you over to the right window here, so I'm, yeah, all good.
Looking at you.
And not off in the corner of the room somewhere.
Alright, there we are.
So this is a book for professionals and companies who are serious about governance and want to bring AI into their governance program.
Or if they don't have governance and don't know how, then my book also helps them build governance from scratch, if they need to.
And this is a study guide for a certification called AIGP, AI Governance Professional, which was created a couple of years ago by the International Association of Privacy Professionals.
Which is highly respected in the area of privacy. And AI and privacy kind of go hand in hand here, because many AI systems are trained with personal information about us.
Mm-hmm.
And so, you know, existing and emerging privacy laws, um, there it is.
Yeah.
Privacy laws kind of run at odds with the way that AI systems work and are trained.
And so there are some caution areas in AI governance.
Training an AI system with PII, and the way that society and applicable laws expect companies to treat PII, are directly at odds with one another.
So,
uh, yeah.
So how did you go about writing this book?
Like, did you take the test and look at it and say, what do I need people to understand?
Yeah, good question.
So in my last position, when I was the senior director of cyber risk and compliance at an Alaska-based telecom where I worked for four-plus years, one of the last big things I did was, I was asked to put together an AI governance program.
Hmm.
So I found that IAPP had a really great training course on AI governance.
And so I took the course and I ran through the course three times.
It was a web-based on-demand course.
Mm-hmm.
So I could watch modules over and over.
And after I went all through that, that really helped me figure out, okay, now I know what I have to do here in this company.
And I built governance structures for identity and access management, for risk management, and for privacy.
So it wasn't entirely new to me, other than some of the new things that AI introduces that other technologies don't.
And after going through that training, I thought, gosh, I've written like 40 certification exam guides. Maybe I should write one for this one.
So I shot a proposal off to the publisher, and they said, yeah, let's do it.
So it's like, alright.
Yeah.
And you've written like a hundred books now, right?
Well, I dunno, it feels like a hundred.
Depending on how you count, I'm looking at the list now, it's like in the sixties.
I've written 43 full-length manuscripts from scratch.
So the difference there is that some books have gone into repeat editions.
Mm-hmm.
Which doesn't mean I have to write a full length manuscript from scratch, I only have to revise what's already there.
Yeah.
And that's over 25 years.
Yeah.
And you've got 42 titles on Amazon.
So there are some that are not on Amazon, that you can't even get, that aren't printed anymore, right?
Well, that's part of the story of why there aren't as many.
The other reason is, now if you stop right there, you see that one that's called Getting... oh, wait, no, that's not it.
I've written several of what are called custom publications for Wiley Publishing, and those are books commissioned by vendors that are not sold on Amazon, but one of them did appear here.
So if you scroll slowly, I'll show you where one of those books is, and you probably want to go the other way.
Other way.
Okay.
Um, all right.
Keep going.
Keep going.
I think there's one of those in there.
One of those custom pubs.
It's a Dummies book.
Yeah, so there's this Dummies book right here, where somebody took a picture of it, like they're reselling it right now.
That was a real book.
I mean, that was like 23 years ago.
Larry and I wrote that.
Yeah.
And it didn't sell well, so it didn't go into repeat editions.
Oh, okay.
So none of the custom pubs are in here.
Maybe one of them used to be.
So examples of these, just going up my list here: Advanced Physical Access Control for Dummies, which was commissioned by HID Global, which is like the security entrance-card vendor.
Another one, Stopping Zero-Day Exploits for Dummies, which was commissioned by an Israeli security company called Trusteer.
Hmm.
Um, and several others.
Um.
For about five, six, seven years, I wrote, I don't know, maybe 15 of these custom pubs.
They're short books, anywhere from like 25 to 60 pages.
Some of them I can write in a weekend.
Um.
So, yeah, those were fun.
I don't have time to do those right now.
Yeah.
But that is mostly the difference between what I claim and what Amazon lists.
Yeah.
Yeah.
That's pretty good.
Very cool.
So you've got the AI governance book, and for a school-focused audience, which is who's usually listening to this, the way I'm interpreting that is that governance means things like school boards and people overseeing the operations of the school.
Is that the same thing that you're talking about?
Yes.
Or are you talking about something different?
No, this governance, in kind of a generic sense, represents formal structures established so that executive management or the board of directors can exert control over a part of the business, and then receive information back in the form of metrics and KPIs and readouts and briefings, to give them feedback on how it's going.
So generally, in more formal companies, governance is something that boards do.
Mm-hmm.
But then many larger companies have what you would call an IT steering committee.
And this goes back to like the sixties, where stakeholders from across a business will meet from time to time, maybe monthly or quarterly, to deliberate what's happening in IT.
What does the business need in terms of new capabilities, updates, and so forth, for any of many reasons?
And then how can that all be prioritized to best meet the business's needs?
Yeah.
And so that would be an example of a governance structure, without having mentioned some of the minutiae, like the metrics and things like that.
Yeah.
Well, and this is a really fascinating topic, because like you mentioned before, the way that they go about collecting the information is definitely at odds with how we are supposed to manage and take care of the information of the people that we are serving.
That's a big issue right there: it'd be cool to use AI to help with that governance and pay attention to things, but you run into issues if the AI was trained in an inappropriate way.
How do you reconcile that? Because you really don't have control over how the AI was trained.
Well, it depends.
I mean, you should have control over it.
Let's say you're a university, or just a K-12 school system, and you want to better understand courses and grades and test results, let's say, and be able to do some prediction of that by bringing data in.
I'm just making this up on the fly.
Right.
Well, let me give you a good example of that.
We know that if kids are not reading by grade three, things are going to be more difficult for them, and we have tests to determine whether or not they're reading at grade level.
We're also required to keep data about students for years, some pieces of data for a hundred years after the student graduates from high school.
That's what some data retention policies are like.
By name, with grades and things like that?
Yes.
Depending on the thing.
When it's a hundred years, it's not that detailed.
But every year, the district has to archive and save the grades, or the assignments and the scores on the assignments that made up the final grade, for that student.
So school districts particularly have all of this stuff, and they have access to it; some of it is digitized, some of it is hard copy.
But the idea is you could essentially go back through the last 50 years of your school district and say, let's look at everybody who was at this level in third grade, and let's track how they graduated and what their rank was in their graduating class, for example.
And the data exists to be able to do that.
Now, with it being the district's data, it seems like they could use that, and perhaps anonymize it, and do something to make it so that they're not revealing personal information.
But then they could say, when a kid finishes this test in third grade, here's the trajectory we map for them if nothing changes.
That idea is like the holy grail for education: we'll know where someone ends up.
Well, and AI could be a great tool to analyze a very broad set of data points per student, especially if you could follow those students post-education.
Yep.
Then a well-trained AI system could show you what the success factors are.
Yeah.
Right.
And that would be something that educators would love to know.
Right.
Now, you touched on a point about anonymizing the data, and certainly you'd want to do that. However, that's harder than it seems on the surface, because if you don't anonymize it well, then data records are subject to being re-identified.
Mm-hmm.
For instance, in a different example I use: give me all the vice presidents in the software business living in this zip code who make more than $200,000 a year and are married.
Mm-hmm.
Well, you can anonymize that, but there might only be one person.
Right.
Right.
And it's like, well, we know who that is.
Yeah.
It's not hard to put that back together.
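The re-identification point can be sketched in a few lines of Python. This is a hypothetical illustration, not a real data set: the rows, field names, and values are invented, but they show how a handful of quasi-identifiers can isolate a single person even after names are removed.

```python
# "Anonymized" rows: names are gone, but rich attributes remain.
# All data here is made up for illustration.
records = [
    {"title": "VP", "industry": "software", "zip": "98101", "salary": 240000, "married": True},
    {"title": "VP", "industry": "software", "zip": "98052", "salary": 180000, "married": True},
    {"title": "Engineer", "industry": "software", "zip": "98101", "salary": 210000, "married": False},
]

# The query from the conversation: VPs in software, in this zip code,
# earning over $200,000, and married.
matches = [
    r for r in records
    if r["title"] == "VP" and r["industry"] == "software"
    and r["zip"] == "98101" and r["salary"] > 200000 and r["married"]
]

# Only one row survives the filter, so "anonymous" or not, we know who it is.
print(len(matches))  # 1
```

With a larger population the filter might return many rows, but the risk is exactly this: a combination of innocuous attributes can act as a fingerprint.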
So one way that's a very real example in the education world is that we aggregate and disaggregate test-score data based on different characteristics, like race and gender and things like that.
Right.
Demographics.
Yeah.
Yeah.
And so if there are fewer than 10, for example, then it won't show on certain reports, because if we show this, then you'll know exactly who it is, and that defeats the purpose of this being anonymous data.
So it just says "less than 10," and so there's essentially no data.
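That "less than 10" suppression rule is simple to sketch. A minimal, hypothetical version in Python (the threshold, data, and function name are assumptions for illustration, not any district's actual reporting code):

```python
MIN_CELL = 10  # reporting threshold mentioned in the conversation

def suppress_small_cells(rows, group_key, min_cell=MIN_CELL):
    """Count rows per group; mask any group smaller than min_cell."""
    counts = {}
    for row in rows:
        key = row[group_key]
        counts[key] = counts.get(key, 0) + 1
    # Groups at or above the threshold are reported; the rest show "<10".
    return {
        key: (n if n >= min_cell else f"<{min_cell}")
        for key, n in counts.items()
    }

# Made-up example: 12 students in group A, only 3 in group B.
scores = [{"group": "A"}] * 12 + [{"group": "B"}] * 3
print(suppress_small_cells(scores, "group"))  # {'A': 12, 'B': '<10'}
```

Group B's exact count is hidden because publishing it would let a reader single out individual students.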
Right.
But going back to this idea of being able to predict the future, and being able to use this historical data to train a model on what we can do going forward: there's a lot of value to that, and things that are very worthwhile.
Mm, tremendous
value.
Yeah.
Yeah.
So how would you go about anonymizing that data?
What would you suggest people put in place so that they train on the data in a way that respects personally identifiable information?
Well,
You know, you just sort of gotta sit down and look at what the data set is and how you're gonna use it.
That gets pretty detailed, kind of like asking, well, how would we rebuild this engine anyway?
It's hard to walk through in the abstract; it's easier to just find an example and do it.
It's kind of tricky.
It takes a good data scientist who understands data and how it can be used, together with a privacy professional.
And sometimes that's, you know, those are both in one person.
Mm-hmm.
Sometimes not.
Um, it depends.
There are so many things; the context is also important.
Are these customers? Are they employees? Are they students? Are they just members of the public?
Are they something else, like former inmates of our jail? What class do they belong to?
And then you've got to overlay privacy laws to see, okay, what are we required to do?
And in the context of the industry, there are many industries that prescribe minimums and maximums for the retention of business records, and often you cannot anonymize those.
For instance, employers are required to keep HR data for all employees for, what is it, termination plus seven years, or plus 10, or something like that.
Mm-hmm.
And you can't anonymize that in terms of the business records you're required to keep.
Although, if you wanted to play around with AI, you could anonymize a copy and throw it into your AI to get some generic workforce data.
Like, if you wanted to better understand the characteristics of the more successful employees in your company, that might help you find what those factors are.
And then that would help you hire better in the future, or take care of your current workforce better.
There's a tremendous amount of detail that potentially has to go into studying, okay, how do we go about this for this thing?
Mm-hmm.
The other big thing about AI that I write about in the book, back to privacy: one of the things that's codified into GDPR, and I don't remember if it's in California's privacy laws or not, the CPRA and CCPA, but I have a feeling it's coming anyway, because people are just so twisted up about marketing information that is just nonstop.
And that is the concept of the right to be forgotten, which organizations are obligated to honor on request, as long as it doesn't conflict with a different law that says you must keep your business records for a minimum of this long.
Yeah.
So as long as you don't run into other statutory requirements on required retention, then organizations do need to go ahead and remove that data.
Although anonymizing it is allowed, provided they do it well enough that it can't be re-identified.
Mm-hmm.
But then when you're talking about an AI system: let's say that such an organization, whatever business it's in, had this data about customers or employees or whoever, and they trained their AI with it to make their business better in some way.
And then one person comes along and says, okay, I want you to forget about me.
Well, in AI, there's no such thing as removing one record from a training set once the system's already been trained.
Yeah.
There's just no undo.
So what do you do about that?
I mean, that is the big head-on collision between AI and privacy.
Uh, and, and copyright.
What I recommend is: you've got to either pseudonymize or anonymize that data, unless the thing you want to do with the AI requires that you still know precisely who the person is.
And then, if you've gotta have real PII in there, and you've gotta put it in your AI because of whatever you decide you're gonna do as a business, then what you might end up having to do is something like monthly or quarterly retraining, where you remove the records in the next retraining cycle. Maybe that's how you do it.
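The retraining-cycle idea can be sketched as an erasure queue: requests accumulate between training runs, and the next run simply excludes those records. A hypothetical Python sketch (the names `erasure_requests`, `request_erasure`, and `next_training_set` are invented for illustration; a real pipeline would persist the queue and audit it):

```python
# Queue of record IDs whose owners invoked the right to be forgotten.
erasure_requests = set()

def request_erasure(record_id):
    """Log a forget-me request; it takes effect at the next retraining."""
    erasure_requests.add(record_id)

def next_training_set(all_records):
    """Build the next cycle's training set, excluding queued erasures."""
    return [r for r in all_records if r["id"] not in erasure_requests]

# Made-up example: three records, one erasure request before the
# quarterly retraining run.
records = [{"id": 1}, {"id": 2}, {"id": 3}]
request_erasure(2)
print(next_training_set(records))  # [{'id': 1}, {'id': 3}]
```

The catch the conversation raises still applies: between the request and the next retraining, the old model was trained on the record, so the cycle length becomes part of the compliance story.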
Mm-hmm.
Um.
Yeah.
But
Yeah, the devil's in the details here.
I mean, AI is capable of doing great things, but there are a few new gotchas that are unique to AI, because before AI, removing a data record for privacy purposes was no big deal.
Yeah.
And it's so interesting, because some of those things you wouldn't even think about.
I mean, a lot of things related to privacy, people just don't think about, period, right?
Yeah.
I mean, that's one of the real challenges that we currently face.
But then when you think about retraining the data and taking out the people who have asked to be forgotten, that seems more complicated.
Yeah.
And there would be situations where it would be very valuable to have PII. For example, there's an AI tool called Lebra, L-E-B-R-A.
Their goal is to improve culture in a school.
So what it does is it brings in your information from your Office 365 or whatever, so that it knows who everybody is and what their relationships are.
And then it allows you to use AI to write notes and do recognitions of things that may have taken you too long to do otherwise.
It just makes that part of it seamless.
So having the PII is essential, because you need to know that so-and-so, a third grade teacher at this school, is the person, and that they work at that school. You're not really training the AI on that person per se, but they are included in the data, so that when that third grade teacher at that school does something, and it is noted in the data, it will surface to the supervisors or the people that are connected to her.
So those connections and relationships do exist.
And those are things that we've never really had to think about before, especially with the black-box nature of AI as a whole, because we don't know what's actually going on in there.
Whereas with a conventional computer program, we just delete that record and, poof, they're gone.
But that's not how it is with AI.
Right, right.
I mean, these are some of the challenges, but they're solvable.
And one of the areas where I think we may end up seeing some innovation is that one of the AI vendors might figure out a way to remove individual records from training data.
Mm-hmm.
Now, I don't understand enough about how AI training works; I've just never gone that deep.
I've heard discussions about it, and it sounds like I'd have to go back to school for 80 hours, head down, and just be deeply immersed in this to have any kind of a clue as to what it all means.
But eventually, somebody may figure out a way of having an AI system unlearn a single record.
I don't know; I'm not optimistic, based on things I've read.
But in terms of the age of AI: in some forms it's been around for 50 years, but it's been used in volume for, what, five years?
So.
Yeah.
Ish, five-ish years.
Hmm.
So there are different ways of doing it.
One that comes to mind is that you could have an index of students and their names, and maybe their student ID, and then some other kind of identifying number.
And then when you import your training data, you don't import the student ID or the student's name, just this other index value that points back to them.
And if a student wants to be forgotten, I know it's a bad business case, but if a student wants to be forgotten, you can keep the roster.
All you do is delete the record in this intermediate index file.
So now what happens is, in your AI system, you've got a pointer that won't resolve and point you back to a student.
It points back to: well, this was deleted.
Yeah.
And so you won't know who it was.
But depending on what the other data points are that were imported for that record, you've still got that re-identification risk.
Yeah.
You know, that's something that I personally have done, where we've had the district student ID, we've had the student name, and then we've had the Jethro school ID.
And I import just that Jethro school ID, which is a number: I just took all the kids in Excel and did number one for the first kid, down to 560 for the last.
You need your own index.
Yeah, exactly.
And so that became the unique identifier for me.
And I could always go back and find out who that kid was when I needed to.
But the system didn't understand who they were; anybody else looking couldn't understand or see who they were.
And you had to have the key to unlock the data so that it made sense.
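The intermediate-index approach described in this exchange can be sketched as a token table: training data carries only an opaque token, a separately guarded key file maps tokens back to students, and deleting a key-file row breaks the link. A minimal hypothetical sketch in Python (the function names and ID format are invented for illustration):

```python
import secrets

# Token -> real student ID. Kept separately and access-controlled;
# only this mapping can turn a token back into a person.
key_file = {}

def pseudonymize(student_id):
    """Issue an opaque token (not derived from the ID) for training data."""
    token = secrets.token_hex(8)
    key_file[token] = student_id
    return token

def forget(token):
    """Right-to-be-forgotten: the training record now points nowhere."""
    key_file.pop(token, None)

def resolve(token):
    """Look up the real student; None once the student is 'forgotten'."""
    return key_file.get(token)

t = pseudonymize("district-0042")   # made-up district ID
assert resolve(t) == "district-0042"
forget(t)
assert resolve(t) is None           # pointer no longer resolves to a student
```

As the conversation notes, this is pseudonymization rather than anonymization: the remaining attributes imported alongside the token can still carry re-identification risk.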
Right.
But the problem here is that a lot of companies are diving into the deep end of the AI pool right away without knowing these little things.
Yeah.
And some of them are gonna get into trouble, and some of them are gonna have to go, oh, well, that didn't work; we're gonna have to start over.
Okay, AI Project 2.0, let's try it again, with lessons learned.
Well, and I think that's gonna happen a lot, with all kinds of different tools that people are using.
And eventually I believe we're gonna find a way to manage that, but I don't think that anybody has the need to do that just yet.
And that is an area that is concerning, because the damage is done; we're not going to be able to go back.
And when we look at, for example, the current lawsuit of copyright owners against OpenAI and Anthropic: I think Anthropic just paid like a billion dollars or something.
I am a member of that class.
I was gonna say, I have had to file a lot of paperwork, because many of my books are part of that class.
So yeah, someone tipped me off early to it, maybe four months ago, although I was gonna find out anyway. So I got on board and started doing my homework, and I'm in the class, and they'll be paying that out next year.
And there are several other lawsuits still pending, and probably some have settled by now.
Like, Getty Images is suing OpenAI.
Yeah.
I mean, and I don't have to tell you why, right?
Yeah.
And then the music industry has been suing one of the AI companies, because this is another collision area, having to do with what constitutes fair use of copyrighted content.
Yeah.
In any form, whether it's a book or performance art or something else.
And so copyright law needs to address this, and it doesn't.
Yeah.
Yeah.
Yep.
And that's gonna take time too.
But the other part of that is that content creators like Disney can partner with OpenAI and, on the same day, send a cease-and-desist letter to Google.
But then the question becomes, going back to what we were talking about before: how does Google take any Disney-related references out of their training data?
I can't imagine that they even understand how to get rid of all that stuff that's already in there.
Yeah, I mean, that's the problem.
And that was the risk they took when they decided to train their systems with copyrighted data.
I am just so sure that they talked about it in the planning stages: okay, we wanna build this giant AI system, what are we gonna train it on?
Everything.
Yeah.
What about copyrights?
Oh, we'll worry about that later.
Yep.
For sure.
That just absolutely seems to be the path they took.
And to that exact point: we as educators, when we adopt this stuff, have to think about those things right now, because if we do something to damage a kid, for example, we're gonna be held responsible for that.
And we have to be prepared to manage that and deal with it, in a way that we don't even understand how it could be possible.
Right, right.
Well, what that comes down to, and privacy laws have been testing this for, oh, I don't know, 15 years or more, actually probably 20 years, is what constitutes harm.
Mm-hmm.
What does it mean when harm occurs?
There have been a number of big security breaches where massive amounts of PII were breached because some company wasn't paying attention.
Right.
And some cyber criminals came in and stole all this data and either used it, published it, or sold it.
And there have been class-action and non-class-action lawsuits, and the judge says to the plaintiffs whose PII has been stolen: where's the harm?
Show me how you have been harmed by this.
Yeah,
Well, you know, it's an invasion of your privacy, but where's the harm? And lawsuits have been dismissed, because just because someone steals your identity doesn't mean you've been harmed.
You've only been harmed when they do something with it, like open credit cards in your name, run 'em up, and then not pay 'em.
Yeah,
then the harm comes, but then, you know, associating that with that breach over there.
Yeah.
Good luck.
Yeah, no kidding.
Good luck.
And not to mention when you put everything out publicly on social media anyway.
And then that makes it even more difficult, to say: yes, maybe the data was breached, and they got this information, but I could also spend 10 minutes Googling you and get all this information as well.
Oh yeah.
And that's a sickening feeling, I imagine: to think that you have really been harmed, and then have a judge say, sorry, there's no evidence of harm here.
That's gotta really stink.
But at the same time, there's some reality to that also.
Yeah, I mean, you know, feeling uncomfortable does not mean you're a victim of a crime.
Yeah.
It might mean that you may become a victim of a crime, but it doesn't mean you will.
And until you do... I mean, until we have a department of pre-crime. You know the reference?
Right?
Yeah.
Exactly.
Hopefully we never get there, and I'm not sure I want that anyway.
No, definitely not.
Yeah.
'Cause, you know, people can commit crimes in their heart and their mind and never carry them out.
Yeah.
Are they guilty?
I know, I know.
Yeah.
My thought on that, briefly, is that if they can commit the crime in their head and heart, then they can also seek forgiveness of that crime in their head and heart as well.
If it works one way, then it's gotta work the other way too.
So, we do have a question from Kenny.
He asks, and I think we've already answered this first question: do the AI companies give a crap?
And I think we'd say probably not, about most of this.
What do you think?
It depends on their exit strategy. But certainly, for any of these companies, if they want to get big and the founders wanna get rich, then that means they gotta stay in business for a little while longer.
Yes.
Mm-hmm.
But you know how companies are; some of them are not altruistic.
They care about themselves, and to them, the exit strategy is that the founders are gonna make a billion dollars apiece and go live on the Riviera, or wherever.
And they don't care what happens.
Um, mm-hmm.
But their companies and their founders are all over the place.
I mean, they will need to care, because there are some laws where directors, as in board directors, are personally accountable when things go wrong.
Mm-hmm.
Yeah, exactly.
So I would say, for the most part, they care so long as they can continue doing what they're doing.
And if it becomes a thing where they can't continue doing what they're doing, then they will care.
And yeah, I mean, that's just the sad reality.
So Kenny also asks: how does AI as a whole impact what schools should be doing?
Should we even be teaching reading and basic skills?
Do you think brick-and-mortar schools are gonna go away?
What are your thoughts on that kind of stuff?
I've never been a professional in education, either K-12 or beyond.
I don't have expertise there, but I guess I could only say that AI is proving already to be really impactful.
And that's both good and bad: organizations and individuals can use it to do great things, but also terrible things if they're not careful.
Yeah.
Uh, or if they don't have a good moral compass.
Right.
One thing, though, that I will say, because I use AI sometimes in my work.
Mm-hmm.
I consider AI as if it were a fresh-out-of-college, really eager, but not-so-bright research assistant.
One who doesn't always get things right, but is fairly good at grammar and so forth, and can actually write some decent content, even if they're kind of locked into a style.
Yeah.
So that's one thought I have. But the other is that, for me, I use AI to supplement my work with subject matter I'm already an expert in.
Yes.
So when AI says something good, I recognize it.
Yeah.
When it's, when it gets something wrong, I recognize it.
Mm-hmm.
But there's a caution here: if you start using AI to help you in an area where you lack expertise, then unless you start doing a lot of fact-checking, it will often be hard to know whether whatever AI told you is correct or not.
Mm-hmm.
Okay.
That's one thing.
Now, on the learning part: I think professional educators know what the words are for this. I don't; I only know what it feels like.
Sure.
In my career, I've been a computer programmer, software engineer, network engineer, systems engineer, security engineer, DBA, IT manager, security manager, and security director, with many, many different things I've been responsible for over the years. My career has been one of continuous learning.
In almost every year of my career, I dove into something deep and learned something entirely new, like a new programming language or just some new skill.
Um.
What I found is learning is hard and learning requires struggle.
Yes.
Yeah.
You
learn in the struggle.
Now, if I could rewind my career back to the beginning, and if I had had chat GPT by my side since 1979, when I started full-time professional work, what if I had AI do those things for me?
What kind of person would I have been as a result?
AI would've taken the struggle off my shoulders, but then I wouldn't have personally done the learning.
Exactly right.
Yep.
And
I would have not really been an expert, but more like a messenger transferring data from an AI system over to a book page or a presentation or something.
Well, that, that is exactly my thought.
We do still need to teach all these skills.
Uh, brick and mortar schools might go away, but I don't think that's because of AI specifically.
I think it's because of the other things that we're doing in schools that are not serving kids and families well.
And so that I see as definitely an area to be concerned about.
But it really comes down to the necessity for what we in education call productive struggle, where it is struggle that is worthwhile, that teaches you something, and that is a necessity for learning.
You have to go through that to make those neural pathways stick
and remember the things that you're learning.
It's reinforcement.
Yeah.
Yeah.
The learning and retention come through reinforcement.
Yeah.
One little vignette of an example I can think of here is in the books I write: sometimes I'll have AI write a passage or a short section, say a couple hundred words.
Yeah.
And then I'll throw the words into my document, and then I'll kind of tweak it so that it sounds right.
What I find is that I will not have as good a recollection of having written that content as I would if I had written it from scratch.
Yep.
It was easier, but there was less of me in it, and less retention for what I had written.
Yeah.
And there's a really powerful thing that happens there: when you write your own words versus when you use ChatGPT to do it, the feel is very different.
I don't believe we know how to fully articulate that yet, but that is a key area where I can have AI do something for me, and it is not nearly as valuable to me or to somebody else as when I do it myself.
And I think about it like this when it comes to things like empathy and character.
Things like that.
What makes empathy from another human valuable is that they are limited in the amount of empathy they can give someone.
And what makes empathy in an AI completely ridiculous, and just seem hollow, is that the AI never runs out of time or energy to have empathy.
And empathy is really valuable, but what makes it valuable is the limit on how much of it we can give.
And so
Well, yeah.
Also the source of it.
Yeah.
AI empathy is counterfeit.
Yes, totally.
It's like, you know, processed sugar.
Yeah.
Yep.
And that is really the issue.
That's, to me, what it really comes down to:
AI can only mimic what is real, and only we as humans can create things that actually are real.
And even with artwork, for example, or music, AI can do a much better job creating that than I can.
That was never one of my skill sets.
That was never one of my gifts in this life.
And I've never been good at artistic things.
But now I can use AI to help me be more artistic with some things, but it still lacks the feeling and emotion of me putting myself into it.
And it may be better produced, it may look better, but a handwritten note that I write to my wife means a lot more than an AI-typed note to her.
Period.
End of story.
Right?
Or what if you hired an AI agent to watch your texts and auto-reply, and what if your wife is going, oh, I'm feeling really down today?
And then your AI agent goes, oh, I'm sorry to hear that.
Blah, blah, blah, blah, blah, blah, blah.
You know?
And then she might think, oh, he really cares.
And Jethro's going, did I say that?
Yeah, I was out chopping wood when I supposedly said that.
Yeah.
Yeah.
It's counterfeit.
Yeah.
I didn't know you were struggling today.
That would be something.
Alright, well, uh, Peter, this has been awesome chatting with you.
Thank you so much for your time.
Sure.
And for sharing.
Um.
Once again, the book that he was talking about before was the AIGP, Artificial Intelligence Governance Professional Study Guide.
Um, and so, uh, go check that out.
And Peter, thanks so much for being here today.
I appreciate it.
Yeah.
Hey, thanks.
Thanks for putting me on, Jethro.
Really appreciate it.
Yeah, my pleasure.
Enjoy.
Have a good one.
Yep,
you too.
We'll see
ya.
