An AI Porsche Classroom Experience with Michael Angelone


Jethro: Welcome to Artificial Intelligence, Real Talk.

I am very excited today to be joined by Michael Angelone, who's a professor at American River College, which is where my two brothers went to college many, many years ago, back in the late nineties, early two thousands. And, uh, he didn't know them.

I didn't know him, but we're together now.

Michael, welcome to Artificial Intelligence, Real Talk.

Great to have you here.

Michael Angelone: Thanks for having me.

I appreciate it.

Jethro: So, uh, the world has changed quite a bit over the past couple years as we've, uh, been exposed to artificial intelligence, and you have this idea about how we need to act in a world that is increasingly being shaped by machines.

And I'm reading a book right now called, I believe it's called the Next Renaissance.

I better look up that title and make sure that I, that I got it right.

It's by the, um, guy who, uh, Crosby, I think. I'm just gonna look it up so I don't put my foot in my mouth.

So it's called The Next Renaissance: AI and the Expansion of Human Potential, by Zack Kass, who used to be the, uh, go-to-market guy for OpenAI.

Anyway, very interesting stuff.

Um, and he has an optimistic view of it; AI and the expansion of human potential is how he views it.

One of the things he talks about is that our world is shaped by machines, and very much it is going to continue being shaped by machines.

And what does that bring about for you?

What are some of the things you're thinking about there, Michael?

Michael Angelone: Yeah.

And in my world, which is in the community college and in higher education, it's funny to think about how different learning tools have come along, and people are reminded of, like, Course Hero, which is the most recent thing, I think. I don't know if you're familiar with it, but it was a real threat to our way of life because, um, students were uploading your work.

If you're a professor, they would take your quizzes, and, it was gamified, so they would get points for uploading, like, a prompt from Jethro's class or my class.

And, and before that it was Wikipedia.

And, um, there's a lot of analogies and metaphors thrown out on like what AI means.

That's another noisy environment to be involved in.

It's, uh, it's not a calculator.

Um, it's not, in my opinion, reducible to just being a tool.

Um, it's sort of more than that.

Um, but the panic that happens, I think, is all based on emotion.

And so that's really what I've been swimming in for, like, the last few years: uh, panic, reptilian mode.

Um, but as far as your reference is concerned, I, I do think of it as a new renaissance.

I've heard the Gutenberg press as another analogy, but I think the best one I've come across, and I'm stealing this from somebody, I don't know who, so I do want to pay credit, but someone compared it to the expanding universe, and how we have to sort of see it the way astrophysicists see and measure the expanding universe.

And they don't just throw their hands up in the air and go, oh, we can't measure the distance between us and that distant galaxy.

Uh, they figure it out based on, on science and methodology.

So I think that we should approach it with the same kind of method. And when I say it, I mean the amorphous blob of generative AI.

Um, both with, like, literacy in schools, um, and in higher ed as well.

Trying to address some of the things that it impacts, which, um, I think the elephant in the room isn't AI or generative AI.

I think, like, we're too focused on that.

The real elephant in the room for me is, uh, this idea that it's a threat to learning, and that we're not sort of grappling, as teachers, with the idea of how to approach this new, um, this new discovery, if you will.

So like, think of it more as a discovery of an expansion of a universe.

And so that's why we're sort of wobbling in that messiness of what we do next.

Jethro: I think that that part is really interesting, because if you think of it in one way, then it becomes this, like, existential threat, something that is happening to you.

If you think about it another way, then it becomes this empowering, amazing thing that really can help you do things that you could never do before.

And that's one of the things that I find so interesting.

I recently finished my, uh, dissertation, which was that principals who use AI for innovation create cognitive equity.

Equity as in debt versus equity. And principals who use AI just to speed things up, they create cognitive debt.

And there's a paper that came out last year that focused on this idea of cognitive debt, that you don't remember what AI creates, and all this stuff, which is true. But at the same time, AI can create things that I could have never even fathomed, and it has significantly improved my life in some very specific ways, and in other ways has made no impact at all.

You know, like, it has not improved my relationship with my wife or kids.

At all.

And I don't really expect it to, you know. But when it comes to a school environment, either higher ed or K-12, there are some very real things, because to me, the biggest thing is that, uh, AI has made it so that all of these assignments and tests and projects that most schools have been working on

Michael Angelone: All

Jethro: are super easy to do with AI.

Michael Angelone: right.

Jethro: The AI can do a much better job anyway, and it kind of forces that conversation of, well, what are we actually here for, then?

Michael Angelone: Oh, I, yes,

Jethro: yeah, so talk a little bit about

Michael Angelone: I've been having, I've been asking that

Jethro: would get you going.

Michael Angelone: question since, like, uh, November of 2022 or whatever it was, when ChatGPT dropped.

And, uh, my students, uh, in November were normal, in the sense of what I'd been dealing with up to that point.

It's been a minute now, jeez, uh, 14 years in the business.

Uh, pretty normal, right?

In terms of what students would turn in at the end of a semester.

Carefully scaffolded portfolio based, process based approach to writing.

More general composition courses, things of that nature.

Not to bore your audience, but, you know, really trying to give you context: your first-semester English 1A or 1B class, right?

That everybody who's entered college and gone through knows they need to take.

Right.

You gotta check those boxes.

So you get a bell curve.

But it was like, by December, everyone sounded like Toni Morrison, right?

And everyone was gifted and great.

Um, and I had, you know, kind of...

I'm not a technocrat, but I'm pretty plugged in, as an English professor, to what's changing with technology, especially as it impacts me.

Um, I wouldn't call myself a late adopter.

I wouldn't call myself an innovator either, by any stretch of the imagination.

But with AI, I would say that, of anything in my life, I'm probably a little bit toward the back of the line on early adoption.

With that said, um, I had a mental nervous breakdown, man.

I had an existential, like, panic.

It was no lie.

I was calling friends in December going, how is, uh, how is pharmaceutical sales?

How you doing in, uh, surgical sales?

I heard you guys do very well, right?

Jethro: Yeah.

Can I, can I get a job transfer right now?

Change my career?

Yeah.

Michael Angelone: And, like, my livelihood, it just completely shifted.

So I was trying to tell that human tale because, um, in January I started to have less of like that emotional reaction.

My brother's also, like, a psychologist.

So that makes me kind of like an armchair cognitive behavioral therapist on Monday mornings.

Um,

Jethro: Yep.

Michael Angelone: I have a little introspection in that area, so I was like, man, you know, I could do three things.

Like, am I having an emotional reaction to this?

Yes.

Is it going to pass?

Sort of. All right, allow that to happen.

Going through my brother's sort of voice in my head: these are just data points. Don't think about how it makes you feel.

Just be glad you feel.

Then go from there.

Alright.

And then from there it was like, okay, into community, into discussions with other like-minded people.

I think Steve Jobs called them the crazy ones, or whatever, you know, that group of people. Just find my tribe.

And I did. Uh, with all due respect to my department at American River College, I kind of knew it wouldn't be happening there.

Everyone was still kind of in frozen mode, uh, or in constable mode.

Jethro: Yeah.

Well, let me just address that real quick, because this is the other thing that has a huge impact on what we're doing, and does have a huge impact on students specifically.

It's this: at first, everybody was like, you're not allowed to use AI.

It's banned.

And that was everybody's first response.

And it was like, well wait a second.

Maybe that's not the right response.

But if you are in an environment, either at work or as a student, where that is the response, then you can't even talk about it comfortably.

You know, you can't even say like, what could I do with this?

Michael Angelone: Right.

Jethro: But if you're in another place where they're like, oh, you know what, let's figure out what we can do and how this fits in, then it totally changes the whole game.

Michael Angelone: It's crazy you mentioned that, because, like, psychological safety alone was huge.

You know, I don't have numbers in front of me, but anecdotally it was, like, a huge piece of this pie.

It was, one, students felt like... and there's ways I did that too.

Um, one: full transparency, I'm doing it, okay?

'Cause I'm a humble person, and it's like, I don't know anything better than to throw spaghetti at the wall, see what sticks, and not go insane like I did back when this thing dropped.

So it's like, what do I, what am I doing here?

And so like any scientist, it's like observe, see what's happening, what works.

And with students, that psychological safety that you're pointing to was so important.

And that had to be, I think, developed through things they can see and feel, because we had a couple of panels at American River College that I was happy to host, and we had students on, which was super important.

And the number one thing they said, each student across the board, was that they would rather have a ban on AI in their class than a teacher who's like, you can use AI, but just to brainstorm and to chat, and doesn't really lay it out.

And that was, like, pain point one.

I wanted to create some kind of rubric where we could all discuss freely in my classroom what those degrees of usage even meant, and I've seen some usage scales and other things that a lot of interesting, smart people put together.

And just like I said, went, let's see if this sticks.

So my classroom became a point of psychological safety immediately, because my students were like, oh, okay, we're showing up to class, we're using it, and the teacher's here. Not just we're using it, but there's this guardrail framework that seems legit, because it comes from this place called UNESCO, and these are, like, people with PhDs after their names.

You know how the ethos goes for students.

And then bam, once I had that buy-in for like, eh, I'm not gonna say everybody, you know, I wasn't Jaime Escalante, I wasn't slicing apples.

There were a few students, and as there always should be, I think, wonderfully, to make the discourse community work, who were there to be like, you know what, can we opt out?

So I've even created things like a Bill of Rights, opt-out clauses for each assignment, you know, just so that students feel like, I can be here, psychologically safe, using it or not using it.

And so, so far so good.

Um, it's been sort of successful, but my next thing is to measure it.

So my next thing is to talk to as many people as I can about how we measure learning now, in an age of AI, where there's transparency and ethical usage.

Jethro: Well, let's talk a little bit about that, because my learning has grown exponentially since I've been using AI, but it is not in a way that makes sense in a classroom situation.

And that, to me, is one of the biggest changes that has to happen for educators: they need to know how to judge learning in a way that is appropriate, and the way we've been doing it just isn't going to work anymore.

So what are some of the conclusions you've come to with, with what that looks like for you?

Michael Angelone: So, a couple years ago... I've been on a dozen or so committees in-house at my own campus and in my own district.

So I'm not gonna name-drop a bunch of bureaucratic acronyms, but one of the first ones was for, like, our

Jethro: I appreciate that.

Michael Angelone: Student Success Council, which is like a local thing.

And, you know, I wanna say it was, like, four guys that were interested in AI, and then a supervisor, a manager at ARC. And so it was like, what do we do?

And I was like, you know what?

We really should start with a literature review.

And everyone was in agreement, and we spearheaded trying to put together what was out there.

And what we discovered was that there was some early usage of AI, but not a lot of great studies.

The MIT study had dropped, which is the famous, or, I like to call it, sort of the infamous study, because it was a small n.

It was done with a group of students, graduate students.

It really didn't prove the idea that AI killed learning, like the tabloid-headline, clickbait thing said it did, and that's probably a whole other podcast to talk about, because it's still a matter of debate.

And so that's what we're trying to do here, right?

You know, uh, test this stuff to see if it works.

So, first things first: I just came back from ASCCC. This is a narrative, this is a timeline, from this Student Success Council work to now doing this statewide work.

And the State of California community college system is basically saying, help us.

Um, and the second thing they're saying is, maybe we gotta stop putting so much emphasis on product assessment.

And, um, I'm looking at my friend who was there with me going, you know, it's one of those moments in time where it's like, I've been saying this for years now.

Uh, and finally they're listening, but I don't get paid the big salary like this guy Peter does.

He's saying it, but at least he's shown up to my two breakout sessions, which is a good sign.

Uh,

Jethro: Yeah,

Michael Angelone: you know.

Jethro: it.

So that's one of the things, specifically for me, that I've been saying for years: we need to have more focus on projects and process rather than on the end goal, which is, how do you perform at one point in time and say, I get this.

And, you know, as a teacher and as a principal, those were the things that I was driving for my students as much as I could: this is what real learning looks like.

And if you learn how to learn, then it doesn't matter what tools are there, because learning is a personal endeavor, regardless of whatever else is going on.

You can have twins sitting in your classroom.

Those two kids, because it's a personal experience, are going to get different things from that class, even though they could have everything else in common.

But because it's such a personal thing, then we have to figure out ways to, to balance that and figure out what learning actually looks like.

Because our experience is that learning looks like doing well on tests, and that may not be the thing that we use to measure going forward.

Michael Angelone: I can't.

And that was, honestly, um...

I don't wanna sound like a blowhard, but when I hit the wall there in my lab with that same idea, it was probably in the same timeframe that the other educators who were involved in generative AI hit that wall.

Um, both in, like, how it was impacting them, you know, how you had mentioned that it's making you smarter, and then imagining, 'cause you sound like a passionate person like me, imagining what it could do for students, right?

Like, if they had this.

If you could share this power with them. And so I hit this wall, and I'm gonna get back to how I heard you use the word judgment.

And then, um, one thing I hear is, like, how to restrain.

So I want to get to that; put a pin in that.

But so it was like, okay, then how do I measure those things too?

Are they important for measurement?

Will it be convincing enough to, you know, be accepted for a letter grade into a whole system?

Well, I'll cross that bridge later.

For now, I'll just go under the veil of academic freedom and test the stuff out.

So I thought to myself, um, well, one of the craziest polls that started me into this... and first of all, I'm not falling into a fallacy here.

It's like what my mom used to tell me when I was a little kid. I'm from New York.

She would say, if your friends jumped off the Brooklyn Bridge, would you? You know, that old saying. Just because everyone's doing it doesn't make it cool.

I know that.

But then there's all kinds of environmental concerns, and I don't want to put those on the back burner whatsoever, from my perspective.

So I'm not necessarily about usage.

My whole thing is impact and impact on the learner.

So when you find out that, like, 86% of learners across the globe, in a study from two years ago, said they use it regularly, now imagine how much usage there is on a daily basis, you know, by any given student in the United States of America.

So if we could just say it's ubiquitous, even if they're unintentionally using it.

That means we start having to have conversations about environmental sustainability when it comes to the things we use that are technology, whether it's GPS, Discord accounts, Red Dead Redemption 2, or AI. Like, there's a literacy involved: if you use a tool and it uses, like, as much energy as it takes to charge your phone, then you should really think about why you're using it, right?

That's one.

So there's literacy, and that has to be really baked in, excuse me, to the whole process-based approach that we're talking about.

Because there has to be buy-in to: okay, I'm using this machine, it's gonna help me not hit executive friction, but, uh, allow me to sort of develop my executive function, which is something that I think a lot of colleges assume our students kind of come in with.

I went, aha.

That's the place where we can measure: if they're using it, when they're using it, how they're using it, what questions they're asking, how they're iterating.

And if, like, the four basic areas on my executive function list are checking off, then I think I can measure that. That is basically my hypothesis and my approach to usage in my classes.

Jethro: I like that.

Just putting a pin in the environmental sustainability thing.

Simon Willison, who is very much in the, uh, AI boat, posted about this a while ago, and has come back to it again and again: about, uh, the AI water issue being fake.

And so yes, it uses water.

Yes, it has other impacts.

However, it is not as much as a lot of people are making it out to be.

Michael Angelone: Right.

Jethro: And I don't know the right answer here, but what I'm thinking about is the sustainability of relying on something that you can't possibly understand. The environmental thing is a question mark, and we should be looking into that, but your own sustainability in using it...

Like, if you are supposed to be learning and you're using AI to circumvent your learning,

Michael Angelone: Great.

Jethro: to me that's a hard stop.

Like you can't do that.

But if you're using it to enhance your learning, that's a very different conversation and a very worthwhile one to have with kids.

And they need to be able to understand that and articulate that.

And I know your kids are adult students.

My kids are students younger than 18.

But still, those are questions that need to be answered.

And as I build stuff with it myself, I need to know what is worthwhile having it do for me and what I should be doing myself.

And I still... after, uh, more than... 'cause I was using it before ChatGPT came out, just in a different way.

Um, like, for a long time I've been using and teaching about this stuff.

And so.

I still don't know the right answer, to be honest.

Michael Angelone: Right.

Yeah.

You know, I don't know.

And that's the whole, like, astrophysicist thing I keep going back to. It makes me sleep better at night. If I'm an astrophysicist... like you had mentioned, that framing maybe is BS, who knows?

But I think it helps me in not just throwing my hands up in the air and, um, saying, well, this is dangerous.

I know it's dangerous in a lot of ways, just like I know autonomous vehicles are dangerous.

Um, and the things that we depend on... there are analogies with other things that we grow to depend on, that are technological, that are resource-based, that are finite, right?

Like automobiles and other things.

You start getting into, like, you know, not questions of banning, but questions of, like, how we get the most use out of it, how we make environmental protection laws, how we protect drivers from one another.

I mean, like Ralph Nader, right?

Um, you know, how we protect ourselves from each other.

Um, helmet laws from motorcyclists, you know, like the list goes on and on and on.

When we talk about things that we depend on each day, and not to get into some kind of philosophical argument, but I really think that AI belongs in the category of something that we don't scapegoat, per se, because I have read those articles.

And, um, there's a certain amount of water that's used that's continuously recycled; it's not like a continuous stream, and things of that nature. So I don't know either.

I just know that to stay informed is my job so that I'm relevant as an instructor so that I can make sure that this is baked into our curriculum so that our students know this too, so that they learn.

And it's not just thrown into the category of critical literacy or, uh, you know, digital literacy. That, to me, I think, is a cop-out.

I think that it belongs in its own category: AI literacy.

And I think that it's something that our state and other lawmakers need to get off of whatever they're doing and figure out, because I think accreditation comes down to program outcomes, the things that people like you really care about more than a lot of things, that'll keep you up at night.

I know.

So it's like, that reframing needs to happen, I think, unilaterally, between senates, districts, boards, and local principals for their teacher staff.

And I think it's not as urgent... it doesn't have to happen overnight, but I think it needs to happen in as much time as it takes to plan, like, a World Cup.

Jethro: Yeah, I like that.

Uh, so you mentioned coming back to this idea of how to restrain.

So talk, talk about that.

Michael Angelone: So I think that if a student doesn't know how to restrain with AI, that's... I think that's a measurable thing.

Um, and I think it could be written up into a rubric, and I really believe you could norm people on what that means and how that's measurable in a certain large-language-model context.

So I'll give you a sort of cash on demand example.

Um, let's say I have a classroom that day, and it's an AI workshop, and students are iterating with AI, trying to come to, um, let's say a brainstorming session with AI to come up with a thesis statement for something that they want to explore, right?

Maybe they're gonna write an in-class essay, you know, at the end of this, so they can perform, as you had mentioned earlier, the knowledge, because we still need those outcomes, right?

So I'll let them use AI in class to prepare for the in-class essay, just as you or I could have used the writing center or our teacher's office hours to help us prepare for that in-class essay during finals week.

The same kind of external mechanism, but this one's on demand, 24/7, and gives you basically everything you want.

So, about that: bias mitigation is huge to me.

If I don't see that a student was able to go, oh yeah, but wait, did you consider this? What would a person who disagreed with me say?

So this concept of entertaining a naysayer rhetorically, in the rhetorical situation, is huge for us, whether you're using AI or not.

So if you're writing a paper and you're not including an objection or an opposing viewpoint, right?

One thing that guys like me in our lane have been doing for years is teaching this sort of Toulmin, Rogerian, rhetorical sort of approach.

Um, ethos, pathos, logos, right?

And so audience awareness, context, is everything.

And so you can actually see rhetorical ability in the actual inputs.

And so it's like, you compare one student who has the ability for bias mitigation; their follow-ups to AI inputs are interesting to see.

And I measure 'em. I look at them, and I collect that data, and go, um, I could run this through my own agent, and it could give me sort of a: is this person advanced?

Now people go, what's the thing?

Is it prompt engineering?

I go, no.

Um, they are able to go through the four-phase AI regulation scaffolding process, which I sort of had to come up with a stupid name for, so I could convince people that it's not just prompt engineering.

It's like, are they asking good questions?

You know, is that measurable?

Yes.

Simple as that.

Can I norm Jethro in a two-hour session and say, read these iterations and you tell me: can we measure this student's executive function when it comes to these particular areas, specifically with judgment?

I think that student's able to judge why they're using it.

Um, and I try to balance it off with more metacognitive writing, where it's like, just bring your authentic voice and tell me what you learned from the process of iterating with some large language model.

Jethro: Yeah.

Well, and that's one of those skills that is to me, vitally important.

Do you take whatever it gives you?

Or do you iterate on it?

Or do you say, I have no idea what's going on here, and I'm just gonna like move on to the next thing.

And that range does exist, and it is very real. People who take whatever it says at face value, that's one thing.

And in certain domains and contexts, that makes sense.

For example, I've had it create several apps for me, and I don't know if it's writing the best code ever, because I'm not a developer; there's no way for me to know other than, does the thing work?

And if it does, then honestly that's good enough for me, and that's all I need it for.

But through that process, I have also learned when to say, uh, that doesn't work how I need it to, or, that doesn't sound right based on what I do know.

Explain it to me better, and help me understand why you're making this design decision.

You know, all those kinds of things. It can one-shot something for you, but almost always it needs multiple layers and refinement and things like that.

Michael Angelone: Needs

Jethro: there?

Michael Angelone: judgment.

And, not to cut you off, it needs human judgment.

Jethro: Yeah.

Mm-hmm.

Michael Angelone: And I think it can be taught, is my thing.

And I know other people are saying it, but I think it can be taught; I think this could be a place where it's taught.

Um, and it's not just... I think you're teaching them to drive a new car, in a weird way, a new kind of vehicle, if you will.

You know what I mean?

That does take... um, I like analogies, so I'll use this one.

Like, I think that, for me personally, if you're gonna say, Mike, choose a car right now, and you put out a 1984 Porsche 911, manual transmission, or a Tesla Model whatever, autonomous, Ludicrous Mode, I'm taking the Porsche. And the reason why is, maybe call me old school, but I love an open road, a little German car that hugs it, you know, with a manual transmission, me and the car working together, getting to where I want to go and feeling it. It's an aesthetic experience, right?

A Tesla.

For me is the exact opposite, especially on autonomous driving mode.

It's, it's getting me to where I need to be.

It's achieving the same goal, but it's, I, I could fall asleep if I want to.

And um, more importantly, and it's an it that sleeping and an anesthetic, so it's an anesthetic driving experience.

I want my classroom to be an AI Porsche experience.

I want them to recognize that they're driving something and the driver is the most important thing.

And I do not want idiots driving a Porsche 'cause it's a very expensive car.

Right?

And I don't trust that just because you graduated high school or you're returning to school and maybe you have a few trips around the sun and it's your chance to go back to school.

'cause community college sees all different kinds of students.

I'm still not convinced that even with those trips around the sun that you are 100%.

This might sound deficit minded, but I'm just one of the few people that will keep it real, especially with AI Now.

Because especially with AI now, I'm not afraid to say it.

I don't trust you behind the wheel of my Porsche.

Like I'm, I, I, I, we're gonna start small.

We're gonna go on the track.

We're gonna go around in first gear, second gear, third gear.

I'll let you open up the hammer after about two hours of driving this thing.

And you don't, 'cause I don't want you to fishtail this thing.

It, it, it's a funny kind of car.

And I'm gonna that stop with my analogy because I think it works because I think that if teachers are ignoring it.

We're gonna just have people driving their, their dad's Porsche around and crashing them, you know, and metal surrenders when oak trees meet fenders.

Jethro: Yep.

Michael Angelone: Uh, and I think that's what the danger in AI is: we have to treat it like a license to drive. Not to police its usage, but, in a weird way, like Mothers Against Drunk Driving doing pro bono work just for schools, going around, you know, making announcements about the dangers of drunk driving or whatever, texting and driving, everything included.

I've been talking at you long, Jethro, but I'm so excited about it, because I'm just saying, that's the shift.

That's the shift that has to happen: for me first, epistemologically, and then teacher buy-in is next, because students are already bought in.

That's me saying it straight.

Jethro: Yeah.

Well, going back to the analogy, uh, if a kid's like, hey, can I drive that Porsche?

Every kid's gonna wanna say yes, but they don't understand how to do it and how to do it well.

But given the opportunity, yes, I'm gonna say I want to drive it.

And that's exactly where we're at.

I really like this analogy.

Uh, we are out of time for today, but this was a, a great conversation and we'll probably need to have this again and talk more about it.

But my big takeaway is that the AI Porsche experience is a powerful way to look at it.

And that is a good way to do it.

That's why we need some handholding.

We need some structure.

We need some frameworks to help people learn how to use AI.

And, you know, some people can just go learn how to do it and use their own wisdom and judgment to be cautious and smart about it.

Um, and if they can't, then they should definitely have some help.

So, Michael, if people wanna connect with you more, what's a great way for them to get in touch with you?

Michael Angelone: You can look me up on LinkedIn.

Um, search my name, Michael Angelone.

It's, uh, A-N-G-E-L-O-N-E.

I'm also a professor at American River College, so you can find my bio at americanrivercollege.edu.

Jethro: I've got a link to your LinkedIn, uh, in the show notes at visionforlearning.com.

Thank you again so much for being here.

Michael, this was great chatting with you and I really appreciate your insight.

Michael Angelone: Thank you so much for having me.
