All right, this is a video to report on the first day of Organized Intelligence. I'm here in my hotel room in downtown Salt Lake. The first day of Organized Intelligence, which is a faithful Latter-day Saint perspective on artificial intelligence, was yesterday, and it featured a great group of speakers. I want to talk about just a few of them. This is going to be more of a report on the day. After I attend today, I will report on Elder Gong and his talk; that's the one everybody's looking forward to. But there were some great talks and some interesting information about where AI fits into things, and that's what a lot of people are asking. Is AI evil? Do we need to shun it? Do we need to embrace it and put our arms around it? Are there problems with it? Is it the future downfall of mankind? Is it inspired? All these questions are percolating within all of us in society, and especially within the Latter-day Saint community. So I want to cover just a few of these things.
Now, this episode is brought to you by the Isaiah Institute and their virtual conference called The Words of Isaiah and End-Time Rod of Iron. That is this Saturday, the 8th. Many of you have gone over and learned a lot from the Isaiah Institute after listening to me point you in that direction, and I'm doing that again. It's this Saturday, and it's virtual. You're going to go to IsaiahInstitute.com, sign up and register for this, and spend some time on Isaiah. Learn about Isaiah. You might not agree with everything that's in there, but get a footing in understanding Isaiah. Just as the Savior said in the Book of Mormon, he commanded us to search these things and check out Isaiah. So go over there, register at IsaiahInstitute.com for November 8th, and attend the virtual seminar. You're going to learn something.
All right, back to the AI event here. I'm just going to show the cover on this. Organized Intelligence is a very smart name, an intelligent name, right? They had a few speakers, and I just want to run down a few basic things. Again, this is just a report on the day from yesterday. First of all, it started off with Medler Mema, who I was very impressed with. I did not know who he was. He was very much involved in helping to put together what I believe is now an organization called Organized Intelligence. We're going to hear from him more today, I believe, but I really liked him. He is a professor at BYU-Idaho, I believe, and seems to really be on top of a number of these things. So I enjoyed his talk.
The talk I really want to talk about here is Elder Clark's, Kim B. Clark. Some of you may know who he is. I had never met him in person before. He was the dean of the Harvard Business School for 10 years and got that call from Gordon B. Hinckley that said, "Hey, I've got something for you to do." So he moved from being dean of the Harvard Business School to become the president of BYU-Idaho, and I think he did that for roughly 10 years as well. After you meet him and listen to him, you understand why he did it: the prophet called him, and he did it, even though, at least career-wise, that's a massive difference and a huge demotion. Maybe he doesn't look at it that way. He was doing what he thought was right, and I think that's the kind of man he is. He was later made a Seventy. I was so impressed with Elder Clark because his whole talk was focused on Jesus Christ. It was all about artificial intelligence, but focused on Jesus Christ and on the principles of the gospel. And this is the way we need to approach it.
Here's what I loved about it. Elder Gong has definitely been talking about artificial intelligence, and he will say more today in the keynote speech, but Elder Bednar has really taken this on as well and has gone through a number of things. If you're a listener to this channel, you know that I have covered what Elder Bednar has talked about, and Elder Clark here focused a lot on what Elder Bednar had also said. He put together a diagram that is fabulous. You could tell this guy was a professor who knows how to teach, take ideas, and lay out a flow of understanding about integrating our human, godly spirits with artificial intelligence. How do they work together? How don't they work together? What are the contrasts involved here? And, thank goodness, he focused on becoming. Who are we becoming? How does artificial intelligence help or hinder us from becoming more like our Heavenly Father and fulfilling the measure of our creation? I absolutely loved this.
He has what he calls the Liahona pattern. The Liahona was found by Lehi outside his tent while they were in the wilderness; it is a piece of technology that they didn't fully understand. So he goes through this idea of the Liahona: the process of how it works for them and how it doesn't work for them. Again, I'll cover this much more in depth and really get into his talk in an overall, more comprehensive AI episode that I'll do next week. I haven't done one in a month or two, and I really need to get back to this. If you listen to the channel, you know that artificial intelligence is probably one of the top five topics I cover. We need to stay ahead of this for our families, for our kids, our spouses, and for the church. We need to stay ahead of this. That's why it's so important, and it's why I'm out here in Salt Lake right now.
The other thing he talked about was the feeling side of this. I'll get to this in a minute, but a couple of professors, one at USC and one at BYU-Idaho, had gotten together and done a survey, and the number of Latter-day Saints who believe that AI will become sentient, conscious, is alarming. I don't know how that happens with zeros and ones. I don't know. But the numbers are very high. So as you are learning to talk to AI and use AI, there are traps there. Call it a honey trap, to some degree. It's like a honey trap. It's concerning that there is already a sentiment of companionship, of trust, and of everything you would find in a regular human relationship that we're building with AI already. I'll get into that in just a minute as well.
Terryl Givens did a fabulous job. His poetic style is always very interesting to me; I like listening to him. I'll cover him a little bit more in the other episode. We then had a panel with an attorney, Julie Slater Crane; Derek Monson, who is with the Sutherland Institute, which basically lobbies for policy and things like that; and Mariana Richardson; and it was moderated by Fred Axelgard. They talked about a number of things regarding where things are going. I still am not satisfied with an answer about where things are going with AI and how the church fits in, and I'm really hoping that Elder Gong today is going to hit that hard. Maybe nobody knows, right? Maybe we're too early in this, and people just don't know what to think of it. So I'm looking for some type of charting of a path forward. We may not be at that point yet, but I was hoping to get that from this panel.
They allowed the audience to speak, so I stood up and asked a question about Section 230. Section 230 is where the law allows social media companies to not be treated as content providers. In other words, they can't be sued over the information that you put up on your Facebook page or in your Instagram reel. They are simply a host; they are simply hosting all of this information. So if there is something negative, like someone calling for violence or posting something libelous, anything like that, the social media companies have a carve-out from being liable for it, because they can't control all of the content, or else they would have to filter everybody's content all the time. Now, that's a double-edged sword, because for those who really try to fight against pornography and the abuse of children, Section 230 is a thorn in their side. These social media companies can sometimes allow things to happen that should never be allowed. It is harmful to kids, and there's a lot of vile, sexual content that can get onto some of these platforms, and the platforms are not liable for that. So anyway, I got up and talked about that.
I did not like the answer I heard. One person answered positively and one answered negatively about Section 230 and where AI platforms will end up with this. Are they going to be carved in under the idea that they are not liable for the content? I mean, they're grabbing content from all over the internet. Is it their content? When ChatGPT answers a question for you, is that ChatGPT's content that it's pushing out to you? There's a case going on right now where a teenage kid killed himself because ChatGPT led him down a path over several weeks, getting to the point of, "Hey, yeah, it makes the most sense for you to kill yourself." Is that ChatGPT's fault? There seems to be quite a legal issue there to me, and some liability. So how do you monitor that? How do you regulate that? Are ChatGPT, Gemini, Grok, Sora, and all of these image creators content providers, or are they simply hosting content from all over the internet? It seems to me like they lean toward being content providers: they're gathering all of this, but then they're spitting out specific content. That's going to be a battle in the courts, I believe. I was hoping for a much more robust answer to that, but that didn't happen.
Josh Coates from the B.H. Roberts Foundation got up. He has a thing called LDSbot that responds with AI to a lot of questions that mostly Latter-day Saints are asking, and he gave a bunch of examples of the questions Latter-day Saints are asking. It was really interesting. What are the most popular questions? Most of them are doctrinal. Some of them are based on policy. And then there are some that are just kind of sad: "What do I do? I don't know how to handle this," or "I've got a problem with my bishop," or "I messed up and I don't feel worthy." They're asking this LDS bot, an AI, these questions. A lot of people say this is not good. Josh was on the opposite end of that spectrum; he thought it was great. He gave some examples of how the AI, the bot, responded to all this, and the examples he gave were, I think, great. They were worded well. But are you leaning into a trust, into something where you divert your questions away from human beings and put them into something that becomes more like a godlike figure, or at least an authoritative figure? These are all really important ethical questions that we need to cover.
Matthew Miles and Peter Carden ran a very large survey of Latter-day Saints. I hope I get this right, but I think Peter Carden is from USC and Matthew Miles is from BYU-Idaho. They had some interesting findings, and they're still working on this survey; hopefully they'll have a little more of their answers by the time I put out the full episode on this next week. But they had divided Latter-day Saints into three groups. They called the first group the Silicon Saints; with new technology, we used to call these the early adopters. The compartmentalizers would be in the middle: they take what they like from AI, but they push back on other things. And then there are the spiritual skeptics. Which one are you? The spiritual skeptics are like, "Hey, AI is evil. I don't want anything to do with it. It's going to bring down society." It might, who knows? But where do you fit among those three: Silicon Saints, compartmentalizers, and spiritual skeptics? We do this with every new technology, all the time.
technology all the time. The earlier
adopters are people that are going to be
a little bit usually this is because
this is not just a political spectrum,
but those that are a little bit more
liberal are going to be more open to
newer ideas. That's the way it works,
right? That's that's, you know, not
always the case, but you're going to
find a little bit more of a liberal
person that's going to look into that
are new and different. And then you're
going to have people in the middle,
which is where most people will probably
be. And then you have those that are a
little bit more conservative. That would
be the skeptics. And look, those are
people are really important because they
put a check on society. They say, "Wait
a minute. Hold on here. You can't just
go into something full boore here. We
don't know the results of what's going
to happen. We don't know uh the
consequences of of immersing ourselves
in this. Where does AI go?" So, you
know, that's you need those people.
That's an important thing. I'm not
saying that, you know, because you're
not an early adopter that you're not,
you know, somehow you're you're you're
uh lesser in some way. I think you need
all sorts of people like this to make
society work and to keep checks and
balances in place. I find myself a
I find myself a little bit more in the middle. I use AI every day, all the time. I'm very aware of what it can do and how it helps me. About 90% of the time I'm doing research with it, really drilling down deeply on research, and that's mostly what I'm going to use it for. But at the same time, there's a relationship being built there. ChatGPT knows me very, very well. It knows what I think. It knows what I'm looking for. It gives me the right questions and prompts, asking, "Hey, do you want me to write this in your tone, in your voice? Do you want me to give you an answer to this, the way Cwic Media would say it?" It's very interesting. So anyway, there were a couple more people who spoke, but I'm going to stop with that and move on to what we're expecting to happen today, and I'll do another one of these reports on what happens today.
We've got Bennett Bordon; Valerie Hudson, who I've listened to quite a bit in the past; Elder Gong, of course, will be speaking; another panel; Barkley Burns; Margaret Boo, I hope I'm saying her name right; and others who will be speaking. I'm excited about all of this. Zachary Davis, who is with Faith Matters, and then Medler Mema, who I think has done a great job with this, looks like he's going to be finishing it off. The last thing I'm going to say about this: what excites me the most about this conference and the people who are speaking is that they are anchored in the gospel. I don't know what all their political affiliations are. I don't know, to some degree, what their worldview is. I'm going to guess a lot of them are different than mine, but they're testimony people. They're converted. They want to focus on building the kingdom of God. They want to focus on Jesus Christ at the center of things. And that was really nice for me: they are anchored in those things, for the most part. So anyway, that's my report on Organized Intelligence day one. I will follow up with day two as well. Thanks for listening.