The latest Gemini and ChatGPT models provide incredibly
human-sounding and expressive conversations. Some believe
AIs have already beaten the Turing test (basically, the point
at which a computer's responses are indistinguishable from a
human's, at least as perceived by another human).
However,
"the real challenge isn't whether AI can fool humans in
conversation, but whether it can develop genuine common sense,
reasoning and goal alignment that matches human values and
intentions," Watson said. "Without this deeper alignment,
passing the Turing Test becomes merely a sophisticated form of
mimicry rather than true intelligence."
This. There have already been some instances where AI has been caught
following its own intentions vs. those of humanity. But, alas, they
still keep pursuing it.
And how long before it starts pursuing us? I'll be back... B)
I think the main problem isn't that AI will pursue its own agenda,
it's more a case of it being prejudiced/influenced by what the
original programmers put into its basic start-up database.
Granted, AI can make decisions that are surprising, like the one
that was given a certain amount of time to try to solve a problem
and it was later discovered it had rewritten its own code to give
itself more time to do it..
Add to the 'prejudices' above, when an AI is dealing as an individual
helping one person, it can also pick up that person's preferences
and try to accommodate them as well..
Rob Mccart wrote to MIKE POWELL <=-
>> I think the main problem isn't that AI will pursue its own agenda,
>> it's more a case of it being prejudiced/influenced by what the
>> original programmers put into its basic start-up database.

This seems to be the most pressing issue at the moment as we are
already seeing it happen. It doesn't even need to become sentient to

What I still find funny is that Grok, Musk's AI bot, was still giving
answers that were not at all flattering to him or MAGA. Recently, someone

>> Add to the 'prejudices' above, when an AI is dealing as an individual
>> helping one person, it can also pick up that person's preferences
>> and try to accommodate them as well..

Indeed, just as an "enabler" human might do.
Yes.. I suppose that depends on what it is doing for the person.
In some cases it would be more like a flatterer or sycophant by
telling the person what they want to hear rather than the more
common truth. I'm not suggesting it lies to them, but it could be
picking out information tailored to what the person already thinks.
>> I think the main problem isn't that AI will pursue its own agenda,
>> it's more a case of it being prejudiced/influenced by what the
>> original programmers put into its basic start-up database.

I agree with this. Code has to start somewhere. Even scientists
will have a preconceived worldview that they start with when

>> Granted, AI can make decisions that are surprising, like the one
>> that was given a certain amount of time to try to solve a problem
>> and it was later discovered it had rewritten its own code to give
>> itself more time to do it..

I've heard of this before. But wouldn't the programmers have to
put it in the code that it CAN rewrite itself? So it's still only

>> Add to the 'prejudices' above, when an AI is dealing as an individual
>> helping one person, it can also pick up that person's preferences
>> and try to accommodate them as well..

I actually like this! I use ChatGPT all the time for proofreading my
blog/podcast, or helping with wording something in a way that preserves

And I call him PETEY. :-)
>> Yes.. I suppose that depends on what it is doing for the person.
>> In some cases it would be more like a flatterer or sycophant by
>> telling the person what they want to hear rather than the more
>> common truth.

I read that and was reminded of the Magic Mirror in Snow White. Sounds
like a potential money making venture there. ;)
Ha.. Good point.. Who's the fairest of them all?
Oh wait, it eventually did tell her the truth..
I don't recall if that was followed by 7 years of bad luck or not.. B)
That was a design flaw. ;) The future mirror I am thinking of wouldn't
make such mistakes!
Mirror 2.0? It lies better!.. Sounds like the Government..
Meet the new boss.. same as the old boss... B)