Feed aggregator

Re: [agi] Internecine War

AGI discussions @AGIRI - Sat, 2018-04-28 18:21
Yeah, isn't that how it is today? All the factions fighting about the way
to AI... I'm participating, but what can I do - I do have important ideas.

Cheers
Stefan

On 28 April 2018 at 14:47, A.T. Murray <mentificium@gmail.com> wrote:

> Hi, AGI-ers! Are you feeling snarky today? If so, read on.
>
> http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=9925 is where
> in a Comment I once wrote:
>
> When I met Rodney Brooks on Thurs.28.FEB.2002, he told me that he
> had heard of the Mentifex AI project, but he did not render an opinion
> about it.
> Actually, when I presented my Mentifex business card, Rodney Brooks at
> first
> did not recognize it, but when he turned the card over and saw where
> I had signed the name "Arthur" in a message, he acknowledged that he
> had heard of what I do. I gave Rodney Brooks a copy of "The Robot Social
> Register"
> with an entry or two on robots of his creation.
>
> http://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/#comments
> is where the famous roboticist has a few snarky things to say about AGI,
> and
> about the SIAI/MIRI group in particular and the GoodAI Roadmap of 2015.
>
> http://blog.marekrosa.org/2015/12/goodai-r-roadmap-preview.html
> is where your Merry Prankster i.e. Mentifex stirs up trouble:
>
> http://blog.marekrosa.org/2015/12/goodai-r-roadmap-preview.html?showComment=1524887598362#c2340599592943589592
>
> Mentifex April 28, 2018 at 5:53 AM
>
> A famous roboticist in a blogpost on The Origins of Artificial
> Intelligence
> is quoting this Roadmap and evaluating it.
>
> The not-yet-edited or not-yet-deleted response is at
> http://blog.marekrosa.org/2015/12/goodai-r-roadmap-preview.html?showComment=1524897856781#c8137148604805875430
>
> http://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/#comment-7169
> April 27, 2018 at 9:27 am Yann LeCun says (inter alia):
>
> It’s hard to explain to non-specialists that AGI is not a “thing”,
> and that most venues that have AGI in their name deal in highly
> speculative
> and theoretical issues that sometimes flirt with charming naïveté,
> self-delusion,
> crackpottery, even charlatanism.
>



--
Stefan Reich
BotCompany.de
Categories: Discussions

Internecine War

AGI discussions @AGIRI - Sat, 2018-04-28 14:47
Hi, AGI-ers! Are you feeling snarky today? If so, read on.

http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=9925 is where
in a Comment I once wrote:

When I met Rodney Brooks on Thurs.28.FEB.2002, he told me that he
had heard of the Mentifex AI project, but he did not render an opinion
about it.
Actually, when I presented my Mentifex business card, Rodney Brooks at
first
did not recognize it, but when he turned the card over and saw where
I had signed the name "Arthur" in a message, he acknowledged that he
had heard of what I do. I gave Rodney Brooks a copy of "The Robot Social
Register"
with an entry or two on robots of his creation.

http://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/#comments
is where the famous roboticist has a few snarky things to say about AGI,
and
about the SIAI/MIRI group in particular and the GoodAI Roadmap of 2015.

http://blog.marekrosa.org/2015/12/goodai-r-roadmap-preview.html
is where your Merry Prankster i.e. Mentifex stirs up trouble:

http://blog.marekrosa.org/2015/12/goodai-r-roadmap-preview.html?showComment=1524887598362#c2340599592943589592

Mentifex April 28, 2018 at 5:53 AM

A famous roboticist in a blogpost on The Origins of Artificial Intelligence
is quoting this Roadmap and evaluating it.

The not-yet-edited or not-yet-deleted response is at
http://blog.marekrosa.org/2015/12/goodai-r-roadmap-preview.html?showComment=1524897856781#c8137148604805875430

http://rodneybrooks.com/forai-the-origins-of-artificial-intelligence/#comment-7169
April 27, 2018 at 9:27 am Yann LeCun says (inter alia):

It’s hard to explain to non-specialists that AGI is not a “thing”,
and that most venues that have AGI in their name deal in highly speculative
and theoretical issues that sometimes flirt with charming naïveté,
self-delusion,
crackpottery, even charlatanism.
Categories: Discussions

CfP deadline extension to May 8 for the Workshop on Architectures and Evaluation for Generality, Autonomy & Progress in AI @ FAIM'18

AGI discussions @AGIRI - Fri, 2018-04-27 01:12
Dear colleague,

On behalf of the organizers for the Workshop on Architectures and
Evaluation for Generality, Autonomy & Progress in AI (AEGAP 2018) we would
like to inform you that we're extending the paper submission deadline to *May
8*. We would be very happy to receive a submission from you or your team
for this exciting workshop. More information can be found below or on our
website: http://cadia.ru.is/workshops/aegap2018/

Please forward this call to those you think might be interested.

Apologies for duplicates.

Kind regards,
The AEGAP 2018 Organizing Team



---------------------------------------------------
|                                                 |
|        Architectures and Evaluation for         |
|      Generality, Autonomy & Progress in AI      |
|      Workshop @ Federated AI Meeting 2018       |
|                                                 |
|    July 13/14/15 (TBD) in Stockholm, Sweden     |
|                                                 |
|      *** DEADLINE EXTENDED TO *May 8* ***       |
|                                                 |
|     http://cadia.ru.is/workshops/aegap2018/     |
|                                                 |
|          Contact: aegap2018@gmail.com           |
|                                                 |
---------------------------------------------------

The Joint Workshop on Architectures and Evaluation for Generality, Autonomy
and Progress in AI (AEGAP) focuses on our field's original grand dream: the
creation of cognitive autonomous agents with general intelligence that
matches (or exceeds) that of humans. We want AI that understands its users
and their values so it can form beneficial and satisfying relationships
with them.

We welcome regular papers, short papers, demo papers about benchmarks or
tools, and position papers, and encourage discussions over a broad list of
topics. As AEGAP is the result of a merger between the Third Workshop on
Evaluating Generality and Progress in Artificial Intelligence (EGPAI), the
Second Workshop on Architectures for Generality & Autonomy (AGA) and the
First Workshop on General AI Architecture of Emergence and Autonomy (AAEA),
we are interested in submissions on both evaluation and architectures,
including (but not limited to) the following topics:

EVALUATION:

* Analysis, comparisons and proposals of AI/ML benchmarks and competitions.
  Lessons learnt.
* Theoretical or experimental accounts of the space of tasks, abilities and
  their dependencies.
* Tasks and methods for evaluating: transfer learning, cognitive growth,
  development, cumulative learning, structural self-modification and
  self-programming.
* Conceptualisations and definitions of generality or abstraction in AI/ML
  systems.
* Unified theories for evaluating intelligence and other cognitive
  abilities, independently of the kind of subject (humans, animals or
  machines): universal psychometrics.
* Evaluation of conversational bots, dialogue systems and personal
  assistants.
* Evaluation of common sense, reasoning, understanding, causal relations.
* Evaluation of multi-agent systems in competitive and cooperative
  scenarios, evaluation of teams, approaches from game theory.
* Better understanding of the characterisation of task requirements and
  difficulty (energy, time, trials needed...), beyond algorithmic
  complexity. Item generation. Item Response Theory (IRT).
* Evaluation of AI systems using generalised cognitive tests for humans.
  Computer models taking IQ tests. Psychometric AI.
* Assessment of replicability, reproducibility and openness in AI/ML
  systems.
* Evaluation methods for multiresolutional perception in AI systems and
  agents. Analysis of progress scenarios, AI progress forecasting,
  associated risks.

ARCHITECTURES:

* Analysis of requirements for autonomy and generality
* Design proposals for cognitive architectures targeting generality and/or
autonomy
* Complex layered networked systems and architectures
* Synergies between AI approaches
* Integration of top-down and bottom-up approaches (e.g. logic-based and
neural-inspired)
* Emergence of (symbolic) logic from neural networks
* New programming languages relevant to generality and autonomy
* New methodologies relevant to generality and autonomy
* New architectural principles relevant to generality and autonomy
* Complex (e.g. layered, hierarchical or recursive) network architectures
for generality and autonomy
* New theoretical insights relevant to generality and autonomy
* Motivation (intrinsic, extrinsic) for enabling autonomous behavior
selection and learning
* Analysis of the potential and limitations of existing approaches
* Methods to achieve general ((super)human-like) performance
* Methods for epigenetic development
* Baby machines and experience-based, continuous, online learning
* Seed-based programming and self-programming
* Education for systems with general intelligence and high levels of
autonomy
* Understanding and comprehension
* Reasoning and common-sense
* Acquisition of causal models
* Cumulative knowledge acquisition
* Curiosity, emotion and motivation for enabling autonomous behavior and
knowledge acquisition
* Meta-planning, reflection and self-improvement
* Principles of swarm intelligence for generality and autonomy

Papers should be between 2 and 12 pages (excluding references) and describe
the authors' original work in full (no extended abstracts). Papers will be
subjected to peer-review and can be accepted for oral presentation and/or
poster presentation. For papers that have previously been submitted to
IJCAI
and rejected, we ask authors to append the reviews and their responses to
aid
our review process.

Proposals for Demonstrations should be accompanied with a 2-page
description
for inclusion in the workshop's pre-proceedings. Examples include, but are
not limited to: (interactively) demonstrating new tests or benchmarks, or
the
performance of a robot, (cognitive) architecture or design methodology.

Oral presentations should be given by one of the authors during one of the
Contributed Talks Sessions. Posters and demonstrations will be presented
during the Demonstrations and Posters Session. Accepted papers will be
gathered into a volume of pre-proceedings and published on this website
before
the workshop. We are looking into the possibility of producing a special
issue
for an archival journal.

Submission should be made before the deadline on May 8th.

Please refer to our website for detailed information on how to submit a
paper: http://cadia.ru.is/workshops/aegap2018/

-------------
| Key Dates |
-------------

* Due date: May 8
* Notification date: May 29

----------------
| Organization |
----------------
* Satoshi Kurihara, University of Electro-Communications
* Kenji Doya, Okinawa Institute of Science and Technology
* Itsuki Noda, National Institute of Advanced Industrial Science and Technology
* Hiroaki Wagatsuma, Kyushu Institute of Technology
* Tadahiro Taniguchi, Ritsumeikan University
* Hiroshi Yamakawa, University of Tokyo & Dwango AI Lab
* Kristinn R. Thórisson, Reykjavik University & Icelandic Institute for Intelligent Machines
* Pei Wang, Temple University
* Claes Strannegård, Chalmers University of Technology & University of Gothenburg
* Antonio Chella, University of Palermo
* Lola Cañamero, University of Hertfordshire
* Jordi Bieger, Delft University of Technology & Reykjavik University
* José Hernández-Orallo, Technical University of Valencia
* Seán Ó hÉigeartaigh, Centre for the Study of Existential Risk, University of Cambridge
* Nader Chmait, Victoria University
* Fernando Martínez-Plumed, Technical University of Valencia
* Shahar Avin, Centre for the Study of Existential Risk, University of Cambridge
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-24 13:42
Steve,
I thought that you might be referring to something related to neural
networks, but I wasn't sure. As you realize, temporal learning does not
always produce insight into cause and effect. And reasons do not
necessarily refer to cause. Because they can detect or refer to a range of
possible relations, including very weak relationships, it is not easy to
effectively implement them. My interest is in building a language based
implementation of reason-based reasoning, but reasons do not have to be
expressed in language, just as temporal difference learning can detect
relations that are not language-based.

In order to get people to pay attention to your idea you really have to
show an ability to implement them in an effective way that should draw the
attention of some audience. Then you need to continue to promote them. I
am interested in your idea about differentiating the inputs and integrating
the outputs of an NN, and I recall you talking about this before. However,
I always thought about it as a more abstract principle. If I ever had the
time to learn about neural nets I would definitely try your idea out in a
neural net designed to detect many different kinds of features.
Jim

Jim Bromer

On Tue, Apr 24, 2018 at 7:16 AM, Jim Bromer <jimbromer@gmail.com> wrote:

> Steve,
> I thought that you might be referring to something related to neural
> networks, but I wasn't sure. As you realize, temporal learning does not
> alway
>
> Jim Bromer
>
> On Tue, Apr 24, 2018 at 3:10 AM, Steve Richfield <
> steve.richfield@gmail.com> wrote:
>
>> Jim,
>>
>> On 4:42PM, Tue, Apr 17, 2018 Jim Bromer <jimbromer@gmail.com> wrote:
>> >
>> > I am not sure what you mean by temporal learning.
>>
>> Temporal learning is the process of recognizing that things are related
>> when their precise timing suggests a cause-and-effect relationship, e.g.
>> recognizing that when it rains that sidewalks become wet.
>>
>> A few years ago I published a paper at WORLDCOMP, where I did the math
>> that showed that if you construct a NN but differentiate its inputs and
>> integrate its outputs - that it performs the same transformation. HOWEVER
>> it magically transforms ordinary learning into temporal learning WITHOUT
>> any of the heavy machinery others have proposed to implement temporal
>> learning. At one point in the math I was able to swap integral and
>> summation signs, which threw lots of people.
>>
>> I expected this to wake up the AGI world as being a trivial
>> implementation of a VERY powerful thing, but apparently no one even noticed.
>>
>> Steve
>> ==============
>> >
>> > Jim Bromer
>> >
>> > On Tue, Apr 17, 2018 at 5:53 PM, Steve Richfield <
>> steve.richfield@gmail.com> wrote:
>> >>
>> >> Isn't this what temporal learning is all about - determining what
>> causes what?
>> >>
>> >> Steve
>> >> ===========
>>
>>
>
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-24 13:16
Steve,
I thought that you might be referring to something related to neural
networks, but I wasn't sure. As you realize, temporal learning does not
alway

Jim Bromer

On Tue, Apr 24, 2018 at 3:10 AM, Steve Richfield <steve.richfield@gmail.com>
wrote:

> Jim,
>
> On 4:42PM, Tue, Apr 17, 2018 Jim Bromer <jimbromer@gmail.com> wrote:
> >
> > I am not sure what you mean by temporal learning.
>
> Temporal learning is the process of recognizing that things are related
> when their precise timing suggests a cause-and-effect relationship, e.g.
> recognizing that when it rains that sidewalks become wet.
>
> A few years ago I published a paper at WORLDCOMP, where I did the math
> that showed that if you construct a NN but differentiate its inputs and
> integrate its outputs - that it performs the same transformation. HOWEVER
> it magically transforms ordinary learning into temporal learning WITHOUT
> any of the heavy machinery others have proposed to implement temporal
> learning. At one point in the math I was able to swap integral and
> summation signs, which threw lots of people.
>
> I expected this to wake up the AGI world as being a trivial implementation
> of a VERY powerful thing, but apparently no one even noticed.
>
> Steve
> ==============
> >
> > Jim Bromer
> >
> > On Tue, Apr 17, 2018 at 5:53 PM, Steve Richfield <
> steve.richfield@gmail.com> wrote:
> >>
> >> Isn't this what temporal learning is all about - determining what
> causes what?
> >>
> >> Steve
> >> ===========
>
>
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-24 09:10
Jim,

On 4:42PM, Tue, Apr 17, 2018 Jim Bromer <jimbromer@gmail.com> wrote:
>
> I am not sure what you mean by temporal learning.

Temporal learning is the process of recognizing that things are related
when their precise timing suggests a cause-and-effect relationship, e.g.
recognizing that when it rains that sidewalks become wet.

A few years ago I published a paper at WORLDCOMP, where I did the math that
showed that if you construct a NN but differentiate its inputs and
integrate its outputs - that it performs the same transformation. HOWEVER
it magically transforms ordinary learning into temporal learning WITHOUT
any of the heavy machinery others have proposed to implement temporal
learning. At one point in the math I was able to swap integral and
summation signs, which threw lots of people.
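In sketch form (a hypothetical numpy illustration of the idea, not the WORLDCOMP paper's actual formulation; it is restricted to a single linear map, the case where swapping the summation and the difference is exact):

```python
import numpy as np

# Hypothetical sketch: a purely linear "network" W, so the claim that
# differentiating inputs and integrating outputs leaves the transformation
# unchanged can be checked exactly (the cumulative sum telescopes).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))        # maps 4-dim inputs to 3-dim outputs
x = rng.normal(size=(10, 4))       # a sequence of 10 input frames

# Ordinary setup: apply the network frame by frame.
y_plain = x @ W.T

# Temporal variant: feed input differences, cumulatively sum the outputs.
dx = np.diff(x, axis=0, prepend=np.zeros((1, 4)))   # x(t) - x(t-1), x(-1) = 0
y_temporal = np.cumsum(dx @ W.T, axis=0)

# Same overall transformation, but the network now sees precisely-timed
# changes in its inputs rather than raw values.
print(np.allclose(y_plain, y_temporal))  # True
```

For a nonlinear network the two variants are no longer identical, which is presumably where the interesting temporal behavior would come from.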

I expected this to wake up the AGI world as being a trivial implementation
of a VERY powerful thing, but apparently no one even noticed.

Steve
==============
>
> Jim Bromer
>
> On Tue, Apr 17, 2018 at 5:53 PM, Steve Richfield <
steve.richfield@gmail.com> wrote:
>>
>> Isn't this what temporal learning is all about - determining what causes
what?
>>
>> Steve
>> ===========
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Sat, 2018-04-21 21:39
I just wanted to mention that a reason-based-reasoning system could use
-and indicate- that some data objects were related by co-occurrence. It
could also use relations and indicate that relations were acquired from
some 'teacher'. So reason-based-reasoning might have learned that a wet
sidewalk could indicate that it was raining or had recently been raining
without knowing why it does. Or it might just explain that someone had told
it about that relation. At this point many people would criticize the
argument that it had some sort of understanding. This kind of criticism can
also be directed at any other current AI product. If a
reason-based-reasoning system was pushed to come up with a possible
explanation of why a wet sidewalk may indicate rain, it might come up with a
fundamental guess, like, sidewalks and rain are related. That kind of
reasoning seems odd but no more odd than referring to a co-occurrence of
events which might be derived with other AI methods. Saying that sidewalks
and rain might be related seems like a pretty poor insight although we can
understand how that derivation could be produced given the analysis that it
was asked to make. A more insightful relationship might be described as, a
wet sidewalk and rainfall are related. Here, an example of a failure of
reasoning can be patched up with a little refinement. So, out of the blue,
a statement that sidewalks are related to rain, sounds like nonsense. But
the statement that wet sidewalks can be related to rainfall, is more
insightful.

My argument is that reason-based-reasoning, with some guessing and
conceptual exploration can be used as a basis for stronger AI and the
possibilities are only limited by the problems of scalability.
Jim Bromer

On Tue, Apr 17, 2018 at 7:42 PM, Jim Bromer <jimbromer@gmail.com> wrote:

> I am not sure what you mean by temporal learning.
>
> Jim Bromer
>
> On Tue, Apr 17, 2018 at 5:53 PM, Steve Richfield <
> steve.richfield@gmail.com> wrote:
>
>> Isn't this what temporal learning is all about - determining what causes
>> what?
>>
>> Steve
>> ===========
>>
>>
>
Categories: Discussions

Re: [agi] On Memory & Machines

AGI discussions @AGIRI - Fri, 2018-04-20 08:13
Hi Shashank

A very interesting article. Something to ponder.

Thanks

Rob


________________________________
From: Shashank Yadav <shashank@asatae.foundation>
Sent: 19 April 2018 10:34 AM
To: AGI
Subject: [agi] On Memory & Machines

Hi

Just sharing a small post on the subject of memory from an AI design perspective:

http://muskdeer.blogspot.com/2018/04/on-memory-machines.html
On Memory & Machines <http://muskdeer.blogspot.com/2018/04/on-memory-machines.html>
muskdeer.blogspot.com
Mathematicians argue that it is a mathematical fact, that often the most rational way of making a decision is to simply flip a coin. No wo...



Share along if you can. Very much appreciate your constructive criticism.

-
thank you,

Categories: Discussions

On Memory & Machines

AGI discussions @AGIRI - Thu, 2018-04-19 10:34
Hi



Just sharing a small post on the subject of memory from an AI design perspective:



http://muskdeer.blogspot.com/2018/04/on-memory-machines.html



Share along if you can. Very much appreciate your constructive criticism.



-

thank you,

Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Wed, 2018-04-18 01:42
I am not sure what you mean by temporal learning.

Jim Bromer

On Tue, Apr 17, 2018 at 5:53 PM, Steve Richfield <steve.richfield@gmail.com>
wrote:

> Isn't this what temporal learning is all about - determining what causes
> what?
>
> Steve
> ===========
>
>
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-17 23:54
Isn't this what temporal learning is all about - determining what causes
what?

Steve
===========
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-17 21:27
The argument for reason-based-reasoning is based on numerous insights about
(natural) thinking. The fact that it has not been as effective in Computer
Intelligence as it was expected to be 60 years ago does not in itself prove
that it is an obsolete idea. The question is: how could the theory be made
plausible for computer intelligence in the immediate future?
Reason-Based-Reasoning would be based on discrete methods, weighted methods
and a network of relations. It would not be logical in the strict sense
although there would be a logic to it. The first questions are: How might a
'reason' be latched onto a subject matter? How could some unreasonable
reasons be rejected (as being unreasonable)? How can reasons be made
malleable for particular situations that have not been previously explored
but are related to situations that have been? I do not think these are
difficult problems. The final complication is how can the method be adapted
for dealing with huge amounts of information.
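The "discrete methods, weighted methods and a network of relations" might be pictured as follows. This is a hypothetical Python sketch; the class, the weights and the example facts are invented for illustration and are not a design proposed in this thread:

```python
from collections import defaultdict

class ReasonNet:
    """A toy weighted network of relations between subjects and reasons."""

    def __init__(self):
        # relations[subject] -> list of (reason, weight, source)
        self.relations = defaultdict(list)

    def add(self, subject, reason, weight, source="observation"):
        """Latch a reason onto a subject, with a strength and a provenance tag."""
        self.relations[subject].append((reason, weight, source))

    def explain(self, subject, min_weight=0.5):
        """Return plausible reasons, rejecting weakly supported ones."""
        return [(r, w, s) for (r, w, s) in self.relations[subject]
                if w >= min_weight]

net = ReasonNet()
net.add("wet sidewalk", "it rained recently", 0.9)
net.add("wet sidewalk", "a sprinkler ran", 0.6, source="teacher")
net.add("wet sidewalk", "sidewalks attract water", 0.1)  # a weak guess

# The 0.1-weight guess is rejected as unreasonable; the 'teacher'-sourced
# relation survives with its provenance attached.
print(net.explain("wet sidewalk"))
```

The provenance tag corresponds to the earlier point that a relation might come from observed co-occurrence or from a 'teacher'; scaling such a network to huge amounts of information is, as the post says, the real complication.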

Jim Bromer

On Tue, Apr 17, 2018 at 12:32 AM, Nanograte Knowledge Technologies <
nanogr8@live.com> wrote:

> Jim Bromer
>
> Criticism comes in many forms, but rendering a different view of what is
> accepted as being the Holy Grail of reasoning, does not compute to a case
> of - not understanding. Agreed, dumbing things down for some people has its
> communicative benefits, but people do tend to understand differently. This
> is most true when it comes to non-science, or inexact sciences. Of course
> your reasoning is plausible, but not to me the implications of the
> absolutist argument you presented, extending the result to a probable rule
> for reason-based reasoning, inter alia, logic. Here, I'm raising a flag
> against the fallacy of conducting an argument from authority.
>
> Reasoning only stretches as far as a mind can know. Logic dictates. Brian
> Greene in 'The Elegant Universe', asserted how the next 40 years would
> probably see the emergence of a new science. He based this assertion on the
> limits encountered by classical science, within quantum physics. We live in
> unconventional times. We need to develop unconventional reasoning for it.
> Unconventional reasoning isn't always convenient, easy, pretty, or studded
> with classical words. It has many layers to it, maybe even as many as 16.
> What unconventional reasoning (by any name) does, is challenge the
> conventional.
>
> I'm still not convinced how, on probability, the findings of the
> experiment can be reasonably generalized to a rule of logic in a valid and
> reliable manner.
>
> Robert Benjamin
>
>
Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-17 06:32
Jim Bromer

Criticism comes in many forms, but rendering a different view of what is accepted as being the Holy Grail of reasoning, does not compute to a case of - not understanding. Agreed, dumbing things down for some people has its communicative benefits, but people do tend to understand differently. This is most true when it comes to non-science, or inexact sciences. Of course your reasoning is plausible, but not to me the implications of the absolutist argument you presented, extending the result to a probable rule for reason-based reasoning, inter alia, logic. Here, I'm raising a flag against the fallacy of conducting an argument from authority.

Reasoning only stretches as far as a mind can know. Logic dictates. Brian Greene in 'The Elegant Universe', asserted how the next 40 years would probably see the emergence of a new science. He based this assertion on the limits encountered by classical science, within quantum physics. We live in unconventional times. We need to develop unconventional reasoning for it. Unconventional reasoning isn't always convenient, easy, pretty, or studded with classical words. It has many layers to it, maybe even as many as 16. What unconventional reasoning (by any name) does, is challenge the conventional.

I'm still not convinced how, on probability, the findings of the experiment can be reasonably generalized to a rule of logic in a valid and reliable manner.

Robert Benjamin

Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Tue, 2018-04-17 00:27
Of course I meant, I do not understand why you would say the conclusion is
not plausible.

This idea, that by changing the population a shift in the probability of
the predicted outcome would occur, really shows that you do not understand
the liquidity or variation of abstraction. The idea that the statistical
result would not be changed by reason-based reasoning goes against the
observation of natural intelligence of human existence. The attempt to
force all interpretation of the mechanisms of intelligence to their most
insipid variations of co-occurrence does not make any sense except that in
not knowing how the mind actually works one might understandably grasp at
whatever is within reach at the moment.

As I tried to understand what your reasons for making your criticisms were,
I did come up with alternative explanations that might explain why more
people would be compliant when the simplest of reasons were given. For
example, not knowing why someone else would be in a hurry, you would
naturally start thinking of the reasons that you had been in a hurry and
you might become more sympathetic to the other person's request to use the
copier first. So this explanation would show that in understanding a
simplistic reason you might imagine what would motivate you the most in
those circumstances and thereby supply the requester with a good reason
(even if it was unspoken and even if you were only semi-conscious of the
possible reasons your mind was creating.) So reasons have to latch onto
something, but even if they only trigger an imaginative response they can
still motivate a listener.

But this alternative explanation still shows that reason-based reasoning is
plausible.

Jim Bromer

On Mon, Apr 16, 2018 at 2:17 PM, Jim Bromer <jimbromer@gmail.com> wrote:

> My conclusion is plausible but not strongly proven. I really do not
> understand why you would say that the conclusion is plausible. Perhaps you
> have a conclusion in mind that I did not intend but that seems unlikely.
> Maybe you are saying that the particular numerical proportion 50/90 is not
> plausible for every sort of understandable request across different
> populations. (who could understand and would have a reason to respond to
> the request.) Your criticism does not make any sense to me.

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Mon, 2018-04-16 20:17
My conclusion is plausible but not strongly proven. I really do not
understand why you would say that the conclusion is not plausible. Perhaps
you have a conclusion in mind that I did not intend, but that seems unlikely.
Maybe you are saying that the particular numerical proportions, 50% and 90%,
are not plausible for every sort of understandable request across different
populations (that is, populations who could understand and would have a
reason to respond to the request). Your criticism does not make sense to me.
Obviously, if you change the population to people who do not use copiers in
their work, the particular copier experiment would be nearly meaningless.

Reason-based reasoning is in some ways similar to logic but in other ways
very different. "Y because of X" is similar to an implication in some ways
and different in others. As I make the effort to understand your criticism,
I have to start by trying out different explanations (or reasons). If I
understood your reasons, I would be more likely to figure out what exactly
you are talking about.
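The similarity and difference between "Y because of X" and a logical implication can be made concrete. A minimal sketch, with hypothetical names and an invented strength value:

```python
# Material implication X -> Y is purely truth-functional: it is vacuously
# true whenever X is false, and it says nothing about X explaining Y.
def implies(x: bool, y: bool) -> bool:
    return (not x) or y

# "Y because of X" asserts something different: X is *offered* as an
# explanation for Y. Model it here as an explanatory edge with a strength,
# which can be weak without making the relation disappear.
def because(claim: str, reason: str, strength: float) -> dict:
    return {"claim": claim, "reason": reason, "strength": strength}

print(implies(False, True))   # True: vacuously satisfied, no explanation involved
print(implies(True, False))   # False: the only falsifying case
weak = because("let me go first", "I'm in a hurry", strength=0.2)
print(weak)                   # a weak reason still frames a relation
```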

Jim Bromer

Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Mon, 2018-04-16 19:18
My point is that the mean statistical result would NOT be affected by reason-based reasoning. For example, by simply changing the POPULATION, a swing in the probability of the predicted outcomes seems highly likely. As such, it may be a logical error to conclude statistically that all people across the world would be influenced, on average, in nearly the same way by the same logic as that being presented in this case.

By implication, the first part of the question is based on chance alone, as in flipping a coin - a 50/50 proposition - no reason given at all, and none needed either. Zero reason-based reasoning in evidence. According to the logic of chance, eventually the result would be almost the same.

The conclusion (still) is not plausible, merely convenient to the case. I do take the point, but not the argument for it. Therefore, the logic should be adapted to suit the outcome.
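The population point can be illustrated with a small simulation; the base rates and population labels below are invented for the example, not taken from any study:

```python
import random

def simulate_consent(base_rate: float, n: int, seed: int = 0) -> float:
    """Fraction of n simulated respondents who consent to the request."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return sum(rng.random() < base_rate for _ in range(n)) / n

# Two hypothetical populations facing the same worded request: the
# measured mean tracks the population's disposition, not the wording alone.
print(simulate_consent(0.90, 10_000))  # e.g. office workers
print(simulate_consent(0.55, 10_000))  # e.g. people who never use copiers
```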


Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Mon, 2018-04-16 02:13
The point is that reason-based reasoning has influence over people even if
the given reasons do not seem profoundly motivational. "Please let me use
your phone because someone needs an ambulance around the corner," would
seem to be more motivational than, "Let me use your phone because I want to
make a call." No one ever said that 50% of people would consent to any
proposition. Your counterexamples are ok for framing boundaries on the
statistical nature of likely responses to (reason-based) questions, but it
is otherwise specious to insist that they be taken as definitive
counterfactuals.

-----THE REASON I say this is because I assume that most people would react
to more aggressive or hostile requests in less positive ways, but the
conclusion is that if you give a reason for your request there is going to
be a statistical difference in the responses given (i.e. the request will be
STATISTICALLY taken as being more persuasive in some way as MEASURED ACROSS
some POPULATION) regardless of the merit of the reason (or how delicately
the reason is given).
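Whether such a difference is "statistical" across a population is exactly what a standard two-proportion z-test measures. A sketch with hypothetical counts, not the actual experiment's data:

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 30/60 consent with no reason, 56/60 with a weak reason.
z = two_proportion_z(56, 60, 30, 60)
print(round(z, 2))  # ~5.27, well beyond 1.96, so significant at the 5% level
```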

FOR EXAMPLE, I would not have spent the time to write this response out if
you had just said, "I think the conclusion is not plausible, merely
convenient."

Jim Bromer

Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Mon, 2018-04-16 01:48
There could be tens of different factors influencing the outcome of this question. I think the conclusion is not plausible, merely convenient.

Said the machine to the humans in the busy office: "I need to now pause the oxygen for 7.54 min intervals, because the measured moisture content is highly corrosive to my circuits. Okay?"


Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Sun, 2018-04-15 23:48
I should not have said poor reasons but maybe something more like
superficial reasons. Wanting to go first because you are in a hurry is
reasonable but it is only a little better than saying, "Is it ok if I go
first because I want to go first."
Even a completely irrelevant reason, like "Is it ok if I go first
because Spain is far away," could lead someone to consider the
possibility that there is some kind of connection between the distance to
Spain and the wish to go first. This is an extremely important method of
framing a relation, although it is dependent on language, and if a computer
program could understand language then there would be a number of different
ways to frame a relation.
But the idea that a simple, trivial or seemingly obvious reason might
actually have that much influence over people shows that reason-based
reasoning must be a powerful force in intelligence.
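That framing step can be sketched as a toy parse, splitting an utterance at the explanatory connective; this is an illustration only, not a claim about how language understanding actually works:

```python
def frame_relation(utterance: str):
    """Split 'Y because X' into a posited (claim, reason) relation.

    Returns None when no explanatory connective is present.
    """
    marker = " because "
    if marker not in utterance:
        return None
    claim, reason = utterance.split(marker, 1)
    # The listener is invited to search for a connection between the two
    # parts, even when the stated reason looks irrelevant.
    return {"claim": claim.strip(), "reason": reason.strip()}

print(frame_relation("Is it ok if I go first because Spain is far away"))
print(frame_relation("Is it ok if I go first"))  # None: no relation framed
```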

Jim Bromer

Categories: Discussions

Re: [agi] Reason-Based Reasoning

AGI discussions @AGIRI - Sun, 2018-04-15 20:30
Is it okay if I bomb your country? 50% approval?
Is it okay if I bomb your country, because you used chemical weapons? 90%
approval?

On Sun, Apr 15, 2018 at 10:26 AM, Jim Bromer <jimbromer@gmail.com> wrote:

> I listened to a book discussion on C-Span yesterday. Leonard Mlodinow
> and Deepak Chopra. Chopra mentioned a psychological experiment that
> was run with grad students who were stationed near an office copier.
> Whenever someone came to use the copying machine the grad students
> would ask if they could use the copier first to copy 5 pages. (That is
> my recollection.) When they asked without supplying a reason they got
> the consent of the other person around 50% of the time. But when they
> asked if they could use the copier first for 5 pages and then added a
> reason, the proportion of consents jumped up to more than 90%. This
> was true even if the reason given was weak. "I have to copy 5 pages;
> is it ok if I use the copier first because I'm in a hurry?"
> Jim Bromer
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> Modify Your Subscription: https://www.listbox.com/
> member/?&
> Powered by Listbox: http://www.listbox.com
>
Categories: Discussions