Feed aggregator

Re: [agi] How to build a consciousness detector.

AGI discussions @AGIRI - Fri, 2017-12-15 02:38
On Thu, Dec 14, 2017 at 9:56 AM, Alan Grimes <ALONZOTG@verizon.net> wrote:
> Step 1: gotta look good so make a nice wood and black plexi-glass
> housing. Old school industrial design is best, so throw in a CRT and a
> few bakelite knobs and toggle switches (always classic...)
>
> Step 2: get a good wire harness to the test subject -- a bit
> problematic for filthy humans, but gator clips should work on the robots.
>
> Step 3: route the feedBACK signal to the screen, then, in circuitry,
> subtract out the feedFORWARD signal; when the voltage from this stage
> gets higher than some threshold epsilon, overlay a red error indicator on the
> screen.
>
> Step 4: you can produce a final verdict using a simple meter labelled 0
> to 100%. The circuit can be dead simple, start with a fixed voltage,
> then a R-C integrator, and a meter. Whenever an error signal is
> triggered by step 3, short the RC circuit to ground. The machine should
> actually be constructed with the simplest possible circuit so that there
> is as little question as possible with regards to what it is measuring
> and how it is measuring it.
>
> Step 5: test out the device on your pet rock. Since it does not generate
> a feedBACK signal, it will have a 100% error signal, which will hold the
> voltage at the meter to ground, representing 0 consciousness.
>
> A democrat would probably register around 60% on this circuit, a
> conservative would score maybe 80-85%, a highly advanced AI would
> probably be near 98%...
>
>
> I realize that the plastic-faced chatbot is a great publicity stunt, but
> at some point we should get serious and run this machine on it and show
> that it really isn't conscious in any meaningful way. =|
>
>
> I really feel I'm not being taken seriously at all so my presentation is
> a bit silly today but I'm being absolutely serious about this. =|
>
> --
> Please report bounces from this address to atg@numentics.com
>
> Powers are not rights.
>
>

My electrical engineer friend said this sounds like something that
would have been tried in the 1950s. Try it and see what happens?
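
For anyone who wants to try it in software first, here is a minimal numerical sketch (Python) of the Step 3/4/5 behaviour described above; the threshold, RC time constant, and the test signals are illustrative assumptions, not values from the post:

import math
import random

def meter_reading(feedforward, feedback, epsilon=0.05, v_fixed=1.0,
                  rc_tau=50.0, dt=1.0):
    # Step 3: subtract the feedFORWARD signal from the feedBACK signal and
    # flag an error whenever the difference exceeds the threshold epsilon.
    # Step 4: a first-order RC node charges toward a fixed voltage and is
    # shorted to ground whenever an error is flagged; the meter reads the
    # final node voltage as a percentage.
    v = 0.0
    for ff, fb in zip(feedforward, feedback):
        if abs(fb - ff) > epsilon:
            v = 0.0                              # error: short the RC node to ground
        else:
            v += (v_fixed - v) * (dt / rc_tau)   # RC charging step
    return 100.0 * v / v_fixed                   # meter labelled 0 to 100%

# Step 5-style check with made-up signals: a pet rock returns no feedBACK,
# while a responsive subject echoes the feedFORWARD signal with a little noise.
feedforward = [math.sin(0.05 * i) for i in range(1000)]
rock = [0.0] * len(feedforward)
subject = [s + random.gauss(0.0, 0.01) for s in feedforward]

print("pet rock  :", round(meter_reading(feedforward, rock), 1), "%")
print("responsive:", round(meter_reading(feedforward, subject), 1), "%")
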
Categories: Discussions

RE: [agi] How to build a consciousness detector.

AGI discussions @AGIRI - Fri, 2017-12-15 02:26
"detect"

The word implies a consciousness.

A difference between two meanings... translated => information leakage, via a protocol, so it's a duplex/reflexive systems interaction...

Can a non-conscious entity detect? Sure, you have some electric circuits interacting; there has to be an increase in something, integrated information perhaps? I don't know; I have to pre-emptively disagree as an exercise.

Consciousness electrically measured at less than 0.000001 hypothetical units disqualifies a singular item as conscious :)

John

> -----Original Message-----
> From: Alan Grimes [mailto:ALONZOTG@verizon.net]
> Subject: [agi] How to build a consciousness detector.
>
>
> Step 3: route the feedBACK signal to the screen, then, in circuitry, subtract out
> the feedFORWARD signal; when the voltage from this stage gets higher than
> some threshold epsilon, overlay a red error indicator on the screen.
>
> Step 4: you can produce a final verdict using a simple meter labelled 0 to 100%.
> The circuit can be dead simple, start with a fixed voltage, then a R-C integrator,
> and a meter. Whenever an error signal is triggered by step 3, short the RC
> circuit to ground. The machine should actually be constructed with the
> simplest possible circuit so that there is as little question as possible with
> regards to what it is measuring and how it is measuring it.
>
> Step 5: test out the device on your pet rock. Since it does not generate a
> feedBACK signal, it will have a 100% error signal, which will hold the voltage at
> the meter to ground, representing 0 consciousness.
>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Thu, 2017-12-14 22:23
I think the idea of Satisfiability makes sense for AI. For example a
program might try to see if certain terms/concepts can play a role with the
other terms/concepts it is adjoined with. This has to be fit to the context
of the situation. If previously learned structures are expressed with
different levels of generalization then they could serve as indexes into
more specific structures. For example, if there is an argument about
religious belief, the point that all knowledge must be belief can seem
insightful or annoyingly irrelevant. You need to rely on reasons in order
to discover if this idea is being used reasonably in some particular
context. If science delivers definitions of religion, to continue with this
line of reasoning, at what point does it take on the aspects of a religion?
I am taking the concepts of 'science' and 'religion' outside their usual
boundaries and you will not be able to 'get' what I am saying if you are
unable to do that for some reason. That is what is wrong with absolute
structural roles. To continue with this example, the effort to define
science as an ontology of reality based on repeatable experiment leads to
either the absurdity that there are no unique events that can occur in
reality or that there are some events that are beyond scientific expression.
People who cannot deal with comments like this sometimes look emotional or
'political', but I have seen so many people who repeatedly express
themselves with absolute exaggerations that I have to wonder if they suffer
more from a lack of essential cognitive flexibility than from a strong opinion
about some particular subject that provokes them.
We can use our imagination to transcend the 'ontologies' that we have
acquired. When this is done with some kind of empirical reasoning this can
produce greater insight about the real world. So our ontologies have to
have a great deal of flexibility of imagination that can be grounded with
reason.


Jim Bromer

On Wed, Dec 13, 2017 at 3:22 PM, Jim Bromer <jimbromer@gmail.com> wrote:

> "Tom believes that Mary wants to marry a sailor."
> I didn't realize that Tom even knew Mary. Does Tom know Mary?
> Is Mary engaged?
> Is Tom jealous?
> Does Mary have a particular sailor in mind?
> If so, do we know the sailor?
> What navy is the sailor in?
> What kind of ship does the sailor work in? Is it a sailboat rather
> than a ship?
> Who's Tom? Who's Mary?
> Why use the term "believes"? Why not say something like, "Tom told me that
> Mary wants to marry a sailor."?
> Where did this sentence come from? Am I writing this sentence to the group
> for some reason? Does David know Tom or Mary?
> Are they actual people? Is this sentence being used in a play of some
> kind?
> Assuming that it is being used to convey information of some kind, what is
> the principal reason for writing the sentence down?
> I could go on.
>
> Even assuming that your ontology could potentially handle most relevant
> questions, the fact that there are so many other ways to convey the same
> information, or information that was substantially similar, means that the
> relevancy problem would still probably be too severe.
>
> I am not saying that some sort of ontological system of relations is not
> needed. But the system has to be able to ignore possible relations and find
> the most appropriate relations in order to effectively 'understand' the
> statement. (And this is the relevancy problem as it, the story goes, first
> appeared.)
>
> I have been working on Logical Satisfiability, and even though I have not
> found a time-effective solution for those kinds of problems, I am starting
> to wonder if AI Conceptual Satisfiability might be more feasible if we can
> use less detailed (or partially defined) generalizations as indexes. There
> is one concern about the whole indexing scheme. It increases the total
> number of possible relations that have to be considered. But an assumption
> of an indexing system is that it would be used in a way that tries to limit
> the number of ways to interpret an expression until a -good- interpretation
> is found. Since it is reasonable to assume that such a system would be
> imperfect it also needs to have pathways that can effectively handle dead
> ends without needing to retrace and start over again.
>
> Since I do not have a reliably effective way to handle Logical
> Satisfiability, and since an AI Conceptual system would need a lot of
> indexing, and since human reasoning is not perfectly logical, it seems
> reasonable to consider a generalization indexing system that contains (or
> points to) the possible relations that are needed but is not always
> perfectly integrable into that system. (The indexing can be used to locate
> possibly relevant relations but the indexing, and indeed, any concept, is
> not necessarily going to be at the same 'resolution' (so to speak) as other
> concepts or systems of concepts that may be related to it.)
>
> Concepts and indexes (or the parts of concepts and indexes) will not be
> confinable to specified ontological roles. There are lots of exceptions of
> course, but the thing is, they are exceptions more than rules. We can
> change the idea of a 'rule' but that also implies that the issue is not
> easily pliant without creating some additional complications.
>
> Jim Bromer
>
> On Wed, Dec 13, 2017 at 10:45 AM, David Whitten <whitten@netcom.com>
> wrote:
>
>> This discussion has seemed to lag.
>> Perhaps we could discuss some things basic to this idea.
>>
>> If we have a rule
>> for any person who is a mother
>> there is another person
>> where the first person is the mother-of the second person
>>
>> or in logic:
>>
>> (for-all ((?M PERSON))
>>   (if (mother? ?M)
>>       (there-exists ((?C PERSON))
>>         (if (different? ?M ?C)
>>             (mother-of ?M ?C)))))
>>
>> This way of writing logic makes some assumptions.
>>
>> so this rule as written only applies to elements of the Collection named
>> Person
>> who also satisfy the function "mother?" .
>> Typically you use a question mark (?) at the beginning of a variable name
>> to show that this name isn't a constant.
>> If you use a question mark as the last character in a name,
>> this means that it is a function that returns TRUE or FALSE
>> Some people like using hyphen-P as their marker that the name is a
>> Boolean function, like mother-p or mother-P.
>> The "-p" at the end marks the name as one that embodies a question-function
>> that can be used in an "if" statement.
>>
>> note that the function "different?" can take more than two arguments.
>> In this case using the function "different" is required because
>> the semantics of mother-hood requires no-one can be their own mother,
>> and because the Controlled Natural Language (CNL) for the rule
>> describes the second person as "another person"
>>
>> The argument in the for-all connective and there-exists connective
>> could take a list of variables like (?A ?B ?C).
>> Since we require that the variable ?M be referencing a Person,
>> we are replacing the simple variable name ?A with a list (?A Person).
>> This means that the variable ?A has to satisfy the "type" information
>> of "Person" or in other words, the value of the variable ?A must be an
>> element of the Collection named "Person"
>>
>> The predicate "mother-of" ends with hyphen-of to say this is a
>> relationship
>> between the individual referenced by ?M and the individual referenced by
>> ?C
>> some people have the convention that when a relationship is being asserted
>> as this one is, (it is the consequent of the "if") then the name should
>> end with
>> an exclamation point "!" i.e. (mother! ?M ?C) where the English
>> sentence might
>> be : ?M is the mother of ?C
>>
>> Note this logic does NOT say that the individual ?M is the only mother of
>> the
>> individual ?C nor does it say that the individual ?C has only one mother.
>> This is to accommodate the reality that a person might have a genetic mother
>> who donated an egg to a different biological (surrogate) mother, and might then
>> be raised by a third person who is their social mother.
>> Of course the most common situation is that those three roles are met by
>> the
>> same person.
>>
>> Another thing this logical statement does not say is that the individual
>> ?M
>> is the mother to only one individual ?C. The "there-exists" connective
>> says
>> that there must be at least one individual where the condition is true.
>> The
>> only thing that is required is that the individual ?C must fit all the
>> criteria
>> in its body. If there is more than one such individual, then each
>> separate
>> one must fit the criteria ("fit the criteria" is also called "satisfy the
>> condition")
>>
>> Now as you might guess, all these words "different?" "mother-of"
>> "mother?"
>> and "PERSON" must also be linked together in rules that are specific about
>> what they mean and what they don't mean. The full group of all the rules
>> makes up the "ontology".
>>
>> There is a shorthand for describing the words in an ontology, called a
>> "structural ontology", which allows a person to describe the words
>> of an ontology without re-defining all the basic facts that are known
>> about them, such as which ones take only one argument or which ones are
>> Categories or collections.
>>
>> Does this make sense to you ?
>> I am working on a slide show presentation about all of this, and would
>> like to know if the way I describe it can be used in the slides.
>>
>> David Whitten
>> 713-870-3834
>>
>>
>>
>> On Sat, Dec 9, 2017 at 1:22 PM, David Whitten <whitten@netcom.com> wrote:
>>
>>> On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>
>>>> An AI program might keep track of time it takes to make different kinds
>>>> of searches, but that involves additional complications.
>>>>
>>>
>>> Logging just the time is not sufficient in my mind. Recognizing when two
>>> searches are coordinated or touch upon the same issues is part of the value
>>> of using an AI. Registering these in some form that consolidates searches
>>> for one with a search for the other will increase the effectiveness of the
>>> system.
>>>
>>>
>>>> I think the relevancy problem is a problem. If a good AI program was
>>>> able to narrow in on relevant knowledge then it could be expected to come up
>>>> with good responses that could be improved on over time.
>>>>
>>>
>>> I totally agree. Do you think that we could recognize relevant
>>> knowledge by creating some form of a computer thesaurus which explains why
>>> one word is the same as another one, and why they are different ? Maybe
>>> what makes two words different might be what is common with a different
>>> group of words. If we look for these common ways to make distinctions we
>>> could give the AGI a good resource that would enhance its intelligence.
>>>
>>>
>>>> But because a program has to work from knowledge it already has as well
>>>> as working with the input at the moment, it has a greater likelihood of
>>>> going down irrelevant paths.
>>>>
>>>
>>> what makes the paths irrelevant is that the paths look relevant or at
>>> least possible. The more we mark the paths with distinctions the more
>>> likely we will see that they are irrelevant.
>>>
>>>
>>>> One method of dealing with this problem is to use multiple indexing
>>>> routes that are capable of utilizing cross-categorization.
>>>>
>>>> I agree. I think you are saying the same thing as me.
>>> Cross-categorizing means categorizing by several different ways so we know
>>> when things are in common and when they are not. Multiple indexing routes
>>> means treating the routes as significant in and of themselves so that we
>>> recognize when two routes are the same, and can use common methods to the
>>> routes to process them similarly.
>>>
>>>
>>>
>>>> Jim Bromer
>>>>
>>>>
>>> Thank you for your insight.
>>>
>>> David Whitten
>>>
>>> On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>
>>>> An AI program might keep track of time it takes to make different kinds
>>>> of searches, but that involves additional complications. I think the
>>>> relevancy problem is a problem. If a good AI program was able to narrow in
>>>> on relevant knowledge then it could be expected to come up with good
>>>> responses that could be improved on over time. But because a program has to
>>>> work from knowledge it already has as well as working with the input at the
>>>> moment, it has a greater likelihood of going down irrelevant paths. One
>>>> method of dealing with this problem is to use multiple indexing routes that
>>>> are capable of utilizing cross-categorization.
>>>>
>>>> Jim Bromer
>>>>
>>>> On Fri, Dec 8, 2017 at 7:56 AM, David Whitten <whitten@netcom.com>
>>>> wrote:
>>>>
>>>>> Exactly Jim.
>>>>>
>>>>> I wonder if there is some way to pre-calculate or trade time for space
>>>>> when dealing with this kind of search or problem.
>>>>>
>>>>> Any time you want the 'best' or 'shortest' or 'fastest' or 'one
>>>>> solution above all'
>>>>> you are stuck with this exponential 'you have to look at all of them'
>>>>> problem.
>>>>>
>>>>> The best is sometimes the first one you find, but you have to look at
>>>>> all the others
>>>>> just to make sure that there isn't a better one.
>>>>>
>>>>> A lot of times when you speed things up, you make a trade-off between
>>>>> space and time.
>>>>> For these kinds of problems, both time and space are exponentially
>>>>> growing.
>>>>>
>>>>> On Fri, Dec 8, 2017 at 7:49 AM, Jim Bromer <jimbromer@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I meant, when you write the entire series out as a table summations
>>>>>> of the counting numbers from 1 to n, the length of the table displays an
>>>>>> exponential rate of growth.
>>>>>>
>>>>>> Jim Bromer
>>>>>>
>>>>>> On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in
>>>>>>> value or length of the line. However, when you write the entire series out,
>>>>>>> 1
>>>>>>> 1+2=3
>>>>>>> 1+2+3=6
>>>>>>> 1+2+3+4=10
>>>>>>> 1+2+3+4+5=15
>>>>>>>
>>>>>>> The length of writing the entire table displays an exponential rate
>>>>>>> of growth.
>>>>>>> 1,5,13,23,35,....
>>>>>>> and that is ignoring the line returns.
>>>>>>> The difference can seem subtle when you are writing a program and
>>>>>>> are not aware of the issue. I think most combinatorial explosions in AI
>>>>>>> seem to occur in searches, where the program has to search for the best
>>>>>>> response or for the correct response. In Logical Satisfiability the problem
>>>>>>> occurs in making calculations.
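
For reference, a small Python sketch that reproduces the table quoted above and tallies its character counts, with line returns ignored as stated:

def table_lines(n_max):
    # Reproduce the table "1", "1+2=3", "1+2+3=6", ... up to n_max terms.
    for n in range(1, n_max + 1):
        if n == 1:
            yield "1"
        else:
            terms = "+".join(str(k) for k in range(1, n + 1))
            yield terms + "=" + str(n * (n + 1) // 2)

total = 0
for line in table_lines(5):
    total += len(line)                 # line returns ignored
    print(f"{line:<14} length {len(line):>2}  running total {total}")
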
>>>>>>>
>>>>>>>
>>>>>>> Jim Bromer
>>>>>>>
>>>>>>> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> The issue with combinatorial explosion is a real one. This is also
>>>>>>>>> a real thing in humans.
>>>>>>>>> If you look at how many connection lines you get with a fully
>>>>>>>>> connected set of points, then the explosion is obvious:
>>>>>>>>>
>>>>>>>>> 1 point = 0 lines
>>>>>>>>> 2 points = 1 line
>>>>>>>>> 3 points = 3 lines
>>>>>>>>> 4 points = 6 lines
>>>>>>>>> 5 points = 10 lines
>>>>>>>>> 6 points = 15 lines
>>>>>>>>>
>>>>>>>>> in general, with n = number of points, the number of connection lines is
>>>>>>>>> lines = n * (n - 1) / 2 (also called the binomial coefficient C(n, 2), the
>>>>>>>>> number of 2-combinations of n elements)
>>>>>>>>>
>>>>>>>>> So if we have several million neurons (mega-neurons) and every one
>>>>>>>>> is connected to every other neuron then we have several trillion neuron
>>>>>>>>> connections (tera-connections).
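
The counting argument above can be checked in a few lines of Python; the three-million-neuron figure below is just one illustrative reading of "several million":

from math import comb

def fully_connected_lines(n):
    # number of connection lines between n fully connected points: C(n, 2)
    return comb(n, 2)                  # equals n * (n - 1) // 2

for n in range(1, 7):
    print(n, "points =", fully_connected_lines(n), "lines")

# a few million fully connected neurons already give trillions of connections
print(f"{fully_connected_lines(3_000_000):,}")   # 4,499,998,500,000
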
>>>>>>>>>
>>>>>>>>> The power comes when we don't have that many connections but when
>>>>>>>>> we have far fewer.
>>>>>>>>> Then assuming each neuron is a number, we can store the calculated
>>>>>>>>> value of a function with two input variables and one output variable as a
>>>>>>>>> net of answers.
>>>>>>>>>
>>>>>>>>> When we increase our number of neurons to 6 we have one thousand
>>>>>>>>> separate inputs and one trillion connections of input values so as you can
>>>>>>>>> see, this way of storing answers would quickly explode.
>>>>>>>>>
>>>>>>>>> I think this is why I like logical expressions. You can connect
>>>>>>>>> nodes that represent various propositions with only a few connectors.
>>>>>>>>>
>>>>>>>>> David Whitten
>>>>>>>>> 713-870-3834
>>>>>>>>>
>>>>>>>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I agree that being able to explain decisions - or at least being
>>>>>>>>>> able to examine decisions and coming up with explanations that are able to
>>>>>>>>>> be improved on, is a necessary part of AGI and I believe that discrete
>>>>>>>>>> reasoning is necessary for that ability. I think that a system that is very
>>>>>>>>>> efficient at working with discrete reasoning is what is needed and that is
>>>>>>>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>>>>>>>> categories of significance the problem becomes too complex for the same
>>>>>>>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>>>>>>>> combinatorial explosion is the problem.
>>>>>>>>>>
>>>>>>>>>> Jim Bromer
>>>>>>>>>>
>>>>>>>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> That was done circa 1975!
>>>>>>>>>>>
>>>>>>>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>>>>>>>
>>>>>>>>>>> Hi David, on John's web site you will find that IBM created such
>>>>>>>>>>> a system to answer typed natural language questions about property taxes
>>>>>>>>>>> for the city of White Plains, New York. Further IBM executives decided it
>>>>>>>>>>> was too labor intensive to be a business so it was dropped.
>>>>>>>>>>>
>>>>>>>>>>> Ed
>>>>>>>>>>>
>>>>>>>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>>>>>>>
>>>>>>>>>>> I am of the opinion that AGI is most useful when the AGI is
>>>>>>>>>>> capable of generating explanations for the conclusions it makes.
>>>>>>>>>>>
>>>>>>>>>>> There are several systems for describing the base systems that
>>>>>>>>>>> might underlie the ability to explain, but I like a system based on Common
>>>>>>>>>>> Logic, with the knowledge in the system organized using a set of structural
>>>>>>>>>>> axioms to understand the why of the conclusions.
>>>>>>>>>>>
>>>>>>>>>>> Has anyone looked at any of these systems, like John Sowa's
>>>>>>>>>>> Conceptual Graphs? Does anyone want to talk about them?
>>>>>>>>>>>
>>>>>>>>>>> David Whitten
>>>>>>>>>>> 713-870-3834
>>>>>>>>>>>
Categories: Discussions

How to build a consciousness detector.

AGI discussions @AGIRI - Thu, 2017-12-14 18:55
Step 1: gotta look good so make a nice wood and black plexi-glass
housing. Old school industrial design is best, so throw in a CRT and a
few bakelite knobs and toggle switches (always classic...)

Step 2: get a good wire harness to the test subject -- a bit
problematic for filthy humans, but gator clips should work on the robots.

Step 3: route the feedBACK signal to the screen, then, in circuitry,
subtract out the feedFORWARD signal; when the voltage from this stage
gets higher than some threshold epsilon, overlay a red error indicator on the
screen.

Step 4: you can produce a final verdict using a simple meter labelled 0
to 100%. The circuit can be dead simple, start with a fixed voltage,
then a R-C integrator, and a meter. Whenever an error signal is
triggered by step 3, short the RC circuit to ground.  The machine should
actually be constructed with the simplest possible circuit so that there
is as little question as possible with regards to what it is measuring
and how it is measuring it.

Step 5: test out the device on your pet rock. Since it does not generate
a feedBACK signal, it will have a 100% error signal, which will hold the
voltage at the meter to ground, representing 0 consciousness.

A democrat would probably register around 60% on this circuit, a
conservative would score maybe 80-85%, a highly advanced AI would
probably be near 98%...


I realize that the plastic-faced chatbot is a great publicity stunt, but
at some point we should get serious and run this machine on it and show
that it really isn't conscious in any meaningful way. =|


I really feel I'm not being taken seriously at all so my presentation is
a bit silly today but I'm being absolutely serious about this. =|

--
Please report bounces from this address to atg@numentics.com

Powers are not rights.
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Wed, 2017-12-13 22:22
"Tom believes that Mary wants to marry a sailor."
I didn't realize that Tom even knew Mary. Does Tom know Mary?
Is Mary engaged?
Is Tom jealous?
Does Mary have a particular sailor in mind?
If so, do we know the sailor?
What navy is the sailor in?
What kind of ship does the sailor work in? Is it a sailboat rather than
a ship?
Who's Tom? Who's Mary?
Why use the term "believes"? Why not say something like, "Tom told me that
Mary wants to marry a sailor."?
Where did this sentence come from? Am I writing this sentence to the group
for some reason? Does David know Tom or Mary?
Are they actual people? Is this sentence being used in a play of some kind?
Assuming that it is being used to convey information of some kind, what is
the principal reason for writing the sentence down?
I could go on.

Even assuming that your ontology could potentially handle most relevant
questions, the fact that there are so many other ways to convey the same
information, or information that was substantially similar, means that the
relevancy problem would still probably be too severe.

I am not saying that some sort of ontological system of relations is not
needed. But the system has to be able to ignore possible relations and find
the most appropriate relations in order to effectively 'understand' the
statement. (And this is the relevancy problem as it, the story goes, first
appeared.)

I have been working on Logical Satisfiability, and even though I have not
found a time-effective solution for those kinds of problems, I am starting
to wonder if AI Conceptual Satisfiability might be more feasible if we can
use less detailed (or partially defined) generalizations as indexes. There
is one concern about the whole indexing scheme. It increases the total
number of possible relations that have to be considered. But an assumption
of an indexing system is that it would be used in a way that tries to limit
the number of ways to interpret an expression until a -good- interpretation
is found. Since it is reasonable to assume that such a system would be
imperfect it also needs to have pathways that can effectively handle dead
ends without needing to retrace and start over again.

Since I do not have a reliably effective way to handle Logical
Satisfiability, and since an AI Conceptual system would need a lot of
indexing, and since human reasoning is not perfectly logical, it seems
reasonable to consider a generalization indexing system that contains (or
points to) the possible relations that are needed but is not always
perfectly integrable into that system. (The indexing can be used to locate
possibly relevant relations but the indexing, and indeed, any concept, is
not necessarily going to be at the same 'resolution' (so to speak) as other
concepts or systems of concepts that may be related to it.)
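
As a very rough sketch of what such a generalization index might look like in code (the tags and structures below are invented for illustration, not a design from this thread):

from collections import defaultdict

# A toy generalization index: coarse, partially defined tags point to more
# specific candidate structures, so interpretation can be narrowed before any
# detailed matching is attempted.
index = defaultdict(list)

def file_structure(general_tags, structure):
    for tag in general_tags:
        index[tag].append(structure)

file_structure({"belief", "religion"}, "argument-about-religious-belief")
file_structure({"belief", "knowledge"}, "all-knowledge-is-belief")
file_structure({"science", "experiment"}, "definition-by-repeatable-experiment")

def candidate_interpretations(expression_tags):
    # Collect every structure reachable from the expression's coarse tags;
    # a later, more specific pass would rank these or abandon dead ends.
    seen = []
    for tag in expression_tags:
        for s in index[tag]:
            if s not in seen:
                seen.append(s)
    return seen

print(candidate_interpretations(["belief", "science"]))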

Concepts and indexes (or the parts of concepts and indexes) will not be
confinable to specified ontological roles. There are lots of exceptions of
course, but the thing is, they are exceptions more than rules. We can
change the idea of a 'rule' but that also implies that the issue is not
easily pliant without creating some additional complications.

Jim Bromer

On Wed, Dec 13, 2017 at 10:45 AM, David Whitten <whitten@netcom.com> wrote:

> This discussion has seemed to lag.
> Perhaps we could discuss some things basic to this idea.
>
> If we have a rule
> for any person who is a mother
> there is another person
> where the first person is the mother-of the second person
>
> or in logic:
>
> (for-all ((?M PERSON))
>   (if (mother? ?M)
>       (there-exists ((?C PERSON))
>         (if (different? ?M ?C)
>             (mother-of ?M ?C)))))
>
> This way of writing logic makes some assumptions.
>
> so this rule as written only applies to elements of the Collection named
> Person
> who also satisfy the function "mother?" .
> Typically you use a question mark (?) at the beginning of a variable name
> to show that this name isn't a constant.
> If you use a question mark as the last character in a name,
> this means that it is a function that returns TRUE or FALSE
> Some people like using hyphen-P as their marker that the name is a
> Boolean function, like mother-p or mother-P.
> The "-p" at the end marks the name as one that embodies a question-function
> that can be used in an "if" statement.
>
> note that the function "different?" can take more than two arguments.
> In this case using the function "different" is required because
> the semantics of mother-hood requires no-one can be their own mother,
> and because the Controlled Natural Language (CNL) for the rule
> describes the second person as "another person"
>
> The argument in the for-all connective and there-exists connective
> could take a list of variables like (?A ?B ?C).
> Since we require that the variable ?M be referencing a Person,
> we are replacing the simple variable name ?A with a list (?A Person).
> This means that the variable ?A has to satisfy the "type" information
> of "Person" or in other words, the value of the variable ?A must be an
> element of the Collection named "Person"
>
> The predicate "mother-of" ends with hyphen-of to say this is a
> relationship
> between the individual referenced by ?M and the individual referenced by ?C
> some people have the convention that when a relationship is being asserted
> as this one is, (it is the consequent of the "if") then the name should
> end with
> an exclamation point "!" i.e. (mother! ?M ?C) where the English sentence
> might
> be : ?M is the mother of ?C
>
> Note this logic does NOT say that the individual ?M is the only mother of
> the
> individual ?C nor does it say that the individual ?C has only one mother.
> This is to accommodate the reality that a person might have a genetic mother
> who donated an egg to a different biological (surrogate) mother, and might then
> be raised by a third person who is their social mother.
> Of course the most common situation is that those three roles are met by
> the
> same person.
>
> Another thing this logical statement does not say is that the individual ?M
> is the mother to only one individual ?C. The "there-exists" connective
> says
> that there must be at least one individual where the condition is true.
> The
> only thing that is required is that the individual ?C must fit all the
> criteria
> in its body. If there is more than one such individual, then each
> separate
> one must fit the criteria ("fit the criteria" is also called "satisfy the
> condition")
>
> Now as you might guess, all these words "different?" "mother-of" "mother?"
> and "PERSON" must also be linked together in rules that are specific about
> what they mean and what they don't mean. The full group of all the rules
> makes up the "ontology".
>
> There is a shorthand for describing the words in an ontology, called a
> "structural ontology", which allows a person to describe the words
> of an ontology without re-defining all the basic facts that are known about
> them, such as which ones take only one argument or which ones are
> Categories or collections.
>
> Does this make sense to you ?
> I am working on a slide show presentation about all of this, and would
> like to know if the way I describe it can be used in the slides.
>
> David Whitten
> 713-870-3834
>
>
>
> On Sat, Dec 9, 2017 at 1:22 PM, David Whitten <whitten@netcom.com> wrote:
>
>> On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>
>>> An AI program might keep track of time it takes to make different kinds
>>> of searches, but that involves additional complications.
>>>
>>
>> Logging just the time is not sufficient in my mind. Recognizing when two
>> searches are coordinated or touch upon the same issues is part of the value
>> of using an AI. Registering these in some form that consolidates searches
>> for one with a search for the other will increase the effectiveness of the
>> system.
>>
>>
>>> I think the relevancy problem is a problem. If a good AI program was
>>> able to narrow in on relevant knowledge then it could be expected to come up
>>> with good responses that could be improved on over time.
>>>
>>
>> I totally agree. Do you think that we could recognize relevant knowledge
>> by creating some form of a computer thesaurus which explains why one word
>> is the same as another one, and why they are different ? Maybe what makes
>> two words different might be what is common with a different group of
>> words. If we look for these common ways to make distinctions we could give
>> the AGI a good resource that would enhance its intelligence.
>>
>>
>>> But because a program has to work from knowledge it already has as well
>>> as working with the input at the moment, it has a greater likelihood of
>>> going down irrelevant paths.
>>>
>>
>> what makes the paths irrelevant is that the paths look relevant or at
>> least possible. The more we mark the paths with distinctions the more
>> likely we will see that they are irrelevant.
>>
>>
>>> One method of dealing with this problem is to use multiple indexing
>>> routes that are capable of utilizing cross-categorization.
>>>
>>> I agree. I think you are saying the same thing as me. Cross-categorizing
>> means categorizing by several different ways so we know when things are in
>> common and when they are not. Multiple indexing routes means treating the
>> routes as significant in and of themselves so that we recognize when two
>> routes are the same, and can use common methods to the routes to process
>> them similarly.
>>
>>
>>
>>> Jim Bromer
>>>
>>>
>> Thank you for your insight.
>>
>> David Whitten
>>
>> On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>
>>> An AI program might keep track of time it takes to make different kinds
>>> of searches, but that involves additional complications. I think the
>>> relevancy problem is a problem. If a good AI program was able to narrow in
>>> on relevant knowledge then it could be expected to come up with good
>>> responses that could be improved on over time. But because a program has to
>>> work from knowledge it already has as well as working with the input at the
>>> moment, it has a greater likelihood of going down irrelevant paths. One
>>> method of dealing with this problem is to use multiple indexing routes that
>>> are capable of utilizing cross-categorization.
>>>
>>> Jim Bromer
>>>
>>> On Fri, Dec 8, 2017 at 7:56 AM, David Whitten <whitten@netcom.com>
>>> wrote:
>>>
>>>> Exactly Jim.
>>>>
>>>> I wonder if there is some way to pre-calculate or trade time for space
>>>> when dealing with this kind of search or problem.
>>>>
>>>> Any time you want the 'best' or 'shortest' or 'fastest' or 'one
>>>> solution above all'
>>>> you are stuck with this exponential 'you have to look at all of them'
>>>> problem.
>>>>
>>>> The best is sometimes the first one you find, but you have to look at
>>>> all the others
>>>> just to make sure that there isn't a better one.
>>>>
>>>> A lot of times when you speed things up, you make a trade-off between
>>>> space and time.
>>>> For these kinds of problems, both time and space are exponentially
>>>> growing.
>>>>
>>>> On Fri, Dec 8, 2017 at 7:49 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>>
>>>>> I meant, when you write the entire series out as a table summations of
>>>>> the counting numbers from 1 to n, the length of the table displays an
>>>>> exponential rate of growth.
>>>>>
>>>>> Jim Bromer
>>>>>
>>>>> On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in
>>>>>> value or length of the line. However, when you write the entire series out,
>>>>>> 1
>>>>>> 1+2=3
>>>>>> 1+2+3=6
>>>>>> 1+2+3+4=10
>>>>>> 1+2+3+4+5=15
>>>>>>
>>>>>> The length of writing the entire table displays an exponential rate
>>>>>> of growth.
>>>>>> 1,5,13,23,35,....
>>>>>> and that is ignoring the line returns.
>>>>>> The difference can seem subtle when you are writing a program and are
>>>>>> not aware of the issue. I think most combinatorial explosions in AI seem to
>>>>>> occur in searches, where the program has to search for the best response or
>>>>>> for the correct response. In Logical Satisfiability the problem occurs in
>>>>>> making calculations.
>>>>>>
>>>>>>
>>>>>> Jim Bromer
>>>>>>
>>>>>> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> The issue with combinatorial explosion is a real one. This is also
>>>>>>>> a real thing in humans.
>>>>>>>> If you look at how many connection lines you get with a fully
>>>>>>>> connected set of points, then the explosion is obvious:
>>>>>>>>
>>>>>>>> 1 point = 0 lines
>>>>>>>> 2 points = 1 line
>>>>>>>> 3 points = 3 lines
>>>>>>>> 4 points = 6 lines
>>>>>>>> 5 points = 10 lines
>>>>>>>> 6 points = 15 lines
>>>>>>>>
>>>>>>>> in general, with n = number of points, the number of connection lines is
>>>>>>>> lines = n * (n - 1) / 2 (also called the binomial coefficient C(n, 2), the
>>>>>>>> number of 2-combinations of n elements)
>>>>>>>>
>>>>>>>> So if we have several million neurons (mega-neurons) and every one
>>>>>>>> is connected to every other neuron then we have several trillion neuron
>>>>>>>> connections (tera-connections).
>>>>>>>>
>>>>>>>> The power comes when we don't have that many connections but when
>>>>>>>> we have far fewer.
>>>>>>>> Then assuming each neuron is a number, we can store the calculated
>>>>>>>> value of a function with two input variables and one output variable as a
>>>>>>>> net of answers.
>>>>>>>>
>>>>>>>> When we increase our number of neurons to 6 we have one thousand
>>>>>>>> separate inputs and one trillion connections of input values so as you can
>>>>>>>> see, this way of storing answers would quickly explode.
>>>>>>>>
>>>>>>>> I think this is why I like logical expressions. You can connect
>>>>>>>> nodes that represent various propositions with only a few connectors.
>>>>>>>>
>>>>>>>> David Whitten
>>>>>>>> 713-870-3834
>>>>>>>>
>>>>>>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I agree that being able to explain decisions - or at least being
>>>>>>>>> able to examine decisions and coming up with explanations that are able to
>>>>>>>>> be improved on, is a necessary part of AGI and I believe that discrete
>>>>>>>>> reasoning is necessary for that ability. I think that a system that is very
>>>>>>>>> efficient at working with discrete reasoning is what is needed and that is
>>>>>>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>>>>>>> categories of significance the problem becomes too complex for the same
>>>>>>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>>>>>>> combinatorial explosion is the problem.
>>>>>>>>>
>>>>>>>>> Jim Bromer
>>>>>>>>>
>>>>>>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> That was done circa 1975!
>>>>>>>>>>
>>>>>>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>>>>>>
>>>>>>>>>> Hi David, on John's web site you will find that IBM created such
>>>>>>>>>> a system to answer typed natural language questions about property taxes
>>>>>>>>>> for the city of White Plains, New York. Further IBM executives decided it
>>>>>>>>>> was too labor intensive to be a business so it was dropped.
>>>>>>>>>>
>>>>>>>>>> Ed
>>>>>>>>>>
>>>>>>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>>>>>>
>>>>>>>>>> I am of the opinion that AGI is most useful when the AGI is
>>>>>>>>>> capable of generating explanations for the conclusions it makes.
>>>>>>>>>>
>>>>>>>>>> There are several systems for describing the base systems that
>>>>>>>>>> might underlie the ability to explain, but I like a system based on Common
>>>>>>>>>> Logic, with the knowledge in the system organized using a set of structural
>>>>>>>>>> axioms to understand the why of the conclusions.
>>>>>>>>>>
>>>>>>>>>> Has anyone looked at any of these systems, like John Sowa's
>>>>>>>>>> Conceptual Graphs? Does anyone want to talk about them?
>>>>>>>>>>
>>>>>>>>>> David Whitten
>>>>>>>>>> 713-870-3834
>>>>>>>>>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Wed, 2017-12-13 17:45
This discussion has seemed to lag.
Perhaps we could discuss some things basic to this idea.

If we have a rule
for any person who is a mother
there is another person
where the first person is the mother-of the second person

or in logic:

(for-all ((?M PERSON))
  (if (mother? ?M)
      (there-exists ((?C PERSON))
        (if (different? ?M ?C)
            (mother-of ?M ?C)))))

This way of writing logic makes some assumptions.

so this rule as written only applies to elements of the Collection named
Person
who also satisfy the function "mother?" .
Typically you use a question mark (?) at the beginning of a variable name
to show that this name isn't a constant.
If you use a question mark as the last character in a name,
this means that it is a function that returns TRUE or FALSE
Some people like using hyphen-P as their marker that the name is a
Boolean function, like mother-p or mother-P.
The "-p" at the end marks the name as one that embodies a question-function
that can be used in an "if" statement.

note that the function "different?" can take more than two arguments.
In this case using the function "different" is required because
the semantics of mother-hood requires no-one can be their own mother,
and because the Controlled Natural Language (CNL) for the rule
describes the second person as "another person"

The argument in the for-all connective and there-exists connective
could take a list of variables like (?A ?B ?C).
Since we require that the variable ?M be referencing a Person,
we are replacing the simple variable name ?A with a list (?A Person).
This means that the variable ?A has to satisfy the "type" information
of "Person" or in other words, the value of the variable ?A must be an
element of the Collection named "Person"

The predicate "mother-of" ends with hyphen-of to say this is a relationship
between the individual referenced by ?M and the individual referenced by ?C.
Some people have the convention that when a relationship is being asserted,
as this one is (it is the consequent of the "if"), then the name should end
with an exclamation point "!", i.e. (mother! ?M ?C), where the English
sentence might be: ?M is the mother of ?C.

Note this logic does NOT say that the individual ?M is the only mother of
the
individual ?C nor does it say that the individual ?C has only one mother.
This is to accommodate the reality that a person might have a genetic mother
who donated an egg to a different biological (surrogate) mother, and might then
be raised by a third person who is their social mother.
Of course the most common situation is that those three roles are met by the
same person.

Another thing this logical statement does not say is that the individual ?M
is the mother to only one individual ?C. The "there-exists" connective
says
that there must be at least one individual where the condition is true. The
only thing that is required is that the individual ?C must fit all the
criteria
in its body. If there is more than one such individual, then each separate
one must fit the criteria ("fit the criteria" is also called "satisfy the
condition")

Now as you might guess, all these words "different?" "mother-of" "mother?"
and "PERSON" must also be linked together in rules that are specific about
what they mean and what they don't mean. The full group of all the rules
makes up the "ontology".

There is a shorthand for describing the words in an ontology, called a
"structural ontology", which allows a person to describe the words
of an ontology without re-defining all the basic facts that are known about
them, such as which ones take only one argument or which ones are
Categories or collections.
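
If it helps the slides, here is a small Python sketch that checks the intended reading of the rule against a toy set of facts; the names and relations are invented purely for illustration:

# Toy facts; every name here is made up.
persons   = {"alice", "beth", "carol", "dana"}
is_mother = {"alice", "carol"}                      # satisfies (mother? ?M)
mother_of = {("alice", "beth"), ("carol", "dana")}  # (mother-of ?M ?C) pairs

def rule_holds():
    # Intended reading of the rule: for every person M with (mother? M) there
    # exists some other person C with (mother-of M C).
    return all(
        any(m != c and (m, c) in mother_of for c in persons)
        for m in persons if m in is_mother
    )

print(rule_holds())                    # True on the facts above
mother_of.discard(("carol", "dana"))
print(rule_holds())                    # False: carol no longer has a child listed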

Does this make sense to you ?
I am working on a slide show presentation about all of this, and would
like to know if the way I describe it can be used in the slides.

David Whitten
713-870-3834



On Sat, Dec 9, 2017 at 1:22 PM, David Whitten <whitten@netcom.com> wrote:

> On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>
>> An AI program might keep track of time it takes to make different kinds
>> of searches, but that involves additional complications.
>>
>
> Logging just the time is not sufficient in my mind. Recognizing when two
> searches are coordinated or touch upon the same issues is part of the value
> of using an AI. Registering these in some form that consolidates searches
> for one with a search for the other will increase the effectiveness of the
> system.
>
>
>> I think the relevancy problem is a problem. If a good AI program was able
>> to narrow in on relevant knowledge then it could be expected to come up with
>> good responses that could be improved on over time.
>>
>
> I totally agree. Do you think that we could recognize relevant knowledge
> by creating some form of a computer thesaurus which explains why one word
> is the same as another one, and why they are different ? Maybe what makes
> two words different might be what is common with a different group of
> words. If we look for these common ways to make distinctions we could give
> the AGI a good resource that would enhance its intelligence.
>
>
>> But because a program has to work from knowledge it already has as well
>> as working with the input at the moment, it has a greater likelihood of
>> going down irrelevant paths.
>>
>
> what makes the paths irrelevant is that the paths look relevant or at
> least possible. The more we mark the paths with distinctions the more
> likely we will see that they are irrelevant.
>
>
>> One method of dealing with this problem is to use multiple indexing
>> routes that are capable of utilizing cross-categorization.
>>
>> I agree. I think you are saying the same thing as me. Cross-categorizing
> means categorizing by several different ways so we know when things are in
> common and when they are not. Multiple indexing routes means treating the
> routes as significant in and of themselves so that we recognize when two
> routes are the same, and can use common methods to the routes to process
> them similarly.
>
>
>
>> Jim Bromer
>>
>>
> Thank you for your insight.
>
> David Whitten
>
> On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>
>> An AI program might keep track of time it takes to make different kinds
>> of searches, but that involves additional complications. I think the
>> relevancy problem is a problem. If a good AI program was able to narrow in
>> on relevant knowledge then it could be expected to come up with good
>> responses that could be improved on over time. But because a program has to
>> work from knowledge it already has as well as working with the input at the
>> moment, it has a greater likelihood of going down irrelevant paths. One
>> method of dealing with this problem is to use multiple indexing routes that
>> are capable of utilizing cross-categorization.
>>
>> Jim Bromer
>>
>> On Fri, Dec 8, 2017 at 7:56 AM, David Whitten <whitten@netcom.com> wrote:
>>
>>> Exactly Jim.
>>>
>>> I wonder if there is some way to pre-calculate or trade time for space
>>> when dealing with this kind of search or problem.
>>>
>>> Any time you want the 'best' or 'shortest' or 'fastest' or 'one solution
>>> above all'
>>> you are stuck with this exponential 'you have to look at all of them'
>>> problem.
>>>
>>> The best is sometimes the first one you find, but you have to look at
>>> all the others
>>> just to make sure that there isn't a better one.
>>>
>>> A lot of times when you speed things up, you make a trade-off between
>>> space and time.
>>> For these kinds of problems, both time and space are exponentially
>>> growing.
>>>
>>> On Fri, Dec 8, 2017 at 7:49 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>
>>>> I meant, when you write the entire series out as a table summations of
>>>> the counting numbers from 1 to n, the length of the table displays an
>>>> exponential rate of growth.
>>>>
>>>> Jim Bromer
>>>>
>>>> On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>>
>>>>> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in
>>>>> value or length of the line. However, when you write the entire series out,
>>>>> 1
>>>>> 1+2=3
>>>>> 1+2+3=6
>>>>> 1+2+3+4=10
>>>>> 1+2+3+4+5=15
>>>>>
>>>>> The length of writing the entire table displays an exponential rate of
>>>>> growth.
>>>>> 1,5,13,23,35,....
>>>>> and that is ignoring the line returns.
>>>>> The difference can seem subtle when you are writing a program and are
>>>>> not aware of the issue. I think most combinatorial explosions in AI seem to
>>>>> occur in searches, where the program has to search for the best response or
>>>>> for the correct response. In Logical Satisfiability the problem occurs in
>>>>> making calculations.
>>>>>
>>>>>
>>>>> Jim Bromer
>>>>>
>>>>> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com>
>>>>>> wrote:
>>>>>>
>>>>>>> The issue with combinatorial explosion is a real one. This is also a
>>>>>>> real thing in humans.
>>>>>>> If you look at how many connection lines you get with a fully
>>>>>>> connected set of points, then the explosion is obvious:
>>>>>>>
>>>>>>> 1 point = 0 lines
>>>>>>> 2 points = 1 line
>>>>>>> 3 points = 3 lines
>>>>>>> 4 points = 6 lines
>>>>>>> 5 points = 10 lines
>>>>>>> 6 points = 15 lines
>>>>>>>
>>>>>>> in general, with n = number of points, the number of connection lines is
>>>>>>> lines = n * (n - 1) / 2 (also called the binomial coefficient C(n, 2), the
>>>>>>> number of 2-combinations of n elements)
>>>>>>>
>>>>>>> So if we have several million neurons (mega-neurons) and every one
>>>>>>> is connected to every other neuron then we have several trillion neuron
>>>>>>> connections (tera-connections).
>>>>>>>
>>>>>>> The power comes when we don't have that many connections but when we
>>>>>>> have far fewer.
>>>>>>> Then assuming each neuron is a number, we can store the calculated
>>>>>>> value of a function with two input variables and one output variable as a
>>>>>>> net of answers.
>>>>>>>
>>>>>>> When we increase our number of neurons to 6 we have one thousand
>>>>>>> separate inputs and one trillion connections of input values so as you can
>>>>>>> see, this way of storing answers would quickly explode.
>>>>>>>
>>>>>>> I think this is why I like logical expressions. You can connect
>>>>>>> nodes that represent various propositions with only a few connectors.
>>>>>>>
>>>>>>> David Whitten
>>>>>>> 713-870-3834
>>>>>>>
>>>>>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I agree that being able to explain decisions - or at least being
>>>>>>>> able to examine decisions and coming up with explanations that are able to
>>>>>>>> be improved on, is a necessary part of AGI and I believe that discrete
>>>>>>>> reasoning is necessary for that ability. I think that a system that is very
>>>>>>>> efficient at working with discrete reasoning is what is needed and that is
>>>>>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>>>>>> categories of significance the problem becomes too complex for the same
>>>>>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>>>>>> combinatorial explosion is the problem.
>>>>>>>>
>>>>>>>> Jim Bromer
>>>>>>>>
>>>>>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> That was done circa 1975!
>>>>>>>>>
>>>>>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>>>>>
>>>>>>>>> Hi David, on John's web site you will find that IBM created such a
>>>>>>>>> system to answer typed natural language questions about property taxes for
>>>>>>>>> the city of White Plains, New York. Further IBM executives decided it was
>>>>>>>>> too labor intensive to be a business so it was dropped.
>>>>>>>>>
>>>>>>>>> Ed
>>>>>>>>>
>>>>>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>>>>>
>>>>>>>>> I am of the opinion that AGI is most useful when the AGI is
>>>>>>>>> capable of generating explanations for the conclusions it makes.
>>>>>>>>>
>>>>>>>>> There are several systems for describing the base systems that
>>>>>>>>> might underlie the ability to explain, but I like a system based on Common
>>>>>>>>> Logic, with the knowledge in the system organized using a set of structural
>>>>>>>>> axioms to understand the why of the conclusions.
>>>>>>>>>
>>>>>>>>> Has anyone looked at any of these systems, like John Sowa's
>>>>>>>>> Conceptual Graphs? Does anyone want to talk about them?
>>>>>>>>>
>>>>>>>>> David Whitten
>>>>>>>>> 713-870-3834
>>>>>>>>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Sat, 2017-12-09 20:22
On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:

> An AI program might keep track of time it takes to make different kinds of
> searches, but that involves additional complications.
>

Logging just the time is not sufficient in my mind. Recognizing when two
searches are coordinated or touch upon the same issues is part of the value
of using an AI. Registering these in some form that consolidates searches
for one with a search for the other will increase the effectiveness of the
system.


> I think the relevancy problem is a problem. If a good AI program was able
> to narrow in on relevant knowledge then it could be expected to come up with
> good responses that could be improved on over time.
>

I totally agree. Do you think that we could recognize relevant knowledge
by creating some form of a computer thesaurus which explains why one word
is the same as another one, and why they are different ? Maybe what makes
two words different might be what is common with a different group of
words. If we look for these common ways to make distinctions we could give
the AGI a good resource that would enhance its intelligence.
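
A toy sketch of such a computer thesaurus might look like this; the words and features below are made up for illustration:

# Each word carries a set of features, so the program can say both why two
# words are alike (shared features) and why they differ (features each shares
# with a different group of words instead).
features = {
    "boat":  {"vehicle", "watercraft", "small"},
    "ship":  {"vehicle", "watercraft", "large"},
    "truck": {"vehicle", "road", "large"},
}

def compare(a, b):
    shared = features[a] & features[b]
    only_a = features[a] - features[b]
    only_b = features[b] - features[a]
    return shared, only_a, only_b

shared, only_a, only_b = compare("boat", "ship")
print("shared:", shared)     # vehicle, watercraft
print("boat only:", only_a)  # small
print("ship only:", only_b)  # large -- the feature ship shares with truck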


> But because a program has to work from knowledge it already has as well as
> working with the input at the moment, it has a greater likelihood of going
> down irrelevant paths.
>

What makes the paths irrelevant is that the paths look relevant or at least
possible. The more we mark the paths with distinctions the more likely we
will see that they are irrelevant.


> One method of dealing with this problem is to use multiple indexing routes
> that are capable of utilizing cross-categorization.
>

I agree. I think you are saying the same thing as me. Cross-categorizing
means categorizing in several different ways, so we know when things have
something in common and when they do not. Multiple indexing routes means
treating the routes as significant in and of themselves, so that we
recognize when two routes are the same and can use common methods to
process them similarly.
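
A minimal sketch of this, under the assumption that "routes" can be modelled
as independent category maps over the same items (the class and method names
here are hypothetical):

from collections import defaultdict

class MultiIndex:
    """Index the same items under several categorizations ('routes')."""
    def __init__(self):
        self.routes = defaultdict(lambda: defaultdict(set))  # route -> key -> items

    def add(self, item, **categories):
        for route, key in categories.items():
            self.routes[route][key].add(item)

    def lookup(self, route, key):
        return set(self.routes[route][key])

    def overlap(self, query1, query2):
        # Items reachable by both routes: a crude signal that two searches
        # touch upon the same knowledge.
        return self.lookup(*query1) & self.lookup(*query2)

idx = MultiIndex()
idx.add("penguin", taxonomy="bird", locomotion="swims")
idx.add("sparrow", taxonomy="bird", locomotion="flies")
idx.add("trout", taxonomy="fish", locomotion="swims")
print(idx.overlap(("taxonomy", "bird"), ("locomotion", "swims")))  # {'penguin'}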



> Jim Bromer
>
>
Thank you for your insight.

David Whitten

On Sat, Dec 9, 2017 at 11:51 AM, Jim Bromer <jimbromer@gmail.com> wrote:

> An AI program might keep track of time it takes to make different kinds of
> searches, but that involves additional complications. I think the relevancy
> problem is a problem. If a good AI program was able to narrow in on
> relevant knowledge then it could be expected to come up with good responses
> that could be improved on over time. But because a program has to work from
> knowledge it already has as well as working with the input at the moment,
> it has a greater likelihood of going down irrelevant paths. One method of
> dealing with this problem is to use multiple indexing routes that are
> capable of utilizing cross-categorization.
>
> Jim Bromer
>
> On Fri, Dec 8, 2017 at 7:56 AM, David Whitten <whitten@netcom.com> wrote:
>
>> Exactly Jim.
>>
>> I wonder if there is some way to pre-calculate or trade time for space
>> when dealing with this kind of search or problem.
>>
>> Any time you want the 'best' or 'shortest' or 'fastest' or 'one solution
>> above all'
>> you are stuck with this exponential 'you have to look at all of them'
>> problem.
>>
>> The best is sometimes the first one you find, but you have to look at all
>> the others
>> just to make sure that there isn't a better one.
>>
>> A lot of times when you speed things up, you make a trade-off between
>> space and time.
>> for these kind of problems, both time and space are exponentially growing.
>>
>> On Fri, Dec 8, 2017 at 7:49 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>
>>> I meant, when you write the entire series out as a table summations of
>>> the counting numbers from 1 to n, the length of the table displays an
>>> exponential rate of growth.
>>>
>>> Jim Bromer
>>>
>>> On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>
>>>> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in
>>>> value or length of the line. However, when you write the entire series out,
>>>> 1
>>>> 1+2=3
>>>> 1+2+3=6
>>>> 1+2+3+4=10
>>>> 1+2+3+4+5=15
>>>>
>>>> The length of writing the entire table displays an exponential rate of
>>>> growth.
>>>> 1,5,13,23,35,....
>>>> and that is ignoring the line returns.
>>>> The difference can seem subtle when you are writing a program and are
>>>> not aware of the issue. I think most combinatorial explosions in AI seem to
>>>> occur in searches, where the program has to search for the best response or
>>>> for the correct response. In Logical Satisfiability the problem occurs in
>>>> making calculations.
>>>>
>>>>
>>>> Jim Bromer
>>>>
>>>> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com>
>>>>> wrote:
>>>>>
>>>>>> The issue with combinational explosion is a real one. This is also a
>>>>>> real thing in humans.
>>>>>> If you look at how many connection lines you get with a fully
>>>>>> connected set of points, then the explosion is obvious:
>>>>>>
>>>>>> 1 point = 0 lines
>>>>>> 2 points = 1 line
>>>>>> 3 points = 3 lines
>>>>>> 4 points = 6 lines
>>>>>> 5 points = 10 lines
>>>>>> 6 points = 15 lines
>>>>>>
>>>>>> in general the number of lines given *n*=number of points is
>>>>>> line=(number of connection lines)
>>>>>> line=(*n* * (*n*-1))/ 2 (also called the binomial coefficient or the
>>>>>> number of 2-combinations of *n* elements )
>>>>>>
>>>>>> So if we have several million neurons (mega-neurons) and every one is
>>>>>> connected to every other neuron then we have several trillion neuron
>>>>>> connections (tera-connectons).
>>>>>>
>>>>>> The power comes when we don't have that many connections but when we
>>>>>> have far fewer.
>>>>>> Then assuming each neuron is a number, we can store the calculated
>>>>>> value of a function with two input variables and one output variable as a
>>>>>> net of answers.
>>>>>>
>>>>>> When we increase our number of neurons to 6 we have one thousand
>>>>>> separate inputs and one trillion connections of input values so as you can
>>>>>> see, this way of storing answers would quickly explode.
>>>>>>
>>>>>> I think this is why I like logical expressions. You can connect nodes
>>>>>> that represent various propositions with only a few connectors.
>>>>>>
>>>>>> David Whitten
>>>>>> 713-870-3834 <(713)%20870-3834>
>>>>>>
>>>>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I agree that being able to explain decisions - or at least being
>>>>>>> able to examine decisions and coming up with explanations that are able to
>>>>>>> be improved on, is a necessary part of AGI and I believe that discrete
>>>>>>> reasoning is necessary for that ability. I think that a system that is very
>>>>>>> efficient at working with discrete reasoning is what is needed and that is
>>>>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>>>>> categories of significance the problem becomes to complex for the same
>>>>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>>>>> combinatorial explosion is the problem.
>>>>>>>
>>>>>>> Jim Bromer
>>>>>>>
>>>>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> That was done circa 1975!
>>>>>>>>
>>>>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>>>>
>>>>>>>> Hi David, on John's web site you will find that IBM created such a
>>>>>>>> system to answer typed natural language questions about property taxes for
>>>>>>>> the city of Whiteplains, New York. Further IBM executives decided it was
>>>>>>>> too labor intensive to be a business so it was dropped.
>>>>>>>>
>>>>>>>> Ed
>>>>>>>>
>>>>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>>>>
>>>>>>>> I am of the opinion that AGI is most useful when the AGI is capable
>>>>>>>> of generating explanations for the conclusions it makes.
>>>>>>>>
>>>>>>>> There are several systems for describing the base systems that
>>>>>>>> might underlie the ability to explain, but I like a system based on Common
>>>>>>>> Logic, with the knowledge in the system organized using a set of structural
>>>>>>>> axioms to understand the why of the conclusions.
>>>>>>>>
>>>>>>>> Has anyone looked at any of these systems, like John Sowa's
>>>>>>>> Conceptual Graphs? Does anyone want to talk about them?
>>>>>>>>
>>>>>>>> David Whitten
>>>>>>>> 713-870-3834 <(713)%20870-3834>
>>>>>>>>
Categories: Discussions

Re: [agi] Re: TM Program in Expo Time and Parallel Program in Polynomial Time

AGI discussions @AGIRI - Sat, 2017-12-09 19:15
Mark,
I do not really see how Tauchain is related to the kinds of things I am
interested in. Automatic Theorem Proving would be interesting but it looks
like something that would need a good Satisfiability Solver. I will look at
some of the videos that you mentioned (if I remember. I am pretty busy.).

Jim Bromer

On Sun, Dec 3, 2017 at 5:40 PM, Mark Nuzz <nuzz604@gmail.com> wrote:

>
>
> On Sun, Dec 3, 2017 at 10:54 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>
>> I cannot reply to all of the 0 responses I have gotten on this post, but
>> I do want to clarify my previous comments.
>>
>> My conjecture concerned the differences of running an algorithm on a
>> Turing Machine with a tape as compared to running it on a computer with
>> parallel addressable memory access. I wasn't thinking about massively
>> parallel computers. So I wondered if I could find a simple algorithm that
>> would bog down in a combinatorial explosion on a Turing Machine but not on
>> a modern computer with parallel addressable memory access. A paper tape of
>> a Turing Machine is not dynamically re-writable but I could change that to
>> a serial magnetic tape with an additional tape for temporary memory. The
>> only thing is that the tape would not be able to make fast access to other
>> parts of the tape, it would have to step over every memory location to
>> traverse the tape. (It is a thought experiment so imagine that the tape of
>> the Turing Machine and the addressable memory of the electronic computer
>> are not confined to a certain size.)
>>
>> Unfortunately, I got it wrong the first time. And when I finally came up
>> with an algorithm (that might map step by step onto a useful algorithm)
>> that fit my conjecture I realized that the window of opportunity was even
>> narrower than I first thought.
>>
>> The benefit of parallel addressable memory is that you can make quick
>> access to memory locations, so an extensive system of pointers (or handles
>> or offsets) is going to be important in finding the efficiency needed to
>> avoid a combinatorial explosion. That is, it probably has to be an
>> important part of a creative method to deal effectively with a component
>> system of (discrete) data objects (that can be meaningfully combined to
>> represent different reference objects). So while this thought experiment
>> might seem useless to some people, I believe that it can be used to direct
>> some aspects of research about these problems.
>>
>> Jim Bromer
>>
>
> Hi Jim,
>
> I am not much of a theoretical computer scientist, but I'm interested in
> seeing how this turns out. If you end up reaching a dead-end on this
> project, you might be interested in something called Tauchain, which is a
> research project to build a non-Turing-complete distributed system. It
> features automatic theorem proving and provably-terminating programs, but I
> can't do the project justice in any way I describe. I recommend taking a
> look at some of the videos posted by Ohad. http://www.idni.org/tauchain.
>
> Mark Nuzzolilo
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Sat, 2017-12-09 18:52
An AI program might keep track of time it takes to make different kinds of
searches, but that involves additional complications. I think the relevancy
problem is a problem. If a good AI program was able to narrow in on
relevant knowledge then it could be expected to come up with good responses
that could be improved on over time. But because a program has to work from
knowledge it already has as well as working with the input at the moment,
it has a greater likelihood of going down irrelevant paths. One method of
dealing with this problem is to use multiple indexing routes that are
capable of utilizing cross-categorization.

Jim Bromer

On Fri, Dec 8, 2017 at 7:56 AM, David Whitten <whitten@netcom.com> wrote:

> Exactly Jim.
>
> I wonder if there is some way to pre-calculate or trade time for space
> when dealing with this kind of search or problem.
>
> Any time you want the 'best' or 'shortest' or 'fastest' or 'one solution
> above all'
> you are stuck with this exponential 'you have to look at all of them'
> problem.
>
> The best is sometimes the first one you find, but you have to look at all
> the others
> just to make sure that there isn't a better one.
>
> A lot of times when you speed things up, you make a trade-off between
> space and time.
> for these kind of problems, both time and space are exponentially growing.
>
> On Fri, Dec 8, 2017 at 7:49 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>
>> I meant, when you write the entire series out as a table summations of
>> the counting numbers from 1 to n, the length of the table displays an
>> exponential rate of growth.
>>
>> Jim Bromer
>>
>> On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>
>>> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in value
>>> or length of the line. However, when you write the entire series out,
>>> 1
>>> 1+2=3
>>> 1+2+3=6
>>> 1+2+3+4=10
>>> 1+2+3+4+5=15
>>>
>>> The length of writing the entire table displays an exponential rate of
>>> growth.
>>> 1,5,13,23,35,....
>>> and that is ignoring the line returns.
>>> The difference can seem subtle when you are writing a program and are
>>> not aware of the issue. I think most combinatorial explosions in AI seem to
>>> occur in searches, where the program has to search for the best response or
>>> for the correct response. In Logical Satisfiability the problem occurs in
>>> making calculations.
>>>
>>>
>>> Jim Bromer
>>>
>>> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com>
>>>> wrote:
>>>>
>>>>> The issue with combinational explosion is a real one. This is also a
>>>>> real thing in humans.
>>>>> If you look at how many connection lines you get with a fully
>>>>> connected set of points, then the explosion is obvious:
>>>>>
>>>>> 1 point = 0 lines
>>>>> 2 points = 1 line
>>>>> 3 points = 3 lines
>>>>> 4 points = 6 lines
>>>>> 5 points = 10 lines
>>>>> 6 points = 15 lines
>>>>>
>>>>> in general the number of lines given *n*=number of points is
>>>>> line=(number of connection lines)
>>>>> line=(*n* * (*n*-1))/ 2 (also called the binomial coefficient or the
>>>>> number of 2-combinations of *n* elements )
>>>>>
>>>>> So if we have several million neurons (mega-neurons) and every one is
>>>>> connected to every other neuron then we have several trillion neuron
>>>>> connections (tera-connectons).
>>>>>
>>>>> The power comes when we don't have that many connections but when we
>>>>> have far fewer.
>>>>> Then assuming each neuron is a number, we can store the calculated
>>>>> value of a function with two input variables and one output variable as a
>>>>> net of answers.
>>>>>
>>>>> When we increase our number of neurons to 6 we have one thousand
>>>>> separate inputs and one trillion connections of input values so as you can
>>>>> see, this way of storing answers would quickly explode.
>>>>>
>>>>> I think this is why I like logical expressions. You can connect nodes
>>>>> that represent various propositions with only a few connectors.
>>>>>
>>>>> David Whitten
>>>>> 713-870-3834 <(713)%20870-3834>
>>>>>
>>>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> I agree that being able to explain decisions - or at least being able
>>>>>> to examine decisions and coming up with explanations that are able to be
>>>>>> improved on, is a necessary part of AGI and I believe that discrete
>>>>>> reasoning is necessary for that ability. I think that a system that is very
>>>>>> efficient at working with discrete reasoning is what is needed and that is
>>>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>>>> categories of significance the problem becomes to complex for the same
>>>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>>>> combinatorial explosion is the problem.
>>>>>>
>>>>>> Jim Bromer
>>>>>>
>>>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net> wrote:
>>>>>>
>>>>>>> That was done circa 1975!
>>>>>>>
>>>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>>>
>>>>>>> Hi David, on John's web site you will find that IBM created such a
>>>>>>> system to answer typed natural language questions about property taxes for
>>>>>>> the city of Whiteplains, New York. Further IBM executives decided it was
>>>>>>> too labor intensive to be a business so it was dropped.
>>>>>>>
>>>>>>> Ed
>>>>>>>
>>>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>>>
>>>>>>> I am of the opinion that AGI is most useful when the AGI is capable
>>>>>>> of generating explanations for the conclusions it makes.
>>>>>>>
>>>>>>> There are several systems for describing the base systems that might
>>>>>>> underlie the ability to explain, but I like a system based on Common Logic,
>>>>>>> with the knowledge in the system organized using a set of structural axioms
>>>>>>> to understand the why of the conclusions.
>>>>>>>
>>>>>>> Has anyone looked at any of these systems, like John Sowa's
>>>>>>> Conceptual Graphs? Does anyone want to talk about them?
>>>>>>>
>>>>>>> David Whitten
>>>>>>> 713-870-3834 <(713)%20870-3834>
>>>>>>>
>>>>>>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Fri, 2017-12-08 14:56
Exactly Jim.

I wonder if there is some way to pre-calculate or trade time for space when
dealing with this kind of search or problem.

Any time you want the 'best' or 'shortest' or 'fastest' or 'one solution
above all'
you are stuck with this exponential 'you have to look at all of them'
problem.

The best is sometimes the first one you find, but you have to look at all
the others
just to make sure that there isn't a better one.

A lot of times when you speed things up, you make a trade-off between space
and time. For these kinds of problems, both time and space grow
exponentially.
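
One standard way to make that trade concrete (a generic example, not
something proposed in this thread) is memoization: spend memory caching
results so that a recursive search stops repeating work.

from functools import lru_cache

@lru_cache(maxsize=None)  # the cache is the space we spend to save time
def search_cost(n):
    """Stand-in for an expensive recursive search over n items."""
    if n <= 1:
        return 1
    # Without the cache this recursion repeats work exponentially often.
    return min(search_cost(n - 1), search_cost(n - 2)) + n

print(search_cost(60))  # fast with the cache; hopeless without it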

On Fri, Dec 8, 2017 at 7:49 AM, Jim Bromer <jimbromer@gmail.com> wrote:

> I meant, when you write the entire series out as a table summations of the
> counting numbers from 1 to n, the length of the table displays an
> exponential rate of growth.
>
> Jim Bromer
>
> On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>
>> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in value
>> or length of the line. However, when you write the entire series out,
>> 1
>> 1+2=3
>> 1+2+3=6
>> 1+2+3+4=10
>> 1+2+3+4+5=15
>>
>> The length of writing the entire table displays an exponential rate of
>> growth.
>> 1,5,13,23,35,....
>> and that is ignoring the line returns.
>> The difference can seem subtle when you are writing a program and are not
>> aware of the issue. I think most combinatorial explosions in AI seem to
>> occur in searches, where the program has to search for the best response or
>> for the correct response. In Logical Satisfiability the problem occurs in
>> making calculations.
>>
>>
>> Jim Bromer
>>
>> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com> wrote:
>>
>>>
>>>
>>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com>
>>> wrote:
>>>
>>>> The issue with combinational explosion is a real one. This is also a
>>>> real thing in humans.
>>>> If you look at how many connection lines you get with a fully connected
>>>> set of points, then the explosion is obvious:
>>>>
>>>> 1 point = 0 lines
>>>> 2 points = 1 line
>>>> 3 points = 3 lines
>>>> 4 points = 6 lines
>>>> 5 points = 10 lines
>>>> 6 points = 15 lines
>>>>
>>>> in general the number of lines given *n*=number of points is
>>>> line=(number of connection lines)
>>>> line=(*n* * (*n*-1))/ 2 (also called the binomial coefficient or the
>>>> number of 2-combinations of *n* elements )
>>>>
>>>> So if we have several million neurons (mega-neurons) and every one is
>>>> connected to every other neuron then we have several trillion neuron
>>>> connections (tera-connectons).
>>>>
>>>> The power comes when we don't have that many connections but when we
>>>> have far fewer.
>>>> Then assuming each neuron is a number, we can store the calculated
>>>> value of a function with two input variables and one output variable as a
>>>> net of answers.
>>>>
>>>> When we increase our number of neurons to 6 we have one thousand
>>>> separate inputs and one trillion connections of input values so as you can
>>>> see, this way of storing answers would quickly explode.
>>>>
>>>> I think this is why I like logical expressions. You can connect nodes
>>>> that represent various propositions with only a few connectors.
>>>>
>>>> David Whitten
>>>> 713-870-3834 <(713)%20870-3834>
>>>>
>>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>>
>>>>> I agree that being able to explain decisions - or at least being able
>>>>> to examine decisions and coming up with explanations that are able to be
>>>>> improved on, is a necessary part of AGI and I believe that discrete
>>>>> reasoning is necessary for that ability. I think that a system that is very
>>>>> efficient at working with discrete reasoning is what is needed and that is
>>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>>> categories of significance the problem becomes to complex for the same
>>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>>> combinatorial explosion is the problem.
>>>>>
>>>>> Jim Bromer
>>>>>
>>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net> wrote:
>>>>>
>>>>>> That was done circa 1975!
>>>>>>
>>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>>
>>>>>> Hi David, on John's web site you will find that IBM created such a
>>>>>> system to answer typed natural language questions about property taxes for
>>>>>> the city of Whiteplains, New York. Further IBM executives decided it was
>>>>>> too labor intensive to be a business so it was dropped.
>>>>>>
>>>>>> Ed
>>>>>>
>>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>>
>>>>>> I am of the opinion that AGI is most useful when the AGI is capable
>>>>>> of generating explanations for the conclusions it makes.
>>>>>>
>>>>>> There are several systems for describing the base systems that might
>>>>>> underlie the ability to explain, but I like a system based on Common Logic,
>>>>>> with the knowledge in the system organized using a set of structural axioms
>>>>>> to understand the why of the conclusions.
>>>>>>
>>>>>> Has anyone looked at any of these systems, like John Sowa's
>>>>>> Conceptual Graphs? Does anyone want to talk about them?
>>>>>>
>>>>>> David Whitten
>>>>>> 713-870-3834 <(713)%20870-3834>
>>>>>>
>>>>>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Fri, 2017-12-08 14:49
I meant, when you write the entire series out as a table of summations of the
counting numbers from 1 to n, the length of the table displays an
exponential rate of growth.

Jim Bromer

On Fri, Dec 8, 2017 at 7:46 AM, Jim Bromer <jimbromer@gmail.com> wrote:

> The series, ((n+1)*n)/2, is (or displays) a polynomial increase in value
> or length of the line. However, when you write the entire series out,
> 1
> 1+2=3
> 1+2+3=6
> 1+2+3+4=10
> 1+2+3+4+5=15
>
> The length of writing the entire table displays an exponential rate of
> growth.
> 1,5,13,23,35,....
> and that is ignoring the line returns.
> The difference can seem subtle when you are writing a program and are not
> aware of the issue. I think most combinatorial explosions in AI seem to
> occur in searches, where the program has to search for the best response or
> for the correct response. In Logical Satisfiability the problem occurs in
> making calculations.
>
>
> Jim Bromer
>
> On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com> wrote:
>
>>
>>
>> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com> wrote:
>>
>>> The issue with combinational explosion is a real one. This is also a
>>> real thing in humans.
>>> If you look at how many connection lines you get with a fully connected
>>> set of points, then the explosion is obvious:
>>>
>>> 1 point = 0 lines
>>> 2 points = 1 line
>>> 3 points = 3 lines
>>> 4 points = 6 lines
>>> 5 points = 10 lines
>>> 6 points = 15 lines
>>>
>>> in general the number of lines given *n*=number of points is
>>> line=(number of connection lines)
>>> line=(*n* * (*n*-1))/ 2 (also called the binomial coefficient or the
>>> number of 2-combinations of *n* elements )
>>>
>>> So if we have several million neurons (mega-neurons) and every one is
>>> connected to every other neuron then we have several trillion neuron
>>> connections (tera-connectons).
>>>
>>> The power comes when we don't have that many connections but when we
>>> have far fewer.
>>> Then assuming each neuron is a number, we can store the calculated value
>>> of a function with two input variables and one output variable as a net of
>>> answers.
>>>
>>> When we increase our number of neurons to 6 we have one thousand
>>> separate inputs and one trillion connections of input values so as you can
>>> see, this way of storing answers would quickly explode.
>>>
>>> I think this is why I like logical expressions. You can connect nodes
>>> that represent various propositions with only a few connectors.
>>>
>>> David Whitten
>>> 713-870-3834 <(713)%20870-3834>
>>>
>>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>>
>>>> I agree that being able to explain decisions - or at least being able
>>>> to examine decisions and coming up with explanations that are able to be
>>>> improved on, is a necessary part of AGI and I believe that discrete
>>>> reasoning is necessary for that ability. I think that a system that is very
>>>> efficient at working with discrete reasoning is what is needed and that is
>>>> what is lacking. You can use weighted reasoning but in trying to find new
>>>> categories of significance the problem becomes to complex for the same
>>>> reason that non-polynomial time Satisfiability can become too complex. The
>>>> combinatorial explosion is the problem.
>>>>
>>>> Jim Bromer
>>>>
>>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net> wrote:
>>>>
>>>>> That was done circa 1975!
>>>>>
>>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>>
>>>>> Hi David, on John's web site you will find that IBM created such a
>>>>> system to answer typed natural language questions about property taxes for
>>>>> the city of Whiteplains, New York. Further IBM executives decided it was
>>>>> too labor intensive to be a business so it was dropped.
>>>>>
>>>>> Ed
>>>>>
>>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>>
>>>>> I am of the opinion that AGI is most useful when the AGI is capable of
>>>>> generating explanations for the conclusions it makes.
>>>>>
>>>>> There are several systems for describing the base systems that might
>>>>> underlie the ability to explain, but I like a system based on Common Logic,
>>>>> with the knowledge in the system organized using a set of structural axioms
>>>>> to understand the why of the conclusions.
>>>>>
>>>>> Has anyone looked at any of these systems, like John Sowa's Conceptual
>>>>> Graphs? Does anyone want to talk about them?
>>>>>
>>>>> David Whitten
>>>>> 713-870-3834 <(713)%20870-3834>
>>>>>
>>>>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Fri, 2017-12-08 14:46
The series, ((n+1)*n)/2, is (or displays) a polynomial increase in value or
length of the line. However, when you write the entire series out,
1
1+2=3
1+2+3=6
1+2+3+4=10
1+2+3+4+5=15

The length of writing the entire table displays an exponential rate of
growth.
1,5,13,23,35,....
and that is ignoring the line returns.
The difference can seem subtle when you are writing a program and are not
aware of the issue. I think most combinatorial explosions in AI seem to
occur in searches, where the program has to search for the best response or
for the correct response. In Logical Satisfiability the problem occurs in
making calculations.


Jim Bromer
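
For anyone who wants to check the growth being described, here is a small
sketch (the helper name is made up) that writes the table out and prints how
long it gets as n grows:

def table_rows(n):
    """Yield the rows '1', '1+2=3', '1+2+3=6', ... up to n terms."""
    for k in range(1, n + 1):
        terms = "+".join(str(i) for i in range(1, k + 1))
        yield terms if k == 1 else "%s=%d" % (terms, k * (k + 1) // 2)

total = 0
for n, row in enumerate(table_rows(10), start=1):
    total += len(row)  # cumulative number of characters written so far
    print("n=%2d  row=%-30s  total length=%d" % (n, row, total))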

On Tue, Dec 5, 2017 at 7:31 PM, David Whitten <whitten@netcom.com> wrote:

>
>
> On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com> wrote:
>
>> The issue with combinational explosion is a real one. This is also a real
>> thing in humans.
>> If you look at how many connection lines you get with a fully connected
>> set of points, then the explosion is obvious:
>>
>> 1 point = 0 lines
>> 2 points = 1 line
>> 3 points = 3 lines
>> 4 points = 6 lines
>> 5 points = 10 lines
>> 6 points = 15 lines
>>
>> in general the number of lines given *n*=number of points is
>> line=(number of connection lines)
>> line=(*n* * (*n*-1))/ 2 (also called the binomial coefficient or the
>> number of 2-combinations of *n* elements )
>>
>> So if we have several million neurons (mega-neurons) and every one is
>> connected to every other neuron then we have several trillion neuron
>> connections (tera-connectons).
>>
>> The power comes when we don't have that many connections but when we have
>> far fewer.
>> Then assuming each neuron is a number, we can store the calculated value
>> of a function with two input variables and one output variable as a net of
>> answers.
>>
>> When we increase our number of neurons to 6 we have one thousand separate
>> inputs and one trillion connections of input values so as you can see, this
>> way of storing answers would quickly explode.
>>
>> I think this is why I like logical expressions. You can connect nodes
>> that represent various propositions with only a few connectors.
>>
>> David Whitten
>> 713-870-3834 <(713)%20870-3834>
>>
>> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>>
>>> I agree that being able to explain decisions - or at least being able to
>>> examine decisions and coming up with explanations that are able to be
>>> improved on, is a necessary part of AGI and I believe that discrete
>>> reasoning is necessary for that ability. I think that a system that is very
>>> efficient at working with discrete reasoning is what is needed and that is
>>> what is lacking. You can use weighted reasoning but in trying to find new
>>> categories of significance the problem becomes to complex for the same
>>> reason that non-polynomial time Satisfiability can become too complex. The
>>> combinatorial explosion is the problem.
>>>
>>> Jim Bromer
>>>
>>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net> wrote:
>>>
>>>> That was done circa 1975!
>>>>
>>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>>
>>>> Hi David, on John's web site you will find that IBM created such a
>>>> system to answer typed natural language questions about property taxes for
>>>> the city of Whiteplains, New York. Further IBM executives decided it was
>>>> too labor intensive to be a business so it was dropped.
>>>>
>>>> Ed
>>>>
>>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>>
>>>> I am of the opinion that AGI is most useful when the AGI is capable of
>>>> generating explanations for the conclusions it makes.
>>>>
>>>> There are several systems for describing the base systems that might
>>>> underlie the ability to explain, but I like a system based on Common Logic,
>>>> with the knowledge in the system organized using a set of structural axioms
>>>> to understand the why of the conclusions.
>>>>
>>>> Has anyone looked at any of these systems, like John Sowa's Conceptual
>>>> Graphs? Does anyone want to talk about them?
>>>>
>>>> David Whitten
>>>> 713-870-3834 <(713)%20870-3834>
>>>>
>>>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Fri, 2017-12-08 03:12
I wonder if the need for an AGI to generate explanations could help in
debugging regular programs too. As many of you know, there is a tradition
of logging progress in a program by producing text that gets dumped to a
log file. Usually this is just enough to find the code location where the
code execution has progressed. Sometimes it includes the values of
variables useful at that particular point.

Would we (as an industry) gain by having more "standard" information dumped
to an "explanation" log or channel? Like what code is running, what
information is being processed, and what goal is being met by the current
code?
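
A minimal sketch of what such an "explanation" channel might look like with
Python's standard logging module; the field names (goal, step, data) are
invented here purely for illustration:

import logging

explain = logging.getLogger("explanation")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "EXPLAIN goal=%(goal)s step=%(step)s data=%(data)s"))
explain.addHandler(handler)
explain.setLevel(logging.INFO)

def note(goal, step, data):
    """Emit one structured explanation record."""
    explain.info("explain", extra={"goal": goal, "step": step, "data": data})

# Hypothetical usage inside some reasoning step:
note(goal="answer a property-tax question",
     step="look up the rate table",
     data={"city": "White Plains", "year": 2017})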

On Wed, Dec 6, 2017 at 8:01 PM, Ed Pell <edpell@optonline.net> wrote:

> Please tell us more. Thanks.
>
> On 12/5/2017 10:33 AM, Stefan Reich wrote:
>
> I am making such a system. It operates on the level of English throughout,
> so naturally all its reasonings are understandable to humans.
>
> Greetings,
> Stefan
>
> On 5 December 2017 at 03:40, David Whitten <whitten@netcom.com> wrote:
>
>> I am of the opinion that AGI is most useful when the AGI is capable of
>> generating explanations for the conclusions it makes.
>>
>> There are several systems for describing the base systems that might
>> underlie the ability to explain, but I like a system based on Common Logic,
>> with the knowledge in the system organized using a set of structural axioms
>> to understand the why of the conclusions.
>>
>> Has anyone looked at any of these systems, like John Sowa's Conceptual
>> Graphs? Does anyone want to talk about them?
>>
>> David Whitten
>> 713-870-3834 <%28713%29%20870-3834>
>>
>>
>
>
>
> --
> Stefan Reich
> BotCompany.de
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Thu, 2017-12-07 03:01
Please tell us more. Thanks.


On 12/5/2017 10:33 AM, Stefan Reich wrote:
> I am making such a system. It operates on the level of English
> throughout, so naturally all its reasonings are understandable to humans.
>
> Greetings,
> Stefan
>
> On 5 December 2017 at 03:40, David Whitten <whitten@netcom.com
> <mailto:whitten@netcom.com>&#x3E; wrote:
>
> I am of the opinion that AGI is most useful when the AGI is
> capable of generating explanations for the conclusions it makes.
>
> There are several systems for describing the base systems that
> might underlie the ability to explain, but I like a system based
> on Common Logic, with the knowledge in the system organized using
> a set of structural axioms to understand the why of the conclusions.
>
> Has anyone looked at any of these systems, like John Sowa's
> Conceptual Graphs?  Does anyone want to talk about them?
>
> David Whitten
> 713-870-3834 <tel:%28713%29%20870-3834>
>
>
>
>
> --
> Stefan Reich
> BotCompany.de

Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Wed, 2017-12-06 18:38
Stefan,
could you elaborate on your efforts?
How do you understand the "level of English throughout"?

Thanks
David
713-870-3834 <(713)%20870-3834>

On Tue, Dec 5, 2017 at 10:33 AM, Stefan Reich <
stefan.reich.maker.of.eye@googlemail.com> wrote:

> I am making such a system. It operates on the level of English throughout,
> so naturally all its reasonings are understandable to humans.
>
> Greetings,
> Stefan
>
> On 5 December 2017 at 03:40, David Whitten <whitten@netcom.com> wrote:
>
>> I am of the opinion that AGI is most useful when the AGI is capable of
>> generating explanations for the conclusions it makes.
>>
>> There are several systems for describing the base systems that might
>> underlie the ability to explain, but I like a system based on Common Logic,
>> with the knowledge in the system organized using a set of structural axioms
>> to understand the why of the conclusions.
>>
>> Has anyone looked at any of these systems, like John Sowa's Conceptual
>> Graphs? Does anyone want to talk about them?
>>
>> David Whitten
>> 713-870-3834 <(713)%20870-3834>
>>
>>
>>
>
>
>
> --
> Stefan Reich
> BotCompany.de
Categories: Discussions

Re: [agi] TM Program in Expo Time and Parallel Program in Polynomial Time

AGI discussions @AGIRI - Wed, 2017-12-06 06:35
Thanks for your comments. I was just trying to use the thought experiment
to see if I could find an algorithm which was in exponential time on the type
of Turing Machine I described but would be in polynomial time on multiple
line addressable memory computers more like the ones we use. I hoped that
this thought experiment might help me to find advantages that I could use
with an actual program. I found one example but I later realized that it
was a rather mundane example and that there are a lot of other algorithms
that might be better models of the kinds of situations that I would expect
to encounter if an effective program was feasible. The goal was to help me
find some principles I could use in the design goals of a program. (I
actually realized that there are many mathematical functions that could be
used as design guidelines for programs that would be in polynomial time
using addressable memory but would be in exponential time if the program
had to traverse the data back and forth. In effect my thought experiment
was not necessary but it did provide me with some ideas that I
think will have interesting results and that I am carefully looking at.)
So I am not trying to write a proof, I am only interested in designing
programs that can be more efficient at working with the combinatorial
complexity in order to avoid combinatorial explosions when working with
intense combinations of discrete objects.
I do expect that there are programs that would solve logical Satisfiability
problems on infinite addressable memory computers in polynomial time but
would not run in polynomial time on an infinite Turing Machine. However, I
certainly have not found one and I am not claiming to be on the verge of
finding one. I think the combinatorial complexity problem for Discrete AI
programs might be easier than the problem for Logical Satisfiability
because we need to use different kinds of insights with higher reasoning
and these varieties of insights cannot usually be smoothly integrated into
other kinds of insights even if they concern the same subject matter.
Because of this, the discrete combinations of these different kinds of
insights have to be fitted. That might give some leeway to solving some
problems.

Jim Bromer
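
To make the tape-versus-addressable-memory comparison concrete, here is a
rough counting sketch (it is not the algorithm being alluded to above, just
an illustration of the access-cost difference):

import random

def tape_steps(accesses):
    """Head movements for a single tape that must walk cell by cell."""
    pos, steps = 0, 0
    for target in accesses:
        steps += abs(target - pos)
        pos = target
    return steps

def ram_steps(accesses):
    """An addressable memory reaches any cell in one operation."""
    return len(accesses)

random.seed(0)
accesses = [random.randrange(1_000_000) for _ in range(1_000)]
print("tape-walk steps:", tape_steps(accesses))
print("direct accesses:", ram_steps(accesses))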

On Tue, Dec 5, 2017 at 2:12 PM, Bill Hibbard <hibbard@wisc.edu> wrote:

> Jim,
>
> If you are comparing a Turing machine with an infinite tape
> to a conventional computer with infinite addressable memory,
> one thing to consider is that the Turing machine may need to
> make an unbounded number of operations to access a cell on
> the tape whereas the conventional computer accesses any
> memory cell in a single one of its operations. Furthermore,
> addresses for an infinite addressable memory are infinite
> precision integers, so address calculations correspond to
> unbounded numbers of Turing machine operations.
>
> I don't know whether there is an algorithm for satisfiability
> using a polynomial number of operations on infinite precision
> integers and infinite addressable memory. If you do find one
> and write a paper about it, hopefully the referees will point
> out that you are using a computational model with an unbounded
> time advantage over Turing machines.
>
> Best wishes,
> Bill
>
>
> On Tue, 5 Dec 2017, Jim Bromer wrote:
>
>> I feel like I am on the brink of a breakthrough for dealing with
>> combinatorial complexity. (I did not just say that I am on a breakthrough
>> only that it feels like that right now.)
>> It will take me about 3 years to determine if this feeling is wrong.
>> However, I am also saying that even if these ideas that I am considering
>> are
>> not instrumental in curing combinatorial complexity for Logical
>> Satisfiability, they might still work for the complexity that can arise
>> from
>> working with discrete reasoning. The reason is that higher level reasoning
>> is not purely logical.
>> (There are reasons that I think discrete reasoning is necessary for higher
>> reasoning, but they seem so obvious I do not feel like taking the time to
>> comment on them when no one is actually asking me anything. However, let
>> me
>> add that I think there are some aspects of reasoning and insight that do
>> not
>> model well in the terms of contemporary methods of weighted reasoning or
>> fuzzy logic or neural networks or deep nets.)
>>
>> Jim Bromer
>>
>> On Mon, Dec 4, 2017 at 9:26 PM, Jim Bromer <jimbromer@gmail.com> wrote:
>> If a theoretical thought-experiment contest between a Turing
>> Machine and a parallel addressable memory computer did not
>> involve the potential of infinite memory for the contemporary
>> computer then it would not make sense to grant that the Turing
>> Machine have an infinite tape - which is a pretty much a
>> requirement for the experiment. However, I am not really
>> interested in the purely abstract mathematical question (or for
>> that matter the precise establishment of the conditions of the
>> thought experiment), I am interested in extending the
>> capabilities of discrete reasoning sufficiently to solve many
>> significant problems in AI that still exist.
>>
>> Although the thought experiment and the conclusions that I have come
>> to might seem irrelevant or mundane it is my belief that, as a part of
>> a greater theory they are not mundane or trivial and that the things I
>> am thinking about will have an impact on AI at some time in the
>> future. Of course we would all like practical ideas that we could
>> implement and help us to achieve something spectacular ASAP, but the
>> reality for most of us is that we are going to need to work at a
>> problem with more fortitude and grit than sense in order to achieve
>> something significant.
>>
>> My goal (with this thought-experiment) was to find design principles
>> that could help us to avoid combinatorial complexity (or explosions)
>> while working with discrete systems. (I am not saying that weighted
>> reasoning should not be used, just that some discrete reasoning is
>> necessary.)
>>
>> I am living the fantasy right now. But the thing is, even if the
>> fantasy came crashing down and I wasn't even able to create even an
>> inefficient version of what I am thinking about, the ideas that I am
>> working on are so interesting that they have an intrinsic value and
>> simple elegance that make them really cool. I am sorry that I cannot
>> convey the reasons for my enthusiasm but part of it is that I totally
>> love working on this problem.
>>
>> And it is completely on topic although I cannot make you see that
>> now..
>>
>>
>> Jim Bromer
>>
>> On Sun, Dec 3, 2017 at 5:17 PM, TimTyler <tim@tt1.org> wrote:
>> On 2017-11-30 12:58, Jim Bromer wrote:
>>
>> I came up with a conjecture that a parallel
>> computer might be programmed with a polynomial
>> time solution to Logical Satisfiability in a
>> way a simple Turing Machine (with a single
>> tape for dynamic read-write memory) could not.
>> At first my mind drew a blank but then I spent
>> the early morning thinking about it and I came
>> up with a situation where a parallel device
>> could operate in polynomial time but a Turing
>> Machine could not - due to the traversals back
>> and forth across the tape the TM would have to
>> make.
>>
>>
>> You've been talking about this sort of thing for years.
>> Few on the list seem
>> interested - probably because it is off-topic.
>>
>> Very briefly, I think that conventional wisdom is that
>> polynomial vs exponential
>> time solutions don't depend on the degree of parallelism
>> of the computer involved,
>> because parallelism is always finite, while the notion of
>> polynomial vs exponential
>> time is to do with how the resources required for a
>> problem scale as the problem
>> size tends towards infinity - which is very, very big.
>>
>> The time when parallelism does make a difference is when
>> you have an infinitely
>> parallel machine, and can magically multiply your question
>> over infinitely
>> many inputs and magically condense your answer from
>> infinitely many outputs.
>> That set up would break a bunch of computational
>> complexity theory assumptions.
>> Nobody seems very interested in that case.
>> --
>>
>> __________
>> |im |yler http://timtyler.org/
>>
>>
>>
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Wed, 2017-12-06 02:31
On Tue, Dec 5, 2017 at 3:05 PM, David Whitten <whitten@netcom.com> wrote:

> The issue with combinational explosion is a real one. This is also a real
> thing in humans.
> If you look at how many connection lines you get with a fully connected
> set of points, then the explosion is obvious:
>
> 1 point = 0 lines
> 2 points = 1 line
> 3 points = 3 lines
> 4 points = 6 lines
> 5 points = 10 lines
> 6 points = 15 lines
>
> in general the number of lines given *n*=number of points is line=(number
> of connection lines)
> line=(*n* * (*n*-1))/ 2 (also called the binomial coefficient or the
> number of 2-combinations of *n* elements )
>
> So if we have several million neurons (mega-neurons) and every one is
> connected to every other neuron then we have several trillion neuron
> connections (tera-connectons).
>
> The power comes when we don't have that many connections but when we have
> far fewer.
> Then assuming each neuron is a number, we can store the calculated value
> of a function with two input variables and one output variable as a net of
> answers.
>
> When we increase our number of neurons to 6 we have one thousand separate
> inputs and one trillion connections of input values so as you can see, this
> way of storing answers would quickly explode.
>
> I think this is why I like logical expressions. You can connect nodes that
> represent various propositions with only a few connectors.
>
> David Whitten
> 713-870-3834 <(713)%20870-3834>
>
> On Tue, Dec 5, 2017 at 8:04 AM, Jim Bromer <jimbromer@gmail.com> wrote:
>
>> I agree that being able to explain decisions - or at least being able to
>> examine decisions and coming up with explanations that are able to be
>> improved on, is a necessary part of AGI and I believe that discrete
>> reasoning is necessary for that ability. I think that a system that is very
>> efficient at working with discrete reasoning is what is needed and that is
>> what is lacking. You can use weighted reasoning but in trying to find new
>> categories of significance the problem becomes too complex for the same
>> reason that non-polynomial time Satisfiability can become too complex. The
>> combinatorial explosion is the problem.
>>
>> Jim Bromer
>>
>> On Tue, Dec 5, 2017 at 7:49 AM, Ed Pell <edpell@optonline.net> wrote:
>>
>>> That was done circa 1975!
>>>
>>> On 12/5/2017 7:47 AM, Ed Pell wrote:
>>>
>>> Hi David, on John's web site you will find that IBM created such a
>>> system to answer typed natural language questions about property taxes for
>>> the city of White Plains, New York. Further, IBM executives decided it was
>>> too labor intensive to be a business so it was dropped.
>>>
>>> Ed
>>>
>>> On 12/4/2017 9:40 PM, David Whitten wrote:
>>>
>>> I am of the opinion that AGI is most useful when the AGI is capable of
>>> generating explanations for the conclusions it makes.
>>>
>>> There are several systems for describing the base systems that might
>>> underlie the ability to explain, but I like a system based on Common Logic,
>>> with the knowledge in the system organized using a set of structural axioms
>>> to understand the why of the conclusions.
>>>
>>> Has anyone looked at any of these systems, like John Sowa's Conceptual
>>> Graphs? Does anyone want to talk about them?
>>>
>>> David Whitten
>>> 713-870-3834 <(713)%20870-3834>
>>>
>>>
Categories: Discussions

Re: [agi] TM Program in Expo Time and Parallel Program in Polynomial Time

AGI discussions @AGIRI - Tue, 2017-12-05 21:12
Jim,

If you are comparing a Turing machine with an infinite tape
to a conventional computer with infinite addressable memory,
one thing to consider is that the Turing machine may need to
make an unbounded number of operations to access a cell on
the tape whereas the conventional computer accesses any
memory cell in a single one of its operations. Furthermore,
addresses for an infinite addressable memory are infinite
precision integers, so address calculations correspond to
unbounded numbers of Turing machine operations.

I don't know whether there is an algorithm for satisfiability
using a polynomial number of operations on infinite precision
integers and infinite addressable memory. If you do find one
and write a paper about it, hopefully the referees will point
out that you are using a computational model with an unbounded
time advantage over Turing machines.

Best wishes,
Bill

On Tue, 5 Dec 2017, Jim Bromer wrote:
> I feel like I am on the brink of a breakthrough for dealing with
> combinatorial complexity. (I did not just say that I have made a breakthrough,
> only that it feels like that right now.)
> It will take me about 3 years to determine if this feeling is wrong.
> However, I am also saying that even if these ideas that I am considering are
> not instrumental in curing combinatorial complexity for Logical
> Satisfiability, they might still work for the complexity that can arise from
> working with discrete reasoning. The reason is that higher level reasoning
> is not purely logical.
> (There are reasons that I think discrete reasoning is necessary for higher
> reasoning, but they seem so obvious I do not feel like taking the time to
> comment on them when no one is actually asking me anything. However, let me
> add that I think there are some aspects of reasoning and insight that do not
> model well in terms of contemporary methods of weighted reasoning or
> fuzzy logic or neural networks or deep nets.)
>
> Jim Bromer
>
> On Mon, Dec 4, 2017 at 9:26 PM, Jim Bromer <jimbromer@gmail.com> wrote:
> If a theoretical thought-experiment contest between a Turing
> Machine and a parallel addressable memory computer did not
> involve the potential of infinite memory for the contemporary
> computer then it would not make sense to grant that the Turing
> Machine have an infinite tape - which is pretty much a
> requirement for the experiment. However, I am not really
> interested in the purely abstract mathematical question (or for
> that matter the precise establishment of the conditions of the
> thought experiment), I am interested in extending the
> capabilities of discrete reasoning sufficiently to solve many
> significant problems in AI that still exist.
>
> Although the thought experiment and the conclusions that I have come
> to might seem irrelevant or mundane it is my belief that, as a part of
> a greater theory they are not mundane or trivial and that the things I
> am thinking about will have an impact on AI at some time in the
> future. Of course we would all like practical ideas that we could
> implement and help us to achieve something spectacular ASAP, but the
> reality for most of us is that we are going to need to work at a
> problem with more fortitude and grit than sense in order to achieve
> something significant.
>
> My goal (with this thought-experiment) was to find design principles
> that could help us to avoid combinatorial complexity (or explosions)
> while working with discrete systems. (I am not saying that weighted
> reasoning should not be used, just that some discrete reasoning is
> necessary.)
>
> I am living the fantasy right now. But the thing is, even if the
> fantasy came crashing down and I wasn't even able to create even an
> inefficient version of what I am thinking about, the ideas that I am
> working on are so interesting that they have an intrinsic value and
> simple elegance that make them really cool. I am sorry that I cannot
> convey the reasons for my enthusiasm but part of it is that I totally
> love working on this problem.
>
> And it is completely on topic although I cannot make you see that
> now.
>
>
> Jim Bromer
>
> On Sun, Dec 3, 2017 at 5:17 PM, TimTyler <tim@tt1.org> wrote:
> On 2017-11-30 12:58, Jim Bromer wrote:
>
> I came up with a conjecture that a parallel
> computer might be programmed with a polynomial
> time solution to Logical Satisfiability in a
> way a simple Turing Machine (with a single
> tape for dynamic read-write memory) could not.
> At first my mind drew a blank but then I spent
> the early morning thinking about it and I came
> up with a situation where a parallel device
> could operate in polynomial time but a Turing
> Machine could not - due to the traversals back
> and forth across the tape the TM would have to
> make.
>
>
> You've been talking about this sort of thing for years.
> Few on the list seem
> interested - probably because it is off-topic.
>
> Very briefly, I think that conventional wisdom is that
> polynomial vs exponential
> time solutions don't depend on the degree of parallelism
> of the computer involved,
> because parallelism is always finite, while the notion of
> polynomial vs exponential
> time is to do with how the resources required for a
> problem scale as the problem
> size tends towards infinity - which is very, very big.
>
> The time when parallelism does make a difference is when
> you have an infinitely
> parallel machine, and can magically multiply your question
> over infinitely
> many inputs and magically condense your answer from
> infinitely many outputs.
> That set up would break a bunch of computational
> complexity theory assumptions.
> Nobody seems very interested in that case.
> --
>
> __________
>  |im |yler http://timtyler.org/
>  
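Bill's point about access cost can be made concrete with a small counting
exercise (an illustrative Python sketch, not anything posted in the
thread): charge the single-tape machine one step per head move, so that
reaching cell i costs on the order of i moves, while the random-access
model is charged a single step per access. Bill's further caveat applies
as well, since arithmetic on unboundedly large addresses is not really a
unit-cost operation either.

    # Single-tape model: the head must walk to cell i, about |i - head|
    # moves per access.
    def tape_read(tape, head, i):
        moves = abs(i - head)
        return tape[i], i, moves      # value, new head position, moves spent

    # RAM model: any cell is reached in one charged step.
    def ram_read(memory, i):
        return memory[i], 1

    n = 10_000
    tape = list(range(n))
    tape_cost, ram_cost, head = 0, 0, 0
    for i in range(0, n, 100):        # visit scattered cells
        _, head, moves = tape_read(tape, head, i)
        tape_cost += moves
        ram_cost += ram_read(tape, i)[1]

    print(tape_cost, ram_cost)        # 9900 vs 100 on this toy run

A comparison of the two models therefore only bears on polynomial versus
exponential time if the extra walking (and the address arithmetic) can
itself be bounded by a polynomial, which is the usual reason the models
are treated as polynomially equivalent.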
Categories: Discussions

Re: [agi] TM Program in Expo Time and Parallel Program in Polynomial Time

AGI discussions @AGIRI - Tue, 2017-12-05 18:42
I feel like I am on the brink of a breakthrough for dealing with
combinatorial complexity. (I did not just say that I have made a breakthrough,
only that it feels like that right now.)
It will take me about 3 years to determine if this feeling is wrong.
However, I am also saying that even if these ideas that I am considering
are not instrumental in curing combinatorial complexity for Logical
Satisfiability, they might still work for the complexity that can arise
from working with discrete reasoning. The reason is that higher level
reasoning is not purely logical.
(There are reasons that I think discrete reasoning is necessary for higher
reasoning, but they seem so obvious I do not feel like taking the time to
comment on them when no one is actually asking me anything. However, let me
add that I think there are some aspects of reasoning and insight that do
not model well in terms of contemporary methods of weighted reasoning
or fuzzy logic or neural networks or deep nets.)

Jim Bromer

On Mon, Dec 4, 2017 at 9:26 PM, Jim Bromer <jimbromer@gmail.com> wrote:

> If a theoretical thought-experiment contest between a Turing Machine and a
> parallel addressable memory computer did not involve the potential of
> infinite memory for the contemporary computer then it would not make sense
> to grant that the Turing Machine have an infinite tape - which is pretty
> much a requirement for the experiment. However, I am not really interested
> in the purely abstract mathematical question (or for that matter the
> precise establishment of the conditions of the thought experiment), I am
> interested in extending the capabilities of discrete reasoning sufficiently
> to solve many significant problems in AI that still exist.
>
> Although the thought experiment and the conclusions that I have come to
> might seem irrelevant or mundane it is my belief that, as a part of a
> greater theory they are not mundane or trivial and that the things I am
> thinking about will have an impact on AI at some time in the future. Of
> course we would all like practical ideas that we could implement and help
> us to achieve something spectacular ASAP, but the reality for most of us is
> that we are going to need to work at a problem with more fortitude and grit
> than sense in order to achieve something significant.
>
> My goal (with this thought-experiment) was to find design principles that
> could help us to avoid combinatorial complexity (or explosions) while
> working with discrete systems. (I am not saying that weighted reasoning
> should not be used, just that some discrete reasoning is necessary.)
>
> I am living the fantasy right now. But the thing is, even if the fantasy
> came crashing down and I wasn't even able to create even an inefficient
> version of what I am thinking about, the ideas that I am working on are so
> interesting that they have an intrinsic value and simple elegance that make
> them really cool. I am sorry that I cannot convey the reasons for my
> enthusiasm but part of it is that I totally love working on this problem.
>
> And it is completely on topic although I cannot make you see that now.
>
>
> Jim Bromer
>
> On Sun, Dec 3, 2017 at 5:17 PM, TimTyler <tim@tt1.org> wrote:
>
>> On 2017-11-30 12:58, Jim Bromer wrote:
>>
>> I came up with a conjecture that a parallel computer might be programmed
>>> with a polynomial time solution to Logical Satisfiability in a way a simple
>>> Turing Machine (with a single tape for dynamic read-write memory) could
>>> not. At first my mind drew a blank but then I spent the early morning
>>> thinking about it and I came up with a situation where a parallel device
>>> could operate in polynomial time but a Turing Machine could not - due to
>>> the traversals back and forth across the tape the TM would have to make.
>>>
>>
>> You've been talking about this sort of thing for years. Few on the list
>> seem
>> interested - probably because it is off-topic.
>>
>> Very briefly, I think that conventional wisdom is that polynomial vs
>> exponential
>> time solutions don't depend on the degree of parallelism of the computer
>> involved,
>> because parallelism is always finite, while the notion of polynomial vs
>> exponential
>> time is to do with how the resources required for a problem scale as the
>> problem
>> size tends towards infinity - which is very, very big.
>>
>> The time when parallelism does make a difference is when you have an
>> infinitely
>> parallel machine, and can magically multiply your question over infinitely
>> many inputs and magically condense your answer from infinitely many
>> outputs.
>> That set up would break a bunch of computational complexity theory
>> assumptions.
>> Nobody seems very interested in that case.
>> --
>>
>> __________
>> |im |yler http://timtyler.org/
>>
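Tim's remark that finite parallelism cannot turn an exponential search
into a polynomial one can be seen with a few numbers. The snippet below is
purely illustrative (plain Python, not from the thread): splitting 2^n
units of work across any fixed number of processors divides the curve by a
constant but does not flatten it.

    p = 1_000_000                    # a million processors, or any fixed number
    for n in (30, 60, 90, 120):
        serial = 2 ** n              # brute-force work for n variables
        parallel = serial // p       # ideal speedup, zero overhead assumed
        print(n, serial, parallel)

    # The parallel column is a million times smaller but still doubles with
    # every extra variable, so it outgrows any polynomial bound n**k almost
    # as soon as the serial column does.

This is why parallelism only changes the complexity-theoretic picture in
the limit of unbounded parallelism, the case Tim describes as breaking the
usual assumptions.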
Categories: Discussions

Re: [agi] AGI and ontologies

AGI discussions @AGIRI - Tue, 2017-12-05 17:34
I am making such a system. It operates on the level of English throughout,
so naturally all of its reasoning is understandable to humans.

Greetings,
Stefan

On 5 December 2017 at 03:40, David Whitten <whitten@netcom.com> wrote:

> I am of the opinion that AGI is most useful when the AGI is capable of
> generating explanations for the conclusions it makes.
>
> There are several systems for describing the base systems that might
> underlie the ability to explain, but I like a system based on Common Logic,
> with the knowledge in the system organized using a set of structural axioms
> to understand the why of the conclusions.
>
> Has anyone looked at any of these systems, like John Sowa's Conceptual
> Graphs? Does anyone want to talk about them?
>
> David Whitten
> 713-870-3834
>



--
Stefan Reich
BotCompany.de
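For readers who have not seen the kind of system David and Stefan are
describing, here is a toy sketch. It is illustrative Python only, it uses
a flat triple encoding rather than Sowa's actual Conceptual Graph notation,
and the facts and the single structural axiom are invented for the
example. The point is simply that when knowledge is kept as discrete
structures and every derived conclusion records the facts it came from, an
English explanation falls out almost for free.

    # Facts as (subject, relation, object) triples, a crude stand-in for a
    # conceptual graph.
    facts = {
        ("Socrates", "isa", "man"),
        ("man", "subclass_of", "mortal"),
    }

    derived = {}   # conclusion -> the facts that produced it

    # One structural axiom: X isa Y and Y subclass_of Z gives X isa Z.
    for (a, r1, b) in list(facts):
        for (c, r2, d) in list(facts):
            if r1 == "isa" and r2 == "subclass_of" and b == c:
                conclusion = (a, "isa", d)
                derived[conclusion] = [(a, r1, b), (c, r2, d)]
                facts.add(conclusion)

    def explain(conclusion):
        # Render the stored derivation as an English sentence.
        because = " and ".join("%s %s %s" % (s, r.replace("_", " "), o)
                               for (s, r, o) in derived[conclusion])
        s, r, o = conclusion
        return "%s %s %s, because %s." % (s, r, o, because)

    print(explain(("Socrates", "isa", "mortal")))
    # Socrates isa mortal, because Socrates isa man and man subclass of mortal.

Whether the underlying representation is Common Logic, conceptual graphs,
or Stefan's English-level structures, the explanation comes from keeping
the derivation around, not from the particular notation.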
Categories: Discussions